
Learning keypoints for robotic cloth manipulation using synthetic data

Thomas Lips (UGent) , Victor-Louis De Gusseme (UGent) and Francis wyffels (UGent)
Abstract
Assistive robots should be able to wash, fold, or iron clothes. However, due to the variety, deformability, and self-occlusions of clothes, creating robot systems for cloth manipulation is challenging. Synthetic data is a promising direction to improve generalization, but the sim-to-real gap limits its effectiveness. To advance the use of synthetic data for cloth manipulation tasks such as robotic folding, we present a synthetic data pipeline to train keypoint detectors for almost-flattened cloth items. To evaluate its performance, we have also collected a real-world dataset. We train detectors for T-shirts, towels, and shorts and obtain an average precision of 64% and an average keypoint distance of 18 pixels. Fine-tuning on real-world data improves performance to 74% mAP and an average distance of only 9 pixels. Furthermore, we describe failure modes of the keypoint detectors and compare different approaches to obtain cloth meshes and materials. We also quantify the remaining sim-to-real gap and argue that further improvements to the fidelity of cloth assets will be required to reduce this gap. The code, dataset, and trained models are available here.
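The abstract reports results in terms of average keypoint distance in pixels. As a point of reference only, a minimal sketch of how such a metric is typically computed for matched predicted and ground-truth keypoints (the function name and toy coordinates below are illustrative, not taken from the paper):

```python
import math

def mean_keypoint_distance(pred, gt):
    """Mean Euclidean distance (in pixels) between predicted keypoints
    and their matched ground-truth keypoints."""
    assert len(pred) == len(gt), "expects one prediction per ground-truth keypoint"
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

# Toy example: two keypoints, each predicted 5 px from its ground truth.
pred = [(100, 120), (200, 210)]
gt = [(103, 124), (195, 210)]
print(mean_keypoint_distance(pred, gt))  # → 5.0
```

Under this convention, the paper's reported improvement from 18 to 9 pixels after fine-tuning corresponds to halving this averaged per-keypoint localization error.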
Keywords
Robots, Synthetic data, Pipelines, Detectors, Semantics, Flexible printed circuits, Deformation, Data sets for robotic vision, deep learning for visual perception, simulation and animation

Downloads

  • (...).pdf — full text (Published version) | UGent only | PDF | 1.39 MB

Citation

Please use this url to cite or link to this publication:

MLA
Lips, Thomas, et al. “Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data.” IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 9, no. 7, 2024, pp. 6528–35, doi:10.1109/LRA.2024.3405335.
APA
Lips, T., De Gusseme, V.-L., & wyffels, F. (2024). Learning keypoints for robotic cloth manipulation using synthetic data. IEEE ROBOTICS AND AUTOMATION LETTERS, 9(7), 6528–6535. https://doi.org/10.1109/LRA.2024.3405335
Chicago author-date
Lips, Thomas, Victor-Louis De Gusseme, and Francis wyffels. 2024. “Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data.” IEEE ROBOTICS AND AUTOMATION LETTERS 9 (7): 6528–35. https://doi.org/10.1109/LRA.2024.3405335.
Chicago author-date (all authors)
Lips, Thomas, Victor-Louis De Gusseme, and Francis wyffels. 2024. “Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data.” IEEE ROBOTICS AND AUTOMATION LETTERS 9 (7): 6528–6535. doi:10.1109/LRA.2024.3405335.
Vancouver
1. Lips T, De Gusseme V-L, wyffels F. Learning keypoints for robotic cloth manipulation using synthetic data. IEEE ROBOTICS AND AUTOMATION LETTERS. 2024;9(7):6528–35.
IEEE
[1] T. Lips, V.-L. De Gusseme, and F. wyffels, “Learning keypoints for robotic cloth manipulation using synthetic data,” IEEE ROBOTICS AND AUTOMATION LETTERS, vol. 9, no. 7, pp. 6528–6535, 2024.
BibTeX
@article{01J14VDWRRM9GS9HQ9BS1PNBXT,
  abstract     = {{Assistive robots should be able to wash, fold or iron clothes. However, due to the variety, deformability and self-occlusions of clothes, creating robot systems for cloth manipulation is challenging. Synthetic data is a promising direction to improve generalization, but the sim-to-real gap limits its effectiveness. To advance the use of synthetic data for cloth manipulation tasks such as robotic folding, we present a synthetic data pipeline to train keypoint detectors for almost-flattened cloth items. To evaluate its performance, we have also collected a real-world dataset. We train detectors for both T-shirts, towels and shorts and obtain an average precision of 64% and an average keypoint distance of 18 pixels. Fine-tuning on real-world data improves performance to 74% mAP and an average distance of only 9 pixels. Furthermore, we describe failure modes of the keypoint detectors and compare different approaches to obtain cloth meshes and materials. We also quantify the remaining sim-to-real gap and argue that further improvements to the fidelity of cloth assets will be required to further reduce this gap. The code, dataset and trained models are available here.}},
  author       = {{Lips, Thomas and De Gusseme, Victor-Louis and wyffels, Francis}},
  issn         = {{2377-3766}},
  journal      = {{IEEE ROBOTICS AND AUTOMATION LETTERS}},
  keywords     = {{Robots,Synthetic data,Pipelines,Detectors,Semantics,Flexible printed circuits,Deformation,Data sets for robotic vision,deep learning for visual perception,simulation and animation}},
  language     = {{eng}},
  number       = {{7}},
  pages        = {{6528--6535}},
  title        = {{Learning keypoints for robotic cloth manipulation using synthetic data}},
  url          = {{https://doi.org/10.1109/LRA.2024.3405335}},
  volume       = {{9}},
  year         = {{2024}},
}
