
Learning keypoints from synthetic data for robotic cloth folding
- Authors
- Thomas Lips (UGent), Victor-Louis De Gusseme (UGent), and Francis wyffels (UGent)
- Abstract
- Robotic cloth manipulation is challenging due to its deformability, which makes determining its full state infeasible. However, for cloth folding, it suffices to know the position of a few semantic keypoints. Convolutional neural networks (CNNs) can be used to detect these keypoints, but they require large amounts of annotated data, which is expensive to collect. To overcome this, we propose to learn these keypoint detectors purely from synthetic data, enabling low-cost data collection. In this paper, we procedurally generate images of towels and use them to train a CNN. We evaluate the performance of this detector for folding towels on a unimanual robot setup and find that the grasp and fold success rates are 77% and 53%, respectively. We conclude that learning keypoint detectors from synthetic data for cloth folding and related tasks is a promising research direction, discuss some failures, and relate them to future work. A video of the system, the codebase, and more details on the CNN architecture and training setup can be found at https://github.com/tlpss/workshop-icra-2022-cloth-keypoints
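The abstract does not spell out how the CNN turns an image into keypoint coordinates. A common way to implement such a detector is heatmap regression: the network outputs one heatmap per semantic keypoint (e.g. the four towel corners), and the pixel with the peak activation in each channel is taken as that keypoint. The sketch below illustrates only this decoding step; the four-channel shape, the `decode_keypoints` name, and the confidence threshold are illustrative assumptions, not the authors' implementation (see the linked repository for the actual architecture and training setup).

```python
# Minimal sketch of heatmap-based keypoint decoding (PyTorch).
# Assumption: the CNN outputs one heatmap per semantic keypoint
# (e.g. 4 towel corners); this is not necessarily the authors' exact head.
import torch


def decode_keypoints(heatmaps: torch.Tensor, threshold: float = 0.3):
    """Convert (C, H, W) heatmaps into per-keypoint pixel coordinates.

    Returns a list of (u, v, score) tuples, one per channel, with None
    for channels whose peak activation falls below `threshold`
    (e.g. a corner hidden by a fold).
    """
    keypoints = []
    num_channels, height, width = heatmaps.shape
    for c in range(num_channels):
        flat_idx = torch.argmax(heatmaps[c]).item()
        v, u = divmod(flat_idx, width)  # row (v), column (u) of the peak
        score = heatmaps[c, v, u].item()
        keypoints.append((u, v, score) if score >= threshold else None)
    return keypoints


# Toy usage: a random stand-in for the network output with 4 channels.
dummy_heatmaps = torch.rand(4, 256, 256)
print(decode_keypoints(dummy_heatmaps))
```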
Downloads
- fears 2022 poster.pdf (full text, author's original; open access; 910.00 KB)
Citation
Please use this URL to cite or link to this publication: http://hdl.handle.net/1854/LU-01GMVB3FBNZJPX7T2WKTB5G5E9
- MLA
- Lips, Thomas, et al. “Learning Keypoints from Synthetic Data for Robotic Cloth Folding.” Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts, 2022, doi:10.5281/zenodo.7405459.
- APA
- Lips, T., De Gusseme, V.-L., & wyffels, F. (2022). Learning keypoints from synthetic data for robotic cloth folding. Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts. Presented at the Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Ghent, Belgium. https://doi.org/10.5281/zenodo.7405459
- Chicago author-date
- Lips, Thomas, Victor-Louis De Gusseme, and Francis wyffels. 2022. “Learning Keypoints from Synthetic Data for Robotic Cloth Folding.” In Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts. https://doi.org/10.5281/zenodo.7405459.
- Chicago author-date (all authors)
- Lips, Thomas, Victor-Louis De Gusseme, and Francis wyffels. 2022. “Learning Keypoints from Synthetic Data for Robotic Cloth Folding.” In Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts. doi:10.5281/zenodo.7405459.
- Vancouver
- 1. Lips T, De Gusseme V-L, wyffels F. Learning keypoints from synthetic data for robotic cloth folding. In: Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts. 2022.
- IEEE
- [1] T. Lips, V.-L. De Gusseme, and F. wyffels, “Learning keypoints from synthetic data for robotic cloth folding,” in Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts, Ghent, Belgium, 2022.
- BibTeX
@inproceedings{01GMVB3FBNZJPX7T2WKTB5G5E9,
  abstract     = {{Robotic cloth manipulation is challenging due to its deformability, which makes determining its full state infeasible. However, for cloth folding, it suffices to know the position of a few semantic keypoints. Convolutional neural networks (CNN) can be used to detect these keypoints, but require large amounts of annotated data, which is expensive to collect. To overcome this, we propose to learn these keypoint detectors purely from synthetic data, enabling low-cost data collection. In this paper, we procedurally generate images of towels and use them to train a CNN. We evaluate the performance of this detector for folding towels on a unimanual robot setup and find that the grasp and fold success rates are 77\% and 53\%, respectively. We conclude that learning keypoint detectors from synthetic data for cloth folding and related tasks is a promising research direction, discuss some failures and relate them to future work. A video of the system, as well as the codebase, more details on the CNN architecture and the training setup can be found at https://github.com/tlpss/workshop-icra-2022-cloth-keypoints}},
  author       = {{Lips, Thomas and De Gusseme, Victor-Louis and wyffels, Francis}},
  booktitle    = {{Faculty of Engineering and Architecture Research Symposium 2022 (FEARS 2022), Abstracts}},
  language     = {{eng}},
  location     = {{Ghent, Belgium}},
  pages        = {{1}},
  title        = {{Learning keypoints from synthetic data for robotic cloth folding}},
  url          = {{http://doi.org/10.5281/zenodo.7405459}},
  year         = {{2022}},
}