
Video dataset of human demonstrations of folding clothing for robotic folding

Abstract
General-purpose clothes-folding robots do not yet exist owing to the deformable nature of textiles, making it hard to engineer manipulation pipelines or learn this task. In order to accelerate research for the learning of the robotic clothes-folding task, we introduce a video dataset of human folding demonstrations. In total, we provide 8.5 hours of demonstrations from multiple perspectives leading to 1,000 folding samples of different types of textiles. The demonstrations are recorded in multiple public places, in different conditions with a diverse set of people. Our dataset consists of anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels. In this article, we describe our recording setup, the data format, and utility scripts, which can be accessed at https://adverley.github.io/folding-demonstrations
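The abstract lists four modalities per demonstration (anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels). As a minimal sketch of how one folding sample might be represented in code, the structure below mirrors those modalities; all class and field names are hypothetical and are not the dataset's actual schema, which is documented by the utility scripts at the project page.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical per-frame record mirroring the modalities named in the
# abstract; names and shapes are illustrative, not the real data format.
@dataclass
class FoldingFrame:
    rgb_path: str                          # anonymized RGB image file
    depth_path: str                        # aligned depth frame
    keypoints: List[Tuple[float, float]]   # 2D skeleton keypoints

# One of the 1,000 folding samples: an object label plus a frame sequence.
@dataclass
class FoldingSample:
    textile_label: str                     # object label, e.g. "towel"
    frames: List[FoldingFrame] = field(default_factory=list)

sample = FoldingSample(textile_label="towel")
sample.frames.append(FoldingFrame("rgb/000.png", "depth/000.png", [(0.5, 0.5)]))
print(len(sample.frames))  # prints 1
```

Grouping frames under a per-sample label keeps each of the 1,000 samples self-describing, which is convenient for learning-from-demonstration pipelines that consume whole trajectories.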
Keywords
Deformable objects, robotic manipulation, clothing, learning from demonstration, crowdsourcing

Downloads

  • IJRR19 Video dataset human folding demonstrations 2nd version open version.pdf: full text (Author's original) | open access | PDF | 7.26 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Verleysen, Andreas, et al. “Video Dataset of Human Demonstrations of Folding Clothing for Robotic Folding.” The International Journal of Robotics Research, 2020, doi:10.1177/0278364920940408.
APA
Verleysen, A., Biondina, M., & wyffels, F. (2020). Video dataset of human demonstrations of folding clothing for robotic folding. The International Journal of Robotics Research. https://doi.org/10.1177/0278364920940408
Chicago author-date
Verleysen, Andreas, Matthijs Biondina, and Francis wyffels. 2020. “Video Dataset of Human Demonstrations of Folding Clothing for Robotic Folding.” The International Journal of Robotics Research. https://doi.org/10.1177/0278364920940408.
Chicago author-date (all authors)
Verleysen, Andreas, Matthijs Biondina, and Francis wyffels. 2020. “Video Dataset of Human Demonstrations of Folding Clothing for Robotic Folding.” The International Journal of Robotics Research. doi:10.1177/0278364920940408.
Vancouver
1.
Verleysen A, Biondina M, wyffels F. Video dataset of human demonstrations of folding clothing for robotic folding. The International Journal of Robotics Research. 2020;
IEEE
[1]
A. Verleysen, M. Biondina, and F. wyffels, “Video dataset of human demonstrations of folding clothing for robotic folding,” The International Journal of Robotics Research, 2020.
BibTeX
@article{8669919,
  abstract     = {General-purpose clothes-folding robots do not yet exist owing to the deformable nature of textiles, making it hard to engineer manipulation pipelines or learn this task. In order to accelerate research for the learning of the robotic clothes-folding task, we introduce a video dataset of human folding demonstrations. In total, we provide 8.5 hours of demonstrations from multiple perspectives leading to 1,000 folding samples of different types of textiles. The demonstrations are recorded in multiple public places, in different conditions with a diverse set of people. Our dataset consists of anonymized RGB images, depth frames, skeleton keypoint trajectories, and object labels. In this article, we describe our recording setup, the data format, and utility scripts, which can be accessed at https://adverley.github.io/folding-demonstrations},
  articleno    = {027836492094040},
  author       = {Verleysen, Andreas and Biondina, Matthijs and wyffels, Francis},
  issn         = {0278-3649},
  journal      = {The International Journal of Robotics Research},
  keywords     = {Deformable objects, robotic manipulation, clothing, learning from demonstration, crowdsourcing},
  language     = {eng},
  title        = {Video dataset of human demonstrations of folding clothing for robotic folding},
  url          = {http://dx.doi.org/10.1177/0278364920940408},
  year         = {2020},
}
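For reference managers that cannot import the entry directly, the simple single-line fields of a BibTeX record like the one above can be extracted with a short standard-library parser. This is a minimal sketch: it assumes flat `{...}` values with no nested braces, which holds for this entry but not for BibTeX in general.

```python
import re

# Abbreviated copy of the entry above; any subset of flat fields works.
BIBTEX = """@article{8669919,
  author       = {Verleysen, Andreas and Biondina, Matthijs and wyffels, Francis},
  journal      = {The International Journal of Robotics Research},
  title        = {Video dataset of human demonstrations of folding clothing for robotic folding},
  url          = {http://dx.doi.org/10.1177/0278364920940408},
  year         = {2020},
}"""

def parse_bibtex(entry: str) -> dict:
    """Extract entry type, citation key, and flat {...} fields.

    Assumes field values contain no nested braces; a real parser
    (e.g. the bibtexparser package) is needed for the general case.
    """
    header = re.match(r"@(\w+)\{([^,]+),", entry)
    fields = dict(re.findall(r"(\w+)\s*=\s*\{(.*?)\},?\s*\n", entry, re.DOTALL))
    return {"type": header.group(1), "key": header.group(2), **fields}

record = parse_bibtex(BIBTEX)
print(record["year"])  # prints 2020
```

The lazy `.*?` stops at the first closing brace on each field, which is exactly why nested-brace values (common in protected BibTeX titles) would break this sketch.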
