MMH-GGCNN: Multi-Modal Hierarchical Generative Grasping Convolutional Neural Network

The advent of deep learning has reshaped research trends across many areas, including robotics. In particular, with rapidly developing computer vision technology, deep learning based grasp pose detection algorithms have been presented. Unlike traditional algorithms, deep learning based ones generalize well to unknown environments. However, computation time remains a problem due to the common pipeline of sampling and ranking grasp candidates. To address this, a lightweight network with moderate performance, called GG-CNN, was recently developed. To further boost performance by exploiting the multi-modality and hierarchy of grasp components, we propose the multi-modal hierarchical generative grasping CNN (MMH-GGCNN), which has a small number of parameters. In experiments, MMH-GGCNN achieves an improved accuracy of 91.9679% on the Cornell Grasping Dataset.
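
As one concrete way to read the generative-grasping idea described above, the sketch below shows a GG-CNN-style fully convolutional network that maps a multi-modal input to pixel-wise grasp quality, angle, and width maps, so the best grasp is simply the highest-quality pixel rather than the output of a sample-and-rank pipeline. The 4-channel RGB-D input, layer sizes, and head layout are illustrative assumptions only, not the MMH-GGCNN architecture reported in the paper.

# Minimal sketch of a GG-CNN-style generative grasping network.
# Assumption: multi-modality is realized as a 4-channel RGB-D input;
# layer sizes are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class GenerativeGraspNet(nn.Module):
    def __init__(self, in_channels=4):
        super().__init__()
        # Small fully convolutional encoder-decoder keeps the parameter count low.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(16, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, 9, stride=3, padding=4, output_padding=2), nn.ReLU(),
        )
        # One head per grasp component: quality, angle (as cos/sin of 2*theta), gripper width.
        self.quality = nn.Conv2d(32, 1, 1)
        self.cos2t = nn.Conv2d(32, 1, 1)
        self.sin2t = nn.Conv2d(32, 1, 1)
        self.width = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        f = self.decoder(self.encoder(x))
        return self.quality(f), self.cos2t(f), self.sin2t(f), self.width(f)

# Usage: the best grasp is read off the pixel with the highest quality score.
net = GenerativeGraspNet()
rgbd = torch.randn(1, 4, 300, 300)                # dummy RGB-D frame
q, c, s, w = net(rgbd)
angle = 0.5 * torch.atan2(s, c)                   # recover grasp angle per pixel
best = torch.argmax(q.view(1, -1), dim=1)         # flat index of the best grasp location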
Publisher
SPRINGER INTERNATIONAL PUBLISHING AG
Issue Date
2021-12
Language
English
Citation
9th International Conference on Robot Intelligence Technology and Applications (RiTA), pp. 422-430
ISSN
2367-3370
DOI
10.1007/978-3-030-97672-9_38
URI
http://hdl.handle.net/10203/298279
Appears in Collection
EE-Conference Papers (Conference Papers)