Semantic Grasping via a Knowledge Graph of Robotic Manipulation: A Graph Representation Learning Approach

Cited 5 times in Web of Science; cited 0 times in Scopus
Abstract
Semantic grasping aims to make stable robotic grasps suitable for specific object manipulation tasks. While existing semantic grasping models focus only on the grasping regions of objects based on their affordances, reasoning about which gripper to use for grasping, e.g., a rigid parallel-jaw gripper or a soft gripper, and how strongly to grasp the target object allows more sophisticated robotic manipulation. In this paper, we create a knowledge graph of robotic manipulation named roboKG to represent information about objects (e.g., the material and the components of an object), tasks, and appropriate robotic manipulation such as which component of an object to grasp, which gripper to use, and how strongly to grasp. Using knowledge graph embedding, we generate semantic representations of the entities and relations in roboKG, enabling us to make predictions on robotic manipulation. Based on the predicted gripper type, grasping component, and grasping force, a real robot performs seven different real-world tasks on 42 household objects, achieving an accuracy of 95.21%.
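The abstract describes predicting manipulation attributes (grasping component, gripper type, grasping force) via knowledge graph embedding, i.e., link prediction over triples. A minimal sketch of this idea, using a TransE-style translational score, is shown below. The entities, relations, and triples here are hypothetical illustrations, not the actual contents of roboKG, and the embeddings are random rather than trained.

```python
import numpy as np

# Hypothetical entities and relations, loosely inspired by the paper's
# setting; roboKG's real schema and contents may differ.
entities = ["knife", "handle", "blade", "rigid_gripper", "soft_gripper"]
relations = ["graspedAt", "graspedBy"]

rng = np.random.default_rng(0)
DIM = 16  # embedding dimensionality (illustrative choice)

# In practice these vectors would be learned from the graph's triples;
# here they are random placeholders.
E = {e: rng.normal(size=DIM) for e in entities}
R = {r: rng.normal(size=DIM) for r in relations}

def score(h: str, r: str, t: str) -> float:
    """TransE score: lower ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

def predict_tail(h: str, r: str, candidates: list[str]) -> str:
    """Answer the query (h, r, ?) by ranking candidate tail entities."""
    return min(candidates, key=lambda t: score(h, r, t))

# With trained embeddings, (knife, graspedAt, ?) would ideally rank
# "handle" above "blade"; with random vectors the ranking is arbitrary.
best = predict_tail("knife", "graspedAt", ["handle", "blade"])
print(best)
```

The same query pattern would cover the paper's three predictions, e.g., (object, graspedBy, ?) over gripper entities, with the learned embeddings supplying the ranking.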
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Issue Date
2022-10
Language
English
Article Type
Article
Citation

IEEE ROBOTICS AND AUTOMATION LETTERS, v.7, no.4, pp.9397 - 9404

ISSN
2377-3766
DOI
10.1109/LRA.2022.3191194
URI
http://hdl.handle.net/10203/297853
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
