Semantic grasping aims to produce stable robotic grasps that are suitable for specific object manipulation tasks. While existing semantic grasping models focus only on selecting the grasping region of an object based on its affordances, reasoning about which gripper to use, e.g., a rigid parallel-jaw gripper or a soft gripper, and how strongly to grasp the target object enables more sophisticated robotic manipulation. In this paper, we construct a knowledge graph for robotic manipulation, named roboKG, that represents information about objects (e.g., their materials and components), tasks, and appropriate manipulation actions, such as which component of an object to grasp, which gripper to use, and how strongly to grasp. Using knowledge graph embedding, we generate semantic representations of the entities and relations in roboKG, enabling predictions about robotic manipulation. Based on the predicted gripper type, grasping component, and grasping force, a real robot performs seven different real-world tasks on 42 household objects, achieving an accuracy of 95.21%.
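To make the prediction step concrete, the following is a minimal sketch of link prediction with a translational knowledge graph embedding. The embedding model, the entity and relation names, and the random vectors are all illustrative assumptions; the abstract does not specify which embedding method roboKG uses.

```python
import numpy as np

# Assumption: a TransE-style model, where a triple (h, r, t) is plausible
# when h + r ≈ t in the embedding space. The entity/relation names below
# are hypothetical examples in the spirit of roboKG, not its actual schema.

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

entities = ["knife", "handle", "blade", "soft_gripper", "parallel_jaw"]
relations = ["hasGraspingComponent", "usesGripper"]

E = {e: rng.normal(size=d) for e in entities}  # entity embeddings
R = {r: rng.normal(size=d) for r in relations}  # relation embeddings

def score(h, r, t):
    """Negative L2 distance ||h + r - t||: higher means more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def predict_tail(h, r, candidates):
    """Return the candidate tail entity with the highest plausibility."""
    return max(candidates, key=lambda t: score(h, r, t))

# With trained embeddings this would answer queries such as
# ("knife", "hasGraspingComponent", ?); here the vectors are random,
# so the chosen tail is arbitrary.
best = predict_tail("knife", "hasGraspingComponent", ["handle", "blade"])
print(best)
```

In a trained model, the same query pattern would yield the grasping component, gripper type, and grasp force used to drive the robot's manipulation.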