Camera-laser fusion sensor system and environmental recognition for humanoids in disaster scenarios

This research aims to develop a vision sensor system and a recognition algorithm that enable a humanoid robot to operate autonomously in a disaster environment. In disaster response scenarios, humanoid robots performing manipulation and locomotion tasks must identify the objects in the environment specified by the challenge issued by the United States' Defense Advanced Research Projects Agency, e.g., doors, valves, drills, debris, uneven terrain, and stairs. To enable a humanoid to undertake these tasks, we construct a camera-laser fusion system and develop an environmental recognition algorithm. A laser distance sensor swept by a motor is used to obtain 3D point cloud data. We project the 3D point cloud onto the 2D image according to the intrinsic parameters of the camera and the distortion model of the lens. In this manner, our fusion sensor system performs the same functions as the RGB-D sensors generally used in segmentation research. Our recognition algorithm is based on super-pixel segmentation and random sampling. The proposed approach clusters the unorganized point cloud according to geometric characteristics, namely, proximity and co-planarity. To assess the feasibility of our system and algorithm, we utilize the humanoid robot DRC-HUBO, and the results are demonstrated in the accompanying video.
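The projection step described in the abstract can be sketched as follows. This is a minimal illustration of projecting laser-scanner points into the camera image, assuming a pinhole camera model with Brown-Conrady radial distortion; the parameter values used below (focal lengths, principal point, distortion coefficients) are illustrative assumptions, not calibration values from the paper.

```python
def project_point(X, Y, Z, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point in the camera frame (Z forward) to pixel (u, v).

    fx, fy: focal lengths in pixels; cx, cy: principal point;
    k1, k2: radial distortion coefficients (Brown-Conrady, radial terms only).
    Returns None for points behind the camera.
    """
    if Z <= 0:
        return None  # behind the image plane; not visible
    # Normalized image coordinates
    x, y = X / Z, Y / Z
    # Radial lens distortion
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * d, y * d
    # Apply camera intrinsics
    return fx * xd + cx, fy * yd + cy


def project_cloud(points, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project an iterable of (X, Y, Z) points, dropping invisible ones."""
    pixels = []
    for X, Y, Z in points:
        uv = project_point(X, Y, Z, fx, fy, cx, cy, k1, k2)
        if uv is not None:
            pixels.append(uv)
    return pixels
```

Once the cloud points are mapped into the image plane, each projected pixel carries both color and range, which is what lets the fusion system stand in for an RGB-D sensor in the segmentation pipeline.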
Publisher
Korean Society of Mechanical Engineers
Issue Date
2017-06
Language
English
Article Type
Article
Keywords
IMAGE SEGMENTATION
Citation
JOURNAL OF MECHANICAL SCIENCE AND TECHNOLOGY, v.31, no.6, pp.2997 - 3003
ISSN
1738-494X
DOI
10.1007/s12206-017-0543-0
URI
http://hdl.handle.net/10203/226646
Appears in Collection
ME-Journal Papers(저널논문)