DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kweon, In-So | ko |
dc.contributor.author | Kanade, Takeo | ko |
dc.date.accessioned | 2011-03-07T05:52:34Z | - |
dc.date.available | 2011-03-07T05:52:34Z | - |
dc.date.created | 2012-02-06 | - |
dc.date.issued | 1992-02 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.14, no.2, pp.278 - 292 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10203/22446 | - |
dc.description.abstract | This paper presents 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain using multiple sensors. We have developed the locus method to model the rugged terrain. The locus method exploits sensor geometry to efficiently build a terrain representation from multiple sensor data. Incrementally modeling the terrain from a sequence of range images requires an accurate estimate of motion between successive images. In rugged terrain, estimating motion accurately is difficult because of occlusions and irregularities. We show how to extend the locus method to pixel-based terrain matching, called the iconic matching method, to solve these problems. To achieve the required accuracy in the motion estimate, our terrain matching method combines feature matching, iconic matching, and inertial navigation data. Over a long distance of robot motion, it is difficult to avoid error accumulation in a composite terrain map that is the result of only local observations. However, a prior digital elevation map (DEM) can reduce this error accumulation if we estimate the vehicle position in the DEM. We apply the locus method to estimate the vehicle position in the DEM by matching a sequence of range images with the DEM. Experimental results from large-scale real and synthetic terrains demonstrate the feasibility and power of our 3-D mapping techniques for rugged terrain. In real world experiments, we built a composite terrain map by merging 125 real range images over a distance of 100 m. Using synthetic range images, we produced a composite map of 150 m from 159 images. In this work, we demonstrate a 3-D vision system for modeling rugged terrain. With this system, mobile robots operating in rugged environments can build accurate terrain models from multiple sensor data. | - |
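The abstract describes incrementally merging a sequence of range images into a composite elevation map. The following is a minimal, generic sketch of that idea only — per-cell averaging of elevation samples on a grid. It is not the paper's locus method or iconic matching; the class name, cell size, and sample points are all illustrative assumptions.

```python
# Generic sketch of incremental elevation-map fusion (NOT the paper's
# locus method): successive "range images", already expressed as (x, y, z)
# world points, are merged into a grid map by averaging samples per cell.

class ElevationMap:
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.sums = {}    # cell -> running sum of elevation samples
        self.counts = {}  # cell -> number of samples seen so far

    def add_points(self, points):
        """Merge one range image, given as an iterable of (x, y, z)."""
        for x, y, z in points:
            cell = (int(x // self.cell_size), int(y // self.cell_size))
            self.sums[cell] = self.sums.get(cell, 0.0) + z
            self.counts[cell] = self.counts.get(cell, 0) + 1

    def elevation(self, x, y):
        """Averaged elevation of the cell containing (x, y), or None."""
        cell = (int(x // self.cell_size), int(y // self.cell_size))
        n = self.counts.get(cell)
        return self.sums[cell] / n if n else None

# Two overlapping observations of the same grid cell:
m = ElevationMap()
m.add_points([(0.2, 0.3, 1.0)])
m.add_points([(0.7, 0.6, 3.0)])
print(m.elevation(0.5, 0.5))  # averaged elevation of cell (0, 0) -> 2.0
```

In the paper, the hard part that this sketch omits is estimating the inter-image motion (via feature matching, iconic matching, and inertial data) before points from successive images can be placed in a common frame.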
dc.description.sponsorship | The authors would like to thank W. Whittaker, M. Hebert, and E. Krotkov for their helpful discussions throughout this work. We would also like to thank K. Olin of Hughes Research Laboratories for providing range images and the DEM. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NASA or the United States Government. | en |
dc.language | English | - |
dc.language.iso | en_US | en |
dc.publisher | IEEE COMPUTER SOC | - |
dc.title | High-Resolution Terrain Map from Multiple Sensor Data | - |
dc.type | Article | - |
dc.identifier.wosid | A1992HC02900014 | - |
dc.type.rims | ART | - |
dc.citation.volume | 14 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 278 | - |
dc.citation.endingpage | 292 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.identifier.doi | 10.1109/34.121795 | - |
dc.embargo.liftdate | 9999-12-31 | - |
dc.embargo.terms | 9999-12-31 | - |
dc.contributor.localauthor | Kweon, In-So | - |
dc.contributor.nonIdAuthor | Kanade, Takeo | - |
dc.type.journalArticle | Letter | - |
dc.subject.keywordAuthor | AUTONOMOUS ROBOTS | - |
dc.subject.keywordAuthor | MATCHING | - |
dc.subject.keywordAuthor | RANGE IMAGES | - |
dc.subject.keywordAuthor | RUGGED TERRAIN | - |
dc.subject.keywordAuthor | SENSOR FUSION | - |
dc.subject.keywordAuthor | TERRAIN MAPS | - |
dc.subject.keywordAuthor | 3-D VISION | - |