High-Resolution Terrain Map from Multiple Sensor Data

Cited 85 times in Web of Science; cited 109 times in Scopus
  • Hits: 414
  • Downloads: 729
DC Field | Value | Language
dc.contributor.author | Kweon, In-So | ko
dc.contributor.author | Kanade, Takeo | ko
dc.date.accessioned | 2011-03-07T05:52:34Z | -
dc.date.available | 2011-03-07T05:52:34Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 1992-02 | -
dc.identifier.citation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.14, no.2, pp.278 - 292 | -
dc.identifier.issn | 0162-8828 | -
dc.identifier.uri | http://hdl.handle.net/10203/22446 | -
dc.description.abstract | This paper presents 3-D vision techniques for incrementally building an accurate 3-D representation of rugged terrain using multiple sensors. We have developed the locus method to model the rugged terrain. The locus method exploits sensor geometry to efficiently build a terrain representation from multiple sensor data. Incrementally modeling the terrain from a sequence of range images requires an accurate estimate of motion between successive images. In rugged terrain, estimating motion accurately is difficult because of occlusions and irregularities. We show how to extend the locus method to pixel-based terrain matching, called the iconic matching method, to solve these problems. To achieve the required accuracy in the motion estimate, our terrain matching method combines feature matching, iconic matching, and inertial navigation data. Over a long distance of robot motion, it is difficult to avoid error accumulation in a composite terrain map that is the result of only local observations. However, a prior digital elevation map (DEM) can reduce this error accumulation if we estimate the vehicle position in the DEM. We apply the locus method to estimate the vehicle position in the DEM by matching a sequence of range images with the DEM. Experimental results from large-scale real and synthetic terrains demonstrate the feasibility and power of our 3-D mapping techniques for rugged terrain. In real-world experiments, we built a composite terrain map by merging 125 real range images over a distance of 100 m. Using synthetic range images, we produced a composite map of 150 m from 159 images. In this work, we demonstrate a 3-D vision system for modeling rugged terrain. With this system, mobile robots operating in rugged environments can build accurate terrain models from multiple sensor data. | -
dc.description.sponsorship | The authors would like to thank W. Whittaker, M. Hebert, and E. Krotkov for their helpful discussions throughout this work. We would also like to thank K. Olin of Hughes Research Laboratories for providing range images and the DEM. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NASA or the United States Government. | en
dc.language | English | -
dc.language.iso | en_US | en
dc.publisher | IEEE COMPUTER SOC | -
dc.title | High-Resolution Terrain Map from Multiple Sensor Data | -
dc.type | Article | -
dc.identifier.wosid | A1992HC02900014 | -
dc.type.rims | ART | -
dc.citation.volume | 14 | -
dc.citation.issue | 2 | -
dc.citation.beginningpage | 278 | -
dc.citation.endingpage | 292 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | -
dc.identifier.doi | 10.1109/34.121795 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Kweon, In-So | -
dc.contributor.nonIdAuthor | Kanade, Takeo | -
dc.type.journalArticle | Letter | -
dc.subject.keywordAuthor | AUTONOMOUS ROBOTS | -
dc.subject.keywordAuthor | MATCHING | -
dc.subject.keywordAuthor | RANGE IMAGES | -
dc.subject.keywordAuthor | RUGGED TERRAIN | -
dc.subject.keywordAuthor | SENSOR FUSION | -
dc.subject.keywordAuthor | TERRAIN MAPS | -
dc.subject.keywordAuthor | 3-D VISION | -
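
Illustrative sketch: the abstract above describes incrementally merging many range images into a composite elevation map of rugged terrain. The short Python example below shows only that generic grid-merging idea, assuming each range image has already been converted to 3-D points in a common world frame (i.e., the vehicle pose for each image is known or has been estimated). It is not the paper's locus method or iconic matching; the ElevationGrid class, its parameters, and the synthetic scans are hypothetical.

# Minimal, illustrative sketch of incrementally fusing range data into a grid
# elevation map. NOT the paper's locus method; assumes each scan is already a
# set of 3-D points expressed in a common world frame (poses known).
import numpy as np

class ElevationGrid:
    """Simple 2.5-D terrain map: one elevation estimate per grid cell."""

    def __init__(self, x_min, y_min, size, resolution):
        self.x_min, self.y_min = x_min, y_min
        self.resolution = resolution                  # cell size in metres
        self.height = np.full((size, size), np.nan)   # elevation per cell
        self.count = np.zeros((size, size), dtype=int)

    def merge_points(self, points_xyz):
        """Fold one scan's world-frame points (N x 3) into the running map."""
        ix = ((points_xyz[:, 0] - self.x_min) / self.resolution).astype(int)
        iy = ((points_xyz[:, 1] - self.y_min) / self.resolution).astype(int)
        ok = ((ix >= 0) & (ix < self.height.shape[0]) &
              (iy >= 0) & (iy < self.height.shape[1]))
        for i, j, z in zip(ix[ok], iy[ok], points_xyz[ok, 2]):
            if self.count[i, j] == 0:
                self.height[i, j] = z
            else:
                # Running average of all observations that fell in this cell.
                n = self.count[i, j]
                self.height[i, j] = (self.height[i, j] * n + z) / (n + 1)
            self.count[i, j] += 1

if __name__ == "__main__":
    # Two fake, overlapping "range images" as clouds of world-frame points.
    rng = np.random.default_rng(0)
    scan_a = rng.uniform([0, 0, 0], [10, 10, 1], size=(500, 3))
    scan_b = rng.uniform([5, 0, 0], [15, 10, 1], size=(500, 3))

    grid = ElevationGrid(x_min=0.0, y_min=0.0, size=40, resolution=0.5)
    for scan in (scan_a, scan_b):          # incremental merging, scan by scan
        grid.merge_points(scan)
    print("cells observed:", int((grid.count > 0).sum()))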
Appears in Collection
EE-Journal Papers (Journal Papers)