What Do Pedestrians See?: Left Right Pose Classification in Pedestrian-View using LRPose recognizer

DC Field / Value / Language
dc.contributor.author: Choi, Seungmin (ko)
dc.contributor.author: Lee, Jae-Young (ko)
dc.date.accessioned: 2023-09-07T10:00:30Z
dc.date.available: 2023-09-07T10:00:30Z
dc.date.created: 2023-09-07
dc.date.issued: 2021-07-12
dc.identifier.citation: 2021 18th International Conference on Ubiquitous Robots (UR)
dc.identifier.issn: 2325-033X
dc.identifier.uri: http://hdl.handle.net/10203/312328
dc.description.abstract: A robot moving on an outdoor sidewalk can recognize its location using GPS signals. However, in urban environments surrounded by skyscrapers, GPS signals are often inaccurate. Even when they are accurate, it is difficult to determine which lane the robot occupies on a road several tens of meters wide. In this article, an image-based neural network is proposed to recognize the position of a moving robot on the sidewalk. Specifically, we propose a classifier, the Left Right Pose (LRPose) recognizer, which determines whether the pedestrian is on the left or the right side of the road in the pedestrian view. The input is assumed to be a frontal image taken from the sidewalk. The LRPose recognizer converts the input image into a feature map through convolution layers and classifies the features into three classes: left, right, and uncertain. About 36,000 ground-truth images were collected to train the network. So that the LRPose recognizer works robustly under changes in illumination, weather, and environment, images acquired downtown and in the suburbs, at night and during the day, were included. In experiments, the proposed LRPose recognizer achieved an accuracy of 94.7% in suburban areas, 74.75% in urban areas with very high population density, and 84.7% overall.
dc.language: English
dc.publisher: IEEE
dc.title: What Do Pedestrians See?: Left Right Pose Classification in Pedestrian-View using LRPose recognizer
dc.type: Conference
dc.identifier.wosid: 000706970000002
dc.identifier.scopusid: 2-s2.0-85112441678
dc.type.rims: CONF
dc.citation.publicationname: 2021 18th International Conference on Ubiquitous Robots (UR)
dc.identifier.conferencecountry: KO
dc.identifier.conferencelocation: Gangneung
dc.identifier.doi: 10.1109/ur52253.2021.9494654
dc.contributor.nonIdAuthor: Lee, Jae-Young
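The abstract describes a pipeline in which convolution layers turn the input image into a feature map that is then classified into three classes (left, right, uncertain). The sketch below is a minimal, self-contained illustration of that kind of pipeline (convolution, pooled feature vector, three-way softmax) using random weights; the function names, kernel sizes, and shapes are illustrative assumptions, not the authors' actual LRPose architecture.

```python
import numpy as np

# Illustrative class labels matching the three classes named in the abstract.
CLASSES = ["left", "right", "uncertain"]

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernels, weights, bias):
    """Feature maps -> global average pooling -> linear layer -> softmax."""
    features = np.array([conv2d(image, k).mean() for k in kernels])
    logits = weights @ features + bias
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))], probs

# Toy inputs: a random 32x32 grayscale image and random, untrained weights.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
weights = rng.standard_normal((3, 4))
bias = np.zeros(3)
label, probs = classify(image, kernels, weights, bias)
```

With random weights the predicted label is of course meaningless; the point is only the data flow from image to a three-class probability vector.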
Appears in Collection: RIMS Conference Papers
