3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint

Cited 62 times in Web of Science; cited 64 times in Scopus
  • Hits: 1087
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Jang, Youngkyoon | ko
dc.contributor.author | Noh, Seung-Tak | ko
dc.contributor.author | Chang, Hyung Jin | ko
dc.contributor.author | Kim, Tae-Kyun | ko
dc.contributor.author | Woo, Woon-Tack | ko
dc.date.accessioned | 2015-04-29T01:27:52Z | -
dc.date.available | 2015-04-29T01:27:52Z | -
dc.date.created | 2015-04-27 | -
dc.date.issued | 2015-04 | -
dc.identifier.citation | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, v.21, no.4, pp.501 - 510 | -
dc.identifier.issn | 1077-2626 | -
dc.identifier.uri | http://hdl.handle.net/10203/198310 | -
dc.description.abstract | In this paper we present a novel framework for simultaneous detection of click action and estimation of occluded fingertip positions from egocentric viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we were able to achieve a fine resolution level of a bare hand-based interaction with virtual objects in egocentric viewpoint. Our contributions include: (i) a rotation and translation invariant finger clicking action and position estimation using the combination of 2D image-based fingertip detection with 3D hand posture estimation in egocentric viewpoint. (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework. We also present (iii) a selection process utilizing the proposed clicking action detection and position estimation in an arm reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions in the process of selecting objects in AR/VR space whilst wearing an egocentric-depth camera-attached HMD. | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.title | 3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint | -
dc.type | Article | -
dc.identifier.wosid | 000351757000010 | -
dc.identifier.scopusid | 2-s2.0-84961289622 | -
dc.type.rims | ART | -
dc.citation.volume | 21 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 501 | -
dc.citation.endingpage | 510 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS | -
dc.identifier.doi | 10.1109/TVCG.2015.2391860 | -
dc.contributor.localauthor | Kim, Tae-Kyun | -
dc.contributor.localauthor | Woo, Woon-Tack | -
dc.contributor.nonIdAuthor | Jang, Youngkyoon | -
dc.contributor.nonIdAuthor | Noh, Seung-Tak | -
dc.contributor.nonIdAuthor | Chang, Hyung Jin | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article; Proceedings Paper | -
dc.subject.keywordAuthor | Hand tracking | -
dc.subject.keywordAuthor | spatio-temporal forest | -
dc.subject.keywordAuthor | selection | -
dc.subject.keywordAuthor | augmented reality | -
dc.subject.keywordAuthor | computer vision | -
dc.subject.keywordAuthor | self-occlusion | -
dc.subject.keywordAuthor | clicking action detection | -
dc.subject.keywordAuthor | fingertip position estimation | -
Appears in Collections
CS-Journal Papers; GCT-Journal Papers
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
