DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jang, Youngkyoon | ko |
dc.contributor.author | Noh, Seung-Tak | ko |
dc.contributor.author | Chang, Hyung Jin | ko |
dc.contributor.author | Kim, Tae-Kyun | ko |
dc.contributor.author | Woo, Woon-Tack | ko |
dc.date.accessioned | 2015-04-29T01:27:52Z | - |
dc.date.available | 2015-04-29T01:27:52Z | - |
dc.date.created | 2015-04-27 | - |
dc.date.issued | 2015-04 | - |
dc.identifier.citation | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, v.21, no.4, pp.501 - 510 | - |
dc.identifier.issn | 1077-2626 | - |
dc.identifier.uri | http://hdl.handle.net/10203/198310 | - |
dc.description.abstract | In this paper, we present a novel framework for simultaneous detection of click actions and estimation of occluded fingertip positions from egocentrically viewed single-depth image sequences. For the detection and estimation, a novel probabilistic inference based on knowledge priors of clicking motion and clicked position is presented. Based on the detection and estimation results, we achieve fine-grained bare-hand interaction with virtual objects from an egocentric viewpoint. Our contributions include: (i) rotation- and translation-invariant finger clicking action and position estimation, combining 2D image-based fingertip detection with 3D hand posture estimation in the egocentric viewpoint; (ii) a novel spatio-temporal random forest, which performs the detection and estimation efficiently in a single framework; and (iii) a selection process utilizing the proposed clicking action detection and position estimation in arm-reachable AR/VR space, which does not require any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions when selecting objects in AR/VR space while wearing an HMD with an attached egocentric depth camera. | - |
dc.language | English | - |
dc.publisher | IEEE COMPUTER SOC | - |
dc.title | 3D Finger CAPE: Clicking Action and Position Estimation under Self-Occlusions in Egocentric Viewpoint | - |
dc.type | Article | - |
dc.identifier.wosid | 000351757000010 | - |
dc.identifier.scopusid | 2-s2.0-84961289622 | - |
dc.type.rims | ART | - |
dc.citation.volume | 21 | - |
dc.citation.issue | 4 | - |
dc.citation.beginningpage | 501 | - |
dc.citation.endingpage | 510 | - |
dc.citation.publicationname | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS | - |
dc.identifier.doi | 10.1109/TVCG.2015.2391860 | - |
dc.contributor.localauthor | Kim, Tae-Kyun | - |
dc.contributor.localauthor | Woo, Woon-Tack | - |
dc.contributor.nonIdAuthor | Jang, Youngkyoon | - |
dc.contributor.nonIdAuthor | Noh, Seung-Tak | - |
dc.contributor.nonIdAuthor | Chang, Hyung Jin | - |
dc.description.isOpenAccess | N | - |
dc.type.journalArticle | Article; Proceedings Paper | - |
dc.subject.keywordAuthor | Hand tracking | - |
dc.subject.keywordAuthor | spatio-temporal forest | - |
dc.subject.keywordAuthor | selection | - |
dc.subject.keywordAuthor | augmented reality | - |
dc.subject.keywordAuthor | computer vision | - |
dc.subject.keywordAuthor | self-occlusion | - |
dc.subject.keywordAuthor | clicking action detection | - |
dc.subject.keywordAuthor | fingertip position estimation | - |