Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes

Cited 2 times in Web of Science; cited 0 times in Scopus
  • Hits: 46
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Lee, Jihyun | ko
dc.contributor.author | Sung, Minhyuk | ko
dc.contributor.author | Choi, Honggyu | ko
dc.contributor.author | Kim, Tae-Kyun | ko
dc.date.accessioned | 2023-11-28T08:04:49Z | -
dc.date.available | 2023-11-28T08:04:49Z | -
dc.date.created | 2023-11-27 | -
dc.date.issued | 2023-06-20 | -
dc.identifier.citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.21169 - 21178 | -
dc.identifier.issn | 1063-6919 | -
dc.identifier.uri | http://hdl.handle.net/10203/315357 | -
dc.description.abstract | We present Implicit Two Hands (Im2Hands), the first neural implicit representation of two interacting hands. Unlike existing methods on two-hand reconstruction that rely on a parametric hand model and/or low-resolution meshes, Im2Hands can produce fine-grained geometry of two hands with high hand-to-hand and hand-to-image coherency. To handle the shape complexity and interaction context between two hands, Im2Hands models the occupancy volume of two hands - conditioned on an RGB image and coarse 3D keypoints - by two novel attention-based modules responsible for (1) initial occupancy estimation and (2) context-aware occupancy refinement, respectively. Im2Hands first learns per-hand neural articulated occupancy in the canonical space designed for each hand using query-image attention. It then refines the initial two-hand occupancy in the posed space to enhance the coherency between the two hand shapes using query-anchor attention. In addition, we introduce an optional keypoint refinement module to enable robust two-hand shape estimation from predicted hand keypoints in a single-image reconstruction scenario. We experimentally demonstrate the effectiveness of Im2Hands on two-hand reconstruction in comparison to related methods, where ours achieves state-of-the-art results. Our code is publicly available at https://github.com/jyunlee/Im2Hands. | -
dc.language | English | -
dc.publisher | CVF | -
dc.title | Im2Hands: Learning Attentive Implicit Representation of Interacting Two-Hand Shapes | -
dc.type | Conference | -
dc.identifier.wosid | 001062531305049 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 21169 | -
dc.citation.endingpage | 21178 | -
dc.citation.publicationname | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | -
dc.identifier.conferencecountry | CA | -
dc.identifier.conferencelocation | Vancouver | -
dc.identifier.doi | 10.1109/CVPR52729.2023.02028 | -
dc.contributor.localauthor | Sung, Minhyuk | -
dc.contributor.localauthor | Kim, Tae-Kyun | -
dc.contributor.nonIdAuthor | Lee, Jihyun | -
dc.contributor.nonIdAuthor | Choi, Honggyu | -
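
The dc.description.abstract entry above describes a two-stage, attention-based occupancy architecture: per-hand initial occupancy is estimated in a canonical space via query-image attention, and the joint two-hand occupancy is then refined in the posed space via query-anchor attention. The snippet below is a minimal PyTorch sketch of that idea only; it is not the authors' released implementation (see the GitHub link in the abstract), and the module names, feature dimensions, anchor tokens, and the use of nn.MultiheadAttention are illustrative assumptions.

```python
# Minimal sketch (assumption, not the released Im2Hands code) of the two-stage
# occupancy prediction outlined in the abstract: query points first attend to
# image features (query-image attention) for an initial per-hand occupancy,
# then the joint two-hand occupancy is refined with attention to anchor
# features summarizing both hands (query-anchor attention).
import torch
import torch.nn as nn


class InitialOccupancy(nn.Module):
    """Per-hand occupancy: canonical-space query points attend to image features."""

    def __init__(self, feat_dim=128, num_heads=4):
        super().__init__()
        self.query_embed = nn.Linear(3, feat_dim)  # embed 3D query points
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.occ_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1), nn.Sigmoid(),
        )

    def forward(self, queries, image_tokens):
        # queries: (B, Q, 3) canonical-space points; image_tokens: (B, T, feat_dim)
        q = self.query_embed(queries)
        ctx, _ = self.attn(q, image_tokens, image_tokens)  # query-image attention
        return self.occ_head(ctx).squeeze(-1)  # (B, Q) initial occupancy


class OccupancyRefiner(nn.Module):
    """Refine two-hand occupancy in the posed space for hand-to-hand coherency."""

    def __init__(self, feat_dim=128, num_heads=4):
        super().__init__()
        self.query_embed = nn.Linear(3 + 1, feat_dim)  # point + its initial occupancy
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.occ_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1), nn.Sigmoid(),
        )

    def forward(self, posed_queries, init_occ, anchor_tokens):
        # posed_queries: (B, Q, 3); init_occ: (B, Q); anchor_tokens: (B, A, feat_dim)
        q = self.query_embed(torch.cat([posed_queries, init_occ.unsqueeze(-1)], dim=-1))
        ctx, _ = self.attn(q, anchor_tokens, anchor_tokens)  # query-anchor attention
        return self.occ_head(ctx).squeeze(-1)  # (B, Q) refined occupancy


if __name__ == "__main__":
    B, Q, T, A, D = 2, 1024, 196, 32, 128
    occ_init = InitialOccupancy(D)(torch.rand(B, Q, 3), torch.rand(B, T, D))
    occ_ref = OccupancyRefiner(D)(torch.rand(B, Q, 3), occ_init, torch.rand(B, A, D))
    print(occ_init.shape, occ_ref.shape)  # both torch.Size([2, 1024])
```

In the abstract's formulation the occupancy is conditioned on an RGB image and coarse 3D keypoints; here, image_tokens and anchor_tokens are hypothetical placeholders for features extracted from those inputs, and the optional keypoint refinement module is omitted.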
Appears in Collection
CS-Conference Papers (학술회의논문; Conference Papers)
Files in This Item
There are no files associated with this item.
