Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view

From an image of a person, we can easily infer the natural 3D pose and shape of the person even when ambiguity exists. This is because we have a mental model that allows us to imagine the person's appearance from different viewing directions and to use the consistency between those views for inference. However, existing human mesh recovery methods consider only the direction from which the image was taken, due to their structural limitations. Hence, we propose Implicit 3D Human Mesh Recovery (ImpHMR), which can implicitly imagine a person in 3D space at the feature level via neural feature fields. In ImpHMR, a feature field is generated by a CNN-based image encoder for a given image. A 2D feature map is then volume-rendered from the feature field for a given viewing direction, and the pose and shape parameters are regressed from that feature. To exploit the consistency of pose and shape across unseen views, when 3D labels are available, the model predicts results, including the silhouette, from an arbitrary direction and forces them to match the rotated ground truth. When only 2D labels are available, we perform self-supervised learning via the constraint that pose and shape parameters inferred from different directions should be identical. Extensive evaluations show the efficacy of the proposed method.
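The self-supervised constraint described above can be illustrated with a minimal sketch. The function and variable names below are purely illustrative, not from the authors' code; in the actual method the parameters would be SMPL pose (theta) and shape (beta) vectors regressed from features volume-rendered at two different viewing directions, and the loss would drive them toward agreement.

```python
# Hypothetical sketch of the cross-view consistency constraint
# (illustrative names only): with 2D-only labels, pose/shape regressed
# from two viewing directions of the same feature field should agree.

def consistency_loss(params_a, params_b):
    """Mean squared difference between two parameter vectors."""
    assert len(params_a) == len(params_b)
    return sum((a - b) ** 2 for a, b in zip(params_a, params_b)) / len(params_a)

# Toy values: pose (theta) and shape (beta) predicted from two directions.
theta_view1, beta_view1 = [0.10, -0.20, 0.05], [1.0, 0.5]
theta_view2, beta_view2 = [0.12, -0.18, 0.06], [1.0, 0.4]

loss = consistency_loss(theta_view1 + beta_view1, theta_view2 + beta_view2)
```

Minimizing this term pushes the two predictions together; when both views yield identical parameters the loss is zero, which is exactly the consistency the abstract appeals to.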
Publisher
IEEE
Issue Date
2023-06
Language
English
Citation
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pp. 21148-21158
ISSN
1063-6919
DOI
10.1109/cvpr52729.2023.02026
URI
http://hdl.handle.net/10203/315721
Appears in Collection
EE-Conference Papers (Conference Papers)