Generating unified model for dressed virtual humans

Cited 5 times in Web of Science; cited 0 times in Scopus
Scenes containing crowds of dressed virtual humans are gaining attention and importance in 3D games and virtual reality applications. Crowd scenes, which include large numbers of virtual humans, require complex computation for animation and rendering. In this research, new methods are proposed to generate efficient virtual human models by unifying a body and a garment into a single animatable model that carries skinning parameters for common skeleton-driven animation. The generated model has controlled geometric complexity and semantic information. The unified model is constructed using the correspondence between the body and garment meshes. To establish this correspondence, two opposite optimization methods are proposed and compared: the first fits the body onto the garment, and the second fits the garment onto the body. The innovative aspect of our method lies in supporting multiple correspondences between body and cloth parts. This makes it possible to handle a skirt model, which is difficult to process with previous methods due to its topological differences from the body model.
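As a rough illustration of the weight-transfer idea described in the abstract (not the paper's actual optimization), skinning parameters from the body can be propagated to garment vertices through a correspondence; a minimal sketch using a brute-force nearest-vertex correspondence, with all function names hypothetical:

```python
import math

def nearest_vertex(point, body_verts):
    """Index of the body vertex closest to `point` (brute-force search;
    the paper instead establishes correspondence by optimization)."""
    return min(range(len(body_verts)),
               key=lambda i: math.dist(point, body_verts[i]))

def transfer_weights(body_verts, body_weights, garment_verts):
    """Give each garment vertex the skinning weights of its nearest
    body vertex, yielding one unified, skeleton-animatable model."""
    return [body_weights[nearest_vertex(g, body_verts)]
            for g in garment_verts]
```

A nearest-vertex rule like this is single-valued per garment vertex; the paper's contribution of multiple body-to-cloth correspondences is what allows topologically different garments such as skirts to be handled.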
Publisher
SPRINGER
Issue Date
2005-09
Language
English
Article Type
Article; Proceedings Paper
Citation

VISUAL COMPUTER, v.21, no.8-10, pp.522 - 531

ISSN
0178-2789
URI
http://hdl.handle.net/10203/88420
Appears in Collection
GCT-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.