VDR-AM: View-dependent representation of articulated models

We present a novel View-Dependent Representation of Articulated Models (VDR-AM) and demonstrate its main benefits in the context of view-dependent rendering integrated with occlusion culling for large-scale crowd scenes. To provide varying resolutions for each animated, articulated model, we propose using a cluster hierarchy as the VDR-AM of the model. The cluster hierarchy serves as a dual representation for both view-dependent rendering and occlusion culling. To make both operations efficient, we construct each cluster of the hierarchy so that it contains a spatially coherent portion of the mesh whose triangles also have similar simplification errors. To this end, we present an error-aware clustering method for articulated models. We also identify a subset of animation poses that represents the original pose data well and apply the well-known quadric-based simplification to compute our representation efficiently while achieving high simplification quality. At runtime, we choose an LOD cut from the cluster hierarchy given a user-specified screen-space error bound and render all the visible clusters in the cut. We implement our method on the GPU and achieve interactive performance (e.g., 40 frames per second) for large-scale crowd scenes consisting of up to thousands of articulated models and 242 M triangles, without noticeable visual artifacts.
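The runtime LOD-cut selection described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `Cluster` structure, the `projected_error` metric, and the traversal are hypothetical, and occlusion culling and GPU rendering are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Cluster:
    # Object-space simplification error of this cluster's geometry (assumed precomputed)
    error: float
    # Distance from the viewpoint to the cluster (assumed precomputed per frame)
    distance: float
    children: list = field(default_factory=list)

def projected_error(c, scale=1.0):
    """Approximate screen-space error: object-space error scaled by 1/distance.
    'scale' folds in field-of-view and viewport resolution (an assumption;
    the paper's exact error metric may differ)."""
    return scale * c.error / max(c.distance, 1e-6)

def select_lod_cut(root, error_bound, scale=1.0):
    """Descend the cluster hierarchy from the root: keep a cluster in the cut
    if its projected error already satisfies the user-specified bound (or it
    is a leaf); otherwise refine by visiting its children."""
    cut, stack = [], [root]
    while stack:
        c = stack.pop()
        if not c.children or projected_error(c, scale) <= error_bound:
            cut.append(c)
        else:
            stack.extend(c.children)
    return cut
```

A coarse error bound keeps the cut near the root (fewer, coarser clusters); a tight bound pushes the cut toward the leaves, trading performance for fidelity.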
Publisher
Vaclav Skala Union Agency
Issue Date
2013
Language
English
Citation

Journal of WSCG, v.21, no.3, pp.183 - 192

ISSN
1213-6972
URI
http://hdl.handle.net/10203/201651
Appears in Collection
CS-Journal Papers (journal papers)
