HCCMeshes: Hierarchical-Culling oriented Compact Meshes

DC Field: Value (Language)
dc.contributor.author: Kim, Tae-Joon (ko)
dc.contributor.author: Byun, Yongyoung (ko)
dc.contributor.author: Kim, Yongjin (ko)
dc.contributor.author: Moon, Bochang (ko)
dc.contributor.author: Lee, Seungyong (ko)
dc.contributor.author: Yoon, Sung-Eui (ko)
dc.date.accessioned: 2011-07-08T07:20:53Z
dc.date.available: 2011-07-08T07:20:53Z
dc.date.created: 2012-02-06
dc.date.issued: 2010
dc.identifier.citation: COMPUTER GRAPHICS FORUM, v.29, no.2, pp.299 - 308
dc.identifier.issn: 0167-7055
dc.identifier.uri: http://hdl.handle.net/10203/24523
dc.description.abstract: Hierarchical culling is a key acceleration technique used to efficiently handle massive models for ray tracing, collision detection, etc. To support such hierarchical culling, bounding volume hierarchies (BVHs) combined with meshes are widely used. However, BVHs may require a very large amount of memory space, which can negate the benefits of using BVHs. To address this problem, we present a novel hierarchical-culling oriented compact mesh representation, HCCMesh, which tightly integrates a mesh and a BVH together. As an in-core representation of the HCCMesh, we propose the i-HCCMesh representation, which provides efficient random hierarchical traversal and high culling efficiency with a small runtime decompression overhead. To further reduce the storage requirement, the in-core representation is compressed into our out-of-core representation, o-HCCMesh, using a simple dictionary-based compression method. At runtime, o-HCCMeshes are fetched from an external drive and decompressed into i-HCCMeshes stored in main memory. The i-HCCMesh and o-HCCMesh achieve 3.6:1 and 10.4:1 compression ratios on average, compared to a naively compressed (e.g., quantized) mesh and BVH representation. We test the HCCMesh representations with ray tracing, collision detection, photon mapping, and non-photorealistic rendering. Because of the reduced data access time, a smaller working set size, and a low runtime decompression overhead, we can handle models ten times larger on commodity hardware without expensive disk I/O thrashing. When disk I/O thrashing is avoided using our representation, runtime performance improves by up to two orders of magnitude over a naively compressed representation.
dc.description.sponsorship: We would like to thank Christian Lauterbach and the anonymous reviewers for their constructive feedback. We also thank the members of KAIST SGLab. for their helpful feedback. The St. Matthew, David, and Lucy models are courtesy of Stanford University. The Sponza model and the CAD turbine model are courtesy of an anonymous donor and of Kitware, respectively. This project was supported in part by MKE/MCST/IITA [2008-F-033-02, 2008-F-030-02], MCST/KEIT [2006-S-045-1], MKE/IITA u-Learning, MKE digital mask control, MCST/KOCCA-CTR&DP-2009, KRF-2008-313-D00922, KMCC, and MSRA Eheritage. (en)
dc.language: English
dc.language.iso: en_US (en)
dc.publisher: WILEY-BLACKWELL PUBLISHING
dc.subject: MODELS
dc.title: HCCMeshes: Hierarchical-Culling oriented Compact Meshes
dc.type: Article
dc.identifier.wosid: 000278182500006
dc.identifier.scopusid: 2-s2.0-77952821776
dc.type.rims: ART
dc.citation.volume: 29
dc.citation.issue: 2
dc.citation.beginningpage: 299
dc.citation.endingpage: 308
dc.citation.publicationname: COMPUTER GRAPHICS FORUM
dc.identifier.doi: 10.1111/j.1467-8659.2009.01599.x
dc.embargo.liftdate: 9999-12-31
dc.embargo.terms: 9999-12-31
dc.contributor.localauthor: Yoon, Sung-Eui
dc.contributor.nonIdAuthor: Kim, Yongjin
dc.contributor.nonIdAuthor: Lee, Seungyong
dc.type.journalArticle: Article
dc.subject.keywordPlus: MODELS
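
The abstract above describes a compact representation in which a mesh and its BVH are stored together so that bounding boxes take far less memory than full-precision floats. As a rough illustration only, the sketch below shows one common way such compact BVH nodes can be quantized relative to their parent box and dequantized during traversal; the struct layout, 8-bit quantization, and all names are assumptions made for this example and do not reproduce the paper's actual i-HCCMesh/o-HCCMesh encoding.

// Minimal sketch of a quantized, hierarchy-aware BVH node, in the spirit of
// compact representations such as HCCMesh.  The layout and names here are
// illustrative assumptions, not the paper's actual encoding.
#include <array>
#include <cstdint>
#include <iostream>

struct AABB {
    std::array<float, 3> lo, hi;
};

// Child bounds are stored as 8-bit offsets relative to the parent box, so a
// node occupies a few bytes instead of two full-precision corner points.
struct CompactNode {
    std::array<std::uint8_t, 3> qlo, qhi;  // quantized bounds within the parent box
    std::uint32_t childOrTri;              // child index, or triangle index at a leaf
    bool isLeaf;
};

// Recover full-precision bounds from the quantized node and its parent box.
AABB dequantize(const CompactNode& n, const AABB& parent) {
    AABB box;
    for (int k = 0; k < 3; ++k) {
        const float extent = parent.hi[k] - parent.lo[k];
        box.lo[k] = parent.lo[k] + (n.qlo[k] / 255.0f) * extent;
        box.hi[k] = parent.lo[k] + (n.qhi[k] / 255.0f) * extent;
    }
    return box;  // a real traversal would widen by one quantization step to stay conservative
}

int main() {
    const AABB root{{0.0f, 0.0f, 0.0f}, {10.0f, 10.0f, 10.0f}};
    const CompactNode node{{0, 0, 0}, {128, 128, 255}, 1, false};
    const AABB child = dequantize(node, root);
    std::cout << "child hi.x = " << child.hi[0] << "\n";  // ~5.02
    return 0;
}

Storing bounds relative to the parent is what keeps such nodes small while still supporting top-down random traversal: full-precision boxes are recovered on the fly from the parent's box during the descent, so only the root needs uncompressed coordinates.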
Appears in Collection: CS-Journal Papers (Journal Papers)