3DeformR: Freehand 3D Model Editing in Virtual Environments Considering Head Movements on Mobile Headsets

3D objects are the primary media in virtual reality environments in immersive cyberspace, also known as the Metaverse. By editing such objects, users can communicate with other individuals on mobile headsets. Whereas tangible controllers impose the burden of carrying additional devices, body-centric interaction techniques, such as hand gestures, remove that burden. However, object editing with hand gestures is usually overlooked. Accordingly, we propose and implement a palm-based virtual embodiment for hand-gestural model editing, named 3DeformR. We employ three optimized hand gestures with bi-harmonic deformation algorithms that enable selecting and editing 3D models at fine granularity. Our evaluation with nine participants compares three interaction techniques: a two-handed tangible controller (OMC), a naive implementation of hand gestures (SH), and 3DeformR. Two experimental tasks on planar and spherical objects show that 3DeformR outperforms SH in terms of task completion time (∼51%) and required actions (∼17%). Our participants with 3DeformR also perform significantly better than with the commercial standard (OMC), saving task time (∼43%) and actions (∼3%). Remarkably, the objects edited with 3DeformR show no discernible difference from those edited with tangible controllers, which are characterised by accurate and responsive detection.
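The abstract mentions bi-harmonic deformation as the basis for fine-grained model editing. As a minimal sketch of the general technique (not the paper's actual implementation), the following solves a constrained biharmonic system: handle vertices are displaced directly, and the remaining vertices are found by minimizing a discrete biharmonic energy. It uses a uniform graph Laplacian for simplicity, where a full implementation would typically use the cotangent Laplacian; all function names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def uniform_laplacian(n, edges):
    """Combinatorial (uniform) graph Laplacian L = D - A."""
    rows = [e[0] for e in edges] + [e[1] for e in edges]
    cols = [e[1] for e in edges] + [e[0] for e in edges]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    return sp.diags(deg) - A

def biharmonic_deform(V, edges, handles, handle_disp):
    """Displace handle vertices by handle_disp and solve for all other
    vertices by minimizing the discrete biharmonic energy |L d|^2."""
    n = V.shape[0]
    L = uniform_laplacian(n, edges)
    Q = (L.T @ L).tocsr()                       # bilaplacian, positive semidefinite
    free = np.setdiff1d(np.arange(n), handles)
    d = np.zeros_like(V, dtype=float)
    d[handles] = handle_disp
    # Eliminate the constrained variables: Q_ff d_f = -Q_fh d_h
    rhs = -(Q[free][:, handles] @ d[handles])
    solve = spla.factorized(Q[free][:, free].tocsc())
    for k in range(V.shape[1]):
        d[free, k] = solve(rhs[:, k])
    return V + d

# Usage: lift the right endpoint of a 5-vertex polyline; interior
# vertices follow smoothly.
V = np.array([[float(i), 0.0] for i in range(5)])
edges = [(i, i + 1) for i in range(4)]
out = biharmonic_deform(V, edges, np.array([0, 4]),
                        np.array([[0.0, 0.0], [0.0, 1.0]]))
```

Biharmonic (as opposed to harmonic) energies are a common choice for handle-based editing because they yield smooth, curvature-aware falloff around the selected region, which matches the fine-granularity selection and editing the abstract describes.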
Publisher
Association for Computing Machinery, Inc
Issue Date
2022-06
Language
English
Citation

13th ACM Multimedia Systems Conference (MMSys 2022), pp. 52–61

DOI
10.1145/3524273.3528180
URI
http://hdl.handle.net/10203/298768
Appears in Collection
IE-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.
