An on-line performance-driven approach to real-time facial motion capture and generation

In this thesis, we present a new approach to on-line performance-driven facial animation. Our basic idea is to capture the facial motion of a real performer and adapt it to a virtual character in real time. For this purpose, we address three issues: facial expression capture, facial example construction, and facial animation.

We first propose a comprehensive solution for real-time facial expression capture without any devices such as head-mounted cameras or face-attached markers. Our scheme first captures the 2D facial features and the 3D head motion by exploiting anthropometric knowledge, and then captures the time-varying 3D feature positions due to facial expression alone. We adopt a Kalman filter to track the 3D features guided by their captured 2D positions, correcting their drift due to 3D head motion as well as removing noise.

For real-time facial animation, we incorporate multiple face models called "facial examples". Each example reflects both a distinct type of facial expression and the designer's insight, serving as a guideline for animation. However, it is difficult to achieve smooth local deformation and non-uniform blending without dedicated tools, since a facial example consists of a set of vertices with few handles. To facilitate local control of facial features, we present a method for extracting a set of wire curves [47] and deformation parameters from a facial example. The extracted wire curves are used both to represent a facial example effectively and to deform a face model.

Provided with these facial examples, we present a novel example-based approach for creating facial expressions of a model that mimic those of a face performer. The main issue is to find the best blend of facial examples according to the performer's expression. Our scheme predefines the weight functions for the parameterized target examples based on radial basis functions. At runtime, given an input expression of the face performer, we evaluate the blendin...
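The runtime blending step described above can be illustrated with a minimal sketch. This is not the thesis's implementation; it assumes Gaussian radial basis functions and a cardinal-interpolation setup (weight function i equals one at example i's parameter point and zero at the others), with hypothetical names `build_weight_functions` and `blend_weights`:

```python
import numpy as np

def build_weight_functions(example_params, sigma=1.0):
    """Precompute RBF coefficients so that weight function i evaluates
    to 1 at example i's parameter point and 0 at every other example."""
    P = np.asarray(example_params, dtype=float)          # (n, d) parameter points
    # Squared distances between all pairs of example parameter points
    d2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))               # Gaussian kernel matrix
    # Solve Phi @ C = I so that w_i(p_j) = delta_ij (cardinal interpolation)
    C = np.linalg.solve(Phi, np.eye(len(P)))
    return P, C, sigma

def blend_weights(p, P, C, sigma):
    """Evaluate all weight functions at the captured expression parameter p,
    then normalize so the example blend sums to one."""
    d2 = np.sum((P - np.asarray(p, dtype=float)) ** 2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    w = phi @ C
    return w / w.sum()

# Usage sketch: three facial examples parameterized in a 2D expression space.
P, C, s = build_weight_functions([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = blend_weights([0.2, 0.1], P, C, s)   # blend weights for a captured expression
```

The resulting weights would then be applied to the corresponding facial examples (e.g., to their wire-curve deformation parameters) to synthesize the output expression.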
Advisors
Shin, Sung-Yong (신성용)
Publisher
KAIST (Korea Advanced Institute of Science and Technology)
Issue Date
2004
Identifier
240750/325007  / 000965188
Language
eng
Description

Doctoral dissertation - KAIST : Division of Computer Science, 2004.8, [ix, 89 p.]

Keywords

ON-LINE; PERFORMANCE-DRIVEN; CAPTURE

URI
http://hdl.handle.net/10203/32875
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=240750&flag=dissertation
Appears in Collection
CS-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
