Machine-Learned Light-Field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images

Cited 6 times in Web of Science · Cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | Bae, Sang-In | ko
dc.contributor.author | Lee, Sangyeon | ko
dc.contributor.author | Kwon, Jae-Myeong | ko
dc.contributor.author | Kim, Hyun-Kyung | ko
dc.contributor.author | Jang, Kyung-Won | ko
dc.contributor.author | Lee, Doheon | ko
dc.contributor.author | Jeong, Ki-Hun | ko
dc.date.accessioned | 2022-05-06T08:03:22Z | -
dc.date.available | 2022-05-06T08:03:22Z | -
dc.date.created | 2021-12-26 | -
dc.date.issued | 2022-04 | -
dc.identifier.citation | ADVANCED INTELLIGENT SYSTEMS, v.4, no.4 | -
dc.identifier.issn | 2640-4567 | -
dc.identifier.uri | http://hdl.handle.net/10203/296418 | -
dc.description.abstract | Facial expression conveys nonverbal communication information to help humans better perceive physical or psychophysical situations. Accurate 3D imaging provides stable topographic changes for reading facial expression. In particular, light-field cameras (LFCs) have high potential for constructing depth maps, thanks to a simple configuration of microlens arrays and an objective lens. Herein, machine-learned NIR-based LFCs (NIR-LFCs) for facial expression reading by extracting Euclidean distances of 3D facial landmarks in pairwise fashion are reported. The NIR-LFC contains microlens arrays with asymmetric Fabry-Perot filter and NIR bandpass filter on CMOS image sensor, fully packaged with two vertical-cavity surface-emitting lasers. The NIR-LFC not only increases the image contrast by 2.1 times compared with conventional LFCs, but also reduces the reconstruction errors by up to 54%, regardless of ambient illumination conditions. A multilayer perceptron (MLP) classifies input vectors, consisting of 78 pairwise distances on the facial depth map of happiness, anger, sadness, and disgust, and also exhibits exceptional average accuracy of 0.85 (p<0.05). This LFC provides a new platform for quantitatively labeling facial expression and emotion in point-of-care biomedical, social perception, or human-machine interaction applications. | -
dc.language | English | -
dc.publisher | WILEY | -
dc.title | Machine-Learned Light-Field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images | -
dc.type | Article | -
dc.identifier.wosid | 000730701800001 | -
dc.type.rims | ART | -
dc.citation.volume | 4 | -
dc.citation.issue | 4 | -
dc.citation.publicationname | ADVANCED INTELLIGENT SYSTEMS | -
dc.identifier.doi | 10.1002/aisy.202100182 | -
dc.contributor.localauthor | Lee, Doheon | -
dc.contributor.localauthor | Jeong, Ki-Hun | -
dc.contributor.nonIdAuthor | Kwon, Jae-Myeong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | facial expression reading | -
dc.subject.keywordAuthor | light-field cameras | -
dc.subject.keywordAuthor | machine learning | -
dc.subject.keywordAuthor | multilayer perceptrons | -
dc.subject.keywordAuthor | near-infrared imaging | -
dc.subject.keywordAuthor | 3D cameras | -
dc.subject.keywordPlus | PRESENTATION ATTACK DETECTION | -
dc.subject.keywordPlus | FACE RECOGNITION | -
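The abstract describes classifying input vectors of 78 pairwise Euclidean distances between 3D facial landmarks with an MLP; since C(13, 2) = 78, this implies 13 landmarks per face. A minimal sketch of that feature-extraction step, assuming NumPy and a (13, 3) landmark array (the function name and landmark count are illustrative assumptions, not from the paper):

```python
# Hypothetical sketch of the pairwise-distance feature extraction step.
# 78 pairwise distances imply C(13, 2) = 78, i.e. 13 facial landmarks.
from itertools import combinations
import numpy as np

def pairwise_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Flatten 3D landmarks into pairwise Euclidean distances.

    landmarks: (N, 3) array of 3D landmark coordinates from the depth map.
    Returns an (N*(N-1)//2,) feature vector, one distance per landmark pair.
    """
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# 13 landmarks -> 78-dimensional input vector for the MLP classifier
features = pairwise_distance_features(np.random.rand(13, 3))
print(features.shape)  # (78,)
```

These distance features are invariant to head translation and rotation, which is presumably why pairwise distances, rather than raw coordinates, feed the classifier.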
Appears in Collection
BiS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
This item is cited by other documents in WoS
