Automatic Recognition of Children Engagement from Facial Video using Convolutional Neural Networks

Cited 21 times in Web of Science · Cited 19 times in Scopus
  • Hits: 361
  • Downloads: 0
DC Field | Value | Language
dc.contributor.author | Yun, Woo-Han | ko
dc.contributor.author | Lee, Dongjin | ko
dc.contributor.author | Park, Chankyu | ko
dc.contributor.author | Kim, Jaehong | ko
dc.contributor.author | Kim, Junmo | ko
dc.date.accessioned | 2021-01-04T06:10:11Z | -
dc.date.available | 2021-01-04T06:10:11Z | -
dc.date.created | 2018-11-29 | -
dc.date.issued | 2020-10 | -
dc.identifier.citation | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v.11, no.4, pp.696 - 707 | -
dc.identifier.issn | 1949-3045 | -
dc.identifier.uri | http://hdl.handle.net/10203/279435 | -
dc.description.abstract | Automatic engagement recognition is a technique that is used to measure the engagement level of people in a specific task. Although previous research has utilized expensive and intrusive devices such as physiological sensors and pressure-sensing chairs, methods using RGB video cameras have become the most common because of the cost efficiency and noninvasiveness of video cameras. Automatic engagement recognition methods using video cameras are usually based on hand-crafted features and a statistical temporal dynamics modeling algorithm. This paper proposes a data-driven convolutional neural networks (CNNs)-based engagement recognition method that uses only facial images from input videos. As the amount of data in a dataset of children's engagement is insufficient for deep learning, pre-trained CNNs are utilized for low-level feature extraction from each video frame. In particular, a new layer combination for temporal dynamics modeling is employed to extract high-level features from low-level features. Experimental results on a database created using images of children from kindergarten demonstrate that the performance of the proposed method is superior to that of previous methods. The results indicate that the engagement level of children can be gauged automatically via deep learning even when the available database is deficient. | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | Automatic Recognition of Children Engagement from Facial Video using Convolutional Neural Networks | -
dc.type | Article | -
dc.identifier.wosid | 000590759000010 | -
dc.identifier.scopusid | 2-s2.0-85046729313 | -
dc.type.rims | ART | -
dc.citation.volume | 11 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 696 | -
dc.citation.endingpage | 707 | -
dc.citation.publicationname | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING | -
dc.identifier.doi | 10.1109/taffc.2018.2834350 | -
dc.contributor.localauthor | Kim, Junmo | -
dc.contributor.nonIdAuthor | Lee, Dongjin | -
dc.contributor.nonIdAuthor | Park, Chankyu | -
dc.contributor.nonIdAuthor | Kim, Jaehong | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Feature extraction | -
dc.subject.keywordAuthor | Face recognition | -
dc.subject.keywordAuthor | Cameras | -
dc.subject.keywordAuthor | Face | -
dc.subject.keywordAuthor | Heuristic algorithms | -
dc.subject.keywordAuthor | Machine learning | -
dc.subject.keywordAuthor | Databases | -
dc.subject.keywordAuthor | Affective computing | -
dc.subject.keywordAuthor | artificial neural networks | -
dc.subject.keywordAuthor | convolutional neural networks | -
dc.subject.keywordAuthor | engagement recognition | -
dc.subject.keywordAuthor | multi-layer neural networks | -
dc.subject.keywordAuthor | pattern recognition | -
dc.subject.keywordPlus | ATTENTION | -
dc.subject.keywordPlus | VISION | -
dc.subject.keywordPlus | MODEL | -
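
The abstract above describes a two-stage pipeline: a pre-trained CNN extracts low-level features from each facial frame of the video, and a separate layer combination models temporal dynamics to produce a clip-level engagement prediction. The sketch below is a minimal PyTorch illustration of that kind of pipeline only; the EngagementRecognizer class, the ResNet-18 backbone, the three-level output, and the mean-pooling temporal head are assumptions made here for illustration, not the specific architecture reported in the paper.

    # Minimal, hypothetical sketch of a per-frame-CNN + temporal-head pipeline.
    # The paper's actual "layer combination" for temporal dynamics modeling is
    # not reproduced here; mean pooling over time is an illustrative stand-in.
    import torch
    import torch.nn as nn
    from torchvision import models


    class EngagementRecognizer(nn.Module):
        def __init__(self, num_levels: int = 3):
            super().__init__()
            # Pre-trained backbone used as a frozen low-level feature extractor,
            # since the children's engagement dataset is too small to train a
            # CNN from scratch (as noted in the abstract).
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.feature_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()          # keep the 512-d feature per frame
            for p in backbone.parameters():
                p.requires_grad = False
            self.backbone = backbone

            # Hypothetical temporal head: pool frame features over time, then
            # map them to engagement-level logits with a small MLP.
            self.head = nn.Sequential(
                nn.Linear(self.feature_dim, 128),
                nn.ReLU(),
                nn.Linear(128, num_levels),
            )

        def forward(self, frames: torch.Tensor) -> torch.Tensor:
            # frames: (batch, time, 3, H, W) cropped face images from the video
            b, t, c, h, w = frames.shape
            feats = self.backbone(frames.reshape(b * t, c, h, w))      # (b*t, 512)
            feats = feats.reshape(b, t, self.feature_dim).mean(dim=1)  # temporal pooling
            return self.head(feats)                                    # (batch, num_levels)


    if __name__ == "__main__":
        model = EngagementRecognizer(num_levels=3).eval()
        clip = torch.randn(2, 16, 3, 224, 224)   # 2 clips of 16 face frames each
        with torch.no_grad():
            print(model(clip).shape)              # torch.Size([2, 3])

Replacing the pooling step with a learned temporal module (e.g., a recurrent or fully connected combination over the frame features) would bring the sketch closer in spirit to the data-driven temporal modeling the abstract describes.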
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.