DC Field | Value | Language |
---|---|---|
dc.contributor.author | Leslie Ching Ow Tiong | ko |
dc.contributor.author | Kim, Seong Tae | ko |
dc.contributor.author | Ro, Yong Man | ko |
dc.date.accessioned | 2017-08-16T08:53:56Z | - |
dc.date.available | 2017-08-16T08:53:56Z | - |
dc.date.created | 2017-05-15 | - |
dc.date.issued | 2017-02 | - |
dc.identifier.citation | 멀티미디어학회논문지 (Journal of Korea Multimedia Society), v.20, no.2, pp.170 - 178 | - |
dc.identifier.issn | 1229-7771 | - |
dc.identifier.uri | http://hdl.handle.net/10203/225350 | - |
dc.description.abstract | Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source. Recognition accuracy in biometrics is affected by several sources of variability, including illumination and appearance variations. In this paper, we propose a new multimodal biometrics recognition method using convolutional neural networks, focusing on multimodal biometrics from the face and periocular regions. Through experiments, we demonstrate that a deep learning framework over multimodal facial biometric features helps achieve high recognition performance. | - |
dc.language | English | - |
dc.publisher | 한국멀티미디어학회 (Korea Multimedia Society) | - |
dc.title | Multimodal Face Biometrics by Using Convolutional Neural Networks | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.citation.volume | 20 | - |
dc.citation.issue | 2 | - |
dc.citation.beginningpage | 170 | - |
dc.citation.endingpage | 178 | - |
dc.citation.publicationname | 멀티미디어학회논문지 (Journal of Korea Multimedia Society) | - |
dc.identifier.kciid | ART002203545 | - |
dc.contributor.localauthor | Ro, Yong Man | - |
dc.contributor.nonIdAuthor | Leslie Ching Ow Tiong | - |
dc.description.isOpenAccess | N | - |
dc.subject.keywordAuthor | Multimodal Biometrics Recognition | - |
dc.subject.keywordAuthor | Face Recognition | - |
dc.subject.keywordAuthor | Convolutional Neural Networks | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
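The abstract describes feature-level fusion of face and periocular modalities with convolutional networks. As a rough illustration of the fusion idea only, the sketch below uses NumPy random projections as stand-ins for the paper's learned CNN branches; all names, crop sizes, and embedding dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of feature-level fusion for multimodal biometrics.
# In the paper's setting each branch would be a CNN over the face or
# periocular crop; here random projections stand in for learned features.

rng = np.random.default_rng(0)

def extract_features(image, projection):
    """Stand-in for a CNN branch: flatten the crop and project it."""
    flat = image.reshape(-1)
    feat = projection @ flat
    return feat / np.linalg.norm(feat)  # L2-normalise the embedding

# Dummy 32x32 grayscale crops for the two modalities (assumed sizes).
face = rng.random((32, 32))
periocular = rng.random((32, 32))

# One independent "branch" per modality (placeholder projections).
W_face = rng.standard_normal((64, 32 * 32))
W_peri = rng.standard_normal((64, 32 * 32))

f_face = extract_features(face, W_face)
f_peri = extract_features(periocular, W_peri)

# Feature-level fusion: concatenate the per-modality embeddings; a
# downstream classifier (e.g. a softmax layer) would consume this vector.
fused = np.concatenate([f_face, f_peri])
print(fused.shape)  # (128,)
```

A real system would train the two branches jointly so the fused vector is discriminative across identities; the concatenation step is where the multimodal information is combined.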