DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이승희 | ko |
dc.contributor.author | 김예은 | ko |
dc.contributor.author | 김윤수 | ko |
dc.contributor.author | 이우주 | ko |
dc.contributor.author | 명현 | ko |
dc.date.accessioned | 2020-12-01T02:30:30Z | - |
dc.date.available | 2020-12-01T02:30:30Z | - |
dc.date.created | 2020-11-26 | - |
dc.date.issued | 2020-07-02 | - |
dc.identifier.citation | The 35th Institute of Control, Robotics and Systems Annual Conference (ICROS 2020), pp. 186-187 | - |
dc.identifier.uri | http://hdl.handle.net/10203/277819 | - |
dc.description.abstract | Recently, deep learning has achieved remarkable success in a variety of recognition and classification tasks. In particular, gesture recognition, which is essential for human-computer interaction, is much more accurate than it used to be, but practical use requires higher accuracy across more diverse situations. Just as humans do not rely solely on vision but combine hearing and other senses when recognizing gestures, many studies have used multimodal information to improve gesture recognition performance. This paper introduces a network that uses spatial and temporal attention maps to improve multimodal gesture recognition. The proposed network is evaluated on the ChaLearn gesture dataset, and the results show improved multimodal gesture recognition performance. | - |
dc.language | Korean | - |
dc.publisher | Institute of Control, Robotics and Systems (ICROS) | - |
dc.title | Multimodal Gesture Recognition Using Spatio-Temporal Attention Maps | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 186 | - |
dc.citation.endingpage | 187 | - |
dc.citation.publicationname | The 35th Institute of Control, Robotics and Systems Annual Conference (ICROS 2020) | - |
dc.identifier.conferencecountry | KO | - |
dc.identifier.conferencelocation | Delpino Resort, Sokcho | - |
dc.contributor.localauthor | 명현 | - |
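
The abstract describes weighting multimodal video features with spatial and temporal attention maps. As a rough illustration of that general idea only (not the paper's actual architecture, whose layers and parameters are not given here), the following NumPy sketch computes a per-location attention map within each frame and a per-frame attention weight across time, then pools a clip-level feature; the logit computations are placeholder assumptions.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(features):
    """Pool (T, H, W, C) video features with spatial then temporal attention.

    The mean-over-channels logits below are placeholders; a real model
    would learn these scores with trainable layers.
    """
    T, H, W, C = features.shape
    # Spatial attention: one weight per location, normalized within each frame.
    spatial_logits = features.mean(axis=-1)                              # (T, H, W)
    spatial_map = softmax(spatial_logits.reshape(T, -1), axis=1).reshape(T, H, W)
    pooled = (features * spatial_map[..., None]).sum(axis=(1, 2))        # (T, C)
    # Temporal attention: one weight per frame, normalized across the clip.
    temporal_map = softmax(pooled.mean(axis=-1), axis=0)                 # (T,)
    clip_feature = (pooled * temporal_map[:, None]).sum(axis=0)          # (C,)
    return clip_feature, spatial_map, temporal_map
```

In a multimodal setting, a clip feature produced this way for each modality (e.g. RGB and depth) could then be fused before classification; the fusion step is omitted here.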