DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Yang, Hyun Seung | - |
dc.contributor.advisor | 양현승 | - |
dc.contributor.author | Park, Gwang Been | - |
dc.date.accessioned | 2018-06-20T06:24:15Z | - |
dc.date.available | 2018-06-20T06:24:15Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718719&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/243445 | - |
dc.description | Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Computing, 2017.8, [iii, 16 p.] | - |
dc.description.abstract | We present a novel method for image-text multi-modal representation learning. To our knowledge, this work is the first to apply the adversarial learning concept to multi-modal learning, and the first to learn a multi-modal feature without exploiting image-text pair information. In contrast with most previous methods, which rely on image-text pairs for multi-modal embedding, we use only category information. We show that a multi-modal feature can be obtained without pair information, and that our method produces more closely aligned image and text distributions in the multi-modal feature space than methods that do use pair information. We also show that our multi-modal feature carries universal semantic information, even though it was trained only for category prediction. Our model is trained end-to-end by backpropagation, is intuitive, and is easily extended to other multi-modal learning tasks. | - |
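The abstract describes aligning image and text embeddings adversarially, using only category/modality supervision rather than image-text pairs. A minimal sketch of that adversarial-backpropagation idea, via a gradient reversal layer applied to a modality discriminator: the toy data, linear encoders, and all names here are illustrative assumptions, not the thesis implementation (which also uses a category-prediction loss, omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # numerically-safe logistic
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def grad_reverse(grad, lam=1.0):
    # gradient reversal layer: identity in the forward pass,
    # multiplies the gradient by -lam in the backward pass
    return -lam * grad

# Toy "image" and "text" features drawn from shifted distributions
img = rng.normal(loc=+1.0, size=(64, 8))
txt = rng.normal(loc=-1.0, size=(64, 8))

# Linear encoders into a shared 4-d embedding space (illustrative only)
W_img = rng.normal(scale=0.1, size=(8, 4))
W_txt = rng.normal(scale=0.1, size=(8, 4))
w_d = rng.normal(scale=0.1, size=4)   # linear modality discriminator

y = np.concatenate([np.ones(64), np.zeros(64)])  # 1 = image, 0 = text
lr = 0.05
for _ in range(300):
    z = np.vstack([img @ W_img, txt @ W_txt])    # shared embeddings
    p = sigmoid(z @ w_d)
    g_logit = (p - y) / len(y)                   # d(cross-entropy)/d(logit)
    w_d -= lr * (z.T @ g_logit)                  # discriminator descends
    # the reversed gradient reaches the encoders, pushing them to make
    # image and text embeddings indistinguishable to the discriminator
    g_z = grad_reverse(np.outer(g_logit, w_d))
    W_img -= lr * (img.T @ g_z[:64])
    W_txt -= lr * (txt.T @ g_z[64:])

# modality-classification accuracy of the trained discriminator;
# ideally it ends near chance, i.e. the modalities are aligned
acc = ((sigmoid(np.vstack([img @ W_img, txt @ W_txt]) @ w_d) > 0.5) == y).mean()
```

Because the discriminator and encoders receive gradients of opposite sign from the same loss, a single backward pass trains both players: no alternating optimization is needed, which matches the abstract's claim of end-to-end backpropagation.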
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | Multi-Modal Representation; Generative Adversarial Network; Domain Adaptation; Adversarial Learning | - |
dc.subject | Multi-modal representation; Generative adversarial network; Domain adaptation; Adversarial learning | - |
dc.title | Image-text multi-modal representation learning by adversarial backpropagation | - |
dc.title.alternative | 적대적 역전파에 의한 영상-문장 멀티모달 표현 학습 | - |
dc.type | Thesis(Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST): School of Computing | - |
dc.contributor.alternativeauthor | 박광빈 | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.