Exploring the effects of non-local blocks on video captioning networks

Abstract
In addition to visual features, videos also contain temporal information that contributes to semantic meaning regarding the relationships between objects and scenes. There have been many attempts to describe spatial and temporal relationships in video, but simple encoder-decoder models are not sufficient for capturing long-range relationships in video clips because of the limitations of the local operations in recurrent models. In other fields, including visual question answering (VQA) and action recognition, researchers have begun to take an interest in describing visual relations between objects. In this paper, we introduce a video captioning method that captures temporal long-range dependencies with a non-local block. The proposed model utilizes both local and non-local features. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, YouTube2Text) dataset and the Microsoft Research-Video to Text (MSR-VTT) dataset. The experimental results show that a non-local block applied along the temporal axis can compensate for the long-range dependency problem of the LSTM on video captioning datasets.
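The abstract describes applying a non-local block along the temporal axis of frame features. The thesis itself is not available here, so the following is only a minimal NumPy sketch of the standard embedded-Gaussian non-local operation (Wang et al.'s formulation) over time steps; all weight matrices and dimensions are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_non_local_block(x, rng=None):
    """Embedded-Gaussian non-local block over the temporal axis (sketch).

    x: (T, C) array of per-frame features; returns (T, C).
    Every time step attends to every other time step, so the output
    mixes information across the whole clip rather than only
    neighbouring frames, which is what lets it capture long-range
    dependencies that a recurrent model's local operations miss.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, C = x.shape
    d = C // 2  # reduced embedding dimension, as in the original non-local design
    # Random projection weights stand in for learned parameters.
    W_theta = rng.standard_normal((C, d)) / np.sqrt(C)
    W_phi   = rng.standard_normal((C, d)) / np.sqrt(C)
    W_g     = rng.standard_normal((C, d)) / np.sqrt(C)
    W_out   = rng.standard_normal((d, C)) / np.sqrt(d)

    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g
    attn = softmax(theta @ phi.T)   # (T, T) pairwise frame affinities
    y = attn @ g                    # aggregate features over all frames
    return x + y @ W_out            # residual connection preserves local features

# Toy clip: 16 frames, each a 512-dimensional feature vector.
frames = np.random.default_rng(1).standard_normal((16, 512))
out = temporal_non_local_block(frames)
print(out.shape)
```

The residual connection is why the model can "utilize both local and non-local features": the block adds globally aggregated context on top of the per-frame representation instead of replacing it.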
Advisors
Kim, Junmo (김준모)
Description
Korea Advanced Institute of Science and Technology (KAIST): Interdisciplinary Program in Robotics
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2019
Identifier
325007
Language
eng
Description

Thesis (Master's) - Korea Advanced Institute of Science and Technology: Interdisciplinary Program in Robotics, 2019.2, [iii, 26 p.]

Keywords

Video captioning; long short-term memory; non-local block; long-range dependency problem

URI
http://hdl.handle.net/10203/266003
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=843066&flag=dissertation
Appears in Collection
RE-Theses_Master (Master's Theses)
Files in This Item
There are no files associated with this item.
