Improving Video Captioning with Non-Local Neural Networks

Unlike static images, a video contains not only visual features but also richer semantic meanings and relationships between objects and scenes owing to its temporal dimension. There have been many attempts to describe spatial and temporal relationships in videos, but encoder-decoder based models are not sufficient to capture detailed relationships. Specifically, a video clip often consists of several shots that appear unrelated, and simple recurrent models suffer from these shot changes. Recently, some studies have introduced approaches that describe visual relations via relational reasoning in visual question answering and action recognition tasks. In this paper, we introduce an approach that captures temporal relationships with a non-local block and a boundary-awareness system. We evaluate our approach on the Microsoft Video Description Corpus (MSVD, YouTube2Text). Experimental results show that a non-local block applied along the temporal axis improves video captioning performance on the MSVD dataset.
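The paper's implementation is not published with this record. As a minimal sketch, the PyTorch module below illustrates what a non-local block applied along the temporal axis (the technique the abstract names) could look like, using the embedded-Gaussian pairwise function from the original non-local neural networks formulation. The class name TemporalNonLocalBlock, the layer dimensions, and the zero-initialization detail are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalNonLocalBlock(nn.Module):
    """Illustrative non-local block (embedded-Gaussian form) over the
    temporal axis. Input: frame features of shape (batch, T, dim).
    Output: same shape; each time step is re-weighted by its similarity
    to every other time step, with a residual connection added."""

    def __init__(self, dim, inner_dim=None):
        super().__init__()
        inner_dim = inner_dim or dim // 2
        self.theta = nn.Linear(dim, inner_dim)  # query embedding
        self.phi = nn.Linear(dim, inner_dim)    # key embedding
        self.g = nn.Linear(dim, inner_dim)      # value embedding
        self.out = nn.Linear(inner_dim, dim)    # restore channel dim
        # Zero-init so the block starts as an identity mapping
        # (an assumption mirroring the trick in the non-local paper).
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, x):                       # x: (B, T, dim)
        q, k, v = self.theta(x), self.phi(x), self.g(x)
        # (B, T, T) pairwise frame-to-frame affinities, softmax-normalized
        attn = F.softmax(q @ k.transpose(1, 2), dim=-1)
        y = self.out(attn @ v)                   # aggregate over all frames
        return x + y                             # residual connection

For example, for a 30-frame clip encoded into 1024-dimensional features, block = TemporalNonLocalBlock(1024) followed by block(torch.randn(2, 30, 1024)) returns a tensor of the same shape; in a captioning pipeline such a block would sit between the frame encoder and the caption decoder.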
Publisher
Institute of Electrical and Electronics Engineers Inc.
Issue Date
2018-06
Language
English
Citation
2018 IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2018
DOI
10.1109/ICCE-ASIA.2018.8552140
URI
http://hdl.handle.net/10203/311582
Appears in Collection
EE-Conference Papers (Conference Papers)