CAPTURING LONG-RANGE DEPENDENCIES IN VIDEO CAPTIONING

Cited 3 times in Web of Science; cited 0 times in Scopus
Most video captioning networks rely on recurrent models, including long short-term memory (LSTM). However, these recurrent models suffer from a long-range dependency problem and are therefore not sufficient for video encoding. To overcome this limitation, several studies have investigated the relationships between objects or entities and shown excellent performance in video classification and video captioning. In this study, we analyze a video captioning network with a non-local block in terms of temporal capacity. We introduce a video captioning method that captures long-range temporal dependencies with a non-local block. The proposed model uses local and non-local features independently. We evaluate our approach on the Microsoft Video Description (MSVD, YouTube2Text) dataset. The experimental results show that a non-local block applied along the temporal axis can solve the long-range dependency problem of the LSTM on video captioning datasets.
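A non-local block of the kind the abstract describes aggregates information across all time steps at once, rather than step by step as an LSTM does. The following is a minimal NumPy sketch of an embedded-Gaussian non-local operation applied along the temporal axis; the random projection matrices stand in for learned weights, and the function name and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def non_local_temporal(x, rng=None):
    """Embedded-Gaussian non-local block over the temporal axis.

    x: (T, C) array of T frame features with C channels.
    The projection matrices are random stand-ins for learned weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    T, C = x.shape
    d = C // 2  # channels are often halved inside the block
    W_theta = rng.standard_normal((C, d)) / np.sqrt(C)
    W_phi = rng.standard_normal((C, d)) / np.sqrt(C)
    W_g = rng.standard_normal((C, d)) / np.sqrt(C)
    W_z = rng.standard_normal((d, C)) / np.sqrt(d)

    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g
    # pairwise affinities between every pair of time steps: (T, T)
    attn = theta @ phi.T
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over positions j
    y = attn @ g          # each output mixes features from ALL time steps
    return x + y @ W_z    # residual connection keeps the local features
```

Because every output position attends to every input position, the path length between any two frames is one, which is how such a block sidesteps the LSTM's long-range dependency problem.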
Publisher
IEEE Signal Processing Society
Issue Date
2019-09-24
Language
English
Citation

The 26th IEEE International Conference on Image Processing, pp. 1880-1884

DOI
 10.1109/ICIP.2019.8803143
URI
http://hdl.handle.net/10203/269291
Appears in Collection
EE-Conference Papers