Multiple Videos Captioning Model for Video Storytelling

Cited 2 times in Web of Science; cited 2 times in Scopus
In this paper, we propose a novel video captioning model that utilizes the context information of correlated clips. Unlike ordinary "one clip - one caption" algorithms, we concatenate multiple neighboring clips into a chunk and train the network in a "one chunk - multiple captions" manner. We train and evaluate our algorithm on the M-VAD dataset and report the performance of caption and keyword generation. Our model is a foundation for generating a video story from several captions. Accordingly, this paper focuses on caption generation for several videos and on trend analysis of the generated captions. In the experiments, we show the intermediate results of our model in both qualitative and quantitative terms.
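The abstract describes grouping neighboring clips into chunks, each paired with the captions of all clips it contains. The paper does not provide code, so the following is only a minimal sketch of that data-grouping step under stated assumptions: the function name `chunk_clips`, the parameter `chunk_size`, and the frame-level feature shapes are all illustrative, not the authors' implementation.

```python
# Minimal sketch of the "one chunk - multiple captions" grouping described
# in the abstract. All names and shapes are illustrative assumptions.
import numpy as np

def chunk_clips(clip_features, captions, chunk_size=3):
    """Concatenate `chunk_size` neighboring clips into one chunk and pair
    the chunk with the captions of every clip it contains."""
    chunks = []
    for start in range(len(clip_features) - chunk_size + 1):
        # Concatenate frame-level features of neighboring clips along time.
        feats = np.concatenate(clip_features[start:start + chunk_size], axis=0)
        caps = captions[start:start + chunk_size]
        chunks.append((feats, caps))  # one chunk -> multiple captions
    return chunks

# Toy example: 5 clips, each with 10 frame-level feature vectors of dim 4.
clips = [np.random.rand(10, 4) for _ in range(5)]
caps = [f"caption {i}" for i in range(5)]
for feats, chunk_caps in chunk_clips(clips, caps):
    print(feats.shape, chunk_caps)
```

With a sliding window like this, each training example exposes the captioner to the context of adjacent clips, which is the stated motivation for moving beyond the "one clip - one caption" setup.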
Publisher
IEEE
Issue Date
2019-03-02
Language
English
Citation

The 6th IEEE International Conference on Big Data and Smart Computing (BigComp 2019), pp. 355-358

ISSN
2375-933X
DOI
10.1109/BIGCOMP.2019.8679213
URI
http://hdl.handle.net/10203/274726
Appears in Collection
CS-Conference Papers (Conference Papers)
Files in This Item
There are no files associated with this item.