Zero-shot dense video captioning by jointly optimizing text and moment (문장과 시점의 동시 최적화를 통한 제로샷 고밀도 캡션 생성)

DC Field | Value | Language
dc.contributor.advisor | 서민준 | -
dc.contributor.author | Jo, Yongrae | -
dc.contributor.author | 조용래 | -
dc.date.accessioned | 2024-07-25T19:30:46Z | -
dc.date.available | 2024-07-25T19:30:46Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045730&flag=dissertation | en_US
dc.identifier.uri | http://hdl.handle.net/10203/320542 | -
dc.description | Master's thesis - KAIST : Kim Jaechul Graduate School of AI, 2023.8, [iv, 26 p.] | -
dc.description.abstract | Dense video captioning, a task of localizing meaningful moments and generating relevant captions for videos, often requires a large, expensive corpus of annotated video segments paired with text. In an effort to minimize the annotation cost, we propose ZeroTA, a novel method for dense video captioning in a zero-shot manner. Our method does not require any videos or annotations for training; instead, it localizes and describes events within each input video at test time by optimizing solely on the input. This is accomplished by introducing a soft moment mask that represents a temporal segment in the video and jointly optimizing it with the prefix parameters of a language model. This joint optimization aligns a frozen language generation model (i.e., GPT-2) with a frozen vision-language contrastive model (i.e., CLIP) by maximizing the matching score between the generated text and a moment within the video. We also introduce a pairwise temporal IoU loss to let a set of soft moment masks capture multiple distinct events within the video. Our method effectively discovers diverse significant events within the video, with the resulting captions appropriately describing these events. The empirical results demonstrate that ZeroTA surpasses zero-shot baselines and even outperforms the state-of-the-art few-shot method on the widely-used benchmark ActivityNet Captions. Moreover, our method shows greater robustness compared to supervised methods when evaluated in out-of-domain scenarios. This research provides insight into the potential of aligning widely-used models, such as language generation models and vision-language models, to unlock a new capability: understanding temporal aspects of videos. | -
dc.language | eng | -
dc.publisher | 한국과학기술원 | -
dc.subject | 고밀도 비디오 캡션 생성; 제로샷; 멀티 모달; 언어 생성 모델; 비전 언어 모델 | -
dc.subject | Dense video captioning; Zero-shot; Multi-modal; Language generation models; Vision-language models | -
dc.title | Zero-shot dense video captioning by jointly optimizing text and moment | -
dc.title.alternative | 문장과 시점의 동시 최적화를 통한 제로샷 고밀도 캡션 생성 | -
dc.type | Thesis (Master) | -
dc.identifier.CNRN | 325007 | -
dc.description.department | 한국과학기술원 : 김재철AI대학원 | -
dc.contributor.alternativeauthor | Seo, Minjoon | -
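The abstract describes two optimization ingredients: a soft moment mask representing a temporal segment of the video, and a pairwise temporal IoU loss that pushes a set of such masks toward distinct events. A minimal, self-contained sketch of how these could look; the parameterization by a normalized center/width pair and the sigmoid sharpness value are illustrative assumptions, not details taken from the thesis:

```python
import math

def soft_moment_mask(center, width, num_frames, sharpness=0.02):
    """Soft (differentiable) mask over frame indices.

    Assumed parameterization: the product of two sigmoids rises near
    (center - width/2) and falls near (center + width/2), with `center`
    and `width` given in normalized [0, 1] time.
    """
    mask = []
    for t in range(num_frames):
        x = t / (num_frames - 1)  # normalized frame position in [0, 1]
        left = 1.0 / (1.0 + math.exp(-(x - (center - width / 2)) / sharpness))
        right = 1.0 / (1.0 + math.exp(-((center + width / 2) - x) / sharpness))
        mask.append(left * right)
    return mask

def pairwise_tiou(masks):
    """Mean soft temporal IoU over all pairs of masks.

    Soft intersection/union use elementwise min/max; minimizing this
    quantity discourages masks from covering the same moment.
    """
    total, pairs = 0.0, 0
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            inter = sum(min(a, b) for a, b in zip(masks[i], masks[j]))
            union = sum(max(a, b) for a, b in zip(masks[i], masks[j]))
            total += inter / union if union > 0 else 0.0
            pairs += 1
    return total / pairs if pairs else 0.0
```

In the actual method, each mask would weight the per-frame CLIP features used to score a generated caption, so that the mask parameters and the language model's prefix parameters can be optimized jointly at test time; the sketch above only shows the mask and the overlap penalty.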
Appears in Collection
AI-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
