DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | 서민준 | - |
dc.contributor.author | Jo, Yongrae | - |
dc.contributor.author | 조용래 | - |
dc.date.accessioned | 2024-07-25T19:30:46Z | - |
dc.date.available | 2024-07-25T19:30:46Z | - |
dc.date.issued | 2023 | - |
dc.identifier.uri | http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1045730&flag=dissertation | en_US |
dc.identifier.uri | http://hdl.handle.net/10203/320542 | - |
dc.description | Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST) : Kim Jaechul Graduate School of AI, 2023.8, [iv, 26 p.] | - |
dc.description.abstract | Dense video captioning, the task of localizing meaningful moments in a video and generating relevant captions for them, typically requires a large, expensive corpus of annotated video segments paired with text. To minimize this annotation cost, we propose ZeroTA, a novel method for zero-shot dense video captioning. Our method requires no videos or annotations for training; instead, it localizes and describes events within each input video at test time by optimizing solely on that input. | - |
dc.description.abstract | This is accomplished by introducing a soft moment mask that represents a temporal segment in the video and jointly optimizing it with the prefix parameters of a language model. This joint optimization aligns a frozen language generation model (i.e., GPT-2) with a frozen vision-language contrastive model (i.e., CLIP) by maximizing the matching score between the generated text and a moment within the video. We also introduce a pairwise temporal IoU loss so that a set of soft moment masks captures multiple distinct events within the video. Our method effectively discovers diverse significant events in the video, with the resulting captions appropriately describing them. Empirical results demonstrate that ZeroTA surpasses zero-shot baselines and even outperforms the state-of-the-art few-shot method on the widely used ActivityNet Captions benchmark. Moreover, our method shows greater robustness than supervised methods when evaluated in out-of-domain scenarios. This research provides insight into the potential of aligning widely used models, such as language generation models and vision-language models, to unlock a new capability: understanding the temporal aspects of videos. | - |
dc.language | eng | - |
dc.publisher | Korea Advanced Institute of Science and Technology (KAIST) | - |
dc.subject | 고밀도 비디오 캡션 생성; 제로샷; 멀티 모달; 언어 생성 모델; 비전 언어 모델 | - |
dc.subject | Dense video captioning; Zero-shot; Multi-modal; Language generation models; Vision-language models | - |
dc.title | Zero-shot dense video captioning by jointly optimizing text and moment | - |
dc.title.alternative | 문장과 시점의 동시 최적화를 통한 제로샷 고밀도 캡션 생성 | - |
dc.type | Thesis (Master) | - |
dc.identifier.CNRN | 325007 | - |
dc.description.department | Korea Advanced Institute of Science and Technology (KAIST) : Kim Jaechul Graduate School of AI | - |
dc.contributor.alternativeadvisor | Seo, Minjoon | - |
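The abstract describes two components that lend themselves to a brief illustration: a differentiable soft moment mask over video frames, and a pairwise temporal IoU loss that pushes multiple masks toward distinct events. The following is a minimal NumPy sketch under the assumption of a Gaussian-shaped mask parameterized by a center and width; the thesis's exact mask form, parameter names, and loss weighting may differ.

```python
import numpy as np

def soft_moment_mask(num_frames, center, width, sharpness=10.0):
    """Soft, differentiable mask over frame positions in [0, 1].

    `center` and `width` are fractions of the video length. The mask is
    near 1 around `center` and decays smoothly outside the moment, so it
    can be optimized with gradients (illustrative parameterization only).
    """
    t = np.linspace(0.0, 1.0, num_frames)
    return np.exp(-sharpness * ((t - center) / max(width, 1e-6)) ** 2)

def pairwise_tiou_loss(masks):
    """Mean soft temporal IoU over all pairs of masks.

    Soft IoU of two masks is sum(min) / sum(max); minimizing the mean
    over pairs encourages the masks to cover distinct, non-overlapping
    moments in the video.
    """
    n = len(masks)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.minimum(masks[i], masks[j]).sum()
            union = np.maximum(masks[i], masks[j]).sum()
            total += inter / (union + 1e-8)
            pairs += 1
    return total / max(pairs, 1)
```

In the full method this loss would be combined with the CLIP matching objective while jointly optimizing the mask parameters and the language-model prefix; here, two masks centered at different times yield a loss near 0, while identical masks yield a loss near 1.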