HBoP: Hierarchical bag of phrases

DC Field: Value
dc.contributor.advisor: 제임스 손 (James Thorne)
dc.contributor.advisor: James, Thorne
dc.contributor.advisor: 정송 (Chong, Song)
dc.contributor.author: Waheed, Sania
dc.contributor.author: Sania Waheed
dc.date.accessioned: 2024-07-30T19:30:39Z
dc.date.available: 2024-07-30T19:30:39Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1096067&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/321362
dc.description: Thesis (Master's) - Korea Advanced Institute of Science and Technology (KAIST): Kim Jaechul Graduate School of AI, 2024.2, [iii, 17 p.]
dc.description.abstract: Vision-Language Models (VLMs) play a crucial role in bridging the gap between understanding visual and linguistic data jointly. However, current models require extensive pre-training and fine-tuning, which often makes them difficult to employ for downstream tasks. To address this limitation, large language models (LLMs) were introduced as an alternative to fine-tuning VLMs because of their zero-shot applicability to downstream tasks; however, effectively using LLMs for vision-language tasks demands comprehensive textual representations of the visual data in the form of captions. Unfortunately, the captions generated by current VLMs are repetitive and do not provide a detailed understanding of the data. To close this gap, we propose a novel framework, Hierarchical Bag of Phrases (HBoP), which connects visual and textual data by generating a comprehensive description of all pertinent information in an image. The proposed framework not only enables the use of LLMs in multi-modal tasks but also produces image-patch/text pairs that could be useful for training vision-language models toward better image representations. To evaluate its performance, we compare HBoP against state-of-the-art VLMs in terms of semantic integrity, image-text retrieval, and the diversity of generated captions. Our results show a diversity score close to that of human-generated captions and a substantial improvement on text-retrieval tasks, demonstrating the effectiveness of the HBoP framework.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 멀티 모달 작업 (Multi-modal tasks), 이미지 이해 (Image understanding), 시각적 이해 (Visual understanding), 정보 추출 (Information extraction), 이미지-텍스트 변환 (Image-to-text transformation)
dc.subject: Multi-modal tasks, Image understanding, Visual understanding, Information extraction, Image-to-text transformation
dc.title: HBoP: Hierarchical bag of phrases
dc.title.alternative: 계층적 구문의 모음 (Hierarchical bag of phrases)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): Kim Jaechul Graduate School of AI
dc.contributor.alternativeauthor: Chong, Song
Appears in Collection: AI-Theses_Master (Master's theses)
Files in This Item: There are no files associated with this item.