Network fusion based video summarization using visual-semantic features

This paper proposes a video summarization method based on network fusion. The goal of this method is to create a meaningful video summary consisting of representative scenes, without duplication, by using visual and semantic features. To achieve this goal, the final summary is generated by considering both the visual and the semantic similarity among shots. More specifically, our method uses convolutional neural networks (CNNs) to extract visual and semantic features. Visual features are the image features taken from the top layer of the CNNs. Semantic features are the word vectors produced by the Word2Vec descriptor. A number of key frames are generated by shot segmentation and used as input to the CNNs. A visual network and a semantic network are then constructed by computing the cosine similarity among the visual or semantic features, respectively. After the similarities are computed, the two networks are combined into a single fused network by the network fusion process. An optimal video summary is then computed by applying spectral clustering to the fused network. The performance of this method is evaluated on two datasets, and the results show that it outperforms state-of-the-art video summarization methods.
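The abstract's pipeline (cosine-similarity networks over CNN and Word2Vec features, fusion, then spectral clustering of shots) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the feature matrices are random stand-ins for real CNN/Word2Vec outputs, element-wise averaging is used as a simple stand-in for the network-fusion step, and the spectral clustering uses a basic normalized-Laplacian embedding with a tiny k-means.

```python
import numpy as np

def cosine_similarity_matrix(X):
    """Pairwise cosine similarity between rows of X (shots x features)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)
    return Xn @ Xn.T

def fuse_networks(W_visual, W_semantic):
    """Element-wise average: a simple stand-in for the fusion process."""
    return 0.5 * (W_visual + W_semantic)

def spectral_clusters(W, k, seed=0):
    """Cluster shots via the normalized graph Laplacian of W."""
    W = np.maximum(W, 0.0)               # keep edge weights nonnegative
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.clip(d, 1e-12, None)))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                      # k smallest eigenvectors
    U = U / np.clip(np.linalg.norm(U, axis=1, keepdims=True), 1e-12, None)
    # tiny k-means on the spectral embedding
    rng = np.random.default_rng(seed)
    centers = U[rng.choice(len(U), k, replace=False)]
    for _ in range(50):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([U[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# Toy example: 6 shots with random visual / semantic features
# (in the paper these would be top-layer CNN features and Word2Vec vectors).
rng = np.random.default_rng(42)
visual = rng.standard_normal((6, 128))
semantic = rng.standard_normal((6, 300))
W = fuse_networks(cosine_similarity_matrix(visual),
                  cosine_similarity_matrix(semantic))
labels = spectral_clusters(W, k=2)
print(labels)  # one cluster label per shot
```

A summary would then be assembled by picking one representative key frame per cluster, which yields non-duplicated representative scenes.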
Advisors
Yoo, Chang Dong
Description
Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2018
Identifier
325007
Language
eng
Description

Master's thesis - Korea Advanced Institute of Science and Technology (KAIST): School of Electrical Engineering, 2018.2, [iii, 26 p.]

Keywords

Video summarization; Network fusion; Convolutional neural networks; Word2Vec; Spectral clustering

URI
http://hdl.handle.net/10203/266853
Link
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=734007&flag=dissertation
Appears in Collection
EE-Theses_Master (Master's theses)
Files in This Item
There are no files associated with this item.
