D3: Recognizing dynamic scenes with deep dual descriptor based on key frames and key segments

Cited 1 time in Web of Science; cited 0 times in Scopus
Dynamic scene recognition is the challenging problem of recognizing a collection of static appearances and dynamic patterns in moving scenes. While existing methods focus on reliably capturing static patterns, few works have explored frame selection from a dynamic scene sequence or temporal modeling. In this paper, we propose dynamic scene recognition using a deep dual descriptor based on "key frames" and "key segments". A small number of key frames that reflect the feature distribution of the sequence are used to capture salient static appearances. Key segments, extracted from the area around each key frame, provide additional discriminative power through dynamic patterns. To this end, two types of transferred convolutional neural network features are used in our approach: a fully connected layer is used to select the key frames and key segments, while a convolutional layer is used to describe them. Evaluation results on several public datasets demonstrate the state-of-the-art performance of the proposed method. (C) 2017 Elsevier B.V. All rights reserved.
Publisher
ELSEVIER SCIENCE BV
Issue Date
2018-01
Language
English
Article Type
Article
Keywords

TEXTURE CLASSIFICATION; RECOGNITION; FEATURES; REPRESENTATION

Citation

NEUROCOMPUTING, v.273, pp.611 - 621

ISSN
0925-2312
DOI
10.1016/j.neucom.2017.08.046
URI
http://hdl.handle.net/10203/227174
Appears in Collection
CS-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.
