Substitutive Skeleton Fusion for Human Action Recognition

Advancement of RGB-D cameras capable of tracking human body movement in the form of a skeleton has contributed to growing interest in skeleton-based human action recognition. However, the tracking performance of a single camera is prone to occlusion and is view dependent. In this study, we use fused skeletal data obtained from two views to recognize human actions. We perform a substitutive fusion based on joint tracking status and build a view-invariant action recognition system. The resulting fused skeletal data are transformed into a histogram of cubes as a frame-level feature. Clustering is applied to build a dictionary of frame representatives, and actions are encoded as sequences of frame representatives. Finally, recognition is performed as a sequence-matching task using Dynamic Time Warping with a K-nearest-neighbor classifier. Experimental results show that the fused skeletal data consistently give better recognition performance than their single-view counterparts.
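The substitutive fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes each skeleton is a list of per-joint (x, y, z, tracking_status) tuples and uses the Kinect-style tracking codes (2 = tracked, 1 = inferred, 0 = not tracked); the paper's exact substitution rule may differ.

```python
# Hypothetical sketch of substitutive skeleton fusion from two views:
# keep the primary view's joint when it is reliably tracked, and
# substitute the secondary view's estimate when the primary joint is
# occluded or only inferred but the secondary view tracks it.

TRACKED = 2  # Kinect-style tracking status code (assumption)

def fuse_skeletons(primary, secondary):
    """Fuse two per-frame skeletons, each a list of
    (x, y, z, tracking_status) tuples with one entry per joint."""
    fused = []
    for p_joint, s_joint in zip(primary, secondary):
        if p_joint[3] == TRACKED or s_joint[3] != TRACKED:
            fused.append(p_joint)   # keep the primary view's joint
        else:
            fused.append(s_joint)   # substitute from the secondary view
    return fused
```

For example, a joint occluded in the primary view (status 0) but tracked in the secondary view would be replaced by the secondary view's coordinates, while joints tracked in the primary view are passed through unchanged.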
Publisher
Korean Institute of Information Scientists and Engineers
Issue Date
2015-02-10
Language
English
Citation

The 2nd International Conference on Big Data and Smart Computing (BigComp2015)

URI
http://hdl.handle.net/10203/198910
Appears in Collection
CS-Conference Papers (Conference Papers)