DynaLAP: Human Activity Recognition in Fixed Protocols via Semi-Supervised Variational Recurrent Neural Networks With Dynamic Priors

Cited 1 time in Web of Science; cited 0 times in Scopus
DC Field | Value | Language
dc.contributor.author | An, Sungtae | ko
dc.contributor.author | Gazi, Asim H. | ko
dc.contributor.author | Inan, Omer T. | ko
dc.date.accessioned | 2022-11-28T08:01:21Z | -
dc.date.available | 2022-11-28T08:01:21Z | -
dc.date.created | 2022-11-28 | -
dc.date.issued | 2022-09 | -
dc.identifier.citation | IEEE SENSORS JOURNAL, v.22, no.18, pp.17963 - 17976 | -
dc.identifier.issn | 1530-437X | -
dc.identifier.uri | http://hdl.handle.net/10203/301161 | -
dc.description.abstract | Learning the route and order of tasks can be critical to human activity recognition (HAR) for fixed protocols of movement. In this article, we propose a novel framework, DynaLAP, a semi-supervised variational recurrent neural network (VRNN) with a dynamic prior distribution, to perform activity recognition in fixed protocols. DynaLAP takes data from a single tri-axial accelerometer as input and causally classifies activity one 10-30-s window at a time. DynaLAP learns not only a window-specific short-term state but also a long-term dynamic state that is updated iteratively throughout the protocol's measurements. Additionally, instead of using a stationary prior distribution over activity classes, DynaLAP learns a dynamic prior that updates for each window. DynaLAP thereby learns protocol-specific dynamics when trained on data from subjects abiding by a fixed protocol. Two datasets from previously published literature were used to evaluate DynaLAP: the fully labeled MotionSense dataset of 24 subjects and a weakly labeled dataset of 17 subjects collected at the Georgia Institute of Technology. For each dataset, we varied the number of training labels used, from a single subject's data to the entire dataset. DynaLAP outperformed previous supervised and semi-supervised HAR approaches by 6-42 percentage points, with F1 scores that remained above 80%. These results suggest that DynaLAP can achieve state-of-the-art HAR performance in fixed protocols by learning protocol-specific dynamics, especially in weakly and scarcely labeled settings. DynaLAP could ultimately reduce the need for labor-intensive annotation efforts in HAR applications involving routine activities (e.g., military training). | -
dc.language | English | -
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | -
dc.title | DynaLAP: Human Activity Recognition in Fixed Protocols via Semi-Supervised Variational Recurrent Neural Networks With Dynamic Priors | -
dc.type | Article | -
dc.identifier.wosid | 000880106500070 | -
dc.identifier.scopusid | 2-s2.0-85135762329 | -
dc.type.rims | ART | -
dc.citation.volume | 22 | -
dc.citation.issue | 18 | -
dc.citation.beginningpage | 17963 | -
dc.citation.endingpage | 17976 | -
dc.citation.publicationname | IEEE SENSORS JOURNAL | -
dc.identifier.doi | 10.1109/JSEN.2022.3194677 | -
dc.contributor.localauthor | An, Sungtae | -
dc.contributor.nonIdAuthor | Gazi, Asim H. | -
dc.contributor.nonIdAuthor | Inan, Omer T. | -
dc.description.isOpenAccess | N | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Activity recognition | -
dc.subject.keywordAuthor | deep learning | -
dc.subject.keywordAuthor | semi-supervised learning | -
dc.subject.keywordAuthor | variational recurrent neural networks (VRNNs) | -
dc.subject.keywordPlus | ACCELEROMETER | -
dc.subject.keywordPlus | SENSORS | -
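
To make the architecture described in the abstract more concrete, the sketch below illustrates one way a VRNN step with a dynamic, window-wise prior over activity classes could be structured. It is a minimal sketch under assumed design choices: the use of PyTorch, the module name DynamicPriorVRNNCell, a GRU window encoder, a Gumbel-Softmax relaxation for the class variable, and all layer sizes are assumptions for exposition, not the authors' implementation (see the article at the DOI above).

# Illustrative sketch only: a VRNN-style step with a state-conditioned (dynamic)
# prior over activity classes, loosely following the description in the abstract.
# Names, sizes, and the exact factorization are assumptions, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicPriorVRNNCell(nn.Module):
    """One step over a 10-30 s accelerometry window x_t.

    h_t : long-term dynamic state, updated iteratively across the protocol.
    z_t : window-specific short-term latent state.
    y_t : activity class, with a prior p(y_t | h_{t-1}) that changes per window
          instead of a stationary class prior.
    """

    def __init__(self, x_dim, z_dim, n_classes, h_dim=64):
        super().__init__()
        self.encoder = nn.GRU(x_dim, h_dim, batch_first=True)     # summarizes one window
        self.prior_y = nn.Linear(h_dim, n_classes)                 # dynamic prior p(y_t | h_{t-1})
        self.post_y = nn.Linear(2 * h_dim, n_classes)              # q(y_t | x_t, h_{t-1})
        self.post_z = nn.Linear(2 * h_dim + n_classes, 2 * z_dim)  # q(z_t | x_t, y_t, h_{t-1})
        self.rnn = nn.GRUCell(h_dim + z_dim + n_classes, h_dim)    # long-term state update

    def forward(self, x_window, h_prev):
        # x_window: (batch, samples_in_window, x_dim) tri-axial accelerometry
        _, enc = self.encoder(x_window)                  # (1, batch, h_dim)
        enc = enc.squeeze(0)

        prior_logits = self.prior_y(h_prev)              # window-specific class prior
        post_logits = self.post_y(torch.cat([enc, h_prev], dim=-1))
        y_soft = F.gumbel_softmax(post_logits, tau=0.5)  # relaxed sample of the class

        stats = self.post_z(torch.cat([enc, h_prev, y_soft], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterized z_t

        h_next = self.rnn(torch.cat([enc, z, y_soft], dim=-1), h_prev)
        return post_logits, prior_logits, (mu, logvar), h_next

In a semi-supervised setup along these lines, labeled windows would contribute a classification loss on post_logits, while unlabeled windows would contribute an evidence lower bound whose class-related KL term compares q(y_t | x_t, h_{t-1}) against the window-specific prior p(y_t | h_{t-1}) rather than a fixed class distribution; those training details are omitted here.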
Appears in Collection: RIMS Journal Papers
Files in This Item: There are no files associated with this item.