Multiclass autoencoder-based active learning for sensor-based human activity recognition

Cited 1 time in Web of Science; cited 0 times in Scopus
  • Hits: 237
  • Downloads: 0
DC Field: Value (Language)
dc.contributor.author: Park, Hyunseo (ko)
dc.contributor.author: Lee, Gyeong Ho (ko)
dc.contributor.author: Han, Jaeseob (ko)
dc.contributor.author: Choi, Jun Kyun (ko)
dc.date.accessioned: 2023-11-14T02:00:15Z
dc.date.available: 2023-11-14T02:00:15Z
dc.date.created: 2023-11-14
dc.date.issued: 2024-02
dc.identifier.citation: FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, v.151, pp.71 - 84
dc.identifier.issn: 0167-739X
dc.identifier.uri: http://hdl.handle.net/10203/314548
dc.description.abstract: Leveraging the enormous amounts of real-world data collected through Internet of Things (IoT) technologies, human activity recognition (HAR) has become a crucial component of numerous human-centric applications that aim to enhance the quality of human life. While recent advancements in deep learning have significantly improved HAR, labeling data remains a significant challenge due to the substantial cost of human annotation for supervised model training. Active learning (AL) addresses this issue by strategically selecting informative samples for labeling during model training, thereby enhancing model performance. Although numerous sample-selection approaches have been proposed that consider uncertainty and representation, estimating uncertainty and exploiting the distribution of high-dimensional data remain major difficulties. Our proposed deep learning-based active learning algorithm, called Multiclass Autoencoder-based Active Learning (MAAL), learns a latent representation by leveraging the capacity of Deep Support Vector Data Description (Deep SVDD). With the multiclass autoencoder, which learns the normal characteristics of each activity class in the latent space, MAAL provides informative sample selection for model training by establishing a link between the HAR model and the selection model. We evaluate the proposed MAAL on two publicly available datasets. The results demonstrate improvements across the overall active learning rounds, with gains of up to 3.23% in accuracy and 3.67% in F1 score. Furthermore, numerical results and an analysis of sample selection are presented to validate the effectiveness of the proposed MAAL against the alternative comparison methods.
dc.language: English
dc.publisher: ELSEVIER
dc.title: Multiclass autoencoder-based active learning for sensor-based human activity recognition
dc.type: Article
dc.identifier.wosid: 001088488800001
dc.identifier.scopusid: 2-s2.0-85173266775
dc.type.rims: ART
dc.citation.volume: 151
dc.citation.beginningpage: 71
dc.citation.endingpage: 84
dc.citation.publicationname: FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE
dc.identifier.doi: 10.1016/j.future.2023.09.029
dc.contributor.localauthor: Choi, Jun Kyun
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Internet of Things
dc.subject.keywordAuthor: Active learning
dc.subject.keywordAuthor: Human activity recognition
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Multiclass autoencoder
dc.subject.keywordAuthor: Multivariate time series
dc.subject.keywordPlus: WEARABLE SENSOR
dc.subject.keywordPlus: UNCERTAINTY
dc.subject.keywordPlus: INTERNET
dc.subject.keywordPlus: THINGS
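The abstract describes selecting informative samples by how well each activity class's learned latent region explains them (via Deep SVDD). The paper's exact algorithm is not reproduced here; the following is a minimal, hypothetical sketch of a generic Deep SVDD-style selection criterion, where `select_informative`, the per-class latent centers, and the toy data are all assumptions for illustration only.

```python
import numpy as np

def select_informative(latent, centers, k):
    """Score each unlabeled latent vector by its distance to the nearest
    class center; samples far from every center are least well explained
    by the learned classes, so they are treated as most informative.
    This is an illustrative criterion, not the paper's exact method."""
    # Pairwise distances, shape (n_samples, n_classes).
    d = np.linalg.norm(latent[:, None, :] - centers[None, :, :], axis=2)
    scores = d.min(axis=1)                # distance to the closest class center
    return np.argsort(scores)[-k:][::-1]  # indices of the k most distant samples

# Toy example: two class centers in a 2-D latent space.
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
latent = np.array([
    [0.1, 0.0],   # near class 0 -> uninformative
    [5.0, 4.9],   # near class 1 -> uninformative
    [2.5, 2.5],   # far from both centers -> informative
])
picked = select_informative(latent, centers, k=1)
print(picked)  # → [2]
```

In an active learning round, the selected indices would be sent for human labeling and the model retrained; how the selection model is linked to the HAR model is specific to MAAL and not shown here.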
Appears in Collection
EE-Journal Papers (Journal Papers)
Files in This Item
There are no files associated with this item.