Cross-modal approach for conversational well-being monitoring with multi-sensory earables

Cited 4 times in Web of Science · Cited 5 times in Scopus
  • Hits : 219
  • Downloads : 0
We propose a cross-modal approach for conversational well-being monitoring with a multi-sensory earable. It consists of motion, audio, and BLE models running on the earable. Using the IMU sensor, the microphone, and BLE scanning, these models detect speaking activity, stress and emotion, and conversation participants, respectively. We discuss the feasibility of qualifying conversations with our purpose-built cross-modal model in an energy-efficient and privacy-preserving way. On top of the cross-modal model, we develop a mobile application that qualifies ongoing conversations and provides personalised feedback on social well-being.
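
For illustration, the following is a minimal Python sketch of how a cross-modal pipeline like the one described in the abstract could be structured: one lightweight model per modality (IMU, microphone, BLE), with their outputs fused into a single per-window snapshot. All class names, functions, and thresholds are hypothetical assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a cross-modal earable pipeline (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ConversationSnapshot:
    """Per-window output fused from the three earable modalities."""
    is_speaking: bool          # from the IMU (motion) model
    stress_level: float        # from the microphone (audio) model, in [0, 1]
    emotion: str               # coarse emotion label from the audio model
    participants: List[str]    # nearby earable IDs observed via BLE scanning


def motion_model(imu_window: List[float]) -> bool:
    """Detect speaking activity from motion energy (placeholder threshold)."""
    energy = sum(x * x for x in imu_window) / max(len(imu_window), 1)
    return energy > 0.05


def audio_model(mic_window: List[float]) -> Tuple[float, str]:
    """Estimate stress and a coarse emotion label from audio (illustrative)."""
    loudness = sum(abs(x) for x in mic_window) / max(len(mic_window), 1)
    stress = min(loudness * 2.0, 1.0)
    emotion = "tense" if stress > 0.6 else "neutral"
    return stress, emotion


def ble_model(scan_results: List[str]) -> List[str]:
    """Identify conversation participants from BLE advertisements of nearby earables."""
    return sorted(set(scan_results))


def fuse(imu: List[float], mic: List[float], scans: List[str]) -> ConversationSnapshot:
    """Cross-modal fusion: run each per-modality model and combine the results."""
    speaking = motion_model(imu)
    stress, emotion = audio_model(mic)
    return ConversationSnapshot(speaking, stress, emotion, ble_model(scans))


if __name__ == "__main__":
    snap = fuse(imu=[0.1, 0.3, -0.2], mic=[0.4, -0.5, 0.3],
                scans=["earable-A", "earable-B"])
    print(snap)
```

A mobile application like the one the abstract describes would consume a stream of such snapshots to qualify an ongoing conversation and generate personalised well-being feedback; keeping each modality in its own on-device model is what enables the energy-efficient, privacy-preserving operation the authors highlight.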
Publisher
Association for Computing Machinery, Inc
Issue Date
2018-10
Language
English
Citation

2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing (UbiComp 2018) and 2018 ACM International Symposium on Wearable Computers (ISWC 2018), pp. 706-709

DOI
10.1145/3267305.3267695
URI
http://hdl.handle.net/10203/271974
Appears in Collection
RIMS Conference Papers
Files in This Item
There are no files associated with this item.