A usability study of multimodal input in an augmented reality environment

Cited 57 times in Web of Science · Cited 66 times in Scopus
  • Hits : 950
  • Downloads : 466
DC Field | Value | Language
dc.contributor.author | Lee, Minkyung | ko
dc.contributor.author | Billinghurst, Mark | ko
dc.contributor.author | Baek, Woonhyuk | ko
dc.contributor.author | Green, Richard | ko
dc.contributor.author | Woo, Woontack | ko
dc.date.accessioned | 2014-08-28T08:19:41Z | -
dc.date.available | 2014-08-28T08:19:41Z | -
dc.date.created | 2013-11-19 | -
dc.date.issued | 2013-11 | -
dc.identifier.citation | VIRTUAL REALITY, v.17, no.4, pp.293 - 305 | -
dc.identifier.issn | 1359-4338 | -
dc.identifier.uri | http://hdl.handle.net/10203/188483 | -
dc.description.abstract | In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications. | -
dc.language | English | -
dc.publisher | SPRINGER LONDON LTD | -
dc.title | A usability study of multimodal input in an augmented reality environment | -
dc.type | Article | -
dc.identifier.wosid | 000325997200004 | -
dc.identifier.scopusid | 2-s2.0-84886395485 | -
dc.type.rims | ART | -
dc.citation.volume | 17 | -
dc.citation.issue | 4 | -
dc.citation.beginningpage | 293 | -
dc.citation.endingpage | 305 | -
dc.citation.publicationname | VIRTUAL REALITY | -
dc.identifier.doi | 10.1007/s10055-013-0230-0 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Woo, Woontack | -
dc.contributor.nonIdAuthor | Lee, Minkyung | -
dc.contributor.nonIdAuthor | Billinghurst, Mark | -
dc.contributor.nonIdAuthor | Baek, Woonhyuk | -
dc.contributor.nonIdAuthor | Green, Richard | -
dc.type.journalArticle | Article | -
dc.subject.keywordAuthor | Multimodal interface | -
dc.subject.keywordAuthor | Augmented reality | -
dc.subject.keywordAuthor | Usability | -
dc.subject.keywordAuthor | Efficiency | -
dc.subject.keywordAuthor | Effectiveness | -
dc.subject.keywordAuthor | Satisfaction | -
dc.subject.keywordPlus | INTERFACE | -
dc.subject.keywordPlus | SPEECH | -
dc.subject.keywordPlus | GESTURES | -
dc.subject.keywordPlus | SYSTEM | -
Appears in Collection: GCT - Journal Papers