Improving accessibility to web documents for the aurally challenged with sign language animation

DC Field | Value | Language
dc.contributor.author | Chung, Jin-Woo | ko
dc.contributor.author | Lee, Ho-Joon | ko
dc.contributor.author | Park, Jong Cheol | ko
dc.date.accessioned | 2013-03-28T10:46:16Z | -
dc.date.available | 2013-03-28T10:46:16Z | -
dc.date.created | 2012-02-06 | -
dc.date.issued | 2011-05-25 | -
dc.identifier.citation | 1st International Conference on Web Intelligence, Mining and Semantics, WIMS'11, pp. 33:1 - 33:8 | -
dc.identifier.uri | http://hdl.handle.net/10203/164996 | -
dc.description.abstract | In this paper, we describe how to improve accessibility for the aurally challenged in a web environment, focusing on the use of a signing avatar for web pages. Many systems have previously been proposed to make the web environment more accessible to deaf people by providing signed expressions, i.e., by translating written text into sign language animations and presenting them appropriately, based on the observation that deaf users typically have considerable difficulty understanding text-based information as well as audio content. We analyze the strengths and weaknesses of these systems with respect to the design criteria discussed, and propose a system that presents a signing avatar for web page documents via a mobile device, which is expected to overcome the shortcomings of the previous systems and to improve the accessibility of textual content on the web for deaf users. The proposed system has three main parts based on a client-server architecture: 1) a client that runs a web browser and transmits selected text to the server, 2) a server that takes the text as input and translates it into signed expressions through a sign language generation module, and 3) a mobile device that displays the signing animation streamed from the server. We also present linguistic issues raised by the differences between Korean and Korean Sign Language. To the best of our knowledge, this is the first approach to use a mobile device for web document access by the aurally challenged. We discuss the implications of our study and future directions. | -
dc.language | English | -
dc.publisher | WIMS | -
dc.title | Improving accessibility to web documents for the aurally challenged with sign language animation | -
dc.type | Conference | -
dc.identifier.scopusid | 2-s2.0-79960584049 | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 33:1 | -
dc.citation.endingpage | 33:8 | -
dc.citation.publicationname | 1st International Conference on Web Intelligence, Mining and Semantics, WIMS'11 | -
dc.identifier.conferencecountry | NO | -
dc.identifier.conferencelocation | Sogndal | -
dc.identifier.doi | 10.1145/1988688.1988727 | -
dc.embargo.liftdate | 9999-12-31 | -
dc.embargo.terms | 9999-12-31 | -
dc.contributor.localauthor | Park, Jong Cheol | -
dc.contributor.nonIdAuthor | Chung, Jin-Woo | -
dc.contributor.nonIdAuthor | Lee, Ho-Joon | -
Appears in Collection: CS-Conference Papers (conference papers)
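
The record itself contains no implementation, but the abstract outlines a three-part client-server architecture: a browser-side client that sends selected text to a server, a server that runs a sign language generation module, and a mobile device that streams the resulting signing animation. The following is a minimal sketch of that flow only, under stated assumptions: the /translate endpoint, the port, the generate_sign_animation stub, and the media URL are hypothetical stand-ins for illustration, not the authors' implementation.

```python
# Minimal sketch of the three-part flow described in the abstract (hypothetical;
# the paper does not publish code). A browser-side client POSTs selected text to
# a translation server; the server runs a (stubbed) sign language generation step
# and returns a URL from which a mobile device could stream the signing animation.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def generate_sign_animation(text: str) -> str:
    """Stand-in for the sign language generation module: map the input text to an
    identifier of a signing-avatar clip. A real system would generate the animation."""
    clip_id = abs(hash(text)) % 100000  # placeholder, not real generation
    return f"http://media.example.org/clips/{clip_id}.mp4"  # hypothetical stream URL


class TranslationServer(BaseHTTPRequestHandler):
    def do_POST(self):
        # Server part: accept selected text and respond with a streaming URL.
        if self.path != "/translate":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        stream_url = generate_sign_animation(payload["text"])
        body = json.dumps({"stream_url": stream_url}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def client_send_selection(selected_text: str, server: str = "http://localhost:8000") -> str:
    """Client part: send the text selected in the web browser and return the URL
    that the mobile device should stream the signing animation from."""
    req = urllib.request.Request(
        f"{server}/translate",
        data=json.dumps({"text": selected_text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["stream_url"]


if __name__ == "__main__":
    # Run the translation server; a client would call client_send_selection(...)
    # and hand the returned URL to the mobile device's media player.
    HTTPServer(("localhost", 8000), TranslationServer).serve_forever()
```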