Prediction of vowel identification for cochlear implant using a computational model

Cited 1 time in Web of Science; cited 0 times in Scopus.
DC Field: Value (Language)
dc.contributor.author: Yang, Hyejin (ko)
dc.contributor.author: Won, Jong Ho (ko)
dc.contributor.author: Kang, Soojin (ko)
dc.contributor.author: Moon, Il Joon (ko)
dc.contributor.author: Hong, Sung Hwa (ko)
dc.contributor.author: Woo, Jihwan (ko)
dc.date.accessioned: 2022-06-14T03:00:39Z
dc.date.available: 2022-06-14T03:00:39Z
dc.date.created: 2022-06-13
dc.date.issued: 2016-12
dc.identifier.citation: SPEECH COMMUNICATION, v.85, pp.19-28
dc.identifier.issn: 0167-6393
dc.identifier.uri: http://hdl.handle.net/10203/296932
dc.description.abstract: A computational biophysical auditory nerve fiber model, along with mathematical algorithms, is presented that predicts vowel identification for cochlear implant (CI) users based on the predicted peripheral neural representations of speech information (i.e., neurograms). Our model simulates the discharge patterns of electrically stimulated auditory nerve fibers along the length of the cochlea and quantifies the similarity between the neurograms for different speech signals. The effects of background noise (+15, +10, +5, 0, and -5 dB SNR) and stimulation rate (900, 1200, and 1800 pps/ch) on vowel identification were evaluated and compared to CI subject data to demonstrate the performance of our model. Results from both the computational modeling and the clinical test showed that vowel identification performance decreased as background noise increased, while vowel identification was not significantly influenced by the stimulation rate. The proposed method, both objective and automated, can be used for a wide range of stimulus conditions, signal processing strategies, and different biological conditions in implanted ears. (C) 2016 Elsevier B.V. All rights reserved.
dc.language: English
dc.publisher: ELSEVIER SCIENCE BV
dc.title: Prediction of vowel identification for cochlear implant using a computational model
dc.type: Article
dc.identifier.wosid: 000390507000003
dc.identifier.scopusid: 2-s2.0-84994275471
dc.type.rims: ART
dc.citation.volume: 85
dc.citation.beginningpage: 19
dc.citation.endingpage: 28
dc.citation.publicationname: SPEECH COMMUNICATION
dc.identifier.doi: 10.1016/j.specom.2016.10.005
dc.contributor.localauthor: Kang, Soojin
dc.contributor.nonIdAuthor: Yang, Hyejin
dc.contributor.nonIdAuthor: Won, Jong Ho
dc.contributor.nonIdAuthor: Moon, Il Joon
dc.contributor.nonIdAuthor: Hong, Sung Hwa
dc.contributor.nonIdAuthor: Woo, Jihwan
dc.description.isOpenAccess: N
dc.type.journalArticle: Article
dc.subject.keywordAuthor: Cochlear implant
dc.subject.keywordAuthor: Computational modeling
dc.subject.keywordAuthor: Neurogram
dc.subject.keywordAuthor: Vowel identification
dc.subject.keywordPlus: STIMULATED AUDITORY-NERVE
dc.subject.keywordPlus: SPEECH RECOGNITION; USERS; INTELLIGIBILITY; DISCRIMINATION; PSYCHOPHYSICS; PERFORMANCE; PERCEPTION; SIMULATION; FREQUENCY
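The abstract describes quantifying the similarity between neurograms (simulated auditory-nerve discharge patterns) to predict which vowel a CI listener would identify. The sketch below is a minimal illustration of that idea only, not the paper's actual metric or model: it assumes a neurogram is a 2-D array of firing rates (fibers x time bins), and the function names `neurogram_similarity` and `predict_vowel`, as well as the choice of normalized cross-correlation as the similarity measure, are hypothetical.

```python
import numpy as np

def neurogram_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two neurograms.

    a, b: 2-D arrays (fibers x time bins) of firing rates.
    Returns a value in [-1, 1]; 1.0 means identical discharge patterns
    up to scale and offset.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float((a * b).sum() / denom)

def predict_vowel(test_ng: np.ndarray, templates: dict) -> str:
    """Pick the vowel whose template neurogram is most similar
    to the test neurogram (template-matching classification)."""
    scores = {v: neurogram_similarity(test_ng, ng)
              for v, ng in templates.items()}
    return max(scores, key=scores.get)
```

With neurograms simulated for each stimulus condition (noise level, stimulation rate), such a template-matching step is one simple way to turn pairwise similarities into a vowel-identification prediction that can be compared against listener confusion data.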
Appears in Collection
RIMS Journal Papers
Files in This Item
There are no files associated with this item.
