Evaluating Visual Representations for Topic Understanding and Their Effects on Manually Generated Topic Labels

DC Field | Value | Language
dc.contributor.author | Smith, Alison | ko
dc.contributor.author | Lee, Tak Yeon | ko
dc.contributor.author | Poursabzi-Sangdeh, Forough | ko
dc.contributor.author | Boyd-Graber, Jordan | ko
dc.contributor.author | Elmqvist, Niklas | ko
dc.contributor.author | Findlater, Leah | ko
dc.date.accessioned | 2021-03-06T07:50:05Z | -
dc.date.available | 2021-03-06T07:50:05Z | -
dc.date.created | 2021-03-06 | -
dc.date.issued | 2017 | -
dc.identifier.citation | Transactions of the Association for Computational Linguistics, v.5, pp.1 - 16 | -
dc.identifier.issn | 2307-387X | -
dc.identifier.uri | http://hdl.handle.net/10203/281305 | -
dc.description.abstract | Probabilistic topic models are important tools for indexing, summarizing, and analyzing large document collections by their themes. However, promoting end-user understanding of topics remains an open research problem. We compare labels generated by users given four topic visualization techniques—word lists, word lists with bars, word clouds, and network graphs—against each other and against automatically generated labels. Our basis of comparison is participant ratings of how well labels describe documents from the topic. Our study has two phases: a labeling phase where participants label visualized topics and a validation phase where different participants select which labels best describe the topics’ documents. Although all visualizations produce similar quality labels, simple visualizations such as word lists allow participants to quickly understand topics, while complex visualizations take longer but expose multi-word expressions that simpler visualizations obscure. Automatic labels lag behind user-created labels, but our dataset of manually labeled topics highlights linguistic patterns (e.g., hypernyms, phrases) that can be used to improve automatic topic labeling algorithms. | -
dc.language | English | -
dc.publisher | The MIT Press | -
dc.title | Evaluating Visual Representations for Topic Understanding and Their Effects on Manually Generated Topic Labels | -
dc.type | Article | -
dc.type.rims | ART | -
dc.citation.volume | 5 | -
dc.citation.beginningpage | 1 | -
dc.citation.endingpage | 16 | -
dc.citation.publicationname | Transactions of the Association for Computational Linguistics | -
dc.identifier.doi | 10.1162/tacl_a_00042 | -
dc.contributor.localauthor | Lee, Tak Yeon | -
dc.contributor.nonIdAuthor | Smith, Alison | -
dc.contributor.nonIdAuthor | Poursabzi-Sangdeh, Forough | -
dc.contributor.nonIdAuthor | Boyd-Graber, Jordan | -
dc.contributor.nonIdAuthor | Elmqvist, Niklas | -
dc.contributor.nonIdAuthor | Findlater, Leah | -
dc.description.isOpenAccess | Y | -
Appears in Collection
ID-Journal Papers (저널논문)
Files in This Item
There are no files associated with this item.