(A) study on noisy sentence classification compensating performance deviations of relation extractor

DC metadata (field: value)
dc.contributor.advisor: Choi, Key-Sun
dc.contributor.advisor: 최기선
dc.contributor.author: Yoon, Sooji
dc.date.accessioned: 2021-05-11T19:34:16Z
dc.date.available: 2021-05-11T19:34:16Z
dc.date.issued: 2019
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=875469&flag=dissertation
dc.identifier.uri: http://hdl.handle.net/10203/283093
dc.description: Master's thesis - KAIST, School of Computing, 2019.8, [iv, 39 p.]
dc.description.abstract: Relation extraction is the task of inferring the semantic relation between two entities identified in natural language text. Extracted relations are stored in a knowledge base as triples. Knowledge bases are widely used in natural language processing applications such as question answering and information retrieval, so research on augmenting them through relation extraction is essential. Distant supervision data, used to train machine-learning-based relation extraction models, are obtained by annotating a target corpus with a predefined knowledge base. This makes the data easy to obtain, but the resulting labels contain many errors. This thesis proposes an improvement on the existing reinforcement-learning-based approach to handling such erroneous data. The existing reinforcement-learning-based relation extraction cannot overcome the performance limits of the relation extractor itself, because the reward given to its agent depends entirely on that extractor. This study supplements those limits by adding a reward that is independent of the relation extractor. In addition, because of how the knowledge base is defined, a sentence in distant supervision data can be labeled with multiple relations, in which case the agent cannot obtain the optimal reward in a given state. To address this, the study separates states by relation so that the agent can obtain the optimal reward for each state. (An illustrative sketch of this reward scheme follows this record.)
dc.language: eng
dc.publisher: Korea Advanced Institute of Science and Technology (KAIST)
dc.subject: Relation extraction; distant supervision data; reinforcement learning
dc.subject: 관계추출 (relation extraction); 원격지도 학습데이터 (distant supervision data); 강화학습 (reinforcement learning)
dc.title: (A) study on noisy sentence classification compensating performance deviations of relation extractor
dc.title.alternative: 관계추출기의 성능 편차를 보완하는 강화학습 기반의 오류 문장 분류기법에 대한 연구 (A study on a reinforcement-learning-based noisy sentence classification method compensating for performance deviations of the relation extractor)
dc.type: Thesis (Master)
dc.identifier.CNRN: 325007
dc.description.department: KAIST, School of Computing
dc.contributor.alternativeauthor: 윤수지
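
The abstract describes the reward design and the per-relation state separation only in prose. Below is a minimal, illustrative Python sketch of how such a scheme could be structured. It is not the thesis implementation: every name here (Sentence, kb_supported, episode_reward, split_states_by_relation, the mixing weight alpha) is a hypothetical stand-in for the extractor-dependent reward, the extractor-independent reward, and the per-relation states described above.

# Illustrative sketch only (not the thesis implementation); all names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Sentence:
    text: str
    relation: str       # relation label assigned by distant supervision
    kb_supported: bool   # stand-in extractor-independent signal (e.g. KB support)


def episode_reward(
    kept: List[Sentence],
    extractor_log_likelihood: Callable[[List[Sentence]], float],
    alpha: float = 0.5,
) -> float:
    """Mix an extractor-dependent reward with an extractor-independent one.

    extractor-dependent: average log-likelihood the relation extractor assigns
    to the distant-supervision labels of the sentences the agent kept.
    extractor-independent: fraction of kept sentences whose label is supported
    by a signal that does not come from the extractor (here, kb_supported).
    """
    if not kept:
        return 0.0
    dependent = extractor_log_likelihood(kept) / len(kept)
    independent = sum(s.kb_supported for s in kept) / len(kept)
    return alpha * dependent + (1.0 - alpha) * independent


def split_states_by_relation(bag: List[Sentence]) -> Dict[str, List[Sentence]]:
    """Separate the agent's state per relation, so sentences labeled with
    different relations are rewarded in their own relation-specific episodes."""
    states: Dict[str, List[Sentence]] = {}
    for sentence in bag:
        states.setdefault(sentence.relation, []).append(sentence)
    return states

In this sketch the extractor-dependent term is the average log-likelihood the current relation extractor assigns to the kept sentences, while the extractor-independent term uses a knowledge-base support flag that does not change as the extractor is retrained; mixing the two is one way an agent could avoid inheriting the extractor's own performance deviations, in the spirit of the abstract.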
Appears in Collection: CS-Theses_Master (Master's Theses)