Towards practical model fairness for trustworthy and safe AI

DC Field: Value (Language)

dc.contributor.advisor: 황의종
dc.contributor.author: Roh, Yuji
dc.contributor.author: 노유지
dc.date.accessioned: 2024-08-08T19:31:43Z
dc.date.available: 2024-08-08T19:31:43Z
dc.date.issued: 2024
dc.identifier.uri: http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=1100094&flag=dissertation (en_US)
dc.identifier.uri: http://hdl.handle.net/10203/322188
dc.description: Thesis (Ph.D.) - 한국과학기술원 (KAIST): School of Electrical Engineering, 2024.2, [xi, 183 p.]
dc.description.abstract: As artificial intelligence (AI) has an increasing societal impact in our world, developing fair AI becomes important to avoid adopting or even amplifying social biases and discrimination. Although many techniques for training fair models have been proposed, most of them face significant limitations that make them challenging to apply in practice. To address these challenges, this thesis provides fundamental solutions for 1) lowering the technical barriers of fair AI development and 2) achieving high fairness even when the training and test data contain errors or change over time, and thus lays the foundation for practical and trustworthy AI. Furthermore, we aim to extend our techniques to mitigate ethical concerns associated with foundation models, which have recently been adopted in many applications at an explosive pace, and suggest new opportunities for making foundation models fair and safe to use.
dc.language: eng
dc.publisher: 한국과학기술원 (KAIST)
dc.subject: 신뢰할 수 있는 인공지능 ▼a 안전한 인공지능 ▼a 모델 공정성 ▼a 모델 견고성
dc.subject: Trustworthy AI ▼a AI Safety ▼a Model Fairness ▼a Model Robustness
dc.title: Towards practical model fairness for trustworthy and safe AI
dc.title.alternative: 신뢰할 수 있는 인공지능을 위한 실용적인 모델 공정성
dc.type: Thesis (Ph.D.)
dc.identifier.CNRN: 325007
dc.description.department: 한국과학기술원 (KAIST): School of Electrical Engineering
dc.contributor.alternativeauthor: Whang, Euijong
Appears in Collection
EE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item.
