A fair classifier using mutual information

As machine learning becomes prevalent in our daily lives across a widening array of applications such as medicine, finance, hiring, and criminal justice, one morally and legally motivated requirement for machine learning algorithms is to ensure fairness for disadvantaged groups relative to advantaged groups. Fairness in machine learning aims at guaranteeing that a prediction output is irrelevant to sensitive attributes such as race, sex, and religion. To this end, we take an information-theoretic approach using mutual information (MI), which can fully capture such independence. Inspired by the fact that zero MI between the prediction and the sensitive attribute is the necessary and sufficient condition for independence, we develop an MI-based algorithm that trades off prediction accuracy against fairness performance, often quantified as Disparate Impact (DI) or Equalized Odds (EO). Our experiments on both synthetic and benchmark real datasets demonstrate that our algorithm outperforms prior fair classifiers in tradeoff performance with respect to both DI and EO.
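The abstract's two key quantities can be illustrated concretely. The following is a minimal sketch, not the paper's algorithm: it computes the empirical MI between a binary prediction and a binary sensitive attribute (zero exactly when they are independent) and the Disparate Impact ratio. Function names and the ratio-style DI definition are assumptions chosen for illustration.

```python
from collections import Counter
import math

def mutual_information(y_pred, s):
    """Empirical MI (in nats) between predictions and a sensitive attribute.

    MI = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) ).
    Zero iff y_pred and s are empirically independent.
    """
    n = len(y_pred)
    joint = Counter(zip(y_pred, s))  # joint counts over (prediction, attribute)
    py = Counter(y_pred)             # marginal counts of predictions
    ps = Counter(s)                  # marginal counts of the attribute
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab / (p_a * p_b) written with counts to avoid extra divisions
        mi += p_ab * math.log(p_ab * n * n / (py[a] * ps[b]))
    return mi

def disparate_impact(y_pred, s):
    """DI = min ratio of positive-prediction rates between the two groups.

    DI = 1 means perfectly balanced positive rates; smaller means less fair.
    """
    rate = {}
    for g in set(s):
        group_preds = [y for y, gg in zip(y_pred, s) if gg == g]
        rate[g] = sum(group_preds) / len(group_preds)
    r0, r1 = rate[0], rate[1]
    return min(r0 / r1, r1 / r0) if r0 and r1 else 0.0
```

For example, predictions that are independent of the attribute (`[1, 0, 1, 0]` vs. groups `[0, 0, 1, 1]`) give MI = 0 and DI = 1, while predictions fully determined by the attribute give positive MI and DI = 0, matching the "zero MI iff independent" fact the algorithm is built on.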
Publisher
International Symposium on Information Theory
Issue Date
2020-06-24
Language
English
Citation

IEEE International Symposium on Information Theory, ISIT 2020, pp. 2521-2526

ISSN
2157-8095
DOI
10.1109/ISIT44484.2020.9174293
URI
http://hdl.handle.net/10203/278699
Appears in Collection
EE-Conference Papers (학술회의논문: Conference Papers)
Files in This Item
There are no files associated with this item.
