Bias mitigation with AIF360: A comparative study

Authors

  • Knut T. Hufthammer, University of Bergen
  • Tor H. Aasheim, University of Bergen
  • Sølve Ånneland, University of Bergen
  • Håvard Brynjulfsen, University of Bergen
  • Marija Slavkovik, University of Bergen

Abstract

The use of artificial intelligence for decision making raises concerns about the societal impact of such systems. Traditionally, the decisions of a human decision-maker are governed by laws and human values. Decision-making is now increasingly guided, or in some cases replaced, by machine learning classification, which may reinforce and introduce bias. Algorithmic bias mitigation is explored as an approach to avoid this; however, it can come at a cost to efficiency and accuracy. We conduct an empirical analysis of two off-the-shelf bias mitigation techniques from the AIF360 toolkit on a binary classification task. Our preliminary results indicate that bias mitigation is a feasible approach to ensuring group fairness.
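To illustrate the kind of off-the-shelf mitigation the abstract refers to, the sketch below applies one AIF360 pre-processing technique (Reweighing) to a small, hand-built binary classification dataset and compares a group fairness metric before and after. This is only a minimal, assumed example: the abstract does not name the specific techniques, dataset, or protected attribute used in the study, and the toy data here is invented purely for demonstration.

```python
# Minimal sketch (not the study's actual pipeline): Reweighing from AIF360
# applied to a toy binary-label dataset with 'sex' as the protected attribute.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable/unfavorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.2, 0.5],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Group fairness before mitigation: statistical parity difference
# (0 means both groups receive the favorable outcome at the same rate).
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference before:", before.statistical_parity_difference())

# Reweighing assigns instance weights so that the label and group membership
# become statistically independent in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference after:", after.statistical_parity_difference())
```

A pre-processing technique like this reweights or transforms the training data before a classifier is fit, which is one of the ways such mitigation can trade accuracy for group fairness.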

Published

2020-11-23