Mitigating Emergent Biases in Online Learning

The field of online learning and bandits deals with sequential decision-making problems, in which a learner makes a series of decisions aimed at minimizing a loss (or, equivalently, maximizing a reward) signal. Online learning algorithms underpin many data-driven systems that drive consequential decisions in internet commerce, finance, and even policing. Prior work documents instances where inadvertent biases are generated, propagated, and perpetuated through the use of data-driven decision systems. Unfortunately, these works come mainly from the economics and social science literature and focus on providing evidence of these phenomena, not on developing algorithmic mitigations.

Overview

Data used to train machine learning systems often contain human and societal biases that can lead to treating individuals unfavorably (unfairly) on the basis of characteristics such as race, gender, or disability. This has motivated researchers to investigate techniques for ensuring that models satisfy fairness properties. One way to mitigate biases and prevent discrimination is to introduce appropriate fairness constraints. The extension of this line of work to the online setting has received less attention, although some works do treat the problem of imposing population-level (group) or individual fairness constraints on the predictions of a model online.
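
To make "imposing a constraint" concrete, below is a minimal sketch of one common group-level constraint, demographic parity, enforced by post-processing a model's scores with group-dependent thresholds. The score distributions, target rate, and threshold rule here are illustrative assumptions, not a method developed in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores and group membership; purely illustrative.
n = 10_000
group = rng.integers(0, 2, size=n)        # protected attribute: 0 or 1
score = rng.beta(2 + group, 2)            # group 1 skews toward higher scores

# Demographic parity via post-processing: choose a group-dependent
# threshold so that each group receives positive decisions at the
# same target rate.
target_rate = 0.5
thresholds = {g: np.quantile(score[group == g], 1 - target_rate)
              for g in (0, 1)}

for g in (0, 1):
    rate = np.mean(score[group == g] >= thresholds[g])
    print(f"group {g}: threshold={thresholds[g]:.3f}, positive rate={rate:.3f}")
```

Group-dependent thresholds are only one of several standard post-processing approaches; the point is simply what enforcing a fairness constraint looks like operationally.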

For example, a financial institution that issues loans only to individuals predicted to repay them never observes repayment outcomes for rejected applicants; it may therefore inadvertently learn a model that denies loans to people who would in fact have repaid them, thus introducing unintended bias.

Problem Definition - Because the online setting has seldom been studied in the fairness literature, there is a large gap in our understanding of what kinds of adverse discriminatory effects, if any, can emerge merely from the continual interaction over time between a learner, its model, and its decisions. Bandit algorithms explore with the purpose of exploiting, so when applied in sequential domains such as advertising, they may exploit any statistical pattern, no matter how discriminatory, to squeeze out reward. We posit that (*) careless model use, akin to a police department training (while using) a crime-recidivism predictive model only on apprehended individuals, can cause serious emergent discriminatory behavior that results in adverse treatment (for example, artificially higher incarceration rates) of underrepresented minorities. Recent work has started to ask tantalizing questions about the convergence equilibria of models whose decisions change the data distribution used to further train them, although that work is neither specialized to the fairness domain nor concerned with defining, characterizing, and mitigating discriminatory effects through time. Some of the situations we study can be framed as problems with one-sided feedback, which have been surprisingly little studied in the online learning literature; the sketch below illustrates this dynamic.
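
The following minimal sketch (all probabilities, sample sizes, and the greedy rule are illustrative assumptions, not our actual experimental setup) shows how one-sided feedback alone can freeze a discriminatory decision in place: a greedy lender that observes repayment only for accepted applicants never corrects an unluckily pessimistic initial estimate for a minority group.

```python
import numpy as np

rng = np.random.default_rng(0)

# True repayment probabilities are identical across groups.
true_repay = {0: 0.7, 1: 0.7}   # group 0: majority, group 1: minority
threshold = 0.5                 # lend only if estimated repay rate >= 0.5

# Skewed seed data: 100 majority observations, 3 minority observations
# that happened to default (plausible sampling noise in a tiny sample).
obs = {0: list(rng.random(100) < true_repay[0]), 1: [False, False, False]}

for t in range(5_000):
    g = int(rng.random() < 0.2)            # minority is 20% of applicants
    estimate = np.mean(obs[g])
    if estimate >= threshold:              # greedy: no exploration
        obs[g].append(rng.random() < true_repay[g])  # outcome seen only on accept
    # rejected applicants produce no feedback, so obs[g] never changes

for g in (0, 1):
    print(f"group {g}: estimated repay rate={np.mean(obs[g]):.2f}, "
          f"observations={len(obs[g])}")
```

Because rejected applicants generate no feedback, the minority estimate stays frozen at its unlucky initial value forever, even though both groups repay at the same true rate; any amount of forced exploration would eventually correct it.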

Current Work - We start by formalizing our intuition (*) regarding emergent discrimination as a result of the use of predictive machine learning models. We use the framework of online learning to define this mathematically, modeling the problem we study as a generalized linear contextual bandit problem with two actions. We have preliminary results on synthetic data showing that emergent discrimination is a real phenomenon. We are extending these results by testing on the benchmark datasets used in the fairness literature; ideally, we would like to document this phenomenon in a real, large-scale data-driven decision-making system.
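
As a rough illustration of that formalization (the environment, the epsilon-greedy rule, and all parameters below are assumptions for exposition, not the algorithm or results of this project), consider a two-action logistic contextual bandit in which accepting yields a Bernoulli reward through a generalized linear model, while rejecting yields a known zero reward and reveals no outcome:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
d, T, eps = 5, 1000, 0.05
theta = rng.normal(size=d)          # true GLM parameter, unknown to the learner

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X_obs, y_obs, model = [], [], None
for t in range(T):
    x = rng.normal(size=d)                              # context for round t
    # Action "accept" pays Bernoulli(sigmoid(theta @ x)); action "reject"
    # pays a known 0 and, crucially, reveals nothing (one-sided feedback).
    if model is None or rng.random() < eps:
        accept = True                                   # forced exploration
    else:
        accept = model.predict_proba(x.reshape(1, -1))[0, 1] >= 0.5
    if accept:
        y_obs.append(rng.random() < sigmoid(theta @ x))
        X_obs.append(x)
        if len(set(y_obs)) == 2:                        # need both classes to fit
            model = LogisticRegression().fit(np.array(X_obs), np.array(y_obs))

w = model.coef_[0]
cos = w @ theta / (np.linalg.norm(w) * np.linalg.norm(theta))
print(f"accepted {len(X_obs)} of {T} rounds; cosine(learned, true) = {cos:.2f}")
```

The forced exploration (eps) is what distinguishes this sketch from the purely greedy learner above; setting eps to zero recovers the regime in which one-sided feedback can lock in discriminatory behavior.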

Desired Outcomes - We expect to gain a better understanding of which types of fairness constraints are desirable to impose on online learning algorithms. Of particular interest is carrying our results over to the reinforcement learning setting.

Conclusion - In today's world, many data-driven decision systems are tasked with making important decisions about individuals. As a field, we must devise ways to keep them from perpetuating, or even introducing, undesirable biases that amplify historical inequities.
