Introduction to Machine Learning and Potential for Bias

Introduction

Machine learning is all the rage these days. With the proliferation of available information through the internet and a society that is more technologically connected than ever before, machine learning was the natural next step in automation and artificial intelligence. If you are an avid technology user, you are probably affected by machine learning more frequently than you could imagine. From determining what ads we see while shopping on Amazon to tracking our commute habits, machine learning pervades many aspects of our lives.

Of course, with great power comes great responsibility. Machine learning, at its core, is statistics – it involves the use of probabilistic models that are built using sample data. Simply put, it uses math and the past to "guess" the future. If you've browsed various types of cameras on Amazon, for instance, Amazon will predict that you'd like to see more cameras in your recommendations.

Machine learning algorithms have advanced tremendously in the past decade, edging toward the realm of science fiction A.I. In March 2016, Google DeepMind's AlphaGo cleared a seemingly insurmountable hurdle, soundly defeating Lee Sedol, one of professional Go's best players, in four of their five matches. Go was considered one of the most difficult games for AI to learn due to the sheer complexity of its possible game states, and AlphaGo's development is consequently an incredible feat of engineering.

Yet, while we can reliably depend on complex machine learning (ML) algorithms for chess and perhaps Go, we cannot do so blindly. Can we always trust what ML tells us? What happens when we train an ML program to perform the role of a judge or a doctor? People have tried, and it is clear that the ethical stakes are orders of magnitude higher when ML is incorporated into the criminal justice system or medicine.

First, we'll examine some of the profound imperfections of learning algorithms.

A Toy Example: Recognizing Birds

Let's suppose that we want to train a ML algorithm to distinguish between two types of birds: Crows and Blackbirds (of the common variety). (Example drawn from [1].)

The top image depicts a crow, and the bottom a blackbird. The question then is, how do we train an ML program to distinguish these two images? Furthermore, how do we train it to correctly identify new pictures?

Suppose that we have a black box feature extractor that can extract key details about an image (e.g. the time of day the picture was taken, the contents of the background, the orientation and color of the bird, etc.).
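For concreteness, here is a minimal sketch of what such a feature extractor might look like, assuming Pillow is available for image handling; the function name, the EXIF lookup, and the choice of features are illustrative assumptions rather than part of the original example.

    # Hypothetical black-box feature extractor for the bird example.
    # It returns the hour the photo was taken (from EXIF metadata, if
    # present) and the image's overall brightness.
    from PIL import Image

    def extract_features(path):
        img = Image.open(path)
        exif = img.getexif()
        timestamp = exif.get(306)  # EXIF tag 306 is "DateTime", e.g. "2016:11:03 17:42:10"
        hour = int(timestamp.split(" ")[1].split(":")[0]) if timestamp else None
        gray = img.convert("L")  # grayscale copy
        pixels = list(gray.getdata())
        brightness = sum(pixels) / (255.0 * len(pixels))  # 0 = dark, 1 = light
        return {"hour": hour, "brightness": brightness}

In practice the extractor could be anything from hand-written rules like this to a neural network; the models below only care about the numbers it produces.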

Then, we could build a couple of simple (not necessarily good) learning models that function as follows:

Both models only use two features to differentiate images: the time of day during which the images were taken, and how light/dark the images are. The points on each graph indicate where each image lies on both spectrums.

In the left model, a single rectangle is used to classify the points. In other words, any points falling into the shaded rectangle are classified as crows, and all others are blackbirds.

In the right model, a more complex shape is used as a classifier.

The first thing we notice is that the left model is erroneous on some points! It incorrectly classifies some blackbirds as crows, and some crows as blackbirds. However, the right model has no error on the training data.

Yet, the right model is also flawed. Recall that the purpose of a good ML model is to classify new data correctly.

On the left is the same graph as before, and on the right some new points have been added. We can see that this model does not adapt very well to new data! The total error is now pretty high, and it appears that no satisfactory shape can accurately separate the blackbirds from the crows in the above graph.
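This failure to generalize is easy to reproduce. Below is a small sketch using synthetic data, assuming scikit-learn and NumPy are available; the shallow and deep decision trees stand in for the rectangle and the intricate shape, and all numbers are invented for illustration.

    # A toy illustration of overfitting: a very flexible classifier can fit
    # the training points perfectly yet generalize worse than a simple one.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    def make_birds(n):
        """Two noisy, overlapping clusters: 0 = blackbird, 1 = crow.
        Features: [hour of day, image brightness]."""
        labels = rng.integers(0, 2, size=n)
        hours = rng.normal(12 + 2 * labels, 3.0, size=n)
        brightness = rng.normal(0.6 - 0.2 * labels, 0.15, size=n)
        return np.column_stack([hours, brightness]), labels

    X_train, y_train = make_birds(60)
    X_test, y_test = make_birds(200)

    simple = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)       # the "rectangle"
    complex_ = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)  # the intricate shape

    for name, model in [("simple", simple), ("complex", complex_)]:
        print(name,
              "train acc:", round(model.score(X_train, y_train), 2),
              "test acc:", round(model.score(X_test, y_test), 2))

Typically the complex tree scores nearly 1.0 on its training data yet does worse than the simple tree on the held-out test set: it has memorized noise rather than learned the underlying pattern.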

We call the above phenomenon overfitting. A model overfits when it is complex enough to match its training data almost perfectly but, as a result, fails to generalize to new data. It is also important to recognize another source of error: hidden bias.

What if when taking photos, the photographer was more inclined to take darker pictures of crows and lighter pictures of blackbirds?

In the above graphs, we see a clear trend that darker images tended to contain crows, and lighter ones blackbirds. However, if the data given to the learning model itself was biased based on how it was collected, that bias would be reflected in the final model. A significant problem in machine learning is thus apparent:

Learning algorithms themselves may not generate bias, but without adequate preparation, they may perpetuate it.
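The photographer scenario is just as easy to simulate. A minimal sketch, again with invented numbers and assuming scikit-learn: the model trained on the biased photo collection looks accurate, but only because it has learned the collection bias itself.

    # Sketch: if the photographer shoots crows mostly in the dark and
    # blackbirds mostly in the light, the model learns "dark means crow",
    # a property of the data collection rather than of the birds.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def sample(n, biased):
        labels = rng.integers(0, 2, size=n)  # 0 = blackbird, 1 = crow
        if biased:
            # Collection bias: crows photographed darker, blackbirds lighter.
            brightness = rng.normal(0.7 - 0.4 * labels, 0.1, size=n)
        else:
            # Unbiased field conditions: brightness unrelated to species.
            brightness = rng.normal(0.5, 0.2, size=n)
        return brightness.reshape(-1, 1), labels

    X_biased, y_biased = sample(500, biased=True)
    X_field, y_field = sample(500, biased=False)

    model = LogisticRegression().fit(X_biased, y_biased)
    print("accuracy on biased photos:", round(model.score(X_biased, y_biased), 2))
    print("accuracy in the field:    ", round(model.score(X_field, y_field), 2))

The first number looks excellent; the second collapses to roughly a coin flip, because the only signal the model ever learned was the bias in how the photos were taken.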

The ethical question then is, should machine learning algorithms be held responsible for perpetuating bias?

This topic is subject to an ongoing debate, and has drawn some popular attention.

COMPAS

Earlier this year, ProPublica published an article titled "Machine Bias," discussing how machine learning, in the form of Northpointe's COMPAS risk-assessment tool, was being used to forecast the likelihood of a criminal being a repeat offender. Shockingly, the authors note not only that there have been insufficient studies of the accuracy of such risk assessments, especially with respect to racial discrimination, but also that the scores were clearly skewed: black defendants were far more likely than white defendants to be incorrectly flagged as likely reoffenders, while white defendants were more often mislabeled as low risk.

It is difficult to grasp the ethical weight of such risk assessments. Many of these learning models are opaque black boxes. How can one claim that a learning algorithm is itself racist or discriminatory without knowing how it works?

Another interesting ethical point is the notion of granularity. For example, Aristotelian ethics frames subjects of ethical review in the context of means. Yet this framework breaks down when risk assessment is applied on an individual basis: one could claim that it is strictly unfair to the individual to base an ethical review of a single risk assessment on the spectrum of a machine's prior decisions.

Furthermore, we saw above a simple example of how bias can be propagated from as early as the data collection process. How can we separate out bias in the data from bias in the algorithm?

Fortunately, there is an ongoing academic effort to develop learning frameworks whose fairness can be quantified. For example, Joseph et al. have proposed a machine learning framework that satisfies a parameterized fairness constraint. Zliobaite, among many others, has also surveyed a variety of techniques that may be used to detect hidden, structural, and other types of bias in learning models.
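To give a flavor of what "quantifying fairness" can mean in practice (this is not the construction of Joseph et al., just a simple audit of the kind Zliobaite surveys), here is a sketch that computes two commonly used numbers, per-group false positive rates and per-group rates of being labeled high risk, from hypothetical predictions.

    # Sketch of two simple fairness audits over a model's decisions.
    # The records below are made up for illustration; a real audit would
    # use the model's actual predictions and observed outcomes.
    from collections import defaultdict

    # (group, predicted_high_risk, actually_reoffended)
    records = [
        ("A", 1, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
        ("B", 0, 0), ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
    ]

    stats = defaultdict(lambda: {"fp": 0, "neg": 0, "flagged": 0, "n": 0})
    for group, pred, outcome in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += pred
        if outcome == 0:          # did not reoffend...
            s["neg"] += 1
            s["fp"] += pred       # ...but was flagged high risk anyway

    for group, s in sorted(stats.items()):
        fpr = s["fp"] / s["neg"] if s["neg"] else float("nan")
        rate = s["flagged"] / s["n"]
        print(f"group {group}: false positive rate {fpr:.2f}, high-risk rate {rate:.2f}")

Large gaps between groups on either number, such as the gap in false positive rates, are exactly the kind of disparity the ProPublica analysis highlighted.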

Further Issues

Professor Aaron Roth at the University of Pennsylvania has frequently discussed the dilemma of unfair machine learning algorithms. In addition to training data encoding existing biases, he also identifies several other causes of unfairness:

  • Data collection feedback loops
  • Different populations with different properties
  • Less data about minority populations

Data Collection Feedback Loops

Suppose you build a credit-loan service that uses machine learning to determine how trustworthy a customer is. That is, instead of using FICO scores, one might use machine learning and a select number of features to predict whether a customer is likely to default on a loan. The only way for the algorithm to collect new data, however, is by giving out loans and observing whether its initial predictions were correct. This service therefore inherently favors those it already considers to be good customers: "bad" customers who might actually be fully capable of paying back their loans are perpetually rejected and never given the chance to prove themselves.
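A toy simulation of that feedback loop, with all numbers invented for illustration: because the lender only observes repayment for applicants it approves, an initially pessimistic score for one group is never corrected.

    # Toy simulation of a data-collection feedback loop in lending.
    import random

    random.seed(0)

    TRUE_REPAY_RATE = {"group_x": 0.8, "group_y": 0.8}  # equally creditworthy
    score = {"group_x": 0.7, "group_y": 0.4}            # the model starts out biased
    THRESHOLD = 0.5
    LEARNING_RATE = 0.05

    for _ in range(1000):
        group = random.choice(["group_x", "group_y"])
        if score[group] < THRESHOLD:
            continue  # loan denied: no repayment outcome is ever observed
        repaid = random.random() < TRUE_REPAY_RATE[group]
        # Update the score only from observed outcomes (approved loans).
        score[group] += LEARNING_RATE * ((1.0 if repaid else 0.0) - score[group])

    print(score)

group_x's score drifts toward its true 0.8 repayment rate, while group_y stays frozen at 0.4: it is never approved, so the model never collects the data that would correct its initial mistake.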

Minority Populations and Different Populations with Different Properties

As mentioned in the ProPublica article, to date there has been insufficient analysis of the success of machine learning in the criminal justice system and of its potential bias. In many cases of discrimination in risk assessments, it is minority populations who are the victims. Insufficient data is collected on misclassifications involving minorities, resulting in models that are biased simply due to lack of information. However, with more light being shed on the topic, there is an ongoing effort to collect more data and to rigorously probe risk assessments for potential discrimination (see Goel et al., Combatting Police Discrimination in the Age of Big Data).
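One way to see the "less data" problem concretely, again with synthetic numbers and assuming scikit-learn: when one group is only a small fraction of the training data, a single model's error tends to concentrate on that group.

    # Sketch: a model trained mostly on the majority group fits the
    # majority's pattern well and the minority's poorly. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def make_group(n, shift):
        """Each group has its own relationship between the feature and the label."""
        x = rng.normal(0, 1, size=(n, 1))
        y = (x[:, 0] + shift > 0).astype(int)
        return x, y

    # The majority group is 95% of the training data, the minority only 5%.
    X_maj, y_maj = make_group(950, shift=0.0)
    X_min, y_min = make_group(50, shift=1.5)

    X = np.vstack([X_maj, X_min])
    y = np.concatenate([y_maj, y_min])
    group = np.array(["majority"] * 950 + ["minority"] * 50)

    model = LogisticRegression().fit(X, y)
    for g in ["majority", "minority"]:
        mask = group == g
        print(g, "accuracy:", round(model.score(X[mask], y[mask]), 2))

The single model matches the majority's pattern closely and the minority's poorly, not out of malice, but because the minority contributes almost nothing to the training objective.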

Conclusion

Overall, despite the immense progress made in machine learning, the daunting ethical challenge of identifying and mitigating bias and discrimination in learning algorithms remains. The use of machine learning in the criminal justice system and in other industries has sparked important discussion of hidden racial bias, and there remain many open questions and challenges around how data is collected and analyzed. However, there is an active effort led by top companies such as Google and Microsoft, as well as academic researchers, to resolve these ethical dilemmas, and progress will continue to be made.

Citations

  1. Kovacs, Tim, and Andy J. Wills. "Generalization Versus Discrimination in Machine Learning." Encyclopedia of the Sciences of Learning. Ed. Norbert M. Seel. 2012 ed. Vol. 2. Berlin: Springer, 2012. 1352-355. Print.

  2. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks." ProPublica. ProPublica, 23 May 2016. Web. 03 Nov. 2016.

  3. Goel, Sharad et al. “Combatting Police Discrimination in the Age of Big Data.” New Criminal Law Review, 2016.

  4. Joseph, Matthew et al. “Rawlsian Fairness for Machine Learning.” 2 Nov. 2016, arxiv.org/pdf/1610.09559v2.pdf.

  5. Zliobaite, Indre. "A Survey on Measuring Indirect Discrimination in Machine Learning." (2015): 1-21. 31 Oct. 2015. Web. 3 Nov. 2016.

  6. What Is Machine Learning? And Why Might It Be Unfair? Perf. Aaron Roth. University of Pennsylvania Law School, n.d. Web. 29 Nov. 2016.