
Program and Prejudice: On algorithmic bias

🕒 9 min

We rely on computers to make all sorts of decisions these days: for the most part, they decide faster and better than humans. Whether you swipe a card in a store, open Netflix to look at its recommendations or search your computer for that one file you just can’t find, a computer looks at the available data, runs an algorithm and gives you a result. That data can be your account balance, your past viewing activity or an index of your files, but it can also be something much more important. As we grow to trust computers with more and more tasks that humans have traditionally done, how can we make sure they make the right decisions? Are they making good decisions right now? Or are they actually amplifying sexism and racism?

To understand these questions better, here’s a crash course in how computers think. If you’re already familiar with algorithms and machine learning, feel free to skip to the next section.

Traditional algorithms vs. machine learning

Computers aren’t especially smart machines on their own. At the processor level, they can only execute pretty simple instructions: mostly basic computations (such as adding two numbers or evaluating sin(x)) and storage instructions (such as “put this number at that place in memory”). As anybody who took Computer Architecture at university knows, this made writing programs for them very, very tedious. So we came up with programming languages to make things easier. However, programming languages – while giving us “higher-level logic” to work with, and thus making programs easier to write – didn’t change the nature of programming that much: we were still giving computers a series of instructions to perform on the data they were given – an algorithm.
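
To make that concrete, here’s a minimal sketch of a traditional algorithm (in Python, with made-up numbers): every rule is an explicit instruction a human wrote down.

# A traditional algorithm: every step is spelled out by a human.
def can_afford(balance, price):
    # Explicit instruction: compare two numbers and return the result.
    return balance >= price

cart = [12.99, 4.50, 30.00]        # made-up prices
balance = 40.00                    # made-up account balance

total = sum(cart)                  # add the numbers up
print(can_afford(balance, total))  # False: the cart costs 47.49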

As computers evolved, we – true to human nature – expected more of them. However, some problems that are really easy for us – such as recognizing objects in a picture, driving a car or understanding human language – turned out to be hard to solve. Like, really hard. Like, almost impossible to solve accurately with the programming languages we had, hard.

To solve these problems, we took an idea from science fiction and turned to artificial intelligence, thinking about how to teach machines to learn instead of teaching them to do a particular task. I know that sounds very abstract, but think of it as the difference between giving them a fish and teaching them how to fish.

One of the more prominent subsets of artificial intelligence is machine learning, which Wikipedia defines as “the study of algorithms that improve automatically through experience”. It’s a large field with multiple approaches, but in a nutshell, it uses special algorithms (machine learning algorithms) that build mathematical models out of sample (training) data. Without getting too deep into machine learning, this means we don’t have much say in how the resulting models operate, apart from working on the algorithms that generate them and on the training data.
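
Here’s what that looks like in practice – a minimal sketch, assuming the scikit-learn library is available and using invented numbers. Notice that we never write the decision rule ourselves; we only pick a learning algorithm and hand it training data.

# A machine-learning sketch (assumes scikit-learn; the data is invented).
from sklearn.linear_model import LogisticRegression

# Training data: [monthly income, existing debt] (in thousands) -> loan repaid (1) or not (0)
X_train = [[3.0, 0.2], [1.2, 0.9], [4.5, 0.1], [0.8, 1.5], [2.5, 0.3], [1.0, 1.2]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # the "learning" step: the model is built from data

print(model.predict([[2.0, 0.4]]))   # the model decides; we never wrote that rule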

So, how do we even know that there is bias in algorithms and how can we fix it?


The trouble with detecting bias in algorithms

Of course, when we look at the code of a traditional computer program, we can usually detect bias by just looking:

if (cat.hasHair == false) cat.price = cat.price * 20;

Biases like this can be caught just by reading the code, and their (biased) behaviour is usually predictable, so detecting bias in code written by a human is relatively easy.

However, models built by computers – which is what machine learning algorithms produce – aren’t really readable to a human: they’re a sort of black box. We’re now faced with the task of determining whether an algorithm we can’t really wrap our heads around is biased.

In some cases, biases seem obvious but aren’t simple to prove: Apple Card’s algorithms gave lower credit limits to women (which David Heinemeier Hansson described, quite vulgarly, in a popular tweet last year). However, the training data didn’t include gender as a variable, and as Apple and Goldman Sachs, their partner bank, tried to explain the situation, it became clear that things are less black and white, with no clear (or at least public) consensus on whether the algorithm is, or was, biased. (I’ll keep my own verdict to myself, as I’m really not fond of angering any party here, but there’s much more to gender bias than looking at gender: many variables correlate with a person’s gender – such as the retailers they shop at, which is certainly a factor in any credit scoring algorithm – and there’s a significant chance that some of those variables pushed the algorithm’s decisions towards bias.)
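
To see how that can happen, here’s a hypothetical sketch with entirely synthetic data (this is not Apple’s or Goldman Sachs’ model; the “retailer” feature and all the numbers are invented, and scikit-learn is assumed). The model never sees gender, yet because one feature correlates with it, the outputs still split along gender lines.

# Synthetic illustration of a proxy variable (all data invented).
import random
from sklearn.linear_model import LinearRegression

random.seed(0)
X, y, genders = [], [], []
for _ in range(1000):
    gender = random.choice(["f", "m"])
    # Invented proxy: shopping at retailer X correlates strongly with gender.
    shops_at_x = 1 if random.random() < (0.8 if gender == "f" else 0.2) else 0
    income = random.gauss(3000, 500)
    # Invented historical limits that penalized the proxy, not gender itself.
    limit = 5 * income - 4000 * shops_at_x + random.gauss(0, 500)
    X.append([income, shops_at_x])   # note: gender is NOT a feature
    y.append(limit)
    genders.append(gender)

model = LinearRegression().fit(X, y)
predictions = model.predict(X)

for g in ("f", "m"):
    avg = sum(p for p, gg in zip(predictions, genders) if gg == g) / genders.count(g)
    print(g, round(avg))   # the average predicted limits differ by gender anyway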

In other cases, they really are obvious: Amazon’s hiring algorithm discriminated against women by penalizing résumés containing the word “women’s”, ultimately leading to the project being scrapped after Amazon struggled to make it both fair and useful at the same time – the last iteration was less discriminatory, but returned seemingly random decisions. Google’s Photos app labelled photos of black people as gorillas, and Google responded by blocking the ‘gorilla’ label entirely until it had enough time to fix the issue.

Companies claim their algorithms can tell if you’re a terrorist, an academic researcher or an extrovert just by looking at your face. Researchers claim they can guess your sexual orientation from your facial features. Seemingly, their algorithms have decent accuracy – somewhere around 80 to 90 percent. How? Isn’t this just physiognomy on steroids?

Physiognomy is the practice of assessing a person’s character or personality from their outer appearance — especially the face.

– Wikipedia

Where does bias originate from?

To understand how we can detect bias, we need to find out how an algorithm becomes biased.

There are three basic types of algorithmic bias: pre-existing, technical and emerging.

Technical biases tend to be the easiest to spot and fix: they emerge from the limitations of an algorithm, its computational power or other constraints on the system. For instance, an algorithm that sorts candidate names alphabetically is biased in favour of those whose names begin with A. Turnitin, a plagiarism detection tool, was found to be biased against non-native English speakers: when both groups plagiarized, native speakers were better at changing up words and sentence structure, so they triggered the detector less often.
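
The alphabetical example is easy to reproduce – here’s a tiny sketch (the names are invented):

# Shortlisting "the first five" of an alphabetically sorted list is a technical bias:
# it systematically favours names early in the alphabet.
candidates = ["Zhang", "Williams", "Abad", "Novak", "O'Brien", "Adams", "Yilmaz", "Baker"]
shortlist = sorted(candidates)[:5]
print(shortlist)   # ['Abad', 'Adams', 'Baker', 'Novak', "O'Brien"] - Yilmaz and Zhang never stood a chance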

Pre-existing biases emerge when algorithms are developed on top of biased assumptions or trained on biased data – in effect, coding human biases into algorithms. Social and institutional biases spill over into code and can influence the algorithm’s decisions down the road, especially if it takes its own previous decisions into account.

Emerging biases happen when the algorithm doesn’t change, but its context does. Perhaps cultural norms have shifted while the algorithm wasn’t adapted to consider new forms of knowledge. Perhaps the data it was trained on is wildly different from the real-world data it has to evaluate. Perhaps it gets stuck in a feedback loop – something that can happen to predictive policing algorithms, amplifying the very racial bias they were supposed to reduce. When scientists built an algorithm that learns word associations from the Internet, it learned prejudice along the way.
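
A toy simulation (with entirely invented numbers) shows how such a feedback loop can lock in an early imbalance: patrols are sent where past records are highest, and only patrolled areas produce new records, so the records keep confirming the allocation.

# Two neighbourhoods with the SAME underlying incident rate (all numbers invented).
recorded = {"north": 15.0, "south": 5.0}   # an essentially arbitrary head start
true_rate = 0.3                            # identical in both neighbourhoods

for week in range(20):
    total = sum(recorded.values())
    for n in recorded:
        patrols = 10 * recorded[n] / total   # allocate 10 patrols by past records
        recorded[n] += patrols * true_rate   # only patrolled incidents get recorded

print({n: round(v) for n, v in recorded.items()})
# {'north': 60, 'south': 20} - the 3:1 gap in the records persists and grows,
# even though both neighbourhoods are identical in reality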


Detecting, preventing and fixing bias

As you’ve seen in this article, we aren’t really good at determining whether an algorithm is truly biased. Sometimes bias is easy to spot but difficult to fix, as in the Google Photos case. Other times it’s difficult to even wrap your head around, as in the “guessing if you’re a terrorist” case, which mostly makes you think “what the…”

In a Brookings Institution paper based on the insights of 40 experts (who are called “thought leaders” in the paper, in very 1984-esque phrasing), researchers suggest numerous ways of combatting bias in algorithms, but no clear answer on how researchers can detect and prevent bias easily. That’s understandable: building fair algorithms is a challenging task.

Of course, we should do our best to fix these issues, building algorithms we can trust to make good decisions. However, this is something algorithm developers and operators have to work on continuously, not just at a single point in time.

Building fair algorithms starts with examining bias in the training data: making sure it’s diverse and free of historical human bias. Sensitive attributes in the training data (such as race, gender or sexual orientation) should also be handled carefully.

However, this can be difficult, because blinding an algorithm to a certain variable is often inadequate: gender can be extrapolated from your name, for instance, and in large cities some neighbourhoods are predominantly populated by a certain race, which makes the neighbourhood you live in a good predictor of your race. Blinding can even introduce additional bias (by forcing the algorithm to ignore genuinely relevant, but sensitive, variables). All of this makes building algorithms that are fair towards traditionally marginalized groups much more complicated, but it’s an unavoidable step in algorithm design. Having a diverse team of developers also seems to help build fair(er) algorithms.
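
One practical check – sketched below with synthetic data and invented feature names, again assuming scikit-learn – is to test whether the supposedly non-sensitive features can predict the sensitive attribute at all. If a simple model can do so well above chance, those features are acting as proxies.

# Proxy check: can the "removed" sensitive attribute be predicted from the rest?
import random
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

random.seed(0)
X, sensitive = [], []
for _ in range(2000):
    group = random.randint(0, 1)
    # Invented features: neighbourhood index correlates with group, income doesn't.
    neighbourhood = random.gauss(5 + 3 * group, 1)
    income = random.gauss(3.0, 0.8)   # in thousands
    X.append([neighbourhood, income])
    sensitive.append(group)

X_tr, X_te, s_tr, s_te = train_test_split(X, sensitive, random_state=0)
leak_test = LogisticRegression().fit(X_tr, s_tr)
print(round(leak_test.score(X_te, s_te), 2))   # well above 0.5: the features leak the group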

Ensuring fairness continues after an algorithm is trained. Developers should examine the set of results for anomalies: for instance, by comparing outcomes between different groups of people, which can even be done in simulations. This continues to be a task even after the algorithm rolls out into production and starts to get used in the real world.
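
At its simplest, such an audit can be a comparison of how often each group receives the favourable outcome – sketched below with made-up decisions. (A common rule of thumb treats selection-rate ratios below roughly 0.8 as a red flag, the so-called four-fifths rule.)

# A simple outcome audit (decisions and groups are invented): compare selection rates.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0]   # 1 = approved
groups    = ["a", "a", "a", "a", "a", "a", "a", "a",
             "b", "b", "b", "b", "b", "b", "b", "b"]

rates = {}
for g in sorted(set(groups)):
    picked = [d for d, gg in zip(decisions, groups) if gg == g]
    rates[g] = sum(picked) / len(picked)

print(rates)                                       # {'a': 0.5, 'b': 0.25} with these numbers
print(min(rates.values()) / max(rates.values()))   # 0.5 - a gap that deserves a closer look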

However, a new question arises with this approach: are unequal outcomes unfair? Of course, Jeff Bezos has a higher credit score than I do: this is an unequal outcome, but not really unfair.

It seems that some fairness and accuracy tradeoffs are inherent in every kind of decision making, algorithmic or not. So, how do we set standards regarding what kind of discrimination is okay?

In the case of human biases, these questions are usually regulated through laws that forbid the use of certain data in decision making, but we’ve seen how difficult this can get: algorithms can discriminate much more subtly, and very differently from humans. AI policy thinkers suggest building ethical frameworks around algorithmic decision making that would try to ensure algorithms avoid “unfair discrimination”.

The European Union has already built one such framework, the Ethics Guidelines for Trustworthy AI, which lists seven requirements an AI system must take into account to be considered trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

This seems like a great starting point, but it leaves us with one big question: how do we measure fairness? There’s no intuitive answer, or even a hint of consensus, on this topic. Some researchers suggest asking: “Will we leave some groups of people worse off as a result of the algorithm’s design or its unintended consequences?” Other recommendations include bias impact statements – essentially questionnaires designed to guide algorithm developers through the design, implementation and monitoring of their algorithms.

In the end, this discussion leaves us with more questions than answers. Sure, we have to design training data properly and analyze our algorithms, but without additional human involvement (particularly from humans trained to spot their own biases), developer self-regulation, public policy and, above all, transparency, our algorithms will end up at least as biased as we are.


References and additional reading:

https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/

https://en.wikipedia.org/wiki/Algorithmic_bias

https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/

https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

https://towardsdatascience.com/algorithm-bias-in-artificial-intelligence-needs-to-be-discussed-and-addressed-8d369d675a70

https://en.wikipedia.org/wiki/Machine_learning

https://en.wikipedia.org/wiki/Artificial_intelligence

https://en.wikipedia.org/wiki/Physiognomy

https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a

By Mario Borna Mjertan

Mario Borna Mjertan is a student of mathematics at the Department of Mathematics, Faculty of Science, University of Zagreb. He serves as Project Director for Znanstvenik u meni! and actively works on science popularisation projects such as ZUM, S3++ and other projects.
