What is Unconscious Bias? How Does it Affect You?
May 04, 2021 | 4 minute read
Unconscious bias is defined as social stereotypes about certain groups of people or demographics that are formed outside of one’s personal conscious awareness. To give a common example, a hiring manager may assume that a younger candidate is more likely to fulfill their job duties when compared to an older one, despite the fact that age has nothing to do with one’s ability to effectively participate in the workforce.
Another prevalent example is confirmation bias, in which a person seeks out opinions and validation from others who already agree with their viewpoints and rejects any information to the contrary, regardless of how true or applicable it may be. The crux of the issue is that it is often extremely difficult for people to recognize, much less take steps to address, their own unconscious biases.
A large reason for this is that people who hold a bias toward another person or group, whether conscious or unconscious, often pass it on to their family members and friends. Moreover, many prejudices that permeate society today, particularly those involving race and gender, have become so ingrained in the minds of many people that it is difficult for them to see any alternative to their beliefs.
How does unconscious bias affect me in the context of machine learning?
Because the computer algorithms that enable machine learning are created by people, their implicit biases can seep into the programming. As deep-learning algorithms are now being used to make life-altering decisions, such as hiring and firing employees or making determinations in the criminal justice system, it is important that these methods be as bias-free as possible. Machine learning does offer some advantages in the context of unconscious bias. For example, machine learning systems will disregard variables that do not accurately predict outcomes, in stark contrast to human beings, who can misrepresent their reasoning at any moment regardless of their training. Additionally, it is far easier to probe an AI system for bias once it is suspected than it is to identify bias in human beings, as the sketch below illustrates.
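To make the idea of "probing" concrete, here is a minimal, hypothetical sketch: a toy classifier is trained on made-up applicant data, and its predicted hiring rates are then compared across demographic groups. The dataset, column names, and model choice are all assumptions for illustration, not a description of any real system.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical toy applicant data with a protected attribute ("group").
data = pd.DataFrame({
    "years_experience": [1, 3, 5, 7, 2, 4, 6, 8],
    "group":            ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired":            [0, 0, 1, 1, 0, 1, 1, 1],
})

# Train on the predictive feature only; the protected attribute is excluded.
model = LogisticRegression().fit(data[["years_experience"]], data["hired"])

# Probe the trained model: compare predicted hiring rates across groups.
data["predicted"] = model.predict(data[["years_experience"]])
print(data.groupby("group")["predicted"].mean())
```

A large gap between the per-group rates would be a signal worth investigating, which is exactly the kind of inspection that is hard to run on a human decision-maker.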
Alternatively, machine learning can actually amplify unconscious biases in some situations. For example, in her keynote presentation "Analyzing & Preventing Unconscious Bias in Machine Learning," Rachel Thomas described several instances of bias in algorithms. To illustrate, research cited in the presentation found that an algorithm reading photos of people cooking labeled 84% of the people as women, while in reality only 67% were women.
Another example is the content suggested to online viewers after watching a clip or video. Zeynep Tufekci, who conducts research on the intersection between society and technology, noted that disinformation and propaganda videos are often suggested to viewers who were watching content about a completely different subject, such as sports or entertainment.
What’s more, even if bias were completely removed at the outset of model development, it can still be present in word embeddings. Because word embedding models such as Google’s Word2Vec are trained on human-generated text, human biases can seep into the word associations they learn. Embeddings like Word2Vec place similar words closer together in vector space than dissimilar words. Research into this model found that the distance between the words “man” and “genius” was significantly smaller than the distance between the words “woman” and “genius.” A small sketch of how such distances can be measured follows.
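The following sketch assumes the gensim library and its downloadable "word2vec-google-news-300" vectors; the exact numbers it prints depend on that model, so treat it as an illustration of the technique rather than a reproduction of the cited research.

```python
import gensim.downloader as api

# Download and load pretrained Word2Vec vectors (a large download on first use).
model = api.load("word2vec-google-news-300")

# Cosine similarity: a higher value means the two words sit closer together
# in the embedding space, i.e. the distance between them is smaller.
print(model.similarity("man", "genius"))
print(model.similarity("woman", "genius"))
```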
What can be done to combat unconscious bias in machine learning?
The first step to combating unconscious bias in machine learning is to identify bias in the data being fed to the system. The algorithms that machine learning is predicated upon can only function based on the data they are given. If a data set has marginalized a certain demographic of people, that will be made apparent through machine learning as well; a simple audit of this kind is sketched below. Another angle is to analyze the processes designed to catch unconscious bias in machine learning and determine whether they are practical and effective enough to actually reduce it.
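As a minimal illustration of such a data audit, the sketch below checks two things in a hypothetical training set: how well each group is represented, and how often each group receives the positive label. The file name and column names are invented for the example.

```python
import pandas as pd

# Hypothetical training data for a hiring model; the file and columns
# ("gender", "hired") are placeholders for illustration only.
df = pd.read_csv("applicants.csv")

# 1. Representation: what share of the examples comes from each group?
print(df["gender"].value_counts(normalize=True))

# 2. Base rates: how often does each group carry the positive label?
print(df.groupby("gender")["hired"].mean())
```

Skewed representation or sharply different base rates do not prove the data is unusable, but they flag exactly the places where a model is likely to learn and reproduce a marginalizing pattern.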
Finally, diversity in the teams that build machine learning models is ultimately needed. It is very difficult for someone to adequately represent a social group or demographic outside of their own, and to truly tackle bias in machine learning we must ensure that people of all races, backgrounds, sexual orientations, and genders have a say in the process.