Design | A brief history of how racism manifests itself in design and how we can learn from it.

The technology-related design industry is far-reaching and all-encompassing. At its broadest interpretation, it includes everything from interface designers working on singular features to vice presidents of product design overseeing an entire fleet of handheld devices. In recent years, the design industry has risen to prominence for the role it has played in creating widely used technology products such as iOS devices (e.g. the iPhone 11) and conversational user interfaces (e.g. the Amazon Echo Dot).
However, since its inception, user-centered design has paved the way for racial biases to manifest themselves in consumer products, and designers have become beholden to practices that allow such manifestations. We saw it in the 1970s when Kodak calibrated its film to favor white skin tones, and we still see it today, with the technology sector being no exception. Let’s take a look at some more recent examples.
Google’s Gorilla Mishap
Let’s jump back to July 2015. Computer programmer Jacky Alcine logged on to Google Photos to discover a new album labelled “gorillas”, and upon clicking into it found pictures of himself and his friend. Yes, Google Photos had mistakenly identified and labelled two African Americans as “gorillas”. Upon posting the shocking incident on Twitter, Alcine received an immediate response from Google’s Chief Social Architect: “Holy f**k. G+CA here. No, this is not how you determine someone’s target market. This is 100% Not OK” (source). Despite the immediate attempts to control the PR fallout, the truth remained that Google had, to some degree, been complicit in systemic racism. Three years later, in 2018, Google’s solution had been simply to block its image recognition algorithm from detecting gorillas altogether (source). Although it may have been unintentional, equating two black people with gorillas raised serious questions about the design and engineering processes Google was employing.

Microsoft’s Racist Twitter AI Chatbot
Now let’s jump a little later, to March 2016. Microsoft’s Technology and Research division had just excitedly released its new bot, Tay, meant to mimic the language of a 19-year-old girl. Within 24 hours, Tay started tweeting things like “I f***ing hate n***ers”. It turned out that internet trolls had exploited the machine learning algorithms Tay used to learn human language by spamming it with racist and offensive tweets. While the poor outcome of the project could be blamed on the high level of toxicity present on social media, it’s vital that we look at the role played by the people who created Tay. At some point, they made a dire miscalculation about how their intended users would interact with their product. They didn’t ask themselves “Can Tay’s messages hurt real users?” or, even more specifically, “Could our product beget racist and offensive behavior?” The malfunction of the bot did not occur simply because Twitter users decided to misguide the algorithms in play. It occurred, in part, because the people who created Tay did not account for the fact that racism was a macro factor present on social media, one that could be used to manipulate the machine learning algorithms.

Racialized Code in Facial Recognition
A prominent example of facial recognition software gone wrong is Amazon’s “Rekognition”. In July 2018, the ACLU conducted a test of Amazon’s software in which it searched a database of 25,000 arrest photos against every current member of Congress. The test resulted in 28 incorrect matches, with 40% of those incorrect matches being members of Congress who were POC. However, only about 20% of Congress is POC in the first place, indicating that the facial recognition software falsely matched people of color to arrest photos at a higher rate. A similar study conducted by MIT researchers in 2019 reached the same conclusions about Rekognition: while the software had little to no problem accurately matching light-skinned men, it misidentified darker-skinned women about 30% of the time (source). When asked about these studies, Amazon has claimed that its internal testing produced no such results, even though the findings have been widely replicated. To top it all off, Amazon is actively promoting the software for use by law enforcement. Yes, Amazon is marketing software that disproportionately misidentifies people of color to police departments across the country. Once again, consumer-facing products seem to be failing people of color.
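To make the disparity concrete, here is a rough back-of-the-envelope sketch in Python that uses only the figures cited above (28 false matches, roughly 40% of them people of color, a Congress that is roughly 20% people of color); the exact counts depend on how those percentages are rounded.

```python
# Back-of-the-envelope check of the disparity described above, using only the
# figures already cited in this article: 28 false matches, ~40% of them people
# of color, against a Congress that is ~20% people of color.
total_false_matches = 28
poc_share_of_false_matches = 0.40   # reported share of false matches involving POC
poc_share_of_congress = 0.20        # approximate share of Congress that is POC

poc_false_matches = total_false_matches * poc_share_of_false_matches

# If false matches were spread proportionally, only ~20% of them should involve
# members of color; the observed share is about double that.
overrepresentation = poc_share_of_false_matches / poc_share_of_congress

print(f"False matches involving POC: {poc_false_matches:.0f} of {total_false_matches}")
print(f"Overrepresentation factor: {overrepresentation:.1f}x")
```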
Joy Buolamwini, a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League, says that facial recognition software is subject to unconscious biases. She points out that code for facial recognition software is mainly written by white software engineers who work off of code written by other white software engineers (source). That code is required to meet standards set by mostly white managers, perpetuating a workforce cycle that lacks diversity. Unconscious biases prompt the people working on these products to create code that is better at recognizing features more prominent in white faces, and ultimately to produce a product that is better at identifying faces with lighter skin tones.
Racially Divided Speech Recognition Systems
Now let’s skip ahead to just a few months ago, March 2020. At a time when conversational user interfaces were (and are) becoming all the rage, Stanford University published a study on the racial divide in speech recognition systems. The researchers tested systems from five big tech companies and found that they correctly identify words spoken by white users at a much higher rate than words spoken by black users. These speech recognition systems misidentified 19% of words spoken by white people, and 2% of the audio snippets from white people were classified as unreadable. Those figures jump to 35% and 20%, respectively, for black users (source).
Simply put, these speech recognition systems were clearly working better for white users than for black users. More importantly, the user-centered design processes employed to create them had failed to cater to the needs of ALL of the target users.
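For readers who want the gap spelled out, here is a small Python sketch that restates the Stanford figures quoted above and computes the relative differences; the percentages are the ones cited in this article, not a re-analysis of the study’s data.

```python
# Restating the figures cited above to make the gap between groups explicit.
word_error_rate = {"white speakers": 0.19, "black speakers": 0.35}
unreadable_rate = {"white speakers": 0.02, "black speakers": 0.20}

for group in word_error_rate:
    print(f"{group}: {word_error_rate[group]:.0%} of words misidentified, "
          f"{unreadable_rate[group]:.0%} of audio snippets unreadable")

# Relative gaps: black speakers face nearly double the word error rate and
# roughly ten times the rate of unreadable audio.
print(f"Word error ratio: "
      f"{word_error_rate['black speakers'] / word_error_rate['white speakers']:.1f}x")
print(f"Unreadable audio ratio: "
      f"{unreadable_rate['black speakers'] / unreadable_rate['white speakers']:.0f}x")
```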
So essentially …
Time and time again, modern tech has failed to meet the standards of equality and diversity that we pride ourselves on so highly. Oftentimes, racially biased products are created unintentionally, as a by-product of years and years of systemic racism and of practices influenced by inherent biases, and that is exactly the problem. It’s a big problem that we, as designers and more broadly as “design thinkers”, need to start addressing more aggressively.
How might we do that?
Increase workforce diversity
Since 2014, the percentage of Facebook employees who are black has gone up from 2% to 3.8%, a mere 1.8 percentage points in five years (source). Similarly, the percentage of Google employees who are black has gone up from 2.4% to 3.3%, less than one percentage point (source). According to the Design Census, black designers made up a mere 3% of the industry in 2019. In fact, across big tech there have been little to no gains for black employees in the last five years (source).
One of the most powerful ways to combat systemic racism is to change the system from within, and that means recruiting and retaining more black employees, not just in big tech but across the whole industry. We need more black voices, perspectives, and talent to create products that cater to a diverse and inclusive range of user needs.
Design for accessibility
Especially when designing consumer-facing products, we need to think more deeply about what we mean when we say “design for accessibility”. Typically we think of people who are hard of hearing, colorblind, elderly, and so on, and whether they would be able to use the product easily. What we often overlook, however, is whether people of all races would be able to use our product and achieve the same degree of success. The speech recognition systems mentioned earlier are a perfect example of this: black people aren’t able to achieve the same level of success with the product that white people are. Moreover, it’s not just about creating a product that black people can use successfully, but about creating an atmosphere where they feel comfortable and valued as individual users. That certainly wasn’t the case with Microsoft’s racist Twitter bot tweeting out the n-word or Google Photos labelling black people as gorillas. In the most extreme cases, unchecked racial biases in tech products can even destroy lives, as with Amazon’s facial recognition software potentially helping to wrongly arrest people of color. Whether it’s a chatbot or facial recognition, the effects of racial bias can range from reduced usability to the wrongful imprisonment of innocent people.
How do we fix this? We should always consider how the different cultures of POC might change the way they interact with products. The first step is to bring the conversation to the table and ask those questions during whiteboarding sessions, brainstorming activities, and design roundtables. When giving feedback to fellow designers on your team, prompt them to think about race as a factor; flat out ask them, “How do you think a black person’s experience might differ when using this?” Although these tactics might seem simple, they’re not being used as often as they should be, and as a result, tech products often end up with racial bias built into their code.
Conduct research and testing with inclusion in mind
We all know how difficult it is to conduct user research and user testing, especially with time and resource constraints. However, it is key to building inclusive products. You won’t know whether your product has the tendency to be exclusionary unless you test it with a diverse range of participants. For example, if Google had done extensive user testing with more African American participants for its Google Photos categorization feature, maybe it would have discovered the problem before the feature shipped.
What are some ways we can make user research and user testing more diverse? Let’s aim high and get creative. When embarking on a user research journey, set benchmarks for yourself and/or your team for what you consider a diverse range of participants, and strive to meet those benchmarks, as sketched below. If your company doesn’t have the resources for external recruiting, conduct in-house testing with POC colleagues. There are lots of ways to test consumer-facing products with POC, and although they might not yield perfect results, they’re a step in the right direction.
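As one illustration of the benchmark idea mentioned above, here is a minimal Python sketch of how a team might track its recruiting against self-chosen targets; the benchmark values, group labels, and Participant structure are hypothetical, not drawn from any real study or toolkit.

```python
# A minimal sketch of tracking user-testing recruitment against self-chosen
# diversity benchmarks. The benchmark values and group labels are hypothetical
# placeholders; a real team would set its own based on its user base.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Participant:
    participant_id: str
    self_reported_race: str

# Hypothetical benchmarks a team might commit to before recruiting begins.
BENCHMARKS = {"Black": 0.20, "Latino": 0.20, "Asian": 0.15, "White": 0.45}

def recruitment_gaps(participants):
    """Return how far each group's share of the panel falls below its benchmark."""
    counts = Counter(p.self_reported_race for p in participants)
    total = max(len(participants), 1)
    return {group: target - counts.get(group, 0) / total
            for group, target in BENCHMARKS.items()}

panel = [Participant("p1", "White"), Participant("p2", "White"), Participant("p3", "Black")]
for group, gap in recruitment_gaps(panel).items():
    if gap > 0:
        print(f"Under benchmark for {group} participants by {gap:.0%}")
```

Something this simple, run before each round of testing, is enough to flag when a participant panel is drifting away from the diversity the team committed to.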
Strive for empathy
Last but not least, ask the uncomfortable questions. Working in a primarily non-black industry, many of us live in a bubble where race isn’t usually a factor. That shouldn’t stop us from learning about what it means to be black in today’s America, how black people interact with technology, and how we can better design consumer-facing products to suit their needs too. At work, at home, and even with friends, we need to start having conversations that break down the wall between how we experience the world and how black people experience the world.
We need to begin by recruiting a more diverse workforce and building a better understanding of the black experience in America, and then we need to start designing for it, not as an afterthought but as a core part of the process.
Some books and resources from others writing and studying similar topics:
Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin
Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble
White Fragility: Why It’s So Hard for White People to Talk About Racism by Robin DiAngelo
Erasing Institutional Bias: How to Create Systemic Change for Organizational Inclusion by Tiffany Jana & Ashley Diaz Mejias
Inclusion: Diversity, the New Workplace, and the Will to Change by Jennifer Brown