Designing Ethically with AI

Notes from Katja Forbes’ presentation at DesignUp Conference 2019, Singapore, with additional notes and research by the author

Leow Hou Teng
UX Planet

--

Image from Freepik

What happens tomorrow is designed today.

Artificial intelligence increasingly powers many real-world applications, from facial and image recognition to language translation and smart assistants. Companies are deploying AI across their operations, from frontline services to in-house support functions, to drive productivity growth and innovation. But while AI promises benefits, it also poses urgent challenges. Hence, we need to talk about designing ethically with AI.

Key statistics on AI

According to Gartner, 25 per cent of customer service operations will use virtual customer assistants by 2020 (in six months’ time!). Management consulting firm McKinsey estimates that ‘AI has the potential to deliver additional global economic activity of around $13 trillion by 2030.’ With investment in AI increasing, and its immense ability to impact humans and society, are designers already too late to get involved with AI?

Building trust with machines

“Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.”

Microsoft AI Principles

AI and machines are intertwined with our lives: we rely on machines to complete our tasks; we allow machines to make key decisions for us; we trust them with our personal data in exchange for a service from an app, website, or machine.

As our interactions with machines become invisible, the decisions they make should not be based on biased data sets that are sexist, racist, and so on. People must be able to engage with these platforms confident that the data they share will be used for the right purposes, and that they will not be judged or penalised. Being transparent about the invisible logic behind these decisions, and setting expectations about what the systems we design can and cannot do, helps foster trust in these automated systems.

Amplifying human biases

“When you call something an edge case, you’re really just defining the limits of what you care about.”

— Eric Meyer

AI and other emerging technologies can empower and include more people. For example, voice-enabled devices allow the blind to access the internet, and let a parent carrying a baby place an order for groceries. Yet, due to the limitations of present technology, AI has excluded the people at the ‘edge cases’ instead of making things accessible to them.

The present technology is described as Narrow AI–an agent, such as a chatbot, that serves humans without understanding the context. This is worrying because it means AI will only be as good as the information we feed it. A biased data set given to a machine will return a biased analysis, or worse, an amplification of those biases.
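This amplification effect can be shown with a deliberately simple toy model (an assumption for illustration, not any real production system): a classifier trained on data with a mild 70/30 skew ends up reproducing only the majority outcome, so the skew in its output is worse than the skew in its input.

```python
from collections import Counter

def train_majority_classifier(labels):
    """A toy 'model' that learns only the most common label in its training data."""
    most_common_label, _ = Counter(labels).most_common(1)[0]
    return lambda _features: most_common_label

# Training data carries a mild skew: 70% "approve", 30% "reject".
training_labels = ["approve"] * 70 + ["reject"] * 30
model = train_majority_classifier(training_labels)

# The model amplifies the skew: it predicts "approve" 100% of the time,
# so the 30% minority outcome vanishes entirely from its output.
predictions = [model(applicant) for applicant in range(100)]
print(Counter(predictions))  # Counter({'approve': 100})
```

Real machine-learning models are far more nuanced than a majority vote, but the underlying dynamic is the same: a model optimised on skewed data has no reason to represent the minority cases fairly.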

Screenshot from jackyalcine on Twitter

In 2015, a black software developer tweeted that Google’s Photos service had labelled photos of him and a black friend as “gorillas.” Google apologised, but instead of fixing the problem, it blocked the labels “gorilla,” “chimp,” “chimpanzee,” and “monkey” in Google Photos.

Screenshots from TayTweets

In another case, Microsoft debuted Tay on Twitter in 2016, a chatbot designed with the persona of a teenage girl. Within 16 hours it had to be taken down, after users took advantage of Tay’s machine learning capabilities and coaxed it into tweeting racist, sexist, and awful things. Months later, Microsoft made a second attempt with a chatbot named Zo. This time, whenever users mention any of her triggers (keywords on race, religion, politics, countries, etc.), she refuses to continue the conversation.

Screenshots from Zo Chatbot

Both cases from two of the largest tech companies in the world are worrying–instead of trying to solve the issue at hand, both responded by censoring it. AI performs exactly the way we designed it; it lacks the neuroplasticity to retrain itself away from what was biased from the beginning.
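Both fixes follow the same pattern, which a minimal sketch makes plain (this is illustrative pseudocode of the pattern, not Google’s or Microsoft’s actual implementation): suppress the symptom with a keyword blocklist rather than retrain the biased model underneath.

```python
# Hypothetical blocklists for illustration only.
BLOCKED_IMAGE_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}
CHAT_TRIGGERS = {"race", "religion", "politics"}

def filter_labels(predicted_labels):
    """Drop any blocked label from the model's output (the Google Photos-style fix)."""
    return [label for label in predicted_labels
            if label.lower() not in BLOCKED_IMAGE_LABELS]

def respond(message):
    """Refuse to engage when a trigger keyword appears (the Zo-style fix)."""
    if any(trigger in message.lower() for trigger in CHAT_TRIGGERS):
        return "I'd rather not talk about that."
    return "Tell me more!"

print(filter_labels(["person", "gorilla", "outdoors"]))  # ['person', 'outdoors']
print(respond("What do you think about politics?"))      # I'd rather not talk about that.
```

The blocklist hides the offensive output, but the model that produced it is unchanged: the bias is still there, just out of sight.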

Empathy vs diversity

Could we then address AI’s inherent biases? Perhaps we could if we practice empathy in our designs, and co-create with a more diverse team.

Empathy in design refers to our understanding of the users we are designing for. When we practice empathy, we get into a person’s head and understand how they feel and what they think. Applying empathy to our designs, we approach a project by thinking from our users’ perspective and keeping their best interests in mind.

Yet, empathy has its limitations. We may have very little in common with our users; things that are a given in our country may not be so in others. Our own privilege, background, and environment may form biases that hinder our ability to be empathetic.

In a recent article, Don Norman says that ‘We can’t get into the heads and minds of millions of people, and moreover we don’t have to: we simply have to understand what people are trying to do and then make it possible.’ It may simply not be possible to empathise with and design for everyone.

Instead of recommending solutions based on what we think users need, we should ‘facilitate, guide, and mentor’ users to come up with their own designs. When we co-create with a team that is diverse in race, religion, gender, abilities, age, perspective, and more, it helps us assess the problems with a design and identify who is being excluded. Through diversity, AI and machine learning could be freed from any one individual’s (or designer’s) biases.

Design leadership

As designers get a seat at the table (or have I been mistaken?), we are in a great position to make a positive impact on how businesses run and how their products are built. The role comes with great responsibility–it calls on us to step up as ethical design leaders, empathising and advocating for what is right for both the users and the business.

In the book Ruined by Design, author Mike Monteiro argues that designers should practice their craft according to a set of principles and a code of conduct. In his ‘Designer Code of Ethics’, he writes,

‘A designer owes the people who hire them not just their labour, but their counsel’.

Designers should evaluate the economic, sociological, and ecological impact of their design and provide guidance to the client or business in the public’s best interest.

Managing polarities and unintended consequences

The more we automate tasks with AI, the more we are deskilling people from their basic craft.

The more we demand personalised services, the more personal data we are giving away.

The more we demand the servitude of AI, the more a functionally incomplete AI might harm humans.

The practice of design is inherently about managing tradeoffs, polarities, and the unintended or unplanned consequences of our designs. As we solve a problem with a new solution, the new design or process may create another problem that was never planned for. Henry Ford, for example, did not design cars imagining they would cause traffic jams. Products perform ideally only in the way they were designed; humans, being humans, will find ways to break and misuse them.

New solutions also force us to weigh the lesser of two evils. For example, cities adopting smart cameras with facial recognition technology could improve public security by identifying criminals, but this also increases the surveillance of citizens. Should freedom be forgone for greater public safety?

The Human-Machine Handoff

AI promises benefits, particularly in performing complex calculations and handling routine tasks. However, humans should still keep their hands on the steering wheel and think about what is right for others.

As artificial intelligence becomes deeply embedded in our lives, we need to think about how to design ethically with AI. Being empathetic and co-creating with a diverse group can help us identify and address the ethical issues in a design. This duty does not fall only on people with the job title ‘designer’, but on anyone involved in designing a product, process, or service. Let’s design something positive for a better world today.

About DesignUp Asia Conference, Singapore

The DesignUp Asia conference was held in Singapore from 18–19 June 2019 with a line-up of top international speakers. Katja Forbes is an Australian expert in experience design and Director on the Interaction Design Association Global Board (IXDA). Katja spoke on the topic, ‘Being Human in the Age of AI’ and conducted a masterclass on ‘Trusting Invisibility — Ethics & Design for AI’ which I attended.

Hou Teng is a Singapore-based UX designer. He is actively engaged with the UX and design community in Singapore.

If you like this article, do 👏 👏 👏 👏 👏 👏 👏 for it.

Follow me on: Leow Hou Teng | Portfolio | LinkedIn
