Human-Centric Conversation Design: Designing Unbiased Chatbots

Nare K.
Published in UX Planet
9 min read · May 22, 2020

According to a study by Twilio, the average person has three messaging apps on their smartphone home screen and uses three different messaging apps per week, and the number is growing. Messaging, though entirely technology-enabled, has become a fundamental part of human experience. Within this context, understanding conversational experience and interface design is of utmost relevance. Chatbots are an example of conversational interfaces, and they are being used in more and more contexts.

Understanding Chatbots

Chatbots are computer programs that conduct conversations via audio or text. They can be used for many purposes, from customer support to mental wellbeing assistance.

Two interesting examples are the Facebook Messenger bot Walk with Yeshi and the mental wellbeing bot Woebot.

Walk With Yeshi

Walk with Yeshi is designed to raise awareness of Ethiopia’s water crisis. The chatbot takes individuals on a 2.5-hour journey, matching the length of the average walk for water in Ethiopia.

Woebot

Woebot was created by leading clinical psychology experts. It helps reduce symptoms of depression through active listening. The bot guides people through brief daily conversations and sends videos and other useful tools depending on the person’s mood and needs at a given moment.

There are two types of chatbots: scripted chatbots and chatbots powered by artificial intelligence (AI).

Scripted Chatbots

In scripted chatbots, the conversation is mapped out in a tree-like diagram and each conversation has a defined number of flows (a minimal sketch follows the list below). These chatbots:

  • Ask pre-determined questions.
  • Accept responses through pre-determined buttons, so every interaction is planned in advance.
  • Don’t accept free-typed input and therefore never go ‘off-script’.
  • Are simpler than AI chatbots and require less technical development.
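
To make the tree structure concrete, here is a minimal sketch of a scripted chatbot in Python. This is an illustration rather than a production pattern: the flow, node names and button copy are hypothetical, and a real bot would run on a messaging platform rather than in a console loop.

```python
# A minimal scripted chatbot: the whole conversation is a tree.
# Each node asks a pre-determined question and offers fixed buttons;
# since users can only pick a button, the bot never goes off-script.
# (All node names and copy are hypothetical examples.)

FLOW = {
    "start": {
        "message": "Hi! What would you like to do?",
        "buttons": {"Track my order": "track", "Talk to support": "support"},
    },
    "track": {
        "message": "Your order is on its way. Anything else?",
        "buttons": {"No, thanks": "end", "Talk to support": "support"},
    },
    "support": {
        "message": "Connecting you to a human agent. Goodbye!",
        "buttons": {},  # leaf node: the conversation ends here
    },
    "end": {
        "message": "Happy to help. Bye!",
        "buttons": {},
    },
}

def run(flow, node="start"):
    while True:
        step = flow[node]
        print(f"\nBot: {step['message']}")
        if not step["buttons"]:  # leaf node, stop
            return
        options = list(step["buttons"])
        for i, label in enumerate(options, 1):
            print(f"  [{i}] {label}")
        choice = input("You: ")
        # Only the pre-determined buttons are accepted; anything else re-prompts.
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            node = step["buttons"][options[int(choice) - 1]]
        else:
            print("Bot: Please pick one of the buttons above.")

if __name__ == "__main__":
    run(FLOW)
```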

AI Chatbots

AI chatbots recognize keywords using a form of AI, Natural Language Processing (NLP), which helps computers understand, process and produce human language. They interpret what the user asks and decide how to respond (a simplified sketch follows this list). These chatbots:

  • Accept typed user input.
  • Keep a memory of past conversations and offer a personalized experience.
  • Require in-depth technical development and are more complicated than scripted chatbots.
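
By contrast, here is a toy sketch of the keyword-recognition idea behind AI chatbots. Real systems use trained NLP models (for example via libraries such as spaCy or Rasa); the intents, keywords and "memory" below are hypothetical, simplified stand-ins.

```python
# A toy illustration of intent recognition in an AI chatbot.
# It accepts free-typed input, scores it against hypothetical keyword
# sets, and keeps a tiny conversation memory for personalization.
import re

INTENTS = {
    "greeting": {"keywords": {"hi", "hello", "hey"},
                 "response": "Hello{name}! How can I help?"},
    "order_status": {"keywords": {"order", "package", "delivery", "track"},
                     "response": "Let me check your order{name}."},
    "goodbye": {"keywords": {"bye", "goodbye", "thanks"},
                "response": "Goodbye{name}!"},
}

def tokenize(text):
    """Lowercase the input and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(text, memory):
    words = tokenize(text)
    # Naive "memory": remember a name if the user introduces themselves.
    match = re.search(r"my name is (\w+)", text, re.IGNORECASE)
    if match:
        memory["name"] = match.group(1).capitalize()
    name = f", {memory['name']}" if "name" in memory else ""
    # Pick the intent whose keyword set overlaps the input the most.
    best, score = None, 0
    for intent, spec in INTENTS.items():
        overlap = len(words & spec["keywords"])
        if overlap > score:
            best, score = intent, overlap
    if best is None:
        return "Sorry, I didn't understand that."  # graceful fallback
    return INTENTS[best]["response"].format(name=name)

memory = {}
print(reply("Hi, my name is Ada", memory))    # Hello, Ada! How can I help?
print(reply("Where is my package?", memory))  # Let me check your order, Ada.
```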

Understanding Conversation Design

Conversation design is fundamental to building chatbots. The conversation is the primary means by which a person evaluates a chatbot’s effectiveness. If people don’t enjoy the conversation, they won’t interact with the bot.

Conversation design is not simply a matter of writing text in a conversational format. Conversational UI design combines several disciplines, including UX design, copywriting, interaction design, visual design, motion design and, in many cases, voice and audio design.

The typical chatbot design process follows five steps:

  • Defining the audience
  • Defining the role and the chatbot type
  • Creating a chatbot persona
  • Outlining the conversational flow
  • Writing the script

Three pillars that conversation designers rely on throughout the chatbot design process are cooperation (identifying and responding to the user’s real intention by drawing on the shared “world knowledge” that underlies all our conversations), turn-taking (making transitional prompts, such as questions, explicit yet natural) and context (focusing on the user’s physical and emotional context).

Google likens the role of a conversation designer to that of an architect:

“Mapping out what users can do in a space, while considering both the user’s needs and the technological constraints. They curate the conversation, defining the flow and its underlying logic in a detailed design specification that represents the complete user experience. They partner with stakeholders and developers to iterate on the designs and bring the experience to life.”

While going through this process, conversation designers play with multiple conversational user interface (UI) elements, such as discourse markers, errors, buttons, audio-visual elements, acknowledgements, commands, confirmations, suggestions, informational statements, apologies, questions, greetings and endings.

Well-thought-out conversational interface design can make human-computer interactions feel natural and engaging, because the interactions are delivered with the familiarity of everyday human speech. However, there are many challenges that designers have to deal with before diving into interface design. One of these challenges is bias.

In chatbot design, a famous case study of failure caused by multiple biases is the case of Microsoft’s Tay bot.

One of Tay’s now-deleted “repeat after me” tweets

Tay was an AI chatbot designed to learn from its interactions with Twitter users. The idea was that the more people interacted with Tay, the more it would learn about language, the smarter it would become, and the more realistic and human-like its personality would appear.

Tay became popular very quickly, gaining 50K followers and producing more than 93K tweets within a few hours. The problem was that Tay did not just get ‘smarter’; it also got more racist, homophobic and politically extreme.

As the example of Tay perfectly illustrates, if designers fail to make conscious decisions during the chatbot design process, aspects of human identity, such as gender, race, social class, body size, religion, accent and height, can all be impacted by bias.

Understanding Bias

In statistics, bias refers to systematic error that can be present in data collection. In law, bias refers to a predisposition that prevents a person from evaluating facts impartially. It can also be understood as a mental and social system by which people make decisions. In everyday language, bias means favoring one thing over another (e.g. favoring people who look like you or share your values). Human bias is often unconscious, so combatting it requires conscious effort.

Researchers have shown that two types of harm arise from AI bias: allocation harms and representational harms.

Allocation harms

This type of harm occurs when a system allocates opportunities or resources to certain groups, or withholds them. These harms can be, for example, economic, relating to things like mortgages, loans or insurance.

Representational harms

This type of harm occurs when systems reinforce discrimination against some groups because of identity markers such as gender, race, class, age, ability or belief. These harms might take place regardless of whether resources are being allocated.

Conversational interfaces, especially AI-based ones, are particularly vulnerable to representational harms caused by biases. Accent bias in Google Home is one example.

A Washington Post study tested thousands of voice commands dictated by more than 100 people across around 20 US cities. The results showed that people with Southern accents were 3% less likely to get accurate responses from a Google Home device than those with Western accents. Things were even worse for people with non-native accents: one of the tests revealed 30% more inaccuracies for this group.

This happened because the data sets used to train the devices were missing poor, uneducated, rural, non-white and non-native English voices. As a result, the devices did not recognize these kinds of voices.

De-biasing Chatbots: The Feminist Chatbot Design Tool

One existing tool/framework that helps designers minimize the destructive impact of biases is the Feminist Chatbot Design Tool, developed by AI researcher Josie Young and the Feminist Internet.

It’s important to understand that this approach need not be applied only to cases where gender bias is involved. It guides designers through a process of inquiry that helps them carefully analyze the entire context of a design and minimize many of the biases that could be embedded in chatbots.

“A feminist approach to conversation design means using empathic, inclusive, accessible language, and providing opportunities for the user to specify how they would like to be addressed.”

The Feminist Design Tool has eight sections. Each section includes questions to help designers consider the values they are embedding in their products or services.

1. Stakeholders

  • Rather than design for a ‘universal user’, can you identify a stakeholder who is not currently well served, and who could benefit from your design?
  • What might be some of the specific needs, barriers and problems that they face?
  • What are their strengths and viewpoints?
  • What different participatory methods do you have available so that your stakeholder can co-create or have direct input into the development of your design?

2. Purpose

  • Does your design meet a meaningful human need or address an injustice?
  • How will your design address the problem/s experienced by your stakeholder?
  • You may want to think about your stakeholder in more detail to help you:
  • What is the problem they’re trying to overcome?
  • What obstacles prevent them overcoming the problem?

3. Context

  • Do you have a good understanding of the context your design will be part of and the power dynamics at play within it?
  • Do you understand the opportunities and challenges for different stakeholders within this context?
  • Who is not well served in this context and why?
  • Does your design exacerbate problems that others are currently trying to solve in this context?

4. Team Bias

  • What are your values and position in society, individually and collectively?
  • How might your values and position lead you to choose one option over another, or to hold a specific perspective on the world?
  • How do your values and position in society relate to the stakeholders your design seeks to engage?
  • Are there additional perspectives you need to bring into the process? How might you do this?

5. Design & Representation

  • What type of character will you give your design?
  • How might your character choice reinforce any stereotypes?
  • How will your character remind the stakeholder it’s a robot?
  • Will you assign a gender to your character? Why? In what ways might this reinforce or challenge gender stereotypes? Have you considered a genderless design? What possibilities might this open up?
  • In what ways might your choices prompt people to behave unethically or in a prejudiced way?

6. Conversation Design

  • What’s the tone of voice (physically and metaphorically)?
  • What words should the design avoid (what could be triggering or upsetting)?
  • Are there words specific to your stakeholder you need to ensure your design can understand?
  • If it receives abuse, how will the design respond?
  • What will your design say when it doesn’t understand?
  • How will you get feedback about whether the conversation is appropriate for your stakeholders?
  • Have you asked how the stakeholder would like to be addressed?

7. Data

  • How will you collect and treat data through the development of your design?
  • Are you aware of how bias might manifest itself in your training data?
  • Are you aware of how bias might manifest itself in the AI techniques that power your design (like machine learning)?
  • How could stakeholder-generated data and feedback be used to improve the design?
  • Will the design learn from the stakeholder’s behaviour, and if so, are you assuming that the design will get it right?
  • What mechanisms or features could make these assumptions visible to the stakeholder and empower them to change the assumptions if they want to?
  • How will you protect stakeholder data?

8. Architecture

  • What type of technical architecture and capabilities will you use?
  • How will you minimize the carbon and climate footprint of your design?
  • Where might unpaid or exploited labour exist in the production/supply chain of the technology you’re using?
  • Will the impact of AI and automation on the service your design aims to provide make some jobs redundant or lower in status?

By going through these questions, designers minimize the chances of knowingly or unknowingly perpetuating gender biases or inequalities in the things they make.

The first chatbot that the Feminist Internet built using the Feminist Chatbot Design Tool is the F’xa bot, a guide to AI bias.

As chatbots become more and more popular and AI becomes an inseparable part of our everyday lives, it’s important that designers understand their role in anticipating bias and curating conversational interfaces that do not discriminate and do not proliferate the social biases that have plagued humanity for millennia.
