About Cognitive Biases and UX Design

and how UX designers have an important role in promoting privacy

Luiza Jarovsky
UX Planet


Dark patterns in data protection, as I wrote in a previous publication, are deceptive design practices used by websites and apps to collect more — or more sensitive — personal data from you. In this article, I would like to take a closer look at the cognitive biases that dark patterns exploit and at the important role UX designers play in tackling online manipulation and promoting privacy.

A cognitive bias is a ‘systematic (that is, non-random and, thus, predictable) deviation from rationality in judgment or decision-making’.1 Cognitive biases were originally discussed by cognitive psychologists such as Kahneman and Tversky, who identified that ‘people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors’.2

Cognitive biases are inherent human traits and can be empirically demonstrated.3 As I argued in my dark patterns article, if data protection law wants to protect real humans (i.e., not distorted or outdated theoretical models of human rationality), it must correctly acknowledge cognitive biases and how they might lead to manipulative practices — such as dark patterns.

Some of the biases exploited by dark patterns are listed and explained below:

1. Anchoring bias

‘Systematic influence of initially presented numerical values on subsequent judgments of uncertain quantities, even when presented numbers are obviously arbitrary and therefore unambiguously irrelevant’.4 An offline example is a restaurant owner who places very expensive dishes on the first pages of the menu; the client is then anchored by the higher values and perceives the remaining dishes’ prices as lower and advantageous, even if they are expensive compared to those of similar restaurants.

This bias has been exploited in the privacy context, for example, when presenting privacy options to the user. In a privacy menu, the pool of values from which the user has to choose is typically arbitrary, with the designer deciding what the broadest-sharing and the least-sharing options will be. Relying on the anchoring bias, the designer can choose a first option that is privacy-negligent and additional options that are only mildly protective. The user will be ‘anchored’ by the first option and induced to perceive the additional options as privacy protective.
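To make this choice architecture concrete, here is a minimal TypeScript sketch of an anchored privacy menu. The option labels and audience numbers are hypothetical, invented purely for illustration:

```typescript
// Hypothetical sharing options for a privacy menu (illustrative values only).
interface SharingOption {
  label: string;
  audienceSize: number; // rough number of people who could see the data
}

// The designer controls the entire pool of values. Listing an extreme,
// privacy-negligent option first anchors the user's sense of what is 'normal'.
const anchoredMenu: SharingOption[] = [
  { label: 'Everyone on the internet', audienceSize: 5_000_000_000 }, // the anchor
  { label: 'Friends of friends', audienceSize: 50_000 },
  { label: 'Friends only', audienceSize: 500 },
];

// Judged against the anchor, even 'Friends of friends' (tens of thousands of
// near-strangers) reads as a modest, privacy-protective choice.
for (const option of anchoredMenu) {
  const ratio = option.audienceSize / anchoredMenu[0].audienceSize;
  console.log(`${option.label}: ${ratio.toExponential(1)} of the anchor audience`);
}
```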

2. Contrast effect

The contrast effect I am referring to here involves visual aspects: exploiting the relationship between two objects (or two texts) to reduce readability or to generate a desired impression in the user/reader. For example, ‘several studies indicate that increased contrast between the text and background results in increased readability’.5 Therefore, if the designer wants a certain text to go unnoticed or to be poorly read, they can apply a low-contrast color scheme. A well-known example is the technique used by search engines in the past to blur the difference between sponsored and organic results.6 This effect is also commonly observed when the designer maliciously selects low-contrast schemes for privacy-protective options, to steer the user’s attention away and reduce the probability that they will choose restrictive options.
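The readability claim can be made measurable. WCAG 2.x defines the contrast ratio between two colors from their relative luminance, and a designer (or an auditor) can compute it directly; the colors below are arbitrary examples:

```typescript
// Relative luminance of an sRGB color given as '#rrggbb' (per WCAG 2.x).
function relativeLuminance(hex: string): number {
  const [r, g, b] = [0, 2, 4].map((i) => {
    const channel = parseInt(hex.slice(i + 1, i + 3), 16) / 255;
    // Linearize the gamma-encoded channel
    return channel <= 0.03928 ? channel / 12.92 : Math.pow((channel + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(foreground: string, background: string): number {
  const [light, dark] = [relativeLuminance(foreground), relativeLuminance(background)]
    .sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

console.log(contrastRatio('#000000', '#ffffff').toFixed(1)); // 21.0: maximal contrast
console.log(contrastRatio('#aaaaaa', '#ffffff').toFixed(1)); // ~2.3: fails the WCAG AA 4.5:1 minimum
```

A ‘reject all’ option rendered in the second scheme is technically present but visually recessive, which is exactly the manipulation described above.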

3. Default Effect

The default effect is the observation that the default option ‘is chosen more often than expected if it were not labeled the default’.7 There are various explanations for the ‘stickiness’ of defaults,8 among them the absence of effort needed and the appearance of an implied endorsement by the service provider. The default effect has been observed in multiple contexts; among the most famous are organ donation and retirement savings plans.9

Defaults are prevalent in the privacy field as well, and it has been shown that individuals most commonly stick to the default privacy option instead of taking time to think and choose a more suitable alternative.10 Designers have long benefited from this bias by defining privacy-invasive defaults.11 More recently, with the rise of Privacy by Design (PbD) and Data Protection by Design and by Default (DPbDD) and with the requirement for explicit consent,12 defaults that are patently detrimental to the user are not seen so frequently anymore. It is worth noting, however, that there is no express ban on defaults, leaving room for companies to imply that certain data is necessary or essential for the performance of the service, with no external accountability.13
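As a minimal sketch of the difference, assuming a hypothetical consent form with invented field names, compare a dark-pattern default set with a data-protection-by-default one:

```typescript
// Hypothetical consent settings; the field names are illustrative only.
interface ConsentSettings {
  personalizedAds: boolean;
  locationTracking: boolean;
  shareWithPartners: boolean;
}

// Dark-pattern variant: everything pre-enabled. The default effect predicts
// that most users will keep these values rather than opt out one by one.
const invasiveDefaults: ConsentSettings = {
  personalizedAds: true,
  locationTracking: true,
  shareWithPartners: true,
};

// The spirit of DPbDD (GDPR Article 25): nothing is shared until the user
// actively opts in, so the stickiness of defaults now protects the user.
const protectiveDefaults: ConsentSettings = {
  personalizedAds: false,
  locationTracking: false,
  shareWithPartners: false,
};

console.log(invasiveDefaults, protectiveDefaults);
```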

4. Framing effect

‘The ‘framing effect’ is observed when a decision maker’s risk tolerance (as implied by their choices) is dependent upon how a set of options is described’.14 More specifically, ‘people appear to exhibit a general tendency to be risk seeking when confronted with negatively framed problems and risk averse when presented with positively framed problems’.15 The framing effect has been observed in multiple contexts,16 and studies have been conducted on ways to reduce its impact.17 A classic illustration is the comparison between advertising a yogurt as ‘1% fat’ or as ‘99% fat free’. ‘The percentage-fat-free format led to stronger endorsements of healthiness than the percentage-fat format,’18 showing the impact of the framing effect on the perception of healthiness.
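The two frames are logically equivalent, which a few lines of code make obvious; the product is the hypothetical yogurt from the study cited above:

```typescript
// One underlying fact, two frames.
const fatPercentage = 1;

const negativeFrame = `${fatPercentage}% fat`;            // the attribute framed negatively
const positiveFrame = `${100 - fatPercentage}% fat free`; // the same fact framed positively

// Both strings describe an identical product, yet the positive frame was
// found to elicit stronger endorsements of healthiness.
console.log(`${negativeFrame} === ${positiveFrame}`);
```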

In the privacy context, the designer can frame choices and options in a way that elicits a less privacy-protective choice. For example, when asking if the user wants to start using a face identification service, the designer can highlight the novelty, the sophistication, and the surprises that the technology can bring, leaving problematic privacy issues as a side comment or something that does not deserve the same level of detail.

5. Hyperbolic discounting

‘People’s choices are often inter-temporally inconsistent, for example in the sense that people prefer a larger, later consumption bundle over a smaller, sooner one as long as both are sufficiently distant in time, but change their preference to the smaller, sooner bundle, as both draw near’.19 In less technical terms, an individual would value $50 now more than $100 in a month.
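The preference reversal can be reproduced with the standard one-parameter hyperbolic discount function V = A / (1 + kD). The impatience parameter k below is arbitrary, chosen only so that the numbers from the example produce a reversal:

```typescript
// One-parameter hyperbolic discounting: V = A / (1 + k * delay).
const k = 1.5; // illustrative impatience parameter (per month), not an empirical estimate

const presentValue = (amount: number, delayInMonths: number): number =>
  amount / (1 + k * delayInMonths);

// Near choice: $50 now vs. $100 in one month; the sooner option wins.
console.log(presentValue(50, 0).toFixed(2), presentValue(100, 1).toFixed(2)); // 50.00 vs. 40.00

// Same pair pushed a year out: $50 in 12 months vs. $100 in 13; now the later option wins.
console.log(presentValue(50, 12).toFixed(2), presentValue(100, 13).toFixed(2)); // 2.63 vs. 4.88

// The preference reverses as the options draw near: the inconsistency quoted above.
```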

In a privacy-related context, it means that an individual often prefers to use a service immediately, even if it involves risks or possible long-term privacy impacts, instead of not using the service now and preserving their privacy in the long term. Privacy scholars have acknowledged this bias in studies about privacy policies and how people prefer to just click ‘I accept’ and submit to aggressive terms so that they can immediately enjoy the service.20

6. Loss aversion

‘The disutility of giving up an object is greater than the utility associated with acquiring it’.21 It can be observed in the context of ‘free trials’, in which a person is given access to a product or service for a certain, limited period and is then asked for one or more payments to keep using it. The loss aversion bias makes the person more likely to agree to pay.

In the privacy context, it manifests in the sense that ‘people are willing to accept more money in exchange for disclosing personal information than they are willing to pay to regain control over the same information’.22
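One standard way to formalize this asymmetry, not specific to the source quoted above, is the prospect-theory value function with Tversky and Kahneman’s (1992) median parameter estimates:

```typescript
// Prospect-theory value function: v(x) = x^alpha for gains,
// v(x) = -lambda * (-x)^alpha for losses (Tversky & Kahneman 1992 medians).
const alpha = 0.88;  // diminishing sensitivity
const lambda = 2.25; // loss-aversion coefficient

const subjectiveValue = (x: number): number =>
  x >= 0 ? Math.pow(x, alpha) : -lambda * Math.pow(-x, alpha);

// Acquiring vs. giving up the 'same' 100 units (say, of personal data's worth):
console.log(subjectiveValue(100).toFixed(1));  //  57.5: felt value of acquiring
console.log(subjectiveValue(-100).toFixed(1)); // -129.5: felt cost of giving up

// The loss looms about 2.25 times larger, one account of why people demand more
// to disclose personal information than they would pay to regain control over it.
```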

7. Optimism bias

‘The tendency for people to report that they are less likely than others to experience negative events and more likely than others to experience positive events’.23 For example, people rate their chances of experiencing negative events as being less than the average person’s.24

In the privacy context, it represents the tendency to think that one is less likely to suffer any privacy harm,25 which may falsely reassure people that negligent, careless, and risky online behaviors are harmless.

*

Learning the biases described above and how they manifest in the data protection context can help you realize that your online activity is constantly being observed, monitored and exploited for profit. When navigating online, remember that yes, some services may be ‘free’ in monetary terms, but you are paying with the personal data you share along the way.

Unfortunately, biases are, in essence, part of our human nature and very difficult to control. This gives a lot of power to UX designers, who can use their work either to help protect fundamental values such as privacy, autonomy, human dignity and fairness, or to serve unlimited corporate power to harvest more profits, regardless of the consequences to the public.

In my view, law has an important role in restraining manipulative practices online, especially in the data protection context, where the complexity of data collection and processing activities may help disguise corporate abuse and disregard for fundamental values.

However, as I have been highlighting in previous works, data practices do not happen in a vacuum and there are multiple players involved, especially the people that work for tech companies.

UX designers are key contributors and can have a leadership role in helping to build and shape the products that we love to use. If UX designers start working with privacy in mind and designing products that foster autonomy, user empowerment, human dignity and fairness, then we will indeed be moving toward a new Web. Until then, it is just more of the same, but with a different disguise (or in a different ‘metaverse’).

There is a lot to unpack here and I hope to talk more about that in future posts, especially regarding UX designers’ role in promoting privacy. See you next week!

All the best, Luiza Jarovsky

Footnotes

1 Fernando Blanco, Cognitive Bias, in Encyclopedia of Animal Cognition and Behavior (Jennifer Vonk & Todd Shackelford eds., 2017) 1.

2 Amos Tversky & Daniel Kahneman, ‘Judgment under Uncertainty: Heuristics and Biases’ (1974) 185 Science 1124, 1124.

3 For a deeper view on cognitive biases and their empirical demonstration, see Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux 2011) and Dan Ariely, Predictably Irrational (Harper Perennial 2010).

4 Predrag Teovanović, ‘Individual Differences in Anchoring Effect: Evidence for the Role of Insufficient Adjustment’ (2019) 15 Eur. J. Psychol. 8, 8.

5 Robert Moore, Claire Stammerjohan & Robin Coulter, ‘Banner Advertiser-Web Site Context Congruity and Color Effects on Attention and Attitudes’ (2005) 34 J. Advertising 71, 73.

6 In 2013, the Federal Trade Commission (FTC) sent letters to search engine companies asking for more differentiation between paid and natural results, to avoid consumer deception and violation of Section 5 of the FTC Act. Similar letters had been sent in 2002, with the same argument. At https://www.ftc.gov/news-events/press-releases/2013/06/ftc-consumer-protection-staff-updates-agencys-guidance-search.

7 Isaac Dinner, Eric Johnson, Daniel Goldstein, Kaiya Liu, ‘Partitioning Default Effects: Why People Choose Not to Choose’ (2011) 17 J. Exp. Psy.-Appl. 332, 335.

8 For a deeper analysis of the properties of defaults, see Omri Ben-Shahar & John Pottow, ‘On the Stickiness of Default Rules’ (2006) 33 Fla. St. U.L. Rev. 651.

9 For a broader review, see Lauren Willis, ‘When Nudges Fail: Slippery Defaults’ (2013) 80 U. Chi. L. Rev. 1155.

10 Id.

11 See Matthew Keys, ‘A Brief History of Facebook’s Ever-changing Privacy Settings’, Medium.com (2018), at https://medium.com/@matthewkeys/a-brief-history-of-facebooks-ever-changing-privacy-settings-8167dadd3bd0. On a more positive side, for an analysis on the optimization of access control defaults in online social networks, see Ron Hirschprung, Eran Toch, Hadas Schwartz-Chassidim, Tamir Mendel & Oded Maimon, ‘Analyzing and Optimizing Access Control Choice Architectures in Online Social Networks’ (2017) 8 ACM Trans. Intell. Syst. Technol., Article 57.

12 GDPR, Article 4(11).

13 Unless it is a paid service or any specific business model that does not rely on advertising or data.

14 Cleotilde Gonzalez, Jason Dana, Hideya Koshino & Marcel Just, ‘The Framing Effect and Risky Decisions: Examining Cognitive Functions with fMRI’ (2005) 26 J. Econ. Psy. 2.

15 Id.

16 See Sammy Almashat, Brian Ayotte, Barry Edelstein & Jennifer Margrett, ‘Framing Effect Debiasing in Medical Decision Making’ (2008) 71 Patient Educ. Couns. 102; Antony Sanford, Nicolas Fay, Andrew Stewart & Linda Moxey, ‘Perspective in Statements of Quantity, with Implications for Consumer Psychology’ (2002) 13 Psy. Sci. 130.

17 See Mathieu Cassotti, Marianne Habib, Nicolas Poirel, Ania Aïte, Olivier Houdé & Sylvain Moutier, ‘Positive Emotional Context Eliminates the Framing Effect in Decision-Making’ (2012) 12 Emotion 926.

18 Antony Sanford, Nicolas Fay, Andrew Stewart & Linda Moxey, ‘Perspective in Statements of Quantity, with Implications for Consumer Psychology’ (2002) 13 Psy. Sci. 132.

19 Till Grüne-Yanoff, ‘Models of Temporal Discounting 1937–2000: An Interdisciplinary Exchange between Economics and Psychology’ (2015) 28 Science in Context 675, 677.

20 See Alessandro Acquisti & Jens Grossklags, ‘Losses, Gains, and Hyperbolic Discounting: An Experimental Approach to Information Security Attitudes and Behavior’ (2003) UC Berkeley 2nd Annual Workshop on Economics and Information Security; Alessandro Acquisti, Leslie John, George Loewenstein, ‘What Is Privacy Worth?’ (2013) 42 J. Legal Studies 249.

21 Daniel Kahneman, Jack Knetsch & Richard Thaler, ‘Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias’ (1991) 5 J. Econ. Perspect. 193, 194.

22 Alessandro Acquisti, et al., ‘Nudges for Privacy and Security: Understanding and Assisting Users’ Choices Online’ (2017) 50 ACM Comput. Surv., Article 44, 8.

23 Marie Helweg-Larsen & James Shepperd, ‘Do Moderators of the Optimistic Bias Affect Personal or Target Risk Estimates? A Review of the Literature’ (2001) 5 Personality & Social Psychol. Rev. 74, 74.

24 Adam Harris & Ulrike Hahn, ‘Unrealistic Optimism about Future Life Events: A Cautionary Note’ (2011) 118 Psy. Rev. 135, 148.

25 For an empirical study about the manifestation of optimism bias within judgements regarding personal privacy, see Hichang Cho, Jae-Shin Lee & Siyoung Chung, ‘Optimistic Bias about Online Privacy Risks: Testing the Moderating Effects of Perceived Controllability and Prior Experience’ (2010) 26 Comput. Hum. Behav. 987.
