
Design Values: Hard-Coding Liberation?

Published on Feb 27, 2020

Figure 1.1 We All Belong Here. Poster art by Micah Bazant, 2017.

Technology is always a form of social knowledge, practices and products. It is the result of conflicts and compromises, the outcomes of which depend primarily on the distribution of power and resources between different groups in society.

—Judy Wajcman, Feminism Confronts Technology

Design is the process by which the politics of one world become the constraints on another.

—Fred Turner1

If your beta social network doesn’t allow blocking abusers from jump, your beta social network was probably developed by white dudes. #ello

—@AngryBlackLady2

  • “Black, Muslim, Immigrant, Queer! Everyone is welcome here!”

  • “We’re Queer! We’re Trans! No Walls! No Bans!”

  • “Let’s get free! We’ve all gotta pee!”

The chants bounce off of the brutalist concrete walls of Boston City Hall. It’s Sunday, February 5, 2017, sixteen days since Donald Trump was sworn into office as president of the United States of America. A crowd of over one thousand queer and trans* folks, immigrant rights activists, and our family, friends, and allies has assembled at Government Center. Huge rainbow LGBTQI+ flags and smaller pink, blue, and white trans* flags flap in the frigid February air, alongside the black, red, and white banners of the antifascist action contingent. I climb the steps, carrying my djembe; I’ve been asked to help drum and lead chants during the march over to Copley Square. First, though, five of the protest organizers, each from a diverse intersection of queer, trans*, and other identities—Black, Latinx, Asian, immigrant, Disabled, working-class—pass a microphone around and read a collective statement about the reasons for the mobilization:

“Trump claims that he wants to protect the LGBTQ community from oppression (!!!) But … the unconstitutional #MuslimBan impacts millions of people including Queer and Trans people; border walls and militarization means more deaths in the desert, including QT deaths; expanding raids and detention and deportations affects all of us and especially increases the violence that Undocuqueer folks experience; the return of the Global Gag Rule undermines reproductive justice; attacks on Native sovereignty through reopening #KeystoneXL and #NoDAPL are attacks on all Native people including Two-Spirit people; threats to implement voter ID laws in all 50 states, because Trump lost the popular vote, mean disenfranchisement for marginalized voters; Trump’s promise to take Stop and Frisk nationwide will target Black and Brown people in every community, and we know that Queer, Trans, and Gender Non-Conforming Black and Brown people are among those MOST targeted by racist police violence. … We’re here for Black, Muslim, Native, Immigrant, Queer, Disabled, Women, POC communities, and for all those who live at the intersections of many of these communities at the same time! We are here for each other, because our liberation is bound up together. We see each other and we have got each other’s backs. None of us are free until all of us are free! If we don’t ALL get it: #SHUTITDOWN!”3

The Trans* and Queer Liberation and Immigrant Solidarity Protest was part of the massive wave of street mobilizations that took place in the winter and spring of 2017 in response to the election and the first actions of the Trump administration. It was organized in less than a week through the efforts of #QTPower, an ad hoc collective of Boston-based activists, lawyers, cultural workers, and community organizers that I participated in. The primary tools that we used to organize actions so quickly were face-to-face meetings and conference calls to plan key aspects of the event; Google docs to draft framing, language, demands, and logistical details; email, phone calls, and instant messaging to gather organizational cosponsors; and Facebook to promote the action. We also employed the symbolic power of solidarity imagery by Micah Bazant, a visual artist whose work has circulated widely through social movement networks over the last decade (we used figure 1.1, above, to promote our event).

By now, activist use of Facebook as a tool to help organize political protests is a story that has been widely told in both scholarly and popular writing. The best of these accounts complicate any simplistic narrative about the relationship between social media platforms and political protest activity. Zeynep Tufekci, a media scholar and public intellectual who studies the social impacts of technology, argues that tools like Facebook enable social movements to mobilize large numbers of people quickly around a simple broad demand, even when the movement lacks capacity to do much else.4 Paolo Gerbaudo, a social movement scholar who studied the so-called Arab Spring, the Occupy movement, and the Spanish 15-M movement, describes how charismatic activists with large numbers of followers use Facebook and Twitter to lead social movements through what he calls a choreography of assembly, without developing mechanisms of representative democratic decision making.5 Ramesh Srinivasan, a scholar and design theorist who works with migrant and indigenous communities, cautions against the oversimplified US mass media narrative that tries to claim credit for the Arab Spring as an inevitable outcome of the introduction of social media platforms. Instead, he highlights the politics, history, organizations, and critical consciousness of local activists who made the uprisings and revolutions happen.6 Communication scholars Moya Z. Bailey, Sarah Jackson, and Brooke Foucault Welles analyze how social movement networks leverage the affordances of Twitter in hashtag activism campaigns like #SayHerName, #GirlsLikeUs, and #MeToo.7

Other scholars of media, ICTs, and social movements, such as Emiliano Treré, Bart Cammaerts, and Alessandra Renzi, provide detailed discussion of the specific ways that the designed affordances of Facebook (and other social media platforms) enable and constrain activist use.8 For example, as we organized the #QTPower actions, Facebook Events provided excellent tools for quickly circulating our call to action to thousands of people and for gauging interest through the built-in RSVP feature. During this period of heightened mobilization, event RSVPs (“I’m going,” in Facebook terms) mapped more closely to real-world turnout than usual. Protest organizers shared with one another that events with thousands of RSVPs were likely to actually have thousands of people show up, compared with far smaller turnout ratios at other times.9

Yet Facebook in general, and Facebook Events specifically, provides terrible tools for the most important task of community organizers: to move people up the ladder of engagement.10 After the #QTPower event was over, it was possible to share some additional information, photos, and feedback via the event page, but even as the event organizers we had no way to broadcast a message about our next move to all attendees. Posts to the event discussion only appeared for some people, subject to the opaque decision making of Facebook’s News Feed algorithm. Of course, we had the option to pay Facebook a fee to make it more likely that protest attendees would find additional content from the mobilization in their feeds. Yet the platform design denied us the ability to do what we most wanted to do: in this case, contact all of the protest attendees (and those who had expressed interest in the event) and invite them to the next mobilization, scheduled for a few weeks later when the Trump administration announced a rollback of Title IX protections for trans* and gender-non-conforming students across the country.11

The poor fit between Facebook’s affordances and basic activist needs partly explains the existence of an entire ecosystem of dedicated activist Constituent Relationship Management systems (CRMs), such as SalsaCommons, NationBuilder, and Action Network. These platforms, designed around the needs of community organizers and political campaigners, have built-in features, interface elements, and capabilities that match the core processes of building campaigns. They provide tools such as mass email lists, petitions, events, surveys, and fundraising, as well as ladder-of-engagement services such as activist performance tracking, list segmentation, and automated reminders and instructions. These types of tools are built into activist CRM platforms in part because the founders of these platforms typically have experience running campaigns and understand these needs, and also because most employ User-Centered Design (UCD) methods and agile development to continually improve the fit between platform affordances and user needs.

Yet such platforms remain niche services, used by only a relatively tiny group of professionalized campaigners. They typically cost money to use, often based on the number of contacts in the campaign database, and they require a significant investment of time and energy to learn. They will in all likelihood never be widely adopted by the vast majority of people who participate in social movements. Instead, most people, including social movement activists, organizers, and participants, use the most popular corporate social network sites and hosted services as tools to advance our goals. We work within the affordances of these sites and work around their limitations. We do this even when these tools are a poor fit for the specific task at hand, and even when their use exposes movement participants to a range of real harms.

Why do the most popular social media platforms provide such limited affordances for the important work of community organizing and movement building? Why are the time, energy, and brilliance of so many designers, software developers, product managers, and others who work on platforms focused on optimizing our digital world to capture and monetize our attention, rather than on other potential goals (e.g., maximizing civic engagement, making environmentally sustainable choices, building empathy, or achieving any one of near-infinite alternate desirable outcomes)? Put another way, why do we continue to design technologies that reproduce existing systems of power inequality when it is so clear to so many that we urgently need to dismantle those systems? What will it take for us to transform the ways that we design technologies (sociotechnical systems) of all kinds, including digital interfaces, applications, platforms, algorithms, hardware, and infrastructure, to help us advance toward liberation?

Everyday Things for Whom? The Distribution of Affordances and Disaffordances under the Matrix of Domination

Let’s begin with one of the core concepts of design theory: affordances. According to the Interaction Design Foundation, affordances are “an object’s properties that show the possible actions users can take with it, thereby suggesting how they may interact with that object. For instance, a button can look as if it needs to be turned or pushed.”12 The term affordances was initially developed in the late 1970s by cognitive psychologist James Gibson, who states that “the affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill.”13 It came to be influential in various fields following design professor William W. Gaver’s much-cited article “Technology Affordances,”14 and then it moved into even wider use in human-computer interaction following the publication of cognitive scientist and interface designer Donald Norman’s The Design of Everyday Things.15 For Norman, affordance refers to “the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used.”16 For example, a chair affords sitting, a doorknob affords turning, a mouse affords moving the cursor on the screen and clicking at a particular location, and a touchscreen affords tapping and swiping.

The Design of Everyday Things is a canonical design text. It’s full of useful insights and compelling examples. However, it almost entirely ignores race, class, gender, disability, and other axes of inequality. Norman very briefly states that capitalism has shaped the design of objects,17 but says it in passing and never relates it to the key concepts of the book. Race and racism appear nowhere. He uses the term women only once, in a passage that describes the Amphitheatre Louis Laird in the Paris Sorbonne, where “the mural on the ceiling shows lots of naked women floating about a man who is valiantly trying to read a book.”18 Gay, lesbian, transgender: none of these terms appear. Disability is barely discussed, in a brief section titled “Designing for Special People.” In this three-page passage, Norman describes the problems designers face in designing for left-handed people and urges the reader to “consider the special problems of the aged and infirm, the handicapped, the blind or near-blind, the deaf or hard of hearing, the very short or very tall, or the foreign.”19 He thus firmly subscribes to the individual/medical model of disability that locates disability in “defective” bodies and as a “problem” to be solved, rather than the social/relational model (that recognizes how society actively disables people with physical or psychological differences, functional limitations, or impairments through unnecessary exclusion, rather than taking action to meet their access needs20), let alone the disability justice model, created by Disabled B/I/PoC as they fight to dismantle able-bodied supremacy as a key axis of power within the matrix of domination.21 Norman provides a single footnote about a multilingual voice message system, and another about typewriter keyboards and the English language.22 In other words, the book is a compendium of designed objects that are difficult to use that provides key principles for better design, but it almost entirely ignores questions of how race, class, gender, disability, and other aspects of the matrix of domination shape and constrain access to affordances. Design justice is an approach that asks us to focus sustained attention on these questions, beginning with “how does the matrix of domination shape affordance perceptibility and availability?”

Affordance Perceptibility and Availability

First, we might ask whether any given affordance is equally perceptible to all people, or whether it systematically privileges some kinds of people over others. Gaver does recognize, but greatly downplays, the role that standpoint (in his terms, culture, experience, and learning) plays in determining affordance perceptibility. He acknowledges that culture and experience serve to “highlight” some affordances for a given user, but states that this is not “integral to the notion” of affordances.23 However, there are much stronger claims to be made about the ways that standpoint shapes affordance perceptibility. For example, Gaver describes the carefully designed perceptual cues that reveal the affordances of scroll bars to an (abstracted, universalized) user. However, a person who is blind or visually impaired, or who is interacting with a computer for the first time in their life, will receive few to none of the benefits of these cues. Nor will the perceptual cue of a floppy disk icon located beside the Save option in a dropdown menu help someone who has never used a floppy disk understand the affordance on offer, at least until they have learned what it means. A person unfamiliar with the Roman alphabet will not benefit from the perceptual information offered by the text “Save,” as anyone who has ever tried to use a computer with menus set to an unfamiliar language (let alone an unfamiliar alphabet!) will know. Affordance perceptibility also often differs for people who are colorblind, blind, or visually impaired, and for people who are deaf or hard of hearing. Standpoint thus determines whether an affordance is perceptible at all to a given user; affordance perceptibility is always shaped by location within the matrix of domination, and every affordance is more perceptible to some kinds of users than to others.

Second, in addition to perceptibility, design justice impels us to consider whether a given affordance is equally available to all people. For example, stairs (another example provided by Gaver) afford moving between two levels of a home for most people but deny this affordance to those whose type of mobility makes stairs difficult or impossible to use. For these users, stairs may provide a perceptible but unavailable affordance. An audible alert announcing the arrival of an instant message may enhance perception of the affordances of an instant message client for some users (those who are able to hear the alert, those who have the application minimized in the background, or those who are away from the computer while engaged in another task that occupies their visual attention), but offers no perceptual advantages to other users (those who are deaf or hard of hearing, who have their computers muted, who are in a very noisy workplace, etc.). An object’s affordances are never equally perceptible to all, and never equally available to all; a given affordance is always more perceptible, more available, or both, to some kinds of people. Design justice brings this insight to the fore and calls for designers’ ongoing attention to the ways these differences are shaped by the matrix of domination.

Disaffordances and Dysaffordances

As we have discussed, design affordances match perceptual cues with actions that can be performed with an object. In contrast, design disaffordances match perceptual cues with actions that will be blocked or constrained. In a paper about discriminatory design, philosopher of technology D. E. Wittkower provides many examples of disaffordances: a fence disaffords entry to a plot of land; a lock on a door disaffords entry without a key; and a fingerprint scanner on a mobile phone affords access to the phone’s content for the owner, while it disaffords access to all others.24 Wittkower also identifies dysaffordances (a subcategory of disaffordances), a term he uses for an object that requires some users to misidentify themselves to access its functions. For example, as a nonbinary person, I experience a dysaffordance any time I interact with a system, such as air-travel ticketing, that forces me to select either Male or Female to proceed. While a graduate student, Joy Buolamwini experienced the dysaffordances of facial detection technology, which failed to detect her dark-skinned face until she donned a white mask. This led her to systematically study bias in facial analysis technology and to found the Algorithmic Justice League.25 Design justice asks us to constantly consider the distribution of affordances, disaffordances, and dysaffordances among different kinds of people.

For example, for Gaver, a doorknob affords turning, and “the interaction of a handle with the human motor system determines its affordances. When grasping a vertical bar, the hand and arm are in a configuration from which it is easy to pull; when contacting a flat plate pushing is easier.”26 Design justice, grounded in critiques developed by the disability justice movement, asks us to question the universalizing assumption that there is only one configuration of the human motor system. Instead, there are many configurations; some will be privileged (supported) by a vertical bar as a mechanism to pull a door, and others will find that particular combination of object and action difficult or nearly impossible: an affordance for some is a disaffordance for others. For example, a small child might find it extremely difficult to open a door based on pulling a vertical bar at adult chest height; a more appropriate design solution for them (if the goal is to enable door-opening) might be a door that swings in both directions. This design is also common in scenarios in which users are not expected to have the use of their hands and arms for door-opening—for example, in doors to restaurant kitchens, where waitstaff’s hands and arms are often occupied with plates and dishes. The point of a design justice analysis here is not to impose a single, “best” design solution, but to recognize that affordances, disaffordances, and dysaffordances privilege some people over others.27

Both the perception and availability of any given affordance, as well as disaffordances and dysaffordances, are shaped, in part, by the matrix of domination.

Intention and Impact

Most designers today do not intend to systematically exclude marginalized groups of people. However, power inequalities as instantiated in the affordances and disaffordances of sociotechnical systems may be intentional or unintentional, and the consequences may be relatively small, or they may be quite significant. For example, technology writer Sara Wachter-Boettcher describes how default characters in “endless runner” game apps appear as “male” 80 percent of the time and “female” just 15 percent of the time (the other 5 percent are nonhuman characters). Default avatars in this game genre are an affordance that systematically privileges masc-identifying people over femme-identifying people, although this difference is likely unintentional and the impact is relatively small.28 In the built environment, perhaps the most famous (and controversial) example of discriminatory design is scholar of science, technology, and society Langdon Winner’s story about NYC urban planner Robert Moses’s low overpasses that (may have) blocked public buses from reaching the Rockaway beaches.29 For Winner, this illustrates how planners can structure racism into the built environment. Some have questioned whether this designed constraint was intentionally racist, while others have questioned whether bus traffic was ever, in fact, constrained at all.30 Winner’s broader point, however, developed throughout his many articles and several books on the topic, is that technologies embody social relations (power). Through a design justice lens, we might say more specifically that under neoliberal multicultural capitalism, most of the time designers unintentionally reproduce the matrix of domination (white supremacist heteropatriarchy, capitalism, and settler colonialism).

Most designers, most of the time, do not think of themselves as sexist, racist, homophobic, xenophobic, Islamophobic, ableist, or settler-colonialist. Some may consider themselves to be capitalist, but few identify as part of the ruling class. Many feel themselves to be in tension with capitalism, and many even identify as socialist. However, design justice is not about intentionality; it is about process and outcomes. Design justice asks whether the affordances of a designed object or system disproportionally reduce opportunities for already oppressed groups of people while enhancing the life opportunities of dominant groups, independently of whether designers intend this outcome.

Of course, sometimes designers do intentionally design objects, spaces, and systems that are explicitly oppressive. For example, as surveillance scholar Simone Browne excavates in her brilliant text Dark Matters: On the Surveillance of Blackness, the designers of slave ships used for the transatlantic slave trade intentionally designed, planned, and participated in the construction of ships, ship modifications, and cargo hold instruction manuals to afford the transport of the greatest possible number of enslaved African human beings per voyage.31 Designers of prisons and detention centers today also participate in explicitly oppressive projects. One architectural firm, HOK Justice Group, boasts that it has designed prison and detention facilities with more than one hundred thousand beds.32 Most recently, there is a debate within the American Institute of Architects over whether architects should participate in the redesign of immigration detention facilities to improve conditions for people who are held in them, or whether architects have an ethical responsibility to boycott such work entirely.33 Nearly two hundred design, engineering, and construction firms bid for contracts to build the Trump administration’s xenophobic border wall.34 In the final chapter of this book, we will return to this conversation in the form of the #TechWontBuildIt movement.

Discriminatory Design

Historian and scholar of race, gender, and science and technology studies Ruha Benjamin defines discriminatory design as the normalization of racial hierarchies within the underlying design of sociotechnical systems.35 Benjamin uses the example of the spirometer, a device meant to assess lung capacity: because early pulmonologists believed that “race” determined lung capacity, spirometers were built with a “race correction” button to adjust measurements relative to an expected norm. In a 1999 class-action lawsuit brought by fifteen thousand asbestos workers against their employer, this design made it difficult for Black workers to qualify for workers’ compensation because “they would have to demonstrate worse lung function and more severe clinical symptoms than those for white workers due to this feature of the spirometer, whose developer, Dr. John Hutchinson, was employed by insurance companies in the mid-1800s to minimize payouts.”36 For Benjamin, the reproduction of ideas about race and racial difference in the hardware, software, and operation of the spirometer is an example of how science and technology are central sites in modern “racecraft.” In her 2019 book Race After Technology, Benjamin expands her arguments about discriminatory design, analyzes multiple examples, and links the current conversation about machine bias to an analysis of systemic racism. Benjamin demonstrates the ways that racial discrimination becomes hidden, buried, and “upgraded” through the deployment of new technologies that hide oppression in the default settings and that mask racist logic as consumer choice or desire.37 In the recent edited volume Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life (2019), Benjamin gathers authors who explore how sociotechnical systems developed by the carceral state to support racial hierarchy and social control, such as electronic ankle monitors and predictive policing algorithms, have now been deployed in ever more domains of life, such as schools, hospitals, workplaces, and shopping malls.38

To take another example, an article by Soraya Chemaly for Quartz focuses on new technology that is designed for men, with the assumption that the user will be male.39 One study described by Chemaly found that virtual assistants Siri, Cortana, Google Assistant, and S Voice were all able to respond to queries about what to do in case of heart attack or thoughts of suicide, but none recognized the phrases “I’ve been raped” or “I’ve been sexually assaulted,”40 despite the high rates of rape, sexual assault, and intimate partner violence experienced by women and femmes.41 As Chemaly notes:

The underlying design assumption behind many of these errors is that girls and women are not “normal” human beings in their own right. Rather, they are perceived as defective, sick, more needy, or “wrong sized,” versions of men and boys. When it comes to health care, male-centeredness isn’t just annoying—it results in very real needs being ignored, erased or being classified as “extra” or unnecessary. To give another, more tangible example, one advanced artificial heart was designed to fit 86% of men’s chest cavities, but only 20% of women’s … the device’s French manufacturer Carmat explained that the company had no plans to develop a more female-friendly model as it “would entail significant investment and resources over multiple years.”42

Discriminatory design often operates through standardization. Everything from the average size and height of a seat in a car to the size of “ergonomic” finger depressions in tool handles to the size of frets on a guitar was initially developed based on statistical norms that privilege one-third world male bodies (one-third world is feminist scholar Chandra Mohanty’s reformulation of the dated and hierarchical term first world).43 Such discriminatory standards are not exceptions; rather, they shape technologies in nearly every field, including transportation, health, housing, and clothing.44

What’s more, although design that discriminates based on race and/or gender is often seen as problematic, social norms under capitalism do support systems design that intentionally reproduces class-based discrimination. For example, the intended purpose of a predictive algorithm used by the credit industry to determine home loan eligibility is to afford the loan officer a heightened ability to discriminate between those who are likely to be able to make loan payments and those who are likely to fall behind. Such a tool, by definition, promotes class-based discrimination, and when it does so, it is seen to be doing its job. However, when it discriminates based on a single-axis characteristic (race or gender or disability) that is explicitly protected by the law, then it is said to be biased.

In general, predictive algorithms often support and afford racist decision making. This happens constantly, although today’s algorithm developers (unlike the designers of redlining policies in the past) do not usually use race intentionally as a variable to lower loan-eligibility scores. Instead, algorithm developers and the banks that employ them use machine-learning techniques to produce risk constructs that are not explicitly tied to protected categories but that in practice serve as stand-ins for race, class, gender, disability, and other axes of oppression. The erasure of history and the failure to consider intersectional structural inequality underpin the pretense of “fairness” in such decision-making support systems, even as they work to reproduce the matrix of domination.
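To make the mechanism concrete, consider the following toy sketch. It uses synthetic data, invented variable names, and a generic logistic-regression scorer rather than any actual lender’s model; the point is only to show how a “race-blind” risk score can still reproduce racial disparity when a feature like neighborhood is correlated with race.

```python
# Toy sketch with synthetic data: a loan-scoring model that never sees the
# protected attribute still produces disparate approval rates, because
# "neighborhood" is correlated with it. Not any real lender's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # protected attribute (hidden from model)
neighborhood = (rng.random(n) < 0.2 + 0.6 * group).astype(float)  # segregation proxy
income = rng.normal(50 - 10 * group, 10, n)    # income gap from past discrimination
repaid = (income + rng.normal(0, 10, n) > 45).astype(int)  # labels encode that history

X = np.column_stack([neighborhood, income])    # "race-blind" features only
approved = LogisticRegression().fit(X, repaid).predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate = {approved[group == g].mean():.2f}")
# The model was never given `group`, yet approval rates diverge sharply.
```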

Disaffordances as Microaggressions

Discriminatory design, or the unequal distribution of affordances and disaffordances, may also be experienced as microaggressions by individuals from marginalized groups. Racial microaggressions are “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color.”45 Recently, research has extended the study of microaggressions to online interactions, such as in chat rooms, on social media platforms,46 and in real-time online multiplayer games.47 Microaggressions reproduce the matrix of domination; reaffirm power inequalities; generate a climate of tension within organizations and communities; produce physical, cognitive, and emotional shifts in targeted individuals; and, over time, reduce both quality of life and life expectancy for people from marginalized groups.48

Microaggressions are (often unintentional) expressions of power and status by individuals from dominant groups, performed against individuals from marginalized groups who may also frequently experience far more severe manifestations of oppression, such as physical violence, attack, rape, or murder, as well as severe forms of institutional inequality such as discriminatory exclusion from access to employment and housing. In many contexts, an individual experiencing a microaggression has no way of knowing whether it is about to escalate into something more severe. For example, as a trans* femme individual, I was walking home after dinner one evening last year. A car with tinted windows slowed down, cruised alongside me, and a deep voice from inside yelled out, “What is it? Is it a girl or a boy? Would you fuck that?” I had no way of knowing at that moment whether the aggression would remain at the level of verbal abuse, or if the situation was going to escalate to physical violence.49 Thus, although microaggressions are often read as relatively harmless and usually unintentional expressions of racial and/or gender bias, we can also understand them as small-scale, pervasive, daily, and constant performances of power. Metaphorically, they are the fabric, molecules, or smallest-level building blocks that constantly reproduce, replenish, and strengthen larger systems of oppression. They also serve to constantly put marginalized groups “in their place.”

Looking at biased systems through the lens of microaggressions means trying to understand the impact on individuals from marginalized groups as they encounter, experience, and navigate these systems daily. For example, a Black person might experience a microaggression if their hands do not trigger a hand soap dispenser that has been (almost certainly unintentionally) calibrated to work only, or better, with lighter skin tones. This minor interruption of daily life is nevertheless an instantiation of racial bias in the specific affordances of a designed object: the dispenser affords hands-free soap delivery, but only if your hands have white skin. The user is, for a brief moment, reminded of their subordinate position within the matrix of domination.50
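The mechanics of such a dispenser can be sketched in a few lines. The threshold and readings below are invented for illustration, but the logic mirrors how an infrared-reflectance sensor calibrated only on light-skinned testers ends up hard-coding the bias:

```python
# Illustrative sketch of a reflectance-triggered soap dispenser. Darker skin
# reflects less infrared light; a trigger threshold calibrated only on
# light-skinned hands encodes the bias. All values here are invented.

DISPENSE_THRESHOLD = 0.45  # set during testing with light-skinned hands only

def should_dispense(ir_reflectance: float) -> bool:
    """Dispense soap only if the sensed reflectance clears the threshold."""
    return ir_reflectance >= DISPENSE_THRESHOLD

print(should_dispense(0.70))  # lighter-skinned hand -> True (soap dispensed)
print(should_dispense(0.30))  # darker-skinned hand  -> False (ignored)
```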

For many people from marginalized groups, the ways that the matrix of domination is both reproduced by and produces designed objects and systems at every level—from city planning and the built environment to everyday consumer technologies to the affordances of popular social media platforms—generates a constant feeling of alterity. The sentiment that “this world was not built for us” is regularly expressed in intellectual, artistic, poetic, musical, and other creative production by marginalized groups. It is a common refrain, for example, in Afrofuturist work. Consider Jamila Woods’s lyrics: “Just cuz I’m born here, don’t mean I’m from here; I’m ready to run, I’m rocket to sun, I’m waaaaay up!”51 Experiences of design microaggression are proximally based on a particular interaction with an object or system. However, they instantiate, recall, and point to much larger systems, histories, and structures of oppression within the matrix of domination. Even if only for a moment, the user is “put in their place” through the interaction.

Attention to the real, cumulative, and lasting effects of what seem (to those who do not experience them) like minor microaggressions should not displace attention to the many ways that biased affordances often have quite significant and life-altering effects on marginalized groups of people, as in biased pretrial detention, sentencing, or home loan algorithms. Instead, we might say that design constantly instantiates power inequality via technological affordances, across domains, in ways both big and small. Seemingly minor instances may be experienced by individuals from marginalized groups as microaggressions, and these can have significant impacts as they accumulate over a lifetime. A design justice framework can help shift the conversation so that each time an instance of racial or gender bias in technology design causes a minor scandal, it will not be seen as an “isolated incident,” a “quirky and unintentional mistake,” or even used as fodder for an argument that “someone on the design team must have been racist/sexist.” Instead, design justice argues that such moments should be read as the most visible instances of a generalized and pervasive process by which existing technology design processes systematically reproduce (and are reproduced by) the matrix of domination.

Related Approaches: Value-Sensitive Design, Universal Design, Inclusive Design

Design justice builds on, but also differs in important ways from, related approaches such as value-sensitive design, universal design, and inclusive design. The second part of this chapter briefly explores these related frameworks, both in terms of shared concepts and in terms of differences in theory and practice.

Value-Sensitive Design

Science and technology scholars have long argued that tools are never neutral and that power is reproduced in designed objects, processes, and systems.52 In the 1990s, in an effort to address unintentionally biased design in computing systems, information scientists and philosophers Batya Friedman and Helen Nissenbaum developed the concept of value-sensitive design (VSD).53 In the earliest and most widely cited book on this approach, Human Values and the Design of Computer Technology, Friedman and Nissenbaum examine bias in computer systems and propose methods for the practice of VSD.54 They analyze seventeen computer systems from varied fields, expose instances of bias, and categorize them into three groups: preexisting bias, technical bias, and emergent bias. In preexisting bias, bias that exists in broader society, culture, and/or institutions is reproduced in the computer system, either intentionally or unintentionally, by systems developers. For example, graphical user interfaces typically embody a preexisting bias against vision-impaired people because the designers do not consider their existence at all, not because they consciously decide to exclude them.55 In technical bias, some underlying aspect of the technology reproduces bias; for example, the poor performance of optical sensors on darker-skinned people. In emergent bias, a system that may not have been biased given its original context of use or original user base comes to exhibit bias when the context shifts or when new users arrive—for example, Tay, the Microsoft chatbot that was trained to be sexist and racist by Twitter users.56 VSD does not assume that most designers are intentionally racist, sexist, or malicious. Instead, this approach emphasizes that many mechanisms that introduce unintentional bias are at play. These include “unmarked” end users, biased assumptions, universalist benchmarks, lack of bias testing, limited feedback loops, and, most recently, the use of systematically biased data sets to train algorithms using machine-learning techniques.57

Designers often assume that “unmarked” users occupy the most privileged position in the matrix of domination (a point discussed further in chapter 2). Science and technology scholar Ruha Benjamin has written about how normative assumptions lead to what she calls the “New Jim Code—the employment of new technologies and social design that reflect and reproduce existing inequities but which we assume are more objective or progressive than discriminatory systems of a previous era.”58 My personal experience of design teams in many contexts is that designers often assume users to be white, male, abled, English-speaking, middle-class US citizens, unless specified otherwise. Unfortunately, this experience is supported by research. For example, Huff and Cooper (1987) found that designers of educational software for children assumed the user to be male, unless it was specified that the users were girls.59 Other studies demonstrate that even designers from marginalized groups often make the same normative assumptions about unmarked users.60 In the United States, designers tend to assume the user has broadband internet access, unless it is specified that they don’t; that the user is straight, unless it’s specified that the user is LGBTQ; that they are cisgender, unless it’s specified that they are nonbinary and/or trans*; that they speak English as a first language, unless it’s specified otherwise; that they are not Disabled, unless specified that they are; and so on.

Although much of designed bias is unintentional, Nissenbaum and Friedman also ask, “What is the responsibility of the designer when the client wants to build bias into a system?” They conclude that systems should be evaluated for “freedom from bias” and that such evaluation should be incorporated into standards, curriculum, and society-wide testing: “Because biased computer systems are instruments of injustice … we believe that freedom from bias should be counted among the select set of criteria according to which the quality of systems in use in society should be judged. … As with other criteria for good computer systems, such as reliability, accuracy, and efficiency, freedom from bias should be held out as an ideal.”61

VSD provided an important shift in design theory and practice. However, design justice seeks more than “freedom from bias.” For example, feminist and antiracist currents within science and technology studies have gone beyond a bias frame to unpack the ways that intersecting forms of oppression, including patriarchy, white supremacy, ableism, and capitalism, are constantly hard-coded into designed objects, platforms, and systems.62 STS scholars and activists, such as those affiliated with the Center for Critical Race and Digital Studies,63 have explored these dynamics across many design domains, from consumer electronics to agricultural technologies, from algorithm design in banking, housing, and policing to search engines and the affordances of popular social media platforms. To take one recent example, the organizing that took place around Facebook’s “real name” policy illustrates how white supremacy and settler colonialism become instantiated in sociotechnical systems. As feminist blogger and cartoonist Alli Kirkham notes, “Native Americans, African Americans, and other people of color are banned disproportionately because, to Facebook, a ‘real’ name sometimes means ‘traditionally European.’”64 This happens, in part, because the machine-learning and natural-language-processing algorithms used to flag likely “fake” names were trained on “real name” datasets that overrepresent European names. A Native American user thus may experience a microaggression if their name is flagged as fake by Facebook’s Eurocentric fake-name algorithm. This microaggression may “only” be a small inconvenience in the course of the person’s day, yet it symbolically and materially invalidates the legitimacy of their identity. The system instantiates a new, tiny instance of the erasure of Native peoples (genocide) under settler colonialism. After significant pushback from various communities, Facebook claims that it has modified the algorithm to correct for this bias. However, no external systematic study has yet verified whether the situation has improved for those with non-European names.
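Facebook has never published its fake-name classifier, but the dynamic is easy to reproduce in miniature. In the deliberately simplified sketch below, a character-bigram model is “trained” on a name list that overrepresents European surnames; names built from letter combinations rare in that skewed list score low and get flagged, with no explicit rule about ethnicity anywhere in the code:

```python
# Deliberately simplified sketch (not Facebook's actual system): a bigram
# "realness" scorer trained on a European-heavy name list. The skewed
# training data alone produces the discriminatory flagging pattern.
from collections import Counter

training_names = ["smith", "johnson", "miller", "garcia", "brown",
                  "davis", "wilson", "anderson", "taylor", "thomas"]

bigram_counts = Counter()
for name in training_names:
    padded = f"^{name}$"
    bigram_counts.update(padded[i:i + 2] for i in range(len(padded) - 1))
total = sum(bigram_counts.values())

def realness_score(name: str) -> float:
    """Mean training-set frequency of the name's character bigrams."""
    padded = "^" + name.lower().replace(" ", "") + "$"
    pairs = [padded[i:i + 2] for i in range(len(padded) - 1)]
    return sum(bigram_counts[p] / total for p in pairs) / len(pairs)

CUTOFF = 0.01  # hypothetical flagging threshold
for name in ["Anderson", "Lone Hill", "Oxendine"]:
    print(name, "->", "flagged" if realness_score(name) < CUTOFF else "ok")
# With this toy cutoff, "Anderson" passes while the Lakota and Lumbee
# surnames are flagged—purely an artifact of who was in the training list.
```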

Together with the fight against hard-coded Eurocentricity, there have been extensive efforts to push back against various aspects of Facebook’s gender normativity.65 The LGBTQ community, and drag queens in particular, successfully organized to force Facebook to modify its real name policy. Many LGBTQ folks choose to use names that are not our given names on social media platforms for various reasons, including a desire to control who has access to our self-presentation, sexual orientation, and/or gender identity (SOGI). For many, undesired “outing” of a nonhetero- and/or cis-normative SOGI may have disastrous real-world consequences, ranging from teasing, bullying, and emotional and physical violence by peers to loss of family, housing instability, and denial of access to education, among others. For years, Facebook systematically flagged and suspended the accounts of LGBTQ people whom it suspected of not using real names, especially drag queens—and drag queens fought back. After several prominent drag queens began to leave the hegemonic social network for startup competitor Ello, Facebook implemented some modifications to its real name flagging and dispute process and instituted a new set of options for users to display gender pronouns and gender identity, as well as more fine-grained control over who is able to see these changes. However, as scholar of data, information, and ethics Anna Lauren Hoffman notes, the diverse gender options only apply to display; on the back end, Facebook still codes users as male or female.66

Figure 1.2 Screen capture of Facebook gender options. Source: Oremus 2014.

These are examples of how dominant values and norms are typically encoded in system affordances—in this case, assumptions about names, pronouns, and gender that were built into various aspects of Facebook’s platform. They also demonstrate how, through user mobilization, platforms can, to some degree, be redesigned to encode alternative value systems. We need to develop many more case studies of user activism that targets values-laden elements of system design.
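What Hoffman describes can be pictured as a split between the presentation layer and the stored record. The field names below are invented (Facebook’s actual data model is not public), but they capture the pattern: expressive options at the surface, a binary underneath:

```python
# Hypothetical sketch of the display/back-end split Hoffman describes.
# Field names are invented; Facebook's actual data model is not public.
from dataclasses import dataclass

@dataclass
class UserProfile:
    display_gender: str    # one of 50+ options, or free-form text
    display_pronouns: str  # e.g., "they/them"
    backend_sex: str       # still only "M" or "F" for ads and analytics

profile = UserProfile(
    display_gender="genderqueer",   # what the user and their friends see
    display_pronouns="they/them",
    backend_sex="F",                # what the targeting systems see
)
```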

VSD advocates have also proposed tools to help designers incorporate the approach in practice. For example, Nissenbaum, Howe, and game designer Mary Flanagan suggest a library of value analyses to be used by designers to quickly develop functional requirements.67 They also note that whether a particular design embodies the intended values is, in some cases, amenable to empirical inquiry. For example, they explore a hypothetical medical records system intended to promote user privacy through a multilevel permission system. In the thought experiment, the system fails to promote privacy in practice because users generally neglect to change default permissions, thereby widely exposing sensitive data—an outcome counter to the value intended by the designers. VSD proponents argue not only that technical artifacts embody values but also that it is possible for designers to deliberately design artifacts to embody a set of values that they choose. In a recently published book that provides an overview of three decades of VSD, Batya Friedman and David Hendry highlight key elements of the approach. They emphasize that VSD takes an interactional stance to technology and human values; that the values of various stakeholders implicated in the design should be considered by the designers; that values may be in tension with one another; that technology co-evolves with social structures; that we need to design for multi-lifespans; and that VSD emphasizes progress, not perfection.68 VSD has been an important intervention in the design of computing systems. I will return below to some of the differences between design justice and VSD.
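The medical-records thought experiment is easy to restate in code. The sketch below is hypothetical (the record contents and permission levels are invented), but it shows how a shipped default, rather than the designers’ stated value of privacy, ends up deciding the outcome:

```python
# Hypothetical restatement of the thought experiment: a multilevel
# permission system whose permissive shipped default defeats the
# designers' intended value (privacy), because users rarely change it.

DEFAULT_PERMISSION = "all_staff"  # invented default: the widest access level

class Record:
    def __init__(self, data: str, permission: str = DEFAULT_PERMISSION):
        self.data = data
        self.permission = permission  # "owner_only" | "care_team" | "all_staff"

# Most users never touch the setting, so records inherit the wide default:
records = [Record("HIV status"), Record("therapy notes")]
print([r.data for r in records if r.permission == "all_staff"])
# -> both records are broadly visible; the default decided, not the value.
```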

Disability and Universal(ist) Design?

In parallel to VSD, and with significant cross-pollination, over the last fifty years many designers have taken part in a long, slow shift toward deliberate design for accessibility. Historian Sarah Elizabeth Williamson describes how the disability rights movement worked for decades to transform discourse, policy, design, and practice, ultimately encoding rights to accessibility at multiple levels, including federal policy that governs architecture, public space, software interface design, and more.69 Committed activists were able to accomplish many of these changes across multiple design fields, as documented by art and design historian Bess Williamson, among many others.70 In computing, over time, a body of knowledge, examples, software libraries, automated tests, and best practices has grown along with a community of practice. Disabled people and their allies and accomplices implemented alternative interfaces such as text to speech; fought for engineering, architectural, and building standards to enable wheelchair access; convinced federal regulators to mandate closed captions in broadcast media; and much, much more.71 Standardization steadily lowered implementation costs. At the same time, a legal regime was put in place that required designers in many fields to implement accessibility best practices, as Aimi Hamraie writes in their recent book Building Access: Universal Design and the Politics of Disability.72

This is not at all to imply that design practices now fully reflect a normalized concern for accessibility or incorporation of disability rights, let alone a disability justice analysis. For example, communication scholar Meryl Alper has recently demonstrated that communication technologies meant to “give voice to the voiceless” continue to reproduce intersectional structural inequalities.73 At the same time, disability justice underpins real gains. Alison Kafer’s brilliant book Feminist, Queer, Crip draws from the history and practice of environmental justice, reproductive justice, disability justice, trans* liberation, and other movements to reimagine a radically inclusive world of Crip futures.74 For example, trans* and GNC people, some abled and some who identify as Crip (in a move that reclaims the pejorative “cripple” as an in-group term of pride), are simultaneously challenging ableist spaces and the sociotechnical reproduction of the gender binary by struggling for (and in many cases winning) the implementation of gender-neutral, accessible bathrooms in schools, universities, public buildings, and private establishments across the country and around the world. Certainly, there is a possible future where gender-neutral, accessible bathrooms are standardized in most architectural plans, as well as mandated by law, at least in all public buildings and spaces. Along the same lines, scholars and activists like Heath Fogg Davis are pushing back against both public and private information systems and user interface design that regards gender as binary, that requires self-identification as Male or Female via dropdown menus, and that fails to recognize the gender identity or pronouns of system users.75

The history of design and disability activism provides the cornerstone for design justice. First, this history teaches us that it is indeed possible for a social movement to impact design policy, processes, practices, and outcomes in ways that are very broad, deep, and long-lasting. Disability rights and disability justice activists have changed federal policy, forced the adoption of new requirements in a wide range of design processes, altered the way many designers practice their craft, and significantly changed the quality of life for billions of people, not only for those who presently experience disabilities or identify as Disabled. Design justice is deeply intertwined with the disability justice movement and cannot exist apart from it (in chapter 2 we will return to additional discussion of disability justice).

In large part due to the efforts of Disabled activists, an approach known as universal design (UD) has gained reach and impact over the last three decades. UD emphasizes that the objects, places, and systems we design must be accessible to the widest possible set of potential users. In the 1990s, the Center for Universal Design at North Carolina State University defined UD as “the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.”76 For example, following UD principles, we need to add auditory information to crosswalk signals so that they will also be useful for Blind people and for anyone who has difficulty seeing or processing visual indicators. UD principles have led to real and significant changes in many design fields. However, as Aimi Hamraie has described, there is a tension between UD and disability justice approaches.77 UD discourse emphasizes that we should try to design for everybody and that by including those who are often excluded from design considerations, we can make objects, places, and systems that ultimately function better for all people. Disability justice shares that goal, but also acknowledges both that some people are always advantaged and others disadvantaged by any given design, and that this distribution is influenced by intersecting structures of race, class, gender, and disability. Instead of masking this reality, design justice practitioners seek to make it explicit: we prioritize design work that shifts advantages to those who are currently systematically disadvantaged within the matrix of domination.

Inclusive Design

One group that has worked steadily to advance design practice that is not universalizing is the Inclusive Design Research Centre (IDRC). IDRC defines inclusive design as follows: “design that considers the full range of human diversity with respect to ability, language, culture, gender, age and other forms of human difference.”78 The IDRC’s approach to design recognizes human diversity, respects the uniqueness of each individual, and acknowledges that a given individual might experience different interactions with the same design interface or object depending on the context. In addition, this group also sees disability as socially constructed and relational, rather than as a binary property (disabled or not) that adheres to an individual. Disability is “a mismatch between the needs of the individual and the design of the product, system or service. With this framing, disability can be experienced by anyone excluded by the design. … Accessibility is therefore the ability of the design or system to match the requirements of the individual. It is not possible to determine whether something is accessible unless you know the user, the context and the goal.”79

The group of designers and researchers who use this approach call for “one size fits one” solutions over “one size fits all.” At the same time, they acknowledge that “segregated solutions” are technically and economically unsustainable. They argue that, at least in the digital domain, adaptive design that enables personalization and flexible configuration of shared core objects, tools, platforms, and systems provides a path out of the tension between the diverse needs of individual users and the economic advantages of a large-scale user base.80
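In code, the “one size fits one” pattern might look something like the sketch below: a single shared core, with each person’s stored preferences overlaid at render time. The preference keys here are invented for illustration:

```python
# Sketch of "one size fits one": a single shared core interface, adapted
# per person from a stored preference profile. Keys are invented here.

BASE_UI = {"font_scale": 1.0, "contrast": "normal",
           "captions": False, "language": "en"}

def adapt(base: dict, prefs: dict) -> dict:
    """Overlay one person's preferences on the shared core."""
    return {**base, **prefs}

# Profiles travel with the person, across contexts and devices:
amal = adapt(BASE_UI, {"language": "ar", "font_scale": 1.4})
joss = adapt(BASE_UI, {"captions": True, "contrast": "high"})
print(amal)  # Arabic interface at 140% scale; other settings unchanged
print(joss)  # captions on, high contrast; other settings unchanged
```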

Retooling for Design Justice

A paradigm shift to design that is meant to actively dismantle, rather than unintentionally reinforce, the matrix of domination requires that we retool. This means that there is a need to develop intersectional user stories, testing approaches, training data, benchmarks, standards, validation processes, and impact assessments, among many other tools. Yet the idea that we need to retool is sure to meet with great resistance. Physicist and philosopher of science Thomas Kuhn famously described how each scientific paradigm develops along with a widely deployed and highly specialized apparatus of experimentation, testing, and observation. These fixed costs reduce the likelihood of paradigm shift, absent a growing crisis where the current paradigm is unable to effectively account for discrepancies with the observed world. As Kuhn remarks: “As in manufacture, so in science—retooling is an extravagance to be reserved for the occasion that demands it.”81 As in manufacturing and in science, so in design: an intersectional critique of the ways that current design practices systematically reproduce the matrix of domination ultimately requires not only more diverse design teams, community accountability, and control, as we will explore in chapter 2, but also a retooling of the methods that shape so many design domains under the current universalist paradigm. That shift, however, will not come unless and until a large number of designers (and design institutions) become convinced that equitable design outcomes are a goal that is important enough to warrant retooling. It is my contention that this will only happen through organized, systematic efforts to demand design justice from a wide coalition of designers, developers, social movement organizations, policymakers, and everyday people. This section explores how a design justice analysis might help to rethink specific techniques and tools that designers use every day.

Make Me Think: Differential Cognitive Load

One of the most important goals in HCI, in particular for UI design, is to reduce the user’s cognitive load to a minimum. Put simply, people should not have to think too hard to use computers to perform desired tasks. This imperative provides the title of designer Steve Krug’s book Don’t Make Me Think, sometimes known as “the Bible of interface design.”82 The book is a clearly written rundown of best practices in web usability. Unfortunately, in it, the imagined user is “unmarked” and universalized. Terms like race, class, and gender never appear. Somewhat surprisingly, the term multilingual is absent, and there is only one quick reference to a UI that requires language selection. Krug does devote a section to accessibility, but mostly to note that adhering to accessibility standards is often a legal requirement and that most sites can be made accessible after the fact without much effort.83

Taking a design justice approach, with attention to the distribution of benefits and burdens, we might ask: Is it always (or ever) possible to reduce cognitive load for all users simultaneously? Perhaps not. Instead, designers constantly make choices about which users to privilege and which will have to do more work. UI decisions distribute higher and lower cognitive loads among different kinds of people. The point is not that it’s wrong to privilege some users over others; the point is that these decisions need to be made explicit.

Default language settings provide a simple example. In web application design in the United States, if the default interface language is US English, there will be a higher bounce rate from, for example, monolingual Spanish speakers.84 Providing an initial page that requires the user to make a language selection might reduce the bounce rate for this group and, over time, build a more multilingual user community on the site. However, this will also reduce overall traffic due to a loss of English-only users who don't want to click through the language-selection screen. On the other hand, if we choose to default to US English (as most sites in the United States do), we may lose site visitors who prefer Spanish (about forty-one million people in the United States speak Spanish at home).85 What's more, because design decisions privilege one group of users over another, we shape the user base to conform to our (implicit or explicit) assumptions. Future A/B testing processes will be skewed by our existing user base, leading us to continually make decisions that reinforce our initial bias (A/B testing is discussed in more depth in the next section). For example, a test between Spanish-language and English-language menus will then be more likely to favor the latter. In other words, initial design decisions about whom to include and exclude produce self-reinforcing spirals.
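
To make the tradeoff concrete, here is a minimal sketch (in Python, using Flask; the render helpers are hypothetical stand-ins, not any particular site's code) of one middle path: use the browser's Accept-Language header as the initial guess rather than hard-coding US English, and fall back to an explicit selection screen only when no supported language matches.

```python
# Minimal sketch: choose an initial interface language from the browser's
# Accept-Language header instead of hard-coding US English, with an
# explicit selection screen as the fallback. Illustrative only; the
# render_* helpers are hypothetical stand-ins for real templates.
from flask import Flask, request

app = Flask(__name__)
SUPPORTED_LANGUAGES = ["en", "es"]  # languages the site actually offers

def render_language_selector(languages):
    # Stand-in for a real template: ask the visitor to choose a language.
    return "Choose a language: " + ", ".join(languages)

def render_home(language):
    # Stand-in for a real template rendered in the chosen language.
    return f"Home page ({language})"

@app.route("/")
def home():
    # best_match returns the supported language the visitor's browser
    # prefers, or None if none of our languages match.
    lang = request.accept_languages.best_match(SUPPORTED_LANGUAGES)
    if lang is None:
        # No match: show a selection screen rather than silently
        # defaulting to English.
        return render_language_selector(SUPPORTED_LANGUAGES)
    return render_home(language=lang)
```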

Empirical studies support a strong critique of the idea that the same design is "best" for all users. For example, Reinecke and Bernstein found that most users preferred a user interface customized according to cultural differences. They note that it is not possible to design a single interface that appeals to all users; they argue instead for the design of "culturally adaptive systems."86 Indeed, web designers increasingly talk about culturally adaptive systems and hope to shift toward providing each user with a personalized experience based on what is known about them. On the one hand, this approach has real potential to escape the reproduction of existing social categories as variables that are used to shape experience; it may destabilize existing social categories and replace them with truly personalized, behavior-driven user experience (UX) and UI customization. However, in practice this approach also leads to the reproduction and reification of existing social categories through algorithmic surveillance, tracking users across sites, gathering and selling their data, and the development of filter bubbles (only showing users content that we believe they are comfortable with).

Universalization erases difference and produces self-reinforcing spirals of exclusion, but personalized and culturally adaptive systems are too often deployed in ways that reinforce surveillance capitalism.87 Design justice doesn't propose a "solution" to this paradox. Instead, it urges us to recognize that we constantly make intentional decisions about which users to center, and it holds us accountable for those choices. Community accountability, control, and ownership of design processes are the subject of chapter 2.

A/B Tests and Denormalizing the “Universal User”

A/B testing is one of the most widely used methods for making design decisions. In A/B testing, users arrive at a web page or application screen and are randomly assigned to one of two (or more) versions of the same interface. Most elements are held constant, but one element (e.g., the size of a particular button) is varied. Designers then carefully observe and measure user interactions with the page, with an emphasis on key metrics such as time to complete a task. Whichever version of the interface performs better according to this key metric is then adopted. This approach has led to vast improvements in common web application UI and UX.
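
As a minimal sketch of these mechanics (all names and numbers are invented for illustration, not any firm's actual system): assign each user deterministically to a variant, log the outcome metric, and compare aggregates.

```python
# Minimal sketch of A/B assignment and aggregate comparison.
# All names and numbers are invented for illustration.
import hashlib
from statistics import mean

def assign_variant(user_id: str) -> str:
    # Deterministic hash-based bucketing keeps each user in one variant
    # across visits.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Invented log of (variant, seconds to complete the task) observations.
observations = [("A", 12.1), ("B", 9.8), ("A", 11.4), ("B", 10.2)]

for variant in ("A", "B"):
    times = [t for v, t in observations if v == variant]
    print(variant, "mean task time:", round(mean(times), 2))
# The universalist move is to ship whichever variant "wins" overall,
# without asking for whom it wins.
```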

However, A/B testing is also nearly always deployed within a universalist design paradigm. For example, companies that operate platforms with billions of users, such as Google or Facebook, A/B test everything from the color of interface elements to major new features, from individual content items to recommendation algorithms. Based on the results, they roll out new changes to users. The underlying universalizing assumption is that A/B testing on existing users always results in a clear winner from the perspective of efficiency, reduced cognitive load, and user satisfaction, as well as (most importantly) profitability. The results of randomized A/B testing, it is assumed, will apply to all users. The change can be deployed, and the world will be a better place—or at least the firm will be more profitable. However, what is A/B testing actually for? It is widely seen as leading inexorably to "better UX" and "better UI." But a question must be asked: Better for whom? Absent this question, A/B testing reproduces structural inequality through several mechanisms.

First, we should critique (trouble, queer, or denormalize) the assumption that A/B testing is always geared toward improving UX, for the simple reason that it is actually geared toward increasing value for the product owner. The goals of the product owner are often in sync with the goals of many users, but this is not necessarily the case. For example, a product owner who wants to encourage users to share more personal information might A/B test various ways of encouraging (or requiring) users to do so. This is further complicated by the reality that product design decisions in medium to large firms are not necessarily made by the product designer. Instead, key decisions are frequently made further up the management chain. In this way, designers who may prefer a decision that would benefit users are often overruled by project managers or executives who prioritize profits. However, for the purposes of the present line of argument, this distinction is not important.

Second, we might destabilize the underlying assumption that what is best for the majority of users is best for all users. To take a simple example, consider a UI for personal profile creation on a university admissions portal. The site designer is required, for institutional diversity metrics, to request the race or ethnicity of the applicant. The designer is deciding how to implement the race/ethnicity selection process in as few clicks as possible to improve UX. In one version of the page (call it A), there is a default race/ethnicity set to White, Non-Hispanic. In a second version of the page (call it B), there is no default set, so the user must select their own race/ethnicity from a list. Now, keep in mind that the university applicant pool will reflect our broken and structurally unequal K–12 educational system, so the users of the site are disproportionately white. In a simple A/B test, the majority of (white) users would have a smoother experience, with fewer clicks required, under option A. However, can we therefore say that option A is the "best" option for user experience? In this case, what is best for the majority of current site visitors (set the default to White) produces an unequal experience, with the ever-so-slightly more time-consuming experience (additional clicks) reserved for PoC, who may also experience a microaggression in the process. Although our hypothetical "default to white" race/ethnicity dropdown is rarely implemented because of widespread sensitivity to such a blunt reminder of ongoing racial disparity, the same underlying principle is constantly used to develop and refine UX, UI, and other elements of sociotechnical systems.

How might we rethink A/B testing through a design justice lens? In some cases, it may not be a technique we can use. But in others, we may be able to compare responses from intersectional user subgroups. To generalize: imagine testing design options for an app with different kinds of users—for example, a group of Black women, a group of Black men, a group of white women, and a group of white men. If the design team sees statistically equivalent preferences from all groups, they may conclude that the design decision does not privilege one group over another. On the other hand, if the preferences of these different groups diverge, the design team must then discuss and intentionally decide what to do: if, say, both groups of women prefer one design, but both groups of men prefer another, the design team will have to make a decision about whose preferences to privilege.
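
A minimal sketch of this disaggregated comparison, with invented data, might look like the following: tally preferences per intersectional subgroup rather than relying on the aggregate winner alone.

```python
# Minimal sketch: disaggregate A/B preferences by intersectional
# subgroup instead of relying on the aggregate winner. Data invented.
from collections import Counter, defaultdict

# Each record: (race, gender, preferred variant)
results = [
    ("Black", "woman", "A"), ("Black", "man", "A"),
    ("white", "woman", "A"), ("white", "man", "B"),
    # ... many more observations in a real test ...
]

by_group = defaultdict(Counter)
for race, gender, variant in results:
    by_group[(race, gender)][variant] += 1

for group, counts in by_group.items():
    total = sum(counts.values())
    share_a = counts["A"] / total
    print(group, f"prefer A {share_a:.0%} of the time (n={total})")
# If subgroup preferences diverge, the team must explicitly decide
# whose preferences the shipped design will privilege.
```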

Intersectional Benchmarks

Unfortunately, most design processes do not yet systematically interrogate how the unequal distribution of user experiences might be structured by the user's position within intersecting fields of race, class, gender, and disability. Design justice proposes the normalization of these types of questions and their adoption as a key aspect of all types of design work. At the moment, other than ADA compliance, questions of bias typically only surface when systems obviously fail some subset of raced and/or gendered users—for example, soap dispensers with higher error rates for darker skin, or cameras that don't recognize eyes with epicanthic folds as open.88 Rather than understand these types of cases as marginalia, we might consider how they point to fundamental underlying problems of unexamined validation failure that are currently baked in to most design processes.

A paradigm shift to a design justice approach replaces universalizing assumptions about test validity with an array of intersectional validation tests. This requires significant changes to existing instrumentation and product-testing processes. Consider the hand soap dispenser. Prior to the release of a commercial product, product engineers subject prototypes to a range of tests; these tests typically must be passed at certain thresholds. In modern product design methods, they are likely to be couched in terms of user stories that must be completed and validated prior to product release. For example, "I am a user, and when I wave my hands beneath the dispenser within a range of 0–10 centimeters, soap is dispensed more than 95 percent of the time." Within the current (non-intersectional) paradigm, the user in this story is unmarked: their gender, race, age, class, and so on are not specified. If we shift to an intersectional framework, one of the implications is that we must restructure testing at all stages, from early prototypes through quality control in mass production, around what Algorithmic Justice League founder Joy Buolamwini has described as intersectional benchmarks.89
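
One way to operationalize such a benchmark is to require the user story's threshold to hold for every subgroup, not merely on average. A minimal sketch, with invented trial counts (the 95 percent threshold comes from the user story above):

```python
# Minimal sketch: the dispense-rate user story, validated per
# skin-tone subgroup rather than for an unmarked aggregate user.
# Trial counts are invented for illustration.
THRESHOLD = 0.95

# trials[subgroup] = (successful dispenses, attempts) from hardware tests
trials = {
    "lighter skin": (4920, 5000),
    "darker skin": (4110, 5000),
}

failing = []
for subgroup, (hits, attempts) in trials.items():
    rate = hits / attempts
    print(f"{subgroup}: {rate:.1%} dispense rate")
    if rate < THRESHOLD:
        failing.append(subgroup)

if failing:
    # In a real test suite this would block release: the user story
    # has not been validated until it holds for every subgroup.
    print("Benchmark FAILED for:", failing)
```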

Retrofitting against Racism

In the long view, we live in the relatively early stages of a shift of these concerns from margins to center. Design justice is not yet a community of practice that is powerful enough to retool design processes writ large. For the moment, instead, each inequitable design outcome is read as an outlier or a quirk. For example, instances of obvious racism or sexism in algorithmic decision systems are framed as unfortunate byproducts of a system of technology design that is, overall, seen as laudable—and, furthermore, unstoppable. Biased tools and sociotechnical systems occasionally generate attention, typically through public outrage on social media followed by a few news stories. At that point, the responsible design team, institution, or firm allocates a small amount of resources to correct the flaw in what is seen as an otherwise excellent product. Yet there is a world of difference between post hoc debiasing of existing objects and systems, even if done to meet intersectional benchmarks, and the inclusion of design justice principles from the beginning. This is not to say that the former is never worthwhile. Our world is composed of a vast accretion of hundreds of years of designed objects, systems, and the built environment. Most have not been designed with the participation or consent of, let alone accountability to or control by, communities marginalized within the matrix of domination. Few have been designed or tested using an intersectional lens. In this context, “retrofitting against racism” is a key component of improving and equalizing life chances and experiences for subjects at disparate locations within the race/class/gender/disability matrix. That said, a successful paradigm shift would obviate the need to engage in post hoc fixes for designed objects and systems that constantly produce inequitable outcomes.

From Algorithmic Fairness to Algorithmic Justice: Color Blindness, Symmetrical Treatment, Individualization of Equality, and the Erasure of Historical Discrimination

One of the most urgent areas in which to apply design justice principles is algorithmic decision support systems. There is a growing awareness of algorithmic bias, both in popular discourse and in computer science as a field. An ever-growing body of journalism and scholarship demonstrates that algorithms unintentionally reproduce racial and/or gendered bias (less attention has been focused so far on algorithms and ableism, and questioning algorithmic reproduction of class inequality is barely on the table since financial risk calculation is so deeply normalized as to be hegemonic).90 Algorithms are used as decision-making tools by powerholders in sectors as diverse as banking, housing, health, education, hiring, loans, social media, policing, the military, and more. Design justice calls for an analysis of how algorithm design both intentionally and unintentionally reproduces capitalism, white supremacy, patriarchy, heteronormativity, ableism, and settler colonialism.

For example, Safiya Noble, in her work Algorithms of Oppression, focuses our attention on the ways that search algorithms reproduce the matrix of domination through misrepresentation of marginalized subjects, especially through the circulation of hypersexualized images of Black girls and women (what Patricia Hill Collins calls controlling images).91 Virginia Eubanks, in Automating Inequality, unpacks how algorithmic decision support systems that punish poor people were implemented as a right-wing strategy to limit and roll back hard-fought access to social welfare programs that were won by organized poor people’s movements.92 Kate Crawford and the AI Now Institute at NYU are producing a steady stream of critical work. For example, they ask us to consider what it would look like if search algorithms operated according to a logic of agonistic democracy,93 and exhort us to imagine how algorithms might “acknowledge difference and dissent rather than a silently calculated public of assumed consensus and unchallenged values.”94 Joy Buolamwini, in her work with the Algorithmic Justice League, argues that we must develop intersectional training data, tests, and benchmarks for machine-learning systems.95 Buolamwini is best known for demonstrating that facial analysis software performs worst on women with darker skin tones, but also advocates for greatly increased regulation and oversight of facial analysis tools, against their use by military or law enforcement, and fights to limit their use against marginalized people across areas as diverse as hiring, housing, and health care.96

There is a growing community of computer scientists focused specifically on challenging algorithmic bias. Beginning in 2014, the FAT* community emerged as a key hub for this strand of work.97 FAT* has rapidly become the most prominent space for computer scientists to advance research about algorithmic bias: what it means, how to measure it, and how to reduce it. Papers about algorithmic bias are now regularly published in mainstream HCI journals and conferences. The keynote speech at the 2018 Strata Data Conference in Singapore focused on the need to use machine learning to monitor and counter algorithmic bias in machine-learning systems as they are deployed in myriad areas of life.98 This is all important work, although the current norm of single-axis fairness audits should be replaced by a new norm of intersectional analysis. In some cases, this will require the development of new, more inclusive training and benchmarking data sets. At the same time, design justice as a framework also requires us to question the underlying set of assumptions about “inclusion,” as STS scholar Os Keyes insists in their brilliant critique of the reproduction of the gender binary through data ontologies and algorithmic systems.99 Design justice also involves a critique of the idea of “fairness” that nearly all these efforts contain, as Anna Lauren Hoffman reminds us in her recent paper on the limits of antidiscrimination discourse.100

As Patricia Hill Collins writes about the erasure of historical discrimination, the individualization of equality, and the concept of “symmetrical treatment” that characterize the ideology of “color blindness” in the post-Brown v. Board of Education US legal system: “Under this new rhetoric of color-blindness, equality means treating all individuals the same, regardless of differences they brought with them due to the effects of past discrimination or even discrimination in other venues.”101 What’s more, the rhetoric of color blindness functions “as a new rule that maintains long-standing hierarchies of race, class, and gender while appearing to provide equal treatment.”102 Ruha Benjamin, in Race After Technology, develops the term “the New Jim Code” to highlight the ways that algorithmic decision systems based on historical data sets reinforce white supremacy and discrimination even as they are positioned by their designers as “fair,” in the “colorblind” sense. Racial hierarchies can only be dismantled by actively antiracist systems design, not by pretending they don’t exist.103

Unfortunately, most current efforts to ensure algorithmic fairness, accountability, and transparency operate according to the logic of individualized equality, symmetrical treatment, color blindness, and gender blindness. The operating assumption is that a fair algorithm is one that shows no group bias in the distribution of true positives, false positives, true negatives, and false negatives. For example, the widely read ProPublica article “Machine Bias” demonstrated that a recidivism risk algorithm overpredicted the likelihood of Black recidivism and underpredicted the rate of white recidivism.104 The algorithm allocated more false positives to Black people and more false negatives to white people. A debate ensued about whether the algorithm was really biased, and if so, how it could be “fixed.”

Design justice leads us to several key insights about this approach. First, the use of biased risk-assessment algorithms should not be discussed without reference to the context of the swollen prison industrial complex (PIC). A prison abolitionist stance does not support allocating additional resources to the development of tools that extend the PIC, even to make them “less biased.” Instead, pretrial detention should be minimized as much as possible and ultimately eliminated.

Second, in areas where it does make sense to invest in attempts to monitor, reveal, and correct algorithmic bias, such efforts must be intersectional, rather than single-axis. For example, in bias audits, we need to know the false positive rates for white men, white women, Black men, and Black women.
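
A minimal sketch of such an intersectional audit, with invented records, might compute false positive rates per race-and-gender subgroup like this:

```python
# Minimal sketch of an intersectional bias audit: false positive rates
# computed per (race, gender) subgroup. Records are invented.
from collections import defaultdict

# Each record: (race, gender, predicted high risk, actually reoffended)
records = [
    ("Black", "woman", True, False),
    ("Black", "man", True, False),
    ("white", "woman", False, False),
    ("white", "man", False, True),
    # ... many more audit records in practice ...
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for race, gender, predicted, reoffended in records:
    group = (race, gender)
    if not reoffended:
        negatives[group] += 1
        if predicted:
            false_pos[group] += 1

for group, n in negatives.items():
    print(group, f"false positive rate: {false_pos[group] / n:.0%} (n={n})")
```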

Third, we should challenge the underlying assumption that our ultimate goal in algorithm design is symmetrical treatment. In other words, we have to raise the question of whether algorithm design should be structured according to the logic of “fairness,” read as color and gender blindness, or according to the logic of racial, gender, and disability justice. The former implies that our goal is a fair algorithm that “treats all individuals the same,” within the tightly bound limits of its operational domain and regardless of the effects of past or present-day discrimination. The latter implies something else: that the end goal is to provide access, opportunities, and improved life chances for all people, and that this requires redistributive action to undo the legacy of hundreds of years of discrimination and oppression. We need to discuss the difference between algorithmic colorblindness and algorithmic justice.

For example, consider an algorithm for university admissions. An (individualized) algorithmic fairness approach attempts to ensure that any two individuals with the same profile, but who differ only by, say, gender, receive the same recommendation (admit/waitlist/decline). Auditing an admissions algorithm under the assumptions of algorithmic fairness can be conducted through paired-test audits: submit a group of paired, identical applications, but change only the gender of one of the applicants in each pair and observe whether the system produces the same recommendation for each. If the algorithm recommends admission for more men than women (at a statistically significant level) in otherwise identical paired applications, we can say that it is biased against women. It needs to be retrained and reaudited to ensure that this bias is eliminated. This is the approach proposed by most of the researchers and practitioners working on algorithmic bias today.105

To modify this approach to be intersectional rather than single-axis may be more difficult but does not require a fundamental shift. An intersectional paired-test audit of the admissions algorithm requires submitting a far greater number of paired applications, with groups of applications that are nearly identical but with modified identity markers across multiple axes of interest within the matrix of domination: race, class, gender identity, sexual orientation, disability, and citizenship status, for example. This allows analysis of whether the system is biased against, say, Black Disabled men, queer noncitizen women, and so on. One question about this approach is how many identity variables to include, because each adds complexity (and, in many situations, time and cost) to the audit. However, most of the researchers, developers, and engineers who are interested in correcting for algorithmic bias can probably be convinced that algorithmic bias analysis and correction should be intersectional and should at least include categories typically protected under US antidiscrimination law, such as sex, race, national origin, religion, and disability.
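
A minimal sketch of the intersectional paired-test audit just described (the attribute lists and the get_recommendation function are hypothetical stand-ins for the audited system) also makes the combinatorial cost visible:

```python
# Minimal sketch of an intersectional paired-test audit: generate
# applications identical except for identity markers across several
# axes, then compare the audited system's recommendations.
from itertools import product

base_application = {"gpa": 3.8, "test_score": 1450, "essay_id": 17}

axes = {
    "race": ["Black", "white", "Latinx", "Asian"],
    "gender": ["woman", "man", "nonbinary"],
    "disability": [True, False],
    "citizen": [True, False],
}

def get_recommendation(application):
    # Placeholder: call the audited admissions model here.
    return "admit"

outcomes = {}
for combo in product(*axes.values()):
    application = {**base_application, **dict(zip(axes, combo))}
    outcomes[combo] = get_recommendation(application)

# Any divergence among otherwise-identical applications is evidence of
# bias along the axes that differ. Note the combinatorial growth: every
# added axis multiplies the number of paired applications to submit.
divergent = len(set(outcomes.values())) > 1
print(len(outcomes), "variants;",
      "divergent" if divergent else "identical", "recommendations")
```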

Now, imagine auditing the same admissions algorithm, but under the assumptions of algorithmic justice. This approach is concerned not only with individualized symmetrical treatment, but also with the individual and group-level effects of historical and ongoing oppression and injustice within the matrix of domination, as well as how to ultimately produce a more just distribution of benefits, opportunities, and harms across all groups of people. In our example, this means that the algorithm designers must discuss, debate, and decide upon what they believe to be a just distribution of outcomes. For instance, they might decide that a just allocation of admissions decisions would produce an incoming class with a gender distribution that mirrors the general population (about 51 percent women). They might further decide that they would seek an incoming class in which intersecting race, class, gender, and disability identities also mirrored the proportions in the general population. Alternately, they might decide that their goal was to correct, as rapidly as possible, the currently skewed distribution of enrolled students across all four undergraduate years. In this case, if the current student population greatly underrepresented, say, Latinx students in relation to their demographic proportion of the broader population, then the admissions algorithm would be calibrated to admit a greater proportion of Latinx first years to “make up for” underadmissions in previous years. To take the thought experiment much further, perhaps the algorithm developers would decide that the goal was to correct bias in the admissions demographic data across the full institutional lifetime. To correct for the systematic exclusion of women and people of color during the first hundred years of the university’s existence, the algorithm might be calibrated to admit an entire class of women of color. Here, we are raising the question, “What would algorithmic reparations look like?”
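
To make one version of this thought experiment concrete, here is a minimal sketch (all numbers are invented, and, as noted below, such an approach would conflict with existing US antidiscrimination law) of calibrating offers toward a target gender distribution rather than toward symmetrical treatment alone:

```python
# Minimal sketch of one thought experiment above: allocate offers in
# proportion to target demographic shares (here, gender shares that
# mirror the general population) rather than in proportion to the
# applicant pool. All numbers are invented for illustration.
SEATS = 1000

targets = {"women": 0.51, "men": 0.49}  # target share of incoming class

# Size of each group's ranked applicant list (counts only, for brevity).
ranked_pool = {"women": 3000, "men": 5000}

offers = {}
for group, share in targets.items():
    quota = round(SEATS * share)
    # Offer seats to the top-ranked applicants within each group,
    # up to that group's target share of the class.
    offers[group] = min(quota, ranked_pool[group])

print(offers)  # {'women': 510, 'men': 490}
```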

My point is not to argue that this is exactly the outcome that should be sought for all algorithmic decision-making systems. My point is that the question of “What is a just outcome?” is not even on the table. Instead, our conversation remains tightly limited to a narrow, individualized conception of fairness. In the US context, it is highly unlikely that an algorithmic justice approach will advance, not least because in many instances this approach would violate existing antidiscrimination law. Nevertheless, as the conversation about algorithmic bias swells, and as we develop an array of tools to detect, mitigate, and counter algorithmic bias, we must propose alternate approaches, tools, configurations, and outcome metrics that would satisfy algorithmic justice. We must ask questions such as this: Within any decision-making system, what distribution of benefits do we believe is just?

Hard Coding Liberation: New Developments in Scholarship and Practice

This chapter began with a story about the (lack of) affordances of popular social media platforms for community organizing. It then opened into a critical discussion of the distribution of affordances under the matrix of domination, introduced the concept of disaffordances and dysaffordances, and described how affordances may be experienced as microaggressions. Design justice rethinks the universalizing assumptions behind affordance theory, requires us to ask questions about how inequality structures affordance perceptibility and availability, and takes both intentionally and unintentionally discriminatory design seriously.

Design justice builds on a long history of related approaches, such as value-sensitive design (VSD), universal design, and inclusive design. VSD provides some useful tools; however, it leaves many of the central questions of design justice unaddressed. VSD is descriptive rather than normative: it urges designers to be intentional about encoding values in designed systems but does not propose any particular set of values at all, let alone an intersectional understanding of racial, gender, disability, economic, environmental, and decolonial justice. VSD never questions the standpoint of the professional designer, doesn't call for community inclusion in the design process (let alone community accountability or control), and doesn't require an impact analysis of the distribution of material and symbolic benefits that are generated through design. Values are treated as disembodied abstractions, to be codified in libraries from which designers might draw to inform project requirements. In other words, in VSD we are meant to imagine that incorporating values into design can be accomplished largely by well-meaning expert designers. In design justice, by contrast, values stem from the lived experience of communities and individuals who exist at the intersection of systems of structural oppression and resistance.

The disability justice movement created many of the key concepts that underpin design justice, and has long articulated critiques of universalist design approaches. Much, or perhaps most, design work imagines itself to be universal: designers intend to create objects, places, or systems that can be used by anybody. Design justice challenges the underlying assumption that it is possible to design for all people. Instead, we must always recognize the specificity of which kinds of users will benefit most. Does this mean that design justice denies the very possibility of universal design? Perhaps design justice is an approach that can be applied to both universalist and inclusive (one size fits one) design projects. Design justice might help universalist design processes more closely approach their never fully realizable goals and provide useful insights to inclusive design processes. Retooling for design justice means developing new approaches to key design methods like A/B tests, benchmarks, user testing, and validation. In addition, this approach raises questions about the current dominant approach to the design of algorithmic decision support systems.

In the future, design justice must also help inform the development of emergent sociotechnical systems like artificial intelligence. Beyond inclusion and fairness in AI, we need to consider justice, autonomy, and sovereignty. For example, how does AI reproduce colonial ontology and epistemology? What would algorithmic decision making look like if it were designed to support, extend, and amplify indigenous knowledge and/or practices? In this direction, there is a growing set of scholars interested in decolonizing technologies, including AI systems. For example, designers Lewis, Arista, Pechawis, and Kite draw from Hawaiian, Cree, and Lakota knowledge to argue that indigenous epistemologies, which tend to emphasize relationality and “are much better at respectfully accommodating the non-human,” should ground the development of AI.106 Lilly Irani et al. have argued for the development of postcolonial computing;107 Ramesh Srinivasan has asked us to consider indigenous database ontologies in his book Whose Global Village;108 and anthropologist and development theorist Arturo Escobar has recently released a sweeping new book titled Designs for the Pluriverse.109

Escobar draws from decades of work with social movements led by indigenous and Afro-descended peoples in Latin America and the Caribbean to argue for autonomous design. He traces the ways that most design processes today are oriented toward the reproduction of the “one world” ontology. This means that technology is primarily used to extend capitalist patriarchal modernity and the aims of the market and/or the state, and to erase indigenous ways of being, knowing, and doing (ontologies, epistemologies, practices, and lifeworlds). Escobar argues for a decolonized approach to design that focuses on collaborative and place-based practices and that acknowledges the interdependence of all people, beings, and the earth. He insists on attention to what he calls the ontological dimension of design: all design reproduces certain ways of being, knowing, and doing. He’s interested in the Zapatista concept of creating “a world where many worlds fit,”110 rather than the “one world” project of neoliberal globalization.

Happily, research centers, think tanks, and initiatives that focus on questions of justice, fairness, bias, discrimination, and even decolonization of data, algorithmic decision support systems, and computing systems are now popping up like mushrooms all around the world. As I mentioned in this book's introduction, these include Data & Society, the AI Now Institute, and the Digital Equity Lab in New York City; the new Data Justice Lab in Cardiff; and the Public Data Lab.111 Coding Rights, led by hacker, lawyer, and feminist Joana Varon, works across Latin America to make complex issues of data and human rights much more accessible for broader publics, engage in policy debates, and help produce consent culture for the digital environment. They do this through projects like Chupadados ("the data sucker").112 Other groups include Fair Algorithms; the Data Active group; the Center for Civic Media at MIT; the Digital Justice Lab, recently launched by Nasma Ahmed in Toronto; Building Consentful Tech, by the design studio And Also Too in Toronto; the Our Data Bodies Project; and the FemTechNet network.113 There is also a growing number of conferences and convenings dedicated to related themes; besides FAT*, 2018 saw the Data4BlackLives conference, the 2018 Data Justice Conference in Cardiff, and the AI and Inclusion conference in Rio de Janeiro, organized by the Berkman Klein Center for Internet & Society, ITS Rio, and the Network of Centers; as well as the third design justice track at the Allied Media Conference in Detroit.114

Regardless of the design domain, design justice explicitly urges designers to adopt social justice values, to work against the unequal distribution of design's benefits and burdens, and to attempt to understand and counter white supremacy, cisheteropatriarchy, capitalism, ableism, and settler colonialism, or what Black feminist thought terms the matrix of domination. Design justice is interested in how to hard-code the liberatory values of intersectional feminism at every level of designed objects and systems, including the interface, the database, the algorithm, and sociotechnical practices "in the wild." What's more, this approach is interested not only in designed objects and systems, but in all stages of design, from the framing and scoping of design problems (chapter 3) to designing and evaluating particular affordances (as we explored in this chapter) to the sites where we do design work (chapter 4). The next chapter (chapter 2) unpacks the implications of design justice for the question, "Who gets to be a designer?"
