How emerging technologies amplify racism—even when they're intended to be neutral
"One of the things I'm trying to do with this book is to explode what we think of as technology."
If you've seen the movie The Matrix, you might remember the 'cat scene': Neo (played by Keanu Reeves) and his fellow rebels are in the Matrix, the artificial world created by machines to enslave humans. Neo sees a black cat cross his path.
And then the same black cat passes by again.
Neo brushes it off as déjà vu. But the rebels recognize it as a 'glitch': a hiccup that happens when their machine overlords change something in the Matrix. It's a glimpse behind the curtain at the oppressive reality.
The Matrix analogy is just one example in Ruha Benjamin's book, Race After Technology: Abolitionist Tools for the New Jim Code.
Benjamin spoke with Spark host Nora Young about her book.
Here's part of their conversation.
Can you talk a little bit about what we know about the problems with facial recognition technology?
[Research has shown that] not only are most of these systems faulty at detecting people with darker skin, they are also known to misgender Black women in particular. A lot of that has to do with the training data that goes into teaching algorithms how to detect individuals. And so at a strictly technical level these systems are often incorrect, but because of the larger context in which Black people and Black communities are over-policed, those errors also increase the likelihood of false positives.
And so it's not just police use; it's across our institutions: education, health care. We have to be wary of the idea that these systems are accurate and neutral, but also of the way that they reinforce racist forms of policing and surveillance.
That kind of example is often used as an argument for why tech companies need more diverse teams. Is it as simple as that, or is that a necessary but not sufficient condition to address this kind of problem?
It's important. We have to think about who's behind the screens right now. We have a narrow slice of humanity, in terms of gender, race, class, nationality and language, designing the digital infrastructure that the rest of us have to deal with. So certainly diversifying that process is important, but it's definitely not sufficient.
One of the examples that comes to mind is from a few months ago, when this research about facial recognition not working on darker-skinned people had started circulating. Google was about to launch its new Pixel 4 phone, and it wanted to diversify the training data for the phone so that the product would work on a wider variety of phenotypes. In the process, what they did was hire contract workers in Atlanta, who were specifically directed to walk up to Black homeless people on the street and get them to play with the phone and take a selfie, [without telling them] explicitly that this was to train a facial recognition system.
The contract workers thought something was fishy, so they went to the media and blew the whistle, saying: we're being told to give these individuals five-dollar gift cards in exchange for their selfies, without fully consenting them to what they're involved in.
And so this, for me, is a cautionary example, because it tells us that the pursuit of 'inclusive' products can often rest on a coercive process, one that does not fully consent people and that ignores the fact that these very same systems are likely to be deployed against homeless people and against Black communities.
Another part of your argument that I found quite fascinating was the idea that race itself is a technology. Can you explain?
One of the things I'm trying to do with this book is to explode what we think of as technology. It's not just the hardware and software that we're sold by Silicon Valley, but also the social technologies: the ways that our laws, our policies and our social norms create hierarchies and inequalities.
And so if we think about a tool in its most generic sense, as something that creates things in the world and generates outcomes, then we can begin to think about the tools that we use every day to make decisions: to judge one another, to rank and classify.
And in that case, race becomes one of the most powerful tools that our modern society employs, one that in some cases creates parallel universes. Think about the way that, for example, our real estate policies and practices have created a landscape in which we're more racially segregated today, in 2020, than at the end of the Civil War.
This interview has been edited for length and clarity.