Why this MIT researcher is calling for 'algorithmic justice' against AI biases
'Algorithms of discrimination persist,' says Joy Buolamwini, who is fighting for AI accountability


Joy Buolamwini is at the forefront of artificial intelligence research, documenting the many ways AI systems have caused harm through racial bias, gender bias and ableism. She is the founder of the Algorithmic Justice League, an organization working to make AI accountable.
"The rising frontier for civil rights will require algorithmic justice. AI should be for the people and by the people, not just the privileged few," Buolamwini writes.
Her research as a graduate student at MIT led her to call out Microsoft, IBM, Amazon and other tech giants whose facial recognition systems failed to accurately identify people of colour. The worst results were for darker-skinned women. To make matters worse, this flawed facial recognition software was already in use by corporations and law enforcement agencies.
She first discovered the limits of face detection as she was working on a creative computing project.
"Face detection wasn't really detecting my face until I put on a white mask. It was Halloween time, I happened to have a white mask around. Pull on the white mask, the face of the white mask is detected. Take it off, my dark-skinned face, the human face, the actual face, not detected. And so this is when I said: hmmm what's going on here?"
In the years since, she has been a fierce advocate for correcting algorithmic bias, a problem she says will cost society dearly if it isn't addressed.
Here's an excerpt from Joy Buolamwini's Rubenstein Lecture, delivered at the Sanford School of Public Policy at Duke University in February 2025.
"Show of hands. How many have heard of the male gaze? The white gaze? The postcolonial gaze?
"To that lexicon, I add the coded gaze, and it's really a reflection of power. Who has the power to shape the priorities, the preferences — and also at times, maybe not intentionally — the prejudices that are embedded into technology?
"I first encountered the coded gaze as a grad student working on an art installation…. I literally had to put on a white mask to have my dark skin detected. My friend, not so much. This was my first encounter with the coded gaze.
"I shared the story of coding in a white mask on the TEDx platform. A lot of people saw it. So I thought, you know what? People might want to check my claims — let me check myself."
"I took my TEDx profile image, and I started running it through online demos from different companies. And I found that some companies didn't detect my face at all. And the ones that did misgendered me as male. So I wondered if this was just my face or other people's faces.
"So it's Black History month [the lecture was recorded in February 2025]. I was excited to run some of the cast from Black Panther. In some cases there's no detection. In other cases there's misgendering... You have Angela Bassett — she's 59 in this photo. IBM is saying 18 to 24. So maybe not all bias is the worst.
"What got me concerned was moving beyond fictional characters and thinking about the ways in which AI, and especially AI Field facial recognition, is showing up in the world.
"Leading to things like false arrests, non-consensual deep fakes as well for explicit imagery. And it impacts everybody, especially when you have companies like Clearview AI, that has scraped billions of photos courtesy of social media platforms. Not that we gave them permission, but this is what they've done.
"So as we think about where we are in this stage of AI development, I oftentimes think of the excoded — the excoded represents anyone who's been condemned, convicted, exploited, otherwise harmed by AI systems."

"I think of people like Porcha Woodruff, who was eight months pregnant when she was falsely arrested due to facial recognition misidentification. She even reported having contractions while she was being held. What's crazy to me about her story is that a few years earlier, the same police department falsely arrested Robert Williams in front of his two young daughters and his wife.
"So this isn't a case where we didn't know there were issues. Right. But it was willful negligence in some cases to continue to use systems that have been shown time and time again to have all kinds of harmful biases. These algorithms of discrimination persist. And that's one way you can be excoded."
"Another way is we have algorithms of surveillance.
"Some of you, as you are flying home for the holidays or other places, you're likely starting to see airport face scans creeping up. And so the hand of surveillance continues to extend.
"And then you have algorithms of exploitation. Celebrity will not save you. Lighter skin will not save you. We've seen with the rise of generative AI systems, the ability to create deep fakes and impersonate people, whether it's non-consensual explicit photos of Taylor Swift or Tom Hanks selling you a dental plan he's never, ever heard of. "
Download the IDEAS podcast to hear the full episode.
*Excerpt edited for clarity and length. This episode was produced by Seán Foley.