Front Burner

The week X's Grok AI went Nazi

Grok, the "anti-woke" chatbot from Elon Musk's xAI, went on racist, violent tirades just a week before the company launched its own AI "companion" characters and won a $200 million US military contract. Is AI safety being taken seriously enough?
CEO Elon Musk says xAI's chatbot Grok was designed to be "politically incorrect" and "anti-woke." The bot drew criticism earlier this month when it repeated racist stereotypes and praised Adolf Hitler. (Patrick Pleul/The Associated Press)

In the rapidly growing world of generative AI chatbots, Grok stands out. Created by Elon Musk's xAI and touted as a "politically incorrect," "anti-woke" alternative to models like ChatGPT, Grok has become a pervasive presence on Musk's social media platform X. So a lot of people took notice earlier this month when Grok started spouting anti-Semitic stereotypes, making violent, sexually charged threats, and dubbing itself "MechaHitler."

xAI says it has fixed the issue, which it attributed to a recent update. But the incident has raised concerns about the apparent lack of guardrails on the technology, particularly since, a week later, the company launched personal AI "companion" characters, including a female anime character with an X-rated mode, and won a $200 million US contract with the U.S. Department of Defense.

Kate Conger — a technology reporter with the New York Times and co-author of the book Character Limit: How Elon Musk Destroyed Twitter — explains what led to Grok's most recent online meltdown and the broader safety concerns about the untested tech behind it.

For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts

Subscribe to Front Burner on your favourite podcast app.
