India's religious chatbots condone violence using the voice of god
Experts warn of potential dangers as new chatbots use AI to interpret Hindu scripture
As Vandana Poddar performs the Hindu puja ceremony daily at her home shrine in Mumbai, she's guided on her spiritual journey by the Bhagavad Gita, a 700-verse scripture.
She even attends a weekly class dissecting the deeper meaning of the ancient religious text, with her teacher providing examples to illustrate a particular passage.
"Interpretation is the backbone of this text," Poddar, 52, told CBC News. "Superficial knowledge can be misleading."
But many in India are forgoing that in-person contact with a guru interpreting the Bhagavad Gita and turning instead to online chatbots, which imitate the voice of the Hindu god Krishna and answer probing questions about the meaning of life based on the religious scripture's teachings.
It's new technology with the tendency to veer off script and condone violence, according to experts, who warn that artificial intelligence chatbots playing god can be a dangerous mix.
Several of the bots consistently provide the answer that it's OK to kill someone if it's your dharma, or duty.
In the Bhagavad Gita, written more than 2,000 years ago, the prince Arjuna is hesitant to go into battle where he will have to kill his family and friends until the Hindu god Krishna reminds him that as a warrior from the Kshatriya caste, it is his duty to fight.
"It's miscommunication, misinformation based on religious text," said Lubna Yusuf, a Mumbai-based lawyer and a co-author of The AI Book. "A text gives a lot of philosophical value to what they are trying to say and what does a bot do? It gives you a literal answer and that's the danger here."
At least five Gita chatbots appeared online in early 2023, powered by the language model Generative Pre-trained Transformer 3 (GPT-3). They're using artificial intelligence, which simulates a conversation and creates answers based on statistical probability models. The sites say they have millions of users.
The main page of one of them, Gita GPT, greets users typing in a question with "What troubles you, my child?" in an imitation of the voice of the Hindu god Krishna.
Another chatbot, Bhagavad Gita AI, introduces itself as "a repository of knowledge and wisdom" before telling the online user: "Ask me anything."
The smaller print on the same page states that "the answer may not be factually correct" and exhorts the user to do their own research "before taking any action."
Yusuf said the potential danger of answers that condone violence is more acute in a country like India, where religion is so emotionally charged.
"You're creating confusion in the chaos," Yusuf said, adding that some could use the chatbots' answers to further their own political interests and cause irreversible damage. "It might incite more violence, it might create religious bias."
She would like to see government regulation or guidelines on what topics should not be left in the hands of chatbots, such as philosophy, religion and law.
Other experts have also spoken out about the ethical concerns with mixing religion and statistical models, with one AI ethicist telling CBC the world of artificial intelligence is the "Wild West ethically right now."
"We can't control technology but we can control its application," said Jibu Elias, a New Delhi-based AI ethicist and researcher, when referring to the need for governments to set out guidelines.
But the Indian government, in a written submission, informed parliament in April that it has no plans to regulate artificial intelligence in the country, even while acknowledging the ethical concerns and risks around AI and promising to promote best practices.
Disclaimers and toxicity filters
Samanyou Garg, an AI entrepreneur who created the chatbot on Bhagavad Gita AI through his non-profit spiritual organization Ved Vyas Foundation, acknowledged there is still work to do on the technology, but said the same is true of anything this new.
"AI is still not there yet, where it can be totally trusted," he told CBC News at his home in New Delhi.
He pointed to a screen and highlighted steps he said he's taken to protect users from dubious answers, including very careful language and a disclaimer that shifts the responsibility onto the user's own judgment.
"We've mentioned Gita AI there. We haven't said it's the actual Gita or [that] it's Krishna speaking," Garg, 26, said, adding that he wanted the chatbot to be a companion, not a replacement, for a spiritual teacher.
The site is also working to constantly improve its toxicity filters, he said, but it takes time for the chatbot to catch up.
"We filter out the bad responses, we keep on training the model to be able to detect these newer toxicity questions."
For the young tech entrepreneur, the fact that his chatbot drew a surge of interest without any promotion proved that the service fills an essential role: introducing an ancient religious text to a younger audience.
He said that outweighed any initial pitfalls of the nascent technology.
But that's not the consensus at Poddar's weekly Bhagavad Gita class in the Juhu suburb of Mumbai, where chants ring out from the dozen or so students intent on extracting more wisdom from the scriptures.
Most here think outsourcing spirituality to computers is distasteful and short-sighted.
"When you are listening to somebody, your mind works," Bijal Pandya, the guru leading the study session, said. "You start thinking, you get new questions in your mind."
The Bhagavad Gita is full of emotions that keep changing, the 53-year-old said, and that's why debate is needed to tease out the text's true meaning.
"It's always better, that human touch," he added. "It's a spiritual thing. AI can never replace spirit. It is only replacing our intelligence."