Tech is turning increasingly to religion in a quest to create ethical AI

Tech companies including Anthropic and OpenAI partnered with faith leaders in New York for the inaugural "Faith-AI Covenant" roundtable to establish ethical guidelines for AI. The initiative, led by the Interfaith Alliance for Safer Communities, aims to create global norms informed by diverse religious perspectives, though challenges remain due to differing values across traditions.
Tech companies are collaborating with faith leaders to shape ethical AI standards amid growing concerns over the technology's societal impact. Representatives from Anthropic and OpenAI met with religious groups, including the Hindu Temple Society of North America, the Sikh Coalition, and the Church of Jesus Christ of Latter-day Saints, at the inaugural "Faith-AI Covenant" roundtable in New York last week.

Organized by the Geneva-based Interfaith Alliance for Safer Communities, the event marked the first of planned global discussions, with future meetings scheduled in Beijing, Nairobi, and Abu Dhabi. The goal is to develop a set of shared principles to guide AI development, reflecting diverse faith-based perspectives.

Baroness Joanna Shields, a former Google and Facebook executive now in British politics, emphasized the urgency of direct dialogue between tech leaders and faith communities. "Regulation can't keep up with this," she said, highlighting the need for proactive ethical frameworks.

Some religious groups have already issued their own AI guidelines. The Church of Jesus Christ of Latter-day Saints approved AI as a tool for learning while cautioning that it cannot replace divine inspiration. Meanwhile, the Southern Baptist Convention passed a 2023 resolution urging proactive engagement with AI to mitigate future challenges.

Challenges persist, however, as global faiths hold differing priorities. Rabbi Diana Gerson noted that religious communities often see ethical issues through distinct lenses, complicating the creation of universal principles.

Anthropic, in particular, has taken a leading role in engaging faith leaders, following a public dispute with the Pentagon over military AI applications. The partnership reflects a broader effort to define "moral AI," though critics question whether such a concept is achievable.
Anthropic’s chatbot, Claude, operates under a constitution developed with religious and ethics experts, aiming to align its behavior with human ethical standards. The initiative signals a shift in Silicon Valley’s approach, moving beyond skepticism of organized religion to seek faith-based solutions for AI’s ethical dilemmas.