Tech is turning increasingly to religion in a quest to create ethical AI

Representatives of tech companies including Anthropic and OpenAI joined religious leaders at the inaugural Faith-AI Covenant roundtable in New York to collaborate on ethical AI guidelines, marking a shift from Silicon Valley's historical skepticism of organized religion. The initiative, led by the Geneva-based Interfaith Alliance for Safer Communities, aims to develop global norms for AI morality, though differing religious values and priorities make consensus a challenge.
Tech companies are increasingly partnering with faith leaders to address ethical concerns in artificial intelligence development. Representatives from Anthropic and OpenAI met with religious leaders last week in New York for the first Faith-AI Covenant roundtable, organized by the Geneva-based Interfaith Alliance for Safer Communities. Participants included the Hindu Temple Society of North America, the Baha'i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America, and The Church of Jesus Christ of Latter-day Saints, among others. The goal is to create a set of global ethical norms for AI, drawing on diverse religious perspectives to guide technology companies in responsible decision-making.

Baroness Joanna Shields, a former tech executive at Google and Facebook, emphasized the urgency of the initiative, arguing that regulation cannot keep pace with AI advancements. She said faith leaders, with their expertise in moral guidance, should play a key role in shaping AI ethics, and noted that many AI developers recognize the power of their work and want to ensure it aligns with ethical standards.

The collaboration builds on ethical guidance some faith traditions have already issued on AI. The Church of Jesus Christ of Latter-day Saints, for example, has endorsed AI as a tool for learning while cautioning against letting it replace divine inspiration. The Southern Baptist Convention passed a 2023 resolution urging proactive engagement with AI to shape its impact on churches and communities. Challenges remain, however, as religious groups prioritize different values, making consensus difficult.

Anthropic has taken a leading role in engaging with faith leaders, publicly incorporating ethical principles into its AI systems. The company's 'Claude Constitution' for its chatbot was developed with input from religious and ethics experts, aiming for AI responses that reflect deeply ethical human judgment.
The initiative signals a broader effort to bridge tech and faith in addressing the moral implications of AI, though questions persist about the feasibility of creating universally accepted ethical standards.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.