Why AI companies want you to be afraid of them
AI company Anthropic warns that its latest model, Claude Mythos, could have world-altering consequences if it falls into the wrong hands. Critics argue that such warnings are a form of fear-mongering meant to distract from the real harms AI already causes and to boost stock prices.
Anthropic claims its latest AI model, Claude Mythos, is so powerful that misuse could have severe consequences for economies, public safety, and national security. The company warns that the model's ability to find cybersecurity bugs surpasses that of human experts, though some security researchers doubt these claims.

Critics argue that AI companies like Anthropic benefit from keeping the public fixated on apocalyptic scenarios, which distracts from the real harms their products already cause. Such fear-mongering can boost stock prices and reinforce a narrative that regulators should stand aside.

Anthropic's chief executive, Dario Amodei, has a history of warning about the dangers of AI: while at OpenAI in 2019, he cautioned against releasing GPT-2, which was later released anyway. OpenAI CEO Sam Altman has criticized Anthropic's "fear-based marketing," yet he has used similar tactics himself.