Commentary: Anthropic’s ‘too dangerous to release’ Mythos AI model is a wake-up call for everyone
Anthropic's new AI model, Mythos, is considered too dangerous to release because of its potential to enable complex cyberattacks. The UK's AI Security Institute has tested Mythos and found that it poses a significant threat to weakly defended systems.
Anthropic has developed a new AI model, Mythos, that it has deemed too dangerous to release because of its potential use in complex cyberattacks. The UK's AI Security Institute has tested Mythos and found it to be more capable than other AI tools such as OpenAI's ChatGPT.

Large banks generally have well-secured IT systems, but small and medium-sized companies remain vulnerable to hackers equipped with a tool like Mythos. Advances in AI have also shortened the window between the publication of an IT flaw and its exploitation: AI tools can now identify and exploit system vulnerabilities faster than many organisations can patch them. This makes 'responsible disclosure' practices, in which researchers give vendors time to fix a flaw before publicising it, less effective.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.