OpenAI offers EU access to new AI hacking model

OpenAI is negotiating with the European Commission to give EU authorities access to an AI model capable of detecting software vulnerabilities, in contrast with rival Anthropic, which has not granted similar access. The proposal is being led by OpenAI executive George Osborne, who contacted the Commission over the weekend and is now reaching out to individual member states.
OpenAI, the company behind ChatGPT, is in discussions with the European Commission to offer EU authorities access to an advanced AI model designed to identify software vulnerabilities. The move follows weeks of concern in Europe over the cybersecurity risks posed by emerging AI technologies.

The initiative is being spearheaded by OpenAI's lead executive on the project, George Osborne, former U.K. Chancellor of the Exchequer, who contacted the Commission late Sunday into Monday. Osborne confirmed that the company is now reaching out to individual member states.

Unlike OpenAI, its competitor Anthropic has not given the EU access to its own cybersecurity-focused AI model, Mythos. OpenAI's decision could therefore give European regulators a strategic advantage as they seek to strengthen defenses against AI-driven cyber threats: the model's capabilities could help authorities detect and mitigate vulnerabilities in critical software systems more effectively.

The proposal also highlights a growing divide in how major AI firms respond to regulatory requests, particularly in sensitive areas like cybersecurity. While OpenAI appears willing to collaborate with EU institutions, Anthropic's reluctance may raise questions about industry-wide cooperation on security-related AI tools. The European Commission's response to OpenAI's offer will be closely watched by policymakers and cybersecurity experts across the continent.
This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.