Artificial Intelligence

CAISI Signs Frontier AI Testing Agreements With 3 Companies

North America / United States

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) has signed agreements with Google DeepMind, Microsoft, and xAI to advance frontier AI testing for national security, in line with White House directives. The partnerships enable pre-release evaluations of AI models, testing in classified environments, and interagency collaboration through the TRAINS Taskforce; CAISI has completed more than 40 assessments to date.

The National Institute of Standards and Technology’s Center for AI Standards and Innovation (CAISI) has signed agreements with Google DeepMind, Microsoft, and xAI to support frontier AI testing and research focused on national security. The partnerships expand efforts to evaluate AI systems both before public release and after deployment, including testing in classified environments. The agreements follow terms renegotiated under directives from Commerce Secretary Howard Lutnick and the White House’s AI Action Plan.

CAISI has already conducted more than 40 evaluations, often involving unreleased AI models with reduced safeguards so that national security risks can be assessed. Developers collaborate by providing models tailored for security-focused assessments. The new agreements also open participation to government evaluators across agencies through the TRAINS Taskforce, an interagency group addressing AI-related national security issues. CAISI Director Chris Fal emphasized the role of independent measurement science in understanding frontier AI and its implications for national security.

CAISI serves as the Department of Commerce’s primary liaison for AI testing and for developing best practices. Recent initiatives include a partnership with OpenMined to evaluate AI while preserving data confidentiality and a collaboration with the General Services Administration to establish federal evaluation approaches. The organization also launched the AI Agent Standards Initiative to support secure adoption of agentic AI systems and has sought public input on automated AI benchmark testing. These expanded industry collaborations aim to scale CAISI’s work in the public interest during a critical phase of AI advancement, and they align with broader discussions of AI’s role in cybersecurity, including upcoming events such as the 2026 Cyber Summit.

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
