Cybersecurity

Global Cyber Agencies Issue New SBOM for AI Guidance to Tackle AI Supply Chain Risks

The G7 Cybersecurity Working Group released a new framework defining minimum elements for Software Bills of Materials (SBOMs) for AI systems on May 12, aiming to enhance transparency in AI supply chains. The guidance outlines seven information clusters, including metadata, model properties, and security measures, but emphasizes that SBOMs alone are insufficient without integration with cybersecurity tools such as vulnerability scanning and security advisories.

A coalition of global cybersecurity agencies has published a new framework to standardize Software Bills of Materials (SBOMs) for artificial intelligence systems. The document, titled *Software Bill of Materials (SBOM) for Artificial Intelligence - Minimum Elements*, was released on May 12 by the G7 Cybersecurity Working Group. It builds on earlier efforts from June 2025 to improve transparency in AI supply chains by defining seven key clusters of information.

The seven clusters include metadata, system-level properties (SLP), model details, dataset properties (DP), key performance indicators (KPI), infrastructure, and security properties (SP). Each cluster covers critical aspects of AI systems, such as software dependencies, model training methods, data provenance, and cybersecurity measures. The paper clarifies that all clusters except metadata are equally important for assessing AI risks.

The document acknowledges that SBOMs for AI are not a standalone solution. It stresses the need to pair them with cybersecurity tools like vulnerability scanning, security advisories, and adaptive tooling to strengthen supply chain protection. The authors argue that without these integrations, SBOMs alone cannot ensure robust security for AI systems.

The framework was jointly developed by Germany's Federal Office for Information Security (BSI), Italy's National Cybersecurity Agency (ACN), France's National Cybersecurity Agency (ANSSI), and Canada's Communications Security Establishment, among others. Allan Friedman, a former leader of CISA's SBOM efforts, praised the clusters but noted challenges in defining and measuring some elements across organizations.

The guidance aims to help both public and private sectors improve accountability and trust in AI systems by providing a structured way to document components, dependencies, and risks. It also encourages further refinement of the framework to address evolving threats in AI development and deployment.
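To make the structure concrete, here is a minimal sketch of how the seven clusters might be organized in a machine-readable AI SBOM record. Only the cluster names come from the guidance as described above; every nested field name and value below is an illustrative assumption, not the official schema.

```python
# Hypothetical AI SBOM record organized around the seven clusters named
# in the G7 guidance. All nested keys and values are illustrative
# placeholders, not fields defined by the actual document.
ai_sbom = {
    "metadata": {
        "name": "example-classifier",          # hypothetical component name
        "version": "1.0.0",
        "supplier": "Example Org",
    },
    "system_level_properties": {               # SLP: software dependencies etc.
        "software_dependencies": ["numpy", "torch"],
    },
    "model_details": {
        "architecture": "transformer",         # illustrative
        "training_method": "supervised fine-tuning",
    },
    "dataset_properties": {                    # DP: data provenance
        "provenance": "public web corpus (example)",
        "license": "CC-BY-4.0",
    },
    "key_performance_indicators": {            # KPI: evaluation metrics
        "accuracy": 0.92,                      # placeholder figure
    },
    "infrastructure": {
        "training_hardware": "GPU cluster (example)",
    },
    "security_properties": {                   # SP: cybersecurity measures
        "vulnerability_scan_performed": True,
    },
}

# All seven clusters described in the guidance are represented.
assert len(ai_sbom) == 7
```

A record like this could then be cross-referenced by vulnerability scanners or security advisories, which is the kind of tooling integration the guidance says SBOMs need to be paired with.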

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
