Today marks a pivotal moment in international AI governance as the OECD unveils the first round of reports submitted under the G7 Hiroshima AI Process Reporting Framework. This initiative, developed at the request of G7 leaders, aims to provide unprecedented transparency into the practices leading AI developers worldwide employ to ensure that advanced AI systems are developed in a safe, secure, and trustworthy manner.
The publication of these reports demonstrates the global commitment to AI transparency and reflects the first tangible outcomes of the reporting framework launched in February 2025. A diverse group of 19 organizations, ranging from major AI developers and established tech companies to emerging AI firms and academic institutions, has contributed to this initiative, highlighting the framework's wide-reaching impact.
The reports reveal crucial insights into how organizations conduct AI risk assessments, manage potential safety and security vulnerabilities, and implement governance frameworks for advanced AI systems. This information is now publicly accessible at transparency.oecd.ai, offering a unique resource for understanding current AI governance practices globally.
As part of an ongoing effort to promote transparency and accountability, the OECD encourages organizations that develop advanced AI systems and have not yet participated to submit their reports via the online platform. Moving forward, the OECD will analyze the submitted reports and share insights on best practices through the OECD AI Policy Observatory.