Policy

Thoughts on the US Executive Order, G7 Code of Conduct, and Bletchley Park Summit


Three major events in AI policy happened in the last week: the US government issued a wide-ranging Executive Order on AI, the G7 produced an International Code of Conduct, and the UK government held a first-of-its-kind summit on AI safety at Bletchley Park which produced the Bletchley Declaration. In this post, we briefly summarize each of these events and what we believe they mean for AI policy.

US Executive Order
At more than 100 pages, the comprehensive executive order (EO) addresses the full spectrum of risks posed by AI, including its impacts on privacy, fairness, and bias, as well as catastrophic risks. It also recognizes the benefits AI promises, directing government departments to identify opportunities to harness the technology to improve government services and functions, including by appointing Chief AI Officers and promoting AI innovation within their agencies.

In line with our previous calls, we are encouraged to see the EO direct significant new efforts at the National Institute of Standards and Technology (NIST), such as developing evaluations of model capability and safety characteristics, expanding the Secure Software Development Framework to incorporate secure development practices for frontier models, and building on the successful AI Risk Management Framework by developing a companion resource for generative AI.

Additionally, the EO launches a pilot of the National AI Research Resource (NAIRR), which aims to enhance academic researchers’ access to the data and compute they need to conduct AI safety research and develop beneficial applications of the technology, while supporting US innovation. Anthropic has consistently supported NAIRR since our founding (as well as the associated CREATE AI Act) and we look forward to finding ways to support the NAIRR pilot.

G7 International Code of Conduct
Developed through the G7 Hiroshima Process, the Code of Conduct for Organizations Developing Advanced AI Systems builds on the voluntary AI company commitments announced by the White House. It describes a set of responsible practices to identify and mitigate risks across the AI development and deployment lifecycle, including through evaluations, information sharing, governance approaches, security procedures, and transparency measures. The Code sets important expectations for frontier AI companies and, if widely endorsed by governments and companies, can serve as a baseline for international best practice and domestic regulation. Anthropic supports the G7 Code of Conduct, which, alongside the White House commitments, will inform our development and deployment practices.

AI Safety Summit
The UK government hosted a historic summit on AI safety at Bletchley Park, convening government officials and experts from academia, industry, and civil society to discuss concerns around frontier AI and measures that could be taken to address them. Anthropic’s CEO, Dario Amodei, delivered remarks on Anthropic’s Responsible Scaling Policy, which we hope can serve as a guide for other frontier AI developers and a prototype for potential regulatory approaches (though crucially, not as an alternative to regulation).

The 28 countries represented issued the Bletchley Declaration, a broad statement calling for multi-stakeholder action to harness the benefits of the technology and address its risks; notably, its signatories include China and developing countries. The countries also agreed to support an international assessment of existing research on frontier AI capabilities and risks to inform governments on the state of the science and identify priority research areas for AI safety. We welcome the Declaration and the scientific research panel, and hope they will spur concrete and sustained international cooperation.

Government AI Safety Institutes
Also at the Summit, the UK announced that its Frontier AI Task Force would reconstitute as the AI Safety Institute: the first significant government initiative focused on evaluating the risks of frontier AI models. The AI Safety Institute has technical experts on staff who have already been working to privately test frontier AI models from labs including Anthropic. This is the most technically sophisticated government engagement that Anthropic has participated in so far; we hope there will be more like it.

Alongside this, the United States announced plans to launch a US companion to the UK AI Safety Institute via a NIST consortium that will support the development of methods for evaluating AI systems, with a focus on safety and trustworthiness. NIST will also develop guidelines for red teaming models for frontier threats, that is, for identifying potentially dangerous dual-use capabilities. (Though we’d note that, given how important this work is, finding ways to increase funding for NIST would further increase its ability to get things done in this domain.)

We have previously outlined the challenges of evaluating AI systems. Governments in general should invest in their own capabilities to measure and monitor AI, and we are encouraged by the efforts of the US, UK, and Singapore, as well as proposals in Europe. Effective ways of measuring the technology are a foundational requirement for sensible regulation. We also believe that advancing the science of evaluation and establishing reliable, independent testing protocols will help level the AI playing field, allowing less-resourced companies to compete with larger ones on the same safety evaluations while providing objective information to governments, customers, and the public.

Conclusion
This week’s announcements mark the beginning of a new phase of AI safety and policy work. Major world governments are demonstrating unprecedented interest and engagement in evaluating and monitoring AI systems, and it’s clear that much of their focus rests on the ability to test and evaluate AI systems for capabilities, potential for misuse, and inherent safety properties. We are committed to playing our part in realizing these objectives and to encouraging a safety race to the top.