Thoughts on America’s AI Action Plan

Today, the White House released "Winning the Race: America's AI Action Plan"—a comprehensive strategy to maintain America's advantage in AI development. We are encouraged by the plan’s focus on accelerating AI infrastructure and federal adoption, as well as strengthening safety testing and security coordination. Many of the plan’s recommendations reflect Anthropic’s response to the Office of Science and Technology Policy’s (OSTP) prior request for information. While the plan positions America for AI advancement, we believe strict export controls and AI development transparency standards remain crucial next steps for securing American AI leadership.

Accelerating AI infrastructure and adoption

The Action Plan prioritizes AI infrastructure and adoption, consistent with Anthropic’s submission to OSTP in March.

We applaud the Administration's commitment to streamlining data center and energy permitting to address AI’s power needs. As we stated in our OSTP submission and at the Pennsylvania Energy and Innovation Summit, without adequate domestic energy capacity, American AI developers may be forced to relocate operations overseas, potentially exposing sensitive technology to foreign adversaries. Our recently published “Build AI in America” report details the steps the Administration can take to accelerate the buildout of our nation’s AI infrastructure, and we look forward to working with the Administration on measures to expand domestic energy capacity.

The Plan’s recommendations to increase the federal government's adoption of AI also include proposals closely aligned with Anthropic’s policy priorities and recommendations to the White House. These include:

  • Tasking the Office of Management and Budget (OMB) to address resource constraints, procurement limitations, and programmatic obstacles to federal AI adoption.
  • Launching a Request for Information (RFI) to identify federal regulations that impede AI innovation, with OMB coordinating reform efforts.
  • Updating federal procurement standards to remove barriers that prevent agencies from deploying AI systems.
  • Promoting AI adoption across defense and national security applications through public-private collaboration.

Democratizing AI’s benefits

We are aligned with the Action Plan’s focus on ensuring broad participation in and benefit from AI’s continued development and deployment.

The Action Plan’s continuation of the National AI Research Resource (NAIRR) pilot ensures that students and researchers across the country can participate in and contribute to the advancement of the AI frontier. We have long supported the NAIRR and are proud of our partnership with the pilot program. Further, the Action Plan’s emphasis on rapid retraining programs for displaced workers and pre-apprenticeship AI programs recognizes the errors of prior technological transitions and demonstrates a commitment to delivering AI’s benefits to all Americans.

Complementing these proposals are our efforts to understand how AI is transforming, and how it will transform, our economy. The Economic Index and the Economic Futures Program aim to provide researchers and policymakers with the data and tools they need to ensure AI’s economic benefits are broadly shared and risks are appropriately managed.

Promoting secure AI development

Powerful AI systems are going to be developed in the coming years. The plan’s emphasis on defending against the misuse of powerful AI models and preparing for future AI-related risks is both appropriate and necessary. In particular, we commend the administration’s prioritization of supporting research into AI interpretability, AI control systems, and adversarial robustness. These lines of research are essential to understanding and safely managing increasingly powerful AI systems, and they must be supported.

We're glad the Action Plan affirms the National Institute of Standards and Technology's Center for AI Standards and Innovation’s (CAISI) important work to evaluate frontier models for national security issues, and we look forward to continuing our close partnership with them. We encourage the Administration to continue to invest in CAISI. As we noted in our submission, advanced AI systems are demonstrating concerning improvements in capabilities relevant to biological weapons development. CAISI has played a leading role in developing the testing and evaluation capabilities needed to address these risks. We encourage focusing these efforts on the most acute and distinctive national security risks that AI systems may pose.

The need for a national standard

Beyond testing, we believe basic AI development transparency requirements, such as public reporting on safety testing and capability assessments, are essential for responsible AI development. Leading AI model developers should be held to basic, publicly verifiable standards for assessing and managing the catastrophic risks posed by their systems. Our proposed framework for frontier model transparency focuses on these risks. We would have liked to see the Action Plan do more on this topic.

Leading labs, including Anthropic, OpenAI, and Google DeepMind, have already implemented voluntary safety frameworks, which demonstrates that responsible development and innovation can coexist. In fact, with the launch of Claude Opus 4, we proactively activated ASL-3 protections to prevent misuse for chemical, biological, radiological, and nuclear (CBRN) weapons development. This precautionary step shows that far from slowing innovation, robust safety protections help us build better, more reliable systems.

We share the Administration’s concern that overly prescriptive regulatory approaches could create an inconsistent and burdensome patchwork of laws. Ideally, these transparency requirements would come from the federal government in the form of a single national standard. However, consistent with our stated belief that a ten-year moratorium on state AI laws is too blunt an instrument, we continue to oppose proposals that would prevent states, should the federal government fail to act, from enacting measures to protect their citizens from potential harms caused by powerful AI systems.

Maintaining strong export controls

The Action Plan states that “denying our foreign adversaries access to [Advanced AI compute] . . . is a matter of both geostrategic competition and national security.” We strongly agree. That is why we are concerned by the Administration’s recent reversal on exports of Nvidia’s H20 chips to China.

AI development has been defined by scaling laws: the intelligence and capability of a system is determined by the scale of its compute, energy, and data inputs during training. While these scaling laws continue to hold, the newest and most capable reasoning models have demonstrated that AI capability also scales with the amount of compute made available to a system while it works on a given task, a phase known as “inference.” The amount of compute available during inference is limited by a chip’s memory bandwidth. While the H20’s raw computing power is exceeded by chips made by Huawei, as Commerce Secretary Lutnick and Under Secretary Kessler recently testified, Huawei continues to struggle with production volume, and no domestically produced Chinese chip matches the H20’s memory bandwidth.

As a result, the H20 provides unique and critical computing capabilities that would otherwise be unavailable to Chinese firms, and will compensate for China’s otherwise major shortage of AI chips. To allow export of the H20 to China would squander an opportunity to extend American AI dominance just as a new phase of competition is starting. Moreover, exports of U.S. AI chips will not divert the Chinese Communist Party from its quest for self-reliance in the AI stack.

To that end, we strongly encourage the Administration to maintain controls on the H20 chip. These controls are consistent with the export controls recommended by the Action Plan and are essential to securing and growing America’s AI lead.

Looking ahead

The alignment between many of our recommendations and the AI Action Plan demonstrates a shared understanding of AI's transformative potential and the urgent actions needed to sustain American leadership.

We look forward to working with the Administration to implement these initiatives while ensuring appropriate attention to catastrophic risks and maintaining strong export controls. Together, we can ensure that powerful AI systems are developed safely in America, by American companies, reflecting American values and interests.

For more details on our policy recommendations, see our full submission to OSTP, our ongoing work on responsible AI development, and our recent report on increasing domestic energy capacity.