Anthropic is donating $20 million to Public First Action
AI will bring enormous benefits—for science, technology, medicine, economic growth, and much more. But a technology this powerful also comes with considerable risks.
Those risks might come from the misuse of the models: AI is already being exploited to automate cyberattacks; in the future it might assist in the production of dangerous weapons. Risks might also come from the models themselves: powerful AI systems can take harmful actions contrary to the intentions—and out of the control—of their users.
AI models are improving at a dizzying and accelerating pace, from simple chatbots in 2023 to today’s “agents” that complete complex tasks. At Anthropic, we’ve had to redesign a notoriously difficult technical test for hiring software engineers multiple times as successive AI models defeated each version. This rate of progress will not be confined to software engineering; indeed, many other professions are already seeing an impact.
Consequently, the AI policy decisions we make in the next few years will touch nearly every part of public life, from the labor market to online child protection to national security and the balance of power between nations.
In circumstances like these, we need good policy: flexible regulation that allows us to reap the benefits of AI, keep the risks in check, and keep America ahead in the AI race. That means keeping critical AI technology out of the hands of America’s adversaries, maintaining meaningful safeguards, promoting job growth, protecting children, and demanding real transparency from the companies building the most powerful AI models.
We don’t want to sit on the sidelines while these policies are developed. For that reason, Anthropic is contributing $20 million to Public First Action, a new bipartisan 501(c)(4) that will support public education about AI, promote safeguards, and ensure America leads the AI race.
Recent polling finds that 69% of Americans think the government is “not doing enough to regulate the use of AI.” We agree. AI is being adopted faster than any technology in history, and the window to get policy right is closing. Yet there are no official guardrails in place and there is no federal framework on the horizon.
At present, there are few organized efforts to help mobilize people and politicians who understand what’s at stake in AI development. Instead, vast resources have flowed to political organizations that oppose these efforts.
Public First Action is working to fill that gap. Founded and led by both Republican and Democratic strategists, it works across party lines to support policies on AI governance.
The organization will work with Republicans, Democrats, and Independents who share the same policy priorities:
- Insisting on AI model transparency safeguards that give the public more visibility into how frontier AI companies manage risks, and thus greater confidence that they are doing so responsibly;
- Supporting a robust federal AI governance framework, and opposing preemption of state laws unless Congress enacts stronger safeguards;
- Supporting smart export controls on AI chips that will keep America ahead of its authoritarian adversaries;
- Pursuing targeted regulation focused on the nearest-term high risks: AI-enabled biological weapons and cyberattacks.
These policies aren’t partisan. Nor are they for the benefit of Anthropic as an AI developer—effective AI governance means more scrutiny of companies like ours, not less. They’re also not an attempt to hold back smaller or less well-resourced developers: our view is that transparency regulation, for example, should apply only to companies developing the most powerful (and most dangerous) AI models.
The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests. Our contribution to Public First Action is part of our commitment to governance that enables AI’s transformative potential and helps proportionately manage its risks.