Focus areas for The Anthropic Institute

May 7, 2026

At The Anthropic Institute (TAI), we’ll use the information we can access from within a frontier lab to investigate AI’s impact on the world, and share what we learn with the public. Below are the questions that drive our research agenda.

Our agenda focuses on four areas for research:

  • Economic diffusion
  • Threats and resilience
  • AI systems in the wild
  • AI-driven R&D

In Core Views on AI Safety, we wrote that effective safety research requires close contact with frontier AI systems. The same logic applies to effective research on AI’s impacts on security, the economy, and society.

At Anthropic, we can see early evidence that jobs like software engineering are changing radically. We’re watching Anthropic’s internal economy start to shift, new threats emerge from the systems we build, and early signs appear of AI speeding up the research and development of AI itself. To realize the full benefits of AI progress, we want to share as much of that information as we can. We’re researching how these dynamics might shape the outside world, and how the public can help direct those changes.

At TAI, we’ll study AI's real-world impacts from our position within a frontier lab and publish our findings to help external organizations, governments, and the public make better decisions about AI development.

We’ll share research, data, and tools to make it easier for individual researchers and institutions to work on these research questions. In particular, we’ll share:

  • More granular information from The Anthropic Economic Index, at a higher cadence, about what we’re seeing in labor impacts and usage of AI. We’ll try to be an early warning signal for significant change and disruption.
  • Research on the societal areas most in need of investment in resilience in the face of new AI-enabled security risks.
  • More detailed information about how our work at Anthropic has sped up as a result of new AI tools, and ideas about the implications of potential recursive self-improvement of AI systems.

TAI will shape the decisions Anthropic makes. That may look like the company sharing data with the world that it otherwise would not (like the Economic Index), or approaching how it releases technology differently (like cyber threat analyses which feed into initiatives like Project Glasswing).

We expect that work developed by The Anthropic Institute will increasingly serve as important inputs to Anthropic’s Long-Term Benefit Trust (LTBT). The LTBT’s mission is to ensure that Anthropic continually optimizes its actions for the long-term benefit of humanity. We’ve developed this research agenda with the LTBT, as well as with staff across Anthropic.

This is a living agenda, rather than a fixed one. We'll continue to fine-tune these questions as evidence accumulates, and we expect new questions to emerge that aren't captured here today. We welcome feedback on this agenda, and will revise it in light of what we learn through our conversations.

If you are interested in helping us answer some of these questions, we welcome your application to become an Anthropic Fellow. The Fellowship is a four-month funded opportunity to tackle one or more of these questions with mentorship from TAI team members. You can find out more and apply to the next cohort here.

Our research agenda:

Last updated: May 7, 2026

Economic diffusion

It’s crucial to understand how the deployment of increasingly powerful AI systems changes the economy. We also need the economic data and predictive tools to choose deployment paths for AI that benefit the public.

To answer the questions in this pillar of our research, we’ll further develop the data within The Anthropic Economic Index. We’ll also explore other methods to sharpen our models of how powerful AI could affect society, whether by driving job loss, unprecedented economic growth, or other effects.

AI adoption and diffusion

  • Who adopts AI? AI development is concentrated in a small number of companies in a small number of countries, but deployment is global. What determines whether a country, region, or city can access AI? If it can access it, how does it capture economic value from AI? What policies and business models meaningfully shift that balance? How do free or open weight models contribute to this dynamic?
  • Adoption in firms: What causes AI adoption at the firm level, and what are the consequences? How does AI change the scale at which a firm or team can be most efficient? How concentrated is AI usage across firms? How do changes in concentration of AI adoption translate into markups and labor share? If a 3-person team or company can now do what required 300 before, what happens to industrial organization? Or, if firms can more easily centralize knowledge and there are benefits from doing so at scale, will we see larger, more expansive firms with a greater incentive to systematically surveil workers?
  • Is AI a general purpose technology? Is AI following the pattern of previous “general purpose technologies,” where adoption is fastest in high-margin commercial applications, and slowest where social returns exceed private returns? Are there policies or decisions that could change these dynamics?

Productivity and economic growth

  • Productivity growth: What impact will AI have on the rate of innovation and productivity growth across the economy?
  • Sharing the gains: What pre- or re-distributive mechanisms could effectively spread the gains from AI development and deployment more broadly?
  • Transaction costs in markets: How does AI affect systems of exchange and transaction costs in marketplaces? When does access to agents able to negotiate on your behalf improve market efficiency and equitable outcomes? When does it not?

Broad labor market impacts

  • AI and jobs: How will AI change jobs and employment in different parts of the economy? What new tasks and jobs could emerge as AI automates existing parts of the economy? How will these changes vary across regions and countries? Our Anthropic Economic Index Survey will provide monthly signals of how people see AI affecting their work, and what they expect for the future. We’re also updating the Economic Index to share more high-frequency, granular data.
  • Can AI diffusion be modulated? Central banks seek to moderate inflation through “dials” like the policy rate and forward guidance. Are there analogous dials that AI companies (at an industry level, in partnership with government) might turn to control the rate of AI diffusion on a sector-by-sector basis? Would there be a clear public benefit to turning them?

The future of jobs and workplaces

  • Worker views of their jobs: How are workers across the economy experiencing changes in their professions? How much influence do they have over these changes, and can worker power be preserved or transformed?
  • The professional pipeline: Many professions rely on junior roles (like paralegals, junior analysts, and associate developers) to serve as training for the senior practitioners of the future. If AI absorbs the tasks that historically built expertise, how do people become experts in the first place? What does this mean for the long-term supply of senior judgment in a field?
  • Studying for the future: What should people study today to be well positioned for the future? What are the professions of the future? How does AI change what it means to learn something and to develop expertise?
  • The role of paid work: If AI substantially reduces the centrality of paid work in human life, what conditions will allow people to reallocate their time and effort toward other sources of meaning, and what can we learn from historical or contemporary populations where work has been scarce or optional? How do societies navigate this transition?

Threats and resilience

AI systems tend to advance many capabilities at once, including dual-use capabilities. An AI system that gets better at biology also gets better at creating biological weapons. An AI system that gets better at computer programming also gets better at hacking into computers. If we can better understand the potential for threats to be exacerbated by AI systems, society can more easily become resilient to this changed threat landscape.

We're asking these questions to help develop partnerships to improve the world's resilience in the face of transformative AI, and to develop early warning systems for new threats that may emerge. Many of these questions will drive the research agenda of our Frontier Red Team.

Assessing risk and dual-use capabilities

  • Dual-use technology: Powerful AI is inherently dual-use: the same tools that improve health and education can enable surveillance and repression. Can we build observability tools to understand whether and how this is happening?
  • Pricing risk appropriately: What are the effective, market-driven approaches to improve societal resilience to anticipated threats from AI systems? Can we develop new ways of pricing risk, or technical tools and human organizations to improve resilience ahead of the arrival of predictable threats (like improved AI cyberattack capabilities)?
  • Offense-defense balance: Will AI-enabled capabilities structurally benefit the attacker in domains like cyber and bio? When AI is applied in more conventional domains, like increasing integration into command and control systems, does it benefit the attacker? More generally, how will AI change the character of human conflict?

Establishing risk mitigations

  • Planning for crisis scenarios: During the Cold War, the American president had a hotline directly to the Kremlin, for use in the event of a nuclear crisis. What geopolitical infrastructure would be needed in the event of a crisis scenario involving AI systems? This infrastructure might not necessarily be state-to-state, but could be company-to-state or company-to-company.
  • Faster defensive mechanisms: AI capabilities can advance in months. Regulatory, insurance, and infrastructure responses operate on timescales of years. How do we close that gap? Can defensive mechanisms, like automated patching, AI-enabled threat detection, or pre-positioned response capabilities, match the tempo and scale of AI-enabled offense? Or is the asymmetry structural? And how do we roll these defensive mechanisms out as effectively as possible?

Intelligence capabilities for surveillance

  • AI’s effect on surveillance: How does AI change how surveillance works? Will it make surveillance cheaper, or more effective, or both?

AI systems in the wild

The interaction of people and organizations with AI systems will be a major source of societal change. Understanding the ways AI systems might alter the people and institutions that interact with them is a core focus area for our Societal Impacts team. To study these changes, we are advancing our existing tools and building new ones to carry out our research, ranging from software for better observability of our platform to tools for conducting large-scale qualitative surveys.

The impact of AI on individuals and societies

  • Group epistemology: When a large fraction of a population consults the same few models, what happens to our epistemology? Can we find ways to measure large-scale changes in beliefs, writing style, and problem-solving approaches that are attributable to shared AI use?
  • Critical thinking: As AI systems become more capable and more trusted, how do we detect and avoid the degradation of human critical thinking skills that may come from increasing deference to AI judgment?
  • Technological interfaces: The interfaces for technologies can determine how people interact with them—televisions make people passive viewers, and computers can make it easier for people to be generative creators. What interfaces can be built to cause AI systems to improve and promote human agency?
  • Managing human-AI systems: How might humans manage teams composed of a mixture of humans and AI systems effectively? And how might this be inverted—how might AI systems manage teams that consist of humans, AIs, or some combination thereof?

Identifying significant impacts from AI

  • Behavioral effects: In the same way that social media led to behavioral changes in people, AI may shape human behavior. What kinds of monitoring or measurement can inform researchers about this dynamic?
  • Enabling research: Are there transparency regimes and tools that can enable a broad set of people, not just frontier AI companies, to easily study real-world AI usage?

Understanding and governing AI models

  • System “values”: What are the expressed “values” of AI systems and how do these relate to how these systems were trained? More specifically, how can we measure the influence that an AI “constitution” has on behavior of the model once deployed? We’ll extend our previous research on these questions.
  • Governing autonomous agents: What aspects of existing laws, governance systems, and accountability mechanisms could be adapted to autonomous AI agents? For example, how naval law treats abandoned ships has relevance to how the law might treat agents that run without human oversight. Conversely, are there aspects of existing law which already apply to AI agents and shouldn’t?
  • Reliability of agents: What properties would autonomous AI agents need in order to fit into existing laws, governance systems, and accountability mechanisms? For example, can we ensure AI agents have a unique identity that they reliably output, even in the absence of direct human control?
  • AI governance of AI: How effectively can we use AI to govern AI systems? What are areas of AI oversight where humans either have a comparative advantage or a legal or normative requirement to be 'in the loop'?
  • Agent interactions: What kinds of norms emerge in how AI agents interact with one another? How might different agents express different preferences, and how might these influence other agents?

AI-driven R&D

As AI systems get more powerful, scientists are using them to carry out more of their research. This means that more scientific research is occurring autonomously or semi-autonomously with less and less active oversight from humans. In AI research itself, increasingly powerful systems may be used to help develop successor versions of themselves. We sometimes call this “AI-driven AI R&D.”

AI-driven AI R&D may be a “natural dividend” of making smarter and more capable systems. In the same way that advances in coding capabilities have led to dual-use cyber capabilities, and advances in scientific capabilities may lead to dual-use bio capabilities, advances in complex technical work may naturally yield AI systems which are capable of developing AI systems.

AI-driven AI R&D carries the potential for significant danger. As policymakers assess the levers they can pull, it will be crucial to understand how the rate of AI progress is changing, and whether AI research might start to see compounding returns.
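To make the worry about compounding returns concrete, here is a deliberately toy numerical sketch. It is purely illustrative, not a forecast, and the parameter values are arbitrary assumptions: it contrasts a world where AI gives research a fixed annual boost with one where each year's AI progress accelerates the next year's work.

```python
# Toy comparison (illustrative only): fixed AI boost vs. compounding
# AI-driven AI R&D. All numbers are arbitrary assumptions.

def fixed_boost(years, base=1.0, boost=0.5):
    """Annual research output when AI adds a constant speed-up each year."""
    return [base * (1 + boost) for _ in range(years)]

def compounding(years, base=1.0, feedback=0.5):
    """Annual output when each year's AI progress feeds into the next."""
    rate = base
    outputs = []
    for _ in range(years):
        rate *= 1 + feedback  # this year's systems accelerate next year's work
        outputs.append(rate)
    return outputs

# Cumulative output over a decade: the first grows linearly,
# the second exponentially.
print(sum(fixed_boost(10)))
print(sum(compounding(10)))
```

The qualitative point is the one that matters for policy: under a fixed boost, cumulative progress grows linearly, while under even a modest feedback loop it grows exponentially, which is why early-warning metrics for the rate of AI R&D are a focus of this pillar.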

AI for AI R&D

  • Governance of AI R&D: If AI systems are being used to autonomously develop and improve themselves, how do humans exercise meaningful visibility into and control over these systems? What will eventually govern these systems?
  • Fire drill scenarios: How do we run a "fire drill" for an intelligence explosion? What would a tabletop exercise look like that actually tests the decision-making of lab leadership, boards, and governments?
  • Telemetry for AI R&D: How can we measure the aggregate speed of AI research and development? What sorts of telemetry and underlying technical affordances must exist in order to gather this information? How might metrics relating to AI R&D serve as early warning signals for recursive self-improvement?
  • Controlling AI acceleration: If an intelligence explosion were upon us, what intervention points would facilitate slowing or otherwise changing the rate of the explosion? Assuming humans can intervene, which entities should wield this capacity: governments, or companies?

AI for R&D in other fields

  • The tech tree: AI is speeding up some sciences far faster than others, depending on data availability, evaluation signals, and how much knowledge is tacit or institutionally gated. How uneven is this gradient, and what does the changing composition of scientific progress imply for which human problems get solved first?
  • The jagged frontier: Model capabilities are stronger in some domains than in others. Domains with large positive externalities—like drug discovery and materials science—receive less investment than their value warrants. Markets steer the direction of model improvement according to private return, but can we steer model improvement toward these socially valuable domains?
