Research
Our research teams investigate the safety, inner workings, and societal impacts of AI models, so that artificial intelligence has a positive impact as it becomes increasingly capable.
Interpretability
The mission of the Interpretability team is to discover and understand how large language models work internally, as a foundation for AI safety and positive outcomes.
Alignment
The Alignment team works to understand the risks of AI models and to develop ways of ensuring that future models remain helpful, honest, and harmless.
Societal Impacts
Working closely with the Anthropic Policy and Safeguards teams, Societal Impacts is a technical research team that explores how AI is used in the real world.
Frontier Red Team
The Frontier Red Team analyzes the implications of frontier AI models for cybersecurity, biosecurity, and autonomous systems.
Teaching Claude why
New research on how we've reduced agentic misalignment.
Project Deal
We created a marketplace for employees in our San Francisco office, with one big twist: we tasked Claude with buying, selling, and negotiating on our colleagues’ behalf.
What 81,000 people want from AI
We invited Claude.ai users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people participated—the largest and most multilingual qualitative study of its kind. Here's what we found.
Project Vend: Phase two
In June, we revealed that we’d set up a small shop in our San Francisco office lunchroom, run by an AI shopkeeper. It was part of Project Vend, a free-form experiment exploring how well AIs can handle complex, real-world tasks. How has Claude’s business fared since we last wrote?
Publications
- Teaching Claude why
- Natural Language Autoencoders: Turning Claude’s thoughts into text
- Donating our open-source alignment tool
- Focus areas for The Anthropic Institute
- How people ask Claude for personal guidance
- Evaluating Claude’s bioinformatics research capabilities with BioMysteryBench
- Announcing the Anthropic Economic Index Survey
- What 81,000 people told us about the economics of AI
- Automated Alignment Researchers: Using large language models to scale scalable oversight
- Trustworthy agents in practice
