Make safe AI systems
Deploy them reliably

We develop large-scale AI systems so that we can study their safety properties at the technological frontier, where new problems are most likely to arise. We use these insights to create safer, steerable, and more reliable models, and to build systems that we deploy externally, like Claude.

01

AI as a Systematic Science

Inspired by the universality of scaling in statistical physics, we develop scaling laws to help us do systematic, empirically-driven research. We search for simple relations among data, compute, parameters, and performance of large-scale networks. Then we leverage these relations to train networks more efficiently and predictably, and to evaluate our own progress. We’re also investigating what scaling laws for the safety of AI systems might look like, and this will inform our future research.
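One simple example of such a relation is a power law linking parameter count to loss. The sketch below fits this form to data in log-log space; the data points, constants, and the specific functional form here are illustrative assumptions, not measured results from our work.

```python
import numpy as np

# Hypothetical (parameter count, validation loss) pairs from a sweep over model sizes.
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = np.array([5.2, 4.1, 3.3, 2.6, 2.1])

# A power law L(N) = (N_c / N)^alpha is a straight line in log-log space:
# log L = -alpha * log N + alpha * log N_c, so a degree-1 polynomial fit recovers both constants.
slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
alpha = -slope
n_c = np.exp(intercept / alpha)
print(f"Fitted scaling law: L(N) = ({n_c:.3g} / N)^{alpha:.3f}")

# Extrapolate to a larger model to predict its performance before training it.
predicted = (n_c / 1e11) ** alpha
print(f"Predicted loss at N = 1e11 parameters: {predicted:.2f}")
```

Fitting in log space keeps the regression well conditioned across many orders of magnitude in model size, and the extrapolation step is what makes training larger networks more predictable.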

02

Safety and Scaling

At Anthropic we believe safety research is most useful when performed on highly capable models. Every year, we see larger neural networks that perform better than those that came before. These larger networks also bring new safety challenges. We study and engage with the safety issues of large models so that we can find ways to make them more reliable, share what we learn, and improve safe deployment outcomes across the field. Our immediate focus is prototyping systems that pair these safety techniques with tools for analyzing text and code.

03

Tools and Measurements

We believe critically evaluating the potential societal impacts of our work is a key pillar of research. Our approach centers on building tools and measurements to evaluate and understand the capabilities, limitations, and potential for societal impact of our AI systems. A good way to understand our research direction here is to read about some of the work we’ve led or collaborated on in this space: AI and Efficiency, Measurement in AI Policy: Opportunities and Challenges, the AI Index 2021 Annual Report, and Microscope.

04

Focused, Collaborative Research Efforts

We highly value collaboration on projects, and aim for a mixture of top-down and bottom-up research planning. We always aim to ensure we have a clear, focused research agenda, but we put a lot of emphasis on including everyone — researchers, engineers, societal impact experts, and policy analysts — in determining that direction. We look to collaborate with other labs and researchers, as we believe the best research into characterizing these systems will come from a broad community of researchers working together.
