
Circuits Updates — May 2023

May 24, 2023

Abstract

We report a number of developing ideas from the Anthropic interpretability team, which may be of interest to researchers working actively in this space. Some are emerging strands of research on which we expect to publish more in the coming months. Others are minor points we wish to share, since we're unlikely to ever write a paper about them.
