
Circuits Updates — May 2023

May 24, 2023

Abstract

We report a number of developing ideas on the Anthropic interpretability team, which might be of interest to researchers working actively in this space. Some of these are emerging strands of research on which we expect to publish more in the coming months. Others are minor points we wish to share, since we're unlikely to ever write a paper about them.
