Abstract
We report a number of developing ideas from the Anthropic interpretability team that might be of interest to researchers working actively in this space. Some are emerging strands of research on which we expect to publish more in the coming months. Others are minor points we wish to share, since we're unlikely to ever write a paper about them.