
Toy Models of Superposition

Sep 14, 2022

Abstract

In this paper, we use toy models — small ReLU networks trained on synthetic data with sparse input features — to investigate how and when models represent more features than they have dimensions. We call this phenomenon superposition. When features are sparse, superposition allows compression beyond what a linear model would do, at the cost of "interference" that requires nonlinear filtering.
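As a rough illustration of the setup the abstract describes, the sketch below (not the paper's code) trains a small bottleneck network on sparse synthetic features. The reconstruction form ReLU(WᵀWx + b), the importance weighting, and all hyperparameters here are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

n_features, m_hidden = 20, 5                   # more features than hidden dimensions
sparsity = 0.95                                # probability a given feature is zero
importance = 0.9 ** torch.arange(n_features)   # assumed per-feature importance weights

W = torch.nn.Parameter(0.1 * torch.randn(m_hidden, n_features))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

def sample_batch(batch_size=1024):
    # Sparse synthetic features: each feature is zero with probability `sparsity`,
    # otherwise uniform in [0, 1].
    x = torch.rand(batch_size, n_features)
    mask = torch.rand(batch_size, n_features) < (1 - sparsity)
    return x * mask

for step in range(5000):
    x = sample_batch()
    h = x @ W.T                                # project into the low-dimensional bottleneck
    x_hat = torch.relu(h @ W + b)              # reconstruct through a ReLU
    loss = (importance * (x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Off-diagonal structure in WᵀW shows features sharing hidden dimensions,
# i.e. the "interference" between superposed features.
print(W.detach().T @ W.detach())
```

After training, comparing runs at different sparsity levels is one way to see the transition the abstract mentions: with dense features the model keeps only the most important ones, while with sparse features it packs more of them into the same few dimensions.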
