If knowledge is power and we're building machines that have more knowledge than us, what will happen between us and the machines?

Deep Ganguli
Research Lead, Societal Impacts

At Anthropic, we build AI to serve humanity’s long-term well-being.

While no one can foresee every effect AI will have on society, we do know that designing powerful technologies requires both bold steps forward and intentional pauses to consider the consequences.

That’s why we build tools like Claude with human benefit at their foundation. Through our daily research, policy work, and product design, we aim to show what responsible AI development looks like in practice.

  • Core Views on AI Safety
  • Anthropic’s Responsible Scaling Policy
  • Anthropic Academy: Learn to build with Claude

Featured

  • Tracing the thoughts of a large language model

    Interpretability
    Mar 27, 2025
  • Anthropic Economic Index

    Societal impacts
    Mar 27, 2025
  • Claude’s extended thinking

    Product
    Feb 24, 2025
  • Alignment faking in large language models

    Alignment science
    Dec 18, 2024
  • Introducing the Model Context Protocol

    Product
    Nov 25, 2024

Want to help us build the future of safe AI?

See open roles
Speak with sales