Announcements

Anthropic partners with BCG

We’re pleased to announce our new collaboration with Boston Consulting Group (BCG) to bring Claude to more enterprises. BCG customers around the world will get direct access to our AI assistant to power their strategic AI offerings and deploy safer, more reliable AI solutions.

Our work toward creating helpful, honest, and harmless systems with techniques like Constitutional AI aligns with BCG’s focus on responsible AI. Through this collaboration, BCG will advise its customers on strategic applications of AI and help them deploy Anthropic models, including Claude 2, to deliver business results. Use cases for Claude span knowledge management, market research, fraud detection, demand forecasting, report generation, business analysis, and more.

Anthropic and BCG have already partnered to help organizations understand the force-multiplying impact of generative AI, most recently at the United Nations. In addition to working together to bring AI to new organizations, BCG has partnered with Anthropic to use Claude within its own teams. We're excited to see how Claude will help BCG synthesize research more effectively, analyze data more quickly, and deliver sharper insights to its clients.

“The large enterprises I talk with are focused on harnessing value and bottom line impact from AI, and doing that in the most effective and ethical way possible. Aligning these two aspects of AI is a challenge and the price for getting it wrong can be immense, both financially and in reputational harm. Our new collaboration with Anthropic will help deliver that alignment on ethics and effective GenAI,” says Sylvain Duranton, global leader of BCG X, BCG’s tech build and design unit. “Together, we aim to set a new standard for responsible enterprise AI and promote a safety race to the top for AI to be deployed ethically.”

We extend a warm welcome to BCG and its customers, and we look forward to working with them to deploy innovative applications of generative AI safely and responsibly.