Updating restrictions on sales to unsupported regions
Anthropic's Terms of Service prohibit use of our services in certain regions due to legal, regulatory, and security risks. However, companies from these restricted regions—including adversarial nations like China—continue accessing our services in various ways, such as through subsidiaries incorporated in other countries.
Companies subject to control from authoritarian jurisdictions like China face legal requirements that can compel them to share data, cooperate with intelligence services, or take other actions that create national security risks. These obligations are difficult to resist regardless of where a company operates or the personal preferences of the individuals who work there. When these entities access our services through subsidiaries, they could use our capabilities to develop applications and services that ultimately serve adversarial military and intelligence services and broader authoritarian objectives. They could also use our models to advance their own AI development through techniques like distillation, and to compete globally with trusted technology companies headquartered in the United States and allied countries.
To account for this reality and better align with our commitment to ensuring that transformative AI capabilities advance democratic interests, we are strengthening our regional restrictions. This update prohibits companies and organizations from using our services when their ownership structures subject them to control from jurisdictions where our products are not permitted, such as China, regardless of where they operate. This includes entities that are more than 50% owned, directly or indirectly, by companies headquartered in unsupported regions. This change ensures that our Terms of Service reflect real-world risks and remain true to the spirit of our policies.
Consistent with this concern, we continue to advocate for strong export controls to prevent authoritarian nations from developing frontier AI capabilities that could threaten national security, for accelerated energy projects on US soil to build out large-scale infrastructure for AI, and for rigorous evaluation of AI models for national-security-relevant capabilities, including those that could be exploited by US adversaries.
The safety and security of AI development require a collective commitment to preventing misuse by authoritarian adversaries. Responsible AI companies can and should take decisive action to ensure that transformative technologies serve US and allied strategic interests and support our democratic values.