Data input controls and audit

For consumer products (such as Claude.ai) or beta/evaluation products, Anthropic retains users’ personal data for as long as reasonably necessary for the purposes and criteria outlined in our Privacy Policy. For business or enterprise customers, data retention periods are set out in their specific services agreements with Anthropic. For instance:

  • If a user uses Anthropic services that allow them to save and continue conversations with Claude (e.g., the Claude App for Slack or Claude via the console), Anthropic retains their prompts and outputs in the product to provide a consistent product experience over time, in accordance with their controls.
  • For users of business or enterprise products (e.g., commercial Claude via the console, API, and API applications): Anthropic automatically deletes prompts and outputs on the backend within 30 days of receipt or generation, unless mutually agreed otherwise. We do not use API prompts or conversations for any model training purposes unless the customer has explicitly given us permission, either by reporting this data to us or by joining our opt-in group. In those cases, we do so at the customer’s direction as a processor, and only after we have applied de-identification processes. If a user submits a prompt to our commercial services that may violate our Acceptable Use Policy, we retain the prompts and outputs for 90 days.
  • For users of our consumer or beta/evaluation products (such as Claude.ai, the Claude App for Slack, and the evaluation versions of Claude via the console, API, or applications): Anthropic automatically deletes prompts and outputs on the backend within 90 days of receipt or generation unless requested otherwise. We will not use these users’ conversations in our beta/evaluation services to train our models unless: (i) their conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Acceptable Use Policy, including training models for use by our trust and safety team, consistent with Anthropic’s safety mission); or (ii) they have explicitly given us permission by reporting the data to us (for example via our feedback mechanisms) or by otherwise explicitly opting into training. If a user submits a prompt to our consumer or beta/evaluation services that is flagged by our trust and safety classifiers, we retain the prompts and outputs for a maximum of 2 years.
  • For all products, Anthropic retains trust and safety classification scores for 7 years.
  • Where a user has opted in or otherwise provided affirmative consent (e.g., by submitting feedback or bug reports), we retain data associated with that submission for 10 years.
  • Anthropic deletes information users instruct us to delete in accordance with Section 5 of our Privacy Policy (‘Rights and Choices’).
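
To make the retention windows above easier to compare, the following sketch expresses them as a simple lookup table with an expiry check. This is purely illustrative: the category names, the `RETENTION_PERIODS` table, and the `is_past_retention` helper are assumptions for the example and do not describe Anthropic's actual systems.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows drawn from the bullets above.
# Category names and this structure are hypothetical, not Anthropic's implementation.
RETENTION_PERIODS = {
    "business_prompts_outputs": timedelta(days=30),       # deleted within 30 days
    "consumer_beta_prompts_outputs": timedelta(days=90),   # deleted within 90 days
    "commercial_aup_flagged": timedelta(days=90),          # possible AUP violation
    "consumer_safety_flagged": timedelta(days=365 * 2),    # flagged by T&S classifiers (approx. 2 years)
    "trust_safety_scores": timedelta(days=365 * 7),        # classification scores (approx. 7 years)
    "opted_in_submissions": timedelta(days=365 * 10),      # feedback / bug reports (approx. 10 years)
}

def is_past_retention(category: str, received_at: datetime) -> bool:
    """Return True if data in `category` has outlived its illustrative retention window."""
    return datetime.now(timezone.utc) - received_at > RETENTION_PERIODS[category]
```
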

When assessing how long a user’s personal data is retained, we consider criteria such as: (i) the nature of the personal data and the activities involved; (ii) when and for how long the user interacted with Anthropic; and (iii) our legitimate interests and our legal obligations. In all cases, we may retain prompts and outputs as required by law or as necessary to combat violations of our Acceptable Use Policy. We may anonymize or de-identify users’ personal data for research or statistical purposes, in which case we may retain this information for longer without further notice.