Zero data retention AI
Zero data retention AI is a deployment pattern in which an AI vendor does not store customer prompts or model responses after processing is complete, so that no customer data persists on the vendor's infrastructure beyond the duration of a single API call.
For organizations in regulated industries, or those handling sensitive personal information in customer conversations, zero data retention has become a standard procurement requirement. The core concern is retention and secondary use: without explicit zero-retention guarantees, customer messages processed by a third-party model may be stored, reviewed by vendor staff, or used to improve future model versions. Zero data retention removes that exposure, shifting the risk profile of an AI deployment from uncertain to bounded.
How zero data retention works technically
In a zero-retention configuration, the model provider processes the input, generates a response, and returns it to the caller without writing the request or response to any persistent storage, including logs, training queues, or caches. The session exists only in memory for the duration of the inference call. This is distinct from standard API usage, where providers typically retain inputs for a rolling window, often 30 days, for abuse monitoring, debugging, and model improvement purposes.
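The distinction can be sketched in a few lines of Python. The handler functions and storage shapes below are illustrative assumptions, not any provider's actual implementation:

```python
def standard_handler(prompt, model, retention_log):
    """Standard API path: the input and output are written to persistent
    storage for a rolling window (often 30 days) after the call returns."""
    response = model(prompt)  # inference
    retention_log.append({"prompt": prompt, "response": response})  # persisted
    return response


def zero_retention_handler(prompt, model):
    """Zero-retention path: the request and response exist only in memory
    for the duration of this call; nothing reaches logs, caches, or
    training queues."""
    response = model(prompt)  # inference
    return response  # no write to any persistent store
```

The only difference is the write to `retention_log`; in a real deployment that line corresponds to the provider's logging, caching, and training-queue pipelines, all of which a zero-retention agreement disables.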
Major providers offer zero data retention under different terms. OpenAI's zero data retention option is available through its API for eligible customers and disables input and output logging at the model layer. Anthropic's enterprise agreements include equivalent provisions for the Claude API. Microsoft Azure OpenAI Service processes data within the customer's chosen Azure region and, by default, retains prompts and completions for up to 30 days for abuse monitoring; eligible customers can apply to have that monitoring modified so that no prompts or completions are stored. In each case the guarantee is contractual and auditable, not merely a policy statement.
Zero data retention does not mean the customer's own systems retain nothing. Organizations remain responsible for how they store, log, or process prompts and responses on their side of the API boundary. A zero-retention agreement with a model provider offers no protection if the application layer writes every conversation to an unencrypted database.
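One concrete hygiene step on the application side is to log operational metadata rather than conversation content. A minimal sketch, with hypothetical field names:

```python
def metadata_only_record(user_id: str, prompt: str, response: str) -> dict:
    """Build a log record that captures what operations teams need
    (who called, how large the payloads were) without ever storing
    the prompt or response text itself. Field names are illustrative."""
    return {
        "user": user_id,
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }
```

Records like this can be written to ordinary application logs without undermining the zero-retention agreement upstream; any transcripts the product genuinely must keep belong in storage governed by the organization's own retention and encryption policies.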
Why compliance buyers require zero data retention
Zero data retention is closely linked to the frameworks organizations use to demonstrate security and privacy compliance. SOC 2 Type II audits examine the controls an organization maintains over time against the trust services criteria: security, availability, processing integrity, confidentiality, and privacy. When AI processing is in scope, auditors will ask how vendor data handling is managed, and zero-retention agreements provide a clean, auditable answer. Similarly, ISO 27001 certification requires organizations to assess and manage information security risks in their supply chain, which includes AI model providers that touch customer data.
The newer ISO 42001 standard, which addresses AI management systems specifically, reinforces the expectation that organizations deploying AI will identify and document how data flows through the AI supply chain. Zero data retention agreements are one of the primary controls organizations cite when mapping data flow risks under this standard.
In healthcare, financial services, and legal sectors, zero data retention often aligns with HIPAA's minimum necessary standard, GDPR's data minimization and storage limitation principles, and similar sector-specific mandates. The requirement is not always framed as zero data retention by name, but the underlying principle, that data should not be retained beyond its immediate processing purpose, maps directly to it.
Zero data retention, responsible AI, and vendor evaluation
Zero data retention is one element of a broader AI compliance posture. It addresses the data retention dimension but does not cover model behavior, output safety, or fairness. Organizations building a complete compliance picture should pair zero-retention requirements with responsible AI policies that address how models are evaluated, monitored, and updated. A large language model (LLM) operating under zero-retention constraints can still produce biased, incorrect, or harmful outputs; retention policy governs data handling, not model quality.
When evaluating vendors, procurement teams should request written confirmation of zero-retention scope, the specific API endpoints covered, any exceptions for security logging, and how compliance is verified during audits. Verbal commitments or informal statements are not sufficient for regulated environments.
For a deeper dive, download Decagon's guide to agentic AI for customer experience.

