What Happens to Your Data When You Use ChatGPT, Claude, or Gemini?

This is the plain-English version of what the three largest cloud AI providers — OpenAI, Anthropic, and Google — actually do with the prompts you type into them. The goal isn't to scare you. It's to replace "they probably do stuff with my data, right?" with "here's specifically what's happening, here's how to find it in the privacy policy yourself, and here's how to adjust your settings."

The companies have been reasonably transparent about this in their published policies. Most people just haven't read them. This post is that reading, condensed.

The short answer

  • Every prompt you send is transmitted to the provider's servers.
  • Every prompt and reply is logged for at least 30 days. Usually longer.
  • Metadata (account, IP, timestamp, device) is attached to every log entry.
  • Some conversations are reviewed by humans — either automatically flagged or sampled for quality control.
  • By default, consumer prompts are used to train or improve future models, though all three providers let you opt out.
  • API customers generally get stronger guarantees: no training by default, shorter retention, and sometimes zero-retention options for enterprise.

None of that is illegal, unusual, or necessarily sinister. It's how cloud AI works. The question is whether you want your data in that shape.

OpenAI (ChatGPT)

What gets logged

When you use ChatGPT's consumer product (web or mobile app), every message is stored on OpenAI's servers and kept in your chat history indefinitely unless you delete it. Deleting a chat removes it from your history, but OpenAI retains it for up to 30 days for abuse monitoring before permanently deleting it.

Temporary Chat mode doesn't save to history and has a 30-day maximum retention. It's "less stored" but not "not stored."

Training on your data

ChatGPT Free and Plus consumer accounts are, by default, used to improve OpenAI's models. You can turn this off in settings under Data Controls → Improve the model for everyone. Turning it off does not stop logging, only training use.

ChatGPT Team, Enterprise, and API customers are not used for training by default. This is one of the main reasons power users and businesses pay for those tiers even if they don't need the extra features.

Human review

OpenAI does review some conversations — primarily ones flagged by safety classifiers, but also random samples for quality control and reinforcement learning. Human review is performed by OpenAI employees and trusted contractors.

Anthropic (Claude)

What gets logged

Claude's consumer product logs prompts and responses. Retention is 30 days by default for most users, longer for abuse review or legal holds.

Training on your data

Anthropic has been more cautious than OpenAI about using consumer prompts for training. Historically, it did not train on prompts by default, even on the free tier. As of 2025, its policy allows some feedback data to be used. In plain terms: you are opted out of training by default, and feedback data is used only when you explicitly opt in.

Claude API usage is not used for training by default. Anthropic has an explicit no-training policy for API customers.

Human review

Similar to OpenAI — safety-flagged conversations may be reviewed by human contractors, and a subset of conversations may be reviewed for quality.

Google (Gemini)

What gets logged

Gemini prompts and responses are stored in your Google Account activity by default. Retention depends on your Google Account settings — the same ones that control how long search, YouTube, and Maps history is kept. Default is 18 months; you can shorten this or turn it off entirely.

Training on your data

Gemini prompts are used to improve Google's products, including model training, unless you turn off "Gemini Apps Activity." Google says it may retain conversations for up to three years for quality improvement, even after you delete them.

Human review

Google has had a particularly visible human review program for Gemini. Conversations reviewed by humans are, per Google's policy, retained for up to three years and are not fully deleted even if you remove them from your history. That is a longer retention window than OpenAI's or Anthropic's.

What to actually do about it

For cloud AI in general

  1. Turn off training use. Every provider offers this, usually in settings → data controls.
  2. Use temporary chat modes. Reduces retention, even if it doesn't eliminate logging.
  3. Avoid putting sensitive information in prompts. "Sensitive" = things you wouldn't want associated with your name in a leak. Health details, legal matters, confidential work, personal crises.
  4. Consider API over consumer UI if you want stronger guarantees around training and retention.
  5. Delete chats you don't need. Deletion starts the retention countdown toward permanent removal.

For genuinely private tasks, use on-device

No amount of settings-tweaking makes cloud AI actually private. The architecture requires the prompt to travel to a server, and a server is a place where data can be logged, leaked, subpoenaed, or misused. On-device AI (like PocketLLM) is the only way to get an architectural guarantee that the prompt never leaves your control.

The simplest rule: use on-device for anything sensitive, cloud for anything where you wouldn't mind if it leaked.

How to read your own privacy policies

Every provider publishes a current privacy policy. Search for the words "retention," "training," "human review," and "employees" — these are the paragraphs that tell you what's actually happening. The rest is mostly legal boilerplate.
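If you'd rather not scroll through the whole document, the same keyword search can be done in a few lines of Python. This is a sketch, not any provider's tool: save the policy as plain text, then pull out just the paragraphs that mention the terms above.

```python
import re

# The terms that mark the paragraphs worth reading in an AI privacy policy.
KEYWORDS = ["retention", "training", "human review", "employees"]

def relevant_paragraphs(policy_text, keywords=KEYWORDS):
    """Return each paragraph that mentions at least one keyword (case-insensitive)."""
    paragraphs = re.split(r"\n\s*\n", policy_text)  # split on blank lines
    hits = []
    for p in paragraphs:
        lowered = p.lower()
        if any(k in lowered for k in keywords):
            hits.append(p.strip())
    return hits

# Example with a made-up three-paragraph policy snippet:
sample = """We value your privacy.

Data retention: chats are kept for 30 days after deletion.

Contact us at any time."""

for para in relevant_paragraphs(sample):
    print(para)
```

Paste the real policy text into a file, read it in, and you get the handful of paragraphs that actually describe logging, training, and review, skipping the boilerplate.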

The context you should also know

A few broader facts that don't fit neatly into a per-provider breakdown:

  • Subpoenas and legal requests. All three providers receive government data requests and comply with valid ones. If a prompt exists on their servers, it can be turned over.
  • Data breaches. OpenAI had a notable incident in 2023 that briefly exposed some user chat titles and payment data. Nothing unique to them — every major service has had breaches. The question is what's there to breach.
  • Policy changes. Terms of service change. A prompt you send today under a conservative policy can live on a server through policy updates that would never have been agreed to originally.
  • Inference about you. Even if the raw prompt text isn't directly used to train, patterns in your usage can be used for analytics, product decisions, and ad targeting in Google's case.

The one sentence you should remember

When you use cloud AI, your prompt becomes part of someone else's operational infrastructure — and the only way to not be part of their infrastructure is to not send the prompt in the first place.

For more on the alternative, see our post on why your conversations should never leave your device or our ranking of ChatGPT alternatives that don't collect your data.

Or just don't send the prompt at all.

PocketLLM runs on your iPhone. Nothing transits, nothing gets logged, nothing can be subpoenaed. Join the waitlist.