"Is ChatGPT private?" is the wrong question, because privacy isn't a yes/no. What you actually want to know is: who can read the things you type? How long does OpenAI keep them? Do your chats train future models? And if all of that worries you, what should you use instead? This post answers those nine questions, in order, with quotes from OpenAI's current policy and pointers to the ones you can't quite trust at face value.
Short version: ChatGPT is not private by default, but it's more private than most free alternatives, and there are legitimate ways to use it that come close to "private" if you're careful. Fully private means running the model on your own device; we'll get to that.
Question 1: Does OpenAI read my ChatGPT conversations?
Yes, in two senses. First, automated systems analyze every conversation for policy violations, abuse, and safety classification. Second, a subset of conversations is reviewed by human trainers and safety reviewers — OpenAI says this happens to "a limited number of samples" and is required for responsible deployment. If you're on ChatGPT Free or Plus, you should assume any single conversation might be sampled. On ChatGPT Enterprise, Team, and API with "zero data retention" enabled, human review is substantially more restricted.
Question 2: Does OpenAI train on my chats?
By default, yes, on Free and Plus. OpenAI's policy has said for years that consumer conversations "may be used to improve our models" unless you opt out. The opt-out lives under Settings → Data Controls → Improve the model for everyone. Turning it off is the single most important privacy action a ChatGPT user can take, and most users don't know it exists.
On ChatGPT Team, Enterprise, Edu, and the API, the default flips: OpenAI does not train on your data unless you explicitly opt in. If you want ChatGPT but don't want your words in the next model, a business tier is a meaningfully different product from Free or Plus.
Question 3: How long does OpenAI keep my conversations?
Indefinitely, by default, on consumer tiers. OpenAI's retention policy for Free and Plus says chats are retained "as needed to provide our Services, comply with legal obligations, resolve disputes, and enforce our agreements." That is lawyer English for "as long as we want." Conversations you delete are purged from OpenAI's systems within 30 days; conversations you never delete are simply kept.
Temporary Chat (released in 2024) reduces retention to 30 days, and those conversations are not used for training. It's the closest thing ChatGPT Free has to a private mode, and it's what we recommend if you have to use ChatGPT for anything sensitive.
Question 4: Is Temporary Chat actually private?
Less than you'd hope. Temporary Chat means the conversation isn't saved to your history and isn't used to train models, but it still passes through OpenAI's infrastructure and is retained for up to 30 days "for safety purposes." It's not end-to-end encrypted, and OpenAI staff can still access it under the standard abuse and safety pathways. It is strictly better than a normal conversation, but it is not "off the record."
Question 5: What about ChatGPT Enterprise and Team?
ChatGPT Enterprise and Team have a genuinely different data policy. OpenAI does not train on your data by default, data is encrypted in transit and at rest, admins can configure data retention windows, and there's a compliance story with SOC 2 and GDPR processes. These tiers are as close as cloud ChatGPT gets to "private." They still involve sending your prompts to OpenAI's servers, and OpenAI staff can still access data to investigate abuse, but the policy floor is dramatically higher than Free or Plus.
Question 6: Can OpenAI hand my chats to the government?
Yes, under subpoena or warrant. OpenAI publishes a transparency report showing how many legal requests it receives and how many it complies with. Like every US company with user data, OpenAI is subject to US legal process, including the Stored Communications Act. If a court orders it to produce your conversations, it will, and you will typically find out only if the order permits OpenAI to notify you.
This is not a knock on OpenAI specifically. Every cloud provider faces the same legal constraints. It's the reason that for actually sensitive work — legal, medical, source-protected — running the model on your own device is the only privacy guarantee that doesn't depend on a third party's lawyers.
Question 7: Does ChatGPT use my IP address or location?
Yes. Standard web-service logging applies: IP addresses, browser and device details, coarse location derived from your IP, and timestamps are all collected. The ChatGPT iOS and Android apps additionally have access to App Store / Google Play identifiers, which have been used for product analytics. None of this is unusual for a consumer service, and none of it is comforting.
Question 8: Are voice conversations (Advanced Voice Mode) private?
Advanced Voice Mode sends your audio to OpenAI's servers, where it is transcribed, reasoned about, and spoken back; it is not end-to-end encrypted. The audio is subject to the same retention and training policies as text, plus the added uncertainty that voice data is harder to reliably delete. If you're worried about ChatGPT's privacy in text mode, voice mode is strictly worse.
Question 9: So is ChatGPT private or not?
ChatGPT is not private by default. ChatGPT Free and Plus log conversations indefinitely, train on them unless you opt out, and can be compelled to hand them over by legal process. Turning off training, using Temporary Chat, and avoiding sensitive content in your prompts brings it partway. ChatGPT Enterprise and Team get you further. None of those get you to "private" in the strong sense — the sense where nobody outside your device could ever read your prompts even if they wanted to.
For that, you need local AI. A model running on your iPhone, Mac, or laptop doesn't transmit your prompts anywhere, can't be silently compelled to hand them over (a legal request has to come to you and your device), and isn't subject to OpenAI's product decisions. That's the version of privacy that's actually private.
The summary table
| Question | Free / Plus | Enterprise / Team | Local AI (PocketLLM) |
|---|---|---|---|
| Used for training? | Yes (opt-out) | No (default) | No (never) |
| Retained by default? | Indefinitely | Configurable | Only on your device |
| Human reviewable? | Yes, sampled | Limited | No |
| Subject to subpoena? | Yes | Yes | Subpoena your device |
| Encrypted in transit? | Yes (TLS) | Yes (TLS) | No network traffic |
| Encrypted at rest? | OpenAI side | OpenAI side | On your device |
| Can you audit it? | No | Limited (SOC 2) | Yes (you run it) |
| Status | Not private | Private-ish | Private |
PocketLLM is currently in waitlist / early access.
What to do right now if you're worried
- Turn off training. Settings → Data Controls → Improve the model for everyone → Off. Do this today.
- Use Temporary Chat for anything you wouldn't want screenshotted.
- Delete old conversations you no longer need. They will be purged within 30 days of deletion.
- Don't paste secrets. API keys, passwords, signed documents, client information, patient details, source names, and diagnostic images should not go into any cloud AI, including ChatGPT Enterprise.
- Use a local model for genuinely sensitive work. See our best local LLM models roundup and how to run AI offline on your iPhone.
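If you're scheduling a cleanup of old conversations, the 30-day purge window above is just date arithmetic. A minimal sketch (illustrative only; the exact purge timing is OpenAI's stated policy, not a contractual guarantee, and the function name is ours):

```python
from datetime import date, timedelta

# OpenAI's stated policy: deleted conversations are purged within 30 days.
PURGE_WINDOW = timedelta(days=30)

def latest_purge_date(deleted_on: date) -> date:
    """Latest date by which a conversation deleted on `deleted_on`
    should be gone from OpenAI's systems, per the stated policy."""
    return deleted_on + PURGE_WINDOW

print(latest_purge_date(date(2025, 1, 15)))  # -> 2025-02-14
```

The point of writing it down: deletion starts the clock, so a conversation you never delete never enters this window at all.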
The quick answer
Is ChatGPT private? No. Can you make ChatGPT more private? Yes, by turning off training, using Temporary Chat, and upgrading to Enterprise if you're a business. Is there a genuinely private alternative? Yes — any on-device model where the weights run on your machine, the prompts never leave, and no third party sits between you and the answer. PocketLLM is one of those, currently on the waitlist. Join here.