
5 Ways to Make ChatGPT (More) Private

You can't make ChatGPT private in the strong sense — the model runs on OpenAI's servers and your prompts transit their infrastructure. But you can make it meaningfully less leaky. Most of the privacy you lose on ChatGPT comes from three defaults and one bad habit. Fix all four and you've eliminated most of the avoidable exposure. Then there's one bigger fix if you want actual privacy, not approximate privacy. Five steps.

Short version: turn off training, use Temporary Chat, delete old conversations, never paste sensitive content, and for the last 5% use a local model. PocketLLM is the local option — join the waitlist.

1. Turn off "Improve the model for everyone"

This is the single most important ChatGPT privacy action and the one most users don't know about. Settings → Data Controls → Improve the model for everyone → Off. Turning this off stops OpenAI from using your future conversations as training data for future models. It's on by default on ChatGPT Free and Plus, which means most users have been training OpenAI's models without intending to.

Two caveats: (a) this only affects future conversations — anything you sent before turning it off may already be in a training corpus, and there's no way to retroactively remove it; (b) this setting doesn't affect automated safety classifiers or sampled human review, both of which continue regardless.

2. Use Temporary Chat for anything sensitive

Temporary Chat is ChatGPT's closest thing to a private mode. The conversation isn't saved to your history, isn't used for training, and is retained for "up to 30 days" for safety review before being purged. For privacy, it's strictly better than a normal conversation. It's not end-to-end encrypted, and OpenAI staff can still access it under the abuse and safety pathways, but the retention window is dramatically shorter.

Use it for: anything you wouldn't want in a screenshot, drafts you'd rather not keep, conversations about specific people, health or legal questions, and anything where the value of having a chat history doesn't exceed the cost of having it exist.

3. Delete old conversations

Conversations you don't delete sit in OpenAI's storage indefinitely. The simple fix: periodically delete things you no longer need. Deleted conversations are purged within 30 days. If you have a year of ChatGPT history and you've never cleaned it up, every one of those conversations is still sitting on OpenAI's servers, reachable by anyone who can compel OpenAI to produce your data.

The cleanest approach is to delete aggressively. If you need to save a specific conversation, copy the text into your own notes app and then delete the ChatGPT side. You then control the copy.

4. Don't paste sensitive content in the first place

No policy or setting retroactively changes what you've already sent — once it's logged, it's logged. The only thing you fully control is what you put in the box. If it's client work, patient notes, source identities, credentials, proprietary code, diagnostic imagery, or internal company strategy — it does not belong in any cloud AI regardless of the privacy tricks above.

The useful mental model: imagine every prompt you send showing up in a screenshot on a news site. If that's a problem, don't send it to ChatGPT. Send it to a local model that runs on your own device, where the worst-case exposure is "somebody stole your laptop."
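Habits benefit from automation. Here's a minimal sketch of a local pre-send check you could run before pasting anything into a cloud chat — the pattern names and regexes are illustrative assumptions, not a complete PII detector:

```python
import re

# Illustrative patterns only -- a real checker would cover far more
# (names, addresses, internal hostnames, proprietary identifiers, ...).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(flag_sensitive("contact me at jane@example.com"))  # ['email']
print(flag_sensitive("the sky is blue"))                 # []
```

A check like this catches the obvious leaks (credentials, identifiers) but not context-dependent ones — "the screenshot test" above is still the better filter for those.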

5. For anything that actually matters, use a local model

The biggest privacy win you can get on AI tools isn't a ChatGPT setting — it's replacing ChatGPT for sensitive work with a model that runs locally. Local AI has a simple privacy story: no prompts transit a network, no logs exist on anyone else's hardware, no automated system reads your chats, no subpoena can reach data that isn't on a company's servers. On iPhone, the options are PocketLLM (waitlist), Private LLM, LLM Farm, and MLC Chat. On Mac, LM Studio or Ollama. On Linux, Ollama or llama.cpp directly.
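On Mac or Linux, getting started is a two-command affair. A sketch using Ollama, assuming it's already installed (model names change; check Ollama's model library for current ones):

```shell
# Pull a small model once -- it downloads to local disk, no account needed.
ollama pull llama3.2

# Chat entirely on-device; the prompt never leaves your machine.
ollama run llama3.2 "Summarize the privacy trade-offs of cloud AI in two sentences."
```

After the initial download, everything runs offline — you can verify by disconnecting from the network and running it again.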

The quality gap between ChatGPT Free and a good local 3B model on an iPhone 15 Pro is smaller than you'd think. For most daily tasks — drafting, summarizing, brainstorming, explaining concepts, helping with code — you won't feel the difference. For frontier-level reasoning tasks you might. Know which category each task is in and route accordingly.

The summary table

| Step | What it does | Effort | Impact |
| --- | --- | --- | --- |
| 1. Turn off training | Stops future chats being training data | 30 seconds | Very high |
| 2. Use Temporary Chat | Caps retention at 30 days | 1 click per conversation | High |
| 3. Delete old conversations | Purges stored data from OpenAI | Minutes | Medium |
| 4. Don't paste sensitive stuff | Eliminates the worst exposures | Habit change | Very high |
| 5. Use a local model for sensitive work | Full privacy via architecture | Install one app | Maximum |

What about ChatGPT Plus, Team, or Enterprise?

Upgrading to a paid tier is another lever. ChatGPT Team and Enterprise have different defaults: no training on customer data, configurable retention, SOC 2 compliance, admin controls. If you're a business and you'd otherwise prohibit ChatGPT entirely, the paid tiers are the legitimate middle ground. Plus has the same privacy posture as Free — more features, not more privacy.

The quick answer

You can make ChatGPT meaningfully more private by turning off training, using Temporary Chat, deleting old conversations, and not pasting sensitive content in the first place. Those four steps eliminate most of the avoidable exposure. For the last 5% — the things where "less leaky" isn't good enough — use a local model on your own device. That's the only category of AI tool where privacy is structural rather than policy-based.

For the full breakdown of what OpenAI actually does with your data, see Is ChatGPT Private? and Does ChatGPT Keep Your Data Private?.

The last 5% is the whole point.

For work that actually matters — legal, medical, source-sensitive — a local model on your iPhone is the only answer. Join the PocketLLM waitlist.
