ChatGPT Basically Told Me "Privacy Is For Criminals."
I asked AI to spell check my article. Instead it censored me.
Last week, I wrote a newsletter about what to do when a website demands a phone number and you do not want to give one.
I personally do not have a cell number. I don’t have a SIM in my phone -- I rely entirely on VoIP.
But even if you do have a cell number, there are many reasons not to hand it over. In my newsletter I mentioned how a phone number is a powerful unique identifier. We are trained to give it to almost everyone, which makes it trivial for data brokers and breach harvesters to link together our accounts, movements, and behavior across the internet.
Today’s newsletter is about an alarming thing that happened in the process of writing that last one.
As part of my usual workflow, I ran the draft through an AI tool to check for typos, grammar, and phrasing. I often use AI for this, in this case ChatGPT. It is far more capable than a basic spell checker. It can tell the difference between diffuse and defuse, affect and effect, foment and ferment, and it automatically formats the piece for me with correctly sized headings and so on.
But instead of checking for typos, the AI censored me.
I had included a list of companies I personally use and recommend. The AI deleted the names and replaced them with a refusal, saying it could not list specific services.
I mentioned that some of these services accept cryptocurrency, which can be a more private way to pay. The reference to crypto was removed entirely.
I asked why I had been censored, and the AI said that privacy tools can be used for wrongdoing, and therefore it should not help write a how-to guide about achieving privacy. Even after acknowledging that my newsletter was aimed at everyday people doing nothing illegal, it maintained that helping people learn privacy techniques could be misused, so the guide should not exist.
There it was. The familiar smear that privacy is for criminals, so normalized that even an AI now repeats it without question.
First off, apologies for any typos that made their way into my last email; obviously I went with my original draft and not the censored version.
Now, I have three main takeaways from this experience that I want to talk about in this newsletter. The first is obvious, but the other two are super important, and I think many people may miss them.
1. Censorship and the shaping of thought
The most immediate lesson is the danger of AI censorship.
If we centralize knowledge into a single system, that knowledge becomes easier to control. Right now a handful of major models hold immense power to quietly filter, reshape, and rewrite what we believe to be true. What people are allowed to ask, learn, or express becomes mediated through these models and the values embedded in them.
This is why decentralization and competition amongst models matter. If society converges on one AI authority, or a handful of them, we also converge on a single definition of acceptable truth.
What worries me even more is how deeply these systems are being integrated into everyday tools. Word processors. Email clients. Messaging apps. Website interfaces.
Are we heading towards a world where software scans your words before you send them, and quietly rewrites them into something more acceptable first? Client-side filtering of wrongthink before a message ever leaves your device is not that far-fetched.
There are two other, more nuanced takeaways I want to talk about.
2. These are not the AI’s views. They are ours.
It is tempting to put all the blame on the AI company.
“They censored me!”
Yes, OpenAI fine-tunes its models and adds safeguards around certain categories of information, including what ChatGPT will and will not help with. There may be rules along the lines of “don’t help people bypass verification systems.”
Is that what happened here? Possibly.
But I would be far more convinced this was deliberate, privacy-specific censorship if I were not already so aware that much of society holds the same view.
Large language models are trained on us. What the AI told me was a reflection of a widely accepted cultural assumption about privacy itself.
Specifically, AI models are trained on enormous datasets. They ingest vast amounts of writing, ideas, and information scraped largely from the internet, then learn the patterns within that data in order to generate language that sounds coherent and meaningful.
And for a generation now, society has been repeating over and over that privacy is for criminals.
“If you have nothing to hide, you have nothing to fear.”
“If you want privacy, you must be doing something wrong.”
Privacy has been reframed as deviant, shameful behavior. Law-abiding people are expected to be fine with total transparency, even as their data is leaked, sold, breached, and abused.
An AI trained on data expressing this worldview will reproduce it.
The AI was reflecting our views back at us. It implied that normal people do not need privacy, and that helping people protect themselves is inherently suspect.
If we want a different future, we cannot just fight models. We have to fight the cultural assumptions being fed into them.
3. We must contribute to the knowledge base now
This leads to the most important point I want to make.
Right now, AI systems are being trained at an enormous scale. They are ingesting the internet as it exists today.
What gets written now will shape what future models believe is normal.
If the only articles these models ingest about privacy frame it as criminal, extremist, or unnecessary, then that becomes the default assumption encoded into future LLMs.
We need to contribute to the knowledge base around privacy, and create more content that treats privacy as normal, and as foundational to a free society. We need guides that explain privacy tools without apology. Essays that explain why consent matters. Stories that show privacy as something ordinary people use in ordinary lives.
We need to write this content now. In fact, we need to create as much content as we can.
Not only to shift culture and spread the case for privacy now, but so that tomorrow’s AI models are trained on OUR perspective, not on the narratives pushed by anti-privacy propagandists. AI isn’t going anywhere, and these models will go on to influence future generations. So we have to make sure they represent our worldview by contributing to the knowledge base.
Hundreds of thousands of people watch our videos and read our work. Imagine if even a fraction of them wrote a blog post, recorded a video, or shared a clear explanation of why privacy matters.
We have an incredibly powerful grassroots movement growing around privacy. But we all need to contribute if we want to win. Because this window of opportunity where privacy is still allowed won’t be open forever. So get active. Get creating. See if you can find a way today to post something in favor of privacy, and shift the training data back in our favor.
By the way, I ran my final newsletter through Confer.to this time, a new private, end-to-end encrypted AI service created by Moxie Marlinspike, the founder of Signal. Another way we can build a better future is to support the tools that are trying to protect us. A good lesson for all of us, including me.
Yours In Privacy,
Naomi
Consider supporting our nonprofit so that we can fund more research into the surveillance baked into our everyday tech. We want to educate as many people as possible about what’s going on, and help write a better future. Visit LudlowInstitute.org/donate to set up a monthly, tax-deductible donation.
NBTV. Because Privacy Matters.