A new congressional report has “revealed” that Chinese AI model DeepSeek is allegedly sending American user data straight to the Chinese government. And somehow, people are surprised.
The 16-page report from the House Select Committee on the Chinese Communist Party (CCP) details how DeepSeek collects user data (chat logs, device details, even typing behavior) and transmits it through backend systems tied to China Mobile. That's the same China Mobile the FCC barred from the U.S. market in 2019 over national security risks.
This isn't some sneaky loophole. It’s a straight line from your screen to Beijing.
Oh, and just in case anyone forgot, China's 2017 National Intelligence Law requires Chinese companies to hand over data to the government on request. So if DeepSeek is collecting data, that data might as well be on a desk at CCP headquarters.
But the best part? DeepSeek’s chatbot reportedly censors “politically sensitive” topics 85% of the time. Ask about Taiwan and it suddenly wants to talk about math. Ask about Taiwan’s president and it pretends it doesn’t understand. This isn’t a glitch. It’s working exactly the way it’s supposed to—aligned with the rules of the Chinese state.
And of course, there are also allegations that DeepSeek is copying U.S. AI models and running them on restricted NVIDIA chips. You know, because stealing and rebranding innovation is basically tradition at this point.
So yes, DeepSeek is harvesting data and pushing CCP narratives. But if you didn’t see that coming, you haven’t been paying attention.
Here's the fix: just run your models locally.
The good news is you don’t need to rely on shady foreign servers to run powerful AI. You can do it yourself—right on your own device.
Running AI locally means:
- You download the model files to your computer.
- You run the model using tools like LM Studio, llama.cpp, or vLLM.
- Your inputs and outputs never leave your machine.
- No mystery servers, no data collection, no server-side filtering.
This setup protects your privacy and gives you full control over the model's behavior. And thanks to quantized model formats like GGUF, many modern language models run just fine on a consumer GPU, or even on a CPU if needed, as the sketch below shows.
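To make that concrete, here's a minimal sketch of fully local inference using the llama-cpp-python bindings for llama.cpp, one of the tools mentioned above (install with `pip install llama-cpp-python`). The model path is a placeholder: point it at any GGUF file you've downloaded, for example from Hugging Face.

```python
# Minimal local-inference sketch using llama-cpp-python (bindings for llama.cpp).
# The model path below is a placeholder: swap in any GGUF file on your disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model-Q4_K_M.gguf",  # placeholder: any local GGUF file
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU; set to 0 for CPU-only
    verbose=False,
)

# Generation happens entirely on this machine: no network calls, no telemetry.
output = llm(
    "Q: Why does running a model locally protect my privacy? A:",
    max_tokens=200,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

Everything here runs on your own hardware. Pull the network cable and it behaves exactly the same, which is the whole point.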
It’s not just safer. It’s better.
Use any model you want. Just run it locally.
If you find value in this censorship-proof, ad-free public service, consider helping:
Bitcoin address: bc1qq7tnet6ys0dkvl336v8d0prqnmvk9zzj2dxpqe
Join Dee Stevens and Orlando on The Ship Show!