this post was submitted on 30 Jan 2025
132 points (91.8% liked)

LinkedinLunatics

A place to post ridiculous posts from LinkedIn.com

(Full transparency: a mod for this sub happens to work there, but that doesn't influence his moderation or his laughter at a lot of the posts.)

This guy is very, very scared of DeepSeek and all the potentially malicious things it will do, seemingly because it's Chinese. As soon as commenters point out that ChatGPT is probably worse, he disagrees without giving any reasoning.

Transcription:

DeepSeek as a Trojan Horse Threat.

DeepSeek, a Chinese-developed AI model, is rapidly being installed into productive software systems worldwide. Its capabilities are impressive: hyper-advanced data analysis, seamless integration, and an almost laughably low price. But here's the problem: nothing this cheap comes without a hidden agenda.

What's the real cost of DeepSeek?

  1. Suspiciously Cheap: Advanced models like DeepSeek aren't "side projects." They take massive investments, resources, and expertise to develop. If it's being offered at a fraction of its value, ask yourself: who's really paying for it?

  2. Backdoors Everywhere: DeepSeek's origin raises alarm bells. The more systems it infiltrates, the more it becomes a potential vector for mass compromise. Think backdoors, data exfiltration, and remote access at scale: hidden vulnerabilities deliberately built in.

  3. Wide Adoption = Global Risk: From finance to healthcare, DeepSeek is being installed across critical systems at an alarming rate. If adoption continues unchecked, 80% of our systems could soon be compromised.

  4. The Trojan Horse Effect: DeepSeek is a textbook example of a Trojan horse strategy: lure organizations with a cheap, powerful tool, infiltrate their systems, and quietly map or control them. Once embedded, reversing the damage will be nearly impossible.

The Fairytale Isn't Real

The story of DeepSeek being a "low-cost side project" is just that: a fairytale. Technology like this isn't developed without strategic motives. In the world of cyber warfare, cheap tools often come at the highest cost.

What Can We Do?

Audit your systems: Is DeepSeek already embedded in your critical infrastructure?

Ask the hard questions: Why is this so cheap? Where's the transparency?

Take immediate action: Limit adoption before it's too late. The price may look attractive, but the real cost could be our collective security.

Don't fall for the fairytale.

[–] Rentlar@lemmy.ca 11 points 22 hours ago (1 children)

ChatGPT and Copilot raise the same concerns or worse for me.

[–] Aurenkin@sh.itjust.works 11 points 22 hours ago (3 children)

I disagree strongly: ChatGPT will gladly tell you all about the My Lai Massacre, for example. Not to say it's perfect or completely uncensored, but to say it's worse or the same... I just can't get there.

[–] Rentlar@lemmy.ca 14 points 22 hours ago* (last edited 22 hours ago) (1 children)

It's not about the censorship for me (though I recognize that was your main point; I should have made a top-level comment). It's about how these tools infiltrate our computers, our tech, and our daily lives until we become dependent on them. I'm worried that at any moment the controlling entity could change them on a whim, publicly or covertly.

[–] Aurenkin@sh.itjust.works 3 points 21 hours ago

I see what you mean. I agree with that, and in that sense DeepSeek is actually a really good thing because it gives some hope that you don't need insane amounts of money to build a powerful model. Let's hope access and development don't get too concentrated.

[–] HobbitFoot@thelemmy.club 2 points 18 hours ago

From what others have said about DeepSeek, you can run the AI model on your own hardware; the censoring is applied after the model produces its output, on DeepSeek's own hosted servers.

[–] h4x0r@lemmy.dbzer0.com 3 points 21 hours ago* (last edited 21 hours ago)

OpenAI captures everything you feed it, while the other orgs that concern you provide models that can be run locally without an internet connection. I can look Tiananmen Square up on Wikipedia, but none of us can control what OpenAI does with our data.
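
On the "run it locally" point in the last two comments: here's a minimal sketch (not from the thread) of querying a locally hosted DeepSeek distill through Ollama's local HTTP API. It assumes Ollama is installed and running and that a deepseek-r1 model has already been pulled; the model tag and prompt are illustrative. Nothing in the request leaves the machine.

```python
# Minimal sketch: query a locally hosted DeepSeek-R1 distill via Ollama's
# local HTTP API. Assumes Ollama is running on this machine and a
# deepseek-r1 model has already been pulled (model tag is illustrative).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = json.dumps({
    "model": "deepseek-r1",  # adjust to whichever distill size you pulled
    "prompt": "Tell me about the 1989 Tiananmen Square protests.",
    "stream": False,         # return one JSON object instead of a token stream
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["response"])  # answer generated entirely on local hardware
```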