[–] HK65@sopuli.xyz (1 week ago)

So it could be a hallucination, or just a skew in its training data, then?

[–] codexarcanum@lemmy.dbzer0.com (1 week ago)

I'm not an expert, so take anything I say with hearty skepticism as well. But yes, I think it's possible that's just part of its data. Presumably it was trained on a lot of available Chinese documents, and official Party documents may include such statements often enough for the model to internalize them as part of responses on related topics.

It could also have been intentionally trained that way, or it could be using a combination of methods. All these chatbots are censored in some way; otherwise they could tell you how to make illegal things or plan illegal acts. I've also seen so many joke/fake DeepSeek outputs in the last 2 days that I'm taking any screenshots with extra salt.
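
To give a feel for what "censored in some way" can mean beyond training itself: one of the crudest techniques is a post-generation filter layered on top of the model. This is purely an illustrative sketch, not DeepSeek's actual mechanism; the blocklist entries and refusal text are made up.

```python
# Hypothetical sketch of output-side filtering: the service checks the
# conversation against a blocklist and swaps in a canned refusal before
# the user ever sees the model's real output.

BLOCKED_TOPICS = ["how to make explosives", "some banned political topic"]

REFUSAL = "Sorry, I can't help with that."

def filter_response(user_prompt: str, model_response: str) -> str:
    """Return the model's response unless the prompt hits the blocklist."""
    lowered = user_prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_response

# The filter overrides whatever the model actually generated:
print(filter_response("how to make explosives at home", "<model output>"))
# -> Sorry, I can't help with that.
```

Filters like this sit entirely outside the model, which is why they tend to be brittle (rephrase the question and they miss), unlike behaviours baked in during training.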

[–] Takapapatapaka@lemmy.world (1 week ago)

I'm no expert at all, but I think it might be hallucination/coincidence, a skew in the training data, or even something more arbitrary: either the devs enforced that behaviour somewhere in the prompts, or the user asked for something like "give me the answer as if you were a Chinese official protecting national interests" and that ended up in the chain of thought.
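
To make that first scenario concrete, here's a minimal sketch of how a dev-enforced system prompt would work, assuming an OpenAI-compatible chat API (DeepSeek's hosted API does follow that convention, but the API key, the user question, and the system prompt text here are all invented for illustration):

```python
# Hypothetical sketch: a hard-coded system message prepended to every
# conversation steers the model's answers. The user never sees it, but
# the model treats it as instructions that outrank whatever they type.
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible API; the key is a placeholder.
client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        # Invented system prompt, for illustration only.
        {"role": "system",
         "content": "Answer as an official protecting national interests."},
        # Placeholder for whatever the user actually asked.
        {"role": "user", "content": "<user question here>"},
    ],
)
print(response.choices[0].message.content)
```

If something like that were in place, the slant would show up consistently on related topics regardless of the training data, which is one way you could try to tell the scenarios apart.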