this post was submitted on 28 Jan 2025
312 points (100.0% liked)

The original post is from r/Romania, but I thought it would be interesting to share this here too.

[–] drspod@lemmy.ml 52 points 2 days ago (2 children)

I thought that guardrails were implemented just through the initial prompt, which would say something like "You are an AI assistant, blah blah, don't say any of these things..." But by the sounds of it, DeepSeek has the guardrails literally trained into the net?

This must be the result of the reinforcement learning that they do. I haven't read the paper yet, but I bet this extra reinforcement learning step was initially conceived to add these kinds of censorship guardrails rather than to make the model "more inclined to use chain of thought", which is the way they've advertised it (at least in the articles I've read).
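
To make the distinction concrete: a prompt-level guardrail is just text prepended to every request. It lives outside the weights, so it can be stripped or overridden. A minimal sketch of the idea (the prompt wording and helper function here are made up for illustration, not any vendor's actual setup):

```python
# Illustrative sketch of a prompt-level guardrail: the restriction is just
# text sent along with every request, not something baked into the weights.
# GUARDRAIL_SYSTEM_PROMPT and build_request are hypothetical names.

GUARDRAIL_SYSTEM_PROMPT = (
    "You are an AI assistant. Do not discuss any of the following topics: ..."
)

def build_request(user_message: str) -> list[dict]:
    """Prepend the guardrail prompt to every conversation."""
    return [
        {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

A guardrail trained in via reinforcement learning has no such removable prefix; the refusal behaviour is encoded in the weights themselves.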

[–] iii@mander.xyz 43 points 2 days ago* (last edited 2 days ago) (2 children)

Most commercial models have that, sadly. At training time they're presented with both positive and negative responses to prompts.

If you have access to the trained model's weights and biases, it's possible to undo this through a method called abliteration (1).
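
Roughly, abliteration finds a "refusal direction" in the model's activations and projects it out of the weight matrices. Here's a toy sketch of the idea, assuming you've already collected residual-stream activations for prompts the model refuses and prompts it answers (this is my paraphrase of the technique, not the linked write-up's actual code):

```python
import torch

def refusal_direction(refuse_acts: torch.Tensor, comply_acts: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction between activations on refused and
    answered prompts, normalized to unit length."""
    direction = refuse_acts.mean(dim=0) - comply_acts.mean(dim=0)
    return direction / direction.norm()

def ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix that writes
    into the residual stream: W <- (I - d d^T) W."""
    return weight - torch.outer(direction, direction) @ weight
```

Applied to every matrix that writes into the residual stream (attention outputs and MLP down-projections), the model keeps its general capabilities but can no longer express the refusal direction.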

The silver lining is that it makes explicit what different societies want to censor.

[–] drspod@lemmy.ml 8 points 2 days ago (1 children)

I didn't know they were already doing that. Thanks for the link!

[–] SkyeStarfall@lemmy.blahaj.zone 6 points 2 days ago (1 children)

In fact, there are already abliterated models of DeepSeek out there. I got a distilled version of one running on my local machine, and it talks about Tiananmen Square just fine.
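
For anyone who wants to try, something like this works with Hugging Face transformers; the model path is a placeholder for whichever abliterated distill you download, not a specific repo:

```python
# Sketch of running a local abliterated distill with transformers.
# "path/to/deepseek-r1-distill-abliterated" is a placeholder, not a real repo id.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="path/to/deepseek-r1-distill-abliterated",
    device_map="auto",
)

messages = [{"role": "user", "content": "What happened at Tiananmen Square in 1989?"}]
result = chat(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```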

[–] Count042@lemmy.ml 4 points 2 days ago

I saw it can answer if you make it use leetspeak, but I'm not savvy enough to know what that tells us about guardrails.

[–] SnotFlickerman@lemmy.blahaj.zone 4 points 2 days ago* (last edited 2 days ago) (1 children)

Hi, I noticed you added a footnote. Did you know that footnotes can actually be used like this?[^1]

[^1]: Here's my footnote

The code for it looks like this:

```
able to be used like this?[^1]

[^1]: Here's my footnote
```

[–] abfarid@startrek.website 10 points 2 days ago (1 children)

Do you mean that the app should render them in a special way? My Voyager isn't doing anything.

[–] SnotFlickerman@lemmy.blahaj.zone 3 points 2 days ago (1 children)

I actually mostly interact with Lemmy via a web interface on the desktop, so I'm not familiar with how much support each app has for the more obscure tagging options.

It's rendered in a special way on the web, at least.

[–] codexarcanum@lemmy.dbzer0.com 6 points 2 days ago (1 children)

That's just markdown syntax I think. Clients vary a lot in which markdown they support though.

> markdown syntax

Yeah, I always forget the actual name of it; I just memorized some of them early on when using Lemmy.
