[–] seang96@spgrn.com 2 points 6 days ago (1 children)

I thought training a model on AI-generated data reduced its effectiveness? Wouldn't this mean they still did something crazy, since they got the opposite result?

[–] crmsnbleyd@sopuli.xyz 7 points 6 days ago (1 children)

I don't think it's training on AI data, but rather distillation, which tries to mimic another model.

So there's a difference in what's happening: one takes data as input and tries to form something new, while the other tries to recreate the original model's outputs.
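To make that concrete, here's a minimal sketch of the classic distillation objective (the general Hinton-style technique, not necessarily exactly what was done here). All names (`student_logits`, `teacher_logits`, `temperature`) are illustrative. The point is that the student isn't trained on raw data with hard labels; it's trained to match the teacher's full output distribution:

```python
# A minimal sketch of knowledge distillation, assuming a PyTorch-style setup.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Pull the student's output distribution toward the teacher's."""
    # Soften both distributions with a temperature so the student also
    # learns from the teacher's relative confidences, not just its top pick.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence measures how far the student is from the teacher;
    # scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Usage sketch: the teacher provides the targets for each batch.
# with torch.no_grad():
#     teacher_logits = teacher_model(batch)
# loss = distillation_loss(student_model(batch), teacher_logits)
```

Because the target is the teacher's whole distribution rather than sampled AI-generated text, the student gets a much richer signal per example, which is why distillation doesn't degrade quality the way naive training on AI output can.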

[–] seang96@spgrn.com 4 points 6 days ago

Ah, I saw it mentioned, but the paywall blocked the rest lol

Distillation is basically reverse engineering for AI. Cool.