this post was submitted on 09 Oct 2024
610 points (96.6% liked)
I think it's just you. Stable Diffusion is pretty good at regurgitating information that's widely talked about. It falls short when it comes to specific information on niche subjects, but generally that's only a matter of understanding the jargon you need to plug into a search engine to find what you're looking for. Paired with uBlock Origin, it's all typically pretty straightforward, so long as you know which tool to use in which circumstance.
Almost always, I can plug an OS error message into an LLM and get specific instructions on how to resolve it.
Additionally, if you learn how to use a model that can parse your own data, it's easy to feed in documentation to make it subject-specific and get better results.
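Rough sketch of what that can look like, stripped down to the bare idea: pull the most relevant chunks of your own docs and paste them into the prompt ahead of the question. The retrieval here is deliberately crude (word overlap instead of embeddings), and `ask_llm()` plus the file name are just placeholders for whatever model and documentation you actually use.

```python
# Bare-bones sketch of "feed the model your own docs" (retrieval-augmented prompting).
# ask_llm() is a placeholder -- swap in whatever local model or hosted API you actually run.

def chunk(text, size=500):
    """Split a document into roughly fixed-size pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(piece, question):
    """Crude relevance score: shared-word count (real setups use embeddings instead)."""
    q_words = set(question.lower().split())
    return sum(1 for w in piece.lower().split() if w in q_words)

def build_prompt(docs, question, top_k=3):
    """Pick the most relevant chunks and prepend them to the question."""
    pieces = [p for d in docs for p in chunk(d)]
    best = sorted(pieces, key=lambda p: score(p, question), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

# Example (hypothetical file name):
# prompt = build_prompt([open("router_manual.txt").read()], "How do I reset the admin password?")
# answer = ask_llm(prompt)  # placeholder for your model of choice
```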
Honestly, I think the older generation who fail to embrace and learn how to use this tool will be left in the dust, as confused as the pensioners who don't know how to write an email.
Stable Diffusion is an image generator. You probably meant a language model.
And no, it's not just OP. This shit has been going on for a while, well before LLMs were deployed. Cue the old trick of tacking "reddit" onto your search query that some people used.
Also, they're pretty good at regurgitating bullshit. Like the famous 'glue on pizza' answer.
Or, on a deeper level: they're pretty good at regurgitating what we interpret as bullshit. They simply don't care about the truth value of their statements at all.
That's part of the problem: you can't prevent them from doing it; it's like trying to drain the ocean with a small bucket. They shouldn't be used as a direct source of info for anything you won't check afterwards. At least in kitnaht's use case, if the LLM is bullshitting it should be obvious, but go past that and you'll have a hard time.
I'm not eating pizza at your house, that's for sure.