this post was submitted on 17 May 2024
Technology
It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too - all the time. It's just that our approximations are usually correct, and then we don't call them hallucinations anymore. For example, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It's also why we don't notice our blinks, or why we don't see the blind spot in each of our eyes.
AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.
Hallucinations shouldn't be treated like a bug. They are a feature - just not one the big tech companies wanted.
When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.
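To make the "prediction, not verification" point concrete: here's a toy bigram Markov chain as a sketch. This is deliberately far simpler than how real language models work (an illustrative assumption, not their actual architecture), but it shows the core mechanism - the model only ever emits statistically plausible continuations of its training data, with no step that checks the output against reality.

```python
import random

# Tiny "training data": every sentence in it is true.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "paris is in france . madrid is in spain ."
).split()

# Build a table: word -> list of words that followed it in the corpus.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, n=6, seed=0):
    """Sample a continuation one word at a time, purely by prediction."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Even though the training data contains only true statements, "the capital of spain is paris" is a perfectly valid sample from this model - every word pair in it occurs in the corpus. Fluent, confident, and wrong: a hallucination by construction, because at no point does the sampler consult anything outside its own statistics.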
Very long layman take. Why are there always so many of these on every AI post? What do you get from guesstimating how the technology works?
I'm not an expert in AI, I will admit. But I'm not a layman either. We're all anonymous on here anyway. Why not leave a comment explaining what you disagree with?
I just want to understand why people get so passionate about explaining how things work, especially in this field, where even the experts themselves don't fully understand how it works. It's just an interesting phenomenon to me.
Hallucinations in AI are fairly well understood, as far as I'm aware. They're explained at a high level on the Wikipedia page for the topic. And I'm honestly not making any objective assessment of the technology itself. I'm making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it's given, but that's something even a layman might know.)
How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don't have an answer there either), but a true fix should be impossible.
I can't exactly say why I'm passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decision about the place AI takes in our society. But I'm also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.
Not really, no, because these aren't biological, and the scientists who work with them are more interested in understanding why they work at all.
It is very interesting how the brain works, and our sensory processing is predictive in nature, but no, it's not relevant to machine learning, which works completely differently.