[–] Deconceptualist@lemm.ee 0 points 5 months ago (5 children)

As others are saying, it's 100% not possible, because LLMs are (as Google optimistically describes them) "creative writing aids", or more accurately, predictive word engines. They run on mathematical probability models. They have zero concept of what the words actually mean, what humans are, or even what they themselves are. There's no "intelligence" present except for filters that have been hand-coded in (which of course is human intelligence, not AI).

"Hallucinations" is a total misnomer because the text generation isn't tied to reality in the first place, it's just mathematically "what next word is most likely".

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
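
To make the "predictive word engine" point concrete, here's a purely illustrative toy in Python: a bigram model that picks each next word from counted probabilities. It's a vast simplification of a real transformer, which conditions on far more context, but the spirit is the same — no meaning anywhere, just likelihood.

```python
# Toy "predictive word engine": pick the next word purely from
# counted probabilities, with no concept of what any word means.
import random
from collections import Counter, defaultdict

corpus = (
    "the dog chased the ball . the dog ate the food . "
    "the cat chased the mouse ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

# Generate fluent-looking, meaning-free text starting from "the".
word, out = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run it a few times: grammatical-ish word salad, no understanding required.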

[–] Tobberone@lemm.ee 0 points 5 months ago

An LLM once explained to me that it didn't know; it simulated an answer. I found that fitting.

[–] _number8_@lemmy.world 0 points 5 months ago (1 children)

All we know about ourselves is what's in our memories. The way normal writing or talking works is just picking whichever words sound best, in order.

[–] Deconceptualist@lemm.ee 0 points 5 months ago* (last edited 5 months ago) (1 children)

That's not the whole story. "The dog swam across the ocean." is a grammatically valid sentence with correct word order. But you probably wouldn't write it because you have a concept of what a dog actually is and know its physiological limitations make the sentence ridiculous.

The LLMs don't have that kind of smarts. They just blindly mirror what we do. Since humans generally don't put those specific words together, the LLMs avoid them too, based solely on probability. If lots of people started making bold claims about ocean-faring canids (e.g. as a joke), then the LLMs would absolutely jump on board with no critical thinking of their own.
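
Reusing the toy bigram idea from my comment above, here's a deliberately silly sketch of exactly that effect: the "dog swam across the ocean" sentence gets measurably more probable the instant people start writing it, with no world knowledge involved anywhere.

```python
# How a joke meme shifts a probability model: the target sentence's
# score rises as soon as the phrase floods the training corpus.
from collections import Counter, defaultdict

def bigram_probs(corpus):
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1
    return following

def sentence_score(following, sentence):
    """Product of P(next | prev) over the sentence; 0 for unseen pairs."""
    score, words = 1.0, sentence.split()
    for prev, nxt in zip(words, words[1:]):
        total = sum(following[prev].values())
        score *= following[prev][nxt] / total if total else 0.0
    return score

base = "the dog swam across the pool . the ship sailed across the ocean .".split()
joke = base + "the dog swam across the ocean .".split() * 5  # the meme takes off

target = "the dog swam across the ocean"
print(sentence_score(bigram_probs(base), target))  # 0.0625
print(sentence_score(bigram_probs(joke), target))  # ~0.184, roughly 3x higher
```

No critical thinking happened; the numbers just followed the corpus.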

[–] theherk@lemmy.world 0 points 5 months ago

Humans do the same thing. Have you heard of religion?

Remember the game people used to play that was something like "type 'my girlfriend is' and then let your phone keyboard's auto-suggestion take it from there"? LLMs are that.

[–] neo@lemy.lol 0 points 5 months ago (3 children)

I was wondering: are people working on networks that train to build a modular model of the world, in order to understand it and predict events in it?

I imagine that that is basically what our brains do.

[–] eestileib@sh.itjust.works 0 points 5 months ago (1 children)

Many attempts, some well-funded.

They have been successful in very limited domains. For example, the F-35 integrated sensor suite.

[–] rottingleaf@lemmy.zip 0 points 5 months ago

> For example, the F-35 integrated sensor suite.

Now I know why they crash so often

[–] Natanael@slrpnk.net 0 points 5 months ago

Not really anything properly universal, but a lot of task-specific models exist, with integrations into logic engines and similar tooling. Performance varies a lot.

You might want to take a look at Wolfram Alpha's plugin for ChatGPT for something that's public.
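
To sketch the general integration pattern (not Wolfram's actual plugin API; every function name here is made up): route the queries an exact engine can handle to that engine, and let the language model handle the rest.

```python
# Hypothetical sketch of the "LLM + exact engine" routing pattern.
import re

def exact_math_engine(expression: str) -> str:
    """Stand-in for an external engine like Wolfram Alpha."""
    # Restricted eval keeps this toy self-contained; a real system
    # would call out to the engine's API instead.
    return str(eval(expression, {"__builtins__": {}}, {}))

def fake_llm(prompt: str) -> str:
    """Stand-in for the language model: fluent, not guaranteed correct."""
    return f"I believe the answer to '{prompt}' is ... (plausible-sounding text)"

def answer(prompt: str) -> str:
    # Anything that looks like plain arithmetic goes to the exact engine.
    if re.fullmatch(r"[\d\s+\-*/().]+", prompt):
        return exact_math_engine(prompt)
    return fake_llm(prompt)

print(answer("12 * (3 + 4)"))       # exact engine -> 84
print(answer("why do dogs swim?"))  # language model -> fluent guess
```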

[–] Deconceptualist@lemm.ee 0 points 5 months ago

Yeah I'm sure folks are working on it, but I'm not knowledgeable or qualified on the details.

[–] QuantumSoul@lemmy.dbzer0.com 0 points 5 months ago (2 children)

They do have internal concepts though: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation

Probably not of what a human is, but a thought process is needed for better text generation, and is therefore emergent in their neural net.
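
For anyone curious how that linked Chess-GPT result works, the core technique is a linear probe: train a simple linear classifier to read a world-state feature out of the network's hidden activations, and if it succeeds, the network plausibly represents that state internally. A minimal sketch, with simulated activations standing in for the real model's hidden states:

```python
# Linear-probe sketch: can a linear model decode a hidden "world
# feature" from network activations? Activations are simulated here;
# a real probe would capture them from the model's layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 64                      # samples, hidden-state width
state = rng.integers(0, 2, size=n)   # made-up binary world feature

# Embed the feature linearly in one direction, buried under noise --
# exactly the situation a probe is meant to detect.
direction = rng.normal(size=d)
acts = rng.normal(size=(n, d)) + np.outer(state, direction)

probe = LogisticRegression(max_iter=1000).fit(acts[:800], state[:800])
print("probe accuracy:", probe.score(acts[800:], state[800:]))  # near 1.0
```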

[–] Deconceptualist@lemm.ee 0 points 5 months ago (1 children)

Ok, maybe there's a possibility someday with that approach. But that doesn't reflect my understanding or (limited) experience with the major LLMs (ChatGPT, Gemini) out in the wild today. Right now they confidently advise ingesting poison because it's grammatically sound and they found it on some BS Facebook post.

If ML engineers can design an internal concept of what constitutes valid information (a hard problem for humans, let alone machines) maybe there's hope.

[–] QuantumSoul@lemmy.dbzer0.com 0 points 5 months ago

Ethical and healthy is a whole harder problem, lol. Reasoning and thinking will come first.

[–] Natanael@slrpnk.net 0 points 5 months ago

The problem is that they have many different internal concepts with conflicting information, no mechanism for determining truthfulness or accuracy or for pruning bad information, and they sample from all of it more or less randomly when answering.
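
A toy illustration of that failure mode (the claims and counts are made up, obviously): nothing prunes the false version, so sampling surfaces both.

```python
# Conflicting "facts" in the training data get sampled by weight;
# no mechanism checks which one is actually true.
import random
from collections import Counter

claims = Counter({
    "the Great Wall is visible from space": 40,
    "the Great Wall is not visible from space": 60,
})

for _ in range(10):
    print(random.choices(list(claims), weights=list(claims.values()))[0])
```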