this post was submitted on 21 Sep 2024
52 points (78.9% liked)

Asklemmy


Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

[–] Zexks@lemmy.world 7 points 2 months ago (5 children)

Lemmy is full of AI luddites. You’ll not get a decent answer here. As for the other claims: they are not just next-token generators, any more than you are when speaking.

https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you points of measure that would separate them from parrots or ants but would exclude humans and not LLMs, other than “it’s not human or biological,” which is just fearful, weak thought.
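For readers unfamiliar with the mechanics being argued about: "next token generator" refers to a loop like the one below. This is a minimal sketch with a toy, made-up bigram table — the tokens and probabilities are hypothetical for illustration, not drawn from any real model:

```python
# Toy illustration of autoregressive next-token generation:
# at each step, score candidate tokens given the last token
# and emit the highest-scoring one (greedy decoding).
# BIGRAM_PROBS is a hypothetical stand-in for a trained model.

BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def generate(start: str, max_tokens: int = 3) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        candidates = BIGRAM_PROBS.get(tokens[-1])
        if not candidates:
            break  # no known continuation for this token
        # greedy decoding: pick the most probable next token
        tokens.append(max(candidates, key=candidates.get))
    return tokens

generate("the")  # -> ["the", "cat", "sat", "down"]
```

The debate in this thread is whether this mechanism, scaled up, amounts to understanding or merely mimics it.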

[–] chobeat@lemmy.ml 11 points 2 months ago

You use “luddite” as if it’s an insult. History proved the Luddites were right in their demands, and they were fighting the good fight.

[–] jacksilver@lemmy.world 10 points 2 months ago (1 children)

Here's an easy way we're different: we can learn new things. LLMs are static models; that's why OpenAI publishes knowledge cut-off dates for its models.

Another is that LLMs can't do math. Deep learning models are limited to their input domain; when you ask an LLM to do math outside its training data, it's almost guaranteed to fail.

Yes, they are very impressive models, but they're a long way from AGI.

[–] DavidDoesLemmy@aussie.zone -4 points 2 months ago (1 children)

I know lots of humans who can't do maths. At least I think they're human. Maybe they're LLMs, by your definition.

[–] jacksilver@lemmy.world -1 points 2 months ago (1 children)

I think you're missing the point. No LLM can do math, most humans can. No LLM can learn new information, all humans can and do (maybe to varying degrees, but still).

And just to clarify what I mean by "not able to do math": there's a lack of understanding of how numbers work, so combining numbers or values outside the training data can easily trip them up. Since it's prediction-based, exponents, trig functions, etc. will quickly produce errors when using large values.
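The "limited to their input domain" failure mode can be sketched with a toy stand-in: a "model" that only memorizes answers over a bounded training range and falls back to the nearest memorized input. This is a hypothetical construction for illustration, not how an actual LLM computes, but it shows the same in-range/out-of-range behavior:

```python
# Sketch: a "model" that memorizes squaring for inputs 0..100 and
# answers by nearest memorized input, analogous to a predictor
# trained on a bounded range degrading outside it.
# (Toy construction, not an actual LLM.)

TRAINING_DATA = {x: x ** 2 for x in range(0, 101)}  # "seen" values only

def predict_square(x: int) -> int:
    # the nearest memorized input stands in for true computation
    nearest = min(TRAINING_DATA, key=lambda k: abs(k - x))
    return TRAINING_DATA[nearest]

predict_square(50)    # inside the training range: exact (2500)
predict_square(1000)  # far outside it: returns 10000, true answer is 1000000
```

Inside the range the lookup is exact; outside it the error explodes, which is the pattern the comment above describes for large-value arithmetic.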

[–] Zexks@lemmy.world 1 points 2 months ago

Yes. Some LLMs can do math. It’s a documented thing. Just because you’re unaware of it doesn’t mean it doesn’t exist.

[–] vrighter@discuss.tchncs.de 9 points 2 months ago

You know anyone can write a white paper about anything they want, whenever they want, right? A white paper is not authoritative in the slightest.

[–] gravitas_deficiency@sh.itjust.works 6 points 2 months ago* (last edited 2 months ago) (1 children)

Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and theory behind them, can see right through the marketing bullshit that pervades that domain, and are growing increasingly sick of it for various very real and specific reasons.

We’re not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping energy inefficiencies it requires, what it’s being billed as, and how it’s being deployed.

[–] Zexks@lemmy.world 0 points 2 months ago

Yes, many of you are. I’m one of those technicals you speak of. I work with half a dozen devs who all think like you. They’re all failing in their metrics to keep up with those of us capable of using and finding uses for new tech, including AI. The others are being pushed out, as will most of those in here complaining. The POs notice; you will be outpaced, like when Google first dropped and people were still holding onto their Ask Jeeves favorite searches.

[–] Omega_Jimes@lemmy.ca 3 points 2 months ago

Blog posts and peer reviewed articles are not the same thing.