this post was submitted on 22 Jul 2023
168 points (85.6% liked)

Asklemmy

Feels like we've got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. In addition to that, you've got all these people invested in AI companies running around with flashlights under their chins like "bro, this is so scary how good we made this thing." Seems like bullshit.

I've seen people generating bits of programming with it, which seems useful, but idk man. Coming from CNC, I don't think I'd just send it with some ChatGPT code. Is it all hype? Is there something actually useful under there?

[โ€“] unknowing8343@discuss.tchncs.de 1 points 1 year ago (2 children)

I am not saying it works exactly like humans inside of the black box. I'm just saying that it works: it learns and then creates thoughts. And it works.

You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

All I see is the same kind of black box. A kid trying many, many times to stand up, or to say "papa", until it somehow works, and now the pathway is set up in the brain.

Obviously ChatGPT is just dealing with text. But does that make it NOT intelligent? I think it makes it very text-intelligent. Just add together all the AI pieces we are building and you've got yourself a general AI that will do anything we do.

Yeah, maybe it does not work like our brain. But is the human brain's structure the only possible structure for intelligence? I don't think so.

[โ€“] nickwitha_k@lemmy.sdf.org 1 points 1 year ago

It does not create "thoughts"; it is very good at tricking humans into believing that it does, though.

You talk about how human cognition is more complex and squishy, but nobody really knows how it truly works inside.

It is not that there is no understanding, but rather that our understanding is incomplete. We know, for example, that human cognition is not purely storing recorded stimuli and performing associative analysis against them when encountering new stimuli.

All I see is the same kind of blackbox. A kid trying many, many times to stand up, or to say "papa", until it somehow works, and now the pathway is setup in the brain.

This is a bit of a logical fallacy here, unfortunately; specifically, false equivalence (i.e., Thing A and Thing B both have characteristic C, therefore Thing A and Thing B are the same). This is exactly the sort of "dangerous" fallacy that a number of AI academics have warned about as well. LLMs are great at producing outputs that our socially-oriented brains can interpret as sentient thought and mistakenly anthropomorphize.

However, LLMs, as the word "model" in the name suggests, are statistical modeling software. They do not understand context or abstract meaning; only the statistical occurrence of data in their training corpus, compared against the input. They are structurally incapable of developing a theory of mind due to the limitations in how they work.
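To make the "statistical model" point concrete, here is a deliberately tiny sketch in Python: a bigram model that "generates" text purely by sampling from observed word co-occurrence counts. Real LLMs are transformer networks with billions of parameters, not lookup tables like this, but the generation step is still picking the next token from learned statistics rather than from any grasp of meaning.

```python
import random
from collections import defaultdict

# Toy "training data": the model will only ever know which word followed
# which, never what any word means.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count successors: duplicates in each list encode observed frequencies.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, max_words=8):
    word, out = start, [start]
    for _ in range(max_words):
        options = following.get(word)
        if not options:  # dead end: this word was never followed by anything
            break
        word = random.choice(options)  # sample proportionally to frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the cat ate"
```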

But does that make it NOT intelligent?

No. The fact that they literally cannot understand anything or engage in contemplative, abstract thought is what makes them not intelligent. They do not understand the meaning of language; to them it is just data that has no context beyond how it relates to other parts of language.

Yeah, maybe it does not work like our brain.

I absolutely think that LLMs could be a component in AI but, alone, calling them intelligent is like saying that a tire is a car because both can travel linear distances using rotational movement. By themselves, LLMs fail to fulfill what we tend to define as intelligence.

But is the human brain's structure the only possible structure for intelligence? I don't think so.

I certainly hope that the human brain isn't the only possible structure for intelligence, and I find that very unlikely, because our meat-computers aren't really that special, even if we can't entirely understand how they work yet (we've only really been trying for a relatively short time, compared to our species' existence). We seem to agree there. I absolutely want AI, as well as other non-human intelligence, to be a thing, because the idea of a universe in which humanity is the only sentience is very lonely and sad to me.

[โ€“] stsquad@lemmy.ml 1 points 1 year ago (1 children)

If you consider the amount of text an LLM has to consume to replicate something approaching human-like language, you have to appreciate that there is something else going on with our cognition. LLMs give responses that make statistical sense, but humans can actually understand why one arrangement of words makes sense over another.

[โ€“] unknowing8343@discuss.tchncs.de 3 points 1 year ago (1 children)

Yes, it's inefficient... and OpenAI and Google are losing ground precisely because of that.

There are open-source models already out there rivaling ChatGPT that you can fine-tune on a 10-year-old laptop in a day.
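What makes this feasible on modest hardware is parameter-efficient fine-tuning (e.g., LoRA), which updates a tiny set of adapter weights instead of the whole model. A minimal sketch of the setup, assuming the Hugging Face transformers and peft libraries; "gpt2" is just a small stand-in model, and the actual training loop is omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "gpt2"  # stand-in; substitute any small open model you have locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small low-rank adapter matrices into the attention layers and
# trains only those, which is why this fits on consumer hardware.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor for the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
)
model = get_peft_model(model, config)

# Typically well under 1% of the total weights end up trainable.
model.print_trainable_parameters()
```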

And this is just the beginning.

Also... maybe we should check how many words of exposure a kid gets throughout their life before they can develop arguments like ChatGPT's... because the thing is, ChatGPT knows way more about many things than any human being ever will. Like, easily thousands of times more.

[โ€“] nickwitha_k@lemmy.sdf.org 1 points 1 year ago

And this is just the beginning.

Absolutely agreed, so long as protections are put in place to defang it as a weapon against labor (if few have the leisure time or income to support tech development, I see a great danger of stagnation). LLMs do clearly seem to be an important part of advancing towards real AI.