this post was submitted on 12 Jun 2024

Technology

you are viewing a single comment's thread
[–] Kecessa@sh.itjust.works 0 points 3 months ago (2 children)

I'm 100% sure they can't because what they call AI isn't intelligence.

[–] dch82@lemmy.zip 0 points 3 months ago (1 children)

Intelligence is whatever does the job and gets it done well.

[–] CoggyMcFee@lemmy.world 0 points 3 months ago* (last edited 3 months ago) (1 children)

AI is whatever makes the dollar sign number get bigger

[–] dch82@lemmy.zip 0 points 3 months ago

It’s intelligent in that regard…

[–] iopq@lemmy.world 0 points 3 months ago (5 children)

Even people hallucinate. Under your definition, intelligence doesn't exist

[–] Ultraviolet@lemmy.world 0 points 3 months ago (1 children)

"Hallucination" is an anthropomorphized term for what's happening. The actual cause is much simpler, there's no semantic distinction between true and false statements. Both are equally plausible as far as a language model is concerned, as long as it's semantically structured like an answer to the question being asked.

[–] htrayl@lemmy.world 0 points 3 months ago (2 children)

That's also pretty true for people, unfortunately. People are deeply incapable of differentiating fact from fiction.

[–] kaffiene@lemmy.world 0 points 3 months ago (1 children)

No that's not it at all. People know that they don't know some things. LLMs do not.

[–] sugar_in_your_tea@sh.itjust.works 0 points 3 months ago (1 children)

Exactly, the LLM isn't "thinking," it's just matching inputs to outputs with some randomness thrown in. If your data is high quality, a lot of the time the answers will be appropriate given the inputs. If your data is poor, it'll output surprising things more often.

It's a really cool technology in how much we get for how little effort we put in, but it's not "thinking" in any sense of the word. If you want it to "think," you'll need to put in a lot more effort.
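A minimal sketch of that "matching with some randomness thrown in" step, using made-up scores for illustration (a real model produces scores over tens of thousands of tokens):

```python
import math
import random

# Hypothetical next-token scores a model might assign after "The sky is ..."
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5}

def sample(logits, temperature=1.0):
    # Softmax with temperature: low temperature -> nearly always the best match,
    # high temperature -> more surprising outputs.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens])[0]

print(sample(logits, temperature=0.1))   # almost always "blue"
print(sample(logits, temperature=10.0))  # surprising picks far more often
```

With good training data the high-scoring token is usually appropriate; with poor data, the same sampling mechanism happily outputs something surprising.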

[–] ricdeh@lemmy.world 0 points 3 months ago

Your brain is also "just" matching inputs to outputs using complex statistics, a huge number of interconnects and clever digital-analog mixed ionic circuitry.

Like how many, five?

[–] h3mlocke@lemm.ee 0 points 3 months ago (1 children)

This is some real "what else besides witches floats in water" ass-logic

Very small rocks!

[–] technocrit@lemmy.dbzer0.com 0 points 3 months ago* (last edited 3 months ago) (1 children)

Wow whoosh. The point is that "AI" isn't actually "intelligent" like a human and thus can't "hallucinate" like an intelligent human.

All of this anthropomorphic terminology is just misleading marketing bullshit.

[–] iopq@lemmy.world 0 points 3 months ago (5 children)

Who said anything about human intelligence? AIs have a different kind of intelligence, an artificial kind. I'm tired of pretending they don't

Ever heard of the Turing test? Ever since AIs could pass it, it became not a thing. Before that, playing Go was the mark of AI.

Any time an AI achieves a new thing people move goalposts. So I ask you: what does AI need to achieve to have intelligence?

[–] zbyte64@awful.systems 0 points 3 months ago

Ever heard of the Turing test? Ever since AIs could pass it, it became not a thing.

In place of the Turing test we have a new test that informs us whether an individual can properly identify a stochastic parrot

[–] bionicjoey@lemmy.ca 0 points 3 months ago (1 children)

The Turing Test says that any person could have any conversation with a machine and there's no chance you could tell it's a machine. It does not say that one person could have one conversation with a machine and not be able to tell.

Current text generation models out themselves all the damn time. They can't actually understand the underlying concepts of words; they just predict what bit of text would be most convincing to a human based on previous text.

Playing Go was never the mark of AI, it was the mark of improving game-playing machines. It doesn't represent "intelligence", only an ability to predict what should happen next based on a set of training data.

It's worth noting that after Lee Sedol lost to AlphaGo, researchers found a fairly trivial Go strategy that could reliably beat the machine. It was simply such an easy strategy to counter that none of the games in the training data had included anyone attempting it, so the algorithm didn't account for how to counter it. Because the computer doesn't know Go theory, it only knows how to predict what to do next based on the training data.

[–] iopq@lemmy.world 0 points 3 months ago (1 children)

Detecting the machine correctly once is not enough. You need to guess correctly most of the time to statistically prove it's not by chance. It's possible for some people to do this, but I've seen a lot of comments on websites accusing HUMAN answers of being written by AIs.

If the current chat bots improve to reliably not be detected, would that be intelligence then?

KataGo just fixed that bug by adding those positions to the training data. They weren't in the training data at first because it consisted only of self-play games. Once games where humans beat the AI were included, the bug was fixed.

[–] petrol_sniff_king@lemmy.blahaj.zone 0 points 3 months ago (1 children)

Once games where humans beat the AI were included, the bug was fixed.

You're not grasping the fundamental problem here.

This is like saying a calculator understands math because when you plug in the right functions, you get the right answers.

[–] iopq@lemmy.world 0 points 3 months ago (1 children)

The AI grasps the strategic aspects of the game really well. To the point that if you don't let it "read" deeply into the game tree, but only "guess" moves (that is, only use the policy network), it still plays at a high level (below professional, but strong amateur).

[–] petrol_sniff_king@lemmy.blahaj.zone 0 points 3 months ago (1 children)

How does it "understand the strategic aspects of the game really well" if it can't solve problems it hasn't seen the answers to?

[–] kaffiene@lemmy.world 0 points 3 months ago

People can mean different things. Intelligence can mean a calculator doing a sum, and it can mean the way humans talk to each other. AI can do some intelligent things without people agreeing that it's intelligent in the latter sense.

[–] homicidalrobot@lemm.ee 0 points 3 months ago (2 children)

The same thing actually passing a Turing test would require. You've obviously read the words "Turing test" somewhere and thought you understood what it meant, but no robot our species has ever produced has passed the Turing test. It EXPLICITLY requires that intelligence equal to (or indistinguishable from) HUMAN intelligence is shown. Unless the human reading the responses is lying, no AI we'll produce for decades will pass the Turing test.

No large language model has intelligence. They're just complicated call-and-response mechanisms that guess what answer we want based on a weighted response system (we tell it directly, or tell another machine how to help it "weigh" words in a response). Obviously, with anything that requires massive amounts of input or nuance, like language, it'll only be right about what it was guided on, which is limited to the areas it was trained in.

We don't have any novel interactions with AI. They are regurgitation engines, bringing forward sentences that aren't theirs piecemeal. Given ten messages, I'm confident no major LLM would pass a Turing test.

[–] iopq@lemmy.world 0 points 3 months ago

The chat bots will pass the Turing test in a few years, maybe 5. Would that be intelligence then?

[–] BluesF@lemmy.world 0 points 3 months ago (1 children)

The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake. Depending on how you set it up I wouldn't be surprised if a modern LLM could pass it, at least some of the time. That doesn't mean they are intelligent, they aren't, but I don't think the Turing test is good justification.

For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn't plan a whole sentence out in advance; it works token by token... The input to each prediction is just everything so far, up to the last word. When it starts writing "As..." it has no concept of the fact that it's going to write "...an AI language model" until it gets through those words.

Frankly, given that fact, it's amazing that LLMs can be as powerful as they are. They don't check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token... An incredible piece of technology, despite its obvious flaws.
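That loop can be sketched in a few lines. The lookup table below is a hypothetical stand-in for the network, but the control flow is the real point: each prediction sees only the tokens emitted so far.

```python
import random

def next_token_probs(context):
    # Stand-in for the model: a real LLM returns probabilities over its whole
    # vocabulary, conditioned on everything generated so far.
    table = {
        (): {"As": 1.0},
        ("As",): {"an": 1.0},
        ("As", "an"): {"AI": 0.7, "aside,": 0.3},
    }
    return table.get(tuple(context), {"<eos>": 1.0})

def generate(max_tokens=10):
    out = []
    for _ in range(max_tokens):
        probs = next_token_probs(out)
        # One token is chosen; nothing beyond this single step is planned.
        tokens = list(probs)
        tok = random.choices(tokens, weights=[probs[t] for t in tokens])[0]
        if tok == "<eos>":
            break
        out.append(tok)
    return " ".join(out)

print(generate())  # "As an AI" or "As an aside," — decided one token at a time
```

When it emits "As", nothing anywhere in the process has yet decided whether the sentence becomes "As an AI" or "As an aside," — that choice only exists at the step that makes it.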

[–] petrol_sniff_king@lemmy.blahaj.zone 0 points 3 months ago (1 children)

The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake.

This is just conjecture, but I assume this is because the question of consciousness is not really falsifiable, so you just kind of have to draw an arbitrary line somewhere.

Like, maybe tech gets so good that we really can't tell the difference, and only god knows it isn't really alive. But then, how would we know not to give the machine legal rights?

For the record, ChatGPT does not pass the Turing test.

[–] BluesF@lemmy.world 0 points 3 months ago

ChatGPT is not designed to fool us into thinking it's a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn't be intelligent, in my view.

[–] h3mlocke@lemm.ee 0 points 3 months ago (1 children)

Have you ever heard of the Turing test?

https://en.m.wikipedia.org/wiki/Turing_test

Here you go since you've heard of it but don't understand it.

[–] iopq@lemmy.world 0 points 3 months ago (1 children)

Current AIs pass it, since most people can't reliably tell AI-generated from human-written text every time

[–] ChairmanMeow@programming.dev 0 points 3 months ago (1 children)

It's dead simple to see if you're talking to an LLM. The latest models don't pass the Turing test, not even close. Asking them simple shit causes them to crap themselves really quickly.

Ask ChatGPT how many r's there are in "veryberry". When it gets it wrong, tell it you're disappointed and expect a correct answer. If you do that repeatedly, you can get it to claim there's more r's in the word than it has letters.
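For contrast, the counting task itself is a one-liner in ordinary code, which is what makes the failure so jarring. (A common explanation, though an assumption here, is that LLMs operate on multi-character tokens and never see individual letters.)

```python
word = "veryberry"
print(word.count("r"))  # plain character counting: prints 3
```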

[–] demonsword@lemmy.world 0 points 3 months ago (1 children)

that's it? you asked one question and that was enough for you?

[–] iopq@lemmy.world 0 points 3 months ago

It's quite easy to identify an AI when you're talking to one. To be fair, you need to actually run the Turing test since it removes confirmation bias

[–] kaffiene@lemmy.world 0 points 3 months ago

LLMs aren't even hallucinating, though. It's a euphemistic term to make their limitations sound human-like

[–] heavy@sh.itjust.works 0 points 3 months ago (1 children)

No, really, if you understood how the language models work, you would understand it's not really intelligence. We just tend to humanize it because that's what our brains do.

There are a lot of great articles that summarize how we got to this stage, and it's pretty interesting. I'll try to update this post with a link later.

I think LLMs are useful (and fun) and have a place, but intelligence they are not.

[–] iopq@lemmy.world 0 points 3 months ago (14 children)

I'm still waiting for the definition of intelligence that won't have the same moving of goalposts the Turing Test did

I think the definition is "whichever is more emotionally important to you." So, in your case, they would be very, very intelligent.
