this post was submitted on 13 Sep 2023
64 points (100.0% liked)

Avram Piltch is the editor in chief of Tom's Hardware, and he's written a thoroughly researched article breaking down the promises and failures of LLM AIs.

[–] DarkenLM@artemis.camp 12 points 1 year ago (1 children)

Machines don't learn like humans yet.

Our brains are a giant electrical/chemical system that somehow creates consciousness. We might be able to create that in a computer. And the day it happens, then what will be the difference between a human and a true AI?

[–] CanadaPlus@lemmy.sdf.org 3 points 1 year ago (1 children)

If you read the article, there's "experts" saying that human comprehension is fundamentally computationally intractable, which is basically a religious standpoint. Like, ChatGPT isn't intelligent yet, partly because it doesn't really have long-term memory, but yeah, there's overwhelming evidence the brain is a machine like any other.

[–] barsoap@lemm.ee 2 points 1 year ago (1 children)

fundamentally computationally intractable

...using current AI architecture, and the insight isn't new, it's maths. This is currently the best idea we have about the subject. Trigger warning: Cybernetics, and lots of it.

Meanwhile, yes, of course brains are machines like any other; claiming otherwise is claiming you can compute incomputable functions, which is a physical and logical impossibility. And it's fucking annoying to talk about this topic with people who don't understand computability. It usually turns into a shouting match of "you're claiming the existence of something like a soul, some metaphysical origin of the human mind" vs. "no I'm not" vs. "yes you are, but you don't understand why".

[–] CanadaPlus@lemmy.sdf.org 0 points 1 year ago (1 children)

…using current AI architecture, and the insight isn't new, it's maths.

That is not what van Rooij et al. said, and they're who's cited in the article. They published their essay here, which I haven't really read, but which appears to make an argument about any possible computer. They're psychologists and I don't see any LaTeX in there, so they must be missing something.

Unfortunately I can't open your link, although it sounds interesting. A feedforward network can approximate any computable function if it gets to be arbitrarily large, but depending on how you want to feed an agent inputs from its environment and read its actions, a single function might not be enough.
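
That approximation claim is easy to make concrete, by the way. Here's a toy sketch (hypothetical code of my own, not from the article or the thread): a single hidden layer of ReLU units can reproduce any piecewise-linear interpolant of a function exactly, so adding more knots (a wider layer) drives the error on a continuous function down quadratically in the knot spacing.

```python
def relu(x):
    return max(x, 0.0)

def piecewise_relu_net(f, knots):
    """One hidden ReLU unit per segment, with weights chosen so the
    network equals the piecewise-linear interpolant of f at the knots."""
    ys = [f(k) for k in knots]
    slopes = [(ys[i + 1] - ys[i]) / (knots[i + 1] - knots[i])
              for i in range(len(knots) - 1)]
    # each unit switches on at its knot and contributes a *change* of slope
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1]
                             for i in range(1, len(slopes))]
    bias = ys[0]

    def net(x):
        return bias + sum(w * relu(x - k) for w, k in zip(weights, knots[:-1]))

    return net

# approximate x^2 on [0, 1] with 10 segments (h = 0.1)
knots = [i / 10 for i in range(11)]
net = piecewise_relu_net(lambda x: x * x, knots)
err = max(abs(net(x / 1000) - (x / 1000) ** 2) for x in range(1001))
# for x^2 the interpolation error peaks at h^2/4 = 0.0025, at segment midpoints
```

Halving the knot spacing quarters the error, which is the quantitative version of "arbitrarily large gets arbitrarily close" for one input and one output; an agent reading a whole environment is a different beast.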

[–] barsoap@lemm.ee 0 points 1 year ago (1 children)

They’re psychologists and I don’t see any LaTeX in there,

Oh no, that's LaTeX alright. I can tell by everything from the font to the line breaking; some of it is hard to imitate with an office suite, the rest impossible. But I'll totally roll with dunking on psychologists :)

In this paper, we undercut these views and claims by presenting a mathematical proof of inherent intractability (formally, NP-hardness) of the task that these AI engineers set themselves

Yeah, I don't buy it. If human cognition were inherently NP-hard, we'd have brains the size of suns. OTOH it might be "close to NP-hard" in the same sense as the travelling salesman problem: finding the exact shortest tour is intractable, but it's quite feasible to get answers guaranteed to be no more than X% worse (with X of your choosing) than the actual shortest path, which is good enough in practice. We do, after all, have to operate largely in real-time; there's no time to be perfect when a sabre-tooth tiger is trying to eat you.
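
To put a number on that "good enough in practice" point, a hypothetical toy sketch (mine, not from the thread): nearest-neighbour greed runs in O(n²), while the exact answer below it costs (n−1)! tour evaluations, and on small random instances the greedy tour usually lands within a modest factor of optimal.

```python
import itertools
import math
import random

def tour_length(pts, order):
    # total length of the closed tour visiting pts in the given order
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(pts):
    # greedy heuristic: always hop to the closest unvisited city
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]
greedy = tour_length(pts, nearest_neighbour(pts))
# brute force over all 8! tours starting at city 0 -- exact but exponential
best = min(tour_length(pts, (0,) + p)
           for p in itertools.permutations(range(1, 9)))
ratio = greedy / best  # typically a small constant on random instances
```

Scale n up and the brute-force line becomes hopeless while the greedy line stays instant; that asymmetry, not "NP-hard therefore impossible", is the practically relevant fact.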

Or think about SAT solvers: they can solve large classes of problems ridiculously fast, even though the problem is, in its full generality, NP-complete. And the class they're fast on is so large that people very much do treat SAT as tractable, because it usually is. Maybe that is why we get headaches from hard problems.
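
The SAT point fits in a few dozen lines. A hypothetical toy DPLL solver (my sketch, nothing like an industrial solver): on many practical formulas, unit propagation alone does nearly all the work, and the exponential branching is rarely exercised.

```python
def dpll(clauses, assignment=None):
    """Toy DPLL SAT solver. Clauses are lists of nonzero ints in
    DIMACS style: 3 means variable 3 is true, -3 means it's negated."""
    assignment = dict(assignment or {})
    changed = True
    while changed:  # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return None                # empty clause: conflict
            if len(lits) == 1:             # unit clause: forced assignment
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            else:
                simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                  # every clause satisfied
    var = abs(clauses[0][0])               # branch on some open variable
    for val in (True, False):
        result = dpll(clauses, {**assignment, var: val})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x3): propagation alone solves it
model = dpll([[1, 2], [-1, 3], [-3]])
```

Real solvers layer clause learning, watched literals and restarts on top of this, which is what makes industrial instances with millions of variables routinely tractable despite NP-completeness.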

Unfortunately I can’t open your link, although it sounds interesting.

Then let me throw citations at you. The first is for the underlying theory characterising the necessary cybernetic characteristics of human minds; the second one applies it to current approaches to AI. This comes out of German publicly-funded basic research (Max Planck / FIAS).

Nikolić, Danko. "Practopoiesis: Or how life fosters a mind." Journal of Theoretical Biology 373 (2015): 40-61.
Nikolić, Danko. "Why deep neural nets cannot ever match biological intelligence and what to do about it?." International Journal of Automation and Computing 14.5 (2017): 532-541.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

Arxiv link for the first one: https://arxiv.org/abs/1402.5332

Also, TIL people use LaTeX for normal documents with no formulas.