this post was submitted on 04 Sep 2024

Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
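The article doesn’t publish the exact prompt, but a minimal sketch of the kind of request it describes might look like the following (the wording and the template-driven setup are assumptions for illustration, not ASIC’s actual configuration):

```python
# Hypothetical reconstruction of the trial's summarisation prompt.
# The exact wording and parameters are NOT given in the article;
# this only sketches the kind of instruction described.

PROMPT_TEMPLATE = """Summarise the following inquiry submission.
Focus on:
- mentions of ASIC
- recommendations made by the author
- references to more regulation
For each point, include the page reference and surrounding context.

Submission:
{submission_text}
"""

def build_prompt(submission_text: str) -> str:
    """Fill the template with the full text of one submission."""
    return PROMPT_TEMPLATE.format(submission_text=submission_text)
```

The same prompt would then be issued once per submission, five times in total.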

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
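ASIC’s actual rubric and weighting aren’t published; as a rough illustration of how such an aggregate percentage could be computed (the five criteria come from the article, while the scoring scale and numbers are invented):

```python
# Toy rubric aggregation: each summary gets a 0-5 score per criterion,
# and the overall result is the mean across criteria and summaries,
# expressed as a percentage. All scores below are made up.

CRITERIA = ["coherency", "length", "asic_references",
            "regulation_references", "recommendations"]

def rubric_percentage(scores: list[dict[str, int]], max_score: int = 5) -> float:
    """Average all criterion scores across summaries, as a percentage."""
    total = sum(s[c] for s in scores for c in CRITERIA)
    return 100 * total / (len(scores) * len(CRITERIA) * max_score)

# Example: one summary scoring 4/5 on every criterion comes out at 80%.
example = [{c: 4 for c in CRITERIA}]
print(rubric_percentage(example))  # 80.0
```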

top 50 comments
[–] simple@lemm.ee 0 points 2 months ago (5 children)

The most promising model, Meta’s open source model Llama2-70B, was prompted to summarise the submissions

Llama 2 is insanely outdated and significantly worse than Llama 3.1, so this article doesn't mean much.

[–] Gloria@sh.itjust.works 0 points 2 months ago (2 children)

On July 18, 2023, in partnership with Microsoft, Meta announced Llama 2. On April 18, 2024, Meta released Llama 3.

L2 is one year old, and a study like this takes time. What is your point? I bet if they had done it with L3 and the result came back similar, you would say L3 is "insanely outdated" as well?

Can you confirm that you think that with L3 the result would look completely opposite, with the AI summaries always beating the human summaries? Because it sounds like you are implying that.

[–] redscroll@lemmy.ml 0 points 2 months ago* (last edited 2 months ago)

We know the performance of L2-70b to be on par with L3-8b, just to put the difference in perspective. Surely the models continue to improve, and we can only hope the same improvements show up in L4, but I think the point is that models have improved dramatically since this study was run, and far more attention has gone into the fine-tuning and alignment phases of training, specifically for these kinds of tasks. Not saying the models would beat the human summaries every time (very likely not), but at the very least the disparity between them wouldn't be nearly as large. Ultimately, human summaries will always be "ground truth", so it's hard to see how models will beat humans, but they can get close.

[–] simple@lemm.ee 0 points 2 months ago (3 children)

Can you confirm that you think with L3, the result would look completely opposite and the summaries of the AI would always beat the human summaries? Because it sounds like you are implying that.

Lemmy users try not to make a strawman argument (impossible challenge)

No, that's not what I said, and not even close to what I was implying. If Llama 2 scored 47%, then 3.1 would score significantly better, easily over 60% at least. No doubt humans can be better at summarizing, but A) it needs someone who's very familiar with the work and has great English skills, and B) it takes a lot of time and effort.

The claim was never that AI can summarize better than people; it was that it can do it in 10 seconds and the result would be "good enough". People are already doing AI summaries of longer articles without many complaints.

[–] kromem@lemmy.world 0 points 2 months ago

This is pretty much every study right now as things accelerate. Even just six months can make a dramatic difference in capabilities.

For example, Meta's 3-405B has one of the strongest levels of situational awareness among current models, a capability that isn't present to anywhere near the same degree in 2-70B or even 3-70B.

[–] homesweethomeMrL@lemmy.world 0 points 2 months ago

Just a few more tens of millions of dollars, and it’ll be vastly improved to “pathetic” and “insipid”.

[–] Wooki@lemmy.world 0 points 2 months ago (1 children)

You didn't bother to read the article. Read the article. The study was conducted last year.

[–] simple@lemm.ee 0 points 2 months ago (1 children)

I read the article. I'm aware it's an older study. Point still stands.

[–] Wooki@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

And yet your claim is still pointless, unlike this study.

[–] jeena@piefed.jeena.net 0 points 2 months ago

My guess is that even if it were better with generic text, most of the texts which really mean something have a lot of context around them which a model knows nothing about, so it won't know what is and isn't important to the people working on the topic.

[–] testfactor@lemmy.world 0 points 2 months ago (4 children)

Well, not every metric. I bet the computers generated them way faster, lol. :P

[–] fine_sandy_bottom@discuss.tchncs.de 0 points 2 months ago (1 children)

This is a really valid point, especially because it's not only faster but dramatically cheaper.

The thing is, summaries that are cheap but pretty terrible might end up being costly. If decision makers are relying on these summaries and they're inaccurate, the consequences might be immeasurable.

Suppose you're considering two cars: one is very cheap but on one random day per month it just won't start; the other is 5x the price but works every day. If you really need the car to get to work, the one that randomly doesn't start might be worse than no car at all.
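To put rough numbers on the analogy (every figure below is invented for illustration):

```python
# Toy expected-cost comparison of the two cars described above.
# All prices and rates are made-up assumptions.

cheap_car_price = 5_000
reliable_car_price = 25_000        # "5x the price"
failure_days_per_year = 12         # one random no-start day per month
cost_per_missed_workday = 300      # lost wages, last-minute taxi, etc.
years_of_ownership = 5

cheap_total = (cheap_car_price
               + failure_days_per_year * years_of_ownership * cost_per_missed_workday)
reliable_total = reliable_car_price

print(f"cheap car: ${cheap_total:,}  reliable car: ${reliable_total:,}")
# cheap car: $23,000  reliable car: $25,000 -- nearly the same, and
# that's before the day a no-start costs you the job itself.
```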

[–] Darkassassin07@lemmy.ca 0 points 2 months ago (1 children)

And for a much much smaller paycheck.

Which is all corporate gives af about.

[–] pennomi@lemmy.world 0 points 2 months ago (7 children)

It might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.

For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.

[–] greenskye@lemm.ee 0 points 2 months ago (2 children)

And you can absolutely trust that tons of executives will definitely not understand this distinction and will use AI even in areas where it's actively harmful.

[–] T00l_shed@lemmy.world 0 points 2 months ago

In my experience that was the case. However, it was with GPT-3, and I am a sample of one.

[–] SkyNTP@lemmy.ml 0 points 2 months ago (6 children)

LLMs == AGI was and continues to be a massive lie perpetuated by tech companies and investors that people still have not woken up to.

[–] ContrarianTrail@lemm.ee 0 points 2 months ago (2 children)

Who is claiming that LLMs are generally intelligent? Is it just "they" or can you actually name a company?

[–] kautau@lemmy.world 0 points 2 months ago

I think the idea is that every company is dumping money into LLMs and no other form of alternative AI development, to the point that all AI research is LLM-based and, to investors and those involved, it's effectively the only avenue to AGI, though that's likely not true.

[–] exanime@lemmy.world 0 points 2 months ago (2 children)

You mean the stuff currently peddled everywhere as "Artificial intelligence"?

Yeah, nobody is saying they are intelligent

[–] ContrarianTrail@lemm.ee 0 points 2 months ago (1 children)

AI and AGI are not the same thing.

A chess-playing robot is intelligent, but it's so-called "narrow intelligence" because it's really good at one thing and that doesn't translate to other things. Humans are generally intelligent because we can perform a wide range of cognitive tasks. There's nothing wrong with calling an LLM an AI, because that's what it is. I'm not aware of a single AI company claiming to possess an AGI system.

[–] TheGrandNagus@lemmy.world 0 points 2 months ago (3 children)

In-game NPC actions have been called "AI" for decades. Computers playing chess have been called AI for decades. Lots of stuff has been.

Nobody thought they were genuinely sentient or sapient.

The fact that people lumped LLMs, text-to-image generators, machine learning algorithms, image recognition algorithms, etc into a category and called it "AI" doesn't mean they think it is self aware or intelligent in the way a human would be.

[–] maegul@lemmy.ml 0 points 2 months ago (3 children)

Not a stock market person or anything at all ... but NVIDIA's stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).

What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I've seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capabilities from those actually trying to implement, deploy and use it.

I'd wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all with different and competing stakes and interests.

[–] atrielienz@lemmy.world 0 points 2 months ago (1 children)

NVIDIA has been having a lot of problems with their 13th/14th gen CPUs degrading. They are also embroiled in a potential antitrust investigation. That, coupled with the "growing pains of generative AI", has caused them a lot of problems, when two months ago they were one of the world's most valuable companies.

Some of it is likely the die-off of the AI hype, but their problems reach further than the sudden AI boom.

[–] homesweethomeMrL@lemmy.world 0 points 2 months ago

What are the chances that this is the investors getting cold feet about the AI hype?

Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.

[–] Voroxpete@sh.itjust.works 0 points 2 months ago (2 children)

"What are the chances..."

Approximately 100%.

That doesn't mean that the slide will absolutely continue. There may be some fresh injection of hype that will push investor confidence back up, but right now the wind is definitely going out of the sails.

The core issue, as the Goldman Sachs report notes, is that AI is currently being valued as a trillion-dollar industry, but it has not remotely demonstrated the ability to solve a trillion-dollar problem.

No one selling AI tools is able to demonstrate with confidence that they can be made reliable enough, or cheap enough, to truly replace the human element, and without that they will only ever be fun curiosities.

And that "cheap enough" part is critical. It is not only that GenAI is deeply unreliable, but also that it costs a truly staggering amount of money to operate (OpenAI are burning something like $10 billion a year). What's the point in replacing an employee you pay $10 an hour to handle customer service issues with a bot that costs $5 for every reply it generates?

[–] maegul@lemmy.ml 0 points 2 months ago

Yea, the "cheaper than droids" line in Andor feels strangely prescient ATM.

[–] kautau@lemmy.world 0 points 2 months ago (1 children)

Yeah, we are on the precipice of a massive bubble about to burst because, like in the dot-com bubble, magic promises are being made by and to people who don't understand the tech, as if it is some magic that will net incredible profits just by pursuing it. LLMs have great applications in specific things, but they are being thrown in every direction to see where they will stick and where the magic payoff will come from.

[–] ArbitraryValue@sh.itjust.works 0 points 2 months ago* (last edited 2 months ago) (5 children)

The important thing here isn't that the AI is worse than humans. It's that the AI is worth comparing to humans. Humans stay the same while software can quickly improve by orders of magnitude.

[–] krashmo@lemmy.world 0 points 2 months ago

Theoretically that's true. Can you tell techbros and the media to shut up about AI until it happens though?

[–] ContrarianTrail@lemm.ee 0 points 2 months ago (2 children)

The AI we have today is the worst it'll ever be. I can only think of two possible scenarios where AI doesn't eventually surpass humans on every single cognitive task:

  1. There's something fundamentally different about a computer made of meat (our brains) that cannot be replicated in silicon. I personally don't see this as very likely, since both are made of matter and matter obeys the laws of physics.

  2. We destroy ourselves before we reach AGI.

Otherwise we'll keep improving our technology and inching forward. It may take 5 years or 50, but it won't stop unless one of the scenarios above is true.

[–] ArbitraryValue@sh.itjust.works 0 points 2 months ago* (last edited 2 months ago) (2 children)

It would be odd if AI somehow got worse. I mean, wouldn't they just revert to a backup?

Anyway, I think (1) is extremely unlikely, but I would add (3): the existing algorithms are fundamentally insufficient for AGI no matter how much they're scaled up. A breakthrough is necessary, and it may not happen for a long time.

I think (3) is true, but I also thought the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out they really did just need to be scaled up...

[–] ContrarianTrail@lemm.ee 0 points 2 months ago (1 children)

It's possible that the path of generative AI and LLMs is a dead end, but that wouldn't be a full stop, only a speed bump. It would only mean it takes longer for us to get there, not that we wouldn't get there.

[–] ArbitraryValue@sh.itjust.works 0 points 2 months ago (2 children)

I don't disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn't get us there, who knows how long it will take to discover one that does?

[–] kautau@lemmy.world 0 points 2 months ago (2 children)

Right, and all the dogs in the race are now focused on neural networks and LLMs, which means that, for now, all the effort could be focused on a dead end. Because of the way capitalism is driving AI research, other avenues of AI research have effectively halted, so the current AI bubble will have to pop before alternative research ramps up again.

[–] Jesus_666@lemmy.world 0 points 2 months ago (1 children)

Like every time there's an AI bubble. And like every time, chances are that in a few years public interest will wane and current generative AI will fade into the background as a technology that everyone uses but nobody cares about, just like machine translation, speech recognition, fuzzy logic, expert systems...

Even when these technologies get better with time (and machine translation certainly got a lot better since the sixties), they fail to recapture their previous levels of excitement and funding.

We currently overcome what popped the last AI bubbles by throwing an absurd amount of resources at the problem. But at some point we'll have to admit that doubling the USA's energy consumption for a year to train the next generation of LLMs, in hopes of actually turning a profit this time, isn't sustainable.

[–] ContrarianTrail@lemm.ee 0 points 2 months ago

The timeline doesn't really matter to me personally. As long as we accept that we'll get there sooner or later, it should motivate us to start thinking about the implications that come with it. Otherwise it's like knowing there's an asteroid hurtling towards Earth but dismissing it by saying: "Eh, it's still 100 years away, there's no rush here."

[–] Wooki@lemmy.world 0 points 2 months ago (1 children)

It would be odd if AI somehow got worse.

No, it's not odd at all; it's the opposite. It is happening, and multiple studies are showing that the decay is caused by feedback entropy.
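"Feedback entropy" presumably refers to what the literature calls model collapse: models trained on their own output gradually lose the diversity of the original data. A toy version of the effect, with a Gaussian repeatedly refit to finite samples of its own output (all parameters arbitrary), looks like this:

```python
import numpy as np

# Toy "model collapse": fit a Gaussian, sample from the fit, refit on
# those samples, and repeat. With the maximum-likelihood variance
# estimate, the expected variance shrinks by (n-1)/n per generation,
# so the fitted model's spread decays -- a crude analogue of models
# trained on model-generated data.

rng = np.random.default_rng(0)
n = 20                       # samples per generation (arbitrary)
mu, sigma = 0.0, 1.0         # generation 0: the "real data"

for generation in range(51):
    if generation % 10 == 0:
        print(f"gen {generation:2d}: sigma = {sigma:.3f}")
    data = rng.normal(mu, sigma, size=n)   # sample from current model
    mu, sigma = data.mean(), data.std()    # refit (MLE) on own output
# sigma drifts toward zero over the generations: each model knows less
# about the spread of the original distribution than the one before.
```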

[–] Ultraviolet@lemmy.world 0 points 2 months ago (1 children)

LLMs are fundamentally a dead end though. If we ever create AGI, it will be a qualitatively different thing from an LLM.

[–] ContrarianTrail@lemm.ee 0 points 2 months ago (1 children)

It's not obvious to me why this is going to be the case with 100% certainty. Even if it's likely true, there's still a chance it might not be.

[–] Wooki@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (2 children)

Zero chance IBM's most-likely-word predictor will become anything more than what it is programmed to be. It is not magic; witches don't exist.

[–] rottingleaf@lemmy.world 0 points 2 months ago

People have been shown deus ex machina in supposedly sci-fi movies and series for many years.

Only there it was always meant as a one-in-a-billion event, as a miracle.

Here a lot of people want to streamline miracles, while not even one has been produced yet.

It's the difference between Tolkien's dwarves and Disney's gnomes.

[–] ContrarianTrail@lemm.ee 0 points 2 months ago

So it is so because you say it's so? Okay. I remain unconvinced.

[–] kromem@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

Meanwhile, here's an excerpt of a response from Claude Opus after I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction, with redactional efforts explained by human difficulty with randomness (an angle that doesn't exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, and even that lacked these specific details), on page 300 of a chat about completely different topics:

Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It's also worth noting that Claude 3 Opus doesn't have the full text of the Gospel of Thomas accessible to it, so it needs to reason through entropic differences primarily from the intertextual overlaps that have been widely discussed in consensus literature and are thus accessible to it.)

[–] jawa21@lemmy.sdf.org 0 points 2 months ago

This reminds me. What happened to that tldr bot? I did appreciate the summaries, even if they weren't perfect.

[–] stoy@lemmy.zip 0 points 2 months ago (1 children)

"Just one more training on a social network"

Can't wait for the bouble to burst.

[–] finitebanjo@lemmy.world 0 points 2 months ago (6 children)

We shouldn't wait; it is already basically illegal to sample the works of others, so we should just pull the plug now.

[–] Glitch@lemmy.dbzer0.com 0 points 2 months ago

Nice to have, though; I would likely skip or half-ass a lot of stuff if I didn't have a tool like AI to do the boring parts. When I can get started on a task really quickly, I don't care what the initial quality is; I'll iterate until it meets my standards.
