The most promising model, Meta’s open source model Llama2-70B, was prompted to summarise the submissions
Llama 2 is insanely outdated and significantly worse than Llama3.1, so this article doesn't mean much.
On July 18, 2023, in partnership with Microsoft, Meta announced Llama 2. On April 18, 2024, Meta released Llama 3.
L2 is one year old. A study like that takes time. What is your point? I bet if they did it with L3 and the result came back similar, you would say L3 is "insanely outdated" as well?
Can you confirm that you think with L3, the result would look completely opposite and the summaries of the AI would always beat the human summaries? Because it sounds like you are implying that.
We know the performance of L2-70B to be on par with L3-8B, just to put the difference in perspective. Surely the models continue to improve, and we can only hope the same improvements will be found in L4, but I think the point is that models have improved dramatically since this study was run, and way more attention has gone into the fine-tuning and alignment phases of training, specifically for these kinds of tasks. Not saying this means the models would beat the human summaries every time (very likely not), but at the very least the disparity between them wouldn't be nearly as large. Ultimately, human summaries will always be "ground truth", so it's hard to see how models will beat humans, but they can get close.
Can you confirm that you think with L3, the result would look completely opposite and the summaries of the AI would always beat the human summaries? Because it sounds like you are implying that.
Lemmy users try not to make a strawman argument (impossible challenge)
No, that's not what I said, and not even close to what I was implying. If Llama 2 scored 47%, then 3.1 would score significantly better, easily over 60% at least. No doubt humans can be better at summarizing, but A) it needs someone who's very familiar with the work and has great English skills, and B) it takes a lot of time and effort.
The claim was never that AI can summarize better than people; it was that it can do it in 10 seconds and the result would be "good enough". People are already doing AI summaries of longer articles without many complaints.
This is pretty much every study right now as things accelerate. Even just six months can be a dramatic difference in capabilities.
For example, Meta's 3-405B has some of the best situational awareness of current models, which isn't present to anywhere near the same degree in 2-70B or even 3-70B.
Just a few more tens of millions of dollars, and it’ll be vastly improved to “pathetic” and “insipid”.
You didn't bother to read the article. Read the article. The study was conducted last year.
I read the article. I'm aware it's an older study. Point still stands.
And yet your claim is still pointless, unlike this study.
My guess is that even if it were better at generic text, most of the texts that really mean something have a lot of context around them which a model will know nothing about, so it won't know what is important to the people working on the topic and what is not.
Well, not every metric. I bet the computers generated them way faster, lol. :P
This is a really valid point, especially because it's not only faster but dramatically cheaper.
The thing is, summaries which are pretty terrible might be costly. If decision makers are relying on these summaries and they're inaccurate, then the consequences might be immeasurable.
Suppose you're considering 2 cars, one is very cheap but on one random day per month it just won't start, the other is 5x the price but will work every day. If you really need the car to get to work, then the one that randomly doesn't start might be worse than no car at all.
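The car analogy can be made concrete with a toy expected-cost calculation. All the numbers below are hypothetical, chosen only to illustrate the shape of the argument: unreliability is itself a cost, and past a point it outweighs a cheaper sticker price.

```python
def monthly_cost(price, lifetime_months, failures_per_month, cost_per_failure):
    """Amortized purchase price plus the expected cost of failure days."""
    return price / lifetime_months + failures_per_month * cost_per_failure

# Cheap car: one random no-start day a month; a missed work day costs $500.
cheap = monthly_cost(price=5_000, lifetime_months=60,
                     failures_per_month=1, cost_per_failure=500)

# Reliable car: 5x the price, but it starts every day.
reliable = monthly_cost(price=25_000, lifetime_months=60,
                        failures_per_month=0, cost_per_failure=500)

print(f"cheap: ${cheap:.2f}/month, reliable: ${reliable:.2f}/month")
```

With these made-up numbers the "cheap" car comes out more expensive per month than the reliable one, which is exactly the point being made about unreliable summaries.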
And for a much much smaller paycheck.
All corporate gives af about.
It might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.
For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.
And you can absolutely trust that tons of executives will definitely not understand this distinction and will use AI even in areas where it's actively harmful.
From my experience that was the case. However, it was with GPT-3, and I am a sample of 1.
LLMs == AGI was and continues to be a massive lie perpetuated by tech companies and investors that people still have not woken up to.
Who is claiming that LLMs are generally intelligent? Is it just "they" or can you actually name a company?
I think the idea is that every company is dumping money into LLMs and no other form of alternative AI development, to the point that all AI research is LLM-based. To investors and those involved, it's effectively the only avenue to AGI, though that's likely not true.
You mean the stuff currently peddled everywhere as "Artificial intelligence"?
Yeah, nobody is saying they are intelligent
AI and AGI are not the same thing.
A chess-playing robot is intelligent, but it's so-called "narrow intelligence" because it's really good at one thing, and that doesn't translate to other things. Humans are generally intelligent because we can perform a wide range of cognitive tasks. There's nothing wrong with calling an LLM an AI, because that's what it is. I'm not aware of a single AI company claiming to possess an AGI system.
In-game NPC actions have been called "AI" for decades. Computers playing chess have been called AI for decades. Lots of stuff has been.
Nobody thought they were genuinely sentient or sapient.
The fact that people lumped LLMs, text-to-image generators, machine learning algorithms, image recognition algorithms, etc into a category and called it "AI" doesn't mean they think it is self aware or intelligent in the way a human would be.
Not a stock market person or anything at all ... but NVIDIA's stock has been oscillating since July and has been falling for about two weeks (see Yahoo Finance).
What are the chances that this is the investors getting cold feet about the AI hype? There were open reports from some major banks/investors about a month or so ago raising questions about the business models (right?). I've seen a business/analysis report on AI that, despite trying to trumpet it, actually contained data on growing uncertainty about its capability from those actually trying to implement, deploy, and use it.
I'd wager that the situation right now is full of tension, with plenty of conflicting opinions from different groups of people, almost none of whom actually know much about generative AI/LLMs, and all with different and competing stakes and interests.
NVIDIA has been having a lot of problems with their 13th/14th gen CPUs degrading. They are also embroiled in a potential antitrust investigation. That, coupled with the "growing pains of generative AI", has caused them a lot of problems, whereas two months ago they were one of the world's most valuable companies.
Some of it is likely the die-off of the AI hype but their problems are farther reaching than the sudden AI boom.
Thanks!
What are the chances that this is the investors getting cold feet about the AI hype?
Investors have proven over and over they’re credulous idiots who understand sweet fuck-all about technology and will throw money at whatever’s in their face. Creepy Sam and the Microshits will trot out some more useless garbage and prize a few more billion out of the market in just a little while.
"What are the chances..."
Approximately 100%.
That doesn't mean that the slide will absolutely continue. There may be some fresh injection of hype that will push investor confidence back up, but right now the wind is definitely going out of the sails.
The core issue, as the Goldman Sachs report notes, is that AI is currently being valued as a trillion-dollar industry, but it has not remotely demonstrated the ability to solve a trillion-dollar problem.
No one selling AI tools is able to demonstrate with confidence that they can be made reliable enough, or cheap enough, to truly replace the human element, and without that they will only ever be fun curiosities.
And that "cheap enough" part is critical. It is not only that GenAI is deeply unreliable, but also that it costs a truly staggering amount of money to operate (OpenAI are burning something like $10 billion a year). What's the point in replacing an employee you pay $10 an hour to handle customer service issues with a bot that costs $5 for every reply it generates?
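The comparison in that comment can be sanity-checked with quick back-of-the-envelope arithmetic. The $10/hour wage and $5-per-reply figures come from the comment above; the tickets-per-hour rate is my own assumption for illustration:

```python
def human_cost_per_ticket(hourly_wage, tickets_per_hour):
    """Cost for a human agent to resolve one customer-service ticket."""
    return hourly_wage / tickets_per_hour

BOT_COST_PER_REPLY = 5.00  # per-reply inference cost cited in the comment

# Assume a $10/hour agent handles 6 tickets an hour (hypothetical rate).
human = human_cost_per_ticket(hourly_wage=10.00, tickets_per_hour=6)

print(f"human: ${human:.2f}/ticket, bot: ${BOT_COST_PER_REPLY:.2f}/reply")
```

Under those assumptions the human costs roughly a third as much per interaction, which is the commenter's point: the bot only wins economically if inference costs fall dramatically.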
Yea, the "cheaper than droids" line in Andor feels strangely prescient ATM.
Yeah, we are on the precipice of a massive bubble about to burst because, like the dot-com bubble, magic promises are being made by and to people who don't understand the tech, as if it is some magic that will net incredible profits just by pursuing it. LLMs have great applications for specific things, but they are being thrown in every direction to see where they will stick and whether the magic payoff will come.
The important thing here isn't that the AI is worse than humans. It's that the AI is worth comparing to humans. Humans stay the same while software can quickly improve by orders of magnitude.
Theoretically that's true. Can you tell techbros and the media to shut up about AI until it happens though?
The AI we have today is the worst it'll ever be. I can only think of two possible scenarios where AI doesn't eventually surpass humans on every single cognitive task:
(1) There's something fundamentally different about computers made of meat (our brains) that cannot be replicated in silicon. I personally don't see this as very likely, since both are made of matter and matter obeys the laws of physics.
(2) We destroy ourselves before we reach AGI.
Otherwise we'll keep improving our technology and inching forward. It may take 5 years or 50, but it won't stop unless one of the scenarios stated above is true.
It would be odd if AI somehow got worse. I mean, wouldn't they just revert to a backup?
Anyway, I think (1) is extremely unlikely but I would add (3) the existing algorithms are fundamentally insufficient for AGI no matter how much they're scaled up. A breakthrough is necessary which may not happen for a long time.
I think (3) is true but I also thought that the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out that they did just need to be scaled up...
It's possible that the way of generative AI and LLMs is a dead end but that wouldn't be a stop, only a speed bump. It would only mean it takes longer for us to get there, not that we wouldn't get there.
I don't disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn't get us there, who knows how long it will take to discover one that does?
Right and all the dogs in the race are now focused on neural networks and llms, which means for now, all the effort could be focused on a dead end. Because of the way capitalism is driving AI research, other avenues of AI research have almost effectively halted, so it will take the current AI bubble to pop before alternative research ramps up again
Like every time there's an AI bubble. And like every time, chances are that in a few years public interest will wane and current generative AI will fade into the background as a technology that everyone uses but nobody cares about, just like machine translation, speech recognition, fuzzy logic, expert systems...
Even when these technologies get better with time (and machine translation certainly got a lot better since the sixties) they fail to recapture their previous levels of excitement and funding.
We currently overcome what popped the last AI bubbles by throwing an absurd amount of resources at the problem. But at some point we'll have to admit that doubling the USA's energy consumption for a year to train the next generation of LLMs in hopes of actually turning a profit this time isn't sustainable.
The timeline doesn't really matter to me personally. As long as we accept the fact that we'll get there sooner or later, it should motivate us to start thinking about the implications that come with it. Otherwise it's like knowing there's an asteroid hurtling towards the earth but dismissing it by saying: "Eh, it's still 100 years away, there's no rush here."
It would be odd if AI somehow got worse.
No, it's not odd at all; it's the opposite. It is happening, and multiple studies are showing that the decay is being caused by feedback entropy.
LLMs are fundamentally a dead end though. If we ever create AGI, it will be a qualitatively different thing from an LLM.
It's not obvious to me as to why this is for 100% certainty going to be the case. Even if it's likely true, there's still a chance it might not be.
Zero chance IBM's most-likely-word predictor will become anything more than what it is programmed to be. It is not magic; witches don't exist.
People have been shown deus ex machina in supposedly sci-fi movies and series for many years.
Only there it was always meant as a one-in-a-billion event, as a miracle.
Here a lot of people want to streamline miracles, when not even one has been produced yet.
It's the difference between Tolkien's dwarves and Disney's gnomes.
So it is so because you say it's so? Okay. I remain unconvinced.
Meanwhile, here's an excerpt of a response from Claude Opus when I tasked it with evaluating intertextuality between the Gospel of Matthew and the Gospel of Thomas from the perspective of entropy reduction, with redactional efforts driven by human difficulty with randomness (an idea that doesn't exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical, lacking these specific details), on page 300 of a chat about completely different topics:
Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It's also worth noting that Claude 3 Opus doesn't have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).
This reminds me. What happened to that tldr bot? I did appreciate the summaries, even if they weren't perfect.
"Just one more training on a social network"
Can't wait for the bubble to burst.
We shouldn't wait; it is already basically illegal to sample the works of others, so we should just pull the plug now.
Nice to have though; I would likely skip or half-ass a lot of stuff if I didn't have a tool like AI to do the boring parts. When I can get started on a task really quickly, I don't care what the initial quality is; I'll iterate until it meets my standards.