this post was submitted on 20 Jun 2024

Technology


How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

top 50 comments
[–] droopy4096@lemmy.ca 0 points 5 months ago

Best AI rant hands-down. I can agree with every word there.

[–] Roflmasterbigpimp@lemmy.world 0 points 5 months ago (3 children)

TL;DR: AI bad, I'm smart.

Why is this on here?

[–] FaceDeer@fedia.io 0 points 5 months ago

It echoes the chamber's preferred echo.

[–] AIhasUse@lemmy.world 0 points 5 months ago

I don't know how much stock to put in this author. They can't even read the chart that they shared: they saw that 8% didn't get use from gen AI and assumed that 92% did, but there's also 7% that haven't tried using it yet, so at most 85% could have. Ironically, pretty much any LLM with vision would have done a better job of comprehending the chart than this author did.

[–] jaaake@lemmy.world 0 points 5 months ago (1 children)

After reading that entire post, I wish I had used AI to summarize it.

I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on.

This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.

[–] AIhasUse@lemmy.world 0 points 5 months ago

Yeah, this paper is wasted time. It is hilarious that they think 3 years is a long time as a data scientist and that this somehow gives them such wisdom. Then, they can't even accurately extract the data from the chart that they posted in the article. On top of all this, like you pointed out, they can't even keep a clear narrative, and they blatantly contradict themselves on their main point. They want to pile-drive people who come to the same conclusion as they do. What a strange take.

[–] madsen@lemmy.world 0 points 5 months ago (1 children)

This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

[–] AIhasUse@lemmy.world 0 points 5 months ago (3 children)

It blatantly contradicts itself. I would wager good money that you read the headline and didn't go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.

[–] Alphane_Moon@lemmy.world 0 points 5 months ago* (last edited 5 months ago)

I don't think it's supposed to have a cohesive narrative structure (at least in the context of a structured, more formal critique). I read the whole thing, and it's more like a long shitpost with a lot of snark.

[–] AIhasUse@lemmy.world 0 points 5 months ago (9 children)

There is literally not a chance that anyone downvoting this actually read it. It's just a bunch of idiots that read the title, liked the idea that LLMs suck, and so downvoted. This paper is absolute nonsense that doesn't even attempt to make a point. I seriously think it is probably AI generated and just taking the piss out of idiots that love anything they think is anti-AI, whatever that means.

[–] decivex@yiffit.net 0 points 5 months ago

It's not a paper, it's a stream-of-consciousness style blog post.

[–] Feathercrown@lemmy.world 0 points 5 months ago

I hate anti-ai mania as much as the next person but the post is funny and it does have a point.

[–] deweydecibel@lemmy.world 0 points 5 months ago (1 children)

Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I'm going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you're an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

This aspect of it isn't getting talked about enough. These companies are presenting these things as fully-formed AI while completely neglecting the people behind the scenes constantly cleaning it up so it doesn't devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there are actual people working round the clock pruning and curating it.

You know, humans, with actual human intelligence, without which these miraculous "artificial intelligence" tools would not work as they seem to.

[–] 0x0@programming.dev 0 points 5 months ago

I don't think the author was referring to people pruning AI data but rather to mechanical turk instances like recently happened with Amazon.

[–] IHeartBadCode@kbin.run 0 points 5 months ago (6 children)

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are good at incredibly basic boilerplate that's just a hint outside of, say, a wizard. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[–] AIhasUse@lemmy.world 0 points 5 months ago (6 children)

Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of "if the simplest tools from over a year ago aren't very good, then there's no way there are any good tools now."

It's like seeing the original Planet of the Apes and then arguing about how realistic the apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic apes to be the reality, and people who only saw the original, but you'll do nothing for anyone who actually saw the new movies.

[–] foenix@lemm.ee 0 points 5 months ago

I've used crewai and autogen in production... And I still agree with the person you're replying to.

The two main problems with agentic approaches I've discovered thus far:

  • One mistake or hallucination will propagate to the rest of the agentic task. I've even tried adding a QA agent for this purpose, but those agents aren't reliable either, which also leads to the main issue:

  • It's very expensive to run and rerun agents at scale. Each agent being able to call another agent means that you can end up with an exponentially growing number of calls. My colleague at one point ran a job that cost $15 for what could have been a simple task.
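That second point is easy to see with a back-of-the-envelope cost model (my own illustrative sketch; the branching factor, depth, and per-call price here are made-up numbers, not anything from the comment):

```python
# Back-of-the-envelope cost model for nested agent calls.
# Assumes each agent may delegate to `branching` sub-agents,
# recursively, down to `depth` levels.

def total_calls(branching: int, depth: int) -> int:
    """Total LLM calls: the geometric series 1 + b + b^2 + ... + b^depth."""
    return sum(branching ** level for level in range(depth + 1))

def estimated_cost(branching: int, depth: int, cost_per_call: float) -> float:
    """Dollar estimate assuming a flat price per call."""
    return total_calls(branching, depth) * cost_per_call

# e.g. 3 sub-agents per agent, 4 levels of delegation:
# 1 + 3 + 9 + 27 + 81 = 121 calls before the task even finishes once,
# and every retry multiplies that again.
print(total_calls(3, 4))                  # -> 121
print(estimated_cost(3, 4, 0.05))         # -> 6.05
```

The point of the sketch is only the shape of the growth: costs scale geometrically with delegation depth, so a "simple task" can quietly fan out into hundreds of calls.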

One last consideration: the current LLM providers are very aware of these issues, or they wouldn't be as concerned with finding "clean" data to scrape from the web versus using agents to train agents.

If you're using crewai, btw, be aware there is some built-in telemetry in the library. I have a wrapper that removes that telemetry if you're interested in the code.

Personally, I'm kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

[–] Zikeji@programming.dev 0 points 5 months ago (2 children)

Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can but has no understanding of how to actually code; they're just good at mimicry.

So it's helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it's not going to do the job itself.

[–] deweydecibel@lemmy.world 0 points 5 months ago* (last edited 5 months ago)

So it's helpful for saving time typing some stuff

Legitimately, this is the only use I've found for it. If I need something extremely simple and I'm feeling too lazy to type it all out, it'll do the bulk of it, and then I just go through and edit out all the little mistakes.

And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they're leaving out the part where they have to edit the output too.

At the end of the day, we've had this technology for a while; it's just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer it in the right direction. Now it's just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

[–] 0x0@programming.dev 0 points 5 months ago (2 children)

I use them like Wikipedia: a good starting point and that's it (and even that comparison is a disservice to Wikipedia).

[–] SandbagTiara2816@lemmy.dbzer0.com 0 points 5 months ago (2 children)

Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries

[–] ripcord@lemmy.world 0 points 5 months ago (1 children)

Man, I need to build some new shit.

I can't remember the last time I looked at a blank page.

[–] grrgyle@slrpnk.net 0 points 5 months ago

I agree with your parenthetical, but Wikipedia actually agrees with your main point: Wikipedia itself is not a source of truth.

[–] grrgyle@slrpnk.net 0 points 5 months ago

I think we all had that first moment where Copilot generated a good snippet and we were blown away. But having used it for a while now, I find most of what it suggests feels like a joke.

Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.

I've definitely seen a lot more impressively voluminous, yet flawed pull requests, since my employer started pushing for everyone to use it.

I foresee a real reckoning of unmaintainable codebases in a couple years.

[–] Shadywack@lemmy.world 0 points 5 months ago

Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this, it is a grift, get over it.

[–] sugar_in_your_tea@sh.itjust.works 0 points 5 months ago (14 children)

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic: something like randomly generating two points and finding the distance between them, and we had given them the details (e.g. that distance means a straight line). They used AI, which went well until it generated the Manhattan distance instead of applying the Pythagorean theorem. They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code, used AI again, made the same mistake, didn't catch it, and we ended up pointing it out again.
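For anyone who wants to see how small the gap between the two actually is, here's a hypothetical reconstruction of the exercise (my own sketch, not the candidate's or interviewer's actual code):

```python
import math
import random

def euclidean(p: tuple, q: tuple) -> float:
    """Straight-line (Pythagorean) distance — what the problem asked for."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def manhattan(p: tuple, q: tuple) -> float:
    """Grid distance — what the AI kept generating instead."""
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

# Randomly generate two points, as in the exercise.
a = (random.uniform(0, 10), random.uniform(0, 10))
b = (random.uniform(0, 10), random.uniform(0, 10))

# A fixed pair makes the difference obvious:
print(euclidean((0, 0), (3, 4)))  # -> 5.0 (the 3-4-5 triangle)
print(manhattan((0, 0), (3, 4)))  # -> 7
```

The two formulas agree only when the points share an axis, so a unit test with a known pair like `(0, 0)` and `(3, 4)` would have caught the mistake immediately.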

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's my opinion about AI in general: it's probably making you stupider.

[–] deweydecibel@lemmy.world 0 points 5 months ago* (last edited 5 months ago) (3 children)

I've seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don't need to.

And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn't making them dumber. They were already dumb. What the AI does is make code they don't understand more accessible, which is to say, it's just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

[–] AdamBomb@lemmy.sdf.org 0 points 5 months ago

Spot on description

[–] Spesknight@lemmy.world 0 points 5 months ago (2 children)

I don't fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

[–] bionicjoey@lemmy.ca 0 points 5 months ago (1 children)

I know AI can't replace me. But my boss's boss's boss doesn't know that.

[–] sugar_in_your_tea@sh.itjust.works 0 points 5 months ago (2 children)

Fortunately, it's my job as your boss to convince my boss and my boss's boss that AI can't replace you.

We had a candidate spectacularly fail an interview when they used AI and didn't catch the incredibly obvious errors it made. I keep a few examples of that handy to defend my peeps in case my boss or boss's boss decide AI is the way to go.

I hope your actual boss would do that for you.

[–] Kaput@lemmy.world 0 points 5 months ago (1 children)

They'll replace you first, so they can replace your employees... even though you are clearly right.

Yup. But at least I tried, and I'll have some decent stories for the soup kitchen.

[–] bionicjoey@lemmy.ca 0 points 5 months ago (1 children)

My boss is a non-technical manager.

[–] sugar_in_your_tea@sh.itjust.works 0 points 5 months ago (1 children)

I'm so sorry.

My boss asked if I wanted to be a manager, and I said no, but I'll take the position if offered so it doesn't go to a non-technical person. I wish that was more common elsewhere.

Good luck sir or madame.

[–] bionicjoey@lemmy.ca 0 points 5 months ago* (last edited 5 months ago) (1 children)

Well, my office recently announced that we'll be going from 0 days mandatory in office to 3 days a week. After working fully remote for the last few years, I'll kms before going back, so I'm on the way out anyway.

[–] sugar_in_your_tea@sh.itjust.works 0 points 5 months ago (1 children)

That sucks. We do two days in office, but that was also always the agreement; we were just temporarily remote during COVID (though almost all of us were hired during COVID). My boss tried three days in office due to company policy, but we hated it and went back to two.

I cannot stand orgs going back on their word without agreement from the team. I hope you find someplace better.

[–] bionicjoey@lemmy.ca 0 points 5 months ago

Thanks, I'm sure I'll land on my feet. I have a pretty unique skillset for IT (Science HPC admin) and I'm thinking about maybe going back to school and doing a Master's.

[–] Shadowcrawler@discuss.tchncs.de 0 points 5 months ago* (last edited 5 months ago) (3 children)

The Author's Frustration with the Overhyped Use of AI in Businesses

• The author, a former data scientist, expresses frustration with the excessive hype surrounding AI and its implementation in businesses.

• They argue that most companies lack the expertise and infrastructure to effectively utilize AI and should focus on addressing fundamental issues like testing database backups and developing basic applications.

• The author criticizes the lack of genuine understanding and competence among many individuals promoting AI initiatives, leading to a culture of grifters and incompetents.

• They emphasize the importance of solving basic operational and cultural problems before attempting to implement complex technologies like AI.

• The author warns against the盲adoption of AI without a clear understanding of its benefits and feasibility, likening it to a recipe for disaster.

https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

Yes, I'm fully aware of the irony that I used AI for this summary

[–] foenix@lemm.ee 0 points 5 months ago (1 children)

And you didn't even proofread the output...

[–] Shadowcrawler@discuss.tchncs.de 0 points 5 months ago (1 children)

Which kind of proves that the author has some valid points...

[–] Rumbelows@lemmy.world 0 points 5 months ago (2 children)

I feel like some people in this thread are overlooking the tongue-in-cheek nature of this humour post and taking it weirdly personally.

[–] amio@kbin.run 0 points 5 months ago

Even for the internet, this place is truly extremely fond of doing that.

[–] Eccitaze@yiffit.net 0 points 5 months ago

Yeah, that's what happens when the LLM they use to summarize these articles strips all nuance and comedy.

[–] rimu@piefed.social 0 points 5 months ago

I will learn enough judo to throw you into the sun

best line

[–] Shadywack@lemmy.world 0 points 5 months ago (2 children)

It uses satire to convey a truth that some already understand implicitly, some don't want to acknowledge, and some refuse outright, even though, when you think about it, we've always known how true it is. It's tongue-in-cheek, but that's necessary to convince all these AI-washing fuckheads what a gimmick it is to make sweeping statements about a chatbot that still can't spell lollipop backwards.
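For what it's worth, the task these models get mocked for is a one-liner in actual code (a trivial aside of mine, not from the post; models struggle with it because they see tokens, not individual characters):

```python
# Reversing a string is trivial for a program, character by character,
# which is exactly why LLM failures at it are such an easy punchline.
word = "lollipop"
print(word[::-1])  # -> "popillol"
```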
