
How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't manage this consistently with CRUD apps and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

[–] IHeartBadCode@kbin.run 0 points 3 months ago (36 children)

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are good at incredibly basic boilerplate, the kind that's just a step beyond what a code wizard would spit out. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[–] sugar_in_your_tea@sh.itjust.works 0 points 3 months ago (17 children)

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic: something like randomly generate two points and find the distance between them, and we had given them the details (e.g. distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Euclidean distance from the Pythagorean theorem. They didn't catch the mistake, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code, used AI again, got the same mistake, didn't catch it, and we ended up pointing it out again.
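(For reference, a minimal sketch of the two distances; the names and coordinate range here are mine, not the interview's. The straight-line answer is the Euclidean distance from the Pythagorean theorem; Manhattan distance, the AI's repeated mistake, just sums the absolute differences.)

```python
import math
import random

def random_point(lo=0.0, hi=100.0):
    """Generate a random 2D point with coordinates in [lo, hi]."""
    return (random.uniform(lo, hi), random.uniform(lo, hi))

def euclidean_distance(a, b):
    """Straight-line distance: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def manhattan_distance(a, b):
    """Grid distance: |x2 - x1| + |y2 - y1|. Not what the problem asked for."""
    return abs(b[0] - a[0]) + abs(b[1] - a[1])

p, q = random_point(), random_point()
print(f"euclidean: {euclidean_distance(p, q):.2f}")
print(f"manhattan: {manhattan_distance(p, q):.2f}")
```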

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's generally my opinion of AI: it's probably making you stupider.

[–] IHeartBadCode@kbin.run 0 points 3 months ago (2 children)

Similar story: I had a junior dev put in a PR for SQL that takes lat/long pairs and gives back a distance. The query used the Haversine formula, but with the kilometers coefficient rather than the one for miles.

I asked where they got it and they said AI. I sighed, pointed out why it was wrong, and reminded them that we have PostGIS, which has scalar functions that will do the calculation way faster; they should use those.
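To make the bug concrete, here's a rough Python stand-in for what the SQL was doing (the actual PR was SQL; the names here are mine). The Haversine formula itself is identical for miles and kilometers; the only difference is the Earth-radius coefficient, which is exactly the kind of constant that's easy to get silently wrong.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius in kilometers -- what the PR used
EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles -- what it should have used

def haversine(lat1, lon1, lat2, lon2, radius=EARTH_RADIUS_MI):
    """Great-circle distance between two lat/long points, in the units of `radius`."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

print(haversine(40.7128, -74.0060, 34.0522, -118.2437))  # NYC to LA, roughly 2,450 miles
```

In PostGIS you'd skip all of this: `ST_DistanceSphere`, or `ST_Distance` on `geography` values, returns the distance in meters, computed far faster than hand-rolled SQL.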

There's a clear over-reliance on code generation. That said, it's pretty good for things that I can eye-scan and verify are what I would have typed anyway. But I've found it suggesting everything from things I wouldn't remotely permit to things that are "sort of" correct. In the latter case I'll let it autocomplete and go back and clean it up. But yeah, anyone blindly trusting AI shouldn't be allowed to make final commits.

[–] sugar_in_your_tea@sh.itjust.works 0 points 3 months ago (1 children)

I just don't bother, under the assumption that I'll spend more time correcting the mistakes than actually writing the code myself. Maybe that's faulty, as I haven't tried it myself (mostly because it's hard to turn on in my editor, vim).

[–] IHeartBadCode@kbin.run 0 points 3 months ago (1 children)

Maybe that's faulty, as I haven't tried it myself

Nah, perfectly fine take. To each their own, I say. Where the tools are right now, not bothering with them is completely fine; you aren't missing all that much, really. At the end of the day it might have saved me ten to fifteen minutes here and there. Nothing that's a tectonic shift in productivity.

[–] sugar_in_your_tea@sh.itjust.works 0 points 3 months ago (1 children)

Yeah, most of my dev time is spent reading, and I'm a pretty fast typist, so I never bothered.

Maybe I'll try it eventually. But my boss isn't a fan anyway, so I'm in no hurry.

[–] SkyeStarfall@lemmy.blahaj.zone 0 points 3 months ago

On the reading side, it can be useful for explaining concepts you're unsure about, though you should always verify that information.

It has helped me understand certain concepts in the past where I struggled to find good explanations with a search engine.

[–] manicdave@feddit.uk 0 points 3 months ago

it's pretty good for things that I can eye-scan and verify are what I would have typed anyway. But I've found it suggesting everything from things I wouldn't remotely permit to things that are "sort of" correct.

Yeah. I haven't bothered with it much but the best use I can see of it is just rubber ducking.

The last time I used it was to ask how to change the contrast of a numpy image. It said to multiply each channel by the contrast factor. I don't even think that's right; it should be ((original_value - 128) * contrast) + 128, not original_value * contrast as it suggested. But it did remind me that I can just run operations on the colour channels.
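A quick numpy sketch of the corrected version (assuming an 8-bit image, so mid-grey is 128): scaling around the midpoint changes contrast without also shifting brightness, which is what plain multiplication does.

```python
import numpy as np

def adjust_contrast(img, contrast):
    """Scale pixel values about mid-grey (128) instead of about zero.

    img: uint8 array, shape (H, W) or (H, W, C).
    contrast: >1 increases contrast, <1 decreases it.
    """
    out = (img.astype(np.float32) - 128.0) * contrast + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)
```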

Wait, what's my point again? Oh yeah: don't trust anyone who can't tell you what the output is supposed to do.
