This post was submitted on 15 Sep 2024
2 points (100.0% liked)

Technology

[–] RobotToaster@mander.xyz 0 points 2 weeks ago (2 children)
[–] YtA4QCam2A9j7EfTgHrH@infosec.pub 0 points 2 weeks ago (1 children)

This must sound terrible. So high-pitched.

[–] ulkesh@lemmy.world 0 points 2 weeks ago (1 children)

Oh geez…who could have seen this coming?

Oh wait, every single senior developer who is currently railing against their moron AI-bandwagoning CEOs.

[–] Telorand@reddthat.com 0 points 2 weeks ago (11 children)

Wow, the text generator that doesn't actually understand what it's "writing" is making mistakes? Who could have seen that coming?

I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could have caught them. Nobody should trust LLMs with anything related to security, FFS.
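
For flavor, the mistakes were along these lines (reconstructed as an illustration, not the actual output):

```python
# Illustrative only: the same class of first-year-catchable mistakes,
# not the code the LLM actually produced.

def average(numbers=[]):              # mutable default argument
    total = 0
    for i in range(1, len(numbers)):  # off-by-one: silently skips numbers[0]
        total += numbers[i]
    return total / len(numbers)       # ZeroDivisionError on an empty list
```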

[–] blackjam_alex@lemmy.world 0 points 2 weeks ago (3 children)

My experience with ChatGPT goes like this:

  • Me: Write me a block of code that does X.
  • ChatGPT: Certainly, here's your code.
  • Me: This is wrong.
  • ChatGPT: You're right, this is the correct version.
  • Me: This is wrong again.
  • ChatGPT: You're right, this is the correct version.
  • Me: Wrong again, you piece of junk.
  • ChatGPT: I'm sorry, this is the correct version.
  • (even more useless code) ... and so on.
[–] TaintPuncher@lemmy.ml 0 points 2 weeks ago (1 children)

That sums up my experience too, but I have found it good for discussing functions for SQL and PowerShell. Sometimes it’ll throw something into its garbage code and I’ll be like, “what does this do?” It’ll explain how it’s supposed to work; I’ll then work out its correct usage and solve my problem. Weirdly, it’s almost MORE helpful than if it just gave me functional code, because I have to learn how to properly use it rather than just copy/paste what it gives me.

[–] Telorand@reddthat.com 0 points 2 weeks ago

That's true. The mistakes actually make learning possible!

Man, designing a CS curriculum will be easy in the future. Just ask it to do something simple, and ask your CS students to correct the code.

[–] saltesc@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick which one is least wrong.

All the while realising you did this to save time, but at this point coding from scratch would have been faster.

[–] sugar_in_your_tea@sh.itjust.works 0 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

I interviewed someone who used AI (Copilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and the AI spat out the same mistake, which the developer again didn't notice.
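
The algorithm itself doesn't matter; picture something like this classic broken binary search (a made-up stand-in, not the candidate's actual code):

```python
# Hypothetical example of the kind of subtle mistake described above.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo < hi:  # bug: should be lo <= hi, so a one-element
                    # range is never checked and the search misses it
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# binary_search([5], 5) returns -1 instead of 0 -- exactly the sort of
# thing that survives a refactor if nobody reads the code closely.
```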

AI is fine if you know what you're doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you're doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It'll do a decent job with certain prompts (e.g. "generate me a function/class that..."), but you're going to need to go through it line by line and make sure it's actually doing the right thing. I find writing code to be much faster than reading and correcting code, so I don't bother w/ AI, but YMMV.

An area where it's probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so it's handy to be able to say, "find the docs for a function in library X that does..." I know what I want, I just may not remember the name or the module, and I certainly don't remember the argument order.

[–] SketchySeaBeast@lemmy.ca 0 points 2 weeks ago (4 children)

I wish we could say the students will figure it out, but I've had interns ask for help and then I've watched them try to solve problems by repeatedly asking ChatGPT. It's the scariest thing - "Ok, let's try to think about this problem for a moment before we - ok, you're asking ChatGPT to think for a moment. FFS."

[–] USSEthernet@startrek.website 0 points 2 weeks ago (6 children)

Critical thinking is not being taught anymore.

[–] sugar_in_your_tea@sh.itjust.works 0 points 2 weeks ago (10 children)

I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn't recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I've had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.

I absolutely think SW engineering is going to be a great career long-term; I just can't encourage everyone to do it, because the expectations for ability are going to go up as AI gets better. If you're passionate about it, you're going to ignore whatever I say anyway, and you'll succeed. But if my recommendation changes your mind, then you probably aren't passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.

I'm not worried at all about my job or anyone on my team; I'm worried for the next batch of CS grads who ChatGPT'd their way through their degree. "Cs get degrees" isn't going to land you a job anymore; passion for the subject matter will.

[–] skillissuer@discuss.tchncs.de 0 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Nobody should trust LLMs with anything

ftfy

also any inputs are probably scrapped and used for training, and none of these people get GDPR

[–] mox@lemmy.sdf.org 0 points 2 weeks ago* (last edited 2 weeks ago)

also any inputs are probably scraped

ftfy

Let's hope it's the bad outputs that are scrapped. <3

[–] curbstickle@lemmy.dbzer0.com 0 points 2 weeks ago (2 children)

Eh, I'd say mostly.

I have one right now that looks at data and says "Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause."

Which is pretty well within what they are good at, especially if you are doing the training yourself.
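
Conceptually it's doing something like this (a rough sketch with invented names and thresholds, not the actual setup; the real thing is fancier):

```python
# Sketch: flag a metric that deviates from its own history, then report
# which related metrics also deviated at the same point in time.
from statistics import mean, stdev

def related_deviations(metrics, t, threshold=3.0):
    """metrics: dict of name -> list of samples; t: index of the weird event."""
    suspects = []
    for name, series in metrics.items():
        history = series[:t]  # baseline: everything before the event
        if len(history) < 2:
            continue
        sd = stdev(history)
        if sd and abs(series[t] - mean(history)) / sd > threshold:
            suspects.append(name)
    return suspects
```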

[–] jaggedrobotpubes@lemmy.world 0 points 2 weeks ago

AI created 17 Security Corporation™️s in response to this comment.

[–] eager_eagle@lemmy.world 0 points 2 weeks ago (2 children)

as opposed to human-generated code

[–] FaceDeer@fedia.io 0 points 2 weeks ago

But at least that crappy bug-riddled code has soul!

[–] SaharaMaleikuhm@feddit.org 0 points 2 weeks ago (1 children)

But are the shareholders pleased?

[–] _sideffect@lemmy.world 0 points 2 weeks ago

"AI" is just good for simple code snippets. (Which it stole from Github repos).

This whole ai bs needs to die already, and the people who lie about it held accountable.

[–] melroy@kbin.melroy.org 0 points 2 weeks ago (5 children)

No sh*t, this is what I predicted from day one.

[–] Eheran@lemmy.world 0 points 2 weeks ago (3 children)

So you predicted that security flaws in software are not going to vanish with AI?

[–] dinckelman@lemmy.world 0 points 2 weeks ago

I have a lot of empathy for a lot of people, even ones who really don't deserve it. But when it comes to people like these, I have absolutely none. If you make a chatbot do your corporate security, your company deserves to burn to the ground.

[–] fluxion@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

Debugging and maintenance were always the hardest aspects of large code bases... writing the code is the easy part. Offloading that part to AI only makes the hard stuff harder.

[–] SuperFola@programming.dev 0 points 2 weeks ago (2 children)

How come the hallucinating ghost in the machine is generating code so bad the production servers hallucinate even harder and crash?

[–] henfredemars@infosec.pub 0 points 2 weeks ago (3 children)

I’m not sure how AI is supposed to understand code. Most of the code out there is garbage. Even most of the working code out there in the world today is garbage.

[–] SuperFola@programming.dev 0 points 2 weeks ago (1 children)

Heck, I sometimes can’t understand my own code. And this AI thing tries to tell me I should move this code over there and do this and that and then poof it doesn’t compile anymore. The thing is even more clueless than me.

[–] elvith@feddit.org 0 points 2 weeks ago (1 children)

Randomly rearranging non-working code one doesn’t understand… sometimes gets working code, sometimes doesn’t fix the bug, sometimes it won’t even compile anymore? Has no clue what the problem is and only solves it randomly, by accident?

Sounds like the LLM is as capable as me /s

[–] henfredemars@infosec.pub 0 points 2 weeks ago (6 children)

Sometimes you even get newer and more interesting bugs!


Can confirm. At our company, we have a tech debt budget, which is really awesome since we can fix the worst of the problems. However, we generate tech debt faster than we can fix it. Adding AI to the mix would just generate tech debt even faster, because instead of senior devs reviewing junior dev code, we'd have junior devs reviewing AI code...

[–] Telorand@reddthat.com 0 points 2 weeks ago (1 children)

You have to be hallucinating to understand.

[–] Drunemeton@lemmy.world 0 points 2 weeks ago (1 children)

I’ve licked the frog twice! How many does it take?

[–] Telorand@reddthat.com 0 points 2 weeks ago (1 children)

A-one. A-two-hoo. A-three... *Crunch*

[–] MelodiousFunk@slrpnk.net 0 points 2 weeks ago

I take it that frog hadn't been de-boned.

[–] henfredemars@infosec.pub 0 points 2 weeks ago* (last edited 2 weeks ago)

AI can be a useful tool, but it’s not a substitute for actual expertise. More reviews might patch over the problem, but at the end of the day, you need a competent software developer who understands the business case, risk profile, and concrete needs to take responsibility for the code if that code is actually important.

AI is not particularly good at coding, and it’s not particularly good at the human side of engineering either. AI is cheap. It’s the outsourcing problem all over again, but with the extra step of an algorithm hiding the indirection between the expertise you need and the product you’re selling.

[–] ShittyBeatlesFCPres@lemmy.world 0 points 2 weeks ago (11 children)

If I were still in a senior dev position, I’d ban AI code assistants for anyone with less than around 10 years of experience. It’s a time saver if you can read code almost as fluently as you can read your own native language, but even besides the AI code introducing bugs, it’s often not the most efficient way. It’s only useful if you can tell that at a glance and reject its suggestions as much as you accept them.

Which, honestly, is how I was when I was first starting out as a developer. I thought I was hot shit and contributing, when really I was taking half a day to do tasks an experienced developer could do in minutes. Generative AI is a new developer: irrationally confident, not actually saving time, and rarely doing things the best way.

[–] GetOffMyLan@programming.dev 0 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I've found they're great as a learning tool where decent docs are available, or as interactive docs you can ask follow-up questions.

We mostly use C#, and it's amazing at digging into the MS docs to pull out useful things from the BCL or common patterns.

Our new juniors got up to speed so fast by asking it to explain stuff in the existing codebases, which in turn takes pressure off more senior staff.

I got productive in Vue.js in a large codebase in a couple of days that way.

Using it to generate actual code is insanely shit, haha. It's very similar to just copy-pasting code and hacking it in without understanding it.

[–] ShittyBeatlesFCPres@lemmy.world 0 points 2 weeks ago

You make a good point about using it for documentation and learning. That’s a pretty good use case. I just wouldn’t want young developers to use it for code completion any more than I’d want college sophomores to use it for writing essays. Professors don’t have you write essays because they like reading essays. Sometimes, doing a task manually is the point of the assignment.

[–] Windex007@lemmy.world 0 points 2 weeks ago

Even worse than it being wrong is that, by the nature of the tool, it looks right.

[–] BrianTheeBiscuiteer@lemmy.world 0 points 2 weeks ago (1 children)

The thing I dislike most about code-assist tools is that they're geared toward answering your questions instead of giving advice. I'm sure they also give bad recommendations, but I've seen LLMs basically double down on bad code.

[–] squid_slime@lemm.ee 0 points 2 weeks ago (1 children)

and here's me learning the C programming language from a self-hosted AI :/

[–] JordanZ@lemmy.world 0 points 2 weeks ago (8 children)

Except it’s a computer writing the code that somebody probably ran once, said ‘looks good’ for their ‘happy path’, and committed. So it’s inevitably full of weird edge-case bugs… have fun.
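
The classic shape of it (an invented example, not from the article):

```python
# Works for the one input somebody tried ("John Smith"), so it ships.
def initials(full_name):
    first, last = full_name.split(" ")  # blows up on "Madonna",
    return first[0] + last[0]           # "Mary Jane Watson", or ""
```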

[–] Ilandar@aussie.zone 0 points 2 weeks ago (8 children)

The point of the article isn't that AI is outright useless as a coding tool but that it lulls programmers into a false sense of security regarding the quality and security of their code. They aren't reviewing their work as frequently because of this new reliance on AI as a time saver, and as such are more likely to miss any mistakes that they or the AJ made.

[–] 9point6@lemmy.world 0 points 2 weeks ago

Now now, AJ may not know everything, but he'll learn

[–] fuzzy_feeling@programming.dev 0 points 2 weeks ago

ahahahaha...
