Please remove this if it's not allowed.

I see a lot of people here who get mad at AI-generated code, and I'm wondering why. I wrote a couple of Bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire thing by itself. That would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. Now, I could have used any other language I knew, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler for another language. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I also hadn't written anything of this complexity in Bash before, just a bunch of commands on separate lines so I didn't have to type them one after another. This project, however, required quite a few more advanced features. I wasn't motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn't easily find how to pass values into a function and return a value from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch an error from the previous command, how to separate the letters and numbers in a string, etc.
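
For concreteness, these are rough sketches of the kinds of snippets I mean (simplified, with placeholder names and paths, not the exact code from my scripts):

# pass values into a function and "return" a result via stdout
add_suffix() {
  local name="$1" suffix="$2"
  echo "${name}${suffix}"
}
result=$(add_suffix "backup" ".img")

# remove a trailing slash from a directory path
dir="/tmp/demo dir/"
dir="${dir%/}"

# loop over an array
files=("a.img" "b.img" "c.img")
for f in "${files[@]}"; do
  echo "$f"
done

# catch an error from the previous command
if ! mkdir -p "$dir"; then
  echo "could not create $dir" >&2
  exit 1
fi

# separate the letters and the number in a string like "sda1"
disk="sda1"
letters="${disk//[0-9]/}"   # -> sda
number="${disk//[^0-9]/}"   # -> 1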

That is where ChatGPT helped greatly. Whenever I ran into one of these pieces, I would ask ChatGPT to write the code, then test it with various inputs to see if it worked as expected. If not, I would tell it which case failed and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly write fairly advanced Bash. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Thanks to ChatGPT I could just write it all quickly and move on. If I ever want to learn Bash and am motivated to, I will certainly take the time to learn it properly.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

[–] simplymath@lemmy.world 1 points 2 months ago* (last edited 1 month ago) (6 children)

People who used LLMs to write code (incorrectly) perceived their code to be more secure than code written by expert humans.

https://arxiv.org/abs/2211.03622

[–] small44@lemmy.world 0 points 2 months ago (2 children)

Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code. The other group of people who oppose it are those who think it will kill programming jobs.

[–] cm0002@lemmy.world 0 points 2 months ago (1 children)

Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code

Tbf, they've been doing that LONG before AI came along

[–] OpenStars@discuss.online 0 points 2 months ago

There is an enormous difference between:

rm -rf / path/file

vs.

rm -rf /path/file
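
(Relatedly, a rough sketch of how an unquoted or empty variable can produce an accidental "rm -rf /..." in a script, and one way to guard against it; the dangerous lines are commented out on purpose:)

dir=""                      # oops: never got set
# rm -rf $dir/stuff         # unquoted + empty would run "rm -rf /stuff"
# rm -rf "$dir/"*           # empty would make the glob expand against / itself
rm -rf "${dir:?}/stuff"     # ${dir:?} aborts the script if dir is unset or empty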

[–] cy_narrator@discuss.tchncs.de 0 points 2 months ago (1 children)

Also if you are interested, here are those scripts I wrote with chatGPT

https://gitlab.com/cy_narrator/lukshelper

[–] jwelch55@lemmy.world 0 points 2 months ago (2 children)
[–] cy_narrator@discuss.tchncs.de 1 points 2 months ago

Changed it and added a feature to check whether the volume exists before overwriting it

[–] AreaKode@lemmy.world 0 points 2 months ago (5 children)

I've found it to be extremely helpful in coding. Instead of trying to read huge documentation pages, I can just have a chatbot read it and tell me the answer. My coworker has been wanting to learn Powershell. Using a chatbot, his understanding of the language has greatly improved. A chatbot can not only give you the answer, but it can break down how it reached that conclusion. It can be a very useful learning tool.

[–] cyberpunk007@lemmy.ca 0 points 2 months ago (2 children)

I've been using it for CLI syntax and code for a while now. It's not always right but it definitely helps in getting you almost all the way there when it doesn't. I will continue to use it 😁

[–] mp3@lemmy.ca 0 points 2 months ago (1 children)

It's really useful to quickly find the parameters to convert something in a specific way using ffmpeg.
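
(A typical example of the kind of one-liner it saves you digging for; this one is from memory, so double-check the flags before running it:)

ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset slow -c:a aac -b:a 192k output.mp4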

[–] cyberpunk007@lemmy.ca 0 points 2 months ago

Hell yeah it is. So much faster than reading the man pages and stuff

[–] cy_narrator@discuss.tchncs.de 0 points 2 months ago (1 children)

When was it wrong? I'm curious how wrong it was and which AI assistant you asked.

[–] Eldritch@lemmy.world 0 points 2 months ago (1 children)

It's great for regurgitating pre-written text. For generating new or usable code it's largely useless. It doesn't have an actual understanding of what it says. It can recombine information and elements it's seen before, but not generate anything truly unique.

[–] JohnnyCanuck@lemmy.ca 0 points 2 months ago (1 children)

That isn't what the comment you replied to was talking about, which is why you're getting downvoted even though some of what you said is right.

[–] Eldritch@lemmy.world 0 points 2 months ago (1 children)

The first sentence addressed what they talked about. It's great as an assistant to cut through documentation to get at what you need. In fact, here's a recent video from Perry Fractic doing just that with microtext for the C64.

Anything else, like having it generate the code itself, is more of a liability than an asset, since it doesn't really understand what it's doing.

Perhaps I should have separated the two thoughts initially? Either way I've said my piece.

[–] Eczpurt@lemmy.world 0 points 2 months ago

Sounds like it's just another tool in a coding arsenal! As long as you take care to verify things like you did, I can't see why it'd be a bad idea. It's when you blindly trust that things go wrong.

My workplace of 5 employees and 2 owners has embraced it as an additional tool.

We have Copilot inside Visual Studio Professional and it's a great time saver. We have a lot of boilerplate code that it can learn from, and why would I want to waste valuable time writing the same things over and over? If every list page follows the same pattern, then it's boring; we are paid to solve problems, not just write the same things.

We even have an AI-powered tool made by the owner where we can type commands and it will scaffold all our boilerplate. It can also watch the project, and if I update a model it will generate the mutations and queries in C#, set up the GraphQL layer, and then implement some views in React/TypeScript.

[–] HakFoo@lemmy.sdf.org 0 points 2 months ago (1 children)

My objections:

  1. It doesn't adequately indicate "confidence". It could return "foo" or "!foo" just as easily, and if that's one term in a nested structure, you could spend hours chasing it.
  2. So many hallucinations-- inventing methods and fields from nowhere, even in an IDE where they're tagged and searchable.

Instead of writing the code now, you end up having to review and debug it, which is more work IMO.

[–] CarbonatedPastaSauce@lemmy.world 0 points 2 months ago

I stopped using it after the third time it just wholesale made up powershell cmdlets that don’t exist.

Until it has fidelity it’s just a toy.

[–] Bougie_Birdie@lemmy.blahaj.zone 0 points 2 months ago (1 children)

A lot of the criticism comes from AI results being wrong a lot of the time while sounding convincingly correct. In software, things that appear to be correct but are subtly wrong lead to errors that can be difficult to decipher.

Imagine that your AI was trained on StackOverflow results. It learns from the questions as well as the answers, but the questions will often include snippets of code that just don't work.

The workflow of using AI resembles something like the relationship between a junior and senior developer. The junior/AI generates code from a spec/prompt, and then the senior/prompter inspects the code for errors. If we remove the junior from the equation to replace with AI, then entry level developer jobs are slashed, and at the same time people aren't getting the experience required to get to the senior level.

Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

Another argument would be that if I have to take time to review generated code and figure out what might be wrong with it, it might just be quicker and easier to write it correctly myself the first time.

Business often doesn't understand these subtleties. There's a ton of money being shovelled into AI right now. Not only for developing new models, but for marketing AI as a solution to business problems. A greedy executive that's only looking at the bottom line and doesn't understand the solution might be eager to implement AI in order to cut jobs. Everyone suffers when jobs are eliminated this way, and the product rarely improves.

[–] clif@lemmy.world 0 points 2 months ago (1 children)

Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

This really resonated with me and is an excellent point. I'm going to have to remember that one.

[–] vinnymac@lemmy.world 0 points 2 months ago (5 children)

A developer who is afraid of peer review is not a developer at all imo, but more or less an artist who fears exposing how the sausage was made.

I’m not saying a junior who is nervous is not a dev, I’m talking about someone who has been at this for some time, and still can’t handle feedback productively.

[–] PixelProf@lemmy.ca 0 points 2 months ago (1 children)

Lots of good comments here. I think there's many reasons, but AI in general is being quite hated on. It's sad to me - pre-GPT I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here's a few perspectives:

  • Training data is questionable/debatable ethics,
  • Amateur programmers don't build up the same "code muscle memory",
  • It's being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
  • The time saved writing code isn't being used to review and test the code more carefully than it was before,
  • The AI is being used for problem solving, where it's not ideal, as opposed to code-from-spec where it's much better,
  • Non-Local AI is scraping your (often confidential) data,
  • Environmental impact of the use of massive remote LLMs,
  • Can be used (according to execs, anyways) to replace entry level developers,
  • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
  • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what's important and don't see the skills they strengthen along the way to the answer.

I like coding with local LLMs and asking occasional questions of larger ones, but on larger code bases the output of these small, local models is often pretty nonsensical. It improves with the right approach, though: provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs (see the sketch below), and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
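
As a rough sketch of what "test cases in advance" can look like (bash, to match the OP's use case; the function name and the stand-in implementation are just placeholders):

# A stand-in implementation; in practice, this is the part you'd ask the model for.
strip_trailing_slash() {
  local p="${1%/}"
  printf '%s\n' "${p:-/}"   # keep "/" as "/"
}

# Tests written up front, so any generated version can be checked mechanically.
check() {
  local got
  got=$(strip_trailing_slash "$1")
  if [ "$got" = "$2" ]; then
    echo "PASS: '$1' -> '$got'"
  else
    echo "FAIL: '$1' -> '$got' (expected '$2')"
  fi
}

check "/home/user/" "/home/user"
check "/home/user"  "/home/user"
check "/"           "/"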

I think there are a lot of reasons to hate on it, but I think that's because the ways to use it effectively are still being figured out.

Some of my academic colleagues still hate IDEs because tab completion, fast compilers, in-line documentation, and automated code linting (to them) means you don't really need to know anything or follow any good practices, your editor will do it all for you, so you should just use vim or notepad. It'll take time to adopt and adapt.

[–] boatswain@infosec.pub 0 points 2 months ago (1 children)

As a cybersecurity guy, it's things like this study, which said:

Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

[–] eerongal@ttrpg.network 0 points 2 months ago* (last edited 2 months ago) (2 children)

FWIW, at this point, that study would be horribly outdated. It was done in 2022, which means it probably took place in early 2022 or 2021. The models used for coding have come a long way since then; the study would essentially have to be redone on current models to see if that's still the case.

People's perceptions have probably not changed, but whether the code is actually insecure would need to be reassessed.

[–] boatswain@infosec.pub 0 points 2 months ago* (last edited 2 months ago)

Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that's no longer the case, I'll revisit my opinion. Until then, I'm assuming that the study holds true. We can't do security based on "it's probably fine now."

[–] unmagical@lemmy.ml 0 points 2 months ago

It gives a false sense of security to beginner programmers and doesn't offer the more tailored solution that a more practiced programmer might create. This can lead to a reduction in code quality and can introduce bugs and security holes over time. If you don't know the syntax of a language, how do you know it didn't offer you something dangerous? I have Copilot at work, and the only things I actually accept its suggestions for right now are writing log statements and populating argument lists. While those both still require review, they are generally faster than me typing them out. Most of the rest of what it gives me is undesired: it's either too verbose, too hard to read, or just does something else entirely.

[–] NuXCOM_90Percent@lemmy.zip 0 points 2 months ago (1 children)

Lemmy is an outlier where anything "AI" immediately triggers the luddites to scream and rant (and occasionally send threats over PMs...) that it is bad because it is "AI" and so forth. So... massive grain of salt.

Speaking as (for simplicity's sake) a software engineer who wears both a coder and a manager hat?

"AI" is incredibly useful for charlie work. Back in the day you would hire an intern or entry level staff to write your unit tests and documentation and utility functions. But, for well over a decade now, documentation and even many unit tests can be auto-generated by scripts for vim or plugins for an IDE. They aren't necessarily great but... the stuff that Fred in Accounting's son wrote was pretty dogshit too.

What LLMs+RAG do is step that up a few notches. You still aren't going to have them write the critical path code. But you can farm off a LOT more charlie work, to the point where you just need to do the equivalent of reviewing an MR that came from a plugin rather than from a kid who thinks we don't know he reeks of weed.

And... that is good and bad. Good in that it means smaller companies/teams are capable of much bigger projects. And bad because it means a lot fewer entry level jobs to teach people how to code.

So that is the manager/mentor perspective. Let's dig a bit deeper on your example:

I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I also hadn't written anything of this complexity in Bash before, just a bunch of commands on separate lines so I didn't have to type them one after another. This project, however, required quite a few more advanced features. I wasn't motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn't easily find how to pass values into a function and return a value from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch an error from the previous command, how to separate the letters and numbers in a string, etc.

Honestly? That sounds to me like a foundational issue. You had already articulated what you needed, but you wanted an all-in-one guide rather than googling "bash function input example" or "bash function return example" or "strip trailing slash from directory path linux" and so forth. Also, pretty much every time I forget the syntax for a bash for loop and have to google it, I land on a guide that covers every one of those questions except maybe the string processing.

And THAT is the problem with relying on these tools. I know plenty of people who fundamentally can't write documentation because their IDE has always generated (completely worthless) doxygen for them. And it sounds like you don't know how to self-educate on how to solve a problem.

Which is why, generally speaking:

I still prefer to offload the charlie work to newbies because it helps them learn (and it lets me justify their paycheck). And usually what I do is tell them I want to "walk you through our SDLC, it is kind of annoying" as an excuse to watch over their shoulder and make sure they CAN do this by hand. Then... whatever. I don't care if they pass everything through whatever our IT/Cybersecurity departments deem legit.

Which... personally? I generally still prefer "dumb" scripts to generate the boilerplate for myself. And when I do ask chatgpt or a "local" setup: I ask general questions. I don't paste our codebase in. I say "Hey chatgpt, give me an example of setting the number of replicas of a pod based upon specific metrics collected with prometheus". And I adapt that. Partially to make sure I understand what we are adding to our codebase and mostly because I still don't trust those companies with my codebase and prompts. Which... is probably going to mean moving away from VSCode within the next year (yay Copilot) but... yeah.

[–] CarbonatedPastaSauce@lemmy.world 0 points 2 months ago (1 children)

What the hell is Microsoft doing to VS Code? Are they going to REQUIRE Copilot or something? If I have to give it up I'll be sad.

[–] tal@lemmy.today 0 points 2 months ago* (last edited 2 months ago)

I don't think that the current approaches being used by generative AIs are sufficient to reliably produce correct code; I think that they're more-amenable to human-consumable output (and even there, I'm much more enthusiastic about their use for images than text, as things stand). A human needs approximately-correct material to cue their brain; CPUs are more particular.

We'll probably get there, in the same sense that we can ultimately produce human-level AI for anything, but I think that it'll entail higher-level reasoning about a problem, which present generative text approaches don't do.

I did start with an internet search.... I couldn't easily find how to pass values into a function and return a value from it,

So, now, this I have a hard time with.

When I search for "pass value function bash", this is the first page I get, which clearly shows an example:

https://stackoverflow.com/questions/6212219/passing-parameters-to-a-bash-function
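
(The gist of it, roughly: arguments are passed positionally and read as $1, $2, ... inside the function.)

greet() {
  echo "Hello, $1"
}
greet "world"    # prints: Hello, world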

This isn't a case where I'd consider generative AI useful; it's something for which existing material is already readily available via a search.

The other issue with using generative AI for coding is that for taking pre-existing code for common tasks and using it in multiple programs, we already have an approach: use libraries. That way code gets maintained and such, but doesn't need to be reimplemented by humans over-and-over.

Say someone says "I need linked-list code". Okay, I mean, that's a pretty common, plain Jane thing to need.

But if you use a library, and there's a bug in that code, and it gets fixed, then the bugfix propagates when you update to a newer library. If you generate a linked-list implementation, even if you wind up with working linked-list code at the end, then that isn't gonna happen.

[–] moon@lemmy.cafe 0 points 2 months ago

It's a tool just like everything else, but people are just now sobering up, after all the hype, to the fact that it's incredibly wrong a lot of the time.

[–] count_dongulus@lemmy.world 0 points 2 months ago

It doesn't pass judgment. It just knows what "looks" correct. You need a trained person to discern that. It's like describing symptoms to WebMD. If you had a junior doctor using WebMD, how comfortable would you be with their assessment?

[–] TheGrandNagus@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

A lot of people are very reactionary when it comes to LLMs and any of the other "AI" technologies.

For myself, I definitely roll my eyes at some of the "let's shoehorn 'AI' into this!" marketing, and I definitely have reservations about some datasets stealing/profiting from user data, and part of me worries about the other knock-on effects of AI (e.g. recently it was found that some foraging books on Amazon were AI generated and, if followed, would've led to people being poisoned. That's pretty fucking bad).

...but it can also be a great tool, too. My sister is blind, and honestly, AI-assisted screen readers will be a game changer. AI describing images online that haven't been properly tagged for blind people (most of them, btw!) is huge too. This is a thing that is making my little sister's life better in a massive way.

It's been useful for me in terms of translation (Google translate is bad), in terms of making templates that take a lot of the tedious legwork out of programming, effortlessly clearing up some audio clarity issues for some voluntary voice acting "work" I've done for a huge game mod, and for quickly spotting programming or grammar mistakes that a human could easily miss.

I wish people could just have rational, adult discussions about AI tech without it just descending into some kind of almost religious shouting match.

[–] lvxferre@mander.xyz 0 points 2 months ago

[NB: I'm no programmer. I can write a few lines of bash because Linux; I'm just relaying what I've read. I do use those bots, but for something else: translation aid.]

The reasons that I've seen programmers complaining about LLM chatbots are:

  1. concerns that AI will make human programmers obsolete
  2. concerns that AI will reduce the market for human programmers
  3. concerns about the copyright of the AI output
  4. concerns about code quality (e.g. it assumes libraries and functions out of thin air)
  5. concerns about the environmental impact of AI

In my opinion the first one is babble, the third one is complicated, but the other three are sensible.

[–] kibiz0r@midwest.social 0 points 2 months ago (1 children)

Basically this: Flying Too High: AI and Air France Flight 447

Description

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done-for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't - and that the consequences of asking the wrong question are disastrous.

[–] General_Effort@lemmy.world 0 points 2 months ago (1 children)

I know about this crash and don't see the connection. What's the argument?

[–] kibiz0r@midwest.social 0 points 2 months ago

I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.

The theme that I would highlight here though:

More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.

But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.

How will you be ready for that if you don’t get practice along the way?

[–] EncryptKeeper@lemmy.world 0 points 2 months ago (2 children)

If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.

The problem is it’s used often by beginners or people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with it.

If you’re trying to learn to code or code in a new language, would you try to learn from somebody who has only half a clue what he’s doing and will confidently tell you things that are objectively wrong? Thats much worse than just learning to do it properly yourself.

[–] madsen@lemmy.world 0 points 2 months ago

but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler for another language.

Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.
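
(For what it's worth, this is easy to check on any particular box; each line prints the path if the interpreter is present and nothing if it isn't:)

command -v bash
command -v python3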

[–] Grofit@lemmy.world 0 points 2 months ago

One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.

In most real world use cases though you are plugging code into larger code bases with design patterns and paradigms throughout that need to be followed.

An experienced dev can take an isolated code block that does X and refactor it into something that fits the current code base; we already do this daily with Stack Overflow.

An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.

So anyway, I don't see a problem with the tool itself; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than that and can do their job for them.

People are in denial. AI is going to take programmers' jobs away, and programmers perceive AI as a natural enemy and a threat. That is why they want to discredit it in any way possible.

Honestly, I've used chatGPT for a hundred tasks, and it has always resulted in acceptable, good-quality work. I've never (!) encountered chatGPT making a grave or major error in any of the questions that I asked it (physics and material sciences).

[–] socsa@piefed.social 0 points 2 months ago

Because most people on Lemmy have never actually had to write code professionally.

[–] tabular@lemmy.world 0 points 2 months ago (5 children)

If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code and ignoring the software license is largely considered a dick move, even by people who use AI.

Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program - therefore users get the same rights you did]

[–] KairuByte@lemmy.dbzer0.com 0 points 2 months ago

As someone who just delved into a related but unfamiliar language for a small project, it was relatively correct and easy to use.

There were a few times it got itself into a weird “loop” where it insisted on doing things in a ridiculous way, but prior knowledge of programming was enough for me to reword and “suggest” different, simpler, solutions.

Would I ever have gotten to the end of that project without my programming knowledge and those suggestions? Likely, but it would have taken a long time and the code would have been worse off.

The irony is, without help from copilot, I’d have taken at least three times as long.
