this post was submitted on 22 Aug 2024
Technology
you are viewing a single comment's thread
Not really, it's doable with ChatGPT right now for programs that have a relatively small scope. If you set very clear requirements and decompose the problem well, it can generate fairly high-quality solutions.
Right now, not a chance. It's okay-ish at simple scripts, and it's all right as an assistant for getting a buggy draft of anything even vaguely complex.
AI doing any actual programming is a long way off.
This is incorrect. And I'm in the industry, in this specific field. Nobody in my industry, in my field, at my level seriously considers this effective enough to replace their day-to-day coding beyond generating some boilerplate ELT/ETL-type scripts, which it is only semi-effective at. It still contains multiple errors nine times out of ten.
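For context, the "boilerplate ELT/ETL-type script" being described is roughly this shape. A minimal hypothetical sketch: the CSV-to-SQLite flow, the `payments` table, and the cleanup rules are illustrative assumptions, not the commenter's actual pipeline:

```python
import csv
import sqlite3

def etl(csv_path: str, db_path: str) -> int:
    """Extract rows from a CSV, apply trivial cleanup, load into SQLite.

    Hypothetical example of the boilerplate shape LLMs are said to be
    semi-effective at generating. Returns the number of rows loaded.
    """
    # Extract: read all rows as dicts keyed by the header row.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: normalize names, parse amounts, drop rows with no amount.
    cleaned = [
        (r["name"].strip().title(), float(r["amount"]))
        for r in rows
        if r.get("amount")
    ]

    # Load: create the target table if needed and bulk-insert.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS payments (name TEXT, amount REAL)")
    con.executemany("INSERT INTO payments VALUES (?, ?)", cleaned)
    con.commit()
    con.close()
    return len(cleaned)
```

The point in the comment stands either way: this kind of glue code is mechanical enough that generated drafts are plausible, but each one still needs review.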
I cannot be more clear: the people claiming this is possible are not tenured or effective coders, much less 10x devs in any capacity.
People who think it generates code of high enough quality to be effective are hobbyists: people who dabble with coding and understand some rudimentary coding patterns and practices, but who are not career devs, or not serious ones.
If you don't know what you're doing, LLMs can get you close, some of the time. But there's no way it generates code anywhere near good enough for me to use without the effort of rewriting, simplifying, and verifying it.
Why would I want to voluntarily spend my day trying to decipher someone else's code? I don't need ChatGPT to solve a coding problem. I can do it, and I will. My code will always be more readable to me than someone else's, and that's true by orders of magnitude for AI code gen today.
So I don't consider anyone who regards LLM code gen as a viable path forward to be a serious person in the engineering field.
It's just a tool like any other. An experienced developer knows you can't apply every tool to every situation, just like you should know the difference between threads and coroutines and when to apply each, or which design pattern is relevant to a given situation. It's a tool, and a useful one if you know how to use it.
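To illustrate the threads-vs-coroutines point: here is a sketch of the same fan-out-and-wait expressed both ways in Python. The `sleep` calls are stand-ins for real I/O (blocking in the threaded version, awaitable in the coroutine version); function names are illustrative:

```python
import asyncio
import threading
import time

# Threads: preemptive, OS-scheduled. Suited to blocking calls you can't rewrite.
def blocking_task(results: list, i: int) -> None:
    time.sleep(0.01)            # stands in for a blocking I/O call
    results.append(i)

def run_threaded(n: int) -> list:
    results = []
    threads = [threading.Thread(target=blocking_task, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)      # completion order is nondeterministic

# Coroutines: cooperative, single-threaded. Suited to many awaitable I/O waits.
async def async_task(i: int) -> int:
    await asyncio.sleep(0.01)   # stands in for awaitable I/O
    return i

def run_async(n: int) -> list:
    async def main():
        return list(await asyncio.gather(*(async_task(i) for i in range(n))))
    return sorted(asyncio.run(main()))
```

Both produce the same result; the choice is about the shape of the workload, which is exactly the kind of judgment call the comment is describing.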
This is like using a tambourine made of optical discs as a storage solution. A bit better, actually, because punctured discs are no good.
Have you heard that a full description of what a program does is the program itself? (Except for UB, libraries, ..., but an LLM is no better than a human at those either.)