this post was submitted on 29 Jan 2024
262 points (100.0% liked)
Technology
37739 readers
500 users here now
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
Subcommunities on Beehaw:
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
founded 2 years ago
Yeah. This is something that, to me, isn't getting enough attention in the whole conversation. I'm trying to get myself up to speed on how to code effectively with AI tools, but I feel like doing that well requires understanding the code at a deep level.
In the future, I think the learning that gives you that kind of knowledge won't be something people are forced to go through anymore, because AI can do the simple stuff for them. The inevitable result is that very few people will be able to do more than rely on the AI tools to either get it right or not, because they won't understand the underlying systems. I'm honestly not sure what future is in store a couple of generations from now, other than most people being forced to trust the AI (whatever its capabilities or incapabilities are at that point). That doesn't sound like a good scenario.
The future is already here. This will sound like some old man yelling at clouds, but the tools available for advanced structural design (automatic environmental loading, finite element modeling) are used by young engineers as magical black boxes which spit out answers. That's little different than 30 years ago, when the generation before me would complain that calculators, unlike slide rules, were so disconnected from the problem that you could put in two numbers, hit the wrong operation, and get a nonsensical answer, but believe it to be correct because the calculator told you so.
This evolution is no different; it's just that the process of design (whether programming or structures or medical evaluation) will be further along before someone realizes that everything being offered is utter shit. I'm actually excited about the prospect of AI/ML, but it still needs to be handled like a tool. Modern machinery can do amazing things faster, and with higher precision, than hand tools - but when things go sideways it can also destroy things much quicker and with far greater damage.
My turn.
Almost 30 years ago, in sunny Spain, a friend of mine was studying to become an Electrical Engineer. Among the things he told me would be under his responsibility was approving the plans for industrial buildings. "So your curriculum includes some architecture?", I asked. "No need", he responded, "you just put the numbers into a program and it spits out all that's needed".
Fast forward to 2006, when an industrial hall in Poland, built by a Spanish company and turned into a disco, collapsed under the weight of snow on its roof, killing 65 people.
Wonder if someone forgot to check the "it snows in winter" option... 🙄
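For anyone curious what that checkbox roughly stands for: a toy sketch of the simplified roof snow load formula from Eurocode EN 1991-1-3, s = μ_i · C_e · C_t · s_k. The function name and the site values are made up for illustration; a real design check uses national annex data and a lot more besides:

```python
def roof_snow_load(mu_i: float, c_e: float, c_t: float, s_k: float) -> float:
    """Characteristic snow load on a roof, kN/m^2, per the simplified
    Eurocode EN 1991-1-3 expression: s = mu_i * C_e * C_t * s_k."""
    return mu_i * c_e * c_t * s_k

# Flat roof (shape coefficient mu_1 = 0.8), normal exposure (C_e = 1.0),
# normal thermal conditions (C_t = 1.0), and an illustrative ground snow
# load s_k = 0.9 kN/m^2 for the site:
s = roof_snow_load(0.8, 1.0, 1.0, 0.9)
print(f"{s:.2f} kN/m^2")  # prints 0.72 kN/m^2
```

The point is that the software only accounts for this load if someone tells it the load exists - skip the input and the program happily designs for a roof that never sees snow.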
The difference is that calculators are deterministic and correct. If you get a wrong answer, it is you that made the mistake.
LLMs will frequently output nonsense answers. If you get a wrong answer, it is probably the machine that made the mistake.
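The contrast can be sketched in a few lines of Python. The `toy_llm` function below is a made-up stand-in for illustration, not a real model - it just samples from a fixed set of answers the way an LLM samples tokens:

```python
import random

# A calculator-style function: same inputs, same output, every time.
def calc(a: float, b: float) -> float:
    return a * b

assert all(calc(6, 7) == 42 for _ in range(1000))  # deterministic

# A toy stand-in for an LLM: it samples an answer from a distribution,
# so repeated "queries" with the same prompt can disagree.
def toy_llm(prompt: str) -> str:
    answers = ["42", "41", "forty-two", "6 * 7 = 43"]
    weights = [8, 1, 1, 1]  # mostly right, sometimes confidently wrong
    return random.choices(answers, weights=weights)[0]

distinct = {toy_llm("what is 6 * 7?") for _ in range(200)}
print(distinct)  # typically more than one distinct answer
```

With the calculator, a wrong result traces back to the operator; with the sampler, the machine itself is a source of error, which is exactly why its output needs checking by someone who understands the problem.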