this post was submitted on 26 Mar 2024
398 points (100.0% liked)

Technology

top 50 comments
[–] chahk@beehaw.org 177 points 8 months ago* (last edited 8 months ago) (4 children)

"AI is nowhere near to being ready to replace you at your job. It is, however, ready enough to convince your boss that it's ready to replace you at your job."

[–] BarryZuckerkorn@beehaw.org 43 points 8 months ago (2 children)

I remember reading an article or blog post years ago that persuasively argued that the danger of AI is not going to be that it ends up doing things better than humans, but that it causes a lot of harm when entrusted with tasks it actually isn't good at. I think that thesis seems much more plausible now, watching people respond to clearly flawed AI systems.

[–] intensely_human@lemm.ee 13 points 8 months ago

Never attribute to malevolence that which can be explained by incompetence.

Including the end of humanity at the hands of the robots, apparently.

[–] ShepherdPie@midwest.social 26 points 8 months ago (2 children)

This is nothing new though. For decades, managers have fallen for "solution in a box" sales pitches even though front line workers know it's doomed to fail as soon as they set eyes on it. This time the solution just happens to be "AI."

[–] megopie@beehaw.org 8 points 8 months ago* (last edited 8 months ago)

It’s worse now than ever, though: many managers have been steeped in tech optimism for their whole working careers. The failures of “revolutionary new systems” have been forgotten, while the successes of other things are lauded.

They’ve been primed to jump on any new “innovation,” and at the same time B2B marketing has started adopting some of the most manipulative practices that were once reserved for consumers. They’ve crafted a narrative that shapes the discourse so that the main objections that surface are irrelevant to the actual issues managers might run into.

Stuff like “but what if it is TOO good?!” and “what if the wrong people get their hands on this AMAZINGLY POWERFUL new tech?!”

Instead of “but does this actually understand anything, or is it just giving output that looks correct?” or “wait, so how was this training data obtained? Will there be legal issues with deliverables made using this?”

The average manager has been primed by the zeitgeist to ask the sales rep the kinds of questions they want to answer.

[–] summerof69@lemm.ee 8 points 8 months ago

Bosses are probably trying to convince the AI that the AI is ready.

[–] anlumo@feddit.de 163 points 8 months ago (6 children)

Using a Large Language Model for image detection is peak human intelligence.

[–] PerogiBoi@lemmy.ca 118 points 8 months ago (5 children)

I had to prepare a high-level report for a senior manager last week regarding a project my team was working on.

We had to make 5 professional recommendations based on the data we reported.

We gave the 5 recommendations with plenty of evidence and references explaining why we came to those conclusions.

The top question we got was: “What are ChatGPT’s recommendations?”

Back to the drawing board this week, because LLMs are apparently more credible than teams of professionals with years of experience and bachelor’s- to master’s-level education in the subject matter.

[–] rho50@lemmy.nz 89 points 8 months ago (3 children)

It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness... are somehow an authority on anything.

[–] PerogiBoi@lemmy.ca 63 points 8 months ago (3 children)

All you need to succeed on this planet is the self-confidence to say things. The accuracy literally does not matter; it’s how you express it. I wish I’d known this when I was younger. It would have cut out all the imposter syndrome that held me back.

[–] CanadaPlus@lemmy.sdf.org 17 points 8 months ago

I wish it were that easy. If you go on too long it's boring, and if you're too confident you sound arrogant. At this point I've kind of just accepted that there are people who can sell, and that I'm not one of them.

[–] rutellthesinful@kbin.social 35 points 8 months ago (2 children)

you fool

"these are chatgpt's recommendations we just provided research to back them up and verify the ai's work"

[–] snooggums@midwest.social 26 points 8 months ago (1 children)

"What do we pay you guys for then? You are all fired and Tummy the intern will do everything with ChatGPT from here on out!"

[–] PerogiBoi@lemmy.ca 22 points 8 months ago (2 children)

You joke, but several sections of our HR department got cut and replaced with Enterprise GPT-4. We now talk to an internal chatbot for HR questions and some forms.
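
For the curious, here's a minimal, hypothetical sketch (in Python, against OpenAI's chat completions API) of what such an internal HR bot might look like under the hood. The model name, the system prompt, and the absence of any grounding in real HR records are all assumptions for illustration, not a description of the actual system:

    # Hypothetical sketch of an internal HR chatbot wrapper; illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_hr_bot(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are an internal HR assistant. If you are not "
                            "certain of a policy or a balance, say so and refer "
                            "the employee to a human."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask_hr_bot("How much vacation leave do I have left?"))

Without grounding in the actual leave database, a bot like this can only guess at balances, which is exactly the failure described a couple of comments down.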

[–] MagicShel@programming.dev 20 points 8 months ago (1 children)

You should see if you can get it to hallucinate a pay raise or 3 months vacation.

[–] PerogiBoi@lemmy.ca 18 points 8 months ago (3 children)

It did the opposite, lmao. I asked it what my vacation leave balance was, because you need to verify leave amounts before you’re allowed to request any additional leave. It said I had 0 in my balance, and I know for a fact I have at least a week left 🤪 It took almost a month to sort out. I had to provide balance screenshots and everything. I’d probably be fucked if I hadn’t manually screenshotted my leave amounts beforehand.

[–] MagicShel@programming.dev 15 points 8 months ago

You work for a crazy company, my friend.

[–] intensely_human@lemm.ee 14 points 8 months ago

Jesus Christ, that thing’s real??

[–] Flax_vert@feddit.uk 12 points 8 months ago

Why can't they just use a simple calendar system where you book time off???? Who would use a large language model for that rubbish?

[–] snooggums@midwest.social 18 points 8 months ago

That is the least worst implementation!

I knew one HR person who cared about employees and did her best to help out. She only lasted 6 months.

[–] PerogiBoi@lemmy.ca 8 points 8 months ago (1 children)

Haha, and then the conversation would be “Yes, but can we see ChatGPT’s research?”

[–] MagicShel@programming.dev 9 points 8 months ago (1 children)

That's when you drop trou, bend over, spread the cheeks, and ask them to let you know when they're done reviewing ChatGPT's "research".

[–] PerogiBoi@lemmy.ca 15 points 8 months ago

My butt is much too perky for these goons. They don’t deserve it.

[–] Steve@communick.news 19 points 8 months ago (1 children)

"It came up with more or less the same recommendations. Though it didn't fully understand the specific target goals of your project, so our recommendations are more complete and actionable ready."

[–] SolarMech@slrpnk.net 9 points 8 months ago

I think this points to a large problem in our society: how we train and pick our managers. Oh wait, we don't. They pick us.

I mean, as long as you are the one prompting ChatGPT, you can probably get it to spit out the right recommendations. Works until they fire you because they are convinced AI made you obsolete.

[–] tigeruppercut@lemmy.zip 15 points 8 months ago (2 children)

AI cars are still running over pedestrians, and people think computers are at the point of making medical diagnoses?

[–] rho50@lemmy.nz 27 points 8 months ago (10 children)

There are some very impressive AI/ML technologies that are already in use as part of existing medical software systems (think: a model that highlights suspicious areas on an MRI, or even suggests differential diagnoses). Further, other models have been built and demonstrated to perform extremely well on sample datasets.

Funnily enough, those systems aren't using language models 🙄

(There is Google's Med-PaLM, but I suspect it wasn't very useful in practice, which is why we haven't heard anything since the original announcement.)
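
(To make the distinction concrete, here is a minimal, hypothetical sketch of how a suspicious-region highlighter might wrap a segmentation network. The model is assumed to already exist and be validated; nothing here is any real product's code.)

    # Hypothetical sketch: flag suspicious voxels in an MRI volume with a
    # pretrained per-voxel segmentation network (assumed, not a real product).
    import numpy as np
    import torch

    def highlight_suspicious_regions(volume: np.ndarray,
                                     model: torch.nn.Module,
                                     threshold: float = 0.5) -> np.ndarray:
        """Return a boolean mask of voxels flagged for radiologist review."""
        x = torch.from_numpy(volume).float().unsqueeze(0).unsqueeze(0)  # (1, 1, D, H, W)
        with torch.no_grad():
            logits = model(x)              # per-voxel suspicion scores
            probs = torch.sigmoid(logits)  # map scores into [0, 1]
        return probs.squeeze().numpy() >= threshold  # overlay this on the scan

The output is a mask for a human to review, not a diagnosis; that framing is most of the difference from pasting a scan into a chatbot.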

[–] intensely_human@lemm.ee 8 points 8 months ago

A picture is worth a thousand words

[–] enjoytemple@kbin.social 40 points 8 months ago

I am glad that "I googled why I was coughing and it said I had cancer and would die in 7 days so farewell you are a good friend" will live on for more years.

[–] NeatNit@discuss.tchncs.de 37 points 8 months ago (4 children)

I'm not following this story...

a friend sent me MRI brain scan results and I put it through Claude

...

I annoyed the radiologists until they re-checked.

How was he in a position to annoy his friend's radiologists?
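
(For context, "putting an MRI through Claude" presumably means something like the sketch below: Anthropic's messages API with an image attached. The model name, file name, and prompt are assumptions.)

    # Hypothetical reconstruction of "I put it through Claude"; details assumed.
    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("mri_slice.png", "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode()

    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png",
                            "data": image_b64}},
                {"type": "text",
                 "text": "Describe any abnormalities in this MRI."},
            ],
        }],
    )
    print(message.content[0].text)  # fluent prose either way, correct or not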

[–] jarfil@beehaw.org 9 points 8 months ago

Money. Guy is loaded, he can annoy anyone he wants.

[–] rufus@discuss.tchncs.de 27 points 8 months ago* (last edited 8 months ago)

Maybe consider a tool made for the task, and not just some random Claude, which isn't trained on this at all and just makes up a random impression of what an expert might say in a dramatic story?!

[–] rho50@lemmy.nz 25 points 8 months ago (2 children)

I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone's large bowel as "likely to be an aggressive malignancy." Leading to said person fully expecting they'd be dead by July, when in fact they were perfectly healthy.

These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

The misinformation is causing real harm.

[–] JohnEdwa@sopuli.xyz 28 points 8 months ago* (last edited 8 months ago)

This is nothing but a modern spin on "hey internet, what's wrong with me? WebMD: it's cancer."

[–] B0rax@feddit.de 12 points 8 months ago (1 children)

To be honest, it is not made to diagnose medical scans and it is not supposed to be. There are different AIs trained exactly for that purpose, and they are usually not public.

[–] Aatube@kbin.melroy.org 17 points 8 months ago* (last edited 8 months ago) (1 children)

Didn't he conclude with "We're still early"? How is that believing in the success?

[–] nxdefiant@startrek.website 16 points 8 months ago

Claude told him to be confident

[–] kibiz0r@midwest.social 16 points 8 months ago (3 children)

I need help finding a source, cuz there are so many fluff articles about medical AI out there...

I recall that one of the medical AIs that the cancer VC gremlins have been hyping turned out to have horribly biased training data. They had scans of cancer vs. not-cancer, but they were from completely different models of scanners. So instead of being calibrated to identify cancer, it became calibrated to identify what model of scanner took the scan.
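
(The failure mode described here is usually called shortcut learning. Below is a minimal, self-contained sketch of the standard sanity check, on synthetic stand-in data: if a simple classifier can predict which scanner produced an image from the same features the cancer model sees, the dataset leaks a confounder.)

    # Synthetic illustration of a leakage check; all data and shapes are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 64))      # stand-in for per-scan features
    scanner_id = rng.integers(0, 2, size=200)  # which scanner took each scan
    features[scanner_id == 1] += 0.5           # simulate a scanner-specific artifact

    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          features, scanner_id, cv=5).mean()
    print(f"scanner predictable with accuracy {acc:.2f}")  # well above 0.5: leakage

If that number is far better than chance, a cancer model trained on the same data can cheat the same way.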

[–] Seasoned_Greetings@lemm.ee 11 points 8 months ago* (last edited 8 months ago) (1 children)

Unpopular opinion incoming:

I don't think we should ignore AI diagnoses just because they are sometimes wrong. The whole point of AI diagnosis is to catch things physicians miss, and no AI diagnosis comes without a physician double-checking anyway.

For that reason, I don't think it's necessarily a bad thing that an AI got it wrong. The suspicion was still there, and the physicians double-checked. To me, that means the tool is working as intended.

If the patient was insistent enough that something was wrong, they would have had the physicians re-check, or would have gotten a second opinion anyway.

Flaming the AI for not being correct is missing the point of using it in the first place.
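
(There is also a base-rate argument for why the physician in the loop is non-negotiable. A back-of-envelope calculation, with purely illustrative numbers: even a good screening model produces mostly false positives when the condition is rare.)

    # Illustrative numbers only; sensitivity, specificity and prevalence are assumptions.
    sensitivity = 0.95  # P(model flags | disease)
    specificity = 0.95  # P(no flag | healthy)
    prevalence = 0.01   # 1% of screened patients actually have the condition

    p_flag = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_flag  # P(disease | flag), via Bayes' rule
    print(f"PPV = {ppv:.1%}")                # about 16%: most flags are false alarms

So a flag from the tool is a prompt for review, not a verdict, even when the tool itself is genuinely good.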

[–] rho50@lemmy.nz 13 points 8 months ago* (last edited 8 months ago) (1 children)

I don't think it's necessarily a bad thing that an AI got it wrong.

I think the bigger issue is why the AI model got it wrong. It got the diagnosis wrong because it is a language model, fundamentally not fit for use as a diagnostic tool. Not even as a screening/aid tool for physicians.

There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.

[–] noodlejetski@lemm.ee 9 points 8 months ago* (last edited 8 months ago)

That's surprising; LLMs are actually incredibly good at reading MRI scans: https://hachyderm.io/@dfeldman/112149278408570324

[–] akrz@programming.dev 7 points 8 months ago

And that guy is loaded and works in investment. Really goes to show how capitalism fosters investment in the best minds and organizations...

https://potentiacap.com/team/
