this post was submitted on 22 Nov 2024
[–] kat_angstrom@lemmy.world 6 points 7 hours ago

a virtual replica of you is able to embody your values and preferences with stunning accuracy.

I'm calling BS on this one. "Values and preferences" are such a far cry from Actual Personality that it's meaningless. Just more LLM hype

[–] ElectroLisa@lemmy.blahaj.zone 3 points 8 hours ago

Just one or all of them? /s

[–] fl42v@lemmy.ml 16 points 12 hours ago (3 children)

But is it convincing enough to attend meetings for me

[–] BertramDitore@lemm.ee 4 points 11 hours ago (3 children)

Ugh someone recently sent me LLM-generated meeting notes for a meeting that only a couple colleagues were able to attend. They sucked, a lot. Got a number of things completely wrong, duplicated the same random note a bunch of times in bullet lists, and just didn’t seem to reflect what was actually talked about. Luckily a coworker took their own real notes, and comparing them made it clear that LLMs are definitely causing more harm than good. It’s not exactly the same thing, but no, we’re not there yet.

[–] phoenixz@lemmy.ca 1 points 1 hour ago

You just have to love that these assholes are so lazy that they first use an LLM to write their work, but are then also too lazy to quickly proofread what the LLM spat out.

People caught doing this should be fired on the spot, you're not doing your job.

[–] Patch@feddit.uk 2 points 3 hours ago

I hosted a meeting with about a dozen attendees recently, and one attendee silently joined with an AI note taking bot and immediately went AFK.

It was in for about 5 minutes before we clocked it and kicked it out. It automatically circulated its notes. Amusingly, 95% of them were "is that a chat bot?" "Steve, are you actually on this meeting?" "I'm going to kick Steve out in a minute if nobody can get him to answer", etc. But even with that level of asinine, low-impact chat, it still managed to garble them to the point of being barely legible.

Also: what a dick move.

[–] nimble@lemmy.blahaj.zone 2 points 11 hours ago (1 children)

Wait until you hear about doctors using AI to summarize visits 😎

[–] Imgonnatrythis@sh.itjust.works 1 points 9 hours ago (1 children)
[–] nimble@lemmy.blahaj.zone 1 points 9 hours ago (1 children)

All the above would apply to doctor visit notes. Would you find that helpful?

Plus, they can hallucinate phrases or entire sentences

[–] Imgonnatrythis@sh.itjust.works 2 points 5 hours ago

Have you seen current doctor visit note summaries? The bar is pretty low. A lot of these are made with conventional dictation software that has no sense of context when it misunderstands. Agree the consequences can be worse if the context is wrong, but I would guess a well-programmed AI could summarize better on average than most visit summaries currently do. With this sort of thing there will be errors, but let's not forget that there already ARE errors.

[–] ThePowerOfGeek@lemmy.world 1 points 9 hours ago (1 children)

Or family reunions.

...Asking for a friend.

[–] stringere@sh.itjust.works 1 points 5 hours ago

What does an AI look like in jorts?

[–] Telorand@reddthat.com 1 points 12 hours ago

Asking the important questions

[–] Telorand@reddthat.com 16 points 14 hours ago (2 children)

Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.

Okay, but can it embody my traumas?

lol because people always behave in ways consistent with how they tell an interviewer they will.

[–] moistclump@lemmy.world 3 points 12 hours ago

Maybe some of the symptoms of the traumas that you exhibited during the interview.

[–] nimble@lemmy.blahaj.zone 11 points 13 hours ago* (last edited 13 hours ago)

I'm pretty sure we already explored this timeline in a Black Mirror episode

[–] Kolanaki@yiffit.net 12 points 14 hours ago* (last edited 14 hours ago)

I actually have wanted to try this out to see how accurate it can actually be. I already have conversations with myself, so I can truly compare the reality to an LLM. It's actually weird that even the supposedly "unlocked and able to generate anything from anything" tools I've found still don't let you just input direct forum posts and shit to use. Though, I can totally understand why; most people probably aren't gonna use it with their own shit, but someone else's.

[–] jdw@links.mayhem.academy 12 points 14 hours ago (1 children)

Great. Now I can see first hand how annoying I am 🤔

[–] Curious_Canid@lemmy.ca 2 points 11 hours ago

I like you.