this post was submitted on 21 Nov 2024
Technology

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It is the first AI technology aimed at exposing unreported CSAM at scale.

[–] Railcar8095@lemm.ee 0 points 1 day ago (9 children)

So you need to have a model that generates CP to begin with. Flawless reasoning there.

Look, it's clear you have no clue what you're talking about. Stop demonstrating it, moron.

[–] horse_tranquilizers@sh.itjust.works 2 points 1 day ago* (last edited 23 hours ago) (2 children)

Not CP. You train it on normal porn and select for CP traits, moron.

[–] Railcar8095@lemm.ee 1 points 23 hours ago (1 children)

https://en.m.wikipedia.org/wiki/False_positives_and_false_negatives

Not that I think you will understand. I'm posting this mostly for those moronic enough to read your comments and think "that seems reasonable."
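The false-positive point above is really a base-rate problem: when the target class is extremely rare, even a very accurate classifier produces mostly false positives. A minimal sketch, using hypothetical accuracy and prevalence numbers (not Thorn/Hive figures):

```python
# Base-rate arithmetic for a rare-class classifier.
# All numbers are hypothetical, chosen only to illustrate the effect.

def precision(tpr: float, fpr: float, base_rate: float) -> float:
    """Fraction of flagged items that are actually positive."""
    true_positives = tpr * base_rate
    false_positives = fpr * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# A model with a 99% true-positive rate and only a 1% false-positive
# rate, applied to uploads where 1 in 10,000 is actually positive:
p = precision(tpr=0.99, fpr=0.01, base_rate=0.0001)
print(f"{p:.1%}")  # roughly 1% of flags are true positives
```

In other words, under these assumed numbers about 99 out of every 100 flags would be false positives, which is why the false-positive rate matters far more than headline accuracy at upload-scanning scale.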
