this post was submitted on 21 Nov 2024
150 points (97.5% liked)

Technology


Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the earliest AI technology striving to expose unreported CSAM at scale.

top 50 comments
[–] Wes4Humanity@lemm.ee 21 points 23 hours ago (1 children)

Man... That AI is going to be so fucked up when it gains sentience

[–] Blackmist@feddit.uk 5 points 11 hours ago (1 children)

Skynet's real origin story. We might just deserve judgement day.

[–] Brickhead92@lemmy.world 0 points 9 hours ago

Oh we definitely do! Definitely for this, and definitely for many other things.

[–] sexual_tomato@lemmy.dbzer0.com 14 points 1 day ago* (last edited 1 day ago) (2 children)

Jesus Christ. If someone ever got their hands on this model they could use it to generate new material. The grossest possible AI model to date

[–] todd_bonzalez@lemm.ee 22 points 23 hours ago (1 children)

No. This is a classifier, not a generative model. You generally cannot train a model to be both, unless you do it on purpose, and they certainly did not (especially since classifiers are far easier to train than generative models).

[–] sexual_tomato@lemmy.dbzer0.com 7 points 23 hours ago* (last edited 23 hours ago) (1 children)

A generative model uses a classifier as part of its training. If you start from a picture of pure random noise and iteratively keep the random changes that the classifier says "look" more like CSAM, you can effectively generate images that the classifier is 100% certain are CSAM. Whether the result looks anything like what a human would consider CSAM depends on other factors, but it remains a possibility.

[–] todd_bonzalez@lemm.ee 6 points 19 hours ago

You are describing the way DeepDream works, not the way modern diffusion models work. It's the difference between psychedelic dog faces and a highly prompt-adherent generated image of a German Shepherd.

I can't imagine you're going to get anything out of this model that actually looks like CSAM, unless there's some sort of breakthrough in using these models for previously unrealized generative purposes.

[–] TheHobbyist@lemmy.zip 127 points 1 day ago* (last edited 1 day ago) (2 children)

Thorn, the company backed by Ashton Kutcher, and the same one that pushed to have all messages in the EU monitored via Chat Control. No thanks.

https://fortune.com/europe/2023/09/26/thorn-ashton-kutcher-ylva-johansson-csam-csa-regulation-european-commission-encryption-privacy-surveillance/

[–] Erasmus@lemmy.world 64 points 1 day ago (2 children)

Just remember, folks: Kutcher is a slimeball too.

The guy went from being a D-list star, hanging out with the likes of Danny Masterson and going to Diddy’s infamous parties, to suddenly, overnight, courting the US government and becoming the face of ‘helping’ children everywhere.

Yeah right…..

[–] chonglibloodsport@lemmy.world 23 points 1 day ago (3 children)

I’d be wary of calling him guilty by association. Maybe when he realized who he was really hanging out with, he was so horrified and disgusted that he just had to get involved and do something to fight back?

[–] phoenixz@lemmy.ca 1 points 14 hours ago

Nah, it's much easier to chastise people for not knowing what nobody knew

[–] ninekeysdown@lemmy.world 19 points 1 day ago (1 children)

People can grow and change. Not saying he did or didn’t. Just saying that people aren’t a monolith. It’s plausible he just grew and his views changed / evolved.

That being said, it’s highly convenient where he’s positioned himself these days…

[–] Kyrgizion@lemmy.world 135 points 1 day ago (2 children)

Not a single peep about false positives.

I'm sure it won't be abused though. And if anyone does complain, just get their electronics seized and checked, because they must be hiding something!

[–] JackbyDev@programming.dev 14 points 1 day ago

It could also, of course, make mistakes, but Kevin Guo, Hive's CEO, told Ars that extensive testing was conducted to reduce false positives or negatives substantially. While he wouldn't share stats, he said that platforms would not be interested in a tool where "99 out of a hundred things the tool is flagging aren't correct."

I take this to mean it is at least 1% accurate lol.

[–] oldfart@lemm.ee 81 points 1 day ago (4 children)

Reminds me of the A-cup porn ban in Australia a few years ago, because supposedly only pedos would watch that.

[–] JackbyDev@programming.dev 15 points 1 day ago

This sort of rhetoric really bothers me, especially when you consider that there are real adult women with disorders that make them appear prepubescent. Whether that's appropriate for pornography is a different conversation, but the idea that anyone interested in them is a pedophile is really disgusting. That is a real, human, adult woman, and some people say anyone who wants to love her is a monster. Just imagine someone telling you that anyone who wants to love you is a monster and that they're actually protecting you.

[–] baldingpudenda@lemmy.world 43 points 1 day ago (2 children)

There was a porn studio that was prosecuted for creating CSAM, in Brazil I believe. Prosecutors claimed that the petite, A-cup woman was clearly underage. Their star witness was a doctor who testified that such underdeveloped breasts and hips clearly meant she was still going through puberty and couldn't possibly be 18 or older. The porn star showed up to testify that she was in fact over 18 when they shot the film and brought all her identification, including her birth certificate and passport. She also said something to the effect that women come in all shapes and sizes and a doctor should know better.

I can't find an article. All I'm getting is GOP Trump pedo nominees and Brazil's laws on porn.

[–] oldfart@lemm.ee 4 points 19 hours ago

I'm just glad they protected her.

Pretty sure the adult star was Lil Lupe. She was everywhere at the time because she did, indeed, look underage.

[–] Clinicallydepressedpoochie@lemmy.world 47 points 1 day ago (2 children)

Aw man, I love all titties. Variety is the spice of life.

[–] DScratch@sh.itjust.works 45 points 1 day ago (1 children)

Not to mention the self-image impact such things would have on women with smaller breasts, who (as I understand it) generally already struggle with poor self-image because of their breast size.

[–] sunzu2@thebrainbin.org 23 points 1 day ago (1 children)

Clearly the state gives zero fucks about these women, or anyone else, or even "the children".

The Catholic Church is still around for a reason.

[–] Halosheep@lemm.ee 7 points 22 hours ago* (last edited 22 hours ago)

Typically the state only cares about things they perceive as children.

[–] user224@lemmy.sdf.org 20 points 1 day ago (1 children)

Believe it or not, straight to jail.

[–] Clinicallydepressedpoochie@lemmy.world 23 points 1 day ago (1 children)

If this is the price I must pay, I will pay it, sir! No man should be deprived of privately viewing a consenting adult's perfectly formed small tits. They can take my liberty, they can take my livelihood, but they will never take away my boner for puffy nipples on a small-chested half-Japanese woman!

[–] sunzu2@thebrainbin.org 52 points 1 day ago (2 children)

I am a bit confused how it is legal for them to have the training data here?

Like, is there anything a corpo can't do?

Like, why can't Subway Jared and the Catholic Church "train the AI"?

Only halfway joking, what's the catch here?

[–] MentalEdge@sopuli.xyz 36 points 1 day ago (1 children)

There are laws around it. Law enforcement doesn't just delete any digital CSAM they seize.

Known CSAM is archived and analyzed rather than destroyed, and used to recognize additional instances of the same files in the wild, wherever file scanning is possible.

Institutions and corporations can request licenses to access the database, or just the metadata that allows software to tell if a given file might be a copy of known CSAM.

This is the first time an attempt is being made at using the database to create software able to recognize CSAM that isn't already known.

I'm personally quite sceptical of the merit. It may well be useful for scanning the public internet, but I'm guessing the plan is to push for it to be somehow implemented for private communication, no matter how badly that compromises the integrity of encryption.

[–] melroy@kbin.melroy.org 17 points 1 day ago (1 children)

So doesn't that mean law enforcement has the biggest CP collection of anybody? This sounds kinda dangerous...

[–] MentalEdge@sopuli.xyz 25 points 1 day ago* (last edited 1 day ago)

It does. Kinda.

The police are seldom allowed to be in possession of CSAM, except for seizing the hardware that contains it during an arrest. The database used in modern detection tools is maintained by NCMEC, which has special permission to do so.

And of course there are risks, but it's just digital data. Unless you are creating more, you're not actively harming anyone. And law enforcement absolutely needs that data to take some of the most obvious steps to prevent it being spread further.

Obviously, someone has access, but getting to the actual media files wouldn't be simple. What typically happens is that anyone wanting to detect CSAM is given a hashed version of the database. They can then scan their systems by hashing any media they are hosting and checking whether there are any matches.
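
To make the matching step concrete, here is a minimal sketch of how a host could check its own media against a hashed version of the database. It assumes a hypothetical plain-text list of known hashes and uses exact SHA-256 for simplicity; real deployments rely on perceptual hashes such as PhotoDNA or PDQ so that re-encoded copies still match, and the actual lists are only distributed to vetted partners.

```python
import hashlib
from pathlib import Path

# Hypothetical file with one known hash per line. Real lists come from NCMEC
# or vetted partners and usually contain perceptual hashes (PhotoDNA/PDQ)
# rather than plain SHA-256; this only illustrates the matching step.
KNOWN_HASHES_FILE = "known_hashes.txt"


def load_known_hashes(path):
    """Load the hashed version of the database into a set for fast lookups."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def sha256_of_file(path):
    """Hash a hosted media file in chunks so large files don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan_media_dir(media_dir, known):
    """Return every hosted file whose hash appears in the known-hash set."""
    return [
        p for p in Path(media_dir).rglob("*")
        if p.is_file() and sha256_of_file(p) in known
    ]


if __name__ == "__main__":
    known = load_known_hashes(KNOWN_HASHES_FILE)
    for hit in scan_media_dir("./media", known):
        print(f"match against known-hash list: {hit}")
```

Matching hashes like this only ever flags copies of files that are already known, which is exactly why recognizing new material requires training a model on the real media instead.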

Whenever possible, people aren't handling the actual media. But for any detection to be possible to begin with, the database of the actual media does need to be maintained somewhere.

AI is a touchier subject, as you can't use hashes to train a model to recognize CSAM that isn't already in the database, so in those cases you have to work with the actual real media. This is only recently becoming a thing.

It also leaves open the possibility of false positives. An oft-cited example is parents taking pictures of their own children for innocent reasons, or doctors and parents handling images for valid medical reasons. In a system that flagged such content, it would mean someone else ends up seeing that "private" content because it was flagged.

[–] db0@lemmy.dbzer0.com 40 points 1 day ago* (last edited 1 day ago) (1 children)

It's the earliest AI technology striving to expose unreported CSAM at scale.

horde-safety has been out for a year now. Just saying... It's not an AI model trained in this way, but it still uses neural networks (i.e. "AI technology").

[–] qaz@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)
[–] db0@lemmy.dbzer0.com 6 points 1 day ago

haha, nah people reported some unexpected censors, and we investigated what part of their prompt might be causing it.

[–] feedum_sneedson@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago)

me
no
rikey

[–] hendrik@palaver.p3x.de 23 points 1 day ago (5 children)

And will we get that technology to keep the Fediverse and free platforms safe? Probably not. All its predecessors have been kept for the sole use of the big players, despite populists always claiming we need to introduce total surveillance to keep the children safe...

[–] db0@lemmy.dbzer0.com 4 points 1 day ago (1 children)

IFTAS is already working with Thorn towards this goal. But you already have access to such technology through my toolset.

[–] hendrik@palaver.p3x.de 2 points 23 hours ago* (last edited 23 hours ago) (1 children)

This one? I loosely followed your work... Maybe I should try it someday. See how it does on a regular VPS. Thanks for the link to the IFTAS. Seems they have curated some useful links... I'll have a look at their articles. Hope they get somewhere with that. At this point, I don't think there is any blocklist accessible to the average Fediverse admin?!

Edit: Thx, saw your other comment with the link to horde-safety.

[–] db0@lemmy.dbzer0.com 2 points 23 hours ago (1 children)

Ye, a normal VPS would be too slow for production use, as a GPU is recommended. But you can plug in any home PC to do it without risks

[–] hendrik@palaver.p3x.de 1 points 23 hours ago* (last edited 23 hours ago) (1 children)

Do you think this approach would be worth a try for the threaded Fediverse (aka Lemmy)? I mean your use-case is very different. We have some rudimentary image detection to flag other kinds of unwanted images in Piefed. I could experiment with something like https://github.com/monatis/clip.cpp. Have it go through the media cache and see if it can do something useful for us. But I don't think it'd be worth all the effort unless the whole approach is somewhat accurate and runs in real time on average VPSes.
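
For what it's worth, here's a rough sketch of the kind of zero-shot check I'm imagining, using the Python transformers CLIP bindings rather than clip.cpp. The model name, labels, cache path and threshold are placeholders I made up, not anything fedi-safety or Piefed actually uses.

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder prompts; a real deployment would tune these carefully.
LABELS = ["an explicit or unsafe image", "an ordinary safe-for-work image"]

MODEL_ID = "openai/clip-vit-base-patch32"  # small model, runs on CPU
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)


def score_image(path):
    """Return CLIP's probability for each label for one cached image."""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=LABELS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(LABELS))
    return dict(zip(LABELS, logits.softmax(dim=-1)[0].tolist()))


# Hypothetical cache directory; pict-rs keeps files under its own layout.
for img_path in Path("./media_cache").rglob("*"):
    if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png", ".webp"}:
        continue
    scores = score_image(img_path)
    # Flag for human review rather than deleting; the threshold needs tuning.
    if scores[LABELS[0]] > 0.8:
        print(f"flag for review: {img_path} -> {scores}")
```

The base CLIP model is small enough that this should run on the order of a second per image on CPU, so it might be within reach for a small instance's incoming media even without a GPU, but that is exactly the part that would need measuring.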

[–] db0@lemmy.dbzer0.com 3 points 23 hours ago* (last edited 23 hours ago)

This approach was developed precisely for the threaded Fediverse. The initial use case was protecting my own Lemmy instance from CSAM! Check out fedi-safety and pictrs-safety.
