this post was submitted on 31 Aug 2023
863 points (98.6% liked)
Fediverse
you are viewing a single comment's thread
Wrong. The next terrible thing is mass AI-generated propaganda and disinformation, as in the "dead internet" theory.
Next? I think you misspelled "current" :-D
My bad. But I think we haven't seen the full extent of it yet
Tbf, the current "mass AI-generated propaganda and disinformation" seems to have actual humans behind it, i.e. state-sponsored disinformation as part of modern warfare. That's different from sheer random BS pooped out by an algorithm designed to maximize short-term profit for someone stringing together enough buzzwords to get their product bought by a buyer dumb enough to fall for the pitch and too short-sighted to see the wider implications... or worse, one who sees them and simply doesn't care.
It reminds me of the US tax-preparation software companies that deliberately ran campaigns to confuse military veterans and students (seriously!? what kind of evil mfers...!?). They got caught and were even punished and fined, but only about a decade later, and of course the original CEO, and the next one, etc., had long since collected their fat bonus checks, leaving the company holding the bag (the liability). So it was "a smart move", as long as you disregard ethics entirely. What was presented as a "free gift" to generate good PR for the company was in reality preying on people they judged to be highly trusting, or at least unlikely to sue... and they were correct. Now, watching interviews with these tech bros, I get the same vibe: who cares, so long as I get mine.
Web of trust solves this problem, until people start intentionally trusting AIs as much as they do other humans, at which point it's no longer a problem.
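For anyone curious, the "web of trust" idea above can be sketched in a few lines: you only trust direct acquaintances with some weight, and trust in a stranger is derived from the best chain of vouches connecting you. This is a minimal illustrative sketch, not any particular implementation; all names, weights, and the product-of-weights rule are hypothetical choices.

```python
from collections import defaultdict

# Hypothetical minimal web-of-trust model: direct trust edges carry a
# weight in [0, 1]; trust along a chain of vouches is the product of
# its edge weights, and we keep the best chain within a depth limit.

def derived_trust(edges, source, target, max_depth=3):
    """Best product-of-weights trust score from source to target."""
    graph = defaultdict(list)
    for a, b, w in edges:
        graph[a].append((b, w))
    best = {source: 1.0}      # best known trust score per node
    frontier = {source: 1.0}  # nodes improved in the last round
    for _ in range(max_depth):
        nxt = {}
        for node, score in frontier.items():
            for nbr, w in graph[node]:
                cand = score * w  # trust decays with each hop
                if cand > best.get(nbr, 0.0):
                    best[nbr] = cand
                    nxt[nbr] = cand
        frontier = nxt
    return best.get(target, 0.0)

# Example graph (all names and weights made up):
edges = [
    ("alice", "bob", 0.9),   # Alice trusts Bob highly
    ("bob", "carol", 0.8),   # Bob vouches for Carol
    ("alice", "dave", 0.2),  # Alice barely trusts Dave
    ("dave", "carol", 0.9),
]
print(derived_trust(edges, "alice", "carol"))  # best chain runs via Bob: 0.9 * 0.8
```

The point the comment makes then falls out of the model: if people assign AI accounts the same edge weights they assign humans, the graph no longer distinguishes the two, and the filter stops filtering.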