this post was submitted on 13 Dec 2024
74 points (97.4% liked)

Fediverse


The Fediverse is a great system for preventing bad actors from disrupting "real" human-to-human conversations, because its mods, developers, and admins all work out of a desire to connect people (as opposed to "trust and safety" teams more concerned with user retention).

Right now it seems that the Fediverse's main protection is that it just isn't a juicy enough target for wide-scale spam and bad-faith agenda pushers.

But assuming the Fediverse does grow to a significant scale, what (current or future) mechanisms are/could be in place to fend off a flood of AI slop that is hard to distinguish from human? Even the most committed instance admins can only do so much.

For example, I have a feeling all "good" instances in the near future will eventually have to turn on registration applications and only federate with other instances that do the same. But it's not crazy to imagine that GPT could soon outmaneuver most registration questions, which means registration applications will only slow the problem's growth, not manage it long-term.

Any thoughts on this topic?

all 44 comments
[–] OpenStars@piefed.social 45 points 1 month ago (2 children)
[–] bigfoot@lemm.ee 12 points 1 month ago (2 children)

What's the incentive to operate an LLM on the fediverse that is truly helpful and not just trying to secretly sell something/push an agenda?

[–] OpenStars@piefed.social 8 points 1 month ago* (last edited 1 month ago)

Well, I am not saying that the scenario is a perfect match, just that it reminded me of that:-).

Though to answer your question, if Reddit were all AI slop whereas we were not, then they would be foolish to not exploit (for moar profitz) the source of legitimately true info that could be useful to answer people's questions, e.g. on topics such as whether and how to use Arch Linux btw. :-P

[–] mukt@lemmy.ml 1 points 1 month ago

To train it to mimic genuine human behaviour for applications elsewhere.

[–] frozenspinach@lemmy.ml 3 points 1 month ago (1 children)

The trouble with this is that I think bots and bad faith trolls can split the difference, passing some minimum threshold of constructive and marrying it to usual trolling behaviors.

[–] OpenStars@startrek.website 2 points 1 month ago* (last edited 1 month ago)

Agreed. Though it is not just that one isolated user - the admins of Lemmy.ml are quite well-known themselves for administering their server in bad faith as well. The sidebar just says "A community of privacy and FOSS enthusiasts, run by Lemmy’s developers" (and then a link to "What is Lemmy.ml" that returns an error when I try to click it - btw for you with an account, does it go anywhere? maybe a community that is only visible to those locally with an account? for me it says "There was an error on the server. Try refreshing your browser. If that doesn't work, come back at a later time. If the problem persists, you can seek help in the Lemmy support community or Lemmy Matrix room." - but what about when you click it?). And while people on that instance constantly criticize the USA's support for Israel's genocide in Gaza, nonetheless if you whisper a criticism towards the likes of Russia, China, or North Korea, you will be banned even from communities that you have never once visited. That is simply how they do things over there. (further reading, see also so, so very many examples in !yepowertrippinbastards@lemmy.dbzer0.com or !fediverselore@lemmy.ca or !meanwhileongrad@sh.itjust.works etc.)

Sadly, I am not anywhere close to joking or exaggerating. Also, while they ban people for mentioning that e.g. people died in the Tiananmen Square massacre, they also protect mods who act horribly towards their fellow human beings. Here's an interesting example that you can read for yourself at https://hexbear.net/post/3706906/5518427, where the mod told the poster (over a misunderstanding of an in-game event) that he wanted to kill them; even the mod's unremoved comments double down with “nono I don’t want to shoot for pointing that it’s a game, I want to shoot you because…”, and then later triple down still further, e.g. stating “I hope you die soon.” To be clear, this post shows up on hexbear.net (for some reason, despite the original having been removed entirely), but the incident occurred on, and the mod is from, lemmy.ml - those instances are often intertwined, along with lemmygrad.ml.

So you may want to consider switching instances. A further thought: I am having to reply to you from a different instance than my original comment since I have blocked all users from lemmy.ml (although PieFed's Notifications system is newly implemented and still not fully functional yet, causing me to have to hunt down why I received a Notification for a comment that I could not see:-). You will often face similar prejudice when speaking from that account on that instance - e.g. the apps Sync and Connect can also do such user-blocking of instances, and several instances such as lemmy.cafe and quokk.au and dubvee.org have outright defederated from lemmy.ml entirely. Thus you may sometimes feel like you are speaking into a void and wondering why nobody will respond to you - I am explaining that this may well be a reason why.

I hope that you don't feel that I am picking on you personally - I'm just trying to share a thought that could help you understand the contentious situation between the "tankie" vs. "liberal" instances on the Fediverse:-). If you wanted an instance that is specifically leftist, slrpnk.net seems awesome? In contrast, lemmy.ml merely pretends to be leftist, while actually advocating solely for formerly communist powers, despite them being currently capitalistic, and definitely authoritarian - e.g. you will see people praising the virtues of North Korea there, but nowhere else on the Fediverse that I have yet seen! Although for me, it's not even what those users believe, so much as their improper argumentation form about it. E.g. here's an example from the bad-faith user you mentioned, posted just prior to the USA election, which seems to be an attempt to encourage the BuT bOtH sIdEs EqUaL ThO rhetoric:

[image]

And I see this kind of thing so often from users on lemmy.ml, that I just blocked the entire instance - again, I hope you personally don't feel attacked by this, just sharing my reasoning in case that may be helpful for you.:-)

[–] th3raid0r@tucson.social 17 points 1 month ago (2 children)

Hi there! Admin of Tucson.social here.

I think that the only way the fediverse can honestly handle this is through local/regional nodes, not interest-based global nodes.

Ideally this would manifest as some sort of non-profit entity that would work with municipalities to create community owned spaces that have paid moderation.

So then comes the problem of folks not agreeing with a local node's moderation staff - but that's also WHY it should be local. It's much easier to petition and organize against someone who exists in your town than some guy across the globe who happens to own a large fediverse node.

This model just doesn't work (IMO) if nodes can't be accountable to a local community. If you don't like how Mastodon or lemmy.world are moderated, you have zero recourse. For Tucson.social, citizens of Tucson can appeal to me directly, and because they are my fellow citizens I take them FAR more seriously.

Only then will people be trusting enough to allow for the key element in protecting against AI slop: Human Indemnification Systems. Right now, if you asked the community of lemmy.world to provide proof they are human, you'd wind up with an exodus. There's just no trust for something like that, and it would be hard to acquire enough trust.

With a local node, that conversation is still difficult, but we can do things that just don't scale with global nodes - things like validating a person by meeting them in order to mark them as "indemnified" on the platform, or working with local political parties to validate whether a given person is "real" using voter rolls.

But yeah, this is a bit rambly, so I'll conclude: this is a problem that exists at the intersection of trust and scale, and I believe that local nodes are the only real solution that can handle both.

[–] Blaze@feddit.org 6 points 1 month ago (2 children)
[–] th3raid0r@tucson.social 4 points 1 month ago (2 children)

???

I don't particularly have any issues with them.

But if a user did, they don't have much recourse. I'm talking about that as a structural aspect. Not a moral one.

But sure if you just want to claim this puts me in the !yepowertrippinbastards@lemmy.dbzer0.com community by ripping it out from any relevant context, go ahead I guess?

[–] Blaze@feddit.org 5 points 1 month ago (2 children)

I didn't say you were power tripping.

I was mentioning that community as a way to handle power tripping mods.

It also works: !lotrmemes@midwest.social is being replaced by !lotrmemes@lemmy.dbzer0.com after the admin started power tripping.

So it's not just moral, it also has a real impact by allowing users to organize and switch communities

[–] th3raid0r@tucson.social 5 points 1 month ago (1 children)

Oh okay! I'm sorry about the misunderstanding.

[–] Blaze@feddit.org 4 points 1 month ago

No worries!

[–] OpenStars@piefed.social 2 points 1 month ago (1 children)

Oh wow you are fast - I just commented with the identical example. :-)

[–] Blaze@feddit.org 2 points 1 month ago

Nice comment!

[–] OpenStars@piefed.social 3 points 1 month ago

Fwiw, Blaze I'm sure was saying that the recourse could be to post the infraction there, so that people become aware of a "power tripping bastard", i.e. the lemmy.world mod hypothetical example mentioned earlier.

Multiple times communities have been shifted from one instance to another due to precisely this effect. A recent example is how !lotrmemes@midwest.social now has an alternative !lotrmemes@lemmy.dbzer0.com to help people get out from under the heel of the power tripping admin of that particular instance (described in a recent post in the !yepowertrippinbastards@lemmy.dbzer0.com community).

[–] bigfoot@lemm.ee 1 points 1 month ago (2 children)

"Power tripping mods" definitionally cannot exist on the fediverse where anyone can create an instance or community. Even on Reddit, 99% of the time someone said a mod was "power tripping" it was just a right winger upset that the mod removed their disruptive nonsense.

The purpose of communities like the one you linked to is to shame mods into employing a passive, generic bare-minimum style of moderation, when we should be encouraging the opposite if we want diversity in the fediverse.

[–] _haha_oh_wow_@sh.itjust.works 1 points 1 month ago (1 children)

Power tripping mods can exist anywhere there are mods, even here. The rest of your point stands though.

[–] bigfoot@lemm.ee 1 points 1 month ago (1 children)

It's theirs. They can do whatever they want. Any limit on their power within the instance/community is purely voluntary on the part of the owner.

[–] _haha_oh_wow_@sh.itjust.works 2 points 1 month ago

Instance = admin, community = mod, but either can still power trip within the confines of their little worlds.

[–] Blaze@feddit.org 1 points 1 month ago (1 children)

Three examples from that community, where other people can discuss the moderation, and see whether it's power tripping or not.

right winger upset

Right wingers aren't that numerous on Lemmy, but when this happens the accusation quickly gets dismissed by the people commenting

anyone can create an instance or community

Enjoy your empty community that nobody cares about, because people post on the one where most of the people are - the one where the power tripping mod is operating

[–] bigfoot@lemm.ee -1 points 1 month ago* (last edited 1 month ago) (2 children)

Mods and admins on the Fediverse are not democratically elected, they have complete control. Accusing one of "power tripping", in their own community, on the instance they presumably pay for, is not a rational accusation, since they definitionally cannot exist in a state of less power. What that community is trying to do is use the threat of public shaming to influence behavior. It's how you get weak moderation and generic communities where bad actors can thrive. A community dedicated to "Stopping bad mods" sounds good on the surface, but it's an argument made in bad faith.

[–] orcrist@lemm.ee 3 points 1 month ago

The first sentence you wrote is either misleading or incorrect, and I think it's important to reexamine. Each administrator has control over the instance they run, but they don't have control over the Fediverse itself, and because it's so easy for people to move to other instances, they have little control over other users.

[–] Blaze@feddit.org 2 points 1 month ago

Accusing one of “power tripping”, in their own community, on the instance they presumably pay for, is not a rational accusation, since they definitionally cannot exist in a state of less power

Mods don't pay for the instance, they aren't in charge of any of it.

Some admins have strong policies against getting involved into moderation of communities, leaving potential power tripping mods unchecked.

What that community is trying to do is use the threat of public shaming to influence behavior. It’s how you get weak moderation and generic communities.

  • A community is the most popular on a topic, it's by far the most active community on that topic across the whole platform
  • The single mod, who was just the first one to create the community when everyone came to Lemmy, starts to power trip
  • The admin does not want to intervene
  • What solution do the users have besides organizing on a community like !yepowertrippinbastards@lemmy.dbzer0.com ?
[–] bigfoot@lemm.ee 5 points 1 month ago (1 children)

Thanks for the thoughtful response. I too think that regional instances would be ideal for a "backbone" of the social web. But at the same time, I feel that interest-based connection is a truly unique strength of the internet and it would be a sad thing to lose to the slop.

Ultimately, I think that more, smaller instances is likely the best "ultimate" defense against slop since there is no incentive for them to scale beyond their needs. But every instance admin is technically responsible for the content on all federated instances. Which can get overwhelming!

[–] th3raid0r@tucson.social 4 points 1 month ago (1 children)

I mean, regional instances don't have to stop folks from engaging primarily with interest based communities.

Some regions will dominate certain interests - for example, here in Tucson we're considered one of the amateur astronomy capitals of the world. If mander.xyz were to disappear tomorrow, Tucson would make a good home for all of the fediverse's astronomy needs, even though it's a region-based instance.

Further, there's nothing that states an interest-based instance needs any registration. One could imagine a world where local instances have all the users and identities, and the interest based instances simply provide communities to the larger fediverse with no users of their own.

But yeah, it's definitely a paradigm shift that makes interest based communities a bit more difficult to find.

[–] bigfoot@lemm.ee 1 points 1 month ago

Further, there’s nothing that states an interest-based instance needs any registration. One could imagine a world where local instances have all the users and identities, and the interest based instances simply provide communities to the larger fediverse with no users of their own.

Yes, I've had this same thought and I think it's a great model! If it comes to pass or not remains to be seen. But the concept is good!

[–] haui_lemmy@lemmy.giftedmc.com 8 points 1 month ago

Maybe it was silently assumed, but nobody so far has mentioned the endless stream of scrapers that go through my probably-juicy but private instance. I'm banning a new bot every week, and by now they have switched to distributed actions: I get over 400 requests per hour from a couple of IPs for the same content, with changing user agents because I wrote automated detection mechanisms. I might just make my instance login-only.

[–] AbouBenAdhem@lemmy.world 7 points 1 month ago* (last edited 3 weeks ago)

Instead of trying to detect and block it, just disincentivize it.

Most AI spam on social media tries to exploit various systems intended to predict “good” content on the basis of a user’s past activity, by tracking reputation/karma/etc. Bots build up karma by posting a massive amount of innocuous (but usually insipid) content, then leverage that karma to increase the visibility of malicious content. Both halves of this process result in worse content than if the karma system didn’t exist in the first place.

[–] WatDabney@lemmy.dbzer0.com 6 points 1 month ago (1 children)

"The fediverse" really can't. That's just the reality of a decentralized system. It's going to be up to individual instances to sort it out.

But that's a good thing, because what it means is that different instances can and will try different approaches, and between them, they'll sooner or later hit on the one(s) that will be most effective.

[–] bigfoot@lemm.ee 1 points 1 month ago (2 children)

Any speculation as to what those tools might look like?

[–] happybadger@hexbear.net 1 points 1 month ago

Ban it outright in the rules of individual instances, bully AI piglets for printing the lowest-value content online in the same way NFT goobers are ostracised, run AI image and writing detectors on suspect posts. The common denominator of any AI post is that it's going to be shit and it should just be treated like someone repeatedly posting a Lorem ipsum copypasta or spam email.

[–] WatDabney@lemmy.dbzer0.com 1 points 1 month ago

I don't have the foggiest idea.

And really, if I did have a good idea, I wouldn't post it publicly anyway. That'd just be tipping my hand to the astroturfers.

[–] frozenspinach@lemmy.ml 5 points 1 month ago

The fediverse architecture was built from the beginning to allow instance-by-instance exercise of discretion to mute any systemic effects that could take over the network as a whole.

This was I think oriented toward limiting swarming behavior from trolls, but I think it also applies to AI bots.

Right now it seems that the Fediverses main protection is that it just isn’t a juicy enough target for wide scale spam and bad faith agenda pushers.

If you ask me, they are already here right now - but I think it's not the architecture of the fediverse so much as the judgment of individual mods that has let us down in this case.

[–] yogthos@lemmy.ml 5 points 1 month ago

I think that being human scale is largely the appeal of the Fediverse. Each instance isn't meant to grow to the size of a centralized platform, but to be a relatively small community of people with some shared interests. I look at it similarly to the way IRC channels worked back in the day. You tend to have a group of people whom you interact with frequently and that's how you know they're human. If some bot enters the community then it becomes obvious very quickly.

[–] Breve@pawb.social 4 points 1 month ago (1 children)

I don't think there is any way to have a genuine "open forum" amongst complete strangers. There have always been human troll farms pushing narratives using sock puppet accounts, AI is just enabling it to reach new scales.

I actually am for echo chambers when it comes to social media - but ones in which you only follow people you know or trust, ignore complete strangers, and make sure to get news and critical information from OUTSIDE social media, again from institutions you trust.

[–] bigfoot@lemm.ee 1 points 1 month ago

Yes, strong moderation by members of the community is sufficient to recognize and remove bad (human) actors. The question is one of volume and overwhelming those human mods. GPT can create hundreds of bad-faith accounts.

[–] Corgana@startrek.website 3 points 1 month ago

I have had similar thoughts. I think the answer ultimately lies in active mods who can really get to know a community and its users, and who can identify when users are pushing a narrative even if they can't confirm whether those users are bots.

Also, as @dessalines@lemmy.ml pointed out, user registrations. On startrek.website we have a question that is easy for a Star Trek fan to answer but not easy for a bot (although, getting back to your concern, ChatGPT probably would have no problem).

[–] metaStatic@kbin.earth 1 points 1 month ago

Same way email handles spam ...
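
To make that analogy concrete: one classic email defense is Bayesian content filtering, which could in principle be applied to fediverse posts as well. A toy sketch of the idea (not a feature of any actual fediverse software):

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return [t for t in text.lower().split() if t.isalpha()]

class NaiveBayesSpamFilter:
    """Toy Bayesian filter in the spirit of classic email spam defenses."""

    def __init__(self) -> None:
        self.spam_counts: Counter = Counter()
        self.ham_counts: Counter = Counter()
        self.spam_msgs = 0
        self.ham_msgs = 0

    def train(self, text: str, is_spam: bool) -> None:
        counts = self.spam_counts if is_spam else self.ham_counts
        counts.update(tokenize(text))
        if is_spam:
            self.spam_msgs += 1
        else:
            self.ham_msgs += 1

    def spam_probability(self, text: str) -> float:
        # Log-space naive Bayes with add-one (Laplace) smoothing.
        total = self.spam_msgs + self.ham_msgs
        log_spam = math.log(self.spam_msgs / total)
        log_ham = math.log(self.ham_msgs / total)
        vocab = len(set(self.spam_counts) | set(self.ham_counts))
        n_spam = sum(self.spam_counts.values())
        n_ham = sum(self.ham_counts.values())
        for tok in tokenize(text):
            log_spam += math.log((self.spam_counts[tok] + 1) / (n_spam + vocab))
            log_ham += math.log((self.ham_counts[tok] + 1) / (n_ham + vocab))
        return 1 / (1 + math.exp(log_ham - log_spam))
```

The email ecosystem pairs filters like this with shared blocklists and sender reputation - the fediverse equivalents would be instance blocklists and defederation.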

[–] fishos@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

What can be done? Smarter people can probably list plenty of things. But in the end, it's a constant race to outcompete. And with LLMs/AI, you can literally train a model on the very system you want it to overcome, with that express purpose, let it work out the "how" - and you're back to square one again.

I think it can best be put in song

Or put another way: how do you make a bear-proof trashcan that can defeat a bear but not the dumbest of humans?

[–] Blaze@feddit.org -1 points 1 month ago (1 children)

As you said, a platform with 44k monthly active users is probably not worth the time investment for spammers and agenda pushers.

If at some point we make it there, we'll see. It seems we are still quite far off.

[–] catloaf@lemm.ee 4 points 1 month ago (1 children)

You say that, but they're already here. I see completely automated commercial spam posts every few days. And we all know there's already political agenda-pushers. Hell, Lemmy was created by some.

[–] Blaze@feddit.org 2 points 1 month ago

I see completely automated commercial spam posts every few days.

Don't those accounts get banned quite fast?

And we all know there’s already political agenda-pushers. Hell, Lemmy was created by some.

It's community-dependent. Lemmy.ml communities are far from being the most popular on Lemmy: https://lemmyverse.net/communities?order=active_month