this post was submitted on 15 Nov 2023
-53 points (15.6% liked)

Fediverse


Even though millions of people left Twitter in 2023 – and millions more are ready to move as soon as there's a viable alternative – the fediverse isn't growing. One reason why: today's fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTQIA2S+ people, Muslims, disabled people and other marginalized communities.

[–] TheBeege@lemmy.world 8 points 1 year ago (2 children)

Maybe I'm part of the problem, and if so, please educate me, but I'm not understanding why blocking is ineffective...?

And block lists seem like an effective method to me.

The security improvements described seem reasonable, so it would be nice to get those merged.

I understand that curation and block lists require effort, but that's the nature of an open platform. If you don't want an open platform, that's cool, too. Just create an instance that's defederated by default and whitelist, then create a sectioned-off Fediverse of instances that align with your moderation principles.
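For the "defederated by default" setup described above, a sketch of what the server-side configuration might look like. This assumes Lemmy's `lemmy.hjson` format, where older versions accepted a federation allowlist (newer versions expose the equivalent setting through the admin UI); the instance names are made up for illustration.

```hjson
federation: {
  enabled: true
  # Allowlist mode: only these instances may federate.
  # Everything not listed here is rejected by default.
  allowed_instances: ["trusted-a.example", "trusted-b.example"]
}
```

With `allowed_instances` set, there is no need to maintain a blocklist at all: unknown instances never federate in the first place.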

I feel like I've gotta be missing something here. These solutions seem painfully obvious, but that usually means I'm missing some key caveat. Can someone fill me in?

[–] MHLoppy@fedia.io 4 points 1 year ago* (last edited 1 year ago) (1 children)

I’m not understanding why blocking is ineffective…?

As I understand it, because it requires harm to be experienced before the negating action is taken.

A parallel might be having malware infect a system before it can be identified and removed (harm experienced -> future harm negated), vs proactively preventing malware from infecting the system in the first place (no harm experienced before negation).
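The reactive-vs.-proactive distinction can be sketched in a few lines of Python. This is only an illustration of the two policies being compared, not any real fediverse software's logic; the instance names are invented.

```python
def blocklist_allows(instance: str, blocklist: set[str]) -> bool:
    """Default-allow: everything federates unless explicitly blocked."""
    return instance not in blocklist


def allowlist_allows(instance: str, allowlist: set[str]) -> bool:
    """Default-deny: nothing federates unless explicitly approved."""
    return instance in allowlist


blocklist = {"known-bad.example"}
allowlist = {"trusted.example"}

# A brand-new harmful instance that nobody has reported yet:
newcomer = "unknown-bad.example"

print(blocklist_allows(newcomer, blocklist))  # True  -> harm occurs before anyone can react
print(allowlist_allows(newcomer, allowlist))  # False -> blocked before any harm occurs
```

Under the blocklist model the newcomer federates freely until someone experiences harm and adds it to the list; under the allowlist model it is excluded by default.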

[–] Haui@discuss.tchncs.de 2 points 1 year ago (1 children)

Which is exactly how the real world works. Harm has to be identified to suggest solutions. Otherwise you're becoming the helicopter parent who denies their kid every opportunity to learn, causing allergies and other bad outcomes. Translated back to the fediverse: it is great the way it is and improvements are always encouraged. We have much bigger and more pressing issues. This is not it.

[–] MHLoppy@fedia.io 5 points 1 year ago* (last edited 1 year ago) (1 children)

Which is exactly how the real world works. Harm has to be identified to suggest solutions.

According to the submission, some harms have been identified, and some solutions have been suggested [that could prevent the same and similar harms from occurring to new and existing users] (but mostly it sounds like a "more work needs to be done" thing).

I imagine your perspective on the issues being discussed is different from that of the author. The helicopter parent analogy makes sense in a low-danger environment; I think what the author has suggested is that some people don't feel like it's a low-danger environment for them to be in (though I of course -- not being the author or one such person -- may be mistaken).

Edit: [clarified] because I realised it might seem contradictory if read literally.

[–] TheBeege@lemmy.world 1 points 1 year ago

This makes sense, especially considering the features the author cited. The "by design" parts may just be for clickbait purposes.

At some level you're not missing anything: there are obvious solutions, and they're largely ignored. Blocking is effective, and it's a key part of why some instances actually do provide good experiences; an allow-list approach works well, too. But those aren't the default, so new instances don't start out blocking anybody. And most instances only block the worst of the worst; there's a lot of material coming from large open-registration instances like .social and .world that relatively few instances block or even limit.