Lemmy

12572 readers

Everything about Lemmy; bugs, gripes, praises, and advocacy.

For discussion about the lemmy.ml instance, go to !meta@lemmy.ml.

founded 4 years ago
76

Post made on lemmy.ml can't be seen on lemmy.world

77

I've always wondered: should I be adding relevant hashtags to Lemmy posts, related to the subject of the post, to make the content more findable on other ActivityPub services?

78

A new post with 1 deleted comment shows as "comment symbol 0 (-1 New)"

... which looks goofy.

But it doesn't happen in this /c; maybe there's some kind of /c setting that controls whether quantities of new comments are shown?

Or maybe I made and deleted the comment too soon after I created this post?

79
15
submitted 8 months ago* (last edited 8 months ago) by ff0000@lemmy.ml to c/lemmy@lemmy.ml

UPDATE: running pnpm run translations:generate (and also the other translations tasks, just to be sure) fixed the issue.


When trying to set up a local Lemmy instance for development, the lemmy-ui repository throws errors when starting the dev server.

I followed the guide: https://join-lemmy.org/docs/contributors/02-local-development.html

When running pnpm dev to start the dev server, it presents me with two errors:


ERROR in ./src/shared/services/I18NextService.ts 14:0-40
Module not found: Error: Can't resolve '../translations/en' in '/home/ff0000/workspace/lemmy/lemmy-ui/src/shared/services'
resolve '../translations/en' in '/home/ff0000/workspace/lemmy/lemmy-ui/src/shared/services'
  using description file: /home/ff0000/workspace/lemmy/lemmy-ui/package.json (relative path: ./src/shared/services)
    using description file: /home/ff0000/workspace/lemmy/lemmy-ui/package.json (relative path: ./src/shared/translations/en)
      no extension
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en doesn't exist
      .js
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en.js doesn't exist
      .jsx
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en.jsx doesn't exist
      .ts
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en.ts doesn't exist
      .tsx
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en.tsx doesn't exist
      as directory
        /home/ff0000/workspace/lemmy/lemmy-ui/src/shared/translations/en doesn't exist
 @ ./src/shared/dynamic-imports.ts 6:0-69 48:17-41
 @ ./src/server/index.tsx 16:0-65 47:2-22

ERROR in ./src/shared/services/I18NextService.ts 159:43-129
Module not found: Error: Can't resolve '../translations' in '/home/ff0000/workspace/lemmy/lemmy-ui/src/shared/services'
 @ ./src/shared/dynamic-imports.ts 6:0-69 48:17-41
 @ ./src/server/index.tsx 16:0-65 47:2-22

webpack 5.91.0 compiled with 2 errors in 14393 ms

I can see that neither these files nor the folder exist where I18NextService is trying to locate them. But I also see a lemmy-translations folder in the root.

I am able to get it sort of working by updating the paths in I18NextService, but I guess that is not the preferred approach.
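In short, the sequence from the UPDATE above, run from the lemmy-ui checkout. This is a sketch assuming the pnpm scripts defined in the repo's package.json; only translations:generate and dev appear in this post, the install step is standard setup:

```shell
# Install dependencies, then generate the translation modules that
# I18NextService.ts imports (this should create the missing
# src/shared/translations files), then start the dev server.
pnpm install
pnpm run translations:generate
pnpm dev
```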

80

Hasn't worked for me in 3 days or so. Not in my browser or in the app. Is the instance having issues, or did it shut down or something?

81
82
102
submitted 8 months ago* (last edited 8 months ago) by zabadoh@lemmy.ml to c/lemmy@lemmy.ml

There have been a number of comment spam attacks in various posts in a couple of /c's that I follow, by a user/individual who uses account names like Thulean*

For example: ThuleanSneed@lemmy.tf in !coffee@lemmy.world

and ThuleanPerspective2@eviltoast.org in !anime@ani.social

edit: Also ThuleanSneed@startrek.website in !startrek@startrek.website

The posts have been removed or deleted by the respective /c's mods, and the offending accounts banned, but you can see the traces of them in those /c's modlogs.

The comments consist of an all-caps string of words with profanities, and Simpsons memes.

An attack on a post may consist of several repeated or similar looking comments.

This looks like a bored teenager prank, but it may also be an organization testing Lemmy's systemic and collective defenses and ability to respond against spam and bot posts.

83

A few days ago, there was a spammer going around instances spamming randomly generated text along with a series of images of the spine-chilling, bone-tingling Simpsons character by the name of Sneed, some of them including George Floyd photoshopped between his ass cheeks. This spam reached many comment sections, typically those of recently created posts.

The spammer managed to create thousands of comments within a few minutes, which definitely shouldn't be possible, especially on such a new account. I have noticed from the Lemmy source code that it does have rate limits, but only on IPs, not on accounts. It's possible that the spammer used proxies, perhaps scraped from a public list, to bypass the simple rate limits already in place.

The spammer seemed to have only a few accounts; therefore, adding a rate limit on accounts could help slow down such bots and minimize the damage they cause. Other options I could think of are a more advanced form of spam detection and, albeit a bit scummy, Reddit-style shadowbans, or maybe a combination of a few such methods.

Implementing such measures would help Lemmy become a more usable platform and less of an easy target for trolls and 'channers with nothing better to do.
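The per-account rate limit proposed above could be sketched as a token bucket keyed on account ID instead of IP. This is an illustrative Python sketch only (Lemmy itself is written in Rust, and the class and parameter names here are invented):

```python
import time

class AccountRateLimiter:
    """Token bucket per account: each account may perform `capacity`
    actions per `window` seconds before further actions are rejected."""

    def __init__(self, capacity=10, window=60.0):
        self.capacity = capacity
        self.window = window
        self.buckets = {}  # account_id -> (tokens_left, last_seen_timestamp)

    def allow(self, account_id, now=None):
        """Return True if the action is allowed, consuming one token."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(account_id, (self.capacity, now))
        # Refill tokens proportionally to the time elapsed since last seen.
        tokens = min(self.capacity, tokens + (now - last) * self.capacity / self.window)
        if tokens < 1:
            self.buckets[account_id] = (tokens, now)
            return False
        self.buckets[account_id] = (tokens - 1, now)
        return True
```

Unlike an IP-based limit, rotating proxies wouldn't help a spammer here; flooding would require registering many accounts, which is itself easier to detect and block.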

84
15
submitted 8 months ago* (last edited 8 months ago) by Nom@lemm.ee to c/lemmy@lemmy.ml

Content is hidden when languages haven't been selected in the settings.

Making all languages opted in by default would solve this initial struggle; opting out should be the manual step. Putting checkboxes next to the languages for selection would make things easier too.

This was an issue long before the guy in the linked post reported it. ~~I've had to face this same issue on other instances as well, so it's obviously a Lemmy issue and not instance-specific.~~

Please understand that these sorts of small but very visible troubles are what keep people away from this platform. You sign up, go to any community, and see no posts, or only those not tagged with a language (you could even miss those if you don't choose "Undetermined"). Any new user would be confused: "Are there no users at all?" "Is this community banned?" "Is this instance defederated?" Each of these doubts would push new users away.

Edit: This PSA was over 9 months ago now.

Edit 2: It seems to be instance-specific; lemmy.world and lemm.ee both have this issue as far as I've seen.

Edit 3: Thank you everyone for your help.

85

The block feature should be renamed to “mute”, which is what it seems to actually be. Currently I can apply this to a user and they can still see all my posts. So it’s a good mute feature but a terrible block feature.

86

After I've saved a post to a /c hosted by an instance other than the one I'm logged into, I can open that post for editing, but I'm unable to save my edits to that post.

For example: I made a post to !ukraine@sopuli.xyz, while logged in elsewhere. Something or other in the webpage link is forcing a download, so I tried to edit the URL in the post, but I can't save it.

This also happened to a post I made to !coffee@lemmy.world where I was trying to edit the text in the post's Body after saving the post.

I can save edits to my posts to /c's on my native instance just fine.

87

Was just browsing my favorite communities, commenting on posts I found really interesting, and engaging with other users who wanted to have conversations.

… and not a single paid ad in my feed. I effing love this platform.

88

E.g.:
https://sh.itjust.works/u/testaccount789
https://lemmy.ml/u/testaccount789@sh.itjust.works

I know in the past I've successfully updated my display name and had it show on other instances, so perhaps this problem is new to 0.19.x, but I am not at all sure about that.

89

Is there a settings page on the Lemmy instance where I can download all my data?

90

I can set it up. This idea popped into my head after I joined some good FOSS Telegram channels. I personally think Matrix is not there yet, and Telegram does support third-party apps. There are some privacy concerns, but I don't think they matter for our use case, and it's all speculation anyway; a Matrix instance host could be violating privacy too, as most groups are not encrypted. I also think Telegram is just a better, more polished experience, but I would advise you to download a third-party app from F-Droid instead of using the official client. So anyways, lemmy know.

PS: I am in no way a Telegram shill; I just think it is a good platform for this use case, as Mercurygram, Fossify, Telegram FOSS, etc. host theirs there, and it is a really good overall experience. Again, not a SHILL.

EDIT: This was just an idea I had; if you disagree, please comment so and I will not move forward. Ciao.

EDIT: This was an idea for the community, but if most of the community doesn't want it I will of course not go forward. Do write your thoughts in the comments, as I can't tell what you think from the upvotes or downvotes alone (some people vote accidentally by swiping, or because they didn't like something specific in the post that could be changed if you commented), and let me know if there is anything I can do to make the experience better.

EDIT: It's all about options; I am not proposing to move the community from Matrix, just starting another one.

91

This tab works like Subscribed, only in reverse: it only shows stuff from comms you're not subscribed to. Perfect for finding new content to subscribe to without needing to sift through All.

92

Kind of like how you can do [in-line links](to link people to a website), allow the user to use the same syntax to create contextual information that appears when the [user mouses over](similar to alt-text on an image). This way, users who know the context won't have to slog through a tedious wall of text, while those who don't can optionally bring themselves up to speed. For clarity's sake, in-line context would be a different color from a link, and those on mobile (or desktop) could click it to expand the contained text as if it were part of the original comment.

EDIT: it might be a good idea to potentially use different syntax so that you can link websites within in-line context.
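One way a client could tell the two apart without new syntax is to inspect the target: if it parses as a URL, render a link, otherwise render hover context. A purely illustrative Python sketch (the function and names are invented here):

```python
import re

# Matches markdown-style [text](target) spans.
SPAN = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def classify_spans(text):
    """Classify each [text](target) span: 'link' if the target looks like a
    URL, else 'context' (text shown on mouse-over). Note the limitation the
    EDIT above points at: with this heuristic, a context body could not
    itself contain a link without dedicated syntax."""
    spans = []
    for match in SPAN.finditer(text):
        label, target = match.group(1), match.group(2)
        kind = "link" if re.match(r"https?://", target) else "context"
        spans.append((kind, label, target))
    return spans
```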

93

This article describes how Lemmy instance admins can purge images from pict-rs.

Nightmare on Lemmy Street (A Fediverse GDPR Horror Story)

This is (also) a horror story about accidentally uploading very sensitive data to Lemmy, and the (surprisingly) difficult task of deleting it.

94

cross-posted from: https://discuss.online/post/5772572

The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include:

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
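The "gradual privilege escalation" bullet above could be sketched as a threshold table mapping activity to privileges. This is illustrative Python only; the thresholds and privilege names are invented for this sketch, not taken from Discourse or any Lemmy proposal:

```python
# Each entry: (min_days_active, min_posts_read, privileges granted at that level).
# New users start sandboxed with the bare minimum.
TRUST_LEVELS = [
    (0,  0,   {"comment"}),
    (1,  30,  {"comment", "post_images"}),
    (15, 200, {"comment", "post_images", "edit_wiki"}),
    (50, 500, {"comment", "post_images", "edit_wiki", "flag_for_removal"}),
]

def privileges(days_active, posts_read):
    """Return the privilege set of the highest trust level the user meets."""
    granted = {"comment"}
    for min_days, min_read, privs in TRUST_LEVELS:
        if days_active >= min_days and posts_read >= min_read:
            granted = privs
    return granted
```

Federated reputation would then amount to carrying the inputs (or the resulting level) along when a user interacts with another instance.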

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.


95

Heads up for anyone running mlmym on their instance: your site is probably being used for Google SEO manipulation (https://github.com/rystaf/mlmym/issues/101).

If you're running an old version, update to v0.0.40!

96

cross-posted from: https://sh.itjust.works/post/8431810

Not sure how feasible this would be, but my favourite Reddit app (Joey) has this feature.

It allows a user to "follow" an arbitrary post or comment and receive notifications when new comments or replies are posted. E.g. "Post 'abc' has 9 new comments" or "Comment 'xyz' has 3 new replies".

Useful for keeping up with active threads which have gotten buried in the feed due to age, as well as cases of "I have that question too, and would love to know the answer if someone responds".

Anyway, thanks Bazsalanszky for an already great app! It is my current most-used Lemmy app.

I originally submitted this as an app feature request, but realized that it could also be implemented into Lemmy itself. Would anyone else find this feature useful?

97

Or I just pissed off someone very dedicated.

98
42
submitted 9 months ago* (last edited 9 months ago) by silas@programming.dev to c/lemmy@lemmy.ml

I see talk here and there about how any company or individual can easily use anything we post on Lemmy however they want. This could include AI training, behavior analysis, or user profiling. With the recent news of Reddit data being sold and licensed for AI training, I thought this would be a great time to preemptively discuss how we feel about this topic and brainstorm ways to discourage unwanted use of the content we post.

I’ve seen some users add a license to the end of each of their comments. One idea might be this: add a feature to Lemmy where each user can choose a content license that applies to everything they post. For example, one user might choose to reserve no rights to their content (like CC0) because they don’t care how their data is used. Another user might not want companies profiting off their posts, so they’d choose a more restrictive license.

I’m eager to hear everyone’s thoughts on the whole topic, so to kick things off:

  1. Do you care how your public data and posted content is used? Why or why not?
  2. What do you think of choosing a content license for your Lemmy account? Does this contradict the FOSS model?
  3. Should Lemmy have features to protect user data/content in this way, or should that be left up to the user to figure out on their own?

Data is becoming an increasingly valuable commodity in the digital world. Hopefully these big-picture conversations can help us see what we value as a community and be more prepared for the future.

99
100
159
submitted 9 months ago* (last edited 9 months ago) by sunaurus@lemm.ee to c/lemmy@lemmy.ml

The RFC PR is here: https://github.com/LemmyNet/rfcs/pull/6

Reposting RFC contents below:


Summary

Rather than combining all reports into a single report inbox, we should allow users to select whether they are reporting to mods or admins, and we should split reports into different inboxes based on that selection.

Motivation

The current approach has some shortcomings:

  • Users are not currently able to bypass mods and report directly to admins - this may allow mods to conceal instance rule breaking in specific communities
  • Admins are not aware of community rules, so they may wish to take no action for most community rule breaking reports. However, if an admin resolves such a report, the relevant community mods most likely never see it.
  • Different instances may have different rules, but somebody resolving a report on one instance will resolve it for other instances as well, thus potentially resulting in missed reports.
  • Mods might take local action on a report and mark it as resolved even in cases where a user should be banned from the entire instance. In this case, admins are very unlikely to see the report.

Guide-level explanation

When creating reports, users will be able to select if it's a mod report, or an admin report (or both)

[screenshot: report dialog with mod/admin audience selection]

Note: labels on the screenshot are illustrative; actual labels can be more user-friendly. Maybe something like:

  • Breaks community rules (report sent to moderators)
  • Breaks instance rules (report sent to admins)

Instead of the current single report inbox, there will be three different kinds of inboxes

  • Admin reports - show all reports sent to admins (only visible to admins)
  • Mod reports - show all reports sent to mods for any communities the user moderates (visible to admins in case they are explicit mods in any communities)
    • This is equivalent to the report view that mods currently have in Lemmy already
  • All reports - Shows a view of all (admin and mod) reports, only visible to admins
    • This is akin to the current 0.19.3 admin report view, and would allow admins to still keep an eye on mod actions on their instance if they wish

The UI wouldn't need to change for mods, but for admins, there would be a new selection at the top of the reports page (the "mod reports" tab would only be visible if the admin is also a mod in any community). [screenshot: reports page with the new tabs]

Resolving reports should be more granular

  • Reports in the "admin reports" tab can only be manually resolved for admins of the local instance
    • To reduce overhead, banning the reported user on the user's home instance + removing reported content should automatically resolve reports for remote admins as well.
  • Reports in the "mod reports" tab should be manually resolved by relevant mods (including admins, if they are explicit mods in the relevant community).
    • To reduce overhead, admins banning the reported user on the community instance OR the user's home instance + removing reported content should automatically resolve reports for mods as well
  • Admins could still resolve reports in the "all reports" tab
    • If it's not an admin report, and not a mod report from a community the admin explicitly moderates, then there should be an additional warning/confirmation when resolving a report here. This is to prevent cases of admins accidentally preventing mods from moderating according to their own community rules.

To further clarify automatic resolution of reports: in any case where there is no further action possible, the report should be automatically resolved.

Mods should be able to escalate reports to admins

This would generate a corresponding report in the admin inbox.
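The audience split and escalation described above could be sketched as a report object carrying flags for its intended inboxes, with each tab as a filter over them. Illustrative Python only; the actual Lemmy implementation and API names would differ:

```python
from dataclasses import dataclass

@dataclass
class Report:
    reported_user: str
    community: str
    to_mods: bool = False    # "breaks community rules"
    to_admins: bool = False  # "breaks instance rules"
    resolved: bool = False

def admin_inbox(reports):
    """The 'Admin reports' tab: unresolved reports sent to admins."""
    return [r for r in reports if r.to_admins and not r.resolved]

def mod_inbox(reports, moderated):
    """The 'Mod reports' tab: unresolved mod reports for communities the viewer moderates."""
    return [r for r in reports
            if r.to_mods and r.community in moderated and not r.resolved]

def all_reports(reports):
    """The admin-only 'All reports' tab: every unresolved report, mod and admin alike."""
    return [r for r in reports if not r.resolved]

def escalate(report):
    """Mod escalation: also deliver the report to the admin inbox."""
    report.to_admins = True
```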

Reference-level explanation

  • In the UI, changes are needed for both reporting as well as the reports inbox views
  • In the database and API, we should split reports by intended audience
  • Federation needs to be changed as well in order to allow distinguishing the report target audience

Drawbacks

It might make reporting slightly more confusing for end users - the mod/admin distinction might not be fully clear to all.

Rationale and alternatives

Alternatively, we could make reporting even more granular. It would be possible, for example, to allow users to select only a specific instance's admins as the intended report audience. However, I think this has several downsides:

  • Makes the report UI even more confusing
  • Potentially takes away valuable information from other admins (imagine a user only reports CSAM to their own instance's admins, while leaving the offending post author's home admins in the dark)

Prior art

Most other social networks allow users to select whether they are reporting a violation of community rules or of site rules as a whole.

Unresolved questions

Does ActivityPub properly support splitting up reports like this?

Future possibilities

In the future, it might be a nice addition to have some automation that always escalates reports to admins based on report keywords (for example "CSAM" or "spam"), even if they're submitted as mod reports.
