this post was submitted on 13 Feb 2025
27 points (90.9% liked)

Asklemmy


A loosely moderated place to ask open-ended questions





Am I oversimplifying stuff too much?

top 13 comments
[–] lily33@lemm.ee 26 points 1 week ago* (last edited 1 week ago) (1 children)

I don't understand how this would help with deepfakes and fake news.

Like, if this post were signed, you would know for sure it was indeed posted by @lily33@lemm.ee, and not by a malicious lemm.ee admin or hacker*. But the signature can't really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat - or a signed deepfake video of NASA's administrator admitting as much.

Maybe I'm missing your point?

(*) unless the hacker hacked me directly
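To make that concrete, here is a minimal sketch (Python, assuming the third-party `cryptography` package; the account name is just reused from the thread) of what a signature does and doesn't prove: verification tells you which key produced the post, and nothing about whether the claim is true.

```python
# Minimal sketch: signing proves who posted, not whether the claim is true.
# Assumes the third-party `cryptography` package is installed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the poster
public_key = private_key.public_key()        # published with the account

post = b"lily33@lemm.ee: The Earth is flat."
signature = private_key.sign(post)

try:
    public_key.verify(signature, post)
    print("Signature valid: this key really produced this post.")
    print("That says nothing about whether the post is true.")
except InvalidSignature:
    print("Signature invalid: the post was altered or signed by a different key.")
```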

[–] qprimed@lemmy.ml 5 points 1 week ago* (last edited 1 week ago) (1 children)

But the signature can't really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat.

important point, but in a federated or distributed system, signed posts/comments may actually be highly beneficial when tying content directly to an account for interaction purposes. I have already seen well-ish known accounts seemingly spoofed on similar-looking instance domains.

distribution of trusted public keys would be an interesting problem to address, but the ability to confirm the association of a specific account with specific content (even if the account is "anonymous" and signing is optional) may lend a layer of veracity to interactions even if the content quality itself is questionable.

edit: clarity (and potential case in point - words matter, edits matter).
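As a rough sketch of that spoofing point (a hypothetical scheme, not anything Lemmy actually implements; same `cryptography` package assumed, account names made up), a client that pins the first key it sees for an account would notice when a look-alike account signs content with a different key:

```python
# Sketch of trust-on-first-use key pinning (hypothetical, not a real Lemmy feature).
# The client remembers the key it first saw for an account; later content signed
# by a different key (e.g. a look-alike account on a similar domain) fails the check.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

pinned_keys: dict[str, Ed25519PublicKey] = {}  # account -> first key seen

def verify_post(account: str, post: bytes, signature: bytes,
                claimed_key: Ed25519PublicKey) -> bool:
    key = pinned_keys.setdefault(account, claimed_key)  # pin on first use
    try:
        key.verify(signature, post)
        return True
    except InvalidSignature:
        return False

real_key = Ed25519PrivateKey.generate()
pinned_keys["qprimed@lemmy.ml"] = real_key.public_key()  # key seen earlier

impostor = Ed25519PrivateKey.generate()                  # spoofed look-alike account
fake_post = b"totally legit opinion"
print(verify_post("qprimed@lemmy.ml", fake_post,
                  impostor.sign(fake_post), impostor.public_key()))  # False
```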

[–] lily33@lemm.ee 7 points 1 week ago* (last edited 1 week ago) (1 children)

Sure, but that has little to do with disinformation. Misleading/wrong posts don't usually spoof the origin - they post the wrong information in their own name. They might lie about the origin of their "information", sure - but that's not spoofing.

[–] AbouBenAdhem@lemmy.world 1 points 1 week ago (1 children)

Misleading/wrong posts don’t usually spoof the origin - they post the wrong information in their own name.

You could argue that that’s because there’s no widely-accepted method for verifying sources—if there were, information relayed without a verifiable source might come to be treated more skeptically.

[–] lily33@lemm.ee 4 points 1 week ago* (last edited 1 week ago)

No, that's because social media is mostly used for informal communication, not scientific discourse.

I guarantee you that I would not use Lemmy any differently if posts were authenticated with private keys than I do now, when posts are authenticated by the user's instance. And I'm sure most people are the same.

Edit: Also, people can already authenticate the source by posting a direct link to it. Signing wouldn't really add that much on top of that.

[–] jjjalljs@ttrpg.network 15 points 1 week ago

Among other problems, people knowingly spread falsehoods because they feel truthy.

The problem is people. We're all emotional, but some people are just full-on, fact-free gut feel almost all of the time.

[–] brokenlcd@feddit.it 11 points 1 week ago (1 children)

The problem is the chain of trust. What tells you that the key you have is the right one, and not a fake one interposed between you and the real one?

That has been a problem for a substantial amount of time.
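One partial answer, sketched below under the same assumptions as the earlier snippets (Python `cryptography` package plus the standard library), is the SSH/Signal approach: publish a short fingerprint of the key and compare it over a separate channel. It only shifts the problem, but it makes a swapped-in fake key detectable.

```python
# Sketch: a short fingerprint of a public key that two parties can compare
# out of band (in person, over a call) to detect a swapped-in fake key.
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate().public_key()
raw = key.public_bytes(encoding=serialization.Encoding.Raw,
                       format=serialization.PublicFormat.Raw)
fingerprint = hashlib.sha256(raw).hexdigest()[:16]  # short, human-comparable
print(f"key fingerprint: {fingerprint}")
```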

[–] AbouBenAdhem@lemmy.world 4 points 1 week ago

I could see it working if (say) someone tries to modify or fabricate video from a known news source, where you could check the key against other content from the same source.

[–] n_emoo@lemmy.ca 10 points 1 week ago

Deepfakes are about impersonating the person in the video; fake news is... well, just someone lying. Signatures are meant to verify the source of information, not its contents.

Simple example: we can be nearly 100% confident that the person posting tweets under the Trump account is (at least authorized by) Trump. That doesn't stop him from lying or uploading a deepfake video.

[–] ShellMonkey@lemmy.socdojo.com 5 points 1 week ago

I think I get what you mean, but validating the origin of a particular piece wouldn't do much to verify the content. So much of the misinfo that's put out takes some small snip of a broader story and reframes it in a way that makes the situation look different than it was.

[–] Badabinski@kbin.earth 4 points 1 week ago

This already exists in theory, although not many companies or products are implementing it: https://en.wikipedia.org/wiki/Content_Authenticity_Initiative

I think Leica cameras can sign their images, but I don't know if any other cameras support it yet.
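The actual C2PA format is more involved (signed manifests embedded in the file), but the underlying idea can be sketched as a detached signature over the image bytes, checked against the maker's published key. This is illustrative only, not the real C2PA layout, and the file contents here are made up.

```python
# Illustrative only: a detached signature over an image's bytes, verified against
# the device maker's public key. Real C2PA embeds signed manifests in the file;
# this just shows the underlying idea. The image data is a placeholder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()      # would live inside the camera
image_bytes = b"...raw image data..."          # e.g. open("photo.jpg", "rb").read()
signature = camera_key.sign(image_bytes)       # stored alongside the file

# Later, anyone with the maker's public key can check the image wasn't altered:
try:
    camera_key.public_key().verify(signature, image_bytes)
    print("Image matches what the camera signed.")
except InvalidSignature:
    print("Image was modified after signing (or signed by a different key).")
```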

[–] chicken@lemmy.dbzer0.com 3 points 1 week ago

Well, one reason is probably that signing your article content so it can be verified when it's repackaged elsewhere is kind of the opposite of what news sources are trying to do with their paywalls.

[–] Karmmah@lemmy.world 3 points 1 week ago

How would key signing prevent deep fakes?