this post was submitted on 26 Jun 2023
117 points (97.6% liked)

Asklemmy


A loosely moderated place to ask open-ended questions

Deleted (lemmy.dbzer0.com)
submitted 1 year ago* (last edited 1 year ago) by IsThisLemmyOpen@lemmy.dbzer0.com to c/asklemmy@lemmy.ml
 

Deleted

(page 2) 50 comments
[โ€“] maximus@lemmy.sdf.org 4 points 1 year ago* (last edited 1 year ago) (16 children)

LLMs, IIRC, are really bad at IQ-test type questions that require abstract reasoning, especially if they require multiple steps. So, something like

The box is yellow and red.
If the box is yellow, it is good.
If the box is blue, it is unhappy.
If the box is good and happy, the box is awesome.
If the box is red, it is happy.
Is the box awesome?

is what I'd use.

Um wtf, I'm starting to doubt if I'm a human. ๐Ÿค”
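The puzzle above can be checked mechanically by forward-chaining over the rules until nothing new can be derived. A minimal sketch in Python; the rule encoding and names are my own, not from the thread:

```python
# Facts given in the puzzle: the box is yellow and red.
facts = {"yellow", "red"}

# Each rule is (set of premises, conclusion).
rules = [
    ({"yellow"}, "good"),
    ({"blue"}, "unhappy"),
    ({"good", "happy"}, "awesome"),
    ({"red"}, "happy"),
]

# Apply rules repeatedly until a fixed point is reached.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("awesome" in facts)  # prints True
```

Yellow makes the box good, red makes it happy, and good plus happy makes it awesome, so the answer is yes. The multi-step chain (and the blue rule as a distractor) is exactly what the comment says trips up LLMs.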

[โ€“] datendefekt@lemmy.ml 4 points 1 year ago

Wait a minute - GPT-4 - is that you asking this question?

[โ€“] cccc@aussie.zone 4 points 1 year ago (2 children)

Show a picture, video, audio clip or text designed to elicit an emotion. Ask how the user feels.

[โ€“] risottinopazzesco@feddit.it 2 points 1 year ago (2 children)

How would you tell the human answers from the bot ones?

[โ€“] hschen@sopuli.xyz 3 points 1 year ago

Say to it

This statement is false

[โ€“] jerkface@lemmy.ca 2 points 1 year ago (1 children)

It's not so important to tell the difference between a human and a bot as it is to tell the difference between a human and ten thousand bots. So add a very small cost to passing the test that is trivial to a human but would make mass abuse impractical. Like a million dollars. And then when a bot or two does get through anyway, who cares, you got a million dollars.

[โ€“] CanadaPlus@lemmy.sdf.org 2 points 1 year ago

Any bot? That's just impossible. We're going to have to tie identity back to meatspace somehow eventually.

An existing bot? I don't think I can improve on existing captchas, really. I imagine an LLM will eventually tip their hand, too, like giving an "as an AI" answer or just knowing way too much stuff.

[โ€“] Lemvi@lemmy.sdf.org 2 points 1 year ago

Some kind of biometric scan.

[โ€“] troyunrau@lemmy.ca 2 points 1 year ago (6 children)

I'd ask for their cell number and send a verification code. That'll stop 95% of all duplicate accounts. Keep the hash of their phone number in a hash list, rather than the number itself. Don't allow signups from outside whatever region you can SMS for free.

I realize this would mean relying on an external protocol (SMS), but it might just keep the crap out. Would help for ban evasion too, at least within an instance.
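The hashed-phone-number list the comment describes could look something like this sketch. The salt, normalization scheme, and function names are my own assumptions, not anything from the thread; a real deployment would also want rate limiting and a keyed hash rather than a bare digest:

```python
import hashlib

# Instance-wide secret so the hash list can't be trivially brute-forced
# against the small space of phone numbers (hypothetical value).
SALT = b"instance-wide-secret"

def phone_hash(number: str) -> str:
    """Normalize to digits only, then hash; the raw number is never stored."""
    normalized = "".join(ch for ch in number if ch.isdigit())
    return hashlib.sha256(SALT + normalized.encode()).hexdigest()

seen = set()  # the hash list: stands in for a database table

def register(number: str) -> bool:
    """Return True if this number hasn't been used before, else False."""
    h = phone_hash(number)
    if h in seen:
        return False  # duplicate signup attempt
    seen.add(h)
    return True
```

Because the number is normalized before hashing, `"+1 555-0100"` and `"+1 (555) 0100"` map to the same entry, so simple reformatting doesn't evade the check.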

[โ€“] Hexarei@programming.dev 3 points 1 year ago (4 children)

Until someone signs up with a batch of Google Voice numbers and gets them all banned; then, a few months later, someone else is assigned one of those banned numbers and can't sign up.

Only bringing it up because a similar thing happened to me: I got a Google Voice number and found out it was already tied to a spam account on a site I wanted to use. Their support team understood, and since it had been about six months they undid it, but it was still a bit of a pain.
