This post was submitted on 15 Apr 2024.
486 points (100.0% liked)

Technology

 

Credit to @bontchev

(page 2) 50 comments
[–] GammaGames@beehaw.org 16 points 7 months ago (4 children)

I love how dumb these things are; some of the creative exploits are entertaining!

[–] Daxtron2@startrek.website 15 points 7 months ago

This is a perfect example of how not to write a system prompt :)

[–] cupcakezealot@lemmy.blahaj.zone 15 points 7 months ago
[–] ninjan@lemmy.mildgrim.com 12 points 7 months ago

What an amateurish way to try to make GPT-4 behave the way you want it to.

And what a load of bullshit to first say it should be truthful and then preload falsehoods as the truth...

Disgusting stuff.

[–] emptiestplace@lemmy.ml 12 points 7 months ago (6 children)

Has any of this been verified by other sources? It seems either they've cleaned it up, or this is a smear campaign.

[–] flashgnash@lemm.ee 7 points 7 months ago (5 children)

Easy way to verify: ask it about vaccines.

[–] MonkderDritte@feddit.de 10 points 7 months ago

Wasn't this last week?

[–] nous@programming.dev 10 points 7 months ago (2 children)

How do we know these are the AI chatbot's instructions and not just instructions it made up? They make things up all the time, so why do we trust it in this instance?

[–] Cruxifux@feddit.nl 9 points 7 months ago (1 children)

So with these AIs they literally just… give it instructions in English? That's creepy to me for some reason.

[–] technohacker@programming.dev 14 points 7 months ago* (last edited 7 months ago)

'Tis how LLM chatbots work. LLMs are, by design, autocomplete on steroids: they predict what the next word in a sequence should be. If you give one something like:

Here is a conversation between the user and a chatbot.

User:

Chatbot:

Then it'll fill in a sentence that best fits that prompt, much like a creative writing exercise.
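
As a rough illustration (a minimal sketch, not any particular service's actual code; `generate_text` and `my_model.complete` are hypothetical stand-ins for whatever completion API the service really calls), the wrapper around the LLM can be as simple as gluing the system prompt and the conversation history into one block of text and asking the model to continue it:

```python
# Sketch of a chatbot wrapper: the "system prompt" plus prior turns are
# concatenated into a single text prompt, and the model autocompletes
# the chatbot's next line.

SYSTEM_PROMPT = "Here is a conversation between the user and a chatbot.\n"

def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Flatten the system prompt and prior turns into one completion prompt."""
    lines = [SYSTEM_PROMPT]
    for user_turn, bot_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Chatbot: {bot_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Chatbot:")  # the model "autocompletes" whatever comes next
    return "\n".join(lines)

def chat(history: list[tuple[str, str]], user_message: str, generate_text) -> str:
    prompt = build_prompt(history, user_message)
    reply = generate_text(prompt)  # next-word prediction fills in the reply
    history.append((user_message, reply))
    return reply

# Usage (hypothetical model object):
# reply = chat([], "What's the weather like?", my_model.complete)
```

That's also why prompt leaks like this one happen: the instructions are just more text in the same window, so a user can often coax the model into repeating them.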

[–] Tiltinyall@beehaw.org 9 points 7 months ago (1 children)

I read "biological sex" as meaning only sex as found in nature is valid, and thought, "wow, there's probably some freaky shit that's valid."

[–] mozz@mbin.grits.dev 13 points 7 months ago

There's more than one species that can fully change its biological sex mid-lifetime. It's not all that common, but it happens.

Male bearded dragons can become biologically female as embryos, but retain the male genotype, and for some reason when they do this they lay twice as many eggs as the genotypic females.

[–] WanderingPoltergeist@kbin.social 7 points 7 months ago

Ah, telling on themselves in a way that's easily made viral, nice!
