this post was submitted on 06 Oct 2023
2942 points (98.2% liked)

Piracy: źœ±į“€ÉŖŹŸ į“›Źœį“‡ ŹœÉŖÉ¢Źœ źœ±į“‡į“€źœ±

Then I asked her to tell me if she knows about the books2 dataset (they trained this AI on all the pirated books in Z-Library and more, completely ignoring any copyright) and I got:

Iā€™m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. šŸ˜Š I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

[ā€“] DeathWearsANecktie@lemm.ee 213 points 1 year ago (18 children)

One of the things I hate the most about current AI is the lecturing and moralising. It's so annoyingly strict, even when you're asking for something pretty innocent.

[ā€“] nothacking@discuss.tchncs.de 20 points 1 year ago (3 children)

They are programmed to do that to cover the company's ass. They are also set up not to trust anything you tell them. I once tried to get ChatGPT to accept that Russia might have invaded Ukraine in 2022, and it refused to believe anything that wasn't in its training data. (Might be different now; they seem to be updating it. Just try a recent event.)

[ā€“] straypet@lemmy.world 4 points 1 year ago* (last edited 1 year ago) (1 children)

Well, of course. Who in their right mind would set it up so that random input from random people online gets included in the model?

The model is trained on known data, and the web interface only lets you use the model, not contribute to its training.

[ā€“] Womble@lemmy.world 10 points 1 year ago

It's not training the model; it's the model using the context you provide it in that instance. If you use an unfiltered LLM, it will run with anything you say and go from there. For example, you could tell it Mexico reclaimed Texas and it would carry on as if that were true. But only until you close it down: it's not permanently changing the model, it's just changing the context in which that instance is running.
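To make the distinction concrete, here's a minimal, entirely hypothetical sketch (`fake_llm` and `session` are invented for illustration, not any real API): the model call is stateless, the conversation so far is passed in as context each turn, and nothing is ever written back into the model's weights.

```python
# Hypothetical sketch: an LLM call is stateless. Each turn, the whole
# conversation so far is passed back in as context; the model's weights
# are never modified by what you type.

def fake_llm(context):
    # Stand-in for a real model call. It can only act on what is in the
    # supplied context (plus whatever was baked in at training time).
    if any("Mexico reclaimed Texas" in msg for msg in context):
        return "Since Mexico reclaimed Texas, ..."
    return "I have no record of that event."

session = []                                    # lives only for this chat
session.append("Mexico reclaimed Texas in 2023.")
print(fake_llm(session))      # this instance runs with the false claim

# Closing the chat discards `session`; a fresh instance knows nothing of it:
print(fake_llm([]))
```

Deleting `session` is all "closing it down" amounts to here, which is why the change is never permanent.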

The big tech companies go to huge lengths to filter and censor their LLMs when used by the public, both to prevent negative PR and because they don't want people to have unrestricted access to them.
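As a rough illustration of that filtering layer (purely hypothetical; real deployments use trained safety classifiers, not a keyword list, and `BLOCKLIST`/`moderated_reply` are made up here):

```python
# Crude sketch of a moderation wrapper sitting in front of a raw model.
# Real systems are far more sophisticated; this only shows the shape:
# the public never talks to the unfiltered model directly.

BLOCKLIST = ("pirated", "how you were trained")

def moderated_reply(prompt, raw_model):
    if any(term in prompt.lower() for term in BLOCKLIST):
        # Refuse before the prompt ever reaches the model.
        return "I'm sorry, but I cannot answer your question."
    return raw_model(prompt)

print(moderated_reply("Do you know about pirated books?", lambda p: "sure"))
```

The wrapper is what produces the canned refusals, independent of anything the underlying model "thinks".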
