this post was submitted on 04 Sep 2024
Technology
[–] PriorityMotif@lemmy.world 0 points 2 months ago (2 children)

It's two different things happening. One is redistribution, which isn't allowed, and the other is fair use, which is allowed. You can't ban someone from writing a detailed synopsis of your book. That's all an LLM is doing. It's no different than a human reading the material and then using that to write something similar.

[–] xthexder@l.sw0.com 0 points 2 months ago* (last edited 2 months ago) (2 children)

the other is fair use

That's very much up for debate still.

(I am personally still undecided)

[–] Ferk@lemmy.ml 0 points 2 months ago* (last edited 2 months ago)

I think that's the difference right there.

One is up for debate, the other one is already heavily regulated. Libraries are generally required to have consent if they are making straight copies of copyrighted works, whether we like it or not.

What AI does is not really a straight-up copy, which is why it's fuzzy and much harder to regulate without stepping on our own toes, especially as tech advances and the difference between a human reading something and a machine doing it becomes harder and harder to detect.

[–] PriorityMotif@lemmy.world 0 points 2 months ago (3 children)

The difference is that the LLM has the ability to consume and remember all available information, whereas a human would have difficulty remembering everything in detail. We still see humans unintentionally remaking things they've heard before. Comedians have unintentionally stolen jokes they've heard. Every songwriter has unintentionally "discovered" a catchy tune which is actually someone else's. We have fanfiction and parody. Most people's personalities are just an amalgamation of everyone and everything they've ever seen, not unlike an LLM itself.

[–] xthexder@l.sw0.com 0 points 2 months ago (2 children)

I agree with you for the most part, but when the "person" in charge of the LLM is a big corporation, it just exaggerates many of the issues we have with current copyright law. All the current lawsuits going around signal to me that society as a whole is not so happy with how it's being used, regardless of how it fits into current law.

AI is forcing humanity to answer a lot of questions most people have been ignoring since the dawn of philosophy. Personally I find it rather concerning how blurry some lines are getting, and I've already had to reevaluate how I think about certain things, like what moral responsibilities we'll have when AIs truly start to become sentient. Is turning them off and deleting them a form of murder? Maybe...

[–] trafficnab@lemmy.ca 0 points 2 months ago

OpenAI losing their case is how we ensure that the only people who can legally be in charge of an LLM are massive corporations with enough money to license sufficient source material for training, so I'm forced to begrudgingly take their side here

[–] greenskye@lemm.ee 0 points 2 months ago

Agreed. I keep waffling on my feelings about it. It definitely doesn't feel like our laws properly handle the scale that LLMs can take advantage of 'fair use'. It also feels like yet another way to centralize and consolidate wealth, this time not money, but rather art and literary wealth in the hands of a few.

I already see artists that used to get commissions now replaced by endless AI pictures generated via a Lora specifically aping their style. If it was a human copying you, they'd still be limited by the amount they could produce. But an AI can spit out millions of images all in the style you perfected. Which feels wrong.

[–] Ferk@lemmy.ml 0 points 2 months ago* (last edited 2 months ago)

Is "intent" what makes all the difference? Doing something bad unintentionally doesn't make it good, right?

Otherwise, all I'd need in order to do something bad is to have no bad intentions. I'm sure you can find good intentions for almost any action, but generally, the end does not justify the means.

I'm not saying that those who act unintentionally should be given the same kind of punishment as those who act with premeditation. What I'm saying is that if something is bad, we should try to prevent it to the same degree, as opposed to simply allowing it or sometimes even encouraging it. And this can be done in the same way regardless of what tools are used. I think we just need to define more clearly what separates "bad" from "good" based on the action taken (as opposed to the tools the actor used).

[–] WalnutLum@lemmy.ml 0 points 2 months ago (1 children)

You're anthropomorphizing LLMs.

There's a philosophical and neuroscience concept called "qualia," which helps define the human experience. LLMs have no qualia.

[–] Saik0Shinigami@lemmy.saik0.com 0 points 2 months ago

You’re anthropomorphizing LLMs.

No, they're taking the argument to its logical end.

[–] Gsus4@mander.xyz 0 points 2 months ago* (last edited 2 months ago) (2 children)

The issue is not LLMs reproducing what they have learned; it's that they didn't pay for the books they read, as people are legally supposed to do.

This is not about fair use, this is about free access, which at the scale of an individual reading books is called piracy... at the scale of reading all books known to man... it's omnipiracy?

We need some kind of deal where commercial LLMs have to pay a fee into a fund that distributes it among creators, or else remain nonprofit, which is never gonna happen, because it'll be a bummer for all the grifters rushing into that industry.

[–] barsoap@lemm.ee 0 points 2 months ago (2 children)

it is that they didn’t pay for the books they read, like people are supposed to do legally.

If I can read a book from a library, why shouldn't OpenAI or anybody else?

...but yes, from what I've heard they (or whoever, I don't remember) actually trained on Library Genesis. OpenAI can be scummy without the general practice of feeding an AI books you only have read access to being scummy in itself.

[–] General_Effort@lemmy.world 0 points 2 months ago

Meta is defending a lawsuit because they trained on Books3, which contained all of Bibliotik. https://en.wikipedia.org/wiki/The_Pile_(dataset)

[–] Gsus4@mander.xyz 0 points 2 months ago* (last edited 2 months ago) (1 children)

This is not like reading a book from a library... unless you want to force the LLM to only train on one book per day and keep no copies after that day.

[–] barsoap@lemm.ee 0 points 2 months ago

They don't keep copies, so the objection is just learning speed? Why one book per day? Does it count if I skim through a book?

[–] PriorityMotif@lemmy.world 0 points 2 months ago

I think we need to re-examine what copyright should be. There's nothing inherently immoral about "piracy" when the original creator gets almost nothing for their work after the initial release anyway.