this post was submitted on 03 Sep 2024
9 points (84.6% liked)

Technology

(page 8) 50 comments
[–] forgotmylastusername@lemmy.ml 0 points 4 months ago* (last edited 4 months ago) (2 children)

The internet has been primarily derivative content for a long time, as much as some haven't wanted to admit it. It's true. These fancy algorithms just take it to an exponential degree.

Original content had already become rare as monetization ramped up. And then this generation of AI algorithms arrived.

For several years prior to LLMs becoming a thing, the internet was basically just regurgitating data from API calls, or scraping someone else's content and re-presenting it in your own way.

[–] Mcdolan@lemmy.world 0 points 4 months ago

Are algorithms considered LLMs now? I didn't think algorithms of the past (5-10 years) were considered AI.

[–] FatCat@lemmy.world 0 points 4 months ago (3 children)

Those claiming AI training on copyrighted works is "theft" are misunderstanding key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is more akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was found to be legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

[–] TowardsTheFuture@lemmy.zip 0 points 4 months ago (1 children)

The issue is that, in general, to be influenced by someone else's work you would typically have supported that work... like... literally at all. Purchasing it, or even simply discussing it and sharing it with others who may purchase it, is worth a lot more than nothing. AI instead competes directly without crediting its source material, influences, etc.

[–] NikkiDimes@lemmy.world 0 points 4 months ago

If it's on the open internet, visible to anyone with a web browser, and you use an adblocker like most people, you are not paying to support that work. That's what it was trained on.

[–] Eccitaze@yiffit.net 0 points 4 months ago

Fucking Christ I am so sick of people referencing the Google books lawsuit in any discussion about AI

The publishers lost that case because the judge ruled that Google Books was copying a minimal portion of the books, and that Google Books was not competing against the publishers, thus the infringement was ruled as fair use.

AI training does not fall under this umbrella, because it's using the entirety of the copyrighted work, and the purpose of this infringement is to build a direct competitor to the people and companies whose works were infringed. You may as well talk about OJ Simpson's criminal trial, it's about as relevant.

[–] sue_me_please@awful.systems 0 points 4 months ago* (last edited 4 months ago) (1 children)

the “Bob Dylan-ness” or “Hemingway-ness”

This is a dumb argument and it's still wrong. Likeness is legally protected. See Midler v. Ford, where Bette Midler won over a sound-alike imitation of her voice in an ad.

[–] obbeel@lemmy.eco.br 0 points 4 months ago (1 children)

Honestly, copyright is shit. It was created on the basis of an old way of doing things, where big publishers and big studios mass-produced physical copies of a given 'product'. George R. R. Martin, Warner Studios & co are rich. Maybe they have everything to lose without their copy'right', but that isn't the population's problem. We live in an era where everything is digital and easily copiable, and we might as well start acting like it.

I don't care if Sam Altman is evil, this discussion is fundamental.

[–] wise_pancake@lemmy.ca 0 points 4 months ago (1 children)

How did GRRM get rich again?

oh yeah, he sold books he worked on for decades, totally the same as WB.

[–] obbeel@lemmy.eco.br 0 points 4 months ago (6 children)

He didn't just sell books, he got signed by publishers who distributed him worldwide. That's what I'm talking about. He was 'chosen' by the market.

[–] bonus_crab@lemmy.world 0 points 4 months ago (1 children)

Copyright =/= licence. So long as they aren't reproducing the inputs, copyright isn't applicable to AI.

That said, they should have to make sure they aren't reproducing inputs. Shouldn't be hard.
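A minimal sketch of the kind of check being described: flag any output that shares a long verbatim word sequence with the training data. (Hypothetical helper names; production deduplication pipelines use suffix arrays or Bloom filters to do this at scale, not brute-force set intersection.)

```python
def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """All consecutive n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reproduces_input(output: str, training_texts: list[str], n: int = 5) -> bool:
    """True if the output shares any n-word run verbatim with a training text."""
    out = ngrams(output, n)
    return any(out & ngrams(t, n) for t in training_texts)

training = ["it was the best of times it was the worst of times"]
reproduces_input("he said it was the best of times yesterday", training)   # verbatim run
reproduces_input("a completely unrelated sentence about gardening", training)  # no overlap
```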

[–] Poem_for_your_sprog@lemmy.world 0 points 4 months ago

Seems the same as a band being influenced by other bands that came before them. How many bands listened to Metallica and used those ideas to create new music?

[–] Etterra@lemmy.world 0 points 4 months ago

Wow, that's a shame. Anyway, take all his money and throw him in a ditch someplace.

[–] Dumpdog@lemmy.world 0 points 4 months ago

My goodness! This is unfair! What kind of Mickey Mouse rule is this anyway?!

[–] General_Effort@lemmy.world 0 points 4 months ago (3 children)

In a way this thread is heart-warming. There are so many different people here - liberals, socialists, anarchists, communists, progressives, ... - and yet they can all agree on one fundamental ethical principle: the absolute sanctity of intellectual property.

[–] trafficnab@lemmy.ca 0 points 4 months ago* (last edited 4 months ago) (2 children)

Depending on how important these large language models end up being to society, I'd rather everyone be able to freely use copyrighted works to train them, rather than reserve their use solely for the corporations rich enough to pay for the licensing or lucky enough to already have the rights to a trove of source material

OpenAI losing this battle is how we ensure that the only people that can legally train these things are the Microsofts, Googles, and the Adobes of the world
