If your company can't exist without breaking the law, then it shouldn't exist.
Well, some laws are made to be broken; the question is whether this is one of them.
I disagree. Laws aren't always moral. Texas could outlaw donations to the Rainbow Railroad, and that would be wrong; the organization should still exist.
But in this case it is pretty clear that the plagiarism machine is, in fact, bad and should not exist, at least not in its current form.
He has committed the greatest crime imaginable! A crime against capitalism!
The internet has been primarily derivative content for a long time. As much as some haven't wanted to admit it, it's true. These fancy algorithms now take it to an exponential degree.
Original content had already become scarce as monetization ramped up. And then this generation of AI algorithms arrived.
For several years prior to LLMs becoming a thing, the internet was basically just regurgitating data from API calls or scraping someone else's content and re-presenting it in your own way.
Are algorithms considered LLMs now? I didn't think algorithms of the past (5-10 yrs) were considered AI.
Those claiming AI training on copyrighted works is "theft" are misunderstanding key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.
This process is more akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
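To make the "vector space" point concrete, here is a toy sketch in Python. It is an illustration only: real models learn transformer weights rather than using a hashing trick, and the function name here is made up, but the gist is the same, text goes in, numbers come out, and the original wording is not stored.

```python
# Toy illustration: turn a passage into a small vector of word-pattern counts.
# The vector keeps rough statistics about the text, not the text itself.
import hashlib

def embed(text: str, dims: int = 8) -> list[float]:
    """Map a passage to a fixed-size vector; the passage itself is discarded."""
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    total = sum(vec) or 1.0
    return [round(v / total, 2) for v in vec]

print(embed("The snow fell softly as the ravens circled overhead."))
# e.g. [0.22, 0.0, 0.11, ...] - there is no way to recover the sentence from this
```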
This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.
Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was found to be legal despite protests from authors and publishers. AI training is arguably even more transformative.
While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.
So the issue is that, in general, to be influenced by someone else's work you would typically have supported that work... like... at all. Purchasing it, or even simply discussing it and sharing it with others who might purchase it, is worth a lot more than nothing, and certainly more than directly competing with it while crediting no source material or influences.
If it's on the open internet, visible to anyone with a web browser, and you run an ad blocker like most people, you aren't paying to support that work either. That's what the models were trained on.
Fucking Christ, I am so sick of people referencing the Google Books lawsuit in any discussion about AI.
The publishers lost that case because the judge ruled that Google Books copied only a minimal portion of each book and was not competing against the publishers, so the infringement was deemed fair use.
AI training does not fall under this umbrella, because it uses the entirety of the copyrighted work, and the purpose of the infringement is to build a direct competitor to the people and companies whose works were infringed. You may as well talk about OJ Simpson's criminal trial; it's about as relevant.
Honestly, copyright is shit. It was created around an old way of doing things, one where big publishers and big studios mass-produced physical copies of a given 'product'. George R. R. Martin, Warner Studios & co are rich. Maybe they have everything to lose without their copy'right', but that isn't the population's problem. We live in an era where everything is digital and easily copiable, and we might as well start acting like it.
I don't care if Sam Altman is evil, this discussion is fundamental.
How did GRRM get rich again?
Oh yeah, he sold books he worked on for decades. Totally the same as WB.
Copyright =/= licence; so long as they aren't reproducing the inputs, copyright isn't applicable to AI.
That said, they should have to make sure they aren't reproducing inputs. Shouldn't be hard.
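A rough sketch of what that check could look like, assuming you keep an index of the training set's word n-grams; the function names and the 20% threshold are invented for illustration, not anything any lab actually ships.

```python
# Toy reproduction check: flag generated text if too many of its 8-word runs
# appear verbatim in the training data. Names and threshold are illustrative.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_copied(output: str, training_ngrams: set[tuple[str, ...]],
                 threshold: float = 0.2) -> bool:
    """Return True if the output overlaps the training set too heavily."""
    grams = ngrams(output)
    if not grams:
        return False
    overlap = sum(1 for g in grams if g in training_ngrams)
    return overlap / len(grams) > threshold
```

Whether a check like this is actually "not hard" at the scale of a web-sized training set is a fair question, but the basic idea is simple enough.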
Seems the same as a band being influenced by other bands that came before them. How many bands listened to Metallica and used those ideas to create new music?
Wow, that's a shame. Anyway, take all his money and throw him in a ditch someplace.
My goodness! This is unfair! What kind of Mickey Mouse rule is this anyway?!
Perhaps they should go back to what they were before the greed machine was spun up.