this post was submitted on 11 Jul 2023
64 points (100.0% liked)

Technology

A long-form response to the concerns, comments, and general principles many people raised in the post about authors suing companies that create LLMs.

[–] Guilvareux@feddit.uk 8 points 1 year ago (1 children)

They’re “complaining” about the unique qualities of their art being used, without consent, to create new things that ultimately devalue their original art.

It’s a debate to be had. I’m not clearly in favour of either argument here, but it’s quite obvious what they’re upset about.

[–] FaceDeer@kbin.social 9 points 1 year ago (1 children)

If it's a debate to be had, then it's something that should have been debated hundreds of years ago when copyright was first invented, because every author or artist re-uses the "unique qualities" of other people's art when making their own new work.

There's the famous "good authors copy, great authors steal" quote, but I rather like the related one by C. E. M. Joad mentioned in that article: "the height of originality is skill in concealing origins."

[–] Syrup@lemmy.cafe 3 points 1 year ago

I think the main difference between derivative/inspired works created by humans and those created by AI is the presence of "creative effort." This is something that humans can do, but narrow AI cannot.

Even bland statements humans make about nonfiction facts involve some creativity, even if the underlying ideas are not copyrightable (e.g., I cannot copyright the fact that the Declaration of Independence was signed in 1776; however, the exact way I present that fact can be copyrightable: a timeline, chart, table, or passage of text could each qualify).

"Creative effort" is a hard thing to pin down, since effort alone does not qualify (e.g., I can't copyright a phone directory even if I spent a lot of effort collecting names and numbers, since simply listing them side by side in alphabetical order isn't particularly creative or original). I don't think there's really a bright-line test for what constitutes "creative," but it doesn't take much. Randomness doesn't qualify either (e.g., I can't just pick a random stone out of a stream and claim copyright on it, even if it's a very unique-looking rock).

Narrow AI is ultimately just a very complex algorithm derived from training data. That oversimplifies a lot of the steps involved, but there isn't anything "creative" or "subjective" in how an LLM generates passages of text. At most, I think you could say the developers of the AI hold copyright over the code used to build it. The outputs of some functional AI might be copyrightable by its developers, but I don't think any machine-learning AI would really qualify if it's the sole source of the work.

Personally, I think the results of what an AI like Midjourney or ChatGPT creates would fall into the public domain. Most of the time, the output is removed enough from the source material that it's not really derivative anymore. However, if someone were to prompt one of these AIs to create a work that explicitly mimics a particular author or artist, that could be infringement.

IANAL, this is just one random internet user's opinion.