Some, but probably not most. This is mostly an issue with "viral" licenses like the GPL, which restrict the licensing of derivative works. Permissive licenses like the MIT license are very common and don't impose that restriction.
The MIT license does require that "all copies or substantial portions of the Software" come with the license attached, but code generated by an AI is arguably not a "substantial portion" of the software.
How do you verify that though?
And does the model need to include all of the licenses? Surely the "all copies or substantial portions" clause would apply to LLMs, since they effectively include the source in the model as a derivative work. That's fine if it's for personal use (fair use applies), but if you're going to distribute it (e.g. as a centralized LLM service), then you need to be very careful about how the licenses are honored, applied, and distributed.
So I absolutely do believe that building a broadly used model this way is a copyright violation, and that's true whether the source material is under an open source license or not.
By comparing it to the original work.
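A minimal sketch of that kind of comparison, using Python's standard difflib. The snippets and the similarity cutoff here are made-up assumptions for illustration; what counts as a "substantial portion" is a legal question, not a fixed number:

```python
import difflib

def similarity(generated: str, original: str) -> float:
    # Ratio of matching characters between the two snippets (0.0 to 1.0).
    return difflib.SequenceMatcher(None, generated, original).ratio()

# Hypothetical snippets for illustration only.
generated_code = "def add(a, b):\n    return a + b\n"
original_code = "def add(x, y):\n    return x + y\n"

score = similarity(generated_code, original_code)
# Arbitrary threshold, assumed for this sketch.
if score > 0.8:
    print(f"Suspiciously similar ({score:.2f})")
else:
    print(f"Probably independent ({score:.2f})")
```

Real detection tools work at the token or AST level rather than on raw character diffs, but the core idea is the same: you need the original in hand to compare against.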
And how will you know what original work(s) to compare it to?
How do you know anything about what an LLM generates? Presumably, if you're the author, you would recognize your own work?