"Open source" and "commercial" aren't opposites, plenty of models we consider commercial are also 'open source' - an obvious example being Facebook/Meta's models...
Looking outside of AI, there are plenty more examples. Chromium is open source - does that mean it and Google's Blink web rendering engine are non-commercial? I'd say no.
It should also be noted that there's been some pushback recently on whether models trained on closed data sources should be called "open source" just because the model itself is.
The Open Source Initiative has defined what it believes constitutes "open source AI" (https://opensource.org/ai/open-source-ai-definition). This includes detailed descriptions of the training data and explanations of how it was obtained, selected, labeled, processed, and filtered. As long as a company utilizes a model trained on unspecified data, I will assume that data is either stolen or otherwise unlawfully obtained from non-consenting users.
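To make that concrete, here's a minimal sketch of what a machine-readable training-data disclosure along those lines might look like. The field names and the sample entry are my own illustration, not an official OSI schema:

```python
# Hypothetical example: a machine-readable "data card" capturing the kinds of
# disclosures the OSI definition asks for (provenance, selection, labeling,
# processing, filtering). Field names are illustrative, not an OSI schema.
from dataclasses import dataclass


@dataclass
class DatasetDisclosure:
    name: str        # human-readable dataset name
    source: str      # where the data was obtained
    license: str     # terms under which the data may be used
    selection: str   # how samples were selected
    labeling: str    # how (and by whom) samples were labeled
    processing: str  # cleaning / deduplication steps applied
    filtering: str   # what was excluded, and why


disclosures = [
    DatasetDisclosure(
        name="example-web-corpus",
        source="public web crawl (hypothetical subset)",
        license="varies per page; robots.txt honored",
        selection="English pages scored above a fixed quality threshold",
        labeling="none (self-supervised pretraining)",
        processing="HTML stripped, near-duplicates removed",
        filtering="PII and known-pirated domains excluded",
    ),
]

for d in disclosures:
    print(f"{d.name}: obtained from {d.source}; filtering: {d.filtering}")
```

The point is that "detailed description" can be an auditable artifact shipped with the model, not just a paragraph in a blog post.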
I will be clear that I have not read up on DeepSeek yet, but I have a hard time believing their training data is specified according to the OSI definition, since no big model has done so yet. Releasing the model source code means little for AI compared to all of its training data.
No AI org of any significant size will ever disclose its full training set, and it's foolish to expect such a standard to be met. There is simply too much liability. No matter how clean your data collection procedure is, there's no way to guarantee that a data set with billions of samples won't contain at least one item a lawyer could zero in on and drag you into a lawsuit over.
What DeepSeek did - full disclosure of methods in a scientific paper, release of weights under an MIT license, and release of some auxiliary code - is as much as one can expect.
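And the weight release is real in a practical sense: anyone can pull and run the checkpoints like any other open model. A minimal sketch using the Hugging Face transformers library; the model id here is an assumption based on the checkpoints I believe they published, so substitute whichever release you verify:

```python
# Minimal sketch: loading openly released (MIT-licensed) weights with the
# Hugging Face transformers library. The model id is an assumption about
# DeepSeek's published checkpoints, not a verified reference.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the weights actually run locally.
inputs = tokenizer("Why does releasing model weights matter?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That you can do this at all, with no API key or vendor gate, is exactly the difference the MIT-licensed release makes.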
As I wrote in my comment, I have not read up on DeepSeek; if this is true, it is definitely a step in the right direction.
I am not saying I expect any company of significant scale to follow the OSI definition since, as you say, it is too high-risk. I do still believe that if you cannot prove to me that your AI is not abusing artists or creators by using their art, and is not using data non-consensually acquired from users of your platform, then you are not providing an ethical or moral service. This is my main concern with AI. Big tech keeps showing us, time and time again, that they really don't care about these topics, and that needs to change.
Imo, AI today is developing and expanding far too fast for the general consumer - and, by extension, the legal and justice systems - to understand it. We need more laws in place regarding how to handle AI and the data it uses and produces. We need more education on what AI is actually doing.