this post was submitted on 28 Jan 2025
Technology
you are viewing a single comment's thread
Been playing around with local LLMs lately, and even with its issues, DeepSeek certainly seems to work better overall than the other models I've tried. It's similarly hit or miss when given no context beyond the prompt, but with context it seems to both outperform larger models and organize information better. And watching the R1 model work is impressive.
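For anyone wondering what "with context" looks like in practice, here's a rough sketch of the kind of thing I mean, assuming a local Ollama server on its default port with a DeepSeek R1 model pulled (the model tag and file name are just placeholders for whatever you run locally):

```python
# Minimal sketch: prompting a local model with and without added context,
# assuming an Ollama server on the default port and a DeepSeek R1 model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint
MODEL = "deepseek-r1:7b"  # placeholder tag; use whichever size you have pulled

def ask(prompt: str) -> str:
    """Send a single non-streaming prompt to the local model and return its reply."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

question = "Summarize the main trade-offs discussed below."
context = open("notes.txt").read()  # whatever local document you want it to work from

# No context: the model only has the question to go on, so answers are hit or miss.
print(ask(question))

# With context: paste the source material directly into the prompt.
print(ask(f"{context}\n\n{question}"))
```

Nothing fancy, but the difference between the two calls is where the "outperforms larger models" impression comes from for me.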
Honestly, regardless of what someone might think of China and its various issues, I think this shows how much the approach to AI in the West has been hamstrung by people looking for a quick buck.
In the US, it's a bunch of assholes who basically just want to replace workers with AI they don't have to pay, regardless of whether the work suits it. They're shoehorning LLMs into everything, even where it doesn't make sense. It's all run strictly as a for-profit enterprise built on exploiting user data, and they bootstrapped it by training on creative works they had no rights to.
I can only imagine how demoralizing that is for the actual researchers and other people capable of developing this technology. It's not being created to make anyone's life better; it's being created specifically to line the pockets of obscenely wealthy people. Because of that, people passionate about the tech may decide not to go into the field at all, which limits the ability to innovate.
And then there's the "want results now" mentality: rather than taking the time to find better ways to build and train these models, companies just throw processing power at the problem. "Needs more CUDA" has been the mindset, and in the Western AI community you're basically laughed at if you can't, or don't want to, use Nvidia for anything neural-net related.
Then you have DeepSeek, which seems to be built by a group of passionate researchers who actually want to discover what's possible and find more efficient ways to do it. Add to that the sanctions limiting their access to top-end Nvidia hardware, and it fits a long pattern: resource constraints have always been a major driver of technical innovation. There may be a bit of "own the West" in it, sure, but that isn't at odds with the research.
LLMs are just another tool for people to use, and I don't blame a hammer for being used incorrectly or to harm someone. This tech isn't going away, but there's certainly a bubble in the West as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and on what they're trained on, from both a privacy and an ownership perspective.