this post was submitted on 26 Jan 2025
[–] haerrii@feddit.org 0 points 4 days ago (1 children)

So... as far as I understand from this thread, it's basically a finished model (Llama or Qwen) that was then fine-tuned on an unknown dataset? That would explain the claimed 6M training cost, since it hides the fact that the heavy lifting was done by others (US of A's Meta in this case). Nothing revolutionary to see here, I guess. Small improvements are nice to have, though. I wonder how their smallest models perform; are they any better than llama3.2:8b?

[–] yogthos@lemmy.ml 0 points 4 days ago

What's revolutionary here is the use of a mixture-of-experts approach to get far better performance. While the model has 671 billion parameters in total, only about 37 billion are active at a time, which makes it very efficient to run. For comparison, Meta's Llama 3.1 405B uses all 405 billion of its parameters at once. It does as well as GPT-4o on the benchmarks, and excels at advanced mathematics and code generation. Its 128K-token context window means it can process and understand very long documents, and it generates text at around 60 tokens per second, roughly twice as fast as GPT-4o.
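
To make the mixture-of-experts idea concrete, here's a minimal sketch of top-k expert routing in PyTorch. The expert count, layer sizes, and `top_k` value are made up for illustration and are not DeepSeek's actual architecture; the point is just that the router sends each token to a few experts, so only a fraction of the total parameters (roughly 37B of 671B in DeepSeek's case) does work per token.

```python
# Illustrative top-k mixture-of-experts layer (not DeepSeek's real code).
# Sizes and expert counts below are made up for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small independent feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize only over the chosen experts
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is why "active" parameters are a small fraction of the total.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = MoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```

With `top_k=2` out of 8 experts, each token only touches about a quarter of the expert parameters per layer, which is the same trick that lets a 671B-parameter model run with ~37B active parameters per token.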