this post was submitted on 06 Sep 2024

Technology

[–] bruhduh@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (5 children)

Search "Nvidia P40 24GB" on eBay - about $200 each and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, look for the P100 16GB instead: same price, but unlike the P40 it supports NVLink, and its 16GB is HBM2 on a 4096-bit bus, so it's still competitive for LLM work. The P40's strong point is the amount of memory for the money, but its GDDR5 makes it rather slow compared to the P100.
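To put the memory-per-dollar point in concrete terms, here's a back-of-the-envelope sketch of whether a quantized model fits in the P40's 24GB versus the P100's 16GB. The ~20% overhead factor for KV cache and buffers is an assumption, not a measurement:

```python
# Rough estimate of VRAM needed to load a quantized LLM.
# overhead covers KV cache, activations, and framework buffers (assumed ~20%).
def vram_needed_gb(params_billion: float, bits_per_weight: int,
                   overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

def fits(params_billion: float, bits_per_weight: int, card_gb: float) -> bool:
    return vram_needed_gb(params_billion, bits_per_weight) <= card_gb

# A 13B model at 4-bit needs ~7.8 GB: fits either card.
# A 33B model at 4-bit needs ~19.8 GB: fits the P40's 24 GB, not the P100's 16 GB.
print(fits(13, 4, 24), fits(33, 4, 24), fits(33, 4, 16))
```

Under these assumptions the P40's extra 8GB is exactly what lets you step up a model size class at 4-bit quantization.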

[–] Gormadt@lemmy.blahaj.zone 0 points 2 months ago (1 children)

Personally I don't care much for the LLM stuff; I'm more curious how they perform in Blender.

[–] utopiah@lemmy.world 0 points 2 months ago

Interesting. I did try a bit of remote rendering with Blender (just to learn how to use it via the CLI), so that makes me wonder who is indeed scraping the bottom of the barrel of "old" hardware, and what they're using it for. Maybe somebody is renting old GPUs for render farms, maybe for other tasks - any pointers to such a trend?

[–] RegalPotoo@lemmy.world 0 points 2 months ago

Digging into it a bit more, it seems like I might be better off getting a 12GB RTX 3060 - similar price point, but much newer silicon.
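One way to compare these cards for LLM use: single-stream token generation is roughly memory-bandwidth-bound, since every generated token streams all the weights once. A sketch under that assumption, using published spec-sheet bandwidth numbers (treat them, and the optimistic no-overhead model, as approximations):

```python
# tokens/s <= bandwidth / model_size is an optimistic upper bound for
# single-batch generation, which streams all weights once per token.
BANDWIDTH_GBPS = {  # spec-sheet figures, approximate
    "P40 (GDDR5)": 347,
    "P100 (HBM2)": 732,
    "RTX 3060 12GB (GDDR6)": 360,
}

def max_tokens_per_s(bandwidth_gbps: float, model_gb: float) -> float:
    return bandwidth_gbps / model_gb

for card, bw in BANDWIDTH_GBPS.items():
    print(f"{card}: ~{max_tokens_per_s(bw, 7.0):.0f} tok/s ceiling for a 7 GB model")
```

By this crude measure the 3060 and P40 land in the same ballpark, while the P100's HBM2 roughly doubles the ceiling - the newer silicon mainly buys you better software support and prompt-processing speed, not generation bandwidth.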

[–] RegalPotoo@lemmy.world 0 points 2 months ago (2 children)

Thanks for the tips! I'm looking for something multi-purpose: LLM/Stable Diffusion messing about plus a transcoder for Jellyfin - I'm guessing there isn't really a sweet spot for all three. I don't really have the room or power budget for two cards, so I guess a P40 is probably the best bet?

[–] jlh@lemmy.jlh.name 0 points 2 months ago

The Intel A310 is the best $/perf transcoding card, but if the P40 supports NVENC, it might work for both transcoding and Stable Diffusion.

[–] bruhduh@lemmy.world 0 points 2 months ago* (last edited 2 months ago)

Try the Ryzen 8700G's integrated GPU for transcoding, since it supports AV1, and the P-series GPUs for LLM/Stable Diffusion - that would be a good mix, I think. Or, if you don't have the budget for a new build, buy an Intel A380 for transcoding; you can attach it like a mining GPU through a PCIe riser. Linus Tech Tips tested that GPU for transcoding, as I recall.
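The dedicated-transcoder idea above boils down to handing decode and encode to the Intel GPU via Quick Sync. A minimal sketch of the ffmpeg invocation, assuming hypothetical file names and an ffmpeg build with QSV support (check `ffmpeg -encoders` for `av1_qsv` on your system):

```python
# Build an ffmpeg command for Intel QSV transcoding on an Arc A380/A310.
# src/dst are placeholders; codec could also be hevc_qsv or h264_qsv.
def qsv_transcode_cmd(src: str, dst: str, codec: str = "av1_qsv") -> list[str]:
    return [
        "ffmpeg",
        "-hwaccel", "qsv",   # decode on the Intel GPU
        "-i", src,
        "-c:v", codec,       # hardware encode (AV1 here)
        "-c:a", "copy",      # pass audio through untouched
        dst,
    ]

print(" ".join(qsv_transcode_cmd("input.mkv", "output.mkv")))
```

Jellyfin drives ffmpeg with essentially this kind of command once QSV hardware acceleration is enabled in its playback settings.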

[–] Scipitie@lemmy.dbzer0.com 0 points 2 months ago (1 children)

The lowest price on eBay for me is 290 euros :/ The P100s are 200 each, though.

Do you happen to know if I could mix a 3700 with a P100?

And thanks for the tips!

[–] bruhduh@lemmy.world 0 points 2 months ago (1 children)

A Ryzen 3700? Or an RTX 3070? Please elaborate.

[–] Scipitie@lemmy.dbzer0.com 0 points 2 months ago (1 children)

Oh sorry, the Nvidia RTX :) Thanks!

[–] bruhduh@lemmy.world 0 points 2 months ago

I looked it up: the RTX 3070 has NVLink capability, though I wonder if all of them have it - so you can pair them, if yours supports NVLink.