this post was submitted on 13 Nov 2024
669 points (94.9% liked)

Technology
[–] plixel@programming.dev 9 points 1 week ago (1 children)

You can install Ollama in a Docker container and use it to download and run models locally. Some are really small and still pretty effective: Llama 3.2 is only 3B parameters, and some models are as small as 1B. You can access it through the terminal, or use something like Open WebUI for a more "ChatGPT-like" interface.
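A rough sketch of that setup, based on the commands in the Ollama and Open WebUI Docker docs (image tags, ports, and volume names are assumptions from those docs; CPU-only variant shown):

```shell
# Start Ollama in a container, persisting models in a named volume
# and exposing its API on port 11434:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with a small model from the terminal:
docker exec -it ollama ollama run llama3.2      # 3B model
docker exec -it ollama ollama run llama3.2:1b   # 1B variant

# Optional: Open WebUI as a ChatGPT-style front end, pointed at
# the Ollama API running on the host:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui ghcr.io/open-webui/open-webui:main
```

With both containers up, Open WebUI is reachable at http://localhost:3000 and will list whatever models Ollama has pulled.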

[–] cybersandwich@lemmy.world 2 points 4 days ago

I have a few LLMs running locally. I don't have an array of 4090s to spare, so I'm limited to the smaller models, 8B and whatnot.

They definitely aren't as good as anything you can get remotely. It's more private and controlled, but I've found it much less useful than any of the hosted models.