this post was submitted on 09 Aug 2024
260 points (93.6% liked)

Selfhosted


I don't consider myself very technical. I've never taken a computer science course and don't know Python. I've learned some things like Linux, the command line, Docker, and networking/pfSense because I value my privacy. My point is that anyone can do this, even if you aren't technical.

I tried both LM Studio and Ollama, and I prefer Ollama. You download models and use them as your own private, personal GPT. I access it on my local machine through the command line, and I also installed Open WebUI in a Docker container so I can access it from any device on my local network (I don't expose services to the internet).
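As a rough sketch of that setup (the model name `llama3` and the port mapping are assumptions, not something the post specifies; check the Open WebUI docs for current defaults):

```shell
# Pull a model and chat with it from the command line
# (llama3 is just an example; any model from the Ollama library works).
ollama pull llama3
ollama run llama3

# Run Open WebUI in Docker, reachable from other LAN devices on port 3000.
# --add-host lets the container reach the Ollama server running on the host.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

With that running, other devices on the LAN can browse to `http://<server-ip>:3000` without anything being exposed to the internet.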

Having a private AI/GPT is pretty cool. You can download and test new models. And it is private. Yes, there are ethical concerns about how the models were trained, and I'm not minimizing those concerns. But if you want your own AI/GPT assistant, give it a try. I set it up in a couple of hours, and as I said... I'm not even that technical.

[–] dataprolet@lemmy.dbzer0.com 5 points 3 months ago (3 children)

Isn't this using a lot of computing power?

[–] MangoPenguin@lemmy.blahaj.zone 9 points 3 months ago

Not really, it uses some GPU power when it's actively generating a response, but otherwise it just sits idle.

[–] Toribor@corndog.social 5 points 3 months ago* (last edited 3 months ago)

I've been testing Ollama in Docker/WSL with the idea that if I like it I'll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model; each response after that is really fast and the GPU hit is pretty small. By default, after five minutes I think, it will unload the model to free up VRAM.

Basically this means you either need to wait a bit for the model to warm up, or you need to extend that timeout so it stays warm longer. It also means I can't really use my GPU for anything else while the LLM is loaded.
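Extending that timeout can be done through Ollama's `keep_alive` setting; a sketch of the two ways it's usually set (model name and duration here are assumed examples):

```shell
# Option 1: set a server-wide default before starting Ollama.
# A duration string like "30m", or -1 to keep models loaded indefinitely.
OLLAMA_KEEP_ALIVE=30m ollama serve

# Option 2: per request, via the keep_alive field of the local REST API.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "keep_alive": "30m"
}'
```

A `keep_alive` of `0` unloads the model immediately after the response, which is the other direction to go if you want your VRAM back right away.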

I haven't tracked power usage, but aside from the VRAM requirements it doesn't seem too resource-intensive. Then again, maybe I just haven't done anything complex enough yet.

[–] Swedneck@discuss.tchncs.de 5 points 3 months ago* (last edited 3 months ago)

You hear that said about AI because companies are desperately throwing more and more resources at it to get 0.3% better results, and people are collectively running an insane number of prompts all the time.

But on a personal level it's not really any different from any other computation. People render videos all the time and no one complains about the resource usage there, because companies aren't trying to sell bloated video-rendering services to gardening businesses.