this post was submitted on 22 Mar 2024
68 points (92.5% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


I've been playing around with Ollama in a VM on my machine and it is really useful.

To get started, first make sure you have capable hardware. You will need fairly recent hardware, so that old computer you have lying around may not be enough. I created a VM on my laptop with KVM and gave it 8 GB of RAM and 12 cores.
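If you want to go the same route, here is a rough sketch of creating that kind of VM with virt-install; the VM name, disk size, and ISO path are just placeholders for illustration:

```
# sketch only: a KVM guest with 8 GB of RAM and 12 vCPUs
virt-install \
  --name ollama-vm \
  --memory 8192 \
  --vcpus 12 \
  --disk size=40 \
  --cdrom /path/to/a-linux-install.iso \
  --os-variant generic
```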

Next, read the README, which you can find at the GitHub repo:

https://github.com/ollama/ollama
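At the time of writing, the README's Linux quick start is a one-line install script (always double-check the README before piping a script into your shell):

```
curl -fsSL https://ollama.com/install.sh | sh
```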

Once you run the install script, you will need to download models. I would download Llama 2, Mistral, and LLaVA. As an example, you can pull down Llama 2 with: ollama pull llama2
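For reference, pulling all three of the models mentioned above and giving one a quick test from the terminal looks like this (model names as they appear in the Ollama library):

```
ollama pull llama2
ollama pull mistral
ollama pull llava

# quick sanity check before wiring up a web UI
ollama run llama2 "Say hello in one sentence."
```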

Ollama models are available in the online repo. You can see all of them here: https://ollama.com/library
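Once you have a few models pulled, you can see what is installed locally (and free up space if needed) with:

```
ollama list          # show locally downloaded models
ollama rm mistral    # example: remove a model you no longer need
```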

Once they are downloaded you need to set up Open WebUI. First, install Docker; I am going to assume you already know how to do that. Once Docker is installed, pull and deploy Open WebUI with this command. Notice it's a little different from the command in the Open WebUI docs:

docker run -d --net=host -e OLLAMA_BASE_URL="http://localhost:11434" -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Notice that the networking is shared with the host; this is needed so Open WebUI can reach Ollama. I am also setting the OLLAMA_BASE_URL environment variable to point Open WebUI at Ollama.
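Before opening the UI, it is worth confirming that Ollama is actually reachable at the address Open WebUI was pointed to; by default the Ollama server listens on port 11434:

```
# should print "Ollama is running" if the service is up
curl http://localhost:11434
```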

Once that's done, open up the host IP on port 8080 and create an account. After that you should be all set.

[–] hyperhypervisor@programming.dev 8 points 8 months ago (1 children)

There's also llamafile, super simple: download and run it.
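(For anyone curious, the typical pattern per the llamafile README is to download a .llamafile from the project's releases, mark it executable, and run it; the filename below is just an example.)

```
# example only; substitute whatever .llamafile you actually downloaded
chmod +x llava-v1.5-7b-q4.llamafile
./llava-v1.5-7b-q4.llamafile
```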

[–] possiblylinux127@lemmy.zip 1 points 8 months ago (1 children)

Not as cool or flexible, though.

[–] hyperhypervisor@programming.dev 5 points 8 months ago (1 children)

Iirc it can run anything llama.cpp can because it just uses that under the hood.

[–] possiblylinux127@lemmy.zip 2 points 8 months ago (1 children)

Except you can't control it as easily. I like the UI and toolset of Open WebUI.

[–] hyperhypervisor@programming.dev 4 points 8 months ago

Ok, I haven't tried it, so I'll take your word for it. I'm just offering an easier alternative since the topic was "getting started".