this post was submitted on 30 Jan 2024
214 points (95.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I wrote a blog post detailing my homelab setup throughout 2023. It includes the hardware I use, as well as the applications I selfhosted. I also detailed how I automate my home Kubernetes cluster and how I back up my data.

[–] 1984@lemmy.today 13 points 10 months ago (1 children)

I really like self-hosting too, but Kubernetes is overkill in complexity. I use Nomad. :)

[–] diminou@lemmy.zip 2 points 10 months ago (2 children)

In a cluster? I'm actually thinking about running Nomad across the three SFF PCs I use as servers, but I have no clue how to sync storage between them (on the container side, I mean, e.g. Nextcloud data).

[–] johntash@eviltoast.org 8 points 10 months ago (1 children)

Storage is hard to do right :(

If you can get away with it, use a separate NAS that exposes NFS to your other machines. iSCSI with a CSI driver might be an option too.
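For reference, the NAS side of that is usually just an export entry plus a mount on each client. Paths and the subnet below are made up, adjust for your network:

```
# /etc/exports on the NAS -- export /srv/tank to the LAN
/srv/tank  192.168.1.0/24(rw,sync,no_subtree_check)

# reload exports on the NAS:
#   exportfs -ra
# then on each client:
#   mount -t nfs nas.local:/srv/tank /mnt/tank
```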

For databases, it's usually better not to put their data on shared storage and instead use the database's built-in replication (and take backups!).
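With PostgreSQL, for example, each node keeps its data directory on local disk and streaming replication copies it over. A rough sketch (hostnames and the user are placeholders):

```
# on the primary, postgresql.conf:
#   wal_level = replica
#   max_wal_senders = 5
# and pg_hba.conf needs a 'replication' entry allowing the standby.

# seed the standby from the primary
# (-R writes standby.signal and the connection info for you):
#   pg_basebackup -h primary.local -U replicator \
#       -D /var/lib/postgresql/data -R -P
```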

But if you want to go down the rabbit hole, check out Ceph, GlusterFS, MooseFS, SeaweedFS, JuiceFS, and garagehq.

Most shared file systems aren't fully POSIX-compliant, so things like file locking may not work. This affects databases and SQLite a lot. GlusterFS and MooseFS seem to behave the best with SQLite db files, imo. SeaweedFS should as well, but I'm still working on testing it.
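A quick way to probe whether a given mount honours the locking SQLite needs is to fight over the write lock from two connections: on a well-behaved filesystem the second one fails fast with "database is locked" instead of both "winning". A small sketch (the path is whatever file you put on the share you're testing):

```python
import sqlite3

def probe_locking(path):
    """Open two connections to the same db and contend for the write lock.

    Returns True if the second connection is correctly blocked, which is
    the behaviour SQLite expects from the underlying filesystem.
    """
    a = sqlite3.connect(path)
    a.execute("CREATE TABLE IF NOT EXISTS t (x)")
    a.execute("BEGIN IMMEDIATE")             # take the write lock

    b = sqlite3.connect(path, timeout=0.2)   # give up after 200 ms
    try:
        b.execute("BEGIN IMMEDIATE")         # should fail: lock is held
        return False                         # broken locking: both "won"
    except sqlite3.OperationalError:
        return True                          # lock was honoured
    finally:
        a.rollback()
        a.close()
        b.close()

# point it at a file on the shared mount, e.g.:
# print(probe_locking("/mnt/shared/lock-test.db"))
```

It's not a full conformance test, but it catches the common failure mode where a network filesystem fakes or ignores byte-range locks.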

[–] Hexarei@programming.dev 4 points 10 months ago

Yep, as someone who just recently set up a hyperconverged mini Proxmox cluster running Ceph, with a Kubernetes cluster on top of it: storage is hard to do right. It wasn't until after I migrated my minor services to the new cluster that I realized Ceph's RBD CSI volumes can't be mounted by multiple pods at once, so running replicas of something like Nextcloud means I'll have to use object storage instead of block storage. I mean, I can do that, I just don't want to lol. It also heavily complicates installing apps into Nextcloud.
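For anyone hitting the same wall: the distinction shows up in the PVC's accessModes. RBD-backed storage classes generally only support ReadWriteOnce, while a shared filesystem like CephFS (or NFS) is what backs ReadWriteMany. The names below are placeholders:

```yaml
# fine with an RBD storage class: a single node mounts the volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-rbd        # placeholder class name
  resources:
    requests:
      storage: 50Gi

# multiple replicas sharing one volume need ReadWriteMany,
# which RBD can't do -- a CephFS- or NFS-backed class can:
#   accessModes: ["ReadWriteMany"]
#   storageClassName: cephfs
```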

[–] 1984@lemmy.today 2 points 10 months ago

Yeah, in a cluster with Consul. Consul gives automatic service discovery and works with Traefik, so I don't even have to care which node my service is running on, since Traefik knows how to find it through Consul.
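In case it helps anyone, the Traefik side of that setup is roughly its Consul Catalog provider plus a routing tag on each service you register. Addresses and names here are illustrative:

```yaml
# traefik.yml (static config)
providers:
  consulCatalog:
    endpoint:
      address: "127.0.0.1:8500"   # placeholder Consul address
    exposedByDefault: false       # only route explicitly tagged services

# then in the Nomad job's service stanza, tag the service:
#   tags = [
#     "traefik.enable=true",
#     "traefik.http.routers.myapp.rule=Host(`myapp.home.lan`)",
#   ]
```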

For the storage I went with a simple solution: I set up an NFS server on one of the machines running Nomad, then configured the Nomad clients to mount that share. All of this with Ansible so I don't have to do it more than once.
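The Ansible piece of that can be sketched with the `ansible.posix.mount` module (server name and paths below are made up):

```yaml
# task run against every Nomad client
- name: Mount shared storage from the NFS server
  ansible.posix.mount:
    src: "storage.home.lan:/srv/nomad"   # placeholder export
    path: /mnt/nomad
    fstype: nfs
    opts: rw,hard
    state: mounted   # mounts now and persists the entry in /etc/fstab
```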