Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

1
 
 

First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU.

Anyways, has anyone had a chance to look at his guide? It's accompanied by two youtube videos that are about 7 hours each.

I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that but those are my main interests.

Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWRT seems to be the most frequently recommended option, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: This is looking pretty good right now.

2
 
 

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
 
 

I think everybody on here is constantly keeping an eye out for what to host next. Sometimes you spin up something that chugs along nicely, but sometimes you find out you've been missing out.

For me it's nothing very refreshing or new: Paperless-ngx. I never thought I would add all my administration to it, but it's great. I may not always find the exact thing I need, but I should have a record of every mail or letter I've gotten. Close second is Wanderer, though I'd like a few more features, like adding recorded routes to view speed and compare with previous walks. But that's not what it is intended for.

What is that service for you?

4
 
 

I have been having a few issues recently, and I can't quite figure out what is causing this. My setup:

  • Gigabit WAN up and down. I run speed tests regularly and get 800+ Mbps both ways.
  • opnsense router VM (proxmox) running on a lenovo m920x. Installed an intel 2x10gbe card.
  • Sodola 10gbe switch
  • TrueNAS server (bare metal) w/ 10gbe serving the media files over NFS, stored on a ZFS mirror.
  • Jellyfin LXC
  • debian LXC running the arr stack w/ qbittorrent
  • NVidia Shield w/ ethernet

First issue: extremely slow downloads in qBittorrent. Even an Ubuntu ISO with hundreds of seeders will sit around 1 MiB/s. Media downloads with ~10 seeders sit around 200 KiB/s. This runs through Gluetun and ProtonVPN WireGuard with port forwarding enabled and functioning.

Second issue: if I am downloading anything in qBittorrent and attempt to play a 4K remux on Jellyfin, it constantly buffers. If I stop all downloads, the movie immediately plays without issue. 1080p files play without issue all the time.

I tried spinning up a new LXC with qBittorrent and can download Ubuntu ISOs at 30+ MiB/s when writing to local storage rather than over NFS.

Any idea what could be causing this? Is this a read/write issue on my TrueNAS server? A networking issue causing the NFS to be slow? I've run iperf to the TrueNAS and get 9+ Gbps.

5
 
 

Someone mentioned it in a comment and I genuinely didn't know what I was setting up, but it's basically AirDrop for all your devices/servers. If you have an iPhone like me, you can go to any photo/file, tap share, then Taildrop, then pick the device; it's a pretty fast transfer. It shows up in the Downloads folder on my PC by default.

I no longer have to upload files to iCloud just to grab them elsewhere; it is very convenient and seems to be free forever for personal use, up to 100 devices? I had no idea what I was even setting up until I saw the guide afterwards (I thought it was for monitoring server health), but it's made sharing files between devices/servers very convenient. (This was likely obvious; I just wanted to share with others who didn't know.)

6
 
 

Title says it - I want a simple CA that doesn't overcomplicate things (looking at you, EJBCA). I need it to automatically serve at least CRLs, or better yet OCSP, for the certs it manages. If it comes with a web GUI, all the better, but it doesn't need to. Docker deployment would be sweet.

Currently I'm handling this on an OPNsense box I happen to be running, but that thing also serves stuff to the public 'net, so I'd rather not have my crown jewels on there.
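To give a concrete idea of the footprint I'm hoping for, here's a rough sketch of how I'd want to deploy whatever gets recommended, using smallstep/step-ca purely as a stand-in example. It's untested, and the DOCKER_STEPCA_INIT_* variables are my assumption from that image's docs, so double-check before copying:

services:
  step-ca:
    image: smallstep/step-ca
    container_name: step-ca
    environment:
      # first-run bootstrap values (my assumption from the image docs)
      - DOCKER_STEPCA_INIT_NAME=Homelab Internal CA
      - DOCKER_STEPCA_INIT_DNS_NAMES=ca.home.lan,localhost
    volumes:
      - step-ca-data:/home/step   # CA keys, config and database
    ports:
      - "9000:9000"               # CA API
    restart: unless-stopped

volumes:
  step-ca-data: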

7
 
 

This is all basically hypothetical, but it's something I want a better idea about, to improve my understanding of networking, providing services, and so on. Additionally, I think services like Mastohost are healthy for the growth of the fediverse, as they ease concerns for business use, enterprise use, or even broader "community" use. I think it will be important for the future of a federated internet that many of these types of services exist.

Let's get the obvious out of the way: You'd probably need a lot of hardware to achieve this type of service. We're talking about either having a micro datacenter or renting a datacenter from someone else. You're probably not going to get away with doing this on a bog-standard VPS regardless of how much storage you buy (though, if I'm wrong, feel free to correct me.)

I understand how virtualization via Proxmox works (kind of, on a surface level), and I imagine this would work similarly, but with a preconfigured Docker image. But how exactly does someone integrate virtual machine creation with client requests?

Normally I think about services running in Docker containers that communicate with other containers or the host server; so, for example, you can configure your Jellyfin to be visible to other containers that might be interested in sharing data between the two. But when it comes to requests to host new Docker images that need persistent storage, how would you manage such a task? Additionally, if we're talking about a multi-computer environment, how do you funnel a request for a new instance to one of many machines?

This seems like a basic, fundamental server hosting question and may not be appropriate for "self hosting" as it's probably beyond the scale of what most of us are willing to do -- but humor a man who simply wants to understand a bit more about modern enterprise compute problems.

Feel free to share any literature or online documentation that talks about solving these types of tasks.
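To make the question more concrete, here is one way I picture the "client request → new instance" step, leaving the multi-host part aside: a small control-plane service renders a compose template like the one below per customer and runs it under a unique project name on whichever node it picks. This is only a hand-wavy sketch; example/fediverse-app is a hypothetical stand-in image, not a real one.

# Rendered per signup by a hypothetical control plane, then started with
# `docker compose -p <tenant> up -d`; the project name prefixes the volume
# and network names, which keeps persistent storage isolated per tenant.
services:
  app:
    image: example/fediverse-app:latest    # hypothetical stand-in image
    environment:
      - DOMAIN=${TENANT_DOMAIN}            # injected by the control plane
    volumes:
      - app_data:/var/lib/app              # becomes <tenant>_app_data
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=${TENANT_DB_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data   # becomes <tenant>_db_data

volumes:
  app_data:
  db_data:

As I understand it, the "which of many machines" part is exactly what schedulers like Kubernetes, Nomad, or Docker Swarm handle: the control plane submits the workload and the scheduler picks a node with free capacity.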

8
 
 

Are they all gathered in one place somewhere, or does it all need to be found case by case? Interested in hosting a Mastodon, Pixelfed, and PeerTube instance. What are the VPS requirements? I do like the idea of hosting all my own posts/comments, etc.

9
 
 

My solution uses qBittorrent with Gluetun and it works great. My Docker Compose file is based on this one: https://github.com/TechHutTV/homelab/blob/main/media/arr-compose.yaml. I simply removed some of the services I didn't need.


I am trying to set up a qBittorrent Docker container that is accessible on my local network and routes its traffic through WireGuard. I know this is a basic question, and I'm sorry if I'm wasting your time. I am using a separate user for this that I have added to the docker group.

I can't access the web interface; what have I configured wrong?

Here is my docker compose file.

***
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Europe/London
      - WEBUI_PORT=8080
      - TORRENTING_PORT=6881
    volumes:
      - /home/torrent/torrent/:/config
      - /home/torrent/download/:/downloads 
    network_mode: service:wireguard
    depends_on:
      - wireguard
    restart: always

  wireguard:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard
    cap_add:
    - NET_ADMIN
    - SYS_MODULE
    environment:
    - PUID=1001
    - PGID=1001
    - TZ=Europe/London
    ports:
    - 51820:51820/udp
    volumes:
    - /home/torrent/wireguard/:/config
    - /home/torrent/wireguard/london.conf/:/config/wg0.conf
    sysctls:
    - net.ipv4.conf.all.src_valid_mark=1
    restart: always
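For anyone finding this later, my understanding of what was wrong: with network_mode: service:wireguard, the qbittorrent container has no network stack of its own, so publishing WEBUI_PORT there does nothing. The WebUI port has to be published on the wireguard container instead. Roughly (untested sketch):

  wireguard:
    image: lscr.io/linuxserver/wireguard
    # ...everything else as above...
    ports:
    - 51820:51820/udp
    - 8080:8080   # qbittorrent WebUI, reachable here because qbittorrent
                  # shares this container's network namespace

I'd also double-check where the linuxserver/wireguard image expects the client config these days; I believe newer tags look under /config/wg_confs/ rather than /config/wg0.conf.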

10
11
 
 

So, I've been pushing my photos to a local Immich instance, and I'll need some kind of file storage soon too; the total amount of data is roughly 1.5 TB.

Everything is running on a Proxmox server and that's running somewhat smoothly, but now I need to get it backed up offsite. I'm running a VPS at Hetzner, and they offer pretty decently priced S3 storage, or a "storage box", which is just a raw disk you can connect via SMB/NFS and other protocols.

Now, the question is how to set up automated backups from Proxmox to either of those. I suppose I could just mount something on the host locally and set up backup paths accordingly, but should the mount drop for whatever reason, is Proxmox smart enough to notice that the actual storage is missing and not fill the small local drive with backups?

Encryption would be nice too, but that might be a bit much to ask. I have enough bandwidth to manage everything, and after the initial upload the data doesn't change that much. The only question is: what is the best practice for doing this?

12
 
 

Not sure if this is the right place, but let's give it a go.

We have a family account on iCloud so all the iPhones (5 of them) can sync items from the phones to their laptops and so forth. One feature is eating all the storage space we have on iCloud: Photos. We ran out of space, so Backups, Photos, Contacts, etc. will no longer sync. We can add more space in iCloud, but I am not keen on buying ever more storage from Apple.

So my thought was to have all photos older than xyz days/months/years stored somewhere else to free up space in that iCloud account. I do not want to delete these older photos, just have them stored somewhere else but still accessible. Ideally I would be able to tell some app/solution to move photos from a phone to something self-hosted, and the user of that phone can then keep seeing the photos in either the Photos app or the app belonging to the self-hosted solution.

Honestly, even more ideal would be to "tell" Apple's Photos app to use the self-hosted storage instead of the iCloud storage. This would make the transition transparent to all the family members. Some features might no longer work (that "Memories" feature, perhaps?), but that's OK; being able to store photos is more important.

Apologies if this has been asked before, but my searching, which admittedly is not that great on my side, found no answer I could translate to my issue. Any help is appreciated!

FYI, I am running Docker at home and can make services available on the internet with nginx in front as a proxy. I can also run a new service, of course; that's the self-hosting bit, as it were.

13
 
 

I’ve used Nextcloud for a long while but only for cloud storage and photo backup from my phone. I’ve moved the latter to Immich and just replaced the cloud storage with Seafile.

So far everything is hunky-dory, but I was wondering if anyone who's run Seafile for longer has any insights on things to watch out for. I'm following the documented backup procedure, and I'm the only user (so the whole db/file desync during backup isn't an issue).

14
 
 

I'm proud to share a major development status update of XPipe, a new connection hub that allows you to access your entire server infrastructure from your local desktop. XPipe 14 is the biggest rework so far and provides an improved user experience, better team features, performance and memory improvements, and fixes to many existing bugs and limitations.

If you haven't seen it before, XPipe works on top of your installed command-line programs and does not require any setup on your remote systems. It integrates with your tools such as your favourite text/code editors, terminals, shells, command-line tools and more. Here is what it looks like:

(Screenshots: Hub and Browser views.)

Reusable identities + Team vaults

You can now create reusable identities for connections instead of having to enter authentication information for each connection separately. This will also make it easier to handle any authentication changes later on, as only one config has to be changed.

Furthermore, there is a new encryption mechanism for git vaults, allowing multiple users to have their own private identities in a shared git vault by encrypting them with the personal key of your user.

Incus support

  • There is now full support for incus
  • The newly added features for incus have also been ported to the LXD integration

Webtop

For users who also want to have access to XPipe when not on their desktop, there exists the XPipe Webtop docker image, which is a web-based desktop environment that can be run in a container and accessed from a browser.

This docker image has seen numerous improvements. It is considered stable now. There is now support for ARM systems to host the container as well. If you use Kasm Workspaces, you can now integrate the webtop into your workspace environment via the XPipe Kasm Registry.

Terminals

  • Launched terminals are now automatically focused after launch
  • Add support for the new Ghostty terminal on Linux
  • There is now support for Wave terminal on all platforms
  • The Windows Terminal integration will now create and use its own profile to prevent certain settings from breaking the terminal integration

Performance updates

  • Many improvements have been made for the RAM usage and memory efficiency, making it much less demanding on available main memory
  • Various performance improvements have also been implemented for local shells, making almost any task in XPipe faster

Services

  • There is now the option to specify a URL path for services that will be appended when opened in the browser
  • You can now specify the service type instead of always having to choose between http and https when opening it
  • There is now a new service type to run commands on a tunneled connection after it is established
  • Services now indicate more clearly whether they are active or inactive

File transfers

  • You can now abort an active file transfer. You can find the button for that on the bottom right of the browser status bar
  • File transfers where the target write fails due to permission issues or missing disk space are now cancelled more cleanly

Miscellaneous

  • There are now translations for Swedish, Polish, and Indonesian
  • There is now an option to censor all displayed contents, allowing for a simpler screen-sharing workflow with XPipe
  • The Yubikey PIV and PKCS#11 SSH auth options have been made more resilient against PATH issues
  • XPipe will now commit a dummy private key to your git sync repository so that your git provider can potentially detect any leaks of your repository contents
  • Fix password manager requests not being cached and requiring an unlock every time
  • Fix Yubikey PIV and other PKCS#11 SSH libraries not asking for the PIN on macOS
  • Fix some container shells not working due to issues with /tmp
  • Fix fish shells launching as sh in the file browser terminal
  • Fix zsh terminals not launching in the current working directory in the file browser
  • Fix permission denied errors for script files in some containers
  • Fix some file names that required escapes not being displayed in the file browser
  • Fix special Windows files like OneDrive links not being shown in the file browser

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

Outlook

If this project sounds interesting to you, you can check it out on GitHub or visit the Website for more information.

Enjoy!

15
 
 

Help Needed: Homepage Configuration – Missing Widgets & API Errors

Hi everyone,

I'm running Homepage (v0.10.9) in Docker on Arch Linux ARM (Stormux) and encountering issues with missing widgets and API errors. Some widgets are showing as "Missing" on the dashboard, and I'm seeing repeated HTTP 401 errors for Portainer and Tailscale in the logs.

Setup Details:
- Homepage Version: v0.10.9
- Host OS: Arch Linux ARM (Stormux)
- Host IP: 192.168.1.137
- Docker Network: All containers are on homepage_net (gateway: 172.23.0.1)
- Docker Containers: Homepage, Portainer, Miniflux, Uptime Kuma, Glances, etc.

Issues:

  1. Several widgets showing as "Missing":
    - AdGuard (running on host, not in Docker)
    - Netdata
    - Uptime Kuma
    - Docker
    - Portainer
    - Miniflux
    - Tailscale
  2. Repeated HTTP 401 errors for Portainer and Tailscale in logs.

What I've Tried:

  1. Separated service definitions (services.yaml) and widget configurations (widgets.yaml).
  2. Updated widget URLs to use appropriate addresses (host IP for AdGuard, container names or Docker network IPs for containerized services).
  3. Regenerated API keys for Portainer and Tailscale.
  4. Verified all containers are on the same network (homepage_net).
  5. Enabled debug logging in Homepage.

Configuration Files:
I've uploaded my configuration files here: https://gist.github.com/Lanie-Carmelo/e01d973bc3b208e5082011e4b76532f6.
API keys and passwords have been redacted.
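For context, the widget entries in services.yaml follow roughly this shape (a from-memory sketch of the documented format rather than my literal config, which is in the gist; keys and IDs below are placeholders):

- Infrastructure:
    - Portainer:
        href: https://192.168.1.137:9443
        widget:
          type: portainer
          url: https://192.168.1.137:9443
          env: 1                   # Portainer environment/endpoint ID
          key: ptr_xxxxxxxx        # access token created in Portainer
    - Tailscale:
        widget:
          type: tailscale
          deviceid: device-id-here
          key: tskey-api-xxxxxxxx  # API access token from the admin console

My understanding is that repeated 401s usually mean the key or the env/deviceid value is wrong for that endpoint, rather than a networking problem.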

Any help troubleshooting this would be greatly appreciated! Let me know if you need additional details.

Hashtags & Mentions:
#SelfHosting #Linux #ArchLinux #Docker #HomeLab #OpenSource #WebDashboard #ArchLinuxARM
@selfhosted @linux @docker @opensource @selfhosting @selfhost

16
 
 

I’m still a newcomer to self hosting, and I could use some guidance on how to best accomplish what I’m trying to do.

Right now, I’ve got AdGuard, Jellyfin, and Nextcloud running on a Raspberry Pi 4 with a 500 GB external hard drive, using YunoHost. Those services are all available at my free domain name provided by YunoHost.

I’d like to run all of those services on the same Pi they’re on now, but using Docker, so I have more control and access to more applications. I would also like to configure a reverse proxy so I can access them at, for example, nextcloud.mydomain.com. (YunoHost doesn’t support custom domains from Porkbun, which is the registrar I’m using.)

What would be the least painful way to go about this? I understand how Docker works conceptually, but I admittedly don’t really know how to use it in practice. Are there any resources available that would get me up to speed quickly?
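For concreteness, this is roughly the shape I'm picturing: one compose file with a reverse proxy in front and each service routed by hostname. It's an untested sketch with image tags and Traefik labels from memory (HTTPS/Let's Encrypt left out for brevity), so treat it as a starting point rather than a working config:

services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik read container labels
    restart: unless-stopped

  nextcloud:
    image: nextcloud:latest
    volumes:
      - nextcloud_data:/var/www/html
    labels:
      - traefik.enable=true
      - traefik.http.routers.nextcloud.rule=Host(`nextcloud.mydomain.com`)
      - traefik.http.routers.nextcloud.entrypoints=web
    restart: unless-stopped

volumes:
  nextcloud_data: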

Appreciate the help - thanks!

17
 
 

Hey, community :)

I run a website that showcases the best open-source companies. Recently, I've added a new feature that filters self-hosted tools and presents them in a searchable format. Although there are other options available, like Awesome-Selfhosted, I found it difficult to find what I needed there, so I decided to display the information in a more digestible format.

You can check out the list here: https://openalternative.co/self-hosted

Let me know if there’s anything else I should add to the list.

Thanks!

18
 
 

See the video description for details on what it supports. From the email:

🆕 Self-hosters, you can now configure web server URLs in our desktop and mobile applications to enable features like Publish, Copy Link to Share, Custom URLs, and more. Download the latest version to give it a try!

19
 
 

I want to automate some homelab things, specifically deploying new Docker containers and updating existing ones.

I would like to publish my entire set of Docker Compose stacks (minus env vars) to a public Git repo, and then use something that selects a specific compose file from it, on a specific branch (so I can have a physically separate server for testing), and automatically deploys the container.

I thought of Jenkins, as it is quite flexible, and I am very willing to code it together, but are there any tools like this that I should look into instead? I've heard Ansible is not ideal for docker compose.
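To make it concrete, the kind of pipeline I have in mind is a per-stack workflow running on a self-hosted runner installed on the target Docker host, with a second runner/branch for the test box. A rough, untested sketch (stack paths and secret names are made up):

# .github/workflows/deploy-jellyfin.yaml
name: deploy-jellyfin
on:
  push:
    branches: [main]              # a testing branch would target another runner
    paths:
      - "stacks/jellyfin/**"
jobs:
  deploy:
    runs-on: self-hosted          # runner lives on the Docker host itself
    steps:
      - uses: actions/checkout@v4
      - name: Pull and restart the stack
        env:
          JELLYFIN_API_KEY: ${{ secrets.JELLYFIN_API_KEY }}   # env vars stay out of the public repo
        run: |
          docker compose -f stacks/jellyfin/compose.yaml pull
          docker compose -f stacks/jellyfin/compose.yaml up -d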

20
 
 

Hey everyone!

I'm looking into spinning up a WAF as the number of services I'm hosting is slowly growing. I want to have a better understanding of the traffic, and also the relative peace of mind that if there is a flaw in one of the services I'm hosting, the WAF could help mitigate it.

I've seen two big names come up while searching:

  • SafeLine
  • BunkerWeb

They are popular and look quite good all around but I don't want to just mindlessly take the project with the most GitHub stars.

What WAF are you using or have you used? Which ones do you recommend?

21
 
 

Update: It turned out I had something like 3 versions of PHP and 2 versions of Postgres installed in different places and fighting like animals. Cleaned up the mess, did a fresh install of PHP and Postgres, restored the Postgres data to the database, and Bob's your uncle. What a mess.

Thanks to everyone who commented. Your input is always so helpful.


Original Post

Hey everyone, it's me again. I'm now on NGINX (surprisingly simple), though I'm not here with a webserver issue today; rather, a Nextcloud-specific issue. I removed my last post about migrating from Apache to Caddy after multiple users pointed out security issues with what I was sharing, as well as suggesting Caddy would be unable to meet my complex hosting needs. Thank you, if that was you.

During the NGINX setup, which has gone shockingly smoothly, I moved all of my site root directories from /usr/local/apache2/secure to /var/www/.

Everything so far has moved over nicely... that is, until Nextcloud. It shows an "Internal Server Error" when loading. When I check the logs in nextcloud/data/nextcloud.log, it tells me Nextcloud can't find the config.php file and is still looking in the old Apache webroot. I have googled relentlessly for about four hours now, and everything I find is about people moving data directories, which is completely irrelevant. Does anyone know how to get F*%KING Nextcloud to realize that config.php is in /var/www/nextcloud/config where it belongs? I'm assuming Nextcloud has an internal variable to know where its own document root is, but I can't seem to find it.

Thanks for any tips.

Cheers

nextcloud.log <- you can click me

22
 
 

Those who run an Element/Matrix server on a PostgreSQL version older than 13 will need to update their PostgreSQL major version.

I found these instructions by 'maxkratz' on their GitHub page, which worked perfectly for me to go from 11 to 16.

Hopefully this helps someone!

23
 
 

So all of these apps like Gmail, Outlook, etc. let me log in to all of my different email accounts, but none of them seem to sync across devices. Is there an app that lets me log in to all my inboxes once and then sync that login info across PC, iPhone, and Android?

Right now I have to manually add all the email accounts on each device; none of the mobile apps sync with their PC counterparts.

24
 
 

Hello to everyone!

Very new to WebDAV and I'm pulling my hair out trying to set it up on Windows 11 for local network use only (no internet access needed). I've hit two major roadblocks, and I'm hoping someone here can save me from this nightmare.

The problems:

  1. HTTPS connection fails:
    I can only get WebDAV to work over HTTP, not HTTPS. I’ve created a self-signed certificate, but it’s still not working. Am I missing something obvious?

  2. Sync issues with Android apps and another computer:
    I’ve tried syncing with apps like Joplin, EasySync, DataBackup, and Diarium. While they can push data to the WebDAV server, they can’t pull data back. It’s like the PUT method works, but GET doesn’t. Is this a certificate issue, a permissions problem, or something else entirely?


What I’ve done so far:

Here’s my setup process in case it helps diagnose the issue:

1. Windows Features:

  • Enabled Internet Information Services (IIS) (which auto-enabled Web Management Tools and World Wide Web Services).
  • Enabled WebDAV Publishing under World Wide Web Services > Common HTTP Features.
  • Enabled Basic Authentication under World Wide Web Services > Security.

2. IIS Manager:

  • In Default Web Site > WebDAV Authoring Rules, I enabled WebDAV and added an authoring rule for All users with Read, Source, and Write permissions.
  • Enabled Basic Authentication and disabled Anonymous Authentication and ASP .NET Impersonation.
  • Created a self-signed certificate under Server Certificates and bound it to the Default Web Site for HTTPS.

3. Folder Setup:

  • Created a folder (e.g. C:\WebDAVShare) and added it as a Virtual Directory in IIS with an alias (e.g. webdav).
  • Set permissions for a local user (DESKTOP-PC\webdavuser) with Full Control.

4. Directory Browsing:

  • Enabled Directory Browsing in IIS.

5. Accessing WebDAV:

  • Accessed the server via https://192.168.1.10/webdav in my browser.
  • Entered credentials (DESKTOP-PC\webdavuser + password) and could see the files, but the connection was HTTP, not HTTPS.

Additional info:

  • I’ve exported and installed the self-signed certificate on both my Android devices (Android 13 & 15) as VPN and app user certificates. I couldn’t install them as CA certificates - not sure if that’s the issue.

What am I missing?

  • Why isn’t HTTPS working despite the self-signed certificate?
  • Why can't my Android apps (or another computer on the same network) pull data from the WebDAV server?
  • Is there a specific Windows feature, permission, or setting I’ve overlooked?

I’m at my wit’s end here, so any help would be hugely appreciated. If you’ve dealt with WebDAV on Windows 11 or have any insights, please chime in!

Thanks in advance and I'm sorry if this is not the right place to ask this :(

25
 
 

UPDATE: Thank you guys for all the suggestions! I got Navidrome installed on my NAS in a matter of minutes, got to test like a half dozen Subsonic compatible apps (both FOSS and Play Store), and it looks like Symfonium + Navidrome meets my needs. I'll keep testing before my free trial for Symfonium ends, but I really appreciate the nudge to try a new music server!
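For anyone who wants to try the same route, getting Navidrome up is roughly a one-file affair. This is a from-memory sketch rather than my exact setup, and the ND_* variables and NAS paths are assumptions worth checking against the Navidrome docs:

services:
  navidrome:
    image: deluan/navidrome:latest
    ports:
      - "4533:4533"
    environment:
      - ND_SCANSCHEDULE=1h          # rescan the library every hour
      - ND_LOGLEVEL=info
    volumes:
      - ./navidrome-data:/data      # Navidrome's own database/cache
      - /volume1/music:/music:ro    # NAS music share (path is a guess)
    restart: unless-stopped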


I'm self-hosting my music collection (Synology NAS), and while I've liked Poweramp, it only reads local music files, which means I have to copy many GB of music to my phone even if I'm not actively listening to it.

The Synology DS Audio app actually does what I want: it caches music locally as you're streaming it, but it reads directly from the NAS.

The only problem with DS Audio is that it sucks as an actual music player.

Are there any Android music players, preferably FOSS or at least privacy-friendly, that will read from the NAS and cache in an intelligent way but also works well as an actual music player?

I did try Symfonium, but couldn't get it to work with Webdav or SMB, plus the dev comes off as a real asshole, so I'd rather not give them money.

EDIT: To clarify what I'm looking for:

  • The app must be able to connect to my NAS music collection (through my local network is fine).
  • Most importantly, the app must be able to cache my music either as I'm streaming it, or in advance when I'm running through a playlist... then future plays of the song should be from the cache.
  • I do NOT want to have to manually download or sync files, which is how I've been doing it, and I don't like this at all.

If you've used the Synology DS Audio app, then you'll know exactly the behaviour I'm looking for. It really is a shame that DS Audio sucks as a music player, or else it would be exactly what I'm looking for.
