this post was submitted on 22 Mar 2024
48 points (96.2% liked)

Selfhosted

39488 readers
299 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago

I have many services running on my server, and about half of them use Postgres. Back when I installed everything manually, I would always create a new database and reuse the same Postgres instance for each service, which seemed quite logical to me: the least overhead, fast startup, etc.
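(For context, the per-service setup I mean is roughly this, run as the postgres superuser; the service name and password here are just placeholders:)

```sql
-- one shared Postgres instance, one role and one database per service
CREATE ROLE someapp WITH LOGIN PASSWORD 'change-me';
CREATE DATABASE someapp OWNER someapp;
-- repeat for each service; each app then connects with its own credentials
```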

But since I started using Docker, most of the docker-compose files come with their own instance of Postgres. Until now I've just let them do that and have been running several instances of Postgres. But it's getting kind of ridiculous how many Postgres instances I run on one server.

Do you guys run several dockerized instances of Postgres, or do you rewrite the docker-compose files to point everything at your one central Postgres instance? And are there usually any problems with that, like version incompatibilities?
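(To make the question concrete, a compose file rewritten to share one central Postgres might look something like this; the app image and environment variable names are made up, and in practice the exact variable names differ per image:)

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data

  someapp:
    image: example/someapp
    environment:
      DB_HOST: postgres   # resolves via the compose network
      DB_NAME: someapp
      DB_USER: someapp
      DB_PASSWORD: change-me
    depends_on:
      - postgres

volumes:
  pgdata:
```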

[–] MrMcGasion@lemmy.world -1 points 6 months ago (9 children)

That's a big reason I actively avoid Docker on my servers. I don't like running a dozen instances of my database software, and considering how much work it would take to go through and configure each Docker container to use an external database, to me it's just as easy to learn to configure each piece of software yourself and know what's going on under the hood, rather than relying on a bunch of defaults made by whoever made the Docker image.

I hope a good number of my issues with Docker have been solved since I last seriously tried it (which was back when they were literally giving away free T-shirts to get people to try it). But the times I've peeked at it since, it seems to me that Docker gets in the way more often than it solves problems.

I don't mean to yuck other people's yum, though, so if you like Docker and it works for you, don't let me stop you from enjoying it. I just can't justify the overhead for myself (both at the system-resource level and at the personal-time level of inserting an additional layer of configuration between me and my software).

[–] sardaukar@lemmy.world 3 points 6 months ago (6 children)

It's kinda weird to see the Docker scepticism around here. I run 40ish services on my server, all with a single docker-compose YAML file. It just works.

Manually tweaking every project I run seems impossibly time-draining in comparison. I don't care about the underlying mechanics; I just want shit to work.

[–] Moonrise2473@feddit.it 1 points 6 months ago (1 children)

I have everything in Docker too, but a single YAML file with 40 services is a bit extreme. Wouldn't you be forced to upgrade everything together?

[–] sardaukar@lemmy.world 1 points 6 months ago

Not really. The docker-compose file has services in it, and they're separate from each other. If I want to update sonarr but not jellyfin (or its DB service), I can.
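(That per-service update flow is roughly these two commands; "sonarr" is just the service name from the comment above, and nothing else in the compose file is touched:)

```
# pull the newer image for one service, then recreate only that service
docker compose pull sonarr
docker compose up -d sonarr
```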
