this post was submitted on 23 Feb 2025
44 points (89.3% liked)

Selfhosted

42765 readers
1360 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


founded 2 years ago

I set it to debug at some point and forgot, maybe? Idk, but why the heck is the default config of the official Docker image to keep all logs, forever, in a single file with no rotation?

Feels like log files 101. Anyway, this explains why my storage usage grew slowly but unexpectedly.

top 41 comments
[–] mhzawadi@lemmy.horwood.cloud 2 points 2 hours ago

For some helpful config: the below is the logging config I have, and logs have never been an issue.

You can even add 'logfile' => '/some/location/nextcloud.log', to get the logs in a different place

  'logtimezone' => 'UTC',
  'logdateformat' => 'Y-m-d H:i:s',
  'loglevel' => 2,               // 2 = warnings and above
  'log_rotate_size' => 52428800, // 50 MiB
[–] MonkeMischief@lemmy.today 1 points 2 hours ago (1 children)

Wow, thanks for the heads up! I use Nextcloud AIO and backups take VERY long. I need to check about those logs!

Don't know if I'm just lucky or what, but it's been working really well for me and takes good care of itself for the most part. I'm a little shocked seeing so many complaints in this thread because elsewhere on the Internet that's the go-to method.

[–] MTK@lemmy.world 1 points 24 minutes ago

It can be finicky, especially if you stray from the main instructions. Generally I do think it's okay, but updates break it a bit every now and again.

[–] Shimitar@downonthestreet.eu 20 points 14 hours ago (2 children)

You should always set up logrotate. Yes, the good old Linux logrotate...
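For anyone who hasn't set one up before, a minimal drop-in for /etc/logrotate.d might look like this (a sketch; the log path and limits are assumptions, adjust them to wherever your Nextcloud data volume actually lives):

```
# /etc/logrotate.d/nextcloud (log path is an assumption)
/var/www/nextcloud/data/nextcloud.log {
    weekly
    rotate 4
    size 100M
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

copytruncate matters here because Nextcloud keeps the file open; without it you'd have to signal the writer after each rotation.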

[–] non_burglar@lemmy.world 19 points 12 hours ago (1 children)

I don't disagree that logrotate is a sensible answer here, but making that the responsibility of the user is silly.

[–] Shimitar@downonthestreet.eu -3 points 4 hours ago (3 children)

Are you crazy? I understand that we are used to dumbed down stuff, but come on...

Rotating logs is in the ABC of any sysadmin, even before backups.

First secure your ssh logins, then your logs, then your fail2ban, then your backups...

To me, that's in the basic stuff you must always ensure.

[–] MTK@lemmy.world 1 points 20 minutes ago

This is a Docker image! If it's marketed as ready to go and all-in-one, it should have basic things like that built in.

If I were running this as a full system with a user base, then of course I would go over everything and make sure it all makes sense for my needs. But since my needs were just a running NC instance, it made sense to run a simple container with mostly default config. If your image ships with a terrible default config, then you are missing the point a bit.

[–] Appoxo@lemmy.dbzer0.com 1 points 45 minutes ago

Log rotation is the ABC of the developer.
Why should I need 3rd-party tools to fix the developer's work??

[–] catloaf@lemm.ee 1 points 3 hours ago (1 children)

Those should also all be secure by default. What is this, Windows?

[–] Shimitar@downonthestreet.eu 2 points 3 hours ago

Just basic checks I prefer to verify myself, not leave to the distribution's good faith. If all is set, good to go. Otherwise, fix and move on.

Especially with self-hosted stuff, which is a bit more custom than usual.

[–] catloaf@lemm.ee 26 points 14 hours ago (2 children)

We shouldn't each have to configure log rotation for every individual service. That would require identifying what and how each one logs data in the first place, then implementing a logrotate config. Services should include a reasonable default in logrotate.d as part of their install package.

[–] Shimitar@downonthestreet.eu 1 points 4 hours ago (1 children)

Agreed, but going the container route, those nice basic practices are dead.

And also, Nextcloud being a PHP service, it can't by definition ship with a logrotate config, because it's never packaged by your repo.

[–] peregus@lemmy.world 3 points 2 hours ago

The fact (IMHO) is that the logs shouldn't be there, in a persistent volume.

[–] RubberElectrons@lemmy.world 2 points 14 hours ago

Ideally yes, but I've had to do this regularly for many services developed both in-house and out of house.

Solve problems, and maybe share your work if you like, I think we all appreciate it.

[–] neo@lemmy.hacktheplanet.be 14 points 15 hours ago (2 children)

Imho it’s because docker does away with (abstracts?) many years of sane system administration principles (like managing logfile rotations) that you are used to when you deploy bare metal on a Debian box. It’s a brave new world.

[–] scrubbles@poptalk.scrubbles.tech 39 points 15 hours ago (2 children)

It's because with docker you don't need to do log files. Logging should be to stdout, and you let the host, orchestration framework, or whatever is running the container handle logs however it wants to. The container should not be writing log files in the first place; containers should be immutable except for core application logic.
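As one hedged example: when a container does log to stdout, the Docker daemon itself can cap and rotate the captured output host-wide via /etc/docker/daemon.json (the sizes here are arbitrary examples):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With that in place, every container's stdout log is rotated at 10 MB and capped at three files, no per-app logrotate needed.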

[–] Appoxo@lemmy.dbzer0.com 3 points 43 minutes ago

At worst it saves in the config folder/volume where persistent stuff should be.

[–] neo@lemmy.hacktheplanet.be 1 points 1 hour ago

Good point!

[–] poVoq@slrpnk.net 5 points 12 hours ago* (last edited 12 hours ago) (1 children)

Or you can use Podman, which integrates nicely with Systemd and also utilizes all the regular system means to deal with log files and so on.
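As a sketch of what that integration looks like (assuming Podman 4.4+ with quadlet support; the file name and image tag are examples), a systemd-managed container that sends its output to the journal could be declared as:

```ini
# ~/.config/containers/systemd/nextcloud.container (hypothetical quadlet unit)
[Container]
Image=docker.io/library/nextcloud:stable
LogDriver=journald

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a daemon-reload, journalctl --user -u nextcloud.service shows the container output under the usual journald retention rules.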

[–] neo@lemmy.hacktheplanet.be 1 points 1 hour ago (1 children)

Good suggestion, although I do feel it always comes back to this “many ways to do kind of the same thing” that surrounds the Linux ecosystem. Docker, podman, … some claim it’s better, I hear others say it’s not 100% compatible all the time. My point being more fragmentation.

[–] Appoxo@lemmy.dbzer0.com 2 points 42 minutes ago

100 ways to configure a static ip.
Why does it need that? At least one per distro controlled by the distro-maintainers.

[–] AMillionMonkeys@lemmy.world 12 points 15 hours ago (4 children)

Everything I hear about Nextcloud scares me away from messing with it.

[–] sith@lemmy.zip 0 points 2 hours ago* (last edited 2 hours ago) (1 children)

I stopped using Nextcloud a couple of years ago after it corrupted my encrypted storage. I'm giving it a try again because of political emergency. But we sure need a long term replacement. Written in Rust or some other sane language.

[–] MTK@lemmy.world 1 points 15 minutes ago

Nc is great, it really is amazing that it is foss. Sure it isn't the slickest or fastest, and it does need more maintenance than most foss services, but it is also more complex and has so many great features.

I really recommend nc, 99% of the time it just works for me. It just seems that their docker was done pretty poorly imo, but still it just works most of the time.

[–] ikidd@lemmy.world 2 points 9 hours ago (1 children)

Just use the official Docker AIO and it is very, very little trouble. It's by far the easiest way to use Nextcloud and the related services like Collabora and Talk.

[–] peregus@lemmy.world 1 points 2 hours ago

The problem is that the log file is inside the container, in the www folder.

[–] ocean@lemmy.selfhostcat.com 3 points 14 hours ago

If you only use it for files (the only thing it's good for imho), it's awesome! :)

[–] sailorzoop@lemmy.librebun.com 2 points 13 hours ago (1 children)

Reminds me of when my Jellyfin container kept growing its log because of something watchtower related. Think it ended up at 100GB before I noticed. Not even debug, just failed updates I think. It's been a couple of months.

[–] Appoxo@lemmy.dbzer0.com 3 points 42 minutes ago

Well, that's not Jellyfin's fault but rather Watchtower's...

[–] breadsmasher@lemmy.world 2 points 15 hours ago (1 children)

101 of log files

is to configure it yourself

[–] MTK@lemmy.world 1 points 13 minutes ago

Look, defaults are a thing, and if your defaults suck then you've made a mistake; if your default is to save 100GB of logs in one file, then something is wrong. The default in Docker images should just be to not save any log files on the persistent volumes.

[–] JASN_DE@feddit.org -2 points 15 hours ago (1 children)

Feels like blaming others for not paying attention.

[–] scrubbles@poptalk.scrubbles.tech 17 points 15 hours ago (1 children)

Persistent storage should never be used for logging in docker. Nextcloud is one of the worst offenders of breaking docker conventions I've found, this is just one of the many ways they prove they don't understand docker.

Logs should simply be logged to stdout, which will be read by docker or by a logging framework. There should never be "log files" for a container, as it should be immutable, with persistent volumes only being used for configuration or application state.
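For containers that do follow the stdout convention, rotation can then be enforced per service by the runtime; a compose-file sketch (image tag and size limits are arbitrary examples, not Nextcloud's documented setup):

```yaml
services:
  nextcloud:
    image: nextcloud:stable
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

This caps what `docker logs` retains without the application knowing or caring where its output goes.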

[–] exu@feditown.com 7 points 12 hours ago (2 children)

The AIO container is so terrible, like, that's not how you're supposed to use Docker.
It's unclear whether OP was using that or saner community containers, might just be the AIO one.

[–] merthyr1831@lemmy.ml 6 points 9 hours ago (1 children)

It's too late for me now coz I didn't do my research and I've already migrated over, but good god ever loving fuck was the AIO container the hardest of all my services to set up.

Firstly, it throws a fit if you don't set up the filesystem specifically for php and the postgres db as if it were bare metal. Idk how or why every other container I use can deal with UID 568 but Nextcloud demands www-data and netdata users.

When that's done, you realise it won't run background tasks because it expects cron to be set up. You have to set a cronjob that enters the container to run the cron, all to avoid the "recommended" approach of using a second nextcloud instance just to run background tasks.
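The workaround described above boils down to a host crontab entry that execs into the container (the container name and PHP path here are assumptions, check them against your own setup):

```
# host crontab: run Nextcloud background jobs every 5 minutes
*/5 * * * * docker exec -u www-data nextcloud php -f /var/www/html/cron.php
```

That avoids running a second container whose only job is firing cron, at the cost of coupling the host's crontab to the container's name.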

And finally (and maybe this is just a fault of TrueNAS' setup wizard), I still had to enter the container shell to set up a bunch of basic settings like phone region. Come on.

Straight up worse than installing it bare metal

[–] MTK@lemmy.world 1 points 27 minutes ago

Yes! When I read that I need a second instance for cron I was like "wtf?" I know NC are not the only ones doing that but still

[–] scrubbles@poptalk.scrubbles.tech 7 points 11 hours ago (2 children)

I have by now lost not hours but days debugging their terrible AIO container. Live production code stored in persistent volumes. Files scattered around the main drive in seemingly arbitrary locations. Environment variables that are consistently ignored/overridden. It's probably my number one example of a worst-case docker container and what not to do when designing your own.

[–] peregus@lemmy.world 2 points 2 hours ago

Me too, and I went back to the standalone community container.

[–] ilmagico@lemmy.world 4 points 10 hours ago (1 children)

Yeah, their AIO setup is just bad. The more "traditional", community-supported docker compose files work well; I've been using them for years. They're not perfect, but they do the job. Nextcloud is not bad per se, just avoid their AIO docker.

[–] grimer@lemmy.world 1 points 7 hours ago

I’ve only ever used the AIO and it’s the only one of my problem containers out of about 30. Would you mind pointing me to some decent community compose files? Thanks!!