I recently moved my files to a new ZFS pool and used that chance to properly configure my datasets.

This led me to discover ZFS deduplication.

Most of my storage (~7-8 TB) is used by my Jellyfin library, which consists mostly of uncompressed Blu-ray rips, so I thought I might be able to save some space by using deduplication in addition to compression.

Has anyone here used that for similar files before? What was your experience with it?

I am not too worried about performance. The dataset in question rarely changes; basically only when I add more media every couple of months. I also overshot my CPU target when originally configuring my server, so there is plenty of headroom there. I have 32 GB of RAM, which is not fully utilized either (and I would not mind upgrading to 64 GB too much).

My main concern is that I am not sure it would actually be useful. Given the amount of data and the similarity in file type, I suspect there would statistically be a fair amount of block-level duplication, but I could not find any real-world data or experiences confirming that.
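
In case it helps anyone in the same situation: OpenZFS can simulate deduplication on existing data with `zdb -S`, which prints a DDT histogram and an estimated dedup ratio without actually enabling dedup. A minimal sketch, assuming a pool called `tank` and a dataset called `tank/media` (both placeholders):

```sh
# Simulate deduplication across an existing pool (read-only, but it scans
# every block, so it can take a while and use noticeable RAM/CPU).
zdb -S tank

# The summary line at the end reports the estimated "dedup = X.XX" ratio;
# anything close to 1.00 means dedup would save almost nothing.

# For comparison, see what compression is already achieving on the dataset:
zfs get compression,compressratio tank/media
```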

Uncut_Lemon@lemmy.world 1 points 20 hours ago

You're better off just enabling compression on the dataset.
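
For example (dataset name is a placeholder; zstd needs OpenZFS 2.0+, otherwise lz4 is the usual choice):

```sh
# Enable compression on the dataset; only newly written data is compressed,
# existing files stay as-is until they are rewritten.
zfs set compression=zstd tank/media

# Check how much it is actually saving.
zfs get compressratio tank/media
```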

Dedupe, even with the recent improvements, has huge overheads and will generally degrade in performance as the dataset grows, because it needs to keep the dedup table (the 'routing' table that redirects requests for deduplicated blocks to the actual stored data) in RAM. Apparently the latest OpenZFS release reduces the speed losses on larger datasets, but it's still subpar compared to just compressing the data.
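
If you do ever turn dedup on, you can see what that table is costing you (pool name is a placeholder):

```sh
# Show dedup table (DDT) statistics for the pool, including how many
# entries it holds and their in-core (RAM) and on-disk sizes.
zpool status -D tank

# The DEDUP column here shows the ratio actually achieved across the pool.
zpool list tank
```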

Video files are already heavily compressed, so you'd be better off transcoding them to a more efficient codec, like x265 (HEVC) or AV1, to save space on video files.
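
A minimal ffmpeg sketch of that kind of re-encode (filenames, CRF, and preset are just examples; re-encoding lossy video always costs some quality, so test the settings on one file first):

```sh
# Re-encode the video stream to HEVC (x265) while copying audio and
# subtitle streams untouched. Lower CRF = higher quality / bigger file.
ffmpeg -i input.mkv -map 0 -c copy -c:v libx265 -crf 20 -preset slow output.mkv
```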