Hi everyone. I've been considering S3 Glacier Deep Archive as a backup destination, and wanted to know:

  1. Which software do you use to encrypt client-side, obfuscate, compress, and deduplicate the data before sending it to S3?
  2. What is the difference between Restore Requests (Bulk) and Outbound data transfer, and which one will I be using when I want to pull my data back from AWS?

I'll be storing roughly 8 TB of data, which is why I'm looking at inexpensive ways to back it up other than buying an HDD outright.
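
For a rough sense of scale, here's my back-of-envelope, assuming the commonly quoted us-east-1 list prices (worth double-checking against current rates before committing):

```
# approximate costs for ~8 TB (8192 GB) in Deep Archive, assumed list prices:
#   storage:  8192 GB x $0.00099/GB-month = ~$8/month
#   restore:  8192 GB x $0.0025/GB (Bulk) = ~$20 per full restore
#   egress:   8192 GB x $0.09/GB          = ~$737 per full download
# the egress fee dominates a full recovery, so budget for it up front
```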

Thanks!

CosmicTurtle@lemmy.world 3 points 1 year ago
  1. I don't encrypt before I push to S3, which is probably bad practice on my part; I just rely on AWS server-side encryption, and my backups are low-risk (imo). That said, I lock down the bucket so that only my account can access the objects. For compression I use `tar cjf` (bzip2). Protip: once the tar file is made, run `tar tjf "$archiveFile" > "$archiveFile-ls.txt"` and store the resulting file alongside the tar file in Standard storage. That way you know what's in the archive without paying for a restore. (See the sketches after point 2.)

  2. Both. A Restore Request stages a copy of the data out of Glacier into readily accessible (Standard-like) storage. Note that I said copy: when you perform a restore, your original object stays in Glacier, and AWS makes a temporary copy available in the same bucket for the number of days you specify. Once the restore is complete, you can download that copy like any other S3 object, and the download is what triggers the Outbound data transfer fee. (See the restore sketch below.)
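
A minimal sketch of the tar workflow from point 1 (file and path names are hypothetical):

```
# create a bzip2-compressed archive of the data to back up
archiveFile=backup-2023-11.tar.bz2
tar cjf "$archiveFile" /path/to/data

# list the archive's contents into a text file kept in Standard storage,
# so you can see what's inside without paying for a restore
tar tjf "$archiveFile" > "$archiveFile-ls.txt"
```

And a sketch of the restore flow from point 2 using the AWS CLI (bucket and key names are hypothetical; Bulk is the cheapest tier for Deep Archive and typically completes within 48 hours):

```
# ask AWS to stage a temporary copy of the object for 7 days (Bulk tier)
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key backup-2023-11.tar.bz2 \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# poll until the Restore header shows ongoing-request="false"
aws s3api head-object --bucket my-backup-bucket --key backup-2023-11.tar.bz2

# then download the staged copy; this step incurs the outbound transfer fee
aws s3 cp s3://my-backup-bucket/backup-2023-11.tar.bz2 .
```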

MigratingtoLemmy@lemmy.world 1 point 1 year ago

Thanks, I'll keep that in mind. I'd encrypt everything client-side since I don't want anyone, including the cloud provider, to know what I'm storing.
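
One way to do that, as a sketch (gpg symmetric encryption, hypothetical file names; tools like restic or borg also encrypt and deduplicate client-side, though they need more interactive access to the repository than Deep Archive comfortably allows):

```
# encrypt the archive client-side with a passphrase before it leaves the machine
gpg --symmetric --cipher-algo AES256 backup-2023-11.tar.bz2   # writes .tar.bz2.gpg

# upload only the ciphertext, straight into the Deep Archive storage class
aws s3 cp backup-2023-11.tar.bz2.gpg \
  s3://my-backup-bucket/ --storage-class DEEP_ARCHIVE
```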