this post was submitted on 02 Sep 2023 to Selfhosted
29 points (93.9% liked)

Hello! I need a guide on how to migrate data from shared hosting to Docker. All the guides I can find are about migrating Docker containers, though! I am going to use a PaaS, CapRover, which sets up everything. Can I just import my data into the regular filesystem, or does the containerisation have sandboxed filesystems? Thanks!

top 23 comments
[–] krolden@lemmy.ml 17 points 1 year ago (4 children)

https://docs.docker.com/storage/volumes/

Just move your data, then either create bind mounts to those directories, or create a new Docker volume and copy the data into the volume's path on your filesystem.
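
For instance, a rough sketch of both approaches (container name, image name, and the /srv/app-data path are just placeholders):

# Option 1: bind mount an existing host directory into the container
docker run -d --name myapp -v /srv/app-data:/data some-image

# Option 2: create a named volume and copy your existing data into it
docker volume create app-data
docker run -d --name myapp -v app-data:/data some-image
docker cp /srv/app-data/. myapp:/data/

# To find where Docker keeps the named volume on the host:
docker volume inspect app-data --format '{{ .Mountpoint }}'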

I also suggest looking into podman instead of Docker. It's basically a drop-in replacement for Docker.

[–] MangoPenguin@lemmy.blahaj.zone 6 points 1 year ago (3 children)

Podman definitely isn't a drop-in replacement; it's like 90% there.

[–] krolden@lemmy.ml 4 points 1 year ago (1 children)
[–] vzq@lemmy.blahaj.zone 2 points 1 year ago

From my experience, a bit of systemd config.

[–] Valmond@lemmy.mindoki.com 1 points 1 year ago

Serious question: why change? Doesn't docker do the job (isn't it FOSS)?

[–] HybridSarcasm@lemmy.world 0 points 1 year ago (2 children)

I’ll consider it a drop-in replacement when Kubernetes can use it.

[–] ieatpillowtags@lemm.ee 9 points 1 year ago (1 children)

Not sure what you mean, Podman isn’t a container runtime and Kubernetes has deprecated it’s docker shim anyway.

[–] elbarto777@lemmy.world -2 points 1 year ago (1 children)
[–] ieatpillowtags@lemm.ee 5 points 1 year ago* (last edited 1 year ago) (1 children)

lol you’re not wrong but was it worth saying?

[–] elbarto777@lemmy.world 1 points 1 year ago

Yes. Very much. I understand I'm being pedantic, but I don't really do it to bash on the writer. I do it for me. It's like an itch. I see "its" being wrongly used, and writing it the correct way is like scratching that itch.

Does it make a difference? Who knows. Some people tell me to go eat dicks, some people thank me because they either didn't know the difference, or it was a typo (ironically, I've made this very mistake in the past!)

Also, I understand that languages evolve, so who knows, maybe "it's" instead of "its" will become the norm. But at the moment, I find it bothersome (like "your/you're" and "would of").

And I also understand that we all come from a variety of backgrounds and educational skills. Some people know less stuff than I do, some people know waaaay more than I do. I personally appreciate when someone corrects me.

In the end, this is just lemmy, so I don't take things too seriously here (in spite of this lengthy essay, lol!) This is an escape for me. If you got this far, thanks for reading.

[–] lutillian@sh.itjust.works 6 points 1 year ago* (last edited 1 year ago)

Kubernetes uses CRI runtimes like CRI-O or containerd nowadays. If you're using Kubernetes with the intent of exposing your Docker sockets to your workloads, that's just asking for all sorts of fun, hard-to-debug trouble. It's best not to tie yourself to your k8s cluster's underlying implementation; you get a lot more portability, and most cloud providers won't even let you do that on a managed service anyway.

If you want something more akin to how kubernetes does it, there's always nerdctl on top of the containerd interface. However nerdctl isn't really intended to be used as anything other than a debug tool for the containerd maintainers.

Not to mention podman can now launch Kubernetes workloads locally, à la docker compose.
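
For example, something like this with a recent podman (the container name and file name are placeholders):

# Turn an existing podman container into a Kubernetes YAML manifest
podman kube generate mycontainer > kube.yaml

# Run that manifest locally with podman, no cluster required
podman kube play kube.yaml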

[–] BlinkerFluid@lemmy.one 5 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah I saw this post and thought "what a coincidence, I'm looking to move from docker!"

Everybody's going somewhere, I suppose.

[–] krolden@lemmy.ml 3 points 1 year ago* (last edited 1 year ago) (1 children)

podman generate systemd really sold it for me. Also the auto-update feature is great. No more need for Watchtower.
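
For anyone curious, it looks roughly like this in rootless/user mode (container and image names are just examples):

# Create the container with the auto-update label (fully-qualified image name required for auto-update)
podman run -d --name mycontainer --label io.containers.autoupdate=registry docker.io/library/nginx

# Generate a systemd unit for it and enable it as a user service
podman generate systemd --new --files --name mycontainer
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service

# Let podman pull newer images and restart the service automatically
systemctl --user enable --now podman-auto-update.timer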

[–] BlinkerFluid@lemmy.one 6 points 1 year ago* (last edited 1 year ago) (2 children)

My one... battlefield with docker was trying to run a WireGuard VPN system in tandem with an AdGuard DNS filter and somehow keep nftables/iptables from having a raging bitch fit over it, because both WireGuard and docker edit your table entries in different orders. Literally nothing I did made any difference: staggering WireGuard's load time, making the entries myself before docker starts (then resolvconf breaks for no reason). Oh, and they also exist on a system with a qBittorrent container that connects to a VPN of its own before starting. Yay!

And that's why all of that is on a raspberry pi now and will never be integrated back into the image stacks on my main server.

Just... fuck it, man. I can't do it again. It's too much.

[–] krolden@lemmy.ml 4 points 1 year ago

Docker networking is hell

[–] stanka@lemmy.ml 1 points 1 year ago

I wrote this: https://github.com/josefwells/nft_tool

I was in almost exactly the same situation; I got mad and took control of my firewall.

[–] SheeEttin@lemmy.world 2 points 1 year ago (1 children)

Yes, I would set up the containers empty, then import your data however the applications want it: either via their web interface, or by dropping it into their bind-mounted directory.

Thanks! So, here in the CapRover demo config for WordPress, the path says: var/www

This is the regular var/www? Not a different one for the WordPress container?

I would simply put my current WP files (from public-html) in that directory?

Do the apps all share a db?

[–] anarchotaoist@links.hackliberty.org 1 points 1 year ago (1 children)

Thanks! I will have to research volumes! Bind mount - that would mean messing with fstab, yes? I set up a bind for my desktop but entering mounts in fstab has borked me more than once!

[–] prenatal_confusion@lemmy.one 5 points 1 year ago

No, it's declared in the compose file or the docker run command, and you specify a folder as the target. No fstab needed.
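
For example (the host folder here is made up; /var/www/html is where the official WordPress image keeps its files):

# docker run: -v <host folder>:<path inside the container>
docker run -d -v /home/me/wordpress-data:/var/www/html wordpress

# or the equivalent in docker-compose.yml:
#   services:
#     wordpress:
#       image: wordpress
#       volumes:
#         - /home/me/wordpress-data:/var/www/html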

[–] fmstrat@lemmy.nowsci.com 3 points 1 year ago

I'll try to answer the specific question here about importing data and sandboxing. You wouldn't have to sandbox, but it's a good idea. If we think of a Docker container as an "encapsulated version of the host", then let's say you have:

  • Service A running on your cloud
    • Requires apt-get install -y this that and the other to run
    • Uses data in /data/my-stuff
  • Service B running on your cloud
    • Requires apt-get install -y other stuff to run
    • Uses data in /data/my-other-stuff

In the cloud, the Service A data can be accessed by Service B, increasing the attack vector of a leak. In Docker, you could move all your data from the cloud to your server:

# On cloud
cd /
tar cvfz data.tgz data
# Copy data.tgz to /tmp on the local server (e.g. with scp), then:
# On local server
mkdir -p /local/server
cd /local/server
tar xvfz /tmp/data.tgz
# Now you have /local/server/data as a copy

Your Dockerfile for Service A would be something like:

FROM ubuntu
RUN apt-get update && apt-get install -y this that and the other
RUN whatever to install Service A
CMD whatever to run

Your Dockerfile for Service B would be something like:

FROM ubuntu
RUN apt-get update && apt-get install -y other stuff
RUN whatever to install Service B
CMD whatever to run

This makes two unique "systems". Now, in your docker-compose.yml, you could have:

version: '3.8'

services:
  
  service-a:
    image: service-a
    volumes:
      - /local/server/data:/data

  service-b:
    image: service-b
    volumes:
      - /local/server/data:/data

This would make everything look just like the cloud, since /local/server/data would be bind-mounted to /data in both containers (services). The proper way would be to isolate:

version: '3.8'

services:
  
  service-a:
    image: service-a
    volumes:
      - /local/server/data/my-stuff:/data/my-stuff

  service-b:
    image: service-b
    volumes:
      - /local/server/data/my-other-stuff:/data/my-other-stuff

This way each service only has access to the data it needs.

I hand typed this, so forgive any errors, but hope it helps.

[–] key@lemmy.keychat.org 1 points 1 year ago

You can copy files into the Docker image via a COPY in the Dockerfile, or you can mount a volume to share data from the host filesystem into the Docker container at runtime.
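
A minimal sketch of both options (image name and paths are placeholders):

# Option 1: bake the files into the image at build time, in the Dockerfile:
#   COPY ./public-html /var/www/html

# Option 2: mount them from the host at runtime:
docker run -d -v /local/server/public-html:/var/www/html my-image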

[–] Decronym@lemmy.decronym.xyz 0 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

DNS: Domain Name Service/System
VPN: Virtual Private Network
k8s: Kubernetes container management package
