Hello! I need a guide on how to migrate data from shared hosting to Docker. All the guides I can find are about migrating docker containers though! I am going to use a PaaS - Caprover which sets up everything. Can I just import my data into the regular filesystem or does the containerisation have sandboxed filesystems? Thanks!

  • krolden@lemmy.ml
    1 year ago

    https://docs.docker.com/storage/volumes/

    Just move your data and then either create bind mounts to those directories or create a new volume in docker and copy the data to the volume path in your filesystem.
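
    Roughly, either option looks like this (a sketch only; the container name, image, and paths are placeholders, not anything from your setup):

    # option 1: bind mount an existing host directory into the container
    docker run -d --name myapp -v /srv/myapp-data:/data myimage:latest

    # option 2: create a named volume, seed it with the existing data, then use it
    docker volume create myapp-data
    docker run --rm -v myapp-data:/data -v /srv/myapp-data:/src alpine cp -a /src/. /data/
    docker run -d --name myapp -v myapp-data:/data myimage:latest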

    I also suggest looking into podman instead of docker. It's basically a drop-in replacement for docker.
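
    For example, the CLI is the same (container name and image here are just to illustrate):

    # same flags and syntax as docker, but no root daemon required
    podman run -d --name web -p 8080:80 docker.io/library/nginx:latest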

    • BlinkerFluid@lemmy.one
      1 year ago

      Yeah I saw this post and thought “what a coincidence, I’m looking to move from docker!”

      Everybody’s going somewhere, I suppose.

      • krolden@lemmy.ml
        1 year ago

        podman generate systemd really sold it for me. The auto-update feature is great too; no more need for Watchtower.
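
        Rough sketch of both, assuming an existing rootless container called "myapp" (all names here are placeholders):

        # generate a systemd user unit from the container and let systemd manage it
        podman generate systemd --new --name myapp > ~/.config/systemd/user/container-myapp.service
        systemctl --user daemon-reload
        systemctl --user enable container-myapp.service

        # auto-update: the container must carry the label io.containers.autoupdate=registry;
        # the bundled timer then pulls newer images and restarts the unit
        systemctl --user enable --now podman-auto-update.timer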

        • BlinkerFluid@lemmy.one
          1 year ago

          My one… battlefield with docker was trying to run a wireguard VPN alongside an adguard DNS filter without nftables/iptables having a raging bitch fit over it, because both wireguard and docker edit your table entries in different orders, and literally nothing I did made any difference: staggering wireguard's start time, making the entries myself before docker starts (then resolvconf breaks for no reason). Oh, and they also live on a system with a Qbittorrent container that connects to a VPN of its own before starting. Yay!

          And that’s why all of that is on a raspberry pi now and will never be integrated back into the image stacks on my main server.

          Just… fuck it, man. I can’t do it again. It’s too much.

    • anarchotaoist@links.hackliberty.org (OP)
      1 year ago

      Thanks! I will have to research volumes! Bind mount - that would mean messing with fstab, yes? I set up a bind for my desktop but entering mounts in fstab has borked me more than once!

  • fmstrat@lemmy.nowsci.com
    1 year ago

    I’ll try to answer the specific question here about importing data and sandboxing. You wouldn’t have to sandbox, but it’s a good idea. If we think of a Docker container as an “encapsulated version of the host”, then let’s say you have:

    • Service A running on your cloud
      • Requires apt-get install -y this that and the other to run
      • Uses data in /data/my-stuff
    • Service B running on your cloud
      • Requires apt-get install -y other stuff to run
      • Uses data in /data/my-other-stuff

    In the cloud, the Service A data can be accessed by Service B, which widens the attack surface if either service is compromised. In Docker, you could move all your data from the cloud to your server:

    # On cloud
    cd /
    tar cvfz data.tgz data
    # Copy /data.tgz to the local server (e.g. with scp or rsync) so it lands in /tmp
    # On local server
    mkdir -p /local/server
    cd /local/server
    tar xvfz /tmp/data.tgz
    # Now you have /local/server/data as a copy
    

    You’re Dockerfile for Service A would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y this that and the other
    RUN whatever to install Service A
    CMD whatever to run
    

    You’re Dockerfile for Service B would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y other stuff
    RUN whatever to install Service B
    CMD whatever to run
    

    This makes two unique “systems”. Now, in your docker-compose.yml, you could have:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data:/data
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data:/data
    

    This would make everything look just like the cloud since /local/server/data would be bind mounted to /data in both containers (services). The proper way would be to isolate:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data/my-stuff:/data/my-stuff
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data/my-other-stuff:/data/my-other-stuff
    

    This way each service only has access to the data it needs.
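
    The image names in the compose file have to exist before compose can start them. A rough sketch, assuming each Dockerfile sits in its own directory (directory names are made up):

    docker build -t service-a ./service-a
    docker build -t service-b ./service-b
    docker compose up -d

    (Or add a build: entry under each service and let compose build the images itself.)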

    I hand typed this, so forgive any errors, but hope it helps.

  • Decronym@lemmy.decronym.xyz (bot)
    1 year ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    DNS             Domain Name Service/System
    VPN             Virtual Private Network
    k8s             Kubernetes container management package

    3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.


  • key@lemmy.keychat.org
    1 year ago

    You can copy files into the docker image via a COPY in the dockerfile or you can mount a volume to share data from the host file system into the docker container at runtime.
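
    A minimal sketch of both approaches (image name and paths are made up):

    # baked into the image at build time
    FROM ubuntu
    COPY ./data /data

    # ...or shared from the host at runtime instead:
    # docker run -d -v /srv/data:/data myimage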