Tags: docker, deployment, docker-compose, docker-machine

What is the best way to deliver docker containers to remote host?


I'm new to Docker and docker-compose. I use a docker-compose file with several services. When working with docker-compose, the containers and images live on my local machine, and my task is to deliver them to a remote host.

I've identified several solutions:

  1. I can build my images, push them to a registry, and then pull them on the production server. However, this option requires a private registry, which feels like an unnecessary extra component. What if I'd rather push the images and run the containers directly?
  2. Save the Docker images to a tar file and load them onto the remote host. I found this post, suggesting the use of shell scripts in this case. Alternatively, I can use docker directly, but doing so means losing the benefits of docker-compose.
  3. Use Docker Machine with the general driver. However, in this scenario, I can deploy only from one machine, or I need to configure certificates. This doesn't seem like a straightforward solution to me.
  4. Use docker-compose with the host parameter (-H). However, in this case, I would need to build images on the remote host. Is it possible to build the image on the local machine and push it to the remote host?
  5. I can use docker-compose push (docs) to the remote host, but for this, I would need to create a registry on the remote host, and I'd have to add and pass the hostname as a parameter to docker-compose every time.
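Regarding option 4: docker-compose cannot push an image to a plain remote daemon, but if the images are already present on the remote host (pulled or loaded beforehand), the `-H` flag or the `DOCKER_HOST` variable lets compose drive that daemon directly. A hedged sketch, where `user@remote.example.com` is a placeholder for your host:

```shell
# Point the Docker client at a remote daemon over SSH (Docker 18.09+).
# Requires key-based SSH access, and the remote user must be permitted
# to use the Docker socket (e.g. a member of the "docker" group).
export DOCKER_HOST=ssh://user@remote.example.com

# Everything below now runs against the remote daemon, so the images
# referenced in docker-compose.yml must already exist on the remote side.
docker-compose up -d

# Equivalent one-off form using the -H flag instead of the env var:
docker-compose -H ssh://user@remote.example.com ps
```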

What is the recommended best practice for deploying Docker containers to a remote host?


Solution

  • Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.

    If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.

    There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.


    Independently of the images, you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here; there is no way to transfer these within the pure Docker ecosystem.
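For example, a deployment step might look like this (a hedged sketch; `deploy@remote.example.com`, `/srv/myapp`, and the `config` directory are placeholders for your own setup):

```shell
# Copy the compose file and the bind-mounted config directory to the
# remote host (no trailing slash on "config" so the directory itself,
# not just its contents, is copied).
rsync -avz docker-compose.yml config deploy@remote.example.com:/srv/myapp/

# Then restart the stack there using the freshly transferred files.
ssh deploy@remote.example.com 'cd /srv/myapp && docker-compose pull && docker-compose up -d'
```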