I'm new to `docker` and `docker-compose`. I use a docker-compose file with several services. When working with docker-compose I have containers and images on the local machine, and my task is to deliver them to a remote host.
I've identified several solutions:

1. Save the images to a `tar` file and load them onto the remote host. I found this post, suggesting the use of shell scripts in this case. Alternatively, I can use `docker` directly, but doing so means losing the benefits of docker-compose.
2. Use `docker-machine` with the `generic` driver. However, in this scenario, I can deploy only from one machine, or I need to configure certificates. This doesn't seem like a straightforward solution to me.
3. Use `docker-compose push` (docs) to push the images to the remote host, but for this, I would need to create a registry on the remote host, and I'd have to add and pass the hostname as a parameter to docker-compose every time.

What is the recommended best practice for deploying Docker containers to a remote host?
Via a registry (your `docker-compose push` option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
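As a rough sketch of that flow (the registry hostname, project name, and tag below are placeholders): each service's `image:` in `docker-compose.yml` gets a registry-qualified name such as `image: registry.example.com/myproject/web:1.0`, and then:

```sh
# On the build machine: build images under their registry-qualified tags
# and push them to the registry.
docker-compose build
docker-compose push

# On the remote host, with a copy of the same docker-compose.yml:
docker-compose pull   # fetch the images from the registry
docker-compose up -d  # start the services in the background
```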
If you can't use a registry then `docker save`/`docker load` is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
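If you do take that route, a minimal sketch looks like this; the image name is whatever `docker image ls` shows for your built service, and the hostname and paths are placeholders:

```sh
# On the build machine: export the built image to a tar archive.
docker save -o web.tar myproject_web:latest

# Move the archive to the remote host (any file transfer works,
# including removable media in a truly air-gapped setup).
scp web.tar user@remote-host:/tmp/web.tar

# On the remote host: import the image into its Docker daemon.
docker load -i /tmp/web.tar
```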
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images you will also need to transfer the `docker-compose.yml` file itself, plus any configuration files you bind-mount into the containers. Ordinary `scp` or `rsync` works fine here. There is no way to transfer these within the pure Docker ecosystem.
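For example (the destination directory and hostname are placeholders):

```sh
# Copy the compose file and any bind-mounted configuration to the remote host.
scp docker-compose.yml user@remote-host:/srv/myapp/
rsync -av ./config/ user@remote-host:/srv/myapp/config/
```

After that you can run `docker-compose up -d` from that directory on the remote host.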