I've read "Can anyone explain docker.sock" to understand what /var/run/docker.sock does, but its use in GitLab CI's "Use Docker socket binding" documentation has me confused. Here is their example command for gitlab-runner registration:
sudo gitlab-runner register -n \
  --url https://gitlab.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --description "My Docker Runner" \
  --docker-image "docker:19.03.12" \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock
I see two places that the resulting container could obtain docker from:

1. the host's /var/run/docker.sock
2. the docker binary included in the base image docker:19.03.12

Isn't this a PATH conflict? I thought it should be one or the other, where I obtain the ability to use docker from either the host's Unix socket or the base image.
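As a sanity check, I can ask the image where its docker binary lives; the path below is what I'd expect from the official docker image, so treat it as an assumption:

docker run --rm docker:19.03.12 which docker
# expected to print something like /usr/local/bin/docker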
I would think that --docker-image should instead be ubuntu:latest or something along those lines that doesn't come with docker, since the PATH's docker would already come from the host socket. Alternatively, the Docker socket mount would be removed. What is actually happening here with this double inclusion of docker?
The Unix socket file /var/run/docker.sock is normally created by the Docker daemon. If you run something else as the main container process, the socket won't get created. You can see this directly by running a container with a non-Docker main process, like /bin/ls:
docker run --rm docker:19.03.12 ls -l /var/run
docker run --rm docker:19.03.12 ls -l /run
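Neither listing will show a docker.sock entry: the image ships the Docker CLI, but no daemon is running inside the container to create the socket.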
A docker binary must also exist in the container filesystem if you're going to run docker commands there. Containers can never call binaries that live on the host, and the socket API won't produce a binary either. (Some of the very early "use the host's Docker socket" posts advocated bind-mounting the host's docker binary into the container, but this leads to trouble with library dependencies and keeps images from being self-contained.)
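You can see the consequence with the ubuntu:latest image suggested in the question: it doesn't ship a docker binary, so mounting the socket alone doesn't help.

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ubuntu:latest \
  docker ps
# fails with an "executable file not found in $PATH" error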
So if all you actually need is a container with a docker binary that can talk to the host's Docker daemon, you need an image like docker that ships the CLI, plus you need to bind-mount the host's /var/run/docker.sock into the container:
docker run \
  --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:19.03.12 \
  docker ps
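The docker ps here lists the host's containers (including this one), because the CLI inside the container is talking to the host's daemon through the mounted socket.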
The GitLab setup you link to seems rather contrived. Using the docker image to run jobs means that pretty much the only thing a build step can run is a docker command. At a technical level, you can't start the docker container without already having a docker binary and access to a running Docker daemon; the shell-executor approach described at the top of that page seems simpler, and there aren't really any downsides to it.
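A minimal shell-executor registration, sketched from the same command quoted in the question (the description string is just a placeholder):

sudo gitlab-runner register -n \
  --url https://gitlab.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor shell \
  --description "My Shell Runner"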
You also might find it convenient to have a Docker image of build-time dependencies (compilers, header files, static-checking tools, ...). That would let you update these dependencies without having to roll out an update to your entire build cluster. If your build scripts themselves need to invoke docker, then your build-tools image needs to install Docker, using a normal RUN apt-get install command. You'd bind-mount the host's Docker socket into the container in the same way, so you don't need to start a separate Docker daemon.
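As a rough sketch of that build-tools image (the image name and the tool list are made up; docker.io is the Debian/Ubuntu package that provides the Docker CLI):

FROM ubuntu:20.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential docker.io \
 && rm -rf /var/lib/apt/lists/*

and then run builds with the same socket mount as before:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-build-tools:latest \
  docker ps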