Tags: docker, dockerfile, docker-build, docker-buildkit

Docker BuildKit 'context canceled' when output contains subdirectories (Docker Desktop 4.29 onwards)


The team I work on uses a multi-stage Dockerfile as our build engine, relying on Docker BuildKit's custom build output. We are using Docker Desktop for Windows, with Linux containers.

Starting with Docker Desktop 4.29 (and the behaviour is the same in 4.30), we are experiencing an issue whereby the build fails with ERROR: failed to solve: error from receiver: context canceled whenever the output target contains a subdirectory.

Here is a minimal example of a Dockerfile which exhibits the issue:

FROM alpine
WORKDIR /app
RUN mkdir test
RUN touch test/mytestfile

FROM scratch
COPY --from=0 /app .

Explanation

  1. The initial build stage is our 'builder' image, which creates some build output inside a directory called /app. Note in particular that the build output contains a subdirectory called test.
  2. The second build stage takes the build output from the /app directory of the initial build stage and sends it to Docker BuildKit's custom exporter

In the example above, the Docker invocation then looks as follows:

docker build --no-cache --output . .
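
(As an aside, --output . is shorthand for BuildKit's local exporter; the invocation above is equivalent to the explicit form:)

docker build --no-cache --output type=local,dest=. .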

Expected behaviour

The expected behaviour is that the custom exporter writes the build output to the local working directory, so the file test/mytestfile appears locally.

Actual behaviour

A directory called test is created, but it is empty. The Docker invocation cancels itself with the error message above. The full logs are as follows:

[+] Building 4.4s (9/9) FINISHED                                                                                                                               docker:default 
 => [internal] load build definition from Dockerfile                                                                                                                     0.0s 
 => => transferring dockerfile: 143B                                                                                                                                     0.0s 
 => [internal] load metadata for docker.io/library/alpine:latest                                                                                                         1.5s 
 => [internal] load .dockerignore                                                                                                                                        0.0s 
 => => transferring context: 2B                                                                                                                                          0.0s 
 => [stage-0 1/4] FROM docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b                                           2.0s 
 => => resolve docker.io/library/alpine:latest@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b                                                   0.0s 
 => => sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b 1.64kB / 1.64kB                                                                           0.0s 
 => => sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 528B / 528B                                                                               0.0s 
 => => sha256:05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd 1.47kB / 1.47kB                                                                           0.0s 
 => => sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 3.41MB / 3.41MB                                                                           1.8s 
 => => extracting sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8                                                                                0.1s 
 => [stage-0 2/4] WORKDIR /app                                                                                                                                           0.1s 
 => [stage-0 3/4] RUN mkdir test                                                                                                                                         0.3s 
 => [stage-0 4/4] RUN touch test/mytestfile                                                                                                                              0.3s 
 => [stage-1 1/1] COPY --from=0 /app .                                                                                                                                   0.0s 
 => CANCELED exporting to client directory                                                                                                                               0.0s 
 => => copying files 56B                                                                                                                                                 0.0s 
ERROR: failed to solve: error from receiver: context canceled

I have also tried variations of the above; the result is always the same: ERROR: failed to solve: error from receiver: context canceled.

Specifying --progress=plain in the Docker invocation is no more illuminating:

...

#9 exporting to client directory
#9 copying files 56B done
#9 CANCELED
ERROR: failed to solve: error from receiver: context canceled
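
For reference, that plain-progress run simply adds the flag to the earlier invocation:

docker build --no-cache --progress=plain --output . .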

If we remove the subdirectory test (i.e. replace the third and fourth Dockerfile instructions with just RUN touch mytestfile, as shown below), then the result is as expected: a single empty file called mytestfile in the local working directory. Unfortunately, in our real-world use cases, subdirectories in the build output are inevitable (and cannot easily be refactored away).
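
For completeness, here is the flattened variant described above (no subdirectory in the build output), which exports without error:

FROM alpine
WORKDIR /app
RUN touch mytestfile

FROM scratch
COPY --from=0 /app .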

Furthermore, one workaround I have discovered is to use the tar exporter (type=tar) in the Docker invocation:

docker build --output type=tar,dest=out.tar .

This then produces a .tar file with the expected directory structure:

[screenshot: tarball output]
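
As a quick check (in place of the screenshot), listing the archive should show the subdirectory intact; for the example Dockerfile above, something like:

tar -tf out.tar
test/
test/mytestfile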

We would prefer not to use this workaround if at all possible, as it pollutes our build pipelines with additional build steps to unpack the .tar files.
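
To illustrate, every consumer of the output would need an extra unpack step along these lines (a sketch, using the out.tar name from the invocation above):

docker build --output type=tar,dest=out.tar .
tar -xf out.tar -C .
rm out.tar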

Anecdotally, this behaviour seems to have been introduced in Docker Desktop 4.29 (and is still present in 4.30). We haven't been able to find anything obvious in the release notes relating to this, which leads me to wonder whether our existing setup is somehow an 'abuse' of the feature. Perhaps someone in the know can shed some light?


Solution

  • This is a bug and has been reported at https://github.com/docker/buildx/issues/2433. Follow that issue to track progress on a fix.
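  • In the meantime, it may help to confirm which buildx version your Docker Desktop bundles (the fix, once merged, will ship in a buildx release). The standard command for this is:

docker buildx version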