Tags: python, docker, docker-compose, docker-py

Running ns1labs/flame from Python in Docker


I have a Docker container that runs a Python script. The script should periodically launch an instance of ns1labs/flame, which produces an output file, flame.out.json. That file then needs to be read back by the container that started the flame instance.

docker-compose.yml:

services:
  container1:
    
    ...

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./output:/output

main.py (started from container1):

import os

import docker

flamethrower_command = [
    server,  # the target DNS server, defined elsewhere in the script
    "-q", "10",
    "-v", "0",
    "-t", "10",
    "-Q", "8194",
    "-l", "10"
]  # Adjust Flamethrower parameters as needed

output_file = "flame.out.json"
output_dir = "/output"
output_path = os.path.join(output_dir, output_file)

flamethrower_command += ["-o", output_path]

try:
    client = docker.from_env()

    container = client.containers.run(
        "ns1labs/flame",
        command=flamethrower_command,
        volumes={os.path.abspath(output_dir): {'bind': output_dir, 'mode': 'rw'}},
        remove=True,  # Automatically remove the container when it exits
        network="host"
    )

    with open(output_path, 'r') as file:  # Getting FileNotFoundError here
        print(file.read().splitlines())
except Exception as ex:
    print(ex)

Solution

  • In general it's far easier to run another program directly as a subprocess than to have one container dynamically create another.

    In the case of this particular program, its Docker Hub image page points at a GitHub repository. While the README prominently mentions a Docker image, notice that all of the invocations it shows are just of the form flame subcommand, without mentioning Docker. If you read its Dockerfile you'll also notice that it's a fairly simple multi-stage build that depends on a couple of Debian libraries but otherwise just copies a single binary into the final image.
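    For example, once the binary is installed locally, an invocation that mirrors the question's parameters would look something like this (the target hostname here is just a placeholder):

    flame target.example.com -q 10 -v 0 -t 10 -Q 8194 -l 10 -o /output/flame.out.json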

    In your own image, you can cannibalize that image for your own needs. The standard Python images are based on Debian, so you should be able to install the same packages. You can use the Dockerfile COPY --from=image syntax to copy specific files out of a Docker Hub image, if you know exactly what you need to extract; for a compiled program where that is a single binary, this works well.

    FROM python:3.12-slim
    
    # in the same place where you otherwise install OS-level packages
    # in your final build stage if you have multiple
    RUN apt-get update \
     && DEBIAN_FRONTEND=noninteractive \
        apt-get install --no-install-recommends --assume-yes \
          libldns3 \
          libuv1 \
          nghttp2
    # these are the $RUNTIME_DEPS from the referenced Dockerfile,
    # add them to anything else you might already install
    
    # grab the binary
    COPY --from=ns1labs/flame /usr/local/bin/flame /usr/local/bin/flame
    
    # do everything else you already did
    ...
    CMD ["python", "./main.py"]
    

    In your Python source, delete all references to the Docker SDK. When you invoke the program, call subprocess.run(['flame', ...]) as if it were an ordinary local program (which it now is), and read its result file from the local (container) filesystem.
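    A minimal sketch of that change, reusing the flags from the question (the server value is a stand-in for however your real script chooses its target):

    import os
    import subprocess

    server = "dns.example.com"  # assumption: comes from your real configuration

    output_path = os.path.join("/output", "flame.out.json")

    # Run flame as an ordinary local subprocess, no Docker SDK involved
    subprocess.run(
        ["flame", server,
         "-q", "10", "-v", "0", "-t", "10", "-Q", "8194", "-l", "10",
         "-o", output_path],
        check=True,  # raise CalledProcessError if flame exits non-zero
    )

    # The output file now exists on the same filesystem this script sees
    with open(output_path, 'r') as file:
        print(file.read().splitlines())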

    If this program can't run under Docker networking (it describes itself as a low-level network-protocol performance test), then you will need to disable Docker networking for your driver container too, with docker run --net=host or the Compose equivalent shown below.
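    In Compose terms that looks something like the sketch below; note that once the Docker SDK is gone, the docker.sock bind mount from the question is no longer needed either:

    services:
      container1:

        ...

        network_mode: host
        volumes:
          - ./output:/output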