docker, dockerfile, github-actions, salt-project

Why is my Docker container, inside a GitHub workflow, not working as it does in a standard context (Docker CLI)?


I am trying to run a "simple" Salt master to be used by my GitHub workflow. Even though this very same Docker image runs fine from a vanilla docker run command (meaning I can connect to it on ports 4505 and 4506, and the ENTRYPOINT defined in the Dockerfile works as expected), I cannot get it to listen and answer requests when running in a GitHub workflow context.

Here is my sanitized Dockerfile:

FROM debian:bookworm

ARG salt_version
ENV SALT_VERSION=$salt_version
ENV DEBIAN_FRONTEND=noninteractive

# Install salt
RUN echo "salt_version = $SALT_VERSION"
RUN apt-get -qq update && \
    apt-get -qq upgrade -y && \
    apt-get install -y curl ssh

RUN mkdir -p /etc/apt/keyrings && \
    mkdir -p /srv/dev/salt/salt_states && \
    mkdir -p /srv/dev/salt/salt_pillars && \
    curl -fsSL https://packages.broadcom.com/artifactory/api/security/keypair/SaltProjectKey/public -o /etc/apt/keyrings/salt-archive-keyring.pgp && \
    curl -fsSL https://github.com/saltstack/salt-install-guide/releases/latest/download/salt.sources -o /etc/apt/sources.list.d/salt.sources && \
    apt-get update && \
    apt-get install -y salt-cloud=$SALT_VERSION && \
    apt-get install -y salt-master=$SALT_VERSION && \
    salt-call --local pip.install ipy && \
    salt-call --local pip.install requests

ENTRYPOINT /usr/bin/salt-master -l error
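
For reference, the image is built and pushed along these lines (the salt_version value and the GHCR_PAT variable here are placeholders; the tag matches the workflow below):

docker build --build-arg salt_version=3007.1 -t ghcr.io/mycompany/salt-master:latest .
echo "$GHCR_PAT" | docker login ghcr.io -u mycompany --password-stdin
docker push ghcr.io/mycompany/salt-master:latest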

The built Docker image is published to our private GitHub Container Registry and then pulled from our workflow YAML (run on a self-hosted runner), as in:

name: Deploy
on: [workflow_dispatch]

jobs:

  Provision_applications:
    runs-on: [self-hosted, linux, x64, docker-ci-runner]
    container:
      image: ghcr.io/mycompany/salt-master:latest
      ports:
        - 4505:4505
        - 4506:4506
      credentials:
        username: ${{ github.actor }}
        password: ${{ secrets.PAT }}


    steps:

    - name: Destroy any existing container
      run: |
        hostname -f
        salt-cloud --version
        salt-cloud -m /etc/salt/cloud.maps.d/myhost.map -d -y

    - name: Provision container
      run: |
        hostname -f
        salt-cloud -y -m /etc/salt/cloud.maps.d/myhost.map

    - name: Deploy application code
      run: |
        hostname -f
        ps faux
        salt "myhost.lan" test.ping
        salt-run state.orchestrate orch.srv-compute pillar='{"vm_name": "myhost"}'
        
    - name: Destroy created container
      run: |
        salt-cloud -m /etc/salt/cloud.maps.d/myhost.map -d -y
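
For context, the map file referenced in these steps is a plain Salt Cloud map; ours boils down to something like this (the profile name is hypothetical):

# /etc/salt/cloud.maps.d/myhost.map
my_container_profile:
  - myhost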

When running this workflow the container gets created, but:

  1. The container does not use the ENTRYPOINT defined in the Dockerfile (I can see that just by looking at the docker ps output)
  2. The port mapping is not applied (I cannot telnet to the Docker host on ports 4505/4506)

The command I use to test it in a "vanilla" context:

docker run -it -p 4505:4505 -p 4506:4506 ghcr.io/mycompany/salt-master:latest

Using this command I am able to telnet to port 4505 or 4506, which is not possible when it's running from the workflow.
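
The check itself, from a shell on the Docker host (nc here instead of telnet, same idea):

nc -vz localhost 4505
nc -vz localhost 4506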

EDIT:

To answer michal's comment: there is no such thing as entrypoint in jobs.<job_id>.container, AFAIK.

I also thought of using services, but I am not sure this would be good practice in my case.
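
For the record, the services variant would look roughly like this (an untested sketch; as far as I understand, service containers do keep the image's ENTRYPOINT, unlike the job container):

jobs:
  Provision_applications:
    runs-on: [self-hosted, linux, x64, docker-ci-runner]
    services:
      salt-master:
        image: ghcr.io/mycompany/salt-master:latest
        ports:
          - 4505:4505
          - 4506:4506
        credentials:
          username: ${{ github.actor }}
          password: ${{ secrets.PAT }}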

I also tried to use CMD instead of ENTRYPOINT, but that does not work either, and after reading more about the differences between the two I went back to ENTRYPOINT.
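
(The difference in a nutshell, as a sketch: in a plain docker run, ENTRYPOINT is always executed and CMD only supplies overridable defaults; neither mattered here, since the runner apparently replaces both for job containers, see the solution below:)

ENTRYPOINT ["/usr/bin/salt-master"]   # always runs; docker run args are appended
CMD ["-l", "error"]                   # default args, replaced by docker run args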


Solution

  • Finally got it worked out. My logic was bad: I was trying to build a salt-master container that ran the salt-master process itself, while in the GitHub Actions context I needed a simple container (with everything ready to run the salt-master process) from which I could start the process within the workflow and then run the salt commands. In hindsight that also explains symptom 1: the runner starts the job container with its own keep-alive command and execs each step inside it, so the image's ENTRYPOINT or CMD is never used. So I completely stripped the ENTRYPOINT/CMD out of the Dockerfile and start the daemon manually from the workflow, as in:

    [...]
    - name: Deploy application code
      run: |
        hostname -f
        ps faux
        salt-master -d
        salt "myhost.lan" test.ping
        salt-run state.orchestrate orch.srv-compute pillar='{"vm_name": "myhost"}'
    [...]
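
Note that salt-master -d forks into the background, so the step returns immediately while the daemon keeps running; since all steps of a job share the same container, the later salt and salt-run calls talk to that same daemon.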