I've got a GitHub action (running on a self-hosted Gitea server) that runs a few docker compose
commands to set up a test environment and run tests. The compose files work on my local machine and with nektos act.
The runner is set up following Gitea's guide for act runner with docker compose, which mounts the docker socket to the runner.
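For reference, the relevant part of my runner's compose file looks roughly like this (registration details are placeholders, and names may differ slightly from the guide):

services:
  runner:
    image: gitea/act_runner:nightly
    environment:
      CONFIG_FILE: /config.yaml
      GITEA_INSTANCE_URL: https://gitea.example.com   # placeholder
      GITEA_RUNNER_REGISTRATION_TOKEN: <token>        # placeholder
    volumes:
      - ./config.yaml:/config.yaml
      - ./data:/data
      # the socket mount: containers started by jobs run on the host daemon
      - /var/run/docker.sock:/var/run/docker.sock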
An example workflow:
jobs:
  test_job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup
        run: docker compose --profile setup up --wait
      - name: Test
        run: docker compose run --rm test
      - name: Cleanup
        if: always()
        run: docker compose --profile setup down
I've narrowed the problem down to the volumes not being mounted how I'd expect. My compose file has a database service with a volume:
services:
  db:
    image: postgres:17
    volumes:
      - ./test/db/schema.sql:/docker-entrypoint-initdb.d/11schema.sql
If I attach to the database service in the action, it prints an error:
test-db | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/11schema.sql
test-db | psql:/docker-entrypoint-initdb.d/11schema.sql: error: could not read from input file: Is a directory
test-db exited with code 1
Usually "Is a directory" is a result of the mount path being empty and docker creating folders in their place, how can I make sure the volume is mounted so the compose files work both locally and in the action?
The job checks out the code into the runner's workspace (in my case /workspace/[Org]/[repo]). When the job runs a docker compose command it mounts volumes relative to the workspace, meaning ./test/db/schema.sql is mapped to /workspace/Org/repo/test/db/schema.sql. But compose talks to the mounted /var/run/docker.sock, which starts the containers on the host machine. The host has no /workspace/Org/repo/test/db/schema.sql, so Docker creates empty folders in its place.
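You can confirm this from a step in the job: the file exists inside the job container, but the daemon (reached through the socket) resolves the mount source on the host. A quick check, assuming the test-db container name from the question:

# inside the job container the file is there:
ls -l ./test/db/schema.sql
# but the host daemon shows the mount source it had to create (a directory):
docker inspect test-db --format '{{ json .Mounts }}'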
You need the volumes to match between the container and the host. GitHub Actions lets you add volumes to the job container with jobs.<job_id>.container.volumes. Adjust your job to add the container.volumes attribute:
jobs:
  test_job:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
      volumes:
        - /workspace/Org/repo/app:/workspace/Org/repo/app
Note on the path: If you try to mount /workspace/Org/repo:/workspace/Org/repo, the container will fail in the "Set up job" step with the error:
failed to create container: 'Error response from daemon: Duplicate mount point: /workspace/Org/repo'
This is because the runner already mounts the workspace directory itself, under an auto-generated name like "GITEA-ACTIONS-TASK-1234_WORKFLOW-Test_JOB-test_job". The workaround is to mount a subdirectory instead.
To allow the volume, adjust the runner.yaml config for the Gitea runner and restart the runner:
container:
  valid_volumes:
    - '/workspace/**'
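Then restart the runner so it picks up the new config, e.g. if it runs under the compose file from the guide (service name assumed):

docker compose restart runner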
Add the defaults.run.working-directory attribute to your job:
jobs:
  test_job:
    defaults:
      run:
        working-directory: /workspace/Org/repo/app
If using actions/checkout, add a path so it clones the code to the right folder:
steps:
  - name: Checkout
    uses: actions/checkout@v4
    with:
      path: "app"
Make sure to empty the mounted volume after the test:
steps:
  - name: Cleanup
    if: always()
    run: |
      docker compose --profile setup down
      rm -rf /workspace/Org/repo/app/*
My final workflow file:
jobs:
  test_job:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
      volumes:
        - /workspace/Org/repo/app:/workspace/Org/repo/app
    defaults:
      run:
        working-directory: /workspace/Org/repo/app
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: "app"
      - name: Setup
        run: docker compose --profile setup up --wait
      - name: Test
        run: docker compose run --rm test
      - name: Cleanup
        if: always()
        run: |
          docker compose --profile setup down
          rm -rf /workspace/Org/repo/app/*
Some notes:
There is also ${{ env.JOB_CONTAINER_NAME }} to get the job container's name for a volumes_from attribute, but I couldn't get that to run locally with act.
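For reference, that approach would look roughly like this in the compose file; this is an untested sketch, assuming the runner exposes JOB_CONTAINER_NAME to the job:

services:
  db:
    # mount the job container's volumes instead of host paths; the
    # container: prefix references a container outside this compose file
    volumes_from:
      - container:${JOB_CONTAINER_NAME}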