project1: I have a very simple Python app which accepts command-line input and prints it out, and that is all. I created an image of this Python app and pushed it to the GitLab Container Registry for project2 as follows (works as intended):
app.py
import sys
#print(sys.argv)
print("Yes I am inside the program")
for file in sys.argv:
    print(file)
Dockerfile:
FROM python:3.12.7-alpine
WORKDIR /app
COPY app.py .
ENTRYPOINT ["python","app.py"]
.gitlab-ci.yml (to create the image and push it to the GitLab Container Registry as a custom image):
stages:
  - build

build image:
  stage: build
  image: docker:stable
  services:
    - name: docker:dind
      alias: thedockerhost
  variables:
    # Tell docker CLI how to talk to Docker daemon; see
    # https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
    DOCKER_HOST: tcp://thedockerhost:2375/
    # Use the overlayfs driver for improved performance:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    # Download bash:
    - apk add --no-cache bash python3
    - echo "$CI_REGISTRY_USER"
    - echo "$CI_REGISTRY_PASSWORD"
    - docker login -u <user> -p <password> gitlab.ilts.com:5050
    - docker build -t gitlab.ilts.com:5050/group/project2/pythonapp .
    - docker push gitlab.ilts.com:5050/group/project2/pythonapp
    - docker logout
In project2, where I have the custom image available in the GitLab Container Registry, I am trying to use the image and provide input as file names.
.gitlab-ci.yml of project2:
build:
  image: gitlab.ilts.com:5050/group/pythonapp:latest
  before_script:
    - echo "setting environment"
    - apk update && apk add git
    - git --version
  script:
    - echo "Creating Tag"
    - git switch $CI_COMMIT_REF_NAME
    - LAST_VER=$(git tag --list | sort -V | tail -n1)
    - echo "$LAST_VER"
    - FILES_CHANGED=$(git diff --name-only HEAD v1.0.0)
    - echo "$FILES_CHANGED"
    - python --version
    - python app.py hi.sql yes.sql hello.sql
1) When I run this pipeline, the output I receive doesn't make sense to me. Can you help me understand why it's displayed this way and how I can fix it?
OUTPUT:
Yes I am inside the program
app.py
sh
-c
if [ -x /usr/local/bin/bash ]; then
exec /usr/local/bin/bash
elif [ -x /usr/bin/bash ]; then
exec /usr/bin/bash
elif [ -x /bin/bash ]; then
exec /bin/bash
elif [ -x /usr/local/bin/sh ]; then
exec /usr/local/bin/sh
elif [ -x /usr/bin/sh ]; then
exec /usr/bin/sh
elif [ -x /bin/sh ]; then
exec /bin/sh
elif [ -x /busybox/sh ]; then
exec /busybox/sh
else
echo shell not found
exit 1
fi
2) Is it possible to access files that are in the project2 repository from within this custom Python app image, for reading and writing?
The root cause of your issue is using ENTRYPOINT instead of CMD in the Dockerfile. These two directives seem similar, and both in some form specify the command to run. They can be separately overridden when you run the container, and the docker run syntax makes it somewhat easier to override the CMD.
If both ENTRYPOINT and CMD are provided, then they get combined into a single command word list: the CMD words are appended after the ENTRYPOINT words. This matches what you're seeing in the output of your program.
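For example, keeping your ENTRYPOINT and adding a hypothetical default argument (default.sql is made up purely for illustration):

ENTRYPOINT ["python", "app.py"]
CMD ["default.sql"]

A plain docker run the-image runs python app.py default.sql, while docker run the-image other.sql runs python app.py other.sql; whatever CMD ends up being is simply appended after the ENTRYPOINT words.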
In particular, it looks like the GitLab pipeline runs the container, overriding the CMD (not the ENTRYPOINT) with the sh -c 'if [ -x ... ]; ... fi' inline command to run some sort of shell, with the expectation of feeding the individual job commands into that shell. Your ENTRYPOINT just prints out all of the things passed to it as arguments, and doesn't actually try to run them.
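You can reproduce this outside of GitLab. Assuming the image tag from your project1 pipeline, something like

docker run --rm gitlab.ilts.com:5050/group/project2/pythonapp sh -c 'echo hello'

prints sh, -c, and echo hello on separate lines instead of executing them; your job output is the same effect, just with GitLab's much longer sh -c shell-detection script as the single argument.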
The easiest fix here is to just change ENTRYPOINT to CMD in the Dockerfile:

CMD ["python", "app.py"]   # not ENTRYPOINT
If you do use ENTRYPOINT in a setup like this, it needs to make sure it executes its command-line arguments so that the CMD actually runs. I most often write a shell script ending in exec "$@". You could use a Python entrypoint wrapper too; it would end with os.execvp(sys.argv[1], sys.argv[1:]).
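A minimal sketch of that wrapper approach, using a hypothetical file name docker-entrypoint.sh:

#!/bin/sh
# docker-entrypoint.sh: do any one-time setup here, then
# run whatever command was passed in (the CMD, or GitLab's sh -c script)
exec "$@"

and in the Dockerfile:

COPY docker-entrypoint.sh .
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["python", "app.py"]

Running the image with no arguments still executes python app.py, but GitLab's injected shell command now actually gets executed instead of being printed.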