I am trying to build a CI/CD pipeline for my Google Cloud Functions. What I have right now is a local development environment with gcloud and git. I write my code locally and have a cloudbuild.yaml file. After writing the code I push it to a Google Source Repository where I have a build trigger. It builds the function and deploys it.
Now I would like to add some test files as well. That means whenever I push to the Source Repository it should also run the tests, build my main.py file and then deploy it. The cloudbuild.yaml file I have is
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - deploy
  - FunctionName
  - --runtime=python37
  - --source=.
  - --entry-point=function
  - --trigger-topic=topic_name
  - --region=europe-west3
You can add a step in your Cloud Build. I don't know how you run your tests, but here is an example for running your script in a Python 3.7 context:
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # run the python script that you want
    # pip install and others
Update
Add this step before your deployment step. If the step fails (exit code other than 0), the Cloud Build process stops and the deployment is not performed.
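As a minimal sketch, the combined cloudbuild.yaml could look like this, assuming your tests run via python3 test_main.py and your dependencies are listed in requirements.txt (as in the example further below):

steps:
# test step: a non-zero exit code here stops the build before deployment
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    pip3 install -r requirements.txt
    python3 test_main.py
# deployment step: only reached if the tests passed
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - deploy
  - FunctionName
  - --runtime=python37
  - --source=.
  - --entry-point=function
  - --trigger-topic=topic_name
  - --region=europe-west3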
Update 2
The concept of Cloud Build is quite simple. You load a container (referenced by the name field). In that container, only the /workspace volume is attached and kept from one step to the next one.
This concept is very important. If you set an environment variable or anything else in one step, the following step will lose this context. Only the files in /workspace are kept. The next step is called only if the current one finishes correctly (exit code = 0).
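As a small illustration of this (marker.txt and MY_VAR are just placeholder names), the following two steps show that a file written to /workspace survives into the next step while an environment variable does not:

steps:
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # this file lands in /workspace and is still there in the next step
    echo "hello" > marker.txt
    # this variable only exists inside this step's container
    export MY_VAR="value"
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    cat marker.txt                       # works: /workspace is shared
    echo "${MY_VAR:-MY_VAR is gone}"     # the variable is no longer set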
When a container is loaded, a command is triggered. If you use a cloud builder, a default entrypoint is called (for example, the gcloud cloud builder automatically launches the gcloud command). Then you only have to add the args array to pass to this entrypoint. Example:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - list
This step represents the command gcloud functions list, with gcloud as the entrypoint and functions and list as args.
If your container doesn't have an entrypoint (like the python container), or if you want to override it, you can specify one with the entrypoint keyword. In my first code example, a few Linux concepts are required: the entrypoint is bash, the arg -c executes a command, and the pipe | allows a multi-line command entry.
If you have only one Python command to launch, you can do it like this:
- name: 'python:3.7'
  entrypoint: 'python3'
  args:
  - 'test_main.py'
  - '.'
But the steps that you wrote won't work. Why? Go back to the beginning of my explanation: only the files in /workspace are kept. If you perform a pip3 install, the files aren't written to the /workspace directory but elsewhere in the system. When you switch steps, you lose this system context.
That's why a multi-line command is useful:
- name: 'python:3.7'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    pip3 install -r requirements.txt
    python3 test_main.py .
Hope this helps!