I’m deploying a Laravel application to Google Cloud Run using Google Cloud Build. During the Docker build process, I need to access Google Secret Manager to inject secrets into the application code automatically. However, the build process fails with the following error:
Failed to load secrets: cURL error 28: Failed to connect to 169.***.***.** port 80 after 130458 ms: Couldn't connect to server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://169.***.***.**/computeMetadata/v1/instance/service-accounts/default/token?scopes=https://www.googleapis.com/auth/cloud-platform
As I understand it, the build process is trying to access the Google Cloud instance metadata server (the `169.***.***.**` address in the error above) to retrieve a token for the default service account. However, the metadata server is not available inside the Cloud Build `docker build` step, as that step doesn't run on a regular virtual machine.
I’m not sure how to correctly inject these credentials into the Docker build process for this specific scenario.
What I’m Trying to Achieve:
Steps 2 and 3 work perfectly in local development (via `gcloud auth application-default login`) and in Dockerized environments (by mounting a JSON file with Service Account credentials). The build via Cloud Build, however, fails because the application cannot access Secret Manager.
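For reference, the Dockerized setup mentioned above can be sketched roughly like this (the service name `app` and mount path are my assumptions; `service-account.json` and `GOOGLE_APPLICATION_CREDENTIALS` are from the setup described here):

```yaml
# docker-compose.yml (sketch): mount a Service Account key and point
# GOOGLE_APPLICATION_CREDENTIALS at it, so the Google client libraries
# use the key file instead of trying the metadata server.
services:
  app:
    build: .
    volumes:
      - ./service-account.json:/var/secrets/service-account.json:ro
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /var/secrets/service-account.json
```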
What I’ve Tried:
However, the build process still fails, as Cloud Build cannot access the metadata server.
Current Workflow:
Here’s the workflow I currently envision:
What I Need:
Any help or insights on this would be greatly appreciated.
Thanks in advance!
The issue was primarily related to how services were being injected, which caused the build process to sometimes fail before reaching the required entry point.
Additionally, the `cloudbuild.yaml` configuration was incomplete. Expecting the automatic injection of a Service Account (SA) into specific CI/CD steps is accurate: Cloud Build makes its SA available to individual build steps, but not inside the `docker build` process itself. Here’s an example of a complete `cloudbuild.yaml` that resolved the issue:
```yaml
options:
  logging: CLOUD_LOGGING_ONLY
  dynamicSubstitutions: true

steps:
  # Step 1: Build the Docker image for the application
  - name: gcr.io/cloud-builders/docker
    id: Build
    args:
      - build
      - '--no-cache'
      - '-f'
      - Dockerfile
      - '-t'
      - ${_IMAGE_NAME}
      - .

  # Step 2: Push the Docker image to Google Artifact Registry
  - name: gcr.io/cloud-builders/docker
    id: Push
    args:
      - push
      - ${_IMAGE_NAME}

  # Step 3: Deploy the Docker image to Cloud Run with secrets management
  - name: gcr.io/cloud-builders/gcloud
    id: Deploy
    args:
      - run
      - deploy
      - ${_SERVICE_NAME}
      - '--image'
      - ${_IMAGE_NAME}
      - '--platform=managed'
      - '--region'
      - ${_REGION}
      - '--allow-unauthenticated'
      - '--memory=1024Mi'
      - '--set-secrets=EXAMPLE_API_KEY=my-api-key:latest'

  # Step 4: Execute a Cloud Run Job for database migrations
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    id: Migrate
    entrypoint: /bin/bash
    args:
      - '-c'
      - |
        gcloud run jobs execute migrate-job \
          --region ${_REGION}

substitutions:
  _ARTIFACT_REGISTRY: 'example-europe-north1-registry'
  _SERVICE_NAME: 'example-service-name'
  _REGION: 'europe-north1'
  _IMAGE_NAME: '${_REGION}-docker.pkg.dev/${PROJECT_ID}/${_ARTIFACT_REGISTRY}/${_SERVICE_NAME}:${SHORT_SHA}'

images:
  - ${_IMAGE_NAME}
```
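With this file in place, the pipeline can also be kicked off manually for testing. This is only a sketch: `SHORT_SHA` is normally populated by a Cloud Build trigger, so when submitting by hand you can supply a value yourself (the `manual-test` value is a placeholder of mine):

```
# Submit the build manually from the repository root.
# SHORT_SHA is usually set by a trigger; override it for a manual run.
gcloud builds submit \
  --config cloudbuild.yaml \
  --substitutions SHORT_SHA=manual-test \
  .
```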
- **Local Testing with Docker:** mount a `service-account.json` file in `docker-compose.yml` to authenticate.
- **Local Non-Docker Testing:** use `gcloud auth application-default login` to ensure your local environment is authenticated with GCP.
- **Service Account Injection:** set `GOOGLE_APPLICATION_CREDENTIALS`, as mentioned by @DazWilkin.
- **Secrets Management:** inject secrets at deploy time with `--set-secrets`, so the application reads them at runtime rather than during the Docker build.
This configuration resolved the issue, ensuring both the CI/CD pipeline and the deployed application worked seamlessly with the necessary Service Account permissions.