Tags: laravel, docker, google-cloud-platform, google-cloud-run, google-cloud-build

How to Properly Inject Service Account Credentials in Google Cloud Build for Cloud Run Deployment?


I’m deploying a Laravel application to Google Cloud Run using Google Cloud Build. During the Docker build process, I need to access Google Secret Manager to inject secrets into the application code automatically. However, the build process fails with the following error:

Failed to load secrets: cURL error 28: Failed to connect to 169.***.***.** port 80 after 130458 ms: Couldn't connect to server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://169.***.***.**/computeMetadata/v1/instance/service-accounts/default/token?scopes=https://www.googleapis.com/auth/cloud-platform

As I understand it, the build process is trying to reach the Google Cloud instance metadata server at 169.***.***.** to retrieve a token for the default service account. However, the metadata server is not reachable from inside the Docker build on Cloud Build, since the build does not run on a regular Compute Engine virtual machine.
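
For context, this is essentially the request the Google client libraries fall back to when no explicit credentials are configured (a sketch; metadata.google.internal is the standard hostname behind the masked IP in the error):

    # Application Default Credentials' last resort: asking the instance metadata
    # server for an access token. This only works where a metadata server exists
    # (e.g. Compute Engine, Cloud Run); inside the docker build here the request
    # times out, which is exactly the cURL error 28 above.
    curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token?scopes=https://www.googleapis.com/auth/cloud-platform"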

I’m not sure how to correctly inject these credentials into the Docker build process for this specific scenario.


What I’m Trying to Achieve:

  1. Build a Docker image for the Laravel app using Cloud Build. ❌
  2. Access secrets from Secret Manager during the build (e.g., database credentials, API keys). ✅
  3. Inject the secrets into the Docker image automatically for use in the application. ✅

Steps 2 and 3 work perfectly in local development (via gcloud auth application-default login) and in Dockerized environments (by mounting a JSON file with Service Account credentials).
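
For reference, the Dockerized local setup looks roughly like this (the key path and image name are placeholders, not the actual project values):

    # Mount a Service Account key into the container and let the Google client
    # libraries pick it up via Application Default Credentials.
    # sa-key.json and my-laravel-app:local are illustrative placeholders.
    docker run --rm \
      -v "$(pwd)/sa-key.json:/secrets/sa-key.json:ro" \
      -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
      -p 8080:8080 \
      my-laravel-app:local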

The build via Cloud Build, however, fails because the application cannot reach Secret Manager during the Docker build.


What I’ve Tried:

  1. Created a Service Account with all necessary permissions for Secret Manager (roles/secretmanager.secretAccessor).
  2. Stored the Service Account key JSON file securely.
  3. Modified my cloudbuild.yaml to inject the GOOGLE_APPLICATION_CREDENTIALS environment variable during the build process.

However, the build process still fails, as Cloud Build cannot access the metadata server.
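
For completeness, a grant like the one described in item 1 is typically done per secret along these lines (the secret name and Service Account e-mail are placeholders):

    # Allow a Service Account to read one secret from Secret Manager.
    # my-api-key and the Service Account e-mail are placeholders.
    gcloud secrets add-iam-policy-binding my-api-key \
      --member="serviceAccount:laravel-build@my-project.iam.gserviceaccount.com" \
      --role="roles/secretmanager.secretAccessor"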


Current Workflow:

Here’s what I currently envision as the workflow: Cloud Build builds the Docker image, the secrets from Secret Manager are injected during that build, and the resulting image is pushed and deployed to Cloud Run.


What I Need:

  1. Guidance on the best way to inject Service Account credentials into the Cloud Build process.
  2. Should I use a Service Account JSON key directly, or is there a way to configure the Cloud Build Service Account to access Secret Manager during the build?

Any help or insights on this would be greatly appreciated.

Thanks in advance!


Solution

  • The issue was primarily related to how the credentials and secrets were being injected, which caused the build process to sometimes fail before reaching the required entry point.

    Additionally, the cloudbuild.yaml configuration was incomplete. Expecting the Service Account (SA) to be injected automatically into specific CI/CD steps is accurate: Cloud Build runs each step with its build Service Account, so steps such as the gcloud deploy below can authenticate without a JSON key, and the secrets can be attached at deploy time instead of being fetched during docker build.

    Here’s an example of a complete cloudbuild.yaml that resolved the issue:

    options:
      logging: CLOUD_LOGGING_ONLY
      dynamicSubstitutions: true
    
    steps:
      # Step 1: Build the Docker image for the application
      - name: gcr.io/cloud-builders/docker
        id: Build
        args:
          - build
          - '--no-cache'
          - '-f'
          - Dockerfile
          - '-t'
          - ${_IMAGE_NAME}
          - .
    
      # Step 2: Push the Docker image to Google Artifact Registry
      - name: gcr.io/cloud-builders/docker
        id: Push
        args:
          - push
          - ${_IMAGE_NAME}
    
      # Step 3: Deploy the Docker image to Cloud Run with secrets management
      - name: gcr.io/cloud-builders/gcloud
        id: Deploy
        args:
          - run
          - deploy
          - ${_SERVICE_NAME}
          - '--image'
          - ${_IMAGE_NAME}
          - '--platform=managed'
          - '--region'
          - ${_REGION}
          - '--allow-unauthenticated'
          - '--memory=1024Mi'
          - '--set-secrets=EXAMPLE_API_KEY=my-api-key:latest'
    
      # Step 4: Execute a Cloud Run Job for database migrations
      - name: gcr.io/google.com/cloudsdktool/cloud-sdk
        id: Migrate
        entrypoint: /bin/bash
        args:
          - '-c'
          - |
            gcloud run jobs execute migrate-job \
              --region ${_REGION}
    
    substitutions:
      _ARTIFACT_REGISTRY: 'example-europe-north1-registry'
      _SERVICE_NAME: 'example-service-name'
      _REGION: 'europe-north1'
      _IMAGE_NAME: '${_REGION}-docker.pkg.dev/${PROJECT_ID}/${_ARTIFACT_REGISTRY}/${_SERVICE_NAME}:${SHORT_SHA}'
    
    images:
      - ${_IMAGE_NAME}
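
    To run this configuration manually rather than from a trigger, a submit along the following lines works; SHORT_SHA is normally populated by a Cloud Build trigger, so a manual run has to supply a value (the git command below is an assumption for illustration):

    # Sketch of a manual build submission. SHORT_SHA is filled in automatically
    # for trigger builds; for a manual run the commit hash is passed explicitly
    # via --substitutions.
    gcloud builds submit \
      --config=cloudbuild.yaml \
      --substitutions=SHORT_SHA=$(git rev-parse --short HEAD) \
      .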
    

    Key Notes

    Local Testing with Docker: mount a Service Account JSON key into the container and point GOOGLE_APPLICATION_CREDENTIALS at it, as described in the question.

    Local Non-Docker Testing: gcloud auth application-default login provides Application Default Credentials on the developer machine.

    Service Account Injection: Cloud Build runs each step with its build Service Account, so no JSON key needs to be baked into the image or passed into docker build.

    Secrets Management: secrets are attached at deploy time via --set-secrets, so Cloud Run exposes them to the application as environment variables instead of the build fetching them from Secret Manager.
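
    One assumption in the Migrate step is that a Cloud Run Job named migrate-job already exists. A minimal sketch of creating it, assuming it reuses the application image and runs Laravel’s migration command (the image path, secret, and artisan arguments are illustrative, not part of the original setup):

    # Hypothetical one-time creation of the job executed in the Migrate step.
    # Image, region, secret, and artisan command mirror the pipeline above but
    # are assumptions for illustration.
    gcloud run jobs create migrate-job \
      --image=europe-north1-docker.pkg.dev/my-project/example-europe-north1-registry/example-service-name:latest \
      --region=europe-north1 \
      --set-secrets=EXAMPLE_API_KEY=my-api-key:latest \
      --command=php \
      --args="artisan,migrate,--force"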


    This configuration resolved the issue, ensuring both the CI/CD pipeline and the deployed application worked seamlessly with the necessary Service Account permissions.