I am following the Google Cloud Platform guide for connecting to a Cloud SQL instance from a GKE cluster using the Cloud SQL Proxy and a public IP address (https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine). However, after deploying my application I get the following error in my container logs:
{ Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1107:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 5432 }
This is followed by the error message below from the Cloud SQL Proxy container:
2021/02/01 05:35:31 the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
In addition (and I assume this is related), when I check the Compute Engine VM for the node in the cluster, I see that the Cloud SQL scope under "Cloud API access scopes" is disabled. Is there a way to enable it?
I am aware that there are multiple ways to connect to a Cloud SQL instance from a GKE cluster; however, I would like to use Workload Identity rather than a credentials file.
If you don't use the default service account on your Compute Engine VM, you don't need to touch the VM's access scopes. Scopes are only enforced for the Compute Engine default service account; with a custom service account, they aren't.
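For example, a node pool can run its VMs under a custom service account instead of the default one. The sketch below assumes a hypothetical service account sql-client@my-project.iam.gserviceaccount.com that already has the roles/cloudsql.client role; the cluster, zone and pool names are placeholders as well:

    # Sketch: create a node pool whose VMs use a custom service account
    # (all names here are placeholders, not taken from the question).
    gcloud container node-pools create sql-pool \
      --cluster=my-cluster \
      --zone=us-central1-a \
      --service-account=sql-client@my-project.iam.gserviceaccount.com

With a custom service account on the nodes, access is governed by its IAM roles, not by scopes.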
If you use Workload Identity on your cluster, the same thing applies, because the pod then uses a dedicated Google service account rather than the Compute Engine default one. And yes, prefer Workload Identity over a service account key file.
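Here is a minimal sketch of the Workload Identity wiring, assuming a hypothetical Google service account proxy-user, a Kubernetes service account ksa-name in the default namespace, and a project my-project; adapt the names to your setup:

    # All names (my-project, proxy-user, ksa-name) are placeholders.

    # Google service account the Cloud SQL Proxy will run as
    gcloud iam service-accounts create proxy-user --project=my-project

    # Allow it to connect to Cloud SQL instances
    gcloud projects add-iam-policy-binding my-project \
      --member="serviceAccount:proxy-user@my-project.iam.gserviceaccount.com" \
      --role="roles/cloudsql.client"

    # Kubernetes service account that your deployment's pods will use
    kubectl create serviceaccount ksa-name --namespace default

    # Let the Kubernetes service account impersonate the Google one
    gcloud iam service-accounts add-iam-policy-binding \
      proxy-user@my-project.iam.gserviceaccount.com \
      --role="roles/iam.workloadIdentityUser" \
      --member="serviceAccount:my-project.svc.id.goog[default/ksa-name]"

    # Annotate the Kubernetes service account with the Google one
    kubectl annotate serviceaccount ksa-name --namespace default \
      iam.gke.io/gcp-service-account=proxy-user@my-project.iam.gserviceaccount.com

Once your pods run as that Kubernetes service account (serviceAccountName: ksa-name in the pod spec), the proxy no longer depends on the VM's scopes and you don't need the -credential_file flag.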