There is plenty of documentation on how to handle IAM with GCP and Crossplane, complete with nice details on exactly which commands to run to tie them together with Workload Identity.
My issue is automation. I use Terraform, run by GitHub Actions, to create a Kubernetes cluster along with any other GCP resources it needs.
By default, the Kubernetes SA that the Crossplane provider pod runs as is uniquely named for each version/release of each provider. How do I make automation that necessarily lives outside the Kubernetes cluster add the GCP IAM bindings for an SA that is created inside the cluster with a unique name? Alternatively, if I create a Kubernetes SA with a known name and have the Crossplane provider pod run as that, how do I ClusterRoleBind that SA to the uniquely named ClusterRole the Crossplane provider also creates? I suppose I could create my own ClusterRole and not use any of the RBAC the Crossplane GCP providers create, but that would be a lot of toil.
From https://marketplace.upbound.io/providers/upbound/provider-family-gcp/v1.3.0/docs/configuration, here are the two key commands to run to allow a Kubernetes SA to impersonate a GCP SA with Workload Identity.
$ gcloud iam service-accounts add-iam-policy-binding \
    ${GCP_SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT_ID}.svc.id.goog[upbound-system/${KUBERNETES_SERVICE_ACCOUNT}]" \
    --project ${PROJECT_ID}

$ kubectl annotate serviceaccount ${KUBERNETES_SERVICE_ACCOUNT} \
    iam.gke.io/gcp-service-account=${GCP_SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
    -n upbound-system
Here ${KUBERNETES_SERVICE_ACCOUNT} either contains the .status.currentRevision field of the currently installed provider, or is your own Kubernetes SA with a ClusterRoleBinding to a ClusterRole whose name contains that same .status.currentRevision value.
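For reference, the revision name can be read straight off the Provider object with kubectl; the provider name here is just an example:

$ kubectl get provider.pkg.crossplane.io provider-gcp-network \
    -o jsonpath='{.status.currentRevision}'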
Turns out I was worried about nothing. Crossplane has your back.
With the following manifests Crossplane takes ownership of the ServiceAccount I create, does the complicated bit of binding that SA to the ClusterRole it creates for that specific revision (I was not aware this would happen automatically), and runs the provider pod as that SA.
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp-network
spec:
  package: xpkg.upbound.io/upbound/provider-gcp-network:v1.3.0
  runtimeConfigRef:
    name: provider-gcp-network
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: provider-gcp-network
---
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: provider-gcp-network
spec:
  serviceAccountTemplate:
    metadata:
      name: provider-gcp-network
---
apiVersion: gcp.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: <projectId>
  credentials:
    source: InjectedIdentity
Using a GKE Autopilot cluster (which has Workload Identity Federation for GKE enabled by default), I can then bind GCP IAM roles directly onto the Kubernetes SA via the following principal identifier and create GCP resources with it.
principal://iam.googleapis.com/projects/<projectNumber>/locations/global/workloadIdentityPools/<projectId>.svc.id.goog/subject/ns/<kubernetesNamespace>/sa/<kubernetesSA>
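As a concrete sketch, a project-level binding for that principal looks roughly like this (the role is only an illustration; grant whatever the provider actually needs):

$ gcloud projects add-iam-policy-binding <projectId> \
    --role roles/compute.networkAdmin \
    --member "principal://iam.googleapis.com/projects/<projectNumber>/locations/global/workloadIdentityPools/<projectId>.svc.id.goog/subject/ns/<kubernetesNamespace>/sa/<kubernetesSA>"

Since the member is just a string, the same binding can live in the Terraform that creates the cluster (for example as a google_project_iam_member resource), which was the whole point of the automation.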
Everything in the provider-family-gcp documentation relating to currentRevision is, I presume, outdated.