I am new to Kapitan, Helm and K8s. This is also my first question here.
I have aws-load-balancer-controller.yml in my apps classes, and I am trying to use gkms refs for the certificates; see the webhookTLS section below.
---
parameters:
  apps:
    aws-load-balancer-controller:
      version: ${global_release}
      clusterName: ${cluster:cluster_name}
      ...
      webhookTLS:
        caCert: ?{gkms:certificates/aws_load_balancer_controller_webhook/${cluster:env_type}@ca_cert}
        cert: ?{gkms:certificates/aws_load_balancer_controller_webhook/${cluster:env_type}@cert}
        key: ?{gkms:certificates/aws_load_balancer_controller_webhook/${cluster:env_type}@key}
  kapitan:
    dependencies:
      - type: git
        source: git@git.company.com:container-ops/helm.git
        subdir: charts/aws-load-balancer-controller
        ref: ${apps:aws-load-balancer-controller:version}
        output_path: .charts/${apps:aws-load-balancer-controller:version}/aws-load-balancer-controller
    compile:
      - input_type: helm
        input_paths:
          - .charts/${apps:aws-load-balancer-controller:version}/aws-load-balancer-controller
        helm_values: ${apps:aws-load-balancer-controller}
        helm_params:
          namespace: ingress
          name: aws-load-balancer-controller
        output_file: aws-load-balancer-controller.yml
        output_path: apps/10-core/aws-load-balancer-controller/templates
During compile, ${cluster:env_type} resolves fine, but the ?{...} ref does not; I can see it in the diff:
120,146 name: aws-load-balancer-webhook-service
@@ update validatingwebhookconfiguration/aws-load-balancer-webhook (admissionregistration.k8s.io/v1) cluster @@
...
103,103 clientConfig:
104 - caBundle: LS0tLS1...
104 + caBundle: P3tna21zOmNlcnRpZmljYXRlcy9hd3NfbG9hZF9iYWxhbmNlcl9jb250cm9sbGVyX3dlYmhvb2svbGFiQGNhX2NlcnR9
This is just the base64 encoding of the literal ref tag:
$ echo "P3tna21zOmNlcnRpZmljYXRlcy9hd3NfbG9hZF9iYWxhbmNlcl9jb250cm9sbGVyX3dlYmhvb2svbGFiQGNhX2NlcnR9" | base64 -d
?{gkms:certificates/aws_load_balancer_controller_webhook/lab@ca_cert}
The path to the ref is correct; I can reveal it:
$ kapitan refs --reveal --tag "?{gkms:certificates/aws_load_balancer_controller_webhook/lab}"
{
  "ca_cert": "LS0tLS1C...",
  "key": "LS0tL...",
  "cert": "LS0tLS1CRUd..."
}
In the chart:
/charts/aws-load-balancer-controller/values.yaml:
# webhookTLS specifies TLS cert/key for the webhook
webhookTLS:
  caCert:
  cert:
  key:
/charts/aws-load-balancer-controller/templates/webhook.yaml:
webhooks:
  - clientConfig:
      {{ if not $.Values.enableCertManager -}}
      caBundle: {{ $tls.caCert }}
      {{ end }}
/charts/aws-load-balancer-controller/templates/_helpers.tpl:
{{/*
Generate certificates for webhook
*/}}
...
{{- if (and .Values.webhookTLS.caCert .Values.webhookTLS.cert .Values.webhookTLS.key) -}}
caCert: {{ .Values.webhookTLS.caCert | b64enc }}
clientCert: {{ .Values.webhookTLS.cert | b64enc }}
clientKey: {{ .Values.webhookTLS.key | b64enc }}
{{- else if and .Values.keepTLSSecret $secret -}}
... (generated by the chart)
I tried explicitly adding the helm values to the kapitan compile section, but it had no effect. What's strange is that the same syntax works for another app, f5:
f5-bigip-ctrl.yml
---
parameters:
  apps:
    f5_bigip_ctlr:
      secret:
        bigip_username: ?{gkms:passwords/f5/${cluster:env_type}@bigip_username}
        bigip_password: ?{gkms:passwords/f5/${cluster:env_type}@bigip_password}
      args:
        ipam: true
  kapitan:
    dependencies:
      - type: git
        source: git@git.company.com:container-ops/helm.git
        subdir: charts/f5-bigip-ctlr
        ref: ${apps:f5_bigip_ctlr:version}
        output_path: .charts/${apps:f5_bigip_ctlr:version}/f5-bigip-ctlr
    compile:
      - output_path: apps/
        input_type: helm
        input_paths:
          - .charts/${apps:f5_bigip_ctlr:version}/f5-bigip-ctlr
        helm_values:
          image:
            registry: ${cluster:registry}
          secret:
            bigip_username: ${apps:f5_bigip_ctlr:secret:bigip_username}
            bigip_password: ${apps:f5_bigip_ctlr:secret:bigip_password}
...
I'm one of the Kapitan founders.
Unfortunately, Helm base64-encodes the value (the chart's b64enc) before Kapitan gets a chance to act on it: Kapitan reveals refs by finding the literal ?{...} tag in the compiled output, and once the tag has been base64-encoded it is no longer recognizable. That is exactly what your diff shows.
There are several ways to solve the problem, especially if the Helm chart supports passing the name of an existing Secret (which you can generate with Kapitan) instead of the actual values.
Another solution is to use a "mutation", which allows Kapitan to manipulate the objects Helm produces, but the first option is easier.
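To illustrate the first approach: render the Secret yourself with a non-helm input type (e.g. jinja2), so the ?{...} tags land in plain manifest text where Kapitan can still find and reveal them, and hand only the Secret's name to the chart. This is a hypothetical sketch — the Secret name, file path, and whether your chart accepts an existing Secret are assumptions you'd need to adapt; note the revealed values in your output are already base64-encoded PEM ("LS0tLS1..."), so they can go under data: as-is:

```yaml
# hypothetical templates/webhook-tls-secret.yml.j2, compiled with input_type: jinja2
# (jinja2-rendered text keeps the ?{...} tags intact for kapitan to reveal)
apiVersion: v1
kind: Secret
metadata:
  name: aws-load-balancer-webhook-tls   # hypothetical name, referenced from the chart
  namespace: ingress
type: kubernetes.io/tls
data:
  ca.crt: ?{gkms:certificates/aws_load_balancer_controller_webhook/{{ inventory.parameters.cluster.env_type }}@ca_cert}
  tls.crt: ?{gkms:certificates/aws_load_balancer_controller_webhook/{{ inventory.parameters.cluster.env_type }}@cert}
  tls.key: ?{gkms:certificates/aws_load_balancer_controller_webhook/{{ inventory.parameters.cluster.env_type }}@key}
```

Whether the chart can consume a pre-existing Secret varies per chart; if aws-load-balancer-controller only offers enableCertManager or inline webhookTLS values, you may need a small chart patch to point the webhook at the Secret name.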
If you need help, also consider joining the #kapitan channel on the Kubernetes Slack.