I am trying to update an existing ConfigMap of an EKS cluster with Terraform. The cluster is deployed with the Terraform AWS EKS module:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = ">= 20.23"
.
.
.
}
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/20.31.1
The ConfigMap (aws-auth) is created by EKS automatically and is not defined in my Terraform code.
The data I want to add is as below, in YAML format:
mapUsers: |
  - groups:
      - system:masters
    userarn: arn:aws:iam::516161651:user/myuser
    username: myuser
I am keeping this info in a YAML file and want to pass it to a Terraform null_resource as follows:
data "template_file" "configmap_file" {
template = "${file("${path.module}/configmap-values.yaml")}"
}
resource "null_resource" "your_deployment" {
triggers = {
manifest_sha1 = "${sha1("${data.template_file.configmap_file.rendered}")}"
}
provisioner "local-exec" {
command = "kubectl patch cm mycm -n kube-system --patch-file -<<EOF\n${data.template_file.your_template.rendered}\nEOF"
}
}
And I am getting the error below:
│ Error running command 'kubectl patch cm aws-auth -n kube-system --patch-file <<EOF
│ data:
│   mapUsers: |
│     - groups:
│         - system:masters
│       userarn: arn:aws:iam::516161651:user/myuser
│       username: myuser
│ EOF': exit status 1. Output: << was unexpected at this time.
Do you have any suggestions? I can also use Helm if there is a way to update it that way.
Note: As I understood after some research, the Kubernetes provider can't inject data into an existing ConfigMap.
Since some of the module call code is missing, I would suggest trying something along these lines:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = ">= 20.23"
.
.
.
aws_auth_users = [
{
userarn = "arn:aws:iam::516161651:user/myuser"
username = "myuser"
groups = ["system:masters"]
}
]
}
This should only update the existing ConfigMap.
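One caveat, though: as far as I know, in v20+ of the module (which your >= 20.23 constraint and the linked 20.31.1 docs point to), aws-auth ConfigMap management was removed from the root module and moved into a dedicated aws-auth sub-module, so the same users block would go there instead. A minimal, untested sketch (the module name eks_aws_auth is just an example):

module "eks_aws_auth" {
  source  = "terraform-aws-modules/eks/aws//modules/aws-auth"
  version = "~> 20.0"

  # Manage the data of the aws-auth ConfigMap that EKS already created,
  # rather than creating a new one
  manage_aws_auth_configmap = true

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::516161651:user/myuser"
      username = "myuser"
      groups   = ["system:masters"]
    }
  ]
}

The sub-module still needs the Kubernetes provider configured against your cluster (for example with an exec block calling aws eks get-token), the same requirement the pre-v20 in-module aws-auth handling had.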