I have the following Terraform script that deploys an EKS cluster (tag values left empty to hide them):
EKS.tf
provider "aws" {
  region  = var.region
  profile = var.profile

  default_tags {
    tags = {
      Name          = "Example Eks cluster"
      Owner         = ""
      ChargeCode    = ""
      ProjectId     = ""
      ApplicationId = ""
      #Environment  = ""
      PatchGroup    = ""
      Eeol          = ""
      Oic           = ""
      GovId         = ""
      CommId        = ""
    }
  }
}
# Filter out local zones, which are not currently supported
# with managed node groups
data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}
locals {
  cluster_name              = "test-eks-cluster"
  cluster_enabled_log_types = []
}
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.15.3"

  cluster_name    = local.cluster_name
  cluster_version = "1.27"

  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true

  vpc_id     = ""
  subnet_ids = ["", ""]

  cluster_enabled_log_types   = local.cluster_enabled_log_types
  create_cloudwatch_log_group = false

  tags = var.tags

  eks_managed_node_group_defaults = {
    #ami_type = "ami-06d7aa002b2e3009b"
    ami_type = "AL2_x86_64"
    tags     = var.tags
  }

  eks_managed_node_groups = {
    one = {
      name           = "node-group-1"
      instance_types = ["t3.small"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
      tags           = var.tags
    }
  }

  # access_entries = {
  #}
}
data "aws_iam_policy" "ebs_csi_policy" {
  arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
module "irsa-ebs-csi" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "4.7.0"

  create_role                   = true
  role_name                     = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
  provider_url                  = module.eks.oidc_provider
  role_policy_arns              = [data.aws_iam_policy.ebs_csi_policy.arn]
  oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]

  tags = var.tags
}
resource "aws_eks_addon" "ebs-csi" {
  cluster_name             = module.eks.cluster_name
  addon_name               = "aws-ebs-csi-driver"
  addon_version            = "v1.30.0-eksbuild.1"
  service_account_role_arn = module.irsa-ebs-csi.iam_role_arn

  tags = {
    "eks_addon"   = ""
    "terraform"   = ""
    Name          = ""
    Owner         = ""
    ChargeCode    = ""
    ProjectId     = ""
    ApplicationId = ""
    #Environment  = ""
    PatchGroup    = ""
    Eeol          = ""
    Oic           = ""
    GovId         = ""
    CommId        = ""
  }
}
This Terraform script deploys the cluster with no issues, but a problem appears when I run kubectl apply -f ./zk.yml.
zk.yml:
apiVersion: platform.confluent.io/v1beta1
kind: Zookeeper
metadata:
  name: zookeeper
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-zookeeper:7.5.0
    init: confluentinc/confluent-init-container:2.7.0
  dataVolumeCapacity: 10Gi
  logVolumeCapacity: 10Gi
The Zookeeper pods get stuck in an Init status because the tags my organization requires are not attached to the volumes associated with them.
Error example:
AttachVolume.Attach failed for volume "pvc-68f45cb0-d06e-4e19-b0ff-c88ecb53f4c8" : rpc error: code = Internal desc = Could not attach volume "vol-0ccc9938279c3a256" to node "i-061ea33a9f07efffb": could not attach volume "vol-0ccc9938279c3a256" to node "i-061ea33a9f07efffb": operation error EC2: AttachVolume, https response error StatusCode: 403, RequestID: a8d8596c-d9b6-4210-9919-bd5ce67f37f0, api error UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws-us-gov:sts::000451337248:assumed-role/AmazonEKSTFEBSCSIRole-test-eks-cluster/1716299655525948774 is not authorized to perform: ec2:AttachVolume on resource: arn:aws-us-gov:ec2:us-gov-east-1:000451337248:volume/vol-0ccc9938279c3a256 with an explicit deny in a service control policy
To fix this manually, I would add the necessary tags to "vol-0ccc9938279c3a256".
I am able to resolve the issue from the AWS console by manually adding all the tags to each volume associated with each Zookeeper pod, but I need this done automatically in my Terraform script. I tried adding a "volume_tags" section to our EBS resource, but that doesn't seem to be the correct place to tag these volumes. How can I add the needed volume tags in my Terraform script instead of adding them manually each time?
To add a standard set of tags to the EBS volumes created by the CSI add-on, set the add-on's configuration_values and pass your tags under controller.extraVolumeTags:
resource "aws_eks_addon" "ebs-csi" {
  ...

  configuration_values = jsonencode({
    controller = {
      extraVolumeTags = {
        your_company_tag_key = "your_company_tag_value"
        ...
      }
    }
  })
}
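Since your organization's tags already live in var.tags, one option is to feed that same map straight into the add-on configuration instead of repeating the keys. A sketch building on the above, assuming var.tags is a map of strings (the EBS CSI driver's extraVolumeTags schema expects string keys and values):

```hcl
resource "aws_eks_addon" "ebs-csi" {
  cluster_name             = module.eks.cluster_name
  addon_name               = "aws-ebs-csi-driver"
  addon_version            = "v1.30.0-eksbuild.1"
  service_account_role_arn = module.irsa-ebs-csi.iam_role_arn

  # Have the CSI controller stamp every volume it provisions with
  # the same organization tags used elsewhere in this configuration.
  configuration_values = jsonencode({
    controller = {
      extraVolumeTags = var.tags
    }
  })
}
```

You can confirm which keys a given add-on version accepts with aws eks describe-addon-configuration --addon-name aws-ebs-csi-driver --addon-version v1.30.0-eksbuild.1, which returns the JSON schema that configuration_values is validated against. Note this only tags volumes created after the change; existing volumes keep whatever tags they already have.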