I have a problem reading data from the EKS cluster module from within the kubernetes and helm providers. At first I was passing cluster_name to an aws_eks_cluster data source and reading the cluster connection details from that.
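Roughly, that first attempt looked like this (reconstructed from memory; the exact names are illustrative):

data "aws_eks_cluster" "this" {
  # Fails on a fresh apply: the cluster doesn't exist yet when
  # Terraform tries to read this data source.
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
}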
However, I came across an issue where I couldn't read the data because the cluster didn't exist yet. So I need a way to read it directly from the module. Alas, this is where I am having the issue. Here is the error:
│ Error: Unsupported attribute
│
│ on providers.tf line 37, in provider "kubernetes":
│ 37: cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data.data)
│ ├────────────────
│ │ module.primary.cluster_certificate_authority_data is "**********************"
│
│ Can't access attributes on a primitive-typed value (string).
╵
╷
│ Error: Unsupported attribute
│
│ on providers.tf line 54, in provider "helm":
│ 54: cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data.data)
│ ├────────────────
│ │ module.primary.cluster_certificate_authority_data is "**********************"
│
│ Can't access attributes on a primitive-typed value (string).
Here is what my Terraform looks like within my main module:
main.tf
################################################
# KMS CLUSTER ENCRYPTION KEY #
################################################
module "kms" {
source = "terraform-aws-modules/kms/aws"
version = "1.1.0"
aliases = ["eks/${var.cluster_name}__cluster_encryption_key_test"]
description = "${var.cluster_name} cluster encryption key"
enable_default_policy = true
key_owners = [data.aws_caller_identity.current.arn]
tags = local.tags
}
##################################
# KUBERNETES CLUSTER #
##################################
module "primary" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.13.1"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
cluster_endpoint_private_access = var.cluster_endpoint_private_access
cluster_endpoint_public_access = var.cluster_endpoint_public_access
create_kms_key = false
cluster_encryption_config = {
resources = ["secrets"]
provider_key_arn = module.kms.key_arn
}
create_cni_ipv6_iam_policy = var.create_cni_ipv6_iam_policy
manage_aws_auth_configmap = true
aws_auth_roles = var.aws_auth_roles
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
eks_managed_node_group_defaults = {
ami_type = var.ami_type
disk_size = var.disk_size
instance_types = var.instance_types
iam_role_attach_cni_policy = var.iam_role_attach_cni_policy
}
eks_managed_node_groups = {
primary = {
min_size = 1
max_size = 5
desired_size = 1
capacity_type = "ON_DEMAND"
}
secondary = {
min_size = 1
max_size = 5
desired_size = 1
capacity_type = "SPOT"
}
}
cluster_addons = {
coredns = {
most_recent = true
resolve_conflicts_on_create = "OVERWRITE"
resolve_conflicts_on_update = "PRESERVE"
timeouts = {
create = "20m"
delete = "20m"
update = "20m"
}
}
kube-proxy = {
most_recent = true
resolve_conflicts_on_create = "OVERWRITE"
resolve_conflicts_on_update = "PRESERVE"
timeouts = {
create = "20m"
delete = "20m"
update = "20m"
}
}
aws-ebs-csi-driver = {
most_recent = true
resolve_conflicts_on_create = "OVERWRITE"
resolve_conflicts_on_update = "PRESERVE"
timeouts = {
create = "20m"
delete = "20m"
update = "20m"
}
}
vpc-cni = {
most_recent = true
resolve_conflicts_on_create = "OVERWRITE"
resolve_conflicts_on_update = "PRESERVE"
timeouts = {
create = "20m"
delete = "20m"
update = "20m"
}
}
}
fargate_profiles = {
default = {
name = "default"
selectors = [
{
namespace = "kube-system"
labels = {
k8s-app = "kube-dns"
}
},
{
namespace = "default"
}
]
timeouts = {
create = "20m"
delete = "20m"
}
}
}
tags = {
repo = "https://github.com/impinj-di/terraform-aws-eks-primary"
team = "di"
owner = "di_admins@impinj.com"
}
}
####################################
# KUBERNETES RESOURCES #
####################################
resource "kubernetes_namespace" "this" {
depends_on = [module.primary]
for_each = toset(local.eks_namespaces)
metadata {
name = each.key
}
}
Here is my providers.tf:
terraform {
  required_version = ">= 1.3.7"

  required_providers {
    aws = ">= 4.12.0"
    # harness = {
    #   source  = "harness/harness"
    #   version = "0.21.0"
    # }
    helm = {
      source  = "hashicorp/helm"
      version = "2.9.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.11.0"
    }
  }
}

terraform {
  backend "s3" {
    bucket  = "impinj-canary-terraform"
    key     = "terraform-aws-eks-primary.tfstate"
    region  = "us-west-2"
    encrypt = true
  }
}
provider "aws" {
alias = "sec"
region = "us-west-2"
}
provider "kubernetes" {
host = module.primary.cluster_endpoint
cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data.data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.primary.cluster_name]
}
}
# provider "harness" {
# endpoint = "https://app.harness.io/gateway"
# account_id = var.harness_account_id
# platform_api_key = var.harness_platform_api_key
# }
provider "helm" {
kubernetes {
host = module.primary.cluster_endpoint
cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data.data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.primary.cluster_name]
}
}
}
As far as the Can't access attributes on a primitive-typed value (string) error is concerned, the issue lies in this line:

cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data.data)

The cluster_certificate_authority_data output of the terraform-aws-eks module already contains the value of aws_eks_cluster.this[0].certificate_authority[0].data, so the correct reference for cluster_ca_certificate is base64decode(module.primary.cluster_certificate_authority_data):
provider "kubernetes" {
host = module.primary.cluster_endpoint
cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.primary.cluster_name]
}
}
provider "helm" {
kubernetes {
host = module.primary.cluster_endpoint
cluster_ca_certificate = base64decode(module.primary.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.primary.cluster_name]
}
}
}
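If you want to confirm the shape of the output before wiring it in, a throwaway output block works (illustrative; not part of the original config):

# Temporary debug output: shows cluster_certificate_authority_data is a
# plain base64-encoded string, not an object with a .data attribute.
output "debug_cluster_ca_data" {
  value = module.primary.cluster_certificate_authority_data
}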
In general, I would also recommend separating the EKS cluster deployment from the Kubernetes resource/workload deployments. By keeping the two providers' resources in separate Terraform states, you can limit the scope of changes to either the EKS cluster or the Kubernetes resources, as recommended by hashicorp/terraform-provider-kubernetes. A sketch of what the workload state's provider configuration could look like follows below.
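With that split, the workload state can read the already-created cluster via data sources; this is exactly where the data-source approach from the question works, because the cluster exists by the time this state runs. A minimal sketch (the cluster name is a placeholder):

data "aws_eks_cluster" "primary" {
  # Assumes the cluster was created by the other Terraform state
  # and already exists by the time this state is applied.
  name = "primary-cluster" # placeholder; substitute your cluster name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.primary.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.primary.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.primary.name]
  }
}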