kubernetes terraform kubernetes-helm azure-aks

How do I authenticate HCP Terraform to an AKS cluster for deploying Helm charts and Kubernetes resources?


I am trying to deploy some Helm charts and Kubernetes resources on my AKS cluster.

I am using the following provider configuration, which I took from a Terraform example:

data "azurerm_kubernetes_cluster" "info" {
  depends_on          = [module.aks]
  name                = "${var.app}-${var.environment_prefix}-aks"
  resource_group_name = module.resource_group.name
}

provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.info.kube_config.0.host
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.cluster_ca_certificate)
  }
}



provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.info.kube_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.cluster_ca_certificate)
}

Helm chart example:

resource "helm_release" "soc_loki" {
  name       = "loki"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "loki"
  version    = "6.22.0"
  namespace  = "soc-loki"

  values = [
    file("./helm_charts_values/loki/values.yaml"),
    file("./helm_charts_values/loki/env/${var.environment_prefix}.yaml")
  ]
}

Every time I run `terraform apply` on HCP Terraform, it fails with this error: `Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials`. Does this depend on how AKS authentication is configured? I am using Microsoft Entra ID authentication with Azure RBAC.


Solution

  • Since I was using Microsoft Entra ID (Azure AD) authentication with Azure RBAC, I needed to use `kube_admin_config` instead of `kube_config`.

    The provider configuration below worked for me:

    data "azurerm_kubernetes_cluster" "info" {
      depends_on          = [module.aks]
      name                = "${var.app}-${var.environment_prefix}-aks"
      resource_group_name = module.resource_group.name
    }
    
    provider "helm" {
      kubernetes {
        host                   = data.azurerm_kubernetes_cluster.info.kube_admin_config.0.host
        cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.cluster_ca_certificate)
        client_certificate     = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.client_certificate)
        client_key             = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.client_key)
      }
    }
    
    provider "kubernetes" {
      host                   = data.azurerm_kubernetes_cluster.info.kube_admin_config.0.host
      cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.cluster_ca_certificate)
      client_certificate     = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.client_certificate)
      client_key             = base64decode(data.azurerm_kubernetes_cluster.info.kube_admin_config.0.client_key)
    }
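
  • Note that `kube_admin_config` uses the cluster-admin credentials and bypasses Entra ID, so it is only populated when local accounts are enabled on the cluster. If local accounts are disabled, one alternative is exec-based authentication with `kubelogin`. The sketch below assumes the `kubelogin` binary is available on the HCP Terraform agent and that a service principal with a suitable Azure RBAC role is used; `var.client_id`, `var.client_secret`, and `var.tenant_id` are hypothetical variables you would have to define. The server ID is the well-known AKS Entra ID server application ID.

    provider "kubernetes" {
      host                   = data.azurerm_kubernetes_cluster.info.kube_config.0.host
      cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.info.kube_config.0.cluster_ca_certificate)

      # Obtain an Entra ID token via kubelogin instead of client certificates.
      exec {
        api_version = "client.authentication.k8s.io/v1beta1"
        command     = "kubelogin"
        args = [
          "get-token",
          "--login", "spn",
          "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # well-known AKS server app ID
          "--client-id", var.client_id,         # hypothetical variable
          "--client-secret", var.client_secret, # hypothetical variable
          "--tenant-id", var.tenant_id,         # hypothetical variable
        ]
      }
    }

    The same `exec` block can go inside the `kubernetes {}` block of the `helm` provider.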