grafana, prometheus-operator

LDAP configuration not picked up by grafana.ini during Helm chart install


I have installed the kube-prometheus-stack-9.4.5 operator using Helm, mostly with default settings, passing a custom values.yaml for the Grafana URLs and LDAP configuration. I could access the Grafana dashboard and also see the configuration in grafana.ini when I exec into the Grafana container. I then added the LDAP settings below to the YAML file and noticed that none of the LDAP information ends up in grafana.ini. The container does have the auth.ldap flag set to true in grafana.ini, but I don't see my LDAP configuration either in the generated secret or in /etc/grafana/ldap.toml. The /etc/grafana/ldap.toml in the container only has the default LDAP settings, with none of the custom values specified in values.yaml.

grafana:
  enabled: true
  namespaceOverride: ""
  rbac:
    pspUseAppArmor: false
  grafana.ini:
    server:
      domain: sandboxgrmysite.com
      #root_url: "%(protocol)s://%(domain)s/"
      root_url: https://sandboxgrmysite.com/grafana/
      serve_from_sub_path: true
    auth.ldap:
      enabled: true
      allow_sign_up: true
  envFromSecret: "grafana-ldap-cred"
  ldap:
    enabled: true
    existingSecret: ""
    config: |-
      verbose_logging = true

      [[servers]]
      host = "my.ldap.server.com"
      port = 636
      use_ssl = true
      root_ca_cert = "/home/myid/CA_Cert.pem"
      start_tls = false
      ssl_skip_verify = false
      bind_dn = "uid=ldapbind,ou=Users,dc=com"
      bind_password = "${LDAP_BIND_PASSWORD}"
      search_filter = "(uid=%s)"
      search_base_dns = ["dc=com"]

      [servers.attributes]
      name = "givenName"
      surname = "sn"
      username = "cn"
      email = "mail"
      group_search_filter = "(&(objectClass=groupOfUniqueNames)(uniquemember=%s))"
      ## An array of the base DNs to search through for groups. Typically uses ou=groups
      group_search_base_dns = ["ou=groups,dc=Global,dc=com"]
      ## the %s in the search filter will be replaced with the attribute defined below
      group_search_filter_user_attribute = "uid"

      [[servers.group_mappings]]
      group_dn = "cn=admin_ldap,ou=Users,dc=com"
      org_role = "Admin"
      grafana_admin = true

      [[servers.group_mappings]]
      group_dn = "*"
      org_role = "Viewer"

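For reference, this is roughly how the rendered files inside the Grafana container can be checked (the pod label and container name are assumptions for a typical kube-prometheus-stack install; adjust them to your setup):

# Pod label assumed for the Grafana deployment created by the chart
kubectl -n monitoring get pods -l "app.kubernetes.io/name=grafana"

# Container name "grafana" assumed; use the pod name from the previous command
kubectl -n monitoring exec <grafana-pod> -c grafana -- cat /etc/grafana/grafana.ini
kubectl -n monitoring exec <grafana-pod> -c grafana -- cat /etc/grafana/ldap.toml
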
I have looked at this post and compared the configuration, but still no luck. Any clues what is missing here?


Solution

  • I spent some time looking at the Helm templates and other configuration to understand what was missing, and was able to make it work with the configuration below for Grafana in the custom-values.yaml used with the operator chart.

    Pay special attention to the indentation, since that caused some issues for me when copying and pasting from the Grafana chart's values.yaml.

    grafana:
      enabled: true
      namespaceOverride: ""
      rbac:
        pspUseAppArmor: false
      grafana.ini:
        # To troubleshoot and get more log info enable ldap debug logging in grafana.ini
        log:
          mode: console
          #level: debug
          # to enable debug level for ldap calls only
          #filters: ldap:debug
        
        server:
          domain: sbgrafana.mysite.com
          #root_url: "%(protocol)s://%(domain)s/"
          root_url: https://sbgrafana.mysite.com/grafana/
          serve_from_sub_path: true
        auth.ldap:
          enabled: true
          allow_sign_up: true
          config_file: /etc/grafana/ldap.toml
    
      ldap:
        enabled: true
        # `existingSecret` is a reference to an existing secret containing the ldap configuration
        # for Grafana in a key `ldap-toml`.
        existingSecret: ""
        # `config` is the content of `ldap.toml` that will be stored in the created secret
        config: |-
          verbose_logging = true
    
          [[servers]]
          host = "my.ldap.com"
          # Default port is 389 or 636 if use_ssl = true
          # port = 389
          # use_ssl = false
          port = 636
          use_ssl = true
          # CA cert is mapped as certs-configmap in extraConfigmapMounts section below -- path in Grafana container
          root_ca_cert = "/etc/grafana/ssl/CACert.pem"
          start_tls = false
          ssl_skip_verify = false
          bind_dn = "uid=%s,ou=users,dc=myorg,dc=com"
          bind_password = "${LDAP_BIND_PASSWORD}"
          search_filter = "(uid=%s)"
          group_search_filter = "(&(objectClass=groupOfUniqueNames)(uniquemember=%s))"
          group_search_base_dns = ["uid=%s,ou=users,dc=myorg,dc=com"]
          group_search_filter_user_attribute = "uid"
          
          [servers.attributes]
          name = "givenName"
          surname = "sn"
          username = "cn"
          email = "mail"
    
          [[servers.group_mappings]]
          group_dn = "cn=admins,dc=grafana,dc=org"
          org_role = "Admin"
    
          [[servers.group_mappings]]
          group_dn = "cn=users,dc=grafana,dc=org"
          org_role = "Editor"
    
          [[servers.group_mappings]]
          group_dn = "*"
          org_role = "Viewer"
    
      extraConfigmapMounts:
        - name: certs-configmap
          mountPath: /etc/grafana/ssl/
          configMap: certs-configmap
          readOnly: true
    

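    To confirm that the LDAP configuration actually gets picked up, the chart renders the ldap.toml content into a secret under the key ldap-toml. A quick check (the secret name here is an assumption; it follows the Helm release name, so adjust accordingly):

    # Secret name assumed to be <release-name>-grafana for a default install
    kubectl -n monitoring get secret <release-name>-grafana -o jsonpath='{.data.ldap-toml}' | base64 --decode
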
    Steps for creating the configmap referenced above for LDAP SSL/TLS communication. I couldn't find clear information on this anywhere, so I'm adding it here for others.

    kubectl -n monitoring create configmap certs-configmap --from-file=my-ca-cert.pem
    
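    Note that --from-file uses the file name as the configmap key, and that key becomes the file name under the mount path, so it has to match root_ca_cert in ldap.toml (/etc/grafana/ssl/CACert.pem above). If the local file is named differently, the key can be set explicitly (the local path here is just an example):

    # Key (CACert.pem) must match the root_ca_cert path; the local file name is illustrative
    kubectl -n monitoring create configmap certs-configmap --from-file=CACert.pem=./my-ca-cert.pem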

    Create a custom secret in the monitoring namespace with LDAP_BIND_PASSWORD as the key and the LDAP bind password as the value. That way the password no longer needs to sit in plain text in the custom values.yaml file.
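
    A minimal sketch of creating that secret, assuming the name matches what is set in envFromSecret (the question used grafana-ldap-cred):

    # Secret name must match grafana.envFromSecret in values.yaml; password value is a placeholder
    kubectl -n monitoring create secret generic grafana-ldap-cred --from-literal=LDAP_BIND_PASSWORD='<your-bind-password>'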