kubernetes, prometheus, kubernetes-helm, kube-prometheus-stack

How to get around the Secret size limit in the kube-prometheus-stack helm chart when adding more and more provisioned dashboards as separate yml files?


For kube-prometheus-stack we kept adding dashboard configs to the /grafana/dashboards folder to get more and more provisioned dashboards.

Then one day we ran:

kube-prometheus-stack>helm -n monitoring upgrade prometheus ./ -f ./values-core.yaml 

and got:

Error: UPGRADE FAILED: create: failed to create: Secret "sh.helm.release.v1.prometheus.v16" is invalid: data: Too long: must have at most 1048576 bytes

What is the intended way to get around this limitation? We need to keep adding more and more provisioned dashboards to the chart.

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:04:16Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

Solution

  • As the Helm documentation explains, the purpose of the auto-generated secret is to record release information. By Kubernetes design, individual Secrets are limited to 1 MiB. So the secret size is a hard Kubernetes limit, and the size of the release secret grows with the size of the Helm chart.
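    You can check how close a release already is to that limit by measuring the payload of its release secret, using the secret name from the error above:

        # Size in bytes of the base64-encoded release record; the whole
        # Secret object must stay under the 1 MiB limit.
        kubectl -n monitoring get secret sh.helm.release.v1.prometheus.v16 \
          -o jsonpath='{.data.release}' | wc -c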

    In this use case, the main reason the chart is so large is that Grafana's dashboardProvider is used to deploy ready-made dashboard JSON files automatically. The provider loads every JSON file into kube-prometheus-stack to create the dashboard ConfigMaps, so the day you add one dashboard too many, the release secret finally hits the limit and you get the error above.

    If you don't want to switch Helm to a different release storage backend (sketched just below), there is an alternative workaround. The main idea is to take the job of creating dashboard ConfigMaps away from Grafana's dashboardProvider and create the ConfigMaps ourselves.
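    For completeness, Helm 3 can store release records in a SQL database instead of Secrets, which is not subject to the 1 MiB cap. A minimal sketch, assuming a reachable PostgreSQL instance (the connection string is illustrative):

        # Helm's SQL storage backend avoids the Secret size limit entirely.
        export HELM_DRIVER=sql
        export HELM_DRIVER_SQL_CONNECTION_STRING="postgresql://helm:changeme@localhost:5432/helm?sslmode=disable"
        helm -n monitoring upgrade prometheus ./ -f ./values-core.yaml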

    First, we abandon this kind of declaration in the kube-prometheus-stack values:

        dashboardProviders:
          dashboardproviders.yaml:
            apiVersion: 1
            providers:
              - name: 'default'
                orgId: 1
                folder: 'default'
                type: file
                disableDeletion: true
                editable: true
                options:
                  path: /var/lib/grafana/dashboards/default

        dashboards:
          default:
          {{- range $_, $file := ( exec "bash" (list "-c" "echo -n dashboards/default/*.json") | splitList " " ) }}
            {{ trimSuffix (ext $file) (base $file) }}:
              json: |
                {{- readFile $file }}
          {{- end }}
    

    Then we create a separate Helm chart that does nothing but render the dashboard ConfigMaps; its layout is sketched below.
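    An assumed layout for that chart (all names are illustrative):

        grafana-dashboards/
        ├── Chart.yaml
        ├── values.yaml.gotmpl              # values template below (see note)
        ├── templates/
        │   └── dashboard-configmaps.yaml   # chart template below
        └── dashboards/
            └── default/
                └── *.json                  # the provisioned dashboard files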

    Helm chart template

        {{- range $config, $data := .Values.configs }}
        ---
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: grafana-dashboard-{{ $config }}
          labels:
            grafana_dashboard: "1"
          annotations:
            grafana_folder: {{ $config }}
        data:
          {{- range $key, $val := $data }}
          {{ $key }}.json: |
            {{ mustToJson $val }}
          {{- end }}
        {{- end }}
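    Before deploying, you can sanity-check what the chart renders; a sketch, assuming the values template below has already been rendered to a plain rendered-values.yaml (chart path and release name are illustrative):

        helm template grafana-dashboards ./grafana-dashboards \
          -f rendered-values.yaml | kubectl apply --dry-run=client -f -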
    

    Helm values, reading each dashboard JSON file and embedding its content:

        configs:
          default:
          {{- range $_, $file := ( exec "bash" (list "-c" "echo -n dashboards/default/*.json") | splitList " " ) }}
            {{ trimSuffix (ext $file) (base $file) }}:
              {{ readFile $file }}
          {{- end }}
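    Note that exec and readFile are helmfile value-template functions, not plain Helm ones: Helm does not template values files, so this file has to be a .gotmpl rendered by helmfile, or be generated up front by another tool. The glob it shells out to can be run by hand to check which files will be picked up:

        bash -c 'echo -n dashboards/default/*.json'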
    

    Now, when we deploy this separate dashboard chart, it generates all the ConfigMaps containing the dashboard JSON automatically.
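    You can confirm they are there by selecting on the label the template sets (namespace as in the question):

        kubectl -n monitoring get configmaps -l grafana_dashboard=1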

    Finally, as the last step, we configure the Grafana sidecar so that it picks the dashboards up from those ConfigMaps:

        grafana:
          defaultDashboardsEnabled: false
          sidecar:
            dashboards:
              enabled: true
              label: grafana_dashboard
              annotations:
                grafana_folder: "Default"
              folder: /tmp/dashboards
              folderAnnotation: grafana_folder
              provider:
                foldersFromFilesStructure: true
    

    After updating kube-prometheus-stack, wait a while, or watch the Grafana sidecar pod logs: you will see the dashboard ConfigMaps being loaded into the pod and ADDed as dashboards.
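    For example (the deployment and sidecar container names assume the release name prometheus and the chart defaults):

        # Container name grafana-sc-dashboard is the chart's default for the
        # dashboard sidecar; adjust if your values override it.
        kubectl -n monitoring logs deployment/prometheus-grafana \
          -c grafana-sc-dashboard -f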