kubernetes, kubernetes-helm, configmap, daemonset

Helm Hook Annotations not loaded in Kubernetes Configmaps


I am using two ConfigMaps, one for install and one for uninstall/delete. To trigger actions before delete, I defined Helm hooks in the annotations of the uninstall DaemonSet. Below is the YAML for the uninstall DaemonSet:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: delete-pack
      namespace: mynamespace
      labels:
        app: <ex>
        chart: <ex>
      annotations:
        "helm.sh/hook": pre-delete
        "helm.sh/hook-weight": "-10"
        "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
    spec:
      selector:
        matchLabels:
          app: <ex>

The rest of the YAML file is the usual template defining volume mounts, etc. When running the install command, the ConfigMaps are registered in the namespace, and checking them with kubectl describe configmaps -n mynamespace gives the following output for the delete-pack ConfigMap:

    Name:         delete-pack
    Labels:       app=sample
                  chart=sample
    Annotations:  meta.helm.sh/release-name: sample

The hook definitions are not loaded. As a result, when running delete, the ConfigMaps are removed and the hook terminates with an error stating

    configmap "delete-pack" not found

and the ConfigMaps are cleared from the namespace. The hook Pod becomes orphaned and is deleted without the defined pre-delete actions being executed.

I believe the reason is that the annotations are not loaded along with the ConfigMap in the namespace. What could cause the Helm hooks not to be registered? Is there a fundamental error in my Kubernetes installation?

Could you also please point me towards any further debugging steps?

UPDATE

The delete-pack ConfigMap did not have the annotations. But when they are added, delete-pack does not get registered at all among the ConfigMaps in the namespace, i.e.

    Name:         pack
    Labels:       app=sample
                  chart=sample
    Annotations:  meta.helm.sh/release-name: sample

    <delete-pack data absent>

Solution

  • The primary design point for Helm hooks is to run Jobs (see the Job sketch at the end of this answer). You can sort of use them to load things your Job needs, like ServiceAccounts and ConfigMaps. It won't really work well to use them to load a DaemonSet, especially in a pre-delete hook.

    The Helm hook documentation notes:

    If the resource is a Job or Pod kind, Helm will wait until it successfully runs to completion. [...] For all other kinds, as soon as Kubernetes marks the resource as loaded (added or updated), the resource is considered "Ready".

    In your setup, you have a pre-delete DaemonSet. That's not a Job or a bare Pod, so Helm sends the DaemonSet definition to Kubernetes; once it's done that, possibly before any individual Pods are created, Helm then sends the delete request for everything else in the chart.

    This is where the hook annotation on the ConfigMap matters. If the ConfigMap is also annotated as a pre-delete hook, then it won't be installed as part of the main helm install content, but it will be loaded when the DaemonSet is (a sketch of such an annotated ConfigMap is at the end of this answer). If it isn't, then it will be uninstalled immediately after the DaemonSet is sent to the cluster.

    The other challenge you may be running into is the helm.sh/hook-delete-policy: hook-succeeded annotation. It's not clearly documented what "succeeded" means for long-running workload resources. If "succeeded" just means "submitted to the cluster without error", this could cause the DaemonSet to be sent and then immediately deleted.

    The only concrete suggestion I have here is to make a second copy of the ConfigMap that the hook can use. I could also imagine writing a dedicated DaemonSet that mostly sat idle but did the cleanup work when it received SIGTERM, which could then run as a normal DaemonSet without being a Helm hook (see the DaemonSet sketch below). The use case seems unusual to me, especially the comment about needing to manipulate software on the host, since everything in Kubernetes usually runs in a container and can't see or affect the host filesystem at all.
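
    For reference, the pattern Helm hooks are built around looks roughly like the minimal pre-delete Job below. This is only a sketch: the name, image, and command are placeholders and not from the question. Because it is a Job, Helm waits for it to finish before deleting the rest of the release.

        apiVersion: batch/v1
        kind: Job
        metadata:
          name: pre-delete-cleanup        # placeholder name
          namespace: mynamespace
          annotations:
            "helm.sh/hook": pre-delete
            "helm.sh/hook-weight": "0"
            "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: cleanup
                  image: busybox          # placeholder image
                  # placeholder command standing in for the real pre-delete actions
                  command: ["sh", "-c", "echo running pre-delete cleanup"]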
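
    A minimal sketch of the ConfigMap annotated as a pre-delete hook, reusing the delete-pack name from the question; the data key and script contents are placeholders. The lower hook weight makes Helm create it before the hook DaemonSet.

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: delete-pack
          namespace: mynamespace
          annotations:
            "helm.sh/hook": pre-delete
            "helm.sh/hook-weight": "-20"    # lower weight than the DaemonSet, so created first
            "helm.sh/hook-delete-policy": before-hook-creation
        data:
          cleanup.sh: |
            # placeholder for the actual cleanup script
            echo "pre-delete cleanup"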
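
    And a rough sketch of the non-hook DaemonSet idea: it idles until the Pod is terminated, then runs a cleanup script mounted from a ConfigMap. The image, names, and script path are assumptions for illustration only.

        apiVersion: apps/v1
        kind: DaemonSet
        metadata:
          name: node-cleanup               # placeholder name
          namespace: mynamespace
        spec:
          selector:
            matchLabels:
              app: node-cleanup
          template:
            metadata:
              labels:
                app: node-cleanup
            spec:
              terminationGracePeriodSeconds: 60   # give the cleanup time to finish
              containers:
                - name: cleanup
                  image: busybox                  # placeholder image
                  # Sit idle until SIGTERM, then run the mounted cleanup script.
                  command: ["sh", "-c", "trap 'sh /config/cleanup.sh' TERM; while true; do sleep 1; done"]
                  volumeMounts:
                    - name: config
                      mountPath: /config
              volumes:
                - name: config
                  configMap:
                    name: delete-pack            # placeholder ConfigMap name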