kubernetes, daemonset

Is it possible to know whether the node where a Kubernetes Pod is being scheduled is a master or a worker?


I'm currently using Kubernetes to schedule a DaemonSet on both master and worker nodes.

The DaemonSet definition is the same for both node types (same image, same volumes, etc.); the only difference is that, when the entrypoint is executed, I need to write a different configuration file (generated in Python with some dynamic values) depending on whether the node is a master or a worker.

Currently, to work around this, I'm using two different DaemonSet definitions with an env variable that tells whether the node is a master or not. Here is the YAML file (only the relevant parts):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: worker-ds
  namespace: kube-system
  labels:
    k8s-app: worker
spec:
  ...
    spec:
      hostNetwork: true
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "false"
      ...
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: master-ds
  namespace: kube-system
  labels:
    k8s-app: master
spec:
  ...
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "true"
      ...

However, since the only difference is the IS_MASTER value, I want to collapse both definitions into a single one that programmatically determines whether the node where the pod is being scheduled is a master or a worker.

Is there any way to obtain this information about the node programmatically (even by reading a configuration file on the node, for example something that only the master has or vice versa)?
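
For example, on a kubeadm-style cluster I can imagine mounting the static pod manifests directory from the host and having the entrypoint check whether the API server manifest is there (the path and file name below are assumptions about my particular setup):

spec:
  ...
    spec:
      hostNetwork: true
      containers:
        - name: my-image
          ...
          volumeMounts:
            - name: kube-manifests
              mountPath: /host/etc/kubernetes/manifests
              readOnly: true
      volumes:
        - name: kube-manifests
          hostPath:
            path: /etc/kubernetes/manifests
      ...

The Python entrypoint would then simply test for /host/etc/kubernetes/manifests/kube-apiserver.yaml, which should only exist on masters.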

Thanks in advance.


Solution

  • Unfortunately, there is no convenient way to access node information from inside a pod.

    If you only want a single DaemonSet definition, you can add a sidecar container to your pod: the sidecar container can access the Kubernetes API, and your main container can then get what it needs from the sidecar, as sketched below.

    By the way, I think your current solution is perfectly fine as it is :)
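
    For completeness, here is a minimal sketch of that single-DaemonSet idea (apiVersion apps/v1 shown; older clusters may still use extensions/v1beta1 as in your manifests). The ServiceAccount name node-reader is made up and must be bound to a ClusterRole that can get nodes, and the sidecar assumes a kubectl image such as bitnami/kubectl. The main container learns its node name through the Downward API, and the sidecar exposes the API server on localhost:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-role-aware-ds
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: node-role-aware
      template:
        metadata:
          labels:
            k8s-app: node-role-aware
        spec:
          hostNetwork: true
          serviceAccountName: node-reader       # hypothetical SA allowed to "get" nodes
          tolerations:                          # let the pods land on masters as well
            - key: node-role.kubernetes.io/master
              operator: Exists
              effect: NoSchedule
          containers:
            - name: my-image
              image: my-image                   # your existing image
              env:
                - name: NODE_NAME               # Downward API: the node this pod was scheduled on
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
            - name: kubectl-proxy               # sidecar that proxies the Kubernetes API
              image: bitnami/kubectl
              command: ["kubectl", "proxy", "--port=8001"]

    The entrypoint can then query http://localhost:8001/api/v1/nodes/$NODE_NAME through the sidecar, check whether the node-role.kubernetes.io/master label is present, and write the master or worker configuration accordingly.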