Tags: kubernetes, nginx, kubernetes-ingress, ingress-nginx

Kubernetes Ingress-Controllers "fighting" over Address in Ingress


Infrastructure Background:

I have 4 nodes in my Kubernetes (K3s) cluster.

k3s-server location=home   (VM) mainly used for etcd replication
k3s-agent  location=home   (VM) runs most pods
mercury    location=home   (RPI4) backup for important pods
moon       location=cloud  (Cloud VM) runs certain workloads in a public cloud

I am running two instances of the ingress-nginx Helm chart in two namespaces. One exposes local services at home for ingressClassName=nginx using loadBalancerIP: 192.168.113.230 and runs in the nginx-ingress-home namespace. The cloud controller uses ingressClassName=nginx-cloud with loadBalancerIP: 91.x.x.x and runs in the nginx-ingress-cloud namespace. (Values for the Helm charts and an example Ingress are below.)
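
For reference, both instances come from the upstream ingress-nginx chart; the installs look roughly like this (the release names and values file names are placeholders for whatever you use):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Home controller, values under "Values of nginx-ingress-home" below
helm upgrade --install nginx-ingress-home ingress-nginx/ingress-nginx \
  --namespace nginx-ingress-home --create-namespace \
  -f values-home.yaml

# Cloud controller, values under "Values of nginx-ingress-cloud" below
helm upgrade --install nginx-ingress-cloud ingress-nginx/ingress-nginx \
  --namespace nginx-ingress-cloud --create-namespace \
  -f values-cloud.yaml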

The Problem

I now have multiple Ingresses defined that use the different classes. However, kubectl get ingress -A shows the following output.

NAMESPACE              NAME                   CLASS         HOSTS                        ADDRESS           PORTS     AGE
kubernetes-dashboard   kubernetes-dashboard   nginx         k3s.local.example.com        192.168.113.230   80, 443   4d3h
longhorn-system        longhorn-ingress       nginx         longhorn.local.example.com   192.168.113.230   80, 443   10d
mailu                  mailu                  nginx-cloud   mail.example.com             192.168.113.230   80, 443   2d23h
pihole                 pihole                 nginx         dns.local.example.com        192.168.113.230   80, 443   10d
ubiquiti               unifi-web-interface    nginx         unifi.local.example.com      192.168.113.230   80, 443   24h

You can see that regardless of the ingressClassName set on an Ingress, the address is always the one from one of the ingress controllers, and the addresses keep switching periodically. The logs of nginx-ingress-home show that the controller constantly rewrites the Ingress addresses (at a 1-minute interval):

I0510 19:46:12.087815       7 status.go:300] "updating Ingress status" namespace="pihole" ingress="pihole" currentValue=[{IP:91.x.x.x Hostname: Ports:[]}] newValue=[{IP:192.168.113.230 Hostname: Ports:[]}]
I0510 19:46:12.087857       7 status.go:300] "updating Ingress status" namespace="ubiquiti" ingress="unifi-web-interface" currentValue=[{IP:91.x.x.x Hostname: Ports:[]}] newValue=[{IP:192.168.113.230 Hostname: Ports:[]}]
I0510 19:46:12.088485       7 status.go:300] "updating Ingress status" namespace="mailu" ingress="mailu" currentValue=[{IP:91.x.x.x Hostname: Ports:[]}] newValue=[{IP:192.168.113.230 Hostname: Ports:[]}]
I0510 19:46:12.088782       7 status.go:300] "updating Ingress status" namespace="longhorn-system" ingress="longhorn-ingress" currentValue=[{IP:91.x.x.x Hostname: Ports:[]}] newValue=[{IP:192.168.113.230 Hostname: Ports:[]}]
I0510 19:46:12.090051       7 status.go:300] "updating Ingress status" namespace="kubernetes-dashboard" ingress="kubernetes-dashboard" currentValue=[{IP:91.x.x.x Hostname: Ports:[]}] newValue=[{IP:192.168.113.230 Hostname: Ports:[]}]

Of course, nginx-ingress-cloud does the same thing, just replacing 192.168.113.230 with 91.x.x.x.
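
Both IngressClass objects are created by the Helm chart and, with the chart defaults, carry the same controller identifier. This can be checked directly (the output below is what the defaults produce):

kubectl get ingressclass -o custom-columns=NAME:.metadata.name,CONTROLLER:.spec.controller

NAME          CONTROLLER
nginx         k8s.io/ingress-nginx
nginx-cloud   k8s.io/ingress-nginx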

Does somebody know how to stop them from "taking ownership" of all Ingresses, so that each one only updates the Ingresses with its own IngressClass assigned?

Configs

Values of nginx-ingress-home

controller:
  ingressClass: "nginx"
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
  service:
    type: "LoadBalancer"
    loadBalancerIP: 192.168.113.230
  nodeSelector:
    location: home
  tolerations: # Allow running on backup nodes
    - key: "backup"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  affinity: # Prefer running on nodes labeled type=power
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: type
                operator: In
                values:
                  - power

Values of nginx-ingress-cloud

controller:
  ingressClass: "nginx-cloud"
  ingressClassResource:
    name: nginx-cloud
    enabled: true
    default: false
  service:
    type: "LoadBalancer"
    loadBalancerIP: 91.x.x.x
  nodeSelector:
    location: cloud

Example home Ingress: Kubernetes Dashboard

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - k3s.local.example.com
      secretName: kubernetes-dashboard-tls
  rules:
    - host: k3s.local.example.com
      http:
        paths:
          - backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
            path: /
            pathType: Prefix
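
Example cloud Ingress: Mailu (sketch)

For comparison, a cloud-facing Ingress differs only in its ingressClassName. The backend service name, port, and TLS secret name below are placeholders, not the actual mailu values:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mailu
  namespace: mailu
spec:
  ingressClassName: nginx-cloud
  tls:
    - hosts:
        - mail.example.com
      secretName: mailu-tls          # placeholder secret name
  rules:
    - host: mail.example.com
      http:
        paths:
          - backend:
              service:
                name: mailu-front    # placeholder service name
                port:
                  number: 80         # placeholder port
            path: /
            pathType: Prefix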

Solution

  • With the help of a comment from Blender Fox, I was able to figure out the solution.

    Each IngressClass object has a spec.controller field, and that is the value an ingress controller looks at when deciding whether a class belongs to its set of classes. The Helm chart lets you change this value via controller.ingressClassResource.controllerValue. I set it to k8s.io/ingress-nginx/nginx and k8s.io/ingress-nginx/nginx-cloud respectively, instead of the default k8s.io/ingress-nginx, so each controller now only claims Ingresses of its own class.
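
    Concretely, the only change needed in each values file is the controllerValue (everything else stays as in the configs above):

    # nginx-ingress-home
    controller:
      ingressClass: "nginx"
      ingressClassResource:
        name: nginx
        enabled: true
        default: true
        controllerValue: "k8s.io/ingress-nginx/nginx"

    # nginx-ingress-cloud
    controller:
      ingressClass: "nginx-cloud"
      ingressClassResource:
        name: nginx-cloud
        enabled: true
        default: false
        controllerValue: "k8s.io/ingress-nginx/nginx-cloud"

    After upgrading both releases, kubectl get ingressclass -o custom-columns=NAME:.metadata.name,CONTROLLER:.spec.controller should show two different controller strings, and each controller should only touch the status of Ingresses that use its own class.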