kubernetes, kubernetes-ingress, nginx-ingress, distributed-system, ingress-controller

Kubernetes Ingress with master-slave architecture


I am trying to create a service which follows vertical replication:

[architecture diagram]

In this architecture, requests go to the master node. For that I can use a Kubernetes Ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: / 
        backend:
          serviceName: master-node
          servicePort: http

Now my requirement is that if the master is down, requests should go to a slave node. I could achieve that by creating three paths: /master, /slave-1, /slave-2. But the constraint is that the request path must remain the same, so the path must always be /.
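
For reference, the path-based workaround would look roughly like this (just a sketch of the approach I mean; it routes on /master, /slave-1 and /slave-2, which breaks the same-path requirement):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress-paths
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /master       # clients have to pick a node via the path
        backend:
          serviceName: master-node
          servicePort: http
      - path: /slave-1
        backend:
          serviceName: slave-1-node
          servicePort: http
      - path: /slave-2
        backend:
          serviceName: slave-2-node
          servicePort: http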

How can I create an Ingress in such a way that if master-node is down, all requests are forwarded to slave-1-node?

I want to achieve something like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: / 
        priority: 1
        backend:
          serviceName: master-node
          servicePort: http
  - host: example.com
    http:
      paths:
      - path: / 
        priority: 2
        backend:
          serviceName: slave-1-node
          servicePort: http
  - host: example.com
    http:
      paths:
      - path: / 
        priority: 3
        backend:
          serviceName: slave-2-node
          servicePort: http

Solution

  • I'm not sure how to do this using just an ingress resource, but it would be very easy if you were to deploy an haproxy pod in front of your services, so that your architecture looks like this:

    [architecture diagram: Ingress → haproxy pod → backend services]

    Using an haproxy configuration like this, you would get the behavior you want:

    global
        log         stdout format raw local0
        maxconn     4000
        user        haproxy
        group       haproxy
    
    defaults
        mode    http
        log global
        option  httplog
        option  dontlognull
        option  http-server-close
        option  forwardfor  except 127.0.0.0/8
        option  redispatch
        retries 3
        timeout connect     10s
        timeout client      1m
        timeout server      1m
    
    frontend  example_fe
        bind 0.0.0.0:8080
        default_backend example_be
    
    backend example_be
        option httpchk GET /healthz
    
        server alpha example-alpha:80 check
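        # beta and gamma are 'backup' servers: haproxy only routes to them while
        # every non-backup server (here just alpha) is failing its health check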
        server beta example-beta:80 check backup
        server gamma example-gamma:80 check backup
    

    This will send all requests to alpha as long as it is running. If alpha is offline, requests will go to beta, and if beta is not running, requests will go to gamma. I found this article useful when looking for information about how to set this up.
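
    Each server line points at an ordinary ClusterIP Service, resolved through the cluster DNS. As a sketch (the selector labels and the named target port are assumptions about your pods, which also need to answer GET /healthz for the health check above), example-alpha could be:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-alpha
    spec:
      ports:
      - name: http
        port: 80          # the port haproxy dials (example-alpha:80)
        targetPort: http  # assumes the pods expose a container port named "http"
      selector:
        app: example      # assumed labels on the alpha/master pods
        role: alpha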

    You create a Deployment that runs haproxy:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: haproxy
      name: haproxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: haproxy
      template:
        metadata:
          labels:
            app: haproxy
        spec:
          containers:
          - image: docker.io/haproxy:latest
            name: haproxy
            ports:
            - containerPort: 8080
              name: http
            volumeMounts:
            - mountPath: /usr/local/etc/haproxy
              name: haproxy-config
          volumes:
          - configMap:
              name: haproxy-config-ddc898c5f5
            name: haproxy-config
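
    The Deployment mounts haproxy.cfg from a ConfigMap. The hash suffix in haproxy-config-ddc898c5f5 suggests the ConfigMap was generated with kustomize's configMapGenerator (an assumption; a hand-written ConfigMap with a haproxy.cfg data key works just as well). A minimal kustomization.yaml for that, with the configuration above saved as haproxy.cfg alongside the manifests, could look like:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - deployment.yaml
    - service.yaml
    - ingress.yaml
    configMapGenerator:
    - name: haproxy-config
      files:
      - haproxy.cfg

    kustomize appends a content hash to the generated ConfigMap name and rewrites references to it in the other manifests, so changing haproxy.cfg produces a new ConfigMap name and rolls the Deployment.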
    
    

    A Service pointing at that Deployment:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: haproxy
      name: haproxy
    spec:
      ports:
      - name: http
        port: 80
        targetPort: http
      selector:
        app: haproxy
    

    And then point the Ingress at that Service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - backend:
              service:
                name: haproxy
                port:
                  name: http
            path: /
            pathType: Prefix
    
    

    I've put together a complete configuration here if you want to try this out.