kubernetes, google-cloud-platform, google-kubernetes-engine, docker-ingress

GKE with Ingress setup always gives status UNHEALTHY


To start off, I tested the tutorial at https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer,

which works fine. I also tested the same tutorial with a TLS secret added to test HTTPS, and that also worked fine.

My problems arise when I create my own image. Here are the steps I take:

  1. The Dockerfile:
     # We label our stage as "builder"
     FROM node:9.4.0-alpine as builder

     COPY package.json package-lock.json ./

     ## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
     RUN npm i && mkdir /srv/cs-ui && cp -R ./node_modules ./srv/cs-ui

     WORKDIR /srv/cs-ui

     COPY . .

     ## Build the angular app in production mode and store the artifacts in dist folder
     RUN $(npm bin)/ng build --environment "prod"

     FROM nginx

     ## Copy our default nginx config
     COPY nginx/default.conf /etc/nginx/conf.d/

     ## Remove default nginx website
     RUN rm -rf /usr/share/nginx/html/*

     ## From the "builder" stage, copy the artifacts in the dist folder to the default nginx public folder
     COPY --from=builder /srv/cs-ui/dist /usr/share/nginx/html/
  1. The Dockerfile is built with a docker-compose file that looks like this:
version: '2'
services:
  cs-ui:
    image: "gcr.io/cs-micro/cs-ui:v1"
    container_name: "cs-ui"
    tty: true
    build: .
    ports:
      - "80:80"
  1. Locally this works without any issues. The next thing I do is to push it to the Container Registry.
gcloud docker -- push gcr.io/cs-micro/cs-ui:v1
  1. After that I create a deployment from the image (a roughly equivalent manifest is sketched after the command):
kubectl run cs-ui --image=gcr.io/cs-micro/cs-ui:v1 --port=80
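
For reference, the deployment this command generates should be roughly equivalent to the following manifest (a sketch pieced together from the kubectl describe output further down; auto-generated fields are left out):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: cs-ui
      labels:
        run: cs-ui
    spec:
      replicas: 1
      selector:
        matchLabels:
          run: cs-ui
      template:
        metadata:
          labels:
            run: cs-ui
        spec:
          containers:
          - name: cs-ui
            image: gcr.io/cs-micro/cs-ui:v1
            ports:
            - containerPort: 80
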
  1. Then I expose it as a NodePort service (roughly equivalent manifest sketched after the command):
kubectl expose deployment cs-ui --target-port=80 --type=NodePort
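
The resulting service should be roughly equivalent to this manifest (a sketch; the actual nodePort is assigned automatically by the cluster):

    apiVersion: v1
    kind: Service
    metadata:
      name: cs-ui
      labels:
        run: cs-ui
    spec:
      type: NodePort
      selector:
        run: cs-ui
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
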
  1. Then I apply the following ingress file (the TLS secret it references is sketched after the apply command):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
    - secretName: tls-certificate
  backend:
    serviceName: cs-ui
    servicePort: 80

with command:

kubectl apply -f test.yaml
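
The tls-certificate secret referenced by the ingress was created beforehand from our certificate and key; a minimal sketch of what such a secret looks like (the data values are placeholders for the base64-encoded PEM files):

    apiVersion: v1
    kind: Secret
    metadata:
      name: tls-certificate
    type: kubernetes.io/tls
    data:
      tls.crt: <base64-encoded certificate>
      tls.key: <base64-encoded private key>
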
  1. kubectl describe service
    Name:                     cs-ui
    Namespace:                default
    Labels:                   run=cs-ui
    Annotations:              <none>
    Selector:                 run=cs-ui
    Type:                     NodePort
    IP:                       10.35.244.124
    Port:                     <unset>  80/TCP
    TargetPort:               80/TCP
    NodePort:                 <unset>  30272/TCP
    Endpoints:                10.32.0.32:80
    Session Affinity:         None
    External Traffic Policy:  Cluster
    Events:                   <none>


    Name:              kubernetes
    Namespace:         default
    Labels:            component=apiserver
                       provider=kubernetes
    Annotations:       <none>
    Selector:          <none>
    Type:              ClusterIP
    IP:                10.35.240.1
    Port:              https  443/TCP
    TargetPort:        443/TCP
    Endpoints:         35.195.192.28:443
    Session Affinity:  ClientIP
    Events:            <none>
  1. kubectl describe deployment
    Name:                   cs-ui
    Namespace:              default
    CreationTimestamp:      Thu, 25 Jan 2018 12:27:59 +0100
    Labels:                 run=cs-ui
    Annotations:            deployment.kubernetes.io/revision=1
    Selector:               run=cs-ui
    Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  1 max unavailable, 1 max surge
    Pod Template:
      Labels:  run=cs-ui
      Containers:
       cs-ui:
        Image:        gcr.io/cs-micro/cs-ui:v1
        Port:         80/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   cs-ui-2929390783 (1/1 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  9m    deployment-controller  Scaled up replica set cs-ui-2929390783 to 1
  1. kubectl describe ing
    Name:             basic-ingress
    Namespace:        default
    Address:          35.227.220.186
    Default backend:  cs-ui:80 (10.32.0.32:80)
    TLS:
      tls-certificate terminates
    Rules:
      Host  Path  Backends
      ----  ----  --------
      *     *     cs-ui:80 (10.32.0.32:80)
    Annotations:
      https-forwarding-rule:  k8s-fws-default-basic-ingress--f5fde3efbfa51336
      https-target-proxy:     k8s-tps-default-basic-ingress--f5fde3efbfa51336
      ssl-cert:               k8s-ssl-default-basic-ingress--f5fde3efbfa51336
      target-proxy:           k8s-tp-default-basic-ingress--f5fde3efbfa51336
      url-map:                k8s-um-default-basic-ingress--f5fde3efbfa51336
      backends:               {"k8s-be-30272--f5fde3efbfa51336":"UNHEALTHY"}
      forwarding-rule:        k8s-fw-default-basic-ingress--f5fde3efbfa51336
      static-ip:              k8s-fw-default-basic-ingress--f5fde3efbfa51336
    Events:
      Type    Reason   Age               From                     Message
      ----    ------   ----              ----                     -------
      Normal  ADD      12m               loadbalancer-controller  default/basic-ingress
      Normal  CREATE   11m               loadbalancer-controller  ip: 35.227.220.186
      Normal  Service  6m (x4 over 11m)  loadbalancer-controller  default backend set to cs-ui:30272
  1. After 3-5 minutes the backend reports UNHEALTHY, and I have no clue why, because the setup is almost exactly the same as theirs.

I have read countless threads on what to do when the backend status is UNHEALTHY, but none of them have helped. One mentioned adding the firewall rule described in this tutorial: https://cloud.google.com/compute/docs/load-balancing/health-checks, which I added, but it did not help.

If you have any suggestions I will gladly test them.


Solution

  • It turned out our Angular application had a redirect on '/', which returned a 302 response. The load balancer's health check expects a 200 on that path, so the redirect makes the check fail and results in an UNHEALTHY state.

    As soon as we set up a custom health check, it worked.
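
    For anyone hitting the same thing: the GCE ingress controller derives its health check from the pod's HTTP readiness probe, so one way to get a custom health check is to add a readiness probe pointing at a path that answers with a plain 200 instead of the redirecting '/'. A minimal sketch, added to the cs-ui container in the deployment's pod template (the /healthz path is just an example and has to be something the app actually serves with a 200):

        readinessProbe:
          httpGet:
            path: /healthz   # example path; must return HTTP 200, not a redirect
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10

    Alternatively, removing the redirect so that '/' itself returns a 200 lets the default health check pass without any extra configuration.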