google-kubernetes-engine, kubernetes-ingress, ambassador

Ambassador Edge Stack w/ GKE Health Check and LB Provisioning Errors


I'm currently installing the Ambassador Edge Stack (AES) to help manage several applications running in our GKE cluster, but I'm experiencing a couple of issues.

The steps in the manual install guide seem to be working fine, aside from edgectl being deprecated in favour of Telepresence (which I haven't really tried yet).

The next step, setting up the ingress with GKE, is where the issues begin.

As per the guide, this can be done with either the legacy Ambassador API Gateway or the new AES. Comparing the two installs, nothing extra is needed beyond patching the original AES ambassador service and the ambassador-admin service from LoadBalancer to NodePort. I've done this with kustomize; the patch file is below, and the kustomization that applies it is sketched just after it.

# prod/ambassador-service-patches.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  namespace: ambassador
  annotations:
    cloud.google.com/backend-config: '{"ports": {"8080": "my-backend"}}'
spec:
  # loadBalancerIP: 35.244.139.65
  type: NodePort # needed for GKE ingress LB
  ports:
   - name: backend
     port: 8080
     targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador-admin
  namespace: ambassador
spec:
  type: NodePort # needed for GKE ingress backend health check
  ports:
   - name: backend
     port: 8877
     targetPort: 8877
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ambassador
  namespace: ambassador
spec:
  replicas: 3

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ambassador-agent
  namespace: ambassador
spec:
  replicas: 3

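For reference, the kustomization tying these patches together looks roughly like the sketch below; the ../base entry is just an assumption about where a local copy of the AES install manifests lives, so adjust it to your layout.

# prod/kustomization.yaml (sketch; ../base is an assumed directory with its own kustomization pointing at the AES manifests)
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base                          # AES install manifests (assumed layout)
  - ingress.yaml
patchesStrategicMerge:
  - ambassador-service-patches.yaml
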
The rest is all about setting up the GKE Ingress and BackendConfig. Mine look like this:

# prod/ingress.yaml
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-cluster-ssl
spec:
  domains:
    - www.mydomain.com
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend
  namespace: ambassador
spec:
  timeoutSec: 30
  connectionDraining:
    drainingTimeoutSec: 30
  healthCheck:
    checkIntervalSec: 10
    timeoutSec: 10
    healthyThreshold: 2
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /ambassador/v0/check_ready
    port: 8877
  logging:
    enable: true
    sampleRate: 1.0
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ambassador
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-global-ip"
    networking.gke.io/managed-certificates: "my-cluster-ssl"
    kubernetes.io/ingress.class: "gce"
spec:
  backend:
    serviceName: ambassador
    servicePort: 8080

This is where things start falling apart. The ambassador and ambassador-admin services and pods run fine. The Ingress creates an HTTP LB and assigns it my reserved global IP address, but the backend health checks never return an OK. I believe that, because the LB is HTTP-only and doesn't expose port 443, the ManagedCertificate also fails to provision and reports the NOT_VISIBLE error.

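For reference, the state of these resources can be inspected with plain kubectl (resource names as above; this is a sketch, not a verbatim dump from my setup):

# Ingress events and backend annotations show the health-check state GKE reports
kubectl -n ambassador describe ingress my-ingress

# ManagedCertificate status is where the NOT_VISIBLE error shows up
# (adjust -n if the certificate was created in a different namespace)
kubectl -n ambassador describe managedcertificate my-cluster-ssl

# Sanity-check that the readiness endpoint responds on the admin port
kubectl -n ambassador port-forward svc/ambassador-admin 8877:8877 &
curl -i http://localhost:8877/ambassador/v0/check_ready
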
Doing some troubleshooting, I've now added a FrontendConfig and attached it to my Ingress with the annotation networking.gke.io/v1beta1.FrontendConfig: "my-frontend" to set up an HTTP-to-HTTPS redirect, but the global static IP gets assigned to this new redirect-only LB rather than to the HTTPS LB, which instead shows "This load balancer has no frontend configured". Below is the FrontendConfig.

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend
  namespace: ambassador
spec:
  redirectToHttps:
    enabled: true

I've also played around with the kubernetes.io/ingress.allow-http: "false" annotation, with no luck (a rough sketch of the combined annotations is below). This ingress+backend+frontend config isn't that different from what I used to run without any issues, except that the old Ingress spec mapped directly to the individual services I needed.
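
To make the current state concrete, the annotation block on the Ingress now looks roughly like this (a sketch of the combination described above, not a verbatim dump):

# prod/ingress.yaml (metadata excerpt, roughly as it stands now)
metadata:
  name: my-ingress
  namespace: ambassador
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-global-ip"
    networking.gke.io/managed-certificates: "my-cluster-ssl"
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend"
    kubernetes.io/ingress.class: "gce"
    # kubernetes.io/ingress.allow-http: "false"   # tried with and without this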

At this point I've been at it for a couple of days and I'm looking for some help.


Solution

  • According to the Google documentation, if you want the load balancer to terminate SSL traffic then you need to configure it to do so (see the sketch at the end of this answer). You can use the following instructions: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls

    These are linked from Google's original documentation on creating an L7 load balancer, https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer, which is referenced in Step 1 of the Ambassador guide you linked: https://www.getambassador.io/docs/edge-stack/latest/topics/running/ambassador-with-gke/

    Unfortunately this means that you wouldn't be getting the full advantage of Ambassador's Let's Encrypt automatic certificate generation when you specify both an Ambassador Mapping and a Host. On my GKE cluster, where I installed AES from scratch, the install creates a Service of type LoadBalancer, which automatically provisions a Google load balancer (when running on GKE, that is). This Service has ports 443 and 80 configured automatically, and no additional Ingress is needed.
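
    A minimal sketch of what terminating TLS at the GKE load balancer could look like on the existing Ingress, assuming a TLS Secret created per the instructions linked above (my-tls-secret is a hypothetical name):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ambassador
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-global-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - www.mydomain.com
      secretName: my-tls-secret   # hypothetical Secret containing tls.crt / tls.key
  backend:
    serviceName: ambassador
    servicePort: 8080

    Alternatively, with the from-scratch AES install described above, kubectl -n ambassador get svc ambassador should show the ambassador Service as type LoadBalancer with ports 80 and 443 already exposed, and no Ingress is involved at all.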