Tags: kubernetes, google-cloud-platform, google-kubernetes-engine, google-cloud-vpn

Google Kubernetes Engine & VPN


I am using Google Kubernetes Engine to deploy some applications that need to connect to an on-premises DB. To do that, I have configured a VPN tunnel and created a VPC.

Then I created a GKE cluster (1 node) that uses that VPC, and I can confirm that the DB is reachable by connecting to the node and pinging the DB server:

~ $ sudo toolbox ping 10.197.100.201
Spawning container root-gcr.io_google-containers_toolbox-20180309-00 on 
/var/lib/toolbox/root-gcr.io_google-containers_toolbox-20180309-00.
Press ^] three times within 1s to kill container.
PING 10.197.100.201 (10.197.100.201): 56 data bytes 
64 bytes from 10.197.100.201: icmp_seq=0 ttl=62 time=45.967 ms
64 bytes from 10.197.100.201: icmp_seq=1 ttl=62 time=44.186 ms

However, if I try to do the same from a Pod, I am not able to connect.

root@one-shot-pod:/# traceroute 10.197.100.201
traceroute to 10.197.100.201 (10.197.100.201), 30 hops max, 60 byte 
packets
 1  10.0.0.1 (10.0.0.1)  0.046 ms  0.009 ms  0.007 ms
 2  * * *
 3  * * *

What am I missing?


Solution

  • After some investigation, I found the root cause of the problem. The communication wasn't working because of something called IP masquerading (https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent), which GKE uses for NAT translation.

    GKE configures some default address ranges that are *not* masqueraded (on the version I was using, the defaults were 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16). The destination IP, 10.197.100.201, falls inside 10.0.0.0/8 but is outside the cluster, so packets left the node with the Pod IP as the source address, and the on-premises network had no route back to the Pod range. The solution was to modify nonMasqueradeCIDRs: remove 10.0.0.0/8 and use 10.44.0.0/14 (the GKE cluster CIDR) instead.

    In order to do that, I used the following configmap:

    apiVersion: v1
    data:
      config: |-
        nonMasqueradeCIDRs:
          - 10.44.0.0/14
          - 172.16.0.0/12
          - 192.168.0.0/16
        resyncInterval: 60s
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    

    After that, to apply the config, upload the ConfigMap. Since the snippet above is a complete manifest, it can be applied directly:

    kubectl apply -f <configmap file>

    (If you instead put only the data block, nonMasqueradeCIDRs and resyncInterval, in a file named config, you can use kubectl create configmap ip-masq-agent --from-file config --namespace kube-system, which is the approach the ip-masq-agent docs describe.)
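The masquerade logic above can be sanity-checked offline with a short script (a sketch using Python's standard ipaddress module; the addresses are the ones from this setup):

```python
import ipaddress

db_ip = ipaddress.ip_address("10.197.100.201")          # on-premises DB, reached over VPN
cluster_cidr = ipaddress.ip_network("10.44.0.0/14")     # GKE cluster CIDR

# GKE defaults on the version used in this question
default_non_masq = [ipaddress.ip_network(c) for c in
                    ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
# Corrected list: 10.0.0.0/8 replaced by the cluster CIDR
fixed_non_masq = [cluster_cidr,
                  ipaddress.ip_network("172.16.0.0/12"),
                  ipaddress.ip_network("192.168.0.0/16")]

def masqueraded(ip, non_masq):
    # ip-masq-agent SNATs outbound traffic unless the destination
    # matches one of the nonMasqueradeCIDRs
    return not any(ip in net for net in non_masq)

print(masqueraded(db_ip, default_non_masq))  # False: packets keep the Pod source IP
print(masqueraded(db_ip, fixed_non_masq))    # True: packets are SNATed to the node IP
```

With the default list, the DB address matches 10.0.0.0/8, so traffic to it keeps the Pod source IP and the replies never come back; with the corrected list it no longer matches anything and gets SNATed to the node IP, which the on-premises side can route to.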