I'm using Google's Container Engine service, and have a pod running a server that listens on port 3000. I set up the service to connect port 80 to that pod's port 3000. I can curl the service using its local and public IP from within the node, but not from outside. I set up a firewall rule to allow traffic on port 80 to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add createExternalLoadBalancer: true to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something?
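For reference, the firewall rule was created with something along these lines (the rule name and target tag below are placeholders for my actual values):

$ gcloud compute firewall-rules create allow-http --allow tcp:80 --target-tags <node-tag>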
controller.yaml
kind: ReplicationController
apiVersion: v1beta3
metadata:
  name: app-frontend
  labels:
    name: app-frontend
    app: app
    role: frontend
spec:
  replicas: 1
  selector:
    name: app-frontend
  template:
    metadata:
      labels:
        name: app-frontend
        app: app
        role: frontend
    spec:
      containers:
        - name: node-frontend
          image: gcr.io/project_id/app-frontend
          ports:
            - name: app-frontend-port
              containerPort: 3000
              targetPort: 3000
              protocol: TCP
service.yaml
kind: Service
apiVersion: v1beta3
metadata:
  name: app-frontend-service
  labels:
    name: app-frontend-service
    app: app
    role: frontend
spec:
  ports:
    - port: 80
      targetPort: app-frontend-port
      protocol: TCP
  externalIPs: # previously publicIPs
    - 123.45.67.89
  selector:
    name: app-frontend
Edit (additional details):
Creating this service adds these additional rules, found when I run iptables -L -t nat:

Chain KUBE-PORTALS-CONTAINER (1 references)
target     prot opt source     destination
REDIRECT   tcp  --  anywhere   10.247.247.206                         /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
REDIRECT   tcp  --  anywhere   89.67.45.123.bc.googleusercontent.com  /* default/app-frontend-service: */ tcp dpt:http redir ports 56859

Chain KUBE-PORTALS-HOST (1 references)
target     prot opt source     destination
DNAT       tcp  --  anywhere   10.247.247.206                         /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
DNAT       tcp  --  anywhere   89.67.45.123.bc.googleusercontent.com  /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS name 89.67.45.123.bc.googleusercontent.com resolves to 123.45.67.89.
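A quick way to confirm that mapping (this just shows the lookup I described; the name is GCE's generated reverse-DNS name for my ephemeral external IP):

$ host 89.67.45.123.bc.googleusercontent.com
89.67.45.123.bc.googleusercontent.com has address 123.45.67.89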
kubectl get services shows the IP address and port I specified:

NAME                   IP(S)            PORT(S)
app-frontend-service   10.247.243.151   80/TCP
                       123.45.67.89
Nothing recent from external IPs is showing up in /var/log/kube-proxy.log.
TL;DR: Use the Internal IP of your node as the public IP in your service definition.
If you enable verbose logging on the kube-proxy, you will see that it appears to be creating the appropriate iptables rules:
I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created
I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80
I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
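(How kube-proxy is launched varies by cluster setup; the log output above simply comes from raising its glog verbosity and restarting it, along the lines of:

$ sudo kube-proxy --master=https://<master-ip> --v=4

with <master-ip> standing in for your cluster's master address.)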
Listing the iptables entries using -L -t nat shows the public IP turned into the reverse DNS name, like you saw:
Chain KUBE-PORTALS-CONTAINER (1 references)
target     prot opt source     destination
REDIRECT   tcp  --  anywhere   10.119.240.2                            /* default/kubernetes: */ tcp dpt:https redir ports 50353
REDIRECT   tcp  --  anywhere   10.119.240.1                            /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605
REDIRECT   udp  --  anywhere   10.119.240.10                           /* default/kube-dns:dns */ udp dpt:domain redir ports 37723
REDIRECT   tcp  --  anywhere   10.119.240.10                           /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126
REDIRECT   tcp  --  anywhere   10.119.244.130                          /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
REDIRECT   tcp  --  anywhere   36.156.251.23.bc.googleusercontent.com  /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
But adding the -n option shows the IP addresses (by default, -L does a reverse lookup on each IP address, which is why you see the DNS names):
Chain KUBE-PORTALS-CONTAINER (1 references)
target     prot opt source      destination
REDIRECT   tcp  --  0.0.0.0/0   10.119.240.2     /* default/kubernetes: */ tcp dpt:443 redir ports 50353
REDIRECT   tcp  --  0.0.0.0/0   10.119.240.1     /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605
REDIRECT   udp  --  0.0.0.0/0   10.119.240.10    /* default/kube-dns:dns */ udp dpt:53 redir ports 37723
REDIRECT   tcp  --  0.0.0.0/0   10.119.240.10    /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126
REDIRECT   tcp  --  0.0.0.0/0   10.119.244.130   /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
REDIRECT   tcp  --  0.0.0.0/0   23.251.156.36    /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
At this point, you can access the service from within the cluster using both the internal and external IPs:
$ curl 10.119.244.130:80
app-frontend-5pl5s
$ curl 23.251.156.36:80
app-frontend-5pl5s
Without adding a firewall rule, attempting to connect to the public IP remotely times out. If you add a firewall rule, then you will reliably get 'connection refused':
$ curl 23.251.156.36
curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused
If you enable some iptables logging:
sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: "
and then grep the output of dmesg for WTF, it becomes clear that the packets are arriving on the 10. IP address of the VM rather than the ephemeral external IP address that had been set as the public IP on the service.
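A logged packet looks something like this (abridged, with an illustrative client address); note that DST is the node's internal 10. address, not the external IP:

$ dmesg | grep WTF
[ 1234.567890] WTF: IN=eth0 OUT= SRC=198.51.100.7 DST=10.240.121.42 LEN=60 ... PROTO=TCP SPT=54321 DPT=80 ...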
It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the destination IP intact) and 1-to-1 NAT (which actually rewrites the destination IP to the VM's internal IP). The external IP of the VM is the latter type, so when the node receives the packets, the iptables rule doesn't match.
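You can see this on the node itself: with 1-to-1 NAT, the external IP is never configured on the interface, so only the internal address shows up (output abridged; your interface details may differ):

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 ...
    inet 10.240.121.42/32 scope global eth0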
The fix is actually pretty simple (but non-intuitive): use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. 10.240.121.42), you will be able to hit your application from outside of the GCE network.
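In other words, the spec section of the service.yaml from the question becomes something like the following, using the internal IP from the logs above (this cluster still uses the publicIPs field, which was later renamed externalIPs as noted in the question):

spec:
  ports:
    - port: 80
      targetPort: app-frontend-port
      protocol: TCP
  publicIPs:
    - 10.240.121.42 # the node's Internal IP, not the ephemeral external one
  selector:
    name: app-frontend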