I have a bare-metal Kubernetes cluster (Raspberry Pi cluster). It's got a simple hello-world web page split up across the nodes, and it's working fine. I've since created a service to expose it on the public side of things, but the site won't render. It seems the layer 2 announcement for the service IP is never published.
My ConfigMap is pretty simple:
metadata:
  name: metallb
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.16-192.168.1.31
kind: ConfigMap
and my service looks fine (kubectl get svc output):
NAME              TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
testapp-service   LoadBalancer   10.107.213.151   192.168.1.16   8081:30470/TCP   7m40s
From the master node, I can curl 192.168.1.16:8081 and get back the data I'd expect. However, from any other machine on the 192.168.1.0 network, I can't get it to render at all.
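In case it helps narrow this down, the checks I know of for layer 2 announcement look roughly like this (wlan0 stands in for whichever interface is on the 192.168.1.0 network, and the label selector assumes the stock MetalLB manifests):

    # From one of the other machines: does 192.168.1.16 resolve to a MAC at all?
    ip neigh show 192.168.1.16

    # Actively probe for it (iputils-arping package on Ubuntu)
    sudo arping -I wlan0 -c 4 192.168.1.16

    # From the cluster: the speaker logs should show announcement activity
    kubectl -n metallb-system logs -l app=metallb,component=speaker --tail=50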
I know the public addresses aren't overlapping with anything: the 192.168.1.16-192.168.1.31 range is excluded from my DHCP server, so nothing else lives in that range.
So what does it take to get my master-001 node to announce that it is handling traffic for 192.168.1.16? (Its own address is .250, and that one does announce, but that one isn't the service IP.)
I'm using Ubuntu 20.04 on Raspberry Pi 4s. The 192.168.1.x addresses are on the Wi-Fi side of things; the 10.x addresses are on the wired side.
Thanks, Nick
In layer 2 mode, the address range you give to MetalLB and the node IPs must be in the same subnet. What are the IP addresses of the nodes?
A packet destined for the service IP (192.168.1.16) must first reach the layer 2 domain of the cluster nodes before it can be forwarded to the node handling that IP, which means the node IPs must also be in the 192.168.1.0 network.
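You can check with the wide node listing; for layer 2 mode to work here, the nodes need an address in the 192.168.1.0 network, not just the 10.x wired one:

    kubectl get nodes -o wide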
If only the master node is connected to the public network, try adding a nodeAffinity to the speaker DaemonSet so that speaker pods are created only on those nodes, as in the sketch below.
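A minimal sketch of that, assuming you label the publicly connected node(s) yourself. The metallb-speaker=true label is an example, not something MetalLB defines, and the DaemonSet name speaker matches the standard MetalLB manifests:

    # Mark the node(s) that actually sit on the 192.168.1.0 network
    kubectl label node master-001 metallb-speaker=true

    # Constrain the speaker pods to those nodes
    kubectl -n metallb-system patch daemonset speaker --patch '
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: metallb-speaker
                    operator: In
                    values: ["true"]
    '

A plain nodeSelector on the pod template would do the same job with less YAML.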