I installed MicroK8s on two Ubuntu 20.04 systems. One is assigned as the control plane, and I joined the second system to the cluster as a worker node.
I am trying to spin up a couple of syslog-ng pods to consume syslog traffic on UDP/514. To see the load balancing in action, I am generating synthetic syslog traffic from an external system. However, I only see one of the pods consuming data at any given time. Here is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syslogng-server
  labels:
    app: syslogng-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: syslogng-server
  template:
    metadata:
      labels:
        app: syslogng-server
    spec:
      containers:
        - name: syslogng-server
          image: 'balabit/syslog-ng:3.25.1'
          resources:
            requests:
              cpu: 10m
          ports:
            - containerPort: 514
              name: udp
              protocol: UDP
          volumeMounts:
            - name: config-volume
              mountPath: /etc/syslog-ng/conf.d
      volumes:
        - name: config-volume
          configMap:
            name: syslogng-d
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: syslogng-d
  labels:
    app: syslogng-server
data:
  syslogng-test.conf: |
    ##################################################
    options {
      create_dirs(yes);
      owner(root);
      group(root);
      perm(0640);
      dir_owner(root);
      dir_group(root);
      dir_perm(0750);
    };
    ##################################################
    source s_net {
      tcp(ip(0.0.0.0) port(514));
      udp(ip(0.0.0.0) port(514));
    };
    ##################################################
    destination d_host-specific {
      file("/var/log/firewalls.log");
    };
    log {
      source(s_net);
      destination(d_host-specific);
    };
---
apiVersion: v1
kind: Service
metadata:
  name: syslogng-server-service
  labels:
    app: syslogng-server-service
spec:
  selector:
    app: syslogng-server
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
    - name: udp
      protocol: UDP
      port: 514
      targetPort: 514
I was hoping to see both pods actively consuming the syslog data, but that is not the case. Am I doing something wrong? Thank you for the assistance.
With MetalLB in layer 2 mode, one node in the cluster takes "ownership" of the service: all traffic for the LoadBalancer IP goes to that node, which then hands it over to kube-proxy to get the traffic to the Service's Pods.
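For reference, a layer 2 address pool in MetalLB releases of that era was configured through a ConfigMap roughly like the sketch below; the address range is a placeholder for your own pool, and newer MetalLB releases configure this with IPAddressPool and L2Advertisement resources instead. One node is elected to answer ARP for each address, and it receives all of the traffic:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # placeholder range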
It wouldn't be too surprising for traffic not to be balanced equally, especially if there is only one source. There could be a number of reasons for that, such as kube-proxy preferring to route traffic to a Pod on the same node, or some form of "source IP affinity". In particular, kube-proxy tracks UDP flows with conntrack, so a single sender reusing one source IP and port is typically pinned to the same Pod until the conntrack entry expires.
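If keeping the real client source IP matters (it often does for syslog, since the source IP identifies the sending device), a common variation on the Service from the question is externalTrafficPolicy: Local, sketched below. Note that this does not improve balancing; it skips the second kube-proxy hop, so only Pods running on the node that currently owns the MetalLB address receive traffic:

apiVersion: v1
kind: Service
metadata:
  name: syslogng-server-service
spec:
  selector:
    app: syslogng-server
  type: LoadBalancer
  # Local preserves the sender's source IP and avoids an extra hop,
  # but only Pods on the announcing node will see any traffic.
  externalTrafficPolicy: Local
  ports:
    - name: udp
      protocol: UDP
      port: 514
      targetPort: 514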
TL;DR yes! MetalLB isn't the best at load balancing in Layer 2 mode.
There's more information in the MetalLB Docs.
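For completeness: if your network has a BGP-capable router, MetalLB's BGP mode announces the service IP from every node, letting the router spread flows across nodes with ECMP. A minimal sketch in the same legacy ConfigMap format, with placeholder router address and ASNs:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
      - peer-address: 10.0.0.1   # placeholder: your router's address
        peer-asn: 64501          # placeholder: the router's ASN
        my-asn: 64500            # placeholder: the cluster's ASN
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 192.168.1.240/28     # placeholder range

Even then, routers usually hash per flow, so a single UDP sender with a fixed source port may still land on one Pod.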