Recently I installed Kubernetes with kubeadm on my old dual-core AMD machine, using Ubuntu Bionic and LXC.
This is my LXC profile, which I found on the web:
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: nf_conntrack_ipv4,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: LXD profile for Kubernetes
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by:
- /1.0/containers/kmaster1
- /1.0/containers/kworker1
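In case it helps, this is roughly how a profile like the one above gets applied; the filename and the Ubuntu 18.04 image are assumptions on my part, and the container names match the `used_by` entries:

```shell
# Create the profile and load the YAML above into it
# (assumes the YAML is saved as k8s-profile.yaml)
lxc profile create k8s
lxc profile edit k8s < k8s-profile.yaml

# Launch the two nodes with that profile
lxc launch ubuntu:18.04 kmaster1 -p k8s
lxc launch ubuntu:18.04 kworker1 -p k8s
```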
Then I ran:
# mount --make-rshared /
# lxc config device add "kmaster1" "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
# lxc config device add "kworker1" "kmsg" unix-char source="/dev/kmsg" path="/dev/kmsg"
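A quick sketch of how to verify those prerequisites took effect (the module names come from the profile above):

```shell
# The root mount must be shared for kubelet to work inside LXC
findmnt -o TARGET,PROPAGATION /    # PROPAGATION should read "shared"

# The kernel modules listed in the profile must be loaded on the host
lsmod | grep -E 'br_netfilter|overlay|nf_conntrack'

# /dev/kmsg should now exist inside the containers
lxc exec kmaster1 -- ls -l /dev/kmsg
```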
kube-proxy couldn't run because of this error message:

write /sys/module/nf_conntrack/parameters/hashsize: operation not supported

So I worked around it by editing the kube-proxy config and setting the connection-tracking values to zero.
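For reference, the field in question lives in the kube-proxy ConfigMap; setting `conntrack.maxPerCore` to zero makes kube-proxy leave the host's nf_conntrack sysctls alone. A sketch of the relevant fragment:

```shell
# Edit the kube-proxy ConfigMap and set maxPerCore to 0 in config.conf:
kubectl -n kube-system edit cm kube-proxy
#   conntrack:
#     maxPerCore: 0    # 0 = kube-proxy won't try to write the conntrack sysctls

# Restart kube-proxy so it picks up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```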
After that I deployed Flannel as the CNI.
Now my pods can't reach the default cluster IP 10.96.0.1, but the service is there:
# kubectl describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Families:       <none>
IP:                10.96.0.1
IPs:               10.96.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         10.46.157.182:6443
Session Affinity:  None
Events:            <none>
Maybe disabling connection tracking causes the issue, since NAT is in use; I don't know.
Any idea how to fix it?
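For anyone hitting the same symptom, this is roughly how the failure can be reproduced from inside the cluster (the pod name and busybox image are arbitrary choices):

```shell
# TCP connectivity check against the apiserver ClusterIP from inside a pod
kubectl run nettest --rm -it --restart=Never --image=busybox -- \
  nc -zv -w 2 10.96.0.1 443
```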
I fixed it in two steps:
First, I reverted the kube-proxy config to its defaults.
Then I manually wrote the hash size kube-proxy needs to the HOST's /sys/module/nf_conntrack/parameters/hashsize. After that I deleted the kube-proxy pods and let their DaemonSet recreate them automatically.
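The host-side step looks roughly like this; the value 131072 is only an example, since kube-proxy logs the hashsize it actually wants:

```shell
# On the LXD HOST (not inside a container), as root.
echo 131072 > /sys/module/nf_conntrack/parameters/hashsize
cat /sys/module/nf_conntrack/parameters/hashsize   # confirm it stuck

# Recreate the kube-proxy pods so they start with the default conntrack config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```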
The main problem was the Flannel configuration. Flannel doesn't look up my cluster's pod CIDR; it defaults its range to 10.244.0.0/16.
So I edited its configuration and changed that to my range:
kubectl edit cm -n kube-system kube-flannel-cfg
And again, I deleted all the Flannel pods and let their DaemonSet recreate them automatically.
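Concretely, the change goes in the net-conf.json key of that ConfigMap; the CIDR below is a placeholder, and it has to match the --pod-network-cidr given to kubeadm init:

```shell
kubectl -n kube-system edit cm kube-flannel-cfg
# In net-conf.json, set "Network" to your cluster's pod CIDR, e.g.:
#   {
#     "Network": "10.46.0.0/16",
#     "Backend": { "Type": "vxlan" }
#   }

# Recreate the Flannel pods so they pick up the new range
kubectl -n kube-system delete pod -l app=flannel
```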
Now my pods can reach the default kubernetes ClusterIP service.