kubernetes-helm, k3s, containerd

NeuVector pods deployed via Helm to a local K3s cluster fail to come online with "Unknown container runtime"


I have a K3s instance running locally with containerd, and I ran the following commands to install NeuVector via Helm:

helm repo add neuvector https://neuvector.github.io/neuvector-helm/

helm upgrade --install neuvector neuvector/core --version 2.2.4 --set tag=5.0.4 --set registry=docker.io --create-namespace --namespace neuvector

My controller and enforcer pods don't come online, and I always see this in the logs:

2023-07-06T19:45:24|MON|/usr/local/bin/monitor starts, pid=1 
2023-07-06T19:45:24|MON|Start ctrl, pid=7
2023-07-06T19:45:24.035|INFO|CTL|main.main: START - version=v5.0.4
2023-07-06T19:45:24.035|INFO|CTL|main.main: - join=neuvector-svc-controller.neuvector
2023-07-06T19:45:24.035|INFO|CTL|main.main: - advertise=10.42.0.241
2023-07-06T19:45:24.035|INFO|CTL|main.main: - bind=10.42.0.241
2023-07-06T19:45:24.039|INFO|CTL|system.NewSystemTools: cgroup v2
2023-07-06T19:45:24.039|INFO|CTL|container.Connect: - endpoint=
2023-07-06T19:45:24.039|ERRO|CTL|main.main: Failed to initialize - error=Unknown container runtime
2023-07-06T19:45:24|MON|Process ctrl exit status 254, pid=7
2023-07-06T19:45:24|MON|Process ctrl exit with non-recoverable return code. Monitor Exit!!
Leave the cluster
Error leaving: Put "http://127.0.0.1:8500/v1/agent/leave": dial tcp 127.0.0.1:8500: connect: connection refused
2023-07-06T19:45:24|MON|Clean up.
Stream closed EOF for neuvector/neuvector-controller-pod-54c7bdc784-rq9np (neuvector-controller-pod)

I can tell containerd is up and running:

vboxuser@ZGBigBang202307060937:~/bigbang$ sudo systemctl status containerd

● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-07-06 19:21:24 UTC; 26min ago
       Docs: https://containerd.io
   Main PID: 820 (containerd)
      Tasks: 38
     Memory: 119.5M
        CPU: 1.585s
     CGroup: /system.slice/containerd.service
             ├─ 820 /usr/bin/containerd
             ├─1412 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2b8d47611cba5591063f956527c9e05197c4388347871ccb40aca4205ff8bbc1 -address /run/containerd/containerd.sock
             └─1413 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 96c5cc5981783c57e7ee5080e381461e3e672214ed419e358b9a1bc98f4c5a72 -address /run/containerd/containerd.sock

Jul 06 19:21:24 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:24.560562567Z" level=info msg="Start cni network conf syncer for default"
Jul 06 19:21:24 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:24.560571867Z" level=info msg="Start streaming server"
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262488075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262562875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262582575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262722677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262779877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262791378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262865978Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/2b8d47611cba5591063f956527c9e05197c4388347871ccb40aca4205ff8bbc1 pid=1412 runtime=io.containerd.runc.v2
Jul 06 19:21:28 ZGBigBang202307060937 containerd[820]: time="2023-07-06T19:21:28.262964779Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/96c5cc5981783c57e7ee5080e381461e3e672214ed419e358b9a1bc98f4c5a72 pid=1413 runtime=io.containerd.runc.v2

So I'm not sure how to make NeuVector aware that it should be using containerd 🤷

I've tried a lot of things, including enabling containerd's cri plugin and explicitly configuring the cri plugin's image and runtime sockets. Nothing I do seems to make NeuVector detect that containerd is running.
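For what it's worth, the systemd containerd in the output above appears to be the one used by Docker (note the moby namespace in the shim processes), not the one K3s uses: K3s ships its own embedded containerd that listens on a separate socket. Assuming default K3s paths, you can check for it with:

    ls -l /run/k3s/containerd/containerd.sock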


Solution

  • For the Helm install to work with K3s, you need to append the following to the end of the command:

    --set k3s.enabled=true
    

    Additionally, if you are using Rancher Desktop, make sure its container runtime is set to containerd.
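    Putting it together, the install command from the question would become (same chart version and image tag as above):

    helm upgrade --install neuvector neuvector/core --version 2.2.4 \
      --set tag=5.0.4 \
      --set registry=docker.io \
      --set k3s.enabled=true \
      --create-namespace --namespace neuvector

    As I understand the chart, k3s.enabled=true points NeuVector at K3s's embedded containerd socket (by default /run/k3s/containerd/containerd.sock) instead of the standard runtime socket paths, which is why the controller otherwise reports "Unknown container runtime" even though a containerd service is running.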