Tags: kubernetes, cni, macvlan

Kubernetes with Multus: macvlan selecting the wrong interface (eth0)


I am working through some Kubernetes training and have the following setup: four VMs running Ubuntu 20.04 (one master and three nodes) with the Calico CNI. I managed to deploy some nginx pods and got connectivity working as expected.

I am trying to use Multus to add a macvlan network and have been following the instructions here: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md
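
For reference, I installed Multus using the thick-plugin daemonset from the quickstart:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml

and created the NetworkAttachmentDefinition with kubectl apply, setting master to enp1s0 to match my hosts (the full config is in the describe output below).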

The output of ip addr on the master shows the following (the nodes show just the first three interfaces):

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:de:3a:e5 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.230/24 brd 192.168.1.255 scope global enp1s0
   valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fede:3ae5/64 scope link 
   valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.16.255.128/32 scope global tunl0
   valid_lft forever preferred_lft forever
6: calif1302e6e8bf@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-93b9a0fa-78e3-34fa-f5e4-75b8c8b9f760
inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
   valid_lft forever preferred_lft forever
7: cali8475067f6cf@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-255cc6e9-b83e-ed27-8487-9b957f83520d
inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
   valid_lft forever preferred_lft forever
8: cali2b9e0768962@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default 
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-0826376f-5aea-ae7e-f10f-ae5aa6d0363a
inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
   valid_lft forever preferred_lft forever

The output of kubectl describe network-attachment-definitions macvlan-conf:

Name:         macvlan-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  k8s.cni.cncf.io/v1
Kind:         NetworkAttachmentDefinition
Metadata:
  Creation Timestamp:  2022-10-21T00:35:01Z
  Generation:          1
  Managed Fields:
    API Version:  k8s.cni.cncf.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:config:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2022-10-21T00:35:01Z
  Resource Version:  491066
  UID:               a3d7f621-4ded-4987-ac65-250904528414
Spec:
  Config:  { "cniVersion": "0.3.0", "type": "macvlan", "master": "enp1s0", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "192.168.100.0/24", "rangeStart": "192.168.100.200", "rangeEnd": "192.168.100.216", "routes": [ { "dst": "0.0.0.0/0" } ], "gateway": "192.168.1.254" } }
Events:    <none>

And the output of kubectl describe daemonsets.apps -n kube-system kube-multus-ds:

Name:           kube-multus-ds
Selector:       name=multus
Node-Selector:  <none>
Labels:         app=multus
                name=multus
                tier=node
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 4
Current Number of Nodes Scheduled: 4
Number of Nodes Scheduled with Up-to-date Pods: 4
Number of Nodes Scheduled with Available Pods: 4
Number of Nodes Misscheduled: 0
Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=multus
                    name=multus
                    tier=node
  Service Account:  multus
  Init Containers:
   install-multus-binary:
    Image:      ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-thick
    Port:       <none>
    Host Port:  <none>
    Command:
      cp
      /usr/src/multus-cni/bin/multus-shim
      /host/opt/cni/bin/multus-shim
    Requests:
      cpu:        10m
      memory:     15Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cnibin (rw)
  Containers:
   kube-multus:
    Image:      ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-thick
    Port:       <none>
    Host Port:  <none>
    Command:
      /usr/src/multus-cni/bin/multus-daemon
    Args:
      -cni-version=0.3.1
      -cni-config-dir=/host/etc/cni/net.d
      -multus-autoconfig-dir=/host/etc/cni/net.d
      -multus-log-to-stderr=true
      -multus-log-level=verbose
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /etc/cni/net.d/multus.d from multus-daemon-config (ro)
      /host/etc/cni/net.d from cni (rw)
      /host/run from host-run (rw)
      /hostroot from hostroot (rw)
      /run/k8s.cni.cncf.io from host-run-k8s-cni-cncf-io (rw)
      /run/netns from host-run-netns (rw)
      /var/lib/cni/multus from host-var-lib-cni-multus (rw)
      /var/lib/kubelet from host-var-lib-kubelet (rw)
  Volumes:
   cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
   cnibin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
   hostroot:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
   multus-daemon-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multus-daemon-config
    Optional:  false
   host-run:
    Type:          HostPath (bare host directory volume)
    Path:          /run
    HostPathType:  
   host-var-lib-cni-multus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/cni/multus
    HostPathType:  
   host-var-lib-kubelet:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet
    HostPathType:  
   host-run-k8s-cni-cncf-io:
    Type:          HostPath (bare host directory volume)
    Path:          /run/k8s.cni.cncf.io
    HostPathType:  
   host-run-netns:
    Type:          HostPath (bare host directory volume)
    Path:          /run/netns/
    HostPathType:  
Events:            <none>

When I create the samplepod from the instructions, it remains stuck in ContainerCreating:

default   samplepod   0/1   ContainerCreating   0   40m
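
The samplepod manifest from the quickstart, for reference (it matches the describe output further down):

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine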

A describe shows the following:

  Normal   AddedInterface          31m                  multus   Add eth0 [172.16.169.153/32] from k8s-pod-network
  Normal   AddedInterface          78s (x269 over 40m)  multus   (combined from similar events): Add eth0 [172.16.169.177/32] from k8s-pod-network
  Warning  FailedCreatePodSandBox  73s (x278 over 40m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0eb93b24a13b6330323d061f818e15b0086707d1a853a9b4df823a52e31ab059": CNI request failed with status 400: '&{ContainerID:0eb93b24a13b6330323d061f818e15b0086707d1a853a9b4df823a52e31ab059 Netns:/var/run/netns/cni-308cd3e6-f3e7-3eaf-0f25-b90dd50c3b08 IfName:eth0 Args:IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=samplepod;K8S_POD_INFRA_CONTAINER_ID=0eb93b24a13b6330323d061f818e15b0086707d1a853a9b4df823a52e31ab059;K8S_POD_UID=53c586ce-0fd1-4991-bbff-188bb534d728 Path: StdinData:[123 34

There are references to eth0 in this log, but I have not specified eth0 in any of this config.

Am I missing something (quite likely)?

Here is the samplepod describe output from before the error, showing the config:

Name:             samplepod
Namespace:        default
Priority:         0
Service Account:  default
Node:             kube-node-2/192.168.1.232
Start Time:       Fri, 21 Oct 2022 13:40:12 +1300
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: cfe4778b5963e7d28365b6012ed0297a0d3c0dc9b0609c0f65a8d97f32ec7f41
                  cni.projectcalico.org/podIP: 
                  cni.projectcalico.org/podIPs: 
                  k8s.v1.cni.cncf.io/networks: macvlan-conf
Status:           Pending
IP:               
IPs:              <none>
Containers:
  samplepod:
    Container ID:  
    Image:         alpine
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/ash
      -c
      trap : TERM INT; sleep infinity & wait
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htggc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-htggc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Solution

  • Try the config from the official multus-cni repo - https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/examples/macvlan-pod.yml

    Specifically focusing on the plugins part, i.e.:

    spec:
      config: |
        {
          "name": "macvlan-conf",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "cniVersion": "0.3.1",
              "type": "macvlan",
              "master": "enp1s0",
              "mode": "bridge",
              "ipam": {
                "type": "host-local",
                "subnet": "192.168.100.0/24",
                "rangeStart": "192.168.100.200",
                "rangeEnd": "192.168.100.216", 
                "routes": [
                  {
                    "dst": "0.0.0.0/0",
                    "gw": "192.168.100.254"
                  }
                ]
              }
            }
          ]
        }
    

    The above is not a tested example BTW.

    After making the change, do all the normal jazz, such as restarting the multus daemonset and deleting your pod.
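
    For example (an untested sketch; the file names are placeholders for wherever you saved the manifests):

      kubectl apply -f macvlan-conf.yaml
      kubectl -n kube-system rollout restart daemonset kube-multus-ds
      kubectl delete pod samplepod
      kubectl apply -f samplepod.yaml

    Once the pod is Running, kubectl exec samplepod -- ip addr should show the macvlan attachment as net1 with an address from the 192.168.100.200-216 range, alongside the Calico-managed eth0 (Multus always names the cluster-default network eth0 inside the pod, which is why eth0 shows up in the logs above).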