Tags: openstack, tcpdump, openstack-neutron, openvswitch

OpenStack traffic flow is not showing on patch-tun or patch-int


I'm analyzing OpenStack traffic flow between instances to understand how the traffic travels from one point to another. I have a provider network with an Open vSwitch configuration. My scenario is depicted in the diagram below: I ping between two machines on the same network but on different compute nodes.

[Diagram: two instances on the same network, hosted on different compute nodes]

Using tcpdump, I can currently see the traffic flowing through the tap and qvb interfaces on the qbr Linux bridge:

root@compute3:/home/mw# tcpdump -i tape0109961-57 -p icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tape0109961-57, link-type EN10MB (Ethernet), capture size 262144 bytes
10:01:20.647598 IP 192.168.200.184 > 192.168.200.211: ICMP echo request, id 6, seq 5860, length 64
10:01:20.647874 IP 192.168.200.211 > 192.168.200.184: ICMP echo reply, id 6, seq 5860, length 64
root@compute3:/home/mw# tcpdump -i qvbe0109961-57 -p icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qvbe0109961-57, link-type EN10MB (Ethernet), capture size 262144 bytes
10:01:32.663739 IP 192.168.200.184 > 192.168.200.211: ICMP echo request, id 6, seq 5872, length 64
10:01:32.664067 IP 192.168.200.211 > 192.168.200.184: ICMP echo reply, id 6, seq 5872, length 64
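
For completeness, the membership of the qbr bridge can be checked with iproute2. A minimal sketch, assuming the bridge follows Neutron's qbr-<port-id> naming convention (the bridge name below is inferred from that convention, not taken from a capture):

# List interfaces enslaved to the qbr Linux bridge; the tap device and the
# qvb end of the veth pair should both appear here
ip -br link show master qbre0109961-57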

I can also see the traffic arriving at the qvo interface on the Open vSwitch integration bridge, using ovs-tcpdump:

root@compute3:/home/mw# ovs-tcpdump -i qvoe0109961-57 -p icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovsmi969443, link-type EN10MB (Ethernet), capture size 262144 bytes
10:04:56.929662 IP 192.168.200.184 > 192.168.200.211: ICMP echo request, id 6, seq 6076, length 64
10:04:56.929849 IP 192.168.200.211 > 192.168.200.184: ICMP echo reply, id 6, seq 6076, length 64
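
To double-check where the qvo end of the veth pair is attached, ovs-vsctl can report the owning bridge. A small sketch, assuming the integration bridge has the default name br-int:

# Show which OVS bridge owns the qvo port (expected: br-int)
ovs-vsctl port-to-br qvoe0109961-57
# List all ports on the integration bridge; patch-tun should be among them
ovs-vsctl list-ports br-int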

But I can't see any traffic on the patch-int or patch-tun interfaces. However, when I capture traffic on the vxlan interface, the traffic is flowing there:

root@compute3:/home/mw# ovs-tcpdump -i vxlan-ac100186 -p icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovsmi181239, link-type EN10MB (Ethernet), capture size 262144 bytes
10:06:50.080366 IP 192.168.200.184 > 192.168.200.211: ICMP echo request, id 6, seq 6189, length 64
10:06:50.080620 IP 192.168.200.211 > 192.168.200.184: ICMP echo reply, id 6, seq 6189, length 64
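
As a side note, the Neutron Open vSwitch agent names tunnel ports after the remote endpoint's IP address in hex, so vxlan-ac100186 should correspond to the peer 172.16.1.134. The tunnel configuration can be confirmed like this (a sketch; output format varies by OVS version):

# Show the tunnel options (local_ip, remote_ip) of the VXLAN port
ovs-vsctl --columns=name,options list interface vxlan-ac100186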

So, am I missing something? How does the traffic reach the vxlan interface and go through the overlay network? I captured traffic on both compute nodes, and the result is the same. If anyone knows why this is happening, I would appreciate the help.


Solution

  • I asked about this on the ovs-discuss mailing list, where I received this reply:

    I had a similar question recently; the answer I got is the following:

    "A patch port is not the same as a veth pair. It is not used to "send" and "receive" traffic. When we process an upcall and have to determine what actions to perform on a packet, a patch_port is is basically an equivalent to "continue processing openflow flows on this other bridge"

    The following blog post goes into some detail on the topic: https://arthurchiao.art/blog/ovs-deep-dive-4-patch-port/

    The referenced article discusses why OpenStack uses patch ports rather than veth pairs, and the trade-offs involved:

    According to some materials, the reason for switching from Linux veth pairs to OVS patch ports was performance. Apart from this, at least for OpenStack, patch ports bring another great benefit: instance (VM) traffic does not go down during an OVS neutron agent restart - this is what graceful OVS agent restart achieves in newer OpenStack releases.

    However, patch ports also have a disadvantage: you can no longer capture packets on them with tools such as tcpdump, as you have been doing on the Linux veth pair ports (see the sketch below for alternatives).
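
    Even though tcpdump cannot attach to a patch port, it is still possible to confirm that traffic traverses it. A minimal sketch of two checks, reusing the bridge, port, and address names from the question (exact output varies by OVS version):

    # Flow statistics on br-tun still count traffic that entered via patch-int;
    # the n_packets counters grow while the ping is running
    ovs-ofctl dump-flows br-tun

    # Trace a synthetic ICMP packet through the OpenFlow pipeline; the trace
    # output shows processing continuing on br-tun once the packet hits the
    # patch port on br-int
    ovs-appctl ofproto/trace br-int in_port=qvoe0109961-57,icmp,nw_src=192.168.200.184,nw_dst=192.168.200.211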