Multi-Queue virtio-net Functionality in Linux

It seems that multi-queue virtio-net functionality is enabled on the Red Hat system being used, but I need to confirm whether it is actually working:

cat /usr/include/linux/virtio_net.h

struct virtio_net_config {
	/* The config defining mac address (if VIRTIO_NET_F_MAC) */
	__u8 mac[6];
	/* See VIRTIO_NET_F_STATUS and VIRTIO_NET_S_* above */
	__u16 status;
	/* Maximum number of each of transmit and receive queues;
	 * see VIRTIO_NET_F_MQ and VIRTIO_NET_CTRL_MQ.
	 * Legal values are between 1 and 0x8000 */
	__u16 max_virtqueue_pairs;
} __attribute__((packed));
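The header only shows that the kernel headers describe the multi-queue config fields; it does not tell you whether the feature was actually negotiated for your NIC. One way to check from inside the guest (a sketch; eth0 and virtio0 are assumed device names, adjust them to your system) is to read the negotiated virtio feature bits, where bit 22 is VIRTIO_NET_F_MQ:

# Which virtio device backs the NIC (eth0 is an assumption);
# the link target ends in the virtio device name, e.g. virtio0
readlink /sys/class/net/eth0/device
# One character per feature bit, bit 0 first; character 23 is bit 22 (VIRTIO_NET_F_MQ)
cut -c23 /sys/bus/virtio/devices/virtio0/features    # "1" means multi-queue was negotiated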

I want to test the scenario below:

The guest is active on many connections at the same time, with traffic running between guests, guest to host, or guest to an external system.
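One simple way to generate that kind of parallel load is to run several concurrent streams from the guest (a sketch; iperf3 must be installed on both ends, and the server address 192.168.122.1 is an assumption):

# On the host or external system acting as the target:
iperf3 -s
# In the guest: 8 parallel TCP streams for 60 seconds
iperf3 -c 192.168.122.1 -P 8 -t 60

Repeating this with the target on the host, on another guest, and on an external system covers all three traffic paths.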

Also, is there any other way to check whether multi-queue virtio-net functionality is working appropriately?
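Yes. Inside the guest you can check whether multiple queues are exposed and actually carrying traffic (a sketch; eth0 is an assumed interface name):

# Queue ("channel") counts: compare "Pre-set maximums" with "Current hardware settings"
ethtool -l eth0
# One rx-N/tx-N directory pair per queue
ls /sys/class/net/eth0/queues/
# Per-queue virtio interrupts; under load the counters should increase across several CPUs
grep virtio /proc/interrupts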

Helper and debugging scripts for the virtio-forwarder are located in /usr/lib[64]/virtio-forwarder/.

Here are pointers, from the docs, to using some of the more useful ones, with example invocations after the list:

  • virtioforwarder_stats.py: Gathers statistics (including rate stats) from running relay instances.
  • virtioforwarder_core_pinner.py: Manually pins relay instances to CPUs at runtime. Uses the same syntax as the environment file, that is, --virtio-cpu=RN:Ci,Cj. Run it without arguments to get the current relay-to-CPU mapping. Note that the mappings may be overridden by the load balancer if it is also running; the same is true for mappings provided in the configuration file.
  • virtioforwarder_monitor_load.py: Provides a bar-like representation of the current load on worker CPUs. Useful to monitor the work of the load balancer.
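For example (a sketch based only on the descriptions above; the lib64 path and option spelling may differ on your installation):

# Show the current relay-to-CPU mapping (no arguments)
/usr/lib64/virtio-forwarder/virtioforwarder_core_pinner.py
# Pin relay 0 to CPUs 2 and 3, using the environment-file syntax
/usr/lib64/virtio-forwarder/virtioforwarder_core_pinner.py --virtio-cpu=0:2,3
# Statistics (including rates) from the running relay instances
/usr/lib64/virtio-forwarder/virtioforwarder_stats.py
# Bar-like view of current worker CPU load
/usr/lib64/virtio-forwarder/virtioforwarder_monitor_load.py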

System logs can be viewed by running journalctl -u virtio-forwarder -u vio4wd_core_scheduler on systemd-enabled systems.

Syslog provides the same information on older systems.

Hope this helps....

Thanks Neo. I also wanted to mention that I pulled the libvirt XML for the guest to check the number of queues and the number of vCPUs. I read in some posts that the number of vCPUs should be made equal to the number of queues, but I did not find any mention of queues in the XML file for that guest. Below are the lines from the XML file relating to vCPUs:

<currentMemory unit='KiB'>6291456</currentMemory>
  <vcpu placement='static' current='4'>32</vcpu>

 </features>
  <cpu mode='custom' match='exact' check='partial'>
    <model fallback='allow'>Skylake-Server-IBRS</model>
    <topology sockets='16' cores='2' threads='1'/>
    <numa>
      <cell id='0' cpus='0-3' memory='6291456' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>

How can I enable multi-queue virtio-net here and make the best use of it?
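If there is no queues setting in the XML, the guest is almost certainly running with a single queue pair. In libvirt, multi-queue is requested per network interface with the driver element inside <interface> (a sketch; the interface type, bridge name and queue count are assumptions, and the queue count is normally kept at or below the number of vCPUs, so up to 4 with current='4'):

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- N queue pairs; typically N <= number of vCPUs -->
  <driver name='vhost' queues='4'/>
</interface>

After adding this with virsh edit and restarting the guest, enable the extra queues inside the guest with ethtool -L eth0 combined 4 (eth0 assumed) and confirm the change with ethtool -l eth0.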