- Nov 21, 2023
-
-
David Woodhouse authored
In net_cleanup() we only need to delete the netdevs, as those may have state which outlives Qemu when it exits, and thus may actually need to be cleaned up on exit. The nics, on the other hand, are owned by the device which created them. Most devices don't bother to clean up on exit because they don't have any state which will outlive Qemu... but XenBus devices do need to clean up their nodes in XenStore, and do have an exit handler to delete them. When the XenBus exit handler destroys the xen-net-device, it attempts to delete its nic after net_cleanup() had already done so. And crashes. Fix this by only deleting netdevs as we walk the list. As the comment notes, we can't use QTAILQ_FOREACH_SAFE() as each deletion may remove *multiple* entries, including the "safely" saved 'next' pointer. But we can store the *previous* entry, since nics are safe.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
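
The pattern described — resuming iteration from the last NIC passed, rather than from a saved 'next' pointer — can be sketched in a minimal, self-contained form. The types and helpers below stand in for QEMU's NetClientState and QTAILQ machinery; this is not the actual patch:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for QEMU's NetClientState / QTAILQ list. */
typedef struct Client Client;
struct Client {
    const char *name;
    bool is_nic;        /* NICs are owned by their device; skip them */
    Client *next;
};

static Client *clients_head;

/* Deleting one netdev may remove *multiple* entries (e.g. a hub and
 * its ports), which is why a saved 'next' pointer is not safe. */
static void delete_netdev(Client *c)
{
    Client **p = &clients_head;
    while (*p && *p != c) {
        p = &(*p)->next;
    }
    *p = c->next;
    printf("deleted %s\n", c->name);
    free(c);
}

static void net_cleanup_sketch(void)
{
    Client *prev = NULL;

    /* Resume each scan from the last NIC we passed: NICs are never
     * removed by this loop, so 'prev' always stays valid. */
    while (prev ? prev->next : clients_head) {
        Client *cur = prev ? prev->next : clients_head;
        if (cur->is_nic) {
            prev = cur;          /* leave NICs to their device's exit handler */
        } else {
            delete_netdev(cur);
        }
    }
}

static void add(const char *name, bool is_nic)
{
    Client *c = calloc(1, sizeof(*c));
    c->name = name;
    c->is_nic = is_nic;
    c->next = clients_head;
    clients_head = c;
}

int main(void)
{
    add("netdev0", false);
    add("nic0", true);     /* survives cleanup; owned by its device */
    add("netdev1", false);
    net_cleanup_sketch();
    return 0;
}
```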
-
Akihiko Odaki authored
Recently MemReentrancyGuard was added to DeviceState to record that the device is engaging in I/O. The network device backend needs to update it when delivering a packet to a device. This implementation follows what the bottom half code does, but it does not add a tracepoint for the case where the network device backend starts delivering a packet to a device which is already engaging in I/O. This is because such reentrancy frequently happens for qemu_flush_queued_packets() and is insignificant. Fixes: CVE-2023-3019
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Acked-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Jason Wang <jasowang@redhat.com>
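
In sketch form, the guarded delivery path looks roughly like this (the guard type and helper are stand-ins, not the actual QEMU functions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Hypothetical stand-in for the guard embedded in QEMU's DeviceState. */
typedef struct {
    bool engaged_in_io;
} MemReentrancyGuard;

/* Deliver one packet while the owning device's guard is held, the way
 * QEMU's bottom-half code holds it around callbacks.  Per the commit
 * message, no tracepoint is emitted on reentrancy here, since
 * reentrancy via qemu_flush_queued_packets() is routine. */
ssize_t deliver_guarded(MemReentrancyGuard *guard,
                        ssize_t (*deliver)(void *opaque,
                                           const uint8_t *buf, size_t len),
                        void *opaque, const uint8_t *buf, size_t len)
{
    bool was_engaged = guard->engaged_in_io;
    ssize_t ret;

    guard->engaged_in_io = true;        /* device is now engaging in I/O */
    ret = deliver(opaque, buf, len);
    guard->engaged_in_io = was_engaged; /* restore, in case of nesting */
    return ret;
}
```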
-
Akihiko Odaki authored
Recently MemReentrancyGuard was added to DeviceState to record that the device is engaging in I/O. The network device backend needs to update it when delivering a packet to a device. In preparation for such a change, add MemReentrancyGuard * as a parameter of qemu_new_nic().
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
Signed-off-by: Jason Wang <jasowang@redhat.com>
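
Reconstructed from the commit message, the new prototype plausibly looks like this (the parameter's position is an assumption; callers hand in the guard of the device that owns the NIC):

```c
typedef struct NICState NICState;
typedef struct NICConf NICConf;
typedef struct NetClientInfo NetClientInfo;
typedef struct MemReentrancyGuard MemReentrancyGuard;

NICState *qemu_new_nic(NetClientInfo *info,
                       NICConf *conf,
                       const char *model,
                       const char *name,
                       MemReentrancyGuard *reentrancy_guard, /* new */
                       void *opaque);
```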
-
- Nov 17, 2023
-
-
Markus Armbruster authored
The error message

  $ qemu-system-x86_64 -netdev user,id=net0,ipv6-net=fec0::0/
  qemu-system-x86_64: -netdev user,id=net0,ipv6-net=fec0::0/: Parameter 'ipv6-prefixlen' expects a number

points to ipv6-prefixlen instead of ipv6-net. Fix:

  qemu-system-x86_64: -netdev user,id=net0,ipv6-net=fec0::0/: parameter 'ipv6-net' expects a number after '/'

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20231031111059.3407803-6-armbru@redhat.com>
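
A minimal sketch of the kind of parsing that produces the corrected message, blaming the user-visible 'ipv6-net' parameter instead of the internal prefixlen key. The helper name, structure, and the default prefix length of 64 are assumptions, not the actual patch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int parse_ipv6_net(const char *value, char *addr, size_t addr_sz,
                   unsigned long *prefixlen)
{
    const char *slash = strchr(value, '/');
    char *end;

    if (!slash) {                    /* no '/': keep the default prefix */
        snprintf(addr, addr_sz, "%s", value);
        *prefixlen = 64;
        return 0;
    }
    snprintf(addr, addr_sz, "%.*s", (int)(slash - value), value);
    *prefixlen = strtoul(slash + 1, &end, 10);
    if (end == slash + 1 || *end != '\0' || *prefixlen > 128) {
        fprintf(stderr,
                "parameter 'ipv6-net' expects a number after '/'\n");
        return -1;
    }
    return 0;
}
```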
-
- Nov 07, 2023
-
-
Hawkins Jiawei authored
Enable SVQ with VIRTIO_NET_F_RSS feature.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <626449eb303207de408126b3dc7c155cd72b028b.1698195059.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
This patch reuses vhost_vdpa_net_load_rss() with some refactorings to restore the receive-side scaling state at device's startup.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <cf5b78a16ed0318982ceffb195f2227f6aad4ac1.1698195059.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
At present, to enable the VIRTIO_NET_F_RSS feature, eBPF must be loaded for the vhost backend. Given that vhost-vdpa is one of the vhost backends, we need to implement the SetSteeringEBPF method to support RSS for vhost-vdpa, even if vhost-vdpa calculates the RSS hash in the hardware device instead of in the kernel by eBPF. Although this requires QEMU to be compiled with the `--enable-bpf` configuration even if the vdpa device does not use eBPF to calculate the RSS hash, it avoids adding vDPA-specific conditional statements to enable the VIRTIO_NET_F_RSS feature, which improves code maintainability.
Suggested-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <280e20ddce55b6de60f1552ba0865bffffe909b2.1698195059.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
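
Since the device computes the hash itself, the callback can simply accept the steering-eBPF request without loading a program. A sketch (the surrounding type is a stand-in for QEMU's):

```c
#include <stdbool.h>

typedef struct NetClientState NetClientState;

static bool vhost_vdpa_set_steering_ebpf(NetClientState *nc, int prog_fd)
{
    (void)nc;
    (void)prog_fd;
    return true;   /* hash is calculated by the hardware, not by eBPF */
}
```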
-
Hawkins Jiawei authored
Enable SVQ with VIRTIO_NET_F_HASH_REPORT feature.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <d66b0aee501cdad7954231900c35a11cad1e13db.1698194366.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
This patch introduces vhost_vdpa_net_load_rss() to restore the hash calculation state at device's startup.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <dbf699acff8c226596136a55a6abe35ebfeac8b0.1698194366.git.yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
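
For reference, the command sent on the control virtqueue to restore hash state has roughly this shape, abridged from the virtio-net spec; the exact field layout in QEMU may differ:

```c
#include <stdint.h>

struct virtio_net_ctrl_hdr {
    uint8_t class;   /* VIRTIO_NET_CTRL_MQ */
    uint8_t cmd;     /* VIRTIO_NET_CTRL_MQ_HASH_CONFIG */
};

struct virtio_net_hash_config {
    uint32_t hash_types;        /* bitmask of hash types to calculate */
    uint16_t reserved[4];
    uint8_t  hash_key_length;   /* length of the key that follows */
    uint8_t  hash_key_data[40]; /* e.g. a Toeplitz key */
};
```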
-
- Nov 01, 2023
-
-
Juan Quintela authored
Each user network connection creates a new slirp instance. We register more than one slirp instance as instance number 0:

  qemu-system-x86_64: -netdev user,id=hs1: savevm_state_handler_insert: Detected duplicate SaveStateEntry: id=slirp, instance_id=0x0
  Broken pipe
  ../../../../../mnt/code/qemu/full/tests/qtest/libqtest.c:195: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0)
  Aborted (core dumped)

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231020090731.28701-6-quintela@redhat.com>
-
- Oct 20, 2023
-
-
Steve Sistare authored
Pass the callback function to add_migration_state_change_notifier so that migration can initialize the notifier on add and clear it on delete, which simplifies the call sites. Shorten the function names so the extra arg can be added more legibly. Hide the global notifier list in a new function migration_call_notifiers, and make it externally visible so future live update code can call it. No functional change.
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Michael Galaxy <mgalaxy@akamai.com>
Reviewed-by: Michael Galaxy <mgalaxy@akamai.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <1686148954-250144-1-git-send-email-steven.sistare@oracle.com>
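
A hedged sketch of the reshaped API — the add function now takes the callback and initializes the notifier itself. Names follow the commit message; bodies and signatures are approximations:

```c
#include <stddef.h>

typedef struct Notifier Notifier;
typedef void (*NotifierFunc)(Notifier *n, void *data);
struct Notifier {
    NotifierFunc notify;
    /* list linkage lives behind the hidden global list now */
};

void migration_add_notifier(Notifier *n, NotifierFunc func)
{
    n->notify = func;            /* initialized on add... */
    /* ...then appended to the hidden global notifier list */
}

void migration_remove_notifier(Notifier *n)
{
    if (n->notify) {
        /* unlink from the hidden list */
        n->notify = NULL;        /* ...and cleared on delete */
    }
}
```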
-
- Oct 18, 2023
-
-
Hawkins Jiawei authored
This patch enables sending CVQ state load commands in parallel at device startup by the following steps:

* Refactor vhost_vdpa_net_load_cmd() to iterate through the control commands shadow buffers. This allows different CVQ state load commands to use their own unique buffers.
* Delay the polling and checking of buffers until either the SVQ is full or the control commands shadow buffers are full.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1578
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <9350f32278e39f7bce297b8f2d82dac27c6f8c9a.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
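
In control-flow terms, the new startup path looks roughly like this; every helper below is a hypothetical stand-in, shown only to illustrate when polling now happens:

```c
#include <stdbool.h>

bool svq_full(void);
bool shadow_buffers_full(void);
int  poll_and_check_pending_acks(void);   /* flushes *all* pending cmds */
void enqueue_cmd_with_own_buffer(int cmd);

int load_state_batched(const int *cmds, int n)
{
    for (int i = 0; i < n; i++) {
        /* Delay polling until we genuinely run out of room. */
        if (svq_full() || shadow_buffers_full()) {
            int r = poll_and_check_pending_acks();
            if (r < 0) {
                return r;
            }
        }
        enqueue_cmd_with_own_buffer(cmds[i]); /* own unique buffer each */
    }
    return poll_and_check_pending_acks();     /* drain the remainder */
}
```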
-
Hawkins Jiawei authored
This patch introduces two new arguments, `out_cursor` and `in_cursor`, to vhost_vdpa_net_load_x(). Additionally, it includes a helper function vhost_vdpa_net_load_cursor_reset() for resetting these cursors. Furthermore, this patch refactors vhost_vdpa_net_load_cmd() so that vhost_vdpa_net_load_cmd() prepares buffers for the device using the cursor arguments, instead of directly accessing the `s->cvq_cmd_out_buffer` and `s->status` fields. By making these changes, the next patches in this series can refactor vhost_vdpa_net_load_cmd() directly to iterate through the control commands shadow buffers, allowing QEMU to send CVQ state load commands in parallel at device startup.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <1c6516e233a14cc222f0884e148e4e1adceda78d.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
This patch moves vhost_svq_poll() to the caller of vhost_vdpa_net_cvq_add() and introduces a helper function. By making this change, the next patches in this series are able to refactor vhost_vdpa_net_load_x() to only delay the polling and checking process until either the SVQ is full or the control commands shadow buffers are full.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <196cadb55175a75275660c6634a538289f027ae3.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
Considering that vhost_vdpa_net_load_rx_mode() is only called within vhost_vdpa_net_load_rx() now, this patch refactors vhost_vdpa_net_load_rx_mode() to include a check for the device's ack, simplifying the code and improving its maintainability.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <68811d52f96ae12d68f0d67d996ac1642a623943.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
Next patches in this series will refactor vhost_vdpa_net_load_cmd() to iterate through the control commands shadow buffers, allowing QEMU to send CVQ state load commands in parallel at device startup. Considering that QEMU always forwards the CVQ command serialized outside of vhost_vdpa_net_load(), it is more elegant to send the CVQ commands directly without invoking vhost_vdpa_net_load_*() helpers.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <254f0618efde7af7229ba4fdada667bb9d318991.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
Next patches in this series will no longer perform an immediate poll and check of the device's used buffers for each CVQ state load command. Consequently, there will be multiple pending buffers in the shadow VirtQueue, making it a must for every control command to have its own buffer. To achieve this, this patch refactors vhost_vdpa_net_cvq_add() to accept `struct iovec`, which eliminates the coupling of control commands to `s->cvq_cmd_out_buffer` and `s->status`, allowing them to use their own buffer.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <8a328f146fb043f34edb75ba6d043d2d6de88f99.1697165821.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
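
The resulting helper shape might look like this; the signature is reconstructed from the commit message and the state struct is left opaque:

```c
#include <sys/types.h>
#include <sys/uio.h>

typedef struct VhostVDPAState VhostVDPAState;

/* Callers now describe their own out/in buffers with iovecs instead
 * of the helper reaching into s->cvq_cmd_out_buffer and s->status. */
ssize_t vhost_vdpa_net_cvq_add(VhostVDPAState *s,
                               const struct iovec *out_sg, size_t out_num,
                               const struct iovec *in_sg, size_t in_num);
```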
-
- Oct 06, 2023
-
-
Philippe Mathieu-Daudé authored
Fix:

  net/net.c:1680:35: error: declaration shadows a variable in the global scope [-Werror,-Wshadow]
  bool netdev_is_modern(const char *optarg)
                                    ^
  net/net.c:1714:38: error: declaration shadows a variable in the global scope [-Werror,-Wshadow]
  void netdev_parse_modern(const char *optarg)
                                       ^
  net/net.c:1728:60: error: declaration shadows a variable in the global scope [-Werror,-Wshadow]
  void net_client_parse(QemuOptsList *opts_list, const char *optarg)
                                                             ^
  /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/getopt.h:77:14: note: previous declaration is here
  extern char *optarg; /* getopt(3) external variables */
               ^

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20231004120019.93101-4-philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
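
The fix is a plain parameter rename so the declarations no longer shadow getopt(3)'s global 'optarg'. A sketch, with an illustrative replacement name:

```c
#include <stdbool.h>

typedef struct QemuOptsList QemuOptsList;

bool netdev_is_modern(const char *optstr);      /* was: const char *optarg */
void netdev_parse_modern(const char *optstr);
void net_client_parse(QemuOptsList *opts_list, const char *optstr);
```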
-
- Oct 04, 2023
-
-
Eugenio Pérez authored
This patch solves a few issues. The most obvious is that the features were set before the ACKNOWLEDGE | DRIVER status bits. Current vdpa devices are permissive with this, but it is better to follow the standard. Fixes: 152128d6 ("vdpa: move CVQ isolation check to net_init_vhost_vdpa")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230915170836.3078172-4-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
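
The ordering the standard expects, in sketch form — status constants as in the virtio spec; the device handle and setters are hypothetical:

```c
#include <stdint.h>

#define VIRTIO_CONFIG_S_ACKNOWLEDGE 1
#define VIRTIO_CONFIG_S_DRIVER      2
#define VIRTIO_CONFIG_S_FEATURES_OK 8

typedef struct VdpaDev VdpaDev;
void vdpa_set_status(VdpaDev *d, uint8_t status);
void vdpa_set_features(VdpaDev *d, uint64_t features);

void probe_features_in_order(VdpaDev *d, uint64_t features)
{
    /* ACKNOWLEDGE | DRIVER must come before feature negotiation; the
     * bug was calling vdpa_set_features() first. */
    vdpa_set_status(d, VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER);
    vdpa_set_features(d, features);
    vdpa_set_status(d, VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER |
                       VIRTIO_CONFIG_S_FEATURES_OK);
}
```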
-
Eugenio Pérez authored
Otherwise it continues the CVQ isolation probing. Fixes: 152128d6 ("vdpa: move CVQ isolation check to net_init_vhost_vdpa")
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230915170836.3078172-3-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
-
Eugenio Pérez authored
It incorrectly prints "error setting features", probably due to a copy-paste mistake. Fixes: 152128d6 ("vdpa: move CVQ isolation check to net_init_vhost_vdpa")
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230915170836.3078172-2-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
-
Eugenio Pérez authored
Not zeroing it causes a SIGSEGV if the live migration is cancelled, at net device restart. This is caused because CVQ tries to reuse the iova_tree that is present in the first vhost_vdpa device at the end of vhost_vdpa_net_cvq_start. As a consequence, it tries to access an iova_tree that has already been freed. Fixes: 00ef422e ("vdpa net: move iova tree creation from init to start")
Reported-by: Yanhui Ma <yama@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230913123408.2819185-1-eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
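
One plausible shape of the fix, using GLib's g_clear_pointer() — an assumption based on the message, not the verbatim patch:

```c
/* Free the tree at cleanup *and* zero the pointer, so a restarted CVQ
 * cannot see a dangling iova_tree from the first vhost_vdpa device. */
g_clear_pointer(&s->vhost_vdpa.iova_tree, vhost_iova_tree_delete);
/* expands to roughly:
 *     vhost_iova_tree_delete(s->vhost_vdpa.iova_tree);
 *     s->vhost_vdpa.iova_tree = NULL;
 */
```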
-
Stefan Hajnoczi authored
gcc 13.2.1 emits the following warning:

  net/vhost-vdpa.c: In function ‘net_vhost_vdpa_init.constprop’:
  net/vhost-vdpa.c:1394:25: error: ‘cvq_isolated’ may be used uninitialized [-Werror=maybe-uninitialized]
   1394 |     s->cvq_isolated = cvq_isolated;
        |     ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
  net/vhost-vdpa.c:1355:9: note: ‘cvq_isolated’ was declared here
   1355 |     int cvq_isolated;
        |         ^~~~~~~~~~~~
  cc1: all warnings being treated as errors

Cc: Eugenio Pérez <eperezma@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20230911215435.4156314-1-stefanha@redhat.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
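
The fix, sketched — give the variable a definite initial value, since gcc cannot prove that every path assigns it before use (initializing to 0 is an assumption):

```c
int cvq_isolated = 0;   /* was: int cvq_isolated; */
```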
-
Hawkins Jiawei authored
Next patches in this series will no longer perform an immediate poll and check of the device's used buffers for each CVQ state load command. Instead, they will send CVQ state load commands in parallel by polling multiple pending buffers at once. To achieve this, this patch refactors vhost_svq_poll() to accept a new argument `num`, which allows vhost_svq_poll() to wait for the device to use multiple elements, rather than polling for a single element.
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <950b3bfcfc5d446168b9d6a249d554a013a691d4.1693287885.git.yin31149@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
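
A sketch of the widened helper; the signature is approximated from the commit message and the inner wait helper is hypothetical:

```c
#include <stddef.h>

typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;

size_t wait_for_one_used_elem(VhostShadowVirtqueue *svq); /* hypothetical */

size_t vhost_svq_poll(VhostShadowVirtqueue *svq, size_t num)
{
    size_t len = 0;

    while (num--) {               /* wait for 'num' used elements, not one */
        len += wait_for_one_used_elem(svq);
    }
    return len;
}
```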
-
Eugenio Pérez authored
Now that we have added migration blockers if the device does not support all the needed features, remove the general blocker applied to all net devices with CVQ.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230822085330.3978829-6-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Eugenio Pérez authored
Doing it that way allows CVQ to be enabled before the dataplane vqs, restoring state such as MQ or MAC addresses properly in the case of a migration. The patch does it by defining a ->load NetClientInfo callback for the dataplane as well. Ideally, this should be done by an independent patch, but the function is already static so it would only add an empty vhost_vdpa_net_data_load stub.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Message-Id: <20230822085330.3978829-5-eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Eugenio Pérez authored
Next patches will add the corresponding data load.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230822085330.3978829-4-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Eugenio Pérez authored
Prior to this patch, the only way CVQ would be shadowed is if the device supports isolating the CVQ group, or if all vqs were shadowed from the beginning. The second condition was checked at the beginning, and no more configuration was done. After this series we need to check if data queues are shadowed because they are in the middle of the migration. As checking whether they are shadowed already covers the previous case, let's just mimic it.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <20230822085330.3978829-2-eperezma@redhat.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
Enable SVQ with VIRTIO_NET_F_CTRL_VLAN feature.
Co-developed-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <38dc63102a42c31c72fd293d0e6e2828fd54c86e.1690106284.git.yin31149@gmail.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
Hawkins Jiawei authored
This patch introduces vhost_vdpa_net_load_single_vlan() and vhost_vdpa_net_load_vlan() to restore the VLAN filtering state at device's startup.
Co-developed-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
Message-Id: <e76a29f77bb3f386e4a643c8af94b77b775d1752.1690106284.git.yin31149@gmail.com>
Tested-by: Lei Yang <leiyang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
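
A sketch of what such a restore loop looks like — walk the VLAN bitmap and re-add every configured id at startup. Names and the VirtIONet stand-in are approximations; virtio-net allows 4096 VLAN ids:

```c
#include <stdint.h>

#define MAX_VLAN (1 << 12)            /* 4096 VLAN ids in virtio-net */

typedef struct VhostVDPAState VhostVDPAState;
typedef struct {
    uint32_t vlans[MAX_VLAN >> 5];    /* bitmap of configured VLANs */
} VirtIONetSketch;

/* Hypothetical helper sending VIRTIO_NET_CTRL_VLAN_ADD for one id. */
int load_single_vlan(VhostVDPAState *s, uint16_t vid);

int load_vlan_sketch(VhostVDPAState *s, const VirtIONetSketch *n)
{
    for (int i = 0; i < MAX_VLAN >> 5; i++) {
        for (int j = 0; n->vlans[i] && j <= 0x1f; j++) {
            if (n->vlans[i] & (1U << j)) {
                int r = load_single_vlan(s, (uint16_t)((i << 5) + j));
                if (r < 0) {
                    return r;
                }
            }
        }
    }
    return 0;
}
```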
-
- Sep 29, 2023
-
-
Philippe Mathieu-Daudé authored
Fix:

  net/eth.c:435:20: error: declaration shadows a local variable [-Werror,-Wshadow]
      size_t input_size = iov_size(pkt, pkt_frags);
             ^
  net/eth.c:413:16: note: previous declaration is here
      size_t input_size = iov_size(pkt, pkt_frags);
             ^

Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230904161235.84651-16-philmd@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
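
The fix, in sketch form: drop the inner redeclaration and reuse the existing local, refreshing its value instead:

```c
size_t input_size = iov_size(pkt, pkt_frags);   /* line ~413 */
/* ... */
input_size = iov_size(pkt, pkt_frags);          /* line ~435: no 'size_t' */
```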
-
- Sep 18, 2023
-
-
Peter Maydell authored
Use a heap allocation instead of a variable length array in tap_receive_iov(). The codebase has very few VLAs, and if we can get rid of them all we can make the compiler error on new additions. This is a defensive measure against security bugs where an on-stack dynamic allocation isn't correctly size-checked (e.g. CVE-2021-3527).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
-
Peter Maydell authored
Use a g_autofree heap allocation instead of a variable length array in dump_receive_iov(). The codebase has very few VLAs, and if we can get rid of them all we can make the compiler error on new additions. This is a defensive measure against security bugs where an on-stack dynamic allocation isn't correctly size-checked (e.g. CVE-2021-3527).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Francisco Iglesias <frasse.iglesias@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
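
The conversion pattern shared by this and the previous commit, sketched with illustrative names:

```c
#include <glib.h>
#include <stdint.h>

void receive_iov_sketch(size_t size)
{
    /* before: uint8_t buf[size];  -- an on-stack VLA */
    g_autofree uint8_t *buf = g_malloc(size);

    /* ... assemble the packet into buf and hand it on ... */

    /* g_autofree releases buf automatically when it goes out of
     * scope, on every return path, so the change stays a one-liner. */
}
```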
-
Ilya Maximets authored
AF_XDP is a network socket family that allows communication directly with the network device driver in the kernel, bypassing most or all of the kernel networking stack. In essence, the technology is pretty similar to netmap. But, unlike netmap, AF_XDP is Linux-native and works with any network interface without driver modifications. Unlike vhost-based backends (kernel, user, vdpa), AF_XDP doesn't require access to character devices or unix sockets. Only access to the network interface itself is necessary.

This patch implements a network backend that communicates with the kernel by creating an AF_XDP socket. A chunk of userspace memory is shared between QEMU and the host kernel. Four ring buffers (Tx, Rx, Fill and Completion) are placed in that memory along with a pool of memory buffers for the packet data. Data transmission is done by allocating one of the buffers, copying packet data into it and placing the pointer into the Tx ring. After transmission, the device will return the buffer via the Completion ring. On Rx, the device will take a buffer from a pre-populated Fill ring, write the packet data into it and place the buffer into the Rx ring. The AF_XDP network backend takes on the communication with the host kernel and the network interface and forwards packets to/from the peer device in QEMU.

Usage example:

  -device virtio-net-pci,netdev=guest1,mac=00:16:35:AF:AA:5C
  -netdev af-xdp,ifname=ens6f1np1,id=guest1,mode=native,queues=1

The XDP program bridges the socket with a network interface. It can be attached to the interface in 2 different modes:

1. skb - this mode should work for any interface and doesn't require driver support, with a caveat of lower performance.
2. native - this does require support from the driver and allows bypassing skb allocation in the kernel and potentially using zero-copy while getting packets in/out of userspace.

By default, QEMU will try to use native mode and fall back to skb. The mode can be forced via the 'mode' option. To force 'copy' even in native mode, use the 'force-copy=on' option. This might be useful if there is some issue with the driver.

The 'queues=N' option allows specifying how many device queues should be open. Note that all the queues that are not open are still functional and can receive traffic, but it will not be delivered to QEMU. So, the number of device queues should generally match the QEMU configuration, unless the device is shared with something else and the traffic re-direction to appropriate queues is correctly configured on a device level (e.g. with ethtool -N). The 'start-queue=M' option can be used to specify from which queue id QEMU should start configuring 'N' queues. It might also be necessary to use this option with certain NICs, e.g. MLX5 NICs. See the docs for examples.

In the general case QEMU will need CAP_NET_ADMIN and CAP_SYS_ADMIN or CAP_BPF capabilities in order to load the default XSK/XDP programs to the network interface and configure BPF maps. It is possible, however, to run with no capabilities. For that to work, an external process with enough capabilities will need to pre-load the default XSK program, create AF_XDP sockets and pass their file descriptors to the QEMU process on startup via the 'sock-fds' option. The network backend will need to be configured with 'inhibit=on' to avoid loading of the program. QEMU will need 32 MB of locked memory (RLIMIT_MEMLOCK) per queue, or CAP_IPC_LOCK.

There are a few performance challenges with the current network backends. First is that they do not support IO threads. This means that the data path is handled by the main thread in QEMU and may slow down other work or may be slowed down by some other work. This also means that taking advantage of multi-queue is generally not possible today. Another thing is that the data path goes through the device emulation code, which is not really optimized for performance. The fastest "frontend" device is virtio-net. But it's not optimized for heavy traffic either, because it expects such use-cases to be handled via some implementation of vhost (user, kernel, vdpa). In practice, we have virtio notifications and rcu lock/unlock on a per-packet basis and not very efficient accesses to the guest memory. Communication channels between backend and frontend devices do not allow passing more than one packet at a time either. Some of these challenges can be avoided in the future by adding better batching into device emulation or by implementing a vhost-af-xdp variant.

There are also a few kernel limitations. AF_XDP sockets do not support any kind of checksum or segmentation offloading. Buffers are limited to a page size (4K), i.e. MTU is limited. Multi-buffer support for AF_XDP is in progress, but not ready yet. Also, transmission in all non-zero-copy modes is synchronous, i.e. done in a syscall. That doesn't allow high packet rates on virtual interfaces. However, keeping in mind all of these challenges, the current implementation of the AF_XDP backend shows a decent performance while running on top of a physical NIC with zero-copy support.

Test setup: 2 VMs running on 2 physical hosts connected via a ConnectX6-Dx card. The network backend is configured to open the NIC directly in native mode. The driver supports zero-copy. The NIC is configured to use 1 queue. Inside a VM: iperf3 for basic TCP performance testing and dpdk-testpmd for PPS testing.

iperf3 result:
  TCP stream: 19.1 Gbps

dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
  Tx only         : 3.4 Mpps
  Rx only         : 2.0 Mpps
  L2 FWD Loopback : 1.5 Mpps

In skb mode the same setup shows much lower performance, similar to a setup where the pair of physical NICs is replaced with a veth pair:

iperf3 result:
  TCP stream: 9 Gbps

dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
  Tx only         : 1.2 Mpps
  Rx only         : 1.0 Mpps
  L2 FWD Loopback : 0.7 Mpps

Results in skb mode or over the veth are close to the results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC.
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> (docker/lcitool)
Signed-off-by: Jason Wang <jasowang@redhat.com>
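
For illustration, the transmit path described above looks roughly like this with the libxdp xsk helpers. Error handling and the umem frame allocator are elided, local names are illustrative, and this is not QEMU's actual code:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <xdp/xsk.h>   /* libxdp; older setups ship these helpers in <bpf/xsk.h> */

/* Copy the packet into a free umem frame, describe it in the Tx ring,
 * and kick the kernel. */
static void xsk_tx_one(struct xsk_socket *xsk, struct xsk_ring_prod *tx,
                       void *umem_area, uint64_t frame_addr,
                       const void *pkt, uint32_t len)
{
    uint32_t idx;
    struct xdp_desc *desc;

    if (xsk_ring_prod__reserve(tx, 1, &idx) != 1) {
        return;                       /* Tx ring full: drop or retry */
    }

    /* Packet data lives in the umem shared with the kernel. */
    memcpy((uint8_t *)umem_area + frame_addr, pkt, len);

    desc = xsk_ring_prod__tx_desc(tx, idx);
    desc->addr = frame_addr;
    desc->len = len;
    xsk_ring_prod__submit(tx, 1);

    /* In non-zero-copy modes transmission is synchronous: kick the
     * kernel with a syscall.  The frame comes back via the Completion
     * ring once sent. */
    sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
}
```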
-
Andrew Melnychenko authored
New features are subject to check with vhost-user and vdpa.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
-
Yuri Benditovich authored
Tap indicates support for USO features according to the capabilities of the current kernel module.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
-
Andrew Melnychenko authored
Passing additional parameters (USOv4 and USOv6 offloads) when setting TAP offloads.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
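
A sketch of how the new bits fold into the TUNSETOFFLOAD call. It assumes a kernel new enough to define TUN_F_USO4/TUN_F_USO6 (roughly Linux 6.2+), and the fallback mirrors what QEMU already does for UFO; this is not the verbatim patch:

```c
#include <linux/if_tun.h>
#include <sys/ioctl.h>

static void tap_set_offload_sketch(int fd, int csum, int tso4, int tso6,
                                   int ecn, int ufo, int uso4, int uso6)
{
    unsigned int offload = 0;

    if (csum) {
        offload |= TUN_F_CSUM;
        if (tso4) offload |= TUN_F_TSO4;
        if (tso6) offload |= TUN_F_TSO6;
        if ((tso4 || tso6) && ecn) offload |= TUN_F_TSO_ECN;
        if (ufo) offload |= TUN_F_UFO;
#ifdef TUN_F_USO4
        if (uso4) offload |= TUN_F_USO4;   /* UDP segmentation, IPv4 */
#endif
#ifdef TUN_F_USO6
        if (uso6) offload |= TUN_F_USO6;   /* UDP segmentation, IPv6 */
#endif
    }
    if (ioctl(fd, TUNSETOFFLOAD, offload) != 0) {
        /* retry without the USO bits on older kernels */
    }
}
```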
-
- Sep 13, 2023
-
-
Jonathan Perkin authored
qemu 8.1.0 breaks on illumos platforms due to _XOPEN_SOURCE and others no longer being set correctly, leading to breakage such as: https://us-central.manta.mnx.io/pkgsrc/public/reports/trunk/tools/20230908.1404/qemu-8.1.0/build.log This is a result of the meson conversion, which incorrectly matches against 'solaris' instead of 'sunos' for uname. First time submitting a patch here, hope I did it correctly. Thanks.
Signed-off-by: Jonathan Perkin <jonathan@perkin.org.uk>
Message-ID: <ZPtdxtum9UVPy58J@perkin.org.uk>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Sep 08, 2023
-
-
Michael Tokarev authored
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- Sep 07, 2023
-
-
Paolo Bonzini authored
CONFIG_SOLARIS is only used to pick tap implementations. But the target OS is invariant and does not depend on the configuration, so move away from config_host and just use unconditional rules in softmmu_ss.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-