  1. Nov 07, 2023
    • Marc-André Lureau
      qmp/hmp: disable screendump if PIXMAN is missing · f38aa2c7
      Marc-André Lureau authored
      
      The command requires color conversion and line-by-line feeding. We could
      have a simple fallback for simple formats though.
      
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      f38aa2c7
    • Carwyn Ellis
      ui/cocoa: add zoom-to-fit display option · 5ec0898b
      Carwyn Ellis authored
      
      Provides a display option, zoom-to-fit, that enables scaling of the
      display when full-screen mode is enabled.
      
      Also ensures that the corresponding menu item is marked as enabled when
      the option is set to on.
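
      A hedged sketch of how this would look on the command line (the exact
      property syntax is assumed from the description above, not taken from
      the patch itself):

        -display cocoa,zoom-to-fit=on -full-screen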
      
      Signed-off-by: Carwyn Ellis <carwynellis@gmail.com>
      Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
      Message-Id: <20231027154920.80626-2-carwynellis@gmail.com>
      5ec0898b
    • Daniel Henrique Barboza
      qapi,risc-v: add query-cpu-model-expansion · aeb2bc59
      Daniel Henrique Barboza authored
      
      This API is used to inspect the characteristics of a given CPU model. It
      also allows users to validate a CPU model with a certain configuration,
      e.g. whether "-cpu X,a=true,b=false" is a valid setup for a given QEMU
      binary. We'll start by implementing the first part; the second requires
      more changes in the RISC-V CPU boot flow.
      
      The implementation is inspired by the existing ARM
      query-cpu-model-expansion impl in target/arm/arm-qmp-cmds.c. We'll
      create a RISCVCPU object with the required model, fetch its existing
      properties, add a couple of relevant boolean options (pmp and mmu) and
      display it to users.
      
      Here's a usage example:
      
      ./build/qemu-system-riscv64 -S -M virt -display none \
        -qmp  tcp:localhost:1234,server,wait=off
      
      ./scripts/qmp/qmp-shell localhost:1234
      Welcome to the QMP low-level shell!
      Connected to QEMU 8.1.50
      
      (QEMU)  query-cpu-model-expansion type=full model={"name":"rv64"}
      {"return": {"model": {"name": "rv64", "props": {"zicond": false, "x-zvfh": false, "mmu": true, "x-zvfbfwma": false, "x-zvfbfmin": false, "xtheadbs": false, "xtheadbb": false, "xtheadba": false, "xtheadmemidx": false, "smstateen": false, "zfinx": false, "Zve64f": false, "Zve32f": false, "x-zvfhmin": false, "xventanacondops": false, "xtheadcondmov": false, "svpbmt": false, "zbs": true, "zbc": true, "zbb": true, "zba": true, "zicboz": true, "xtheadmac": false, "Zfh": false, "Zfa": true, "zbkx": false, "zbkc": false, "zbkb": false, "Zve64d": false, "x-zfbfmin": false, "zk": false, "x-epmp": false, "xtheadmempair": false, "zkt": false, "zks": false, "zkr": false, "zkn": false, "Zfhmin": false, "zksh": false, "zknh": false, "zkne": false, "zknd": false, "zhinx": false, "Zicsr": true, "sscofpmf": false, "Zihintntl": true, "sstc": true, "xtheadcmo": false, "x-zvbb": false, "zksed": false, "x-zvkned": false, "xtheadsync": false, "x-zvkg": false, "zhinxmin": false, "svadu": true, "xtheadfmv": false, "x-zvksed": false, "svnapot": false, "pmp": true, "x-zvknhb": false, "x-zvknha": false, "xtheadfmemidx": false, "x-zvksh": false, "zdinx": false, "zicbom": true, "Zihintpause": true, "svinval": false, "zcf": false, "zce": false, "zcd": false, "zcb": false, "zca": false, "x-ssaia": false, "x-smaia": false, "zmmul": false, "x-zvbc": false, "Zifencei": true, "zcmt": false, "zcmp": false, "Zawrs": true}}}}
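
      The same query issued as raw QMP JSON (over the tcp: QMP socket from the
      example above, after the usual qmp_capabilities handshake) would
      presumably look like:

        { "execute": "query-cpu-model-expansion",
          "arguments": { "type": "full", "model": { "name": "rv64" } } }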
      
      Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
      Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
      Message-ID: <20231018195638.211151-3-dbarboza@ventanamicro.com>
      Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
      aeb2bc59
  2. Nov 06, 2023
  3. Nov 03, 2023
    • Song Gao
      target/loongarch: Implement query-cpu-model-expansion · 31f694b9
      Song Gao authored
      
      Add support for the query-cpu-model-expansion QMP command to LoongArch.
      We support querying the CPU features.

        e.g. the la464 and max CPUs support LSX/LASX (enabled by default);
        la132 does not support LSX/LASX.
      
          1. start with '-cpu max,lasx=off'
      
          (QEMU) query-cpu-model-expansion type=static  model={"name":"max"}
          {"return": {"model": {"name": "max", "props": {"lasx": false, "lsx": true}}}}
      
          2. start with '-cpu la464,lasx=off'
          (QEMU) query-cpu-model-expansion type=static  model={"name":"la464"}
          {"return": {"model": {"name": "max", "props": {"lasx": false, "lsx": true}}}}
      
          3. start with '-cpu la132,lasx=off'
          qemu-system-loongarch64: can't apply global la132-loongarch-cpu.lasx=off: Property 'la132-loongarch-cpu.lasx' not found
      
          4. start with '-cpu max,lasx=off' or '-cpu la464,lasx=off', then query cpu model la132
          (QEMU) query-cpu-model-expansion type=static  model={"name":"la132"}
          {"return": {"model": {"name": "la132"}}}
      
      Acked-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Song Gao <gaosong@loongson.cn>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-Id: <20231020084925.3457084-4-gaosong@loongson.cn>
      31f694b9
  4. Nov 02, 2023
  5. Nov 01, 2023
    • Steve Sistare
      cpr: reboot mode · a87e6451
      Steve Sistare authored
      
      Add the cpr-reboot migration mode.  Usage:
      
      $ qemu-system-$arch -monitor stdio ...
      QEMU 8.1.50 monitor - type 'help' for more information
      (qemu) migrate_set_capability x-ignore-shared on
      (qemu) migrate_set_parameter mode cpr-reboot
      (qemu) migrate -d file:vm.state
      (qemu) info status
      VM status: paused (postmigrate)
      (qemu) quit
      
      $ qemu-system-$arch -monitor stdio -incoming defer ...
      QEMU 8.1.50 monitor - type 'help' for more information
      (qemu) migrate_set_capability x-ignore-shared on
      (qemu) migrate_set_parameter mode cpr-reboot
      (qemu) migrate_incoming file:vm.state
      (qemu) info status
      VM status: running
      
      In this mode, the migrate command saves state to a file, allowing one
      to quit qemu, reboot to an updated kernel, and restart an updated version
      of qemu.  The caller must specify a migration URI that writes to and reads
      from a file.  Unlike normal mode, the use of certain local storage options
      does not block the migration, but the caller must not modify guest block
      devices between the quit and restart.  To avoid saving guest RAM to the
      file, the memory backend must be shared, and the @x-ignore-shared migration
      capability must be set.  Guest RAM must be non-volatile across reboot, such
      as by backing it with a dax device, but this is not enforced.  The restarted
      qemu arguments must match those used to initially start qemu, plus the
      -incoming option.
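
      For management software driving this over QMP rather than HMP, a rough
      equivalent of the monitor sequence above (assuming the same file: URI)
      would be:

        { "execute": "migrate-set-capabilities",
          "arguments": { "capabilities": [
            { "capability": "x-ignore-shared", "state": true } ] } }
        { "execute": "migrate-set-parameters", "arguments": { "mode": "cpr-reboot" } }
        { "execute": "migrate", "arguments": { "uri": "file:vm.state" } }

      and, on the restarted instance started with -incoming defer (after the
      same capability and mode settings):

        { "execute": "migrate-incoming", "arguments": { "uri": "file:vm.state" } }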
      
      Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Message-ID: <1698263069-406971-6-git-send-email-steven.sistare@oracle.com>
      a87e6451
    • Steve Sistare
      migration: mode parameter · eea1e5c9
      Steve Sistare authored
      
      Create a mode migration parameter that can be used to select alternate
      migration algorithms.  The default mode is normal, representing the
      current migration algorithm, and does not need to be explicitly set.
      
      No functional change until a new mode is added, except that the mode is
      shown by the 'info migrate' command.
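
      For instance, the new parameter should show up (as "mode": "normal" by
      default) in the reply to:

        { "execute": "query-migrate-parameters" }

      and can be changed with migrate-set-parameters, as in the cpr-reboot
      example above.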
      
      Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Message-ID: <1698263069-406971-2-git-send-email-steven.sistare@oracle.com>
      eea1e5c9
  6. Oct 31, 2023
  7. Oct 20, 2023
  8. Oct 19, 2023
  9. Oct 17, 2023
    • Juan Quintela
      migration: Improve json and formatting · e4ceec29
      Juan Quintela authored
      
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Message-ID: <20231013104736.31722-2-quintela@redhat.com>
      e4ceec29
    • Peter Xu
      migration: Allow user to specify available switchover bandwidth · 8b239597
      Peter Xu authored
      
      Migration bandwidth is a very important value for live migration: it is
      one of the major factors in deciding when to switch over to the
      destination in a precopy process.
      
      This value is currently estimated by QEMU during the whole live migration
      process by monitoring how fast we are sending the data.  In an ideal
      world, where we always feed unlimited data to the migration channel, this
      would be the most accurate estimate, as the send rate would be limited
      only by the bandwidth that is available.
      
      However, in reality it may be very different, e.g., over a 10Gbps network
      we can see query-migrate showing a migration bandwidth of only a few tens
      of MB/s just because there are plenty of other things the migration
      thread might be doing.  For example, the migration thread can be busy
      scanning zero pages, or it can be fetching the dirty bitmap from other
      external dirty sources (like vhost or KVM).  This means we may not be
      pushing data into the migration channel as fast as possible, so the
      bandwidth estimated from "how much data we sent in the channel" can
      sometimes be dramatically inaccurate.
      
      This affects the switchover decision: QEMU assumes that we may not be
      able to switch over at all with such a low bandwidth, when in reality we
      can.  With that wrong bandwidth estimate, the migration may not even
      converge within the specified downtime, iterating forever.
      
      The issue is that QEMU itself may not be able to avoid those
      uncertainties when measuring the real "available migration bandwidth";
      at least I cannot think of a way so far.
      
      One way to fix this is to let a user who is fully aware of the available
      bandwidth provide an accurate value to QEMU.
      
      For example, if the user has a dedicated 10Gbps channel for migrating
      this specific VM, the user can specify this bandwidth so QEMU can always
      base the calculation on this fact, trusting the user as long as the
      value is specified.  It may not be the exact bandwidth available when
      switching over (in which case QEMU will push migration data as fast as
      possible), but it is much better than QEMU guessing wildly, especially
      when the guess is very wrong.
      
      A new parameter "avail-switchover-bandwidth" is introduced just for this.
      When the user specifies this parameter, instead of trusting the value
      estimated by QEMU itself (based on the QEMUFile send speed), QEMU trusts
      the user more and uses this value to decide when to switch over, assuming
      that such bandwidth will be available then.
      
      Note that specifying this value will not throttle the bandwidth for
      switchover yet, so QEMU will always use the full bandwidth possible for
      sending switchover data, assuming that should always be the most important
      way to use the network at that time.
      
      This can resolve issues like a non-converging migration caused by a
      hilariously low "migration bandwidth" being detected for whatever reason.
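
      As a sketch, for a dedicated 10Gbps migration link one might set (the
      value is assumed here to be in bytes per second, matching the existing
      max-bandwidth parameter):

        { "execute": "migrate-set-parameters",
          "arguments": { "avail-switchover-bandwidth": 1250000000 } }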
      
      Reported-by: Zhiyi Guo <zhguo@redhat.com>
      Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      Message-ID: <20231010221922.40638-1-peterx@redhat.com>
      8b239597
  10. Oct 11, 2023
  11. Oct 10, 2023
    • Andrei Gudkov
      migration/dirtyrate: use QEMU_CLOCK_HOST to report start-time · 320a6ccc
      Andrei Gudkov authored
      
      Currently query-dirty-rate uses QEMU_CLOCK_REALTIME as
      the source for the start-time field. This translates to
      clock_gettime(CLOCK_MONOTONIC), i.e. the number of seconds
      since host boot. This is not very useful. The only
      reasonable use case of start-time I can imagine is to
      check whether previously completed measurements are
      too old or not. But this makes sense only if start-time
      is reported as host wall-clock time.
      
      This patch changes the source of start-time
      from QEMU_CLOCK_REALTIME to QEMU_CLOCK_HOST.
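
      For example, after a measurement completes, the start-time returned by

        { "execute": "query-dirty-rate" }

      can now be compared directly against host wall-clock time (e.g. the
      output of "date +%s" on the host) to decide whether the result is stale.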
      
      Signed-off-by: Andrei Gudkov <gudkov.andrei@huawei.com>
      Reviewed-by: Hyman Huang <yong.huang@smartx.com>
      Message-Id: <399861531e3b24a1ecea2ba453fb2c3d129fb03a.1693905328.git.gudkov.andrei@huawei.com>
      Signed-off-by: Hyman Huang <yong.huang@smartx.com>
      320a6ccc
    • Andrei Gudkov
      migration/calc-dirty-rate: millisecond-granularity period · 34a68001
      Andrei Gudkov authored
      
      This patch allows measuring the dirty page rate for
      sub-second intervals of time. An optional argument,
      calc-time-unit, is introduced. For example:
      {"execute": "calc-dirty-rate", "arguments":
        {"calc-time": 500, "calc-time-unit": "millisecond"} }
      
      Millisecond granularity makes it possible to predict whether
      migration will succeed or not. To do this, calculate the dirty
      rate with calc-time set to the maximum allowed downtime (e.g. 300ms),
      convert the measured rate into a volume of dirtied memory,
      and divide by the network throughput. If the resulting time is lower
      than the maximum allowed downtime, then migration will converge.
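
      A worked example with made-up numbers: with a 300ms downtime limit,
      suppose calc-dirty-rate over calc-time=300 milliseconds reports
      1000 MiB/s. That is roughly 300 MiB dirtied per 300ms window; over a
      10Gbps (~1192 MiB/s) migration link, transferring 300 MiB takes about
      0.25s, which is below the 300ms limit, so the migration should converge.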
      
      Measurement results for a single thread randomly writing to
      a 1/4/24 GiB memory region:
      
      +----------------+-----------------------------------------------+
      | calc-time      |                dirty rate MiB/s               |
      | (milliseconds) +----------------+---------------+--------------+
      |                | theoretical    | page-sampling | dirty-bitmap |
      |                | (at 3M wr/sec) |               |              |
      +----------------+----------------+---------------+--------------+
      |                               1GiB                             |
      +----------------+----------------+---------------+--------------+
      |            100 |           6996 |          7100 |         3192 |
      |            200 |           4606 |          4660 |         2655 |
      |            300 |           3305 |          3280 |         2371 |
      |            400 |           2534 |          2525 |         2154 |
      |            500 |           2041 |          2044 |         1871 |
      |            750 |           1365 |          1341 |         1358 |
      |           1000 |           1024 |          1052 |         1025 |
      |           1500 |            683 |           678 |          684 |
      |           2000 |            512 |           507 |          513 |
      +----------------+----------------+---------------+--------------+
      |                               4GiB                             |
      +----------------+----------------+---------------+--------------+
      |            100 |          10232 |          8880 |         4070 |
      |            200 |           8954 |          8049 |         3195 |
      |            300 |           7889 |          7193 |         2881 |
      |            400 |           6996 |          6530 |         2700 |
      |            500 |           6245 |          5772 |         2312 |
      |            750 |           4829 |          4586 |         2465 |
      |           1000 |           3865 |          3780 |         2178 |
      |           1500 |           2694 |          2633 |         2004 |
      |           2000 |           2041 |          2031 |         1789 |
      +----------------+----------------+---------------+--------------+
      |                               24GiB                            |
      +----------------+----------------+---------------+--------------+
      |            100 |          11495 |          8640 |         5597 |
      |            200 |          11226 |          8616 |         3527 |
      |            300 |          10965 |          8386 |         2355 |
      |            400 |          10713 |          8370 |         2179 |
      |            500 |          10469 |          8196 |         2098 |
      |            750 |           9890 |          7885 |         2556 |
      |           1000 |           9354 |          7506 |         2084 |
      |           1500 |           8397 |          6944 |         2075 |
      |           2000 |           7574 |          6402 |         2062 |
      +----------------+----------------+---------------+--------------+
      
      Theoretical values are computed according to the following formula:
      size * (1 - (1-(4096/size))^(time*wps)) / (time * 2^20),
      where size is in bytes, time is in seconds, and wps is number of
      writes per second.
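
      For instance, plugging in size = 1 GiB (2^30 bytes), time = 0.1 s and
      wps = 3*10^6 gives 2^30 * (1 - (1 - 2^-18)^300000) / (0.1 * 2^20),
      which is roughly 7.0*10^3 MiB/s, in line with the first row of the
      1 GiB table above.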
      
      Signed-off-by: Andrei Gudkov <gudkov.andrei@huawei.com>
      Reviewed-by: Hyman Huang <yong.huang@smartx.com>
      Message-Id: <d802e6b8053eb60fbec1a784cf86f67d9528e0a8.1693895970.git.gudkov.andrei@huawei.com>
      Signed-off-by: Hyman Huang <yong.huang@smartx.com>
      34a68001
  12. Sep 20, 2023
  13. Sep 19, 2023
    • David Hildenbrand
      backends/hostmem-file: Add "rom" property to support VM templating with R/O files · e92666b0
      David Hildenbrand authored
      
      For now, "share=off,readonly=on" would always result in us opening the
      file R/O and mmap'ing the opened file MAP_PRIVATE R/O -- effectively
      turning it into ROM.
      
      Especially for VM templating, "share=off" is a common use case. However,
      that use case is impossible with files that lack write permissions,
      because "share=off,readonly=on" will not give us writable RAM.
      
      The sole users of ROM via memory-backend-file are R/O NVDIMMs, but as we
      have users (Kata Containers) that rely on the existing behavior --
      malicious VMs should not be able to consume COW memory for R/O NVDIMMs --
      we cannot change the semantics of "share=off,readonly=on".
      
      So let's add a new "rom" property with on/off/auto values. "auto" is
      the default and what most people will use: for historical reasons, to not
      change the old semantics, it defaults to the value of the "readonly"
      property.
      
      For VM templating, one can now use:
          -object memory-backend-file,share=off,readonly=on,rom=off,...
      
      But we'll disallow:
          -object memory-backend-file,share=on,readonly=on,rom=off,...
      because we would otherwise get an error when trying to mmap the R/O file
      shared and writable. An explicit error message is cleaner.
      
      We will also disallow for now:
          -object memory-backend-file,share=off,readonly=off,rom=on,...
          -object memory-backend-file,share=on,readonly=off,rom=on,...
      It's not harmful, but also not really required for now.
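
      A slightly fuller, hedged sketch of the VM-templating invocation above
      (the id, size and template path are placeholders):

          -m 4G \
          -object memory-backend-file,id=mem0,size=4G,mem-path=/path/to/template.ram,share=off,readonly=on,rom=off \
          -machine memory-backend=mem0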
      
      Alternatives that were abandoned:
      * Make "unarmed=on" for the NVDIMM set the memory region container
        readonly. We would still see a change of ROM->RAM and possibly run
        into memslot limits with vhost-user. Further, there might be use cases
        for "unarmed=on" that should still allow writing to that memory
        (temporary files, system RAM, ...).
      * Add a new "readonly=on/off/auto" parameter for NVDIMMs. Similar issues
        as with "unarmed=on".
      * Make "readonly" consume "on/off/file" instead of being a 'bool' type.
        This would slightly change the behavior of the "readonly" parameter:
        values like true/false (as accepted by a 'bool' type) would no longer
        be accepted.
      
      Message-ID: <20230906120503.359863-4-david@redhat.com>
      Acked-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      e92666b0
  14. Sep 18, 2023
    • Ilya Maximets
      net: add initial support for AF_XDP network backend · cb039ef3
      Ilya Maximets authored
      
      AF_XDP is a network socket family that allows communication directly
      with the network device driver in the kernel, bypassing most or all
      of the kernel networking stack.  In essence, the technology is pretty
      similar to netmap.  But, unlike netmap, AF_XDP is Linux-native and
      works with any network interface without driver modifications.  Unlike
      vhost-based backends (kernel, user, vdpa), AF_XDP doesn't require
      access to character devices or unix sockets.  Only access to the
      network interface itself is necessary.
      
      This patch implements a network backend that communicates with the
      kernel by creating an AF_XDP socket.  A chunk of userspace memory
      is shared between QEMU and the host kernel.  Four ring buffers (Tx, Rx,
      Fill and Completion) are placed in that memory along with a pool of
      memory buffers for the packet data.  Data transmission is done by
      allocating one of the buffers, copying packet data into it and placing
      the pointer into the Tx ring.  After transmission, the device returns
      the buffer via the Completion ring.  On Rx, the device takes a buffer
      from a pre-populated Fill ring, writes the packet data into it and
      places the buffer into the Rx ring.
      
      The AF_XDP network backend handles the communication with the host
      kernel and the network interface and forwards packets to/from the
      peer device in QEMU.
      
      Usage example:
      
        -device virtio-net-pci,netdev=guest1,mac=00:16:35:AF:AA:5C
        -netdev af-xdp,ifname=ens6f1np1,id=guest1,mode=native,queues=1
      
      An XDP program bridges the socket with a network interface.  It can be
      attached to the interface in 2 different modes:

      1. skb - this mode should work for any interface and doesn't require
               driver support, at the cost of lower performance.

      2. native - this does require support from the driver and allows
                  bypassing skb allocation in the kernel and potentially
                  using zero-copy while getting packets into/out of userspace.
      
      By default, QEMU will try to use native mode and fall back to skb.
      The mode can be forced via the 'mode' option.  To force copying even in
      native mode, use the 'force-copy=on' option.  This might be useful if
      there is some issue with the driver.
      
      The 'queues=N' option allows specifying how many device queues should
      be opened.  Note that all the queues that are not opened are still
      functional and can receive traffic, but it will not be delivered to
      QEMU.  So, the number of device queues should generally match the
      QEMU configuration, unless the device is shared with something
      else and the traffic redirection to the appropriate queues is correctly
      configured on the device level (e.g. with ethtool -N).
      The 'start-queue=M' option can be used to specify from which queue id
      QEMU should start configuring 'N' queues.  It might also be necessary
      to use this option with certain NICs, e.g. MLX5 NICs.  See the docs
      for examples.
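
      A hedged sketch of that kind of setup (interface name, queue id and
      port number are arbitrary): steer one TCP flow to queue 2 of the host
      NIC with an ntuple rule, then have QEMU attach to that queue only:

        ethtool -N ens6f1np1 flow-type tcp4 dst-port 5201 action 2
        -netdev af-xdp,ifname=ens6f1np1,id=guest1,mode=native,start-queue=2,queues=1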
      
      In the general case, QEMU will need the CAP_NET_ADMIN and CAP_SYS_ADMIN
      or CAP_BPF capabilities in order to load the default XSK/XDP programs
      onto the network interface and configure BPF maps.  It is possible,
      however, to run with no capabilities.  For that to work, an external
      process with enough capabilities will need to pre-load the default XSK
      program, create the AF_XDP sockets and pass their file descriptors to
      the QEMU process on startup via the 'sock-fds' option.  The network
      backend will need to be configured with 'inhibit=on' to avoid loading
      the program.  QEMU will also need 32 MB of locked memory
      (RLIMIT_MEMLOCK) per queue, or CAP_IPC_LOCK.
      
      There are a few performance challenges with the current network backends.

      The first is that they do not support IO threads.  This means that the
      data path is handled by the main thread in QEMU and may slow down other
      work or be slowed down by some other work.  This also means that taking
      advantage of multi-queue is generally not possible today.
      
      Another issue is that the data path goes through the device emulation
      code, which is not really optimized for performance.  The fastest
      "frontend" device is virtio-net, but it's not optimized for heavy
      traffic either, because it expects such use cases to be handled via
      some implementation of vhost (user, kernel, vdpa).  In practice, we
      have virtio notifications and RCU lock/unlock on a per-packet basis
      and not very efficient accesses to guest memory.  Communication
      channels between backend and frontend devices also do not allow
      passing more than one packet at a time.
      
      Some of these challenges can be avoided in the future by adding better
      batching into device emulation or by implementing a vhost-af-xdp variant.
      
      There are also a few kernel limitations.  AF_XDP sockets do not
      support any kind of checksum or segmentation offloading.  Buffers
      are limited to a page size (4K), i.e. the MTU is limited.  A
      multi-buffer implementation for AF_XDP is in progress, but not ready
      yet.  Also, transmission in all non-zero-copy modes is synchronous,
      i.e. done in a syscall.  That doesn't allow high packet rates on
      virtual interfaces.
      
      However, keeping all of these challenges in mind, the current
      implementation of the AF_XDP backend shows decent performance while
      running on top of a physical NIC with zero-copy support.
      
      Test setup:
      
      2 VMs running on 2 physical hosts connected via ConnectX6-Dx cards.
      The network backend is configured to open the NIC directly in native
      mode.  The driver supports zero-copy.  The NIC is configured to use
      1 queue.
      
      Inside a VM - iperf3 for basic TCP performance testing and dpdk-testpmd
      for PPS testing.
      
      iperf3 result:
       TCP stream      : 19.1 Gbps
      
      dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
       Tx only         : 3.4 Mpps
       Rx only         : 2.0 Mpps
       L2 FWD Loopback : 1.5 Mpps
      
      In skb mode the same setup shows much lower performance, similar to a
      setup where the pair of physical NICs is replaced with a veth pair:
      
      iperf3 result:
        TCP stream      : 9 Gbps
      
      dpdk-testpmd (single queue, single CPU core, 64 B packets) results:
        Tx only         : 1.2 Mpps
        Rx only         : 1.0 Mpps
        L2 FWD Loopback : 0.7 Mpps
      
      Results in skb mode or over the veth pair are close to the results of a
      tap backend with vhost=on and segmentation offloading disabled, bridged
      with a NIC.
      
      Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> (docker/lcitool)
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      cb039ef3
  15. Sep 04, 2023
  16. Aug 02, 2023