Sep 08, 2023
    • io: follow coroutine AioContext in qio_channel_yield() · 06e0f098
      Stefan Hajnoczi authored
      
      The ongoing QEMU multi-queue block layer effort makes it possible for multiple
      threads to process I/O in parallel. The nbd block driver is not compatible with
      the multi-queue block layer yet because QIOChannel cannot be used easily from
      coroutines running in multiple threads. This series changes the QIOChannel API
      to make that possible.
      
      In the current API, calling qio_channel_attach_aio_context() sets the
      AioContext where qio_channel_yield() installs an fd handler prior to yielding:
      
        qio_channel_attach_aio_context(ioc, my_ctx);
        ...
        qio_channel_yield(ioc); // my_ctx is used here
        ...
        qio_channel_detach_aio_context(ioc);
      
This API design has limitations: reading and writing must be done in the same
AioContext, and moving between AioContexts involves a cumbersome sequence of
API calls that is impractical on a per-request basis.
      
      There is no fundamental reason why a QIOChannel needs to run within the
      same AioContext every time qio_channel_yield() is called. QIOChannel
      only uses the AioContext while inside qio_channel_yield(). The rest of
      the time, QIOChannel is independent of any AioContext.
      
      In the new API, qio_channel_yield() queries the AioContext from the current
      coroutine using qemu_coroutine_get_aio_context(). There is no need to
      explicitly attach/detach AioContexts anymore and
      qio_channel_attach_aio_context() and qio_channel_detach_aio_context() are gone.
      One coroutine can read from the QIOChannel while another coroutine writes from
      a different AioContext.
      
      This API change allows the nbd block driver to use QIOChannel from any thread.
      It's important to keep in mind that the block driver already synchronizes
      QIOChannel access and ensures that two coroutines never read simultaneously or
      write simultaneously.
      
      This patch updates all users of qio_channel_attach_aio_context() to the
      new API. Most conversions are simple, but vhost-user-server requires a
      new qemu_coroutine_yield() call to quiesce the vu_client_trip()
      coroutine when not attached to any AioContext.
      
While the API has become simpler, there is one wart: QIOChannel has a
special case for the iohandler AioContext (used for handlers that must not run
in nested event loops). I didn't find an elegant way to preserve that behavior,
so I added a new API called qio_channel_set_follow_coroutine_ctx(ioc, true|false)
for opting in to the new AioContext model. By default QIOChannel uses the
iohandler AioContext. Code that formerly called
qio_channel_attach_aio_context() now calls
qio_channel_set_follow_coroutine_ctx(ioc, true) once after the QIOChannel is
created.
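
For comparison, the new sequence is roughly:

  qio_channel_set_follow_coroutine_ctx(ioc, true); // once, after creation
  ...
  qio_channel_yield(ioc); // the current coroutine's AioContext is used here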
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: Daniel P. Berrangé <berrange@redhat.com>
      Message-ID: <20230830224802.493686-5-stefanha@redhat.com>
      [eblake: also fix migration/rdma.c]
Signed-off-by: Eric Blake <eblake@redhat.com>
    • io: check there are no qio_channel_yield() coroutines during ->finalize() · acd4be64
      Stefan Hajnoczi authored
      
      Callers must clean up their coroutines before calling
      object_unref(OBJECT(ioc)) to prevent an fd handler leak. Add an
      assertion to check this.
      
      This patch is preparation for the fd handler changes that follow.
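
A minimal sketch of the check, relying on the read_coroutine/write_coroutine
fields that qio_channel_yield() sets on the channel:

  static void qio_channel_finalize(Object *obj)
  {
      QIOChannel *ioc = QIO_CHANNEL(obj);

      /* there must be no coroutines inside qio_channel_yield() */
      assert(!ioc->read_coroutine);
      assert(!ioc->write_coroutine);
      ...
  }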
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230830224802.493686-4-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Aug 01, 2023
    • io: remove io watch if TLS channel is closed during handshake · 10be627d
      Daniel P. Berrangé authored
      
The TLS handshake may take some time to complete, during which time an
I/O watch might be registered with the main loop. If the owner of the
I/O channel invokes qio_channel_close() while the handshake is waiting
to continue, the I/O watch must be removed. Failing to remove it will
later trigger the completion callback, which the owner is not expecting
to receive. In the case of the VNC server, this results in a SEGV as
vnc_disconnect_start() tries to shut down a client connection that is
already gone / NULL.
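
The shape of the fix, sketched under the assumption that the handshake's
watch tag is recorded on the channel (hs_ioc_tag here):

  static int qio_channel_tls_close(QIOChannel *ioc, Error **errp)
  {
      QIOChannelTLS *tioc = QIO_CHANNEL_TLS(ioc);

      if (tioc->hs_ioc_tag) {
          /* cancel the pending handshake continuation */
          g_clear_handle_id(&tioc->hs_ioc_tag, g_source_remove);
      }

      return qio_channel_close(tioc->master, errp);
  }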
      
      CVE-2023-3354
Reported-by: jiangyegen <jiangyegen@huawei.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
May 30, 2023
    • aio: remove aio_disable_external() API · 60f782b6
      Stefan Hajnoczi authored

      All callers now pass is_external=false to aio_set_fd_handler() and
      aio_set_event_notifier(). The aio_disable_external() API that
temporarily disables fd handlers that were registered with is_external=true
      is therefore dead code.
      
      Remove aio_disable_external(), aio_enable_external(), and the
      is_external arguments to aio_set_fd_handler() and
      aio_set_event_notifier().
      
      The entire test-fdmon-epoll test is removed because its sole purpose was
      testing aio_disable_external().
      
Parts of this patch were generated using the following coccinelle
(https://coccinelle.lip6.fr/) semantic patch:
      
        @@
        expression ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque;
        @@
        - aio_set_fd_handler(ctx, fd, is_external, io_read, io_write, io_poll, io_poll_ready, opaque)
        + aio_set_fd_handler(ctx, fd, io_read, io_write, io_poll, io_poll_ready, opaque)
      
        @@
        expression ctx, notifier, is_external, io_read, io_poll, io_poll_ready;
        @@
        - aio_set_event_notifier(ctx, notifier, is_external, io_read, io_poll, io_poll_ready)
        + aio_set_event_notifier(ctx, notifier, io_read, io_poll, io_poll_ready)
      
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20230516190238.8401-21-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
May 19, 2023
    • nbd/server: Fix drained_poll to wake coroutine in right AioContext · 7c1f51bf
      Kevin Wolf authored
      
      nbd_drained_poll() generally runs in the main thread, not whatever
      iothread the NBD server coroutine is meant to run in, so it can't
      directly reenter the coroutines to wake them up.
      
The code seems to have the right intention: it specifies the correct
AioContext when it calls qemu_aio_coroutine_enter(). However, this
function doesn't schedule the coroutine to run in that AioContext; it
assumes it is already being called in the home thread of the AioContext.
      
      To fix this, add a new thread-safe qio_channel_wake_read() that can be
      called in the main thread to wake up the coroutine in its AioContext,
      and use this in nbd_drained_poll().
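
A sketch of the call site (client->ioc stands in for the NBD client's
channel):

  static bool nbd_drained_poll(void *opaque)
  {
      ...
      /* thread-safe: schedules the coroutine in its own AioContext
       * instead of reentering it from the main thread */
      qio_channel_wake_read(client->ioc);
      ...
  }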
      
      Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20230517152834.277483-3-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Apr 20, 2023
    • io: mark mixed functions that can suspend · 1dd91b22
      Paolo Bonzini authored
      
There should be no paths from a coroutine_fn to aio_poll; however, in
practice a coroutine_mixed_fn will call aio_poll in the !qemu_in_coroutine()
path. By marking mixed functions, we can accurately track the call paths
that execute entirely in coroutine context, and find more missing
coroutine_fn markers. This results in more accurate checks that
coroutine code does not end up blocking.
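
The typical mixed pattern is the one in the QIOChannel read path, roughly:

  if (ret == QIO_CHANNEL_ERR_BLOCK) {
      if (qemu_in_coroutine()) {
          qio_channel_yield(ioc, G_IO_IN); /* pure coroutine_fn path */
      } else {
          qio_channel_wait(ioc, G_IO_IN);  /* may run a nested event loop */
      }
  }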
      
      If the marking were extended transitively to all functions that call
      these ones, static analysis could be done much more efficiently.
      However, this is a start and makes it possible to use vrc's path-based
      searches to find potential bugs where coroutine_fns call blocking functions.
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Mar 14, 2023
    • io/channel-tls: plug memory leakage on GSource · c3a2c84a
      Matheus Tavares Bernardino authored
      
      This leakage can be seen through test-io-channel-tls:
      
      $ ../configure --target-list=aarch64-softmmu --enable-sanitizers
      $ make ./tests/unit/test-io-channel-tls
      $ ./tests/unit/test-io-channel-tls
      
      Indirect leak of 104 byte(s) in 1 object(s) allocated from:
          #0 0x7f81d1725808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144
          #1 0x7f81d135ae98 in g_malloc (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x57e98)
          #2 0x55616c5d4c1b in object_new_with_propv ../qom/object.c:795
          #3 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768
          #4 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70
          #5 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158
          #6 0x7f81d137d58d  (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d)
      
      Indirect leak of 32 byte(s) in 1 object(s) allocated from:
          #0 0x7f81d1725a06 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:153
          #1 0x7f81d1472a20 in gnutls_dh_params_init (/lib/x86_64-linux-gnu/libgnutls.so.30+0x46a20)
          #2 0x55616c6485ff in qcrypto_tls_creds_x509_load ../crypto/tlscredsx509.c:634
          #3 0x55616c648ba2 in qcrypto_tls_creds_x509_complete ../crypto/tlscredsx509.c:694
          #4 0x55616c5e1fea in user_creatable_complete ../qom/object_interfaces.c:28
          #5 0x55616c5d4c8c in object_new_with_propv ../qom/object.c:807
          #6 0x55616c5d4a83 in object_new_with_props ../qom/object.c:768
          #7 0x55616c5c5415 in test_tls_creds_create ../tests/unit/test-io-channel-tls.c:70
          #8 0x55616c5c5a6b in test_io_channel_tls ../tests/unit/test-io-channel-tls.c:158
          #9 0x7f81d137d58d  (/lib/x86_64-linux-gnu/libglib-2.0.so.0+0x7a58d)
      
      ...
      
      SUMMARY: AddressSanitizer: 49143 byte(s) leaked in 184 allocation(s).
      
The docs for `g_source_add_child_source(source, child_source)` say that
"source will hold a reference on child_source while child_source is
attached to it." Therefore, we should unreference the child source at
`qio_channel_tls_read_watch()` after attaching it to `source`. With this
change, ./tests/unit/test-io-channel-tls shows no leakages.
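
The fix, sketched:

  GSource *child = qio_channel_create_watch(tioc->master, condition);
  g_source_add_child_source(source, child); /* source takes its own ref */
  g_source_unref(child);                    /* drop ours to plug the leak */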
      
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Oct 26, 2022
    • io/channel-watch: Fix socket watch on Windows · 23f77f05
      Bin Meng authored
      
A random failure was observed when running qtests on Windows due to a
"Broken pipe" detected by qmp_fd_receive(). What happened is that the
qtest executable sends testing data over a socket to the QEMU under
test but no response is received. The errno of the recv() call from
the qtest executable indicates ETIMEDOUT, because the qmp chardev's
tcp_chr_read() is never called to receive the testing data, hence no
response is sent to the other side.
      
tcp_chr_read() is registered as the callback of the socket watch
GSource. The reason the callback is not called by glib is that the
source check fails to indicate the source is ready. There are two
socket watch sources created to monitor the same socket event object
from the char-socket backend in update_ioc_handlers(). During the
source check phase, qio_channel_socket_source_check() calls
WSAEnumNetworkEvents() to discover occurrences of network events for
the indicated socket, clear internal network event records, and reset
the event object. Testing shows that if we don't reset the event
object (by not passing the event handle to WSAEnumNetworkEvents()),
the symptom goes away and qtest runs very stably.
      
      It seems we don't need to call WSAEnumNetworkEvents() at all, as we
      don't parse the result of WSANETWORKEVENTS returned from this API.
      We use select() to poll the socket status. Fix this instability by
      dropping the WSAEnumNetworkEvents() call.
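
After the fix, the source check is roughly of this shape (details elided;
names per io/channel-watch.c):

  static gboolean
  qio_channel_socket_source_check(GSource *source)
  {
      QIOChannelSocketSource *ssource = (QIOChannelSocketSource *)source;
      fd_set rfds, wfds, xfds;
      struct timeval tv = { 0, 0 };

      FD_ZERO(&rfds);
      FD_ZERO(&wfds);
      FD_ZERO(&xfds);
      ...
      /* no WSAEnumNetworkEvents(): poll the status via select() with a
       * zero timeout, leaving the event object untouched */
      select(0, &rfds, &wfds, &xfds, &tv);
      ...
  }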
      
      Some side notes:
      
During the testing, I removed the following code in update_ioc_handlers():
      
        remove_hup_source(s);
        s->hup_source = qio_channel_create_watch(s->ioc, G_IO_HUP);
        g_source_set_callback(s->hup_source, (GSourceFunc)tcp_chr_hup,
                              chr, NULL);
        g_source_attach(s->hup_source, chr->gcontext);
      
and this change also makes the symptom go away.
      
And if I moved the above code to the beginning, before the call to
io_add_watch_poll(), the symptom also goes away.
      
It seems that two sources watching the same socket event object are
the key to this instability. The order in which the source watches are
added also seems to play a role, but I can't explain why.
      Hopefully a Windows and glib expert could explain this behavior.
      
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    • io/channel-watch: Drop the unnecessary cast · 6c822a03
      Bin Meng authored
      
      There is no need to do a type cast on ssource->socket as it is
      already declared as a SOCKET.
      
Suggested-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
    • io/channel-watch: Drop a superfluous '#ifdef WIN32' · 985be62d
      Bin Meng authored
      
In the win32 version of the qio_channel_create_socket_watch() body
there is no need for a '#ifdef WIN32'.
      
Signed-off-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Oct 12, 2022
    • io/command: implement support for win32 · ec5b6c9c
      Marc-André Lureau authored
      
The initial implementation changed the pipe state created by GLib to
PIPE_NOWAIT, but it turns out that doesn't work (read/write returns an
error). Since reading may return less than the requested amount, the
pipe seems to be non-blocking already. However, the IO operation may
block until the FD is ready; I can't find good sources of information,
so to be safe we can just poll for readiness beforehand.
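
One way to poll a pipe for readiness on Windows is PeekNamedPipe(); a
sketch of the idea, not necessarily the exact mechanism used here (fd,
buf, and len are placeholders):

  DWORD avail = 0;
  HANDLE h = (HANDLE)_get_osfhandle(fd);

  /* PeekNamedPipe() never blocks; wait until data is available */
  while (PeekNamedPipe(h, NULL, 0, NULL, &avail, NULL) && avail == 0) {
      Sleep(1);
  }
  ret = read(fd, buf, len);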
      
Alternatively, we could set up the FDs ourselves, and use UNIX sockets
on Windows, which can be used in blocking/non-blocking mode. I haven't
tried it, as I am not sure it is necessary.
      
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Message-Id: <20221006113657.2656108-6-marcandre.lureau@redhat.com>
    • io/command: use glib GSpawn, instead of open-coding fork/exec · a95570e3
      Marc-André Lureau authored
      
Simplify qio_channel_command_new_spawn() with the GSpawn API. This will
allow building for WIN32 in the following patches.
      
      As pointed out by Daniel Berrangé: there is a change in semantics here
      too. The current code only touches stdin/stdout/stderr. Any other FDs
      which do NOT have O_CLOEXEC set will be inherited. With the new code,
      all FDs except stdin/out/err will be explicitly closed, because we don't
      set the flag G_SPAWN_LEAVE_DESCRIPTORS_OPEN. The only place we use
      QIOChannelCommand today is the migration exec: protocol, and that is
      only declared to use stdin/stdout.
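
The call involved is roughly of this shape (error propagation elided):

  GPid pid;
  gint stdin_fd = -1, stdout_fd = -1;
  GError *err = NULL;

  /* without G_SPAWN_LEAVE_DESCRIPTORS_OPEN, all FDs other than
   * stdin/stdout/stderr are closed in the child */
  if (!g_spawn_async_with_pipes(NULL, (char **)argv, NULL,
                                G_SPAWN_DO_NOT_REAP_CHILD,
                                NULL, NULL, &pid,
                                &stdin_fd, &stdout_fd, NULL, &err)) {
      /* propagate err */
  }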
      
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Aug 05, 2022
    • QIOChannelSocket: Add support for MSG_ZEROCOPY + IPV6 · 5258a7e2
      Leonardo Bras authored
      
      For using MSG_ZEROCOPY, there are two steps:
      1 - io_writev() the packet, which enqueues the packet for sending, and
2 - io_flush(), which gets confirmation that all packets got correctly sent.
      
Currently, if MSG_ZEROCOPY is used to send packets over IPV6, no error will
be reported in (1), but it will fail the first time (2) happens.

This happens because (2) currently checks only for the cmsg_level &
cmsg_type associated with IPV4 before reporting any error.

Add checks for the cmsg_level & cmsg_type associated with IPV6, and thus
enable support for MSG_ZEROCOPY + IPV6.
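
The check in (2) then accepts completion notifications from either
address family, along these lines:

  struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);

  if ((cm->cmsg_level == SOL_IP && cm->cmsg_type == IP_RECVERR) ||
      (cm->cmsg_level == SOL_IPV6 && cm->cmsg_type == IPV6_RECVERR)) {
      /* zero-copy completion: parse the attached sock_extended_err */
  }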
      
      Fixes: 2bc58ffc ("QIOChannelSocket: Implement io_writev zero copy flag & io_flush for CONFIG_LINUX")
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
May 16, 2022
    • QIOChannelSocket: Implement io_writev zero copy flag & io_flush for CONFIG_LINUX · 2bc58ffc
      Leonardo Bras authored
      
For CONFIG_LINUX, implement the new zero copy flag and the optional callback
io_flush on QIOChannelSocket, but enable it only when the MSG_ZEROCOPY
feature is available in the host kernel, which is checked in
qio_channel_socket_connect_sync().
      
qio_channel_socket_flush() was implemented by counting how many times
sendmsg(...,MSG_ZEROCOPY) was successfully called, and then reading the
socket's error queue in order to find how many of them finished sending.
Flush will loop until those counters are equal, or until some error occurs.
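
A sketch of that loop, assuming a pair of counters on the socket channel
(zero_copy_queued/zero_copy_sent here):

  while (sioc->zero_copy_sent < sioc->zero_copy_queued) {
      /* completion notifications arrive on the error queue */
      ret = recvmsg(fd, &msg, MSG_ERRQUEUE);
      if (ret < 0) {
          break; /* report the error */
      }
      /* parse SO_EE_ORIGIN_ZEROCOPY notifications and bump
       * zero_copy_sent accordingly */
  }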
      
      Notes on using writev() with QIO_CHANNEL_WRITE_FLAG_ZERO_COPY:
      1: Buffer
      - As MSG_ZEROCOPY tells the kernel to use the same user buffer to avoid copying,
      some caution is necessary to avoid overwriting any buffer before it's sent.
If something like this happens, a newer version of the buffer may be sent instead.
      - If this is a problem, it's recommended to call qio_channel_flush() before freeing
      or re-using the buffer.
      
      2: Locked memory
- When using MSG_ZEROCOPY, the buffer memory is locked after it's queued, and
unlocked after it's sent.
- Depending on the size of each buffer, and how often it's sent, it may require
a larger amount of locked memory than is usually available to a non-root user.
- If the required amount of locked memory is not available, writev_zero_copy
will return an error, which can abort an operation like migration.
- Because of this, when user code wants to add zero copy as a feature, it
requires a mechanism to disable it, so it can still be accessible to less
privileged users.
      
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
      Message-Id: <20220513062836.965425-4-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    • QIOChannel: Add flags on io_writev and introduce io_flush callback · b88651cb
      Leonardo Bras authored
      
Add flags to io_writev and introduce io_flush as an optional callback in
QIOChannelClass, allowing subclasses to implement zero copy writes.
      
How to use them:
- Write data using qio_channel_writev*(...,QIO_CHANNEL_WRITE_FLAG_ZERO_COPY),
- Wait for write completion with qio_channel_flush(), as sketched below.
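
Roughly:

  /* enqueue the write; the kernel may still read iov after this returns */
  qio_channel_writev_full_all(ioc, iov, niov, NULL, 0,
                              QIO_CHANNEL_WRITE_FLAG_ZERO_COPY, errp);
  ...
  /* only after a successful flush is it safe to free or reuse iov */
  qio_channel_flush(ioc, errp);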
      
      Notes:
As some zero copy write implementations work asynchronously, it's
recommended to keep the write buffer untouched until qio_channel_flush()
returns, to avoid the risk of sending an updated buffer instead of the
buffer's state at the time of the write.

As the io_flush callback is optional, if a subclass does not implement it,
io_flush will simply return 0 without changing anything.
      
Also, some functions like qio_channel_writev_full_all() were adapted to
receive a flags parameter. That allows code to be shared between the zero
copy and non-zero copy writev paths, and makes it easier to implement new
flags.
      
Signed-off-by: Leonardo Bras <leobras@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
      Message-Id: <20220513062836.965425-3-leobras@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Jan 12, 2022
    • aio-posix: split poll check from ready handler · 826cc324
      Stefan Hajnoczi authored
      
      Adaptive polling measures the execution time of the polling check plus
      handlers called when a polled event becomes ready. Handlers can take a
      significant amount of time, making it look like polling was running for
      a long time when in fact the event handler was running for a long time.
      
      For example, on Linux the io_submit(2) syscall invoked when a virtio-blk
      device's virtqueue becomes ready can take 10s of microseconds. This
      can exceed the default polling interval (32 microseconds) and cause
      adaptive polling to stop polling.
      
      By excluding the handler's execution time from the polling check we make
      the adaptive polling calculation more accurate. As a result, the event
      loop now stays in polling mode where previously it would have fallen
      back to file descriptor monitoring.
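
Sketched in terms of the new callback pair (handler names are
illustrative):

  /* io_poll: cheap check only; its execution time counts as polling */
  static bool virtqueue_poll_cb(void *opaque) { ... }

  /* io_poll_ready: does the real work (e.g. io_submit()) and is
   * excluded from the polling-time measurement */
  static void virtqueue_poll_ready_cb(void *opaque) { ... }

  aio_set_event_notifier(ctx, notifier, is_external,
                         io_read_cb, virtqueue_poll_cb,
                         virtqueue_poll_ready_cb);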
      
      The following data was collected with virtio-blk num-queues=2
      event_idx=off using an IOThread. Before:
      
      168k IOPS, IOThread syscalls:
      
        9837.115 ( 0.020 ms): IO iothread1/620155 io_submit(ctx_id: 140512552468480, nr: 16, iocbpp: 0x7fcb9f937db0)    = 16
        9837.158 ( 0.002 ms): IO iothread1/620155 write(fd: 103, buf: 0x556a2ef71b88, count: 8)                         = 8
        9837.161 ( 0.001 ms): IO iothread1/620155 write(fd: 104, buf: 0x556a2ef71b88, count: 8)                         = 8
        9837.163 ( 0.001 ms): IO iothread1/620155 ppoll(ufds: 0x7fcb90002800, nfds: 4, tsp: 0x7fcb9f1342d0, sigsetsize: 8) = 3
        9837.164 ( 0.001 ms): IO iothread1/620155 read(fd: 107, buf: 0x7fcb9f939cc0, count: 512)                        = 8
        9837.174 ( 0.001 ms): IO iothread1/620155 read(fd: 105, buf: 0x7fcb9f939cc0, count: 512)                        = 8
        9837.176 ( 0.001 ms): IO iothread1/620155 read(fd: 106, buf: 0x7fcb9f939cc0, count: 512)                        = 8
        9837.209 ( 0.035 ms): IO iothread1/620155 io_submit(ctx_id: 140512552468480, nr: 32, iocbpp: 0x7fca7d0cebe0)    = 32
      
After:

174k IOPS (+3.6%), IOThread syscalls:
      
        9809.566 ( 0.036 ms): IO iothread1/623061 io_submit(ctx_id: 140539805028352, nr: 32, iocbpp: 0x7fd0cdd62be0)    = 32
        9809.625 ( 0.001 ms): IO iothread1/623061 write(fd: 103, buf: 0x5647cfba5f58, count: 8)                         = 8
        9809.627 ( 0.002 ms): IO iothread1/623061 write(fd: 104, buf: 0x5647cfba5f58, count: 8)                         = 8
        9809.663 ( 0.036 ms): IO iothread1/623061 io_submit(ctx_id: 140539805028352, nr: 32, iocbpp: 0x7fd0d0388b50)    = 32
      
      Notice that ppoll(2) and eventfd read(2) syscalls are eliminated because
      the IOThread stays in polling mode instead of falling back to file
      descriptor monitoring.
      
As usual, polling is not implemented on Windows, so this patch ignores
the new io_poll_ready() callback in aio-win32.c.
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Message-id: 20211207132336.36627-2-stefanha@redhat.com
      
      [Fixed up aio_set_event_notifier() calls in
      tests/unit/test-fdmon-epoll.c added after this series was queued.
      --Stefan]
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>