  1. Sep 29, 2022
  2. Sep 26, 2022
  3. Sep 02, 2022
  4. Aug 26, 2022
  5. Aug 12, 2022
• cutils: Add missing dyld(3) include on macOS · 4311682e
      Philippe Mathieu-Daudé authored
      
      Commit 06680b15 moved qemu_*_exec_dir() to cutils but forgot
      to move the macOS dyld(3) include, resulting in the following
      error (when building with Homebrew GCC on macOS Monterey 12.4):
      
        [313/1197] Compiling C object libqemuutil.a.p/util_cutils.c.o
        FAILED: libqemuutil.a.p/util_cutils.c.o
        ../../util/cutils.c:1039:13: error: implicit declaration of function '_NSGetExecutablePath' [-Werror=implicit-function-declaration]
         1039 |         if (_NSGetExecutablePath(fpath, &len) == 0) {
              |             ^~~~~~~~~~~~~~~~~~~~
        ../../util/cutils.c:1039:13: error: nested extern declaration of '_NSGetExecutablePath' [-Werror=nested-externs]
      
      Fix by moving the include line to cutils.
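
For reference, a minimal standalone sketch of the call whose declaration the dyld(3) header provides (illustrative names; the real code lives in util/cutils.c):

  #include <limits.h>        /* PATH_MAX */
  #include <stdint.h>
  #include <stdio.h>
  #ifdef __APPLE__
  #include <mach-o/dyld.h>   /* declares _NSGetExecutablePath() */
  #endif

  int main(void)
  {
  #ifdef __APPLE__
      char fpath[PATH_MAX];
      uint32_t len = sizeof(fpath);

      /* Returns 0 on success, -1 if the buffer is too small for the path. */
      if (_NSGetExecutablePath(fpath, &len) == 0) {
          printf("executable: %s\n", fpath);
      }
  #endif
      return 0;
  }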
      
      Fixes: 06680b15 ("include: move qemu_*_exec_dir() to cutils")
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20220809222046.30812-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  6. Aug 05, 2022
  7. Jul 19, 2022
  8. Jul 18, 2022
  9. Jul 13, 2022
• module: Use bundle mechanism · 98753e9a
      Akihiko Odaki authored
      
Before this change, the directory of the executable was added to the module search path so that modules could be resolved in the build tree. However, get_relocated_path() can now resolve them through the new bundle mechanism.
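
As a rough, hypothetical illustration (get_relocated_path() is the existing helper; the constant name is only indicative of the configured module directory):

  /* Resolve the configured module directory through the bundle mechanism,
   * which also covers the build tree, instead of adding the executable's
   * directory to the search path. */
  char *moddir = get_relocated_path(CONFIG_QEMU_MODDIR);
  /* search moddir for the requested module, then: */
  g_free(moddir);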
      
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Message-Id: <20220624145039.49929-5-akihiko.odaki@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
• cutils: Introduce bundle mechanism · cf60ccc3
      Akihiko Odaki authored
      
Developers often run QEMU without installing it. The bundle mechanism allows files that should be present in an installation to be looked up even in such a situation.
      
It is a general mechanism and can find any file in the installation tree. The build tree gains a new directory, qemu-bundle, which mirrors the files the installation tree would contain, for reference by the executables.
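
Conceptually, the lookup works like the following sketch (hypothetical helper and parameter names; QEMU's actual implementation lives in util/cutils.c):

  #include <glib.h>

  /* Look under <exec_dir>/qemu-bundle first (the build-tree mirror of the
   * installation), then fall back to the configured install prefix. */
  static char *find_bundled_file(const char *exec_dir,
                                 const char *install_prefix,
                                 const char *rel_path)
  {
      char *candidate = g_build_filename(exec_dir, "qemu-bundle",
                                         rel_path, NULL);
      if (g_file_test(candidate, G_FILE_TEST_EXISTS)) {
          return candidate;   /* running from the build tree */
      }
      g_free(candidate);
      return g_build_filename(install_prefix, rel_path, NULL);
  }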
      
Note that this abandons compatibility with Windows versions older than 8. Extended support for the prior version, Windows 7, ended more than two years ago, and it is unlikely that anyone wants to run the latest QEMU on such an old system.
      
Signed-off-by: Akihiko Odaki <akihiko.odaki@gmail.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220624145039.49929-3-akihiko.odaki@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  10. Jun 29, 2022
  11. Jun 28, 2022
  12. Jun 24, 2022
  13. Jun 21, 2022
  14. Jun 20, 2022
  15. Jun 14, 2022
  16. Jun 06, 2022
  17. May 28, 2022
  18. May 25, 2022
  19. May 12, 2022
• coroutine-lock: qemu_co_queue_restart_all is a coroutine-only qemu_co_enter_all · f0d43b1e
      Paolo Bonzini authored
      
      qemu_co_queue_restart_all is basically the same as qemu_co_enter_all
      but without a QemuLockable argument.  That's perfectly fine, but only as
      long as the function is marked coroutine_fn.  If used outside coroutine
      context, qemu_co_queue_wait will attempt to take the lock and that
      is just broken: if you are calling qemu_co_queue_restart_all outside
      coroutine context, the lock is going to be a QemuMutex which cannot be
      taken twice by the same thread.
      
      The patch adds the marker to qemu_co_queue_restart_all and to its sole
      non-coroutine_fn caller; it then reimplements the function in terms of
      qemu_co_enter_all_impl, to remove duplicated code and to clarify that the
      latter also works in coroutine context.
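
A sketch of that reimplementation (assuming qemu_co_enter_all_impl() takes the queue and a QemuLockable, as introduced earlier in this series):

  /* Coroutine-only variant: there is no lock to drop, so pass NULL. */
  void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue)
  {
      qemu_co_enter_all_impl(queue, NULL);
  }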
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-4-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
• coroutine-lock: introduce qemu_co_queue_enter_all · d6ee15ad
      Paolo Bonzini authored
      
      Because qemu_co_queue_restart_all does not release the lock, it should
      be used only in coroutine context.  Introduce a new function that,
      like qemu_co_enter_next, does release the lock, and use it whenever
      qemu_co_queue_restart_all was used outside coroutine context.
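
A hedged usage sketch (hypothetical variable names; the helper is assumed to follow qemu_co_enter_next and to be handed the lock guarding the queue, which it releases while entering waiters):

  /* coroutine context: the caller keeps holding the lock */
  qemu_co_queue_restart_all(&queue);

  /* non-coroutine context: use the new helper (qemu_co_enter_all in the
   * naming used elsewhere in this series) */
  qemu_mutex_lock(&mutex);
  qemu_co_enter_all(&queue, &mutex);   /* drops and re-takes the lock */
  qemu_mutex_unlock(&mutex);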
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-3-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
• coroutine-lock: qemu_co_queue_next is a coroutine-only qemu_co_enter_next · 248af9e8
      Paolo Bonzini authored
      
      qemu_co_queue_next is basically the same as qemu_co_enter_next but
      without a QemuLockable argument.  That's perfectly fine, but only
      as long as the function is marked coroutine_fn.  If used outside
      coroutine context, qemu_co_queue_wait will attempt to take the lock
      and that is just broken: if you are calling qemu_co_queue_next outside
      coroutine context, the lock is going to be a QemuMutex which cannot be
      taken twice by the same thread.
      
      The patch adds the marker and reimplements qemu_co_queue_next in terms of
      qemu_co_enter_next_impl, to remove duplicated code and to clarify that the
      latter also works in coroutine context.
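
A sketch of that reimplementation (assuming the qemu_co_enter_next_impl(CoQueue *, QemuLockable *) signature):

  /* Coroutine-only variant: no lock to drop, so pass NULL; returns whether
   * a queued coroutine was woken up. */
  bool coroutine_fn qemu_co_queue_next(CoQueue *queue)
  {
      return qemu_co_enter_next_impl(queue, NULL);
  }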
      
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220427130830.150180-2-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
• coroutine: Revert to constant batch size · 9ec7a59b
  Kevin Wolf authored

      Commit 4c41c69e changed the way the coroutine pool is sized because for
      virtio-blk devices with a large queue size and heavy I/O, it was just
      too small and caused coroutines to be deleted and reallocated soon
      afterwards. The change made the size dynamic based on the number of
      queues and the queue size of virtio-blk devices.
      
      There are two important numbers here: Slightly simplified, when a
      coroutine terminates, it is generally stored in the global release pool
      up to a certain pool size, and if the pool is full, it is freed.
Conversely, when allocating a new coroutine, the coroutines in the release pool are reused if the pool has already reached a certain minimum size (the batch size); otherwise we allocate new coroutines.
      
      The problem after commit 4c41c69e is that it not only increases the
      maximum pool size (which is the intended effect), but also the batch
      size for reusing coroutines (which is a bug). It means that in cases
      with many devices and/or a large queue size (which defaults to the
      number of vcpus for virtio-blk-pci), many thousand coroutines could be
      sitting in the release pool without being reused.
      
This is not only a waste of memory and allocations; it also makes the QEMU process likely to hit the vm.max_map_count limit on Linux, because each coroutine requires two mappings (its stack and the guard page for the stack). Once the limit is hit, mprotect() starts to fail with ENOMEM, causing QEMU to abort() in qemu_alloc_stack().
      
      In order to fix the problem, change the batch size back to 64 to avoid
      uselessly accumulating coroutines in the release pool, but keep the
      dynamic maximum pool size so that coroutines aren't freed too early
      in heavy I/O scenarios.
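
In simplified pseudocode, the two limits end up working roughly like this (helper names are made up; the real logic is in util/qemu-coroutine.c):

  /* On coroutine termination: keep it for reuse unless the release pool
   * already holds the dynamic maximum. */
  if (release_pool_size < pool_max_size) {     /* dynamic, per virtio-blk queues */
      push_to_release_pool(co);
  } else {
      free_coroutine(co);
  }

  /* On qemu_coroutine_create(): only reuse from the release pool once it
   * holds at least a constant batch again. */
  if (release_pool_size > POOL_BATCH_SIZE) {   /* back to a constant 64 */
      co = pop_from_release_pool();
  } else {
      co = allocate_new_coroutine();
  }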
      
      Note that this fix doesn't strictly make it impossible to hit the limit,
      but this would only happen if most of the coroutines are actually in use
      at the same time, not just sitting in a pool. This is the same behaviour
      as we already had before commit 4c41c69e. Fully preventing this would
      require allowing qemu_coroutine_create() to return an error, but it
      doesn't seem to be a scenario that people hit in practice.
      
      Cc: qemu-stable@nongnu.org
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2079938
Fixes: 4c41c69e
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220510151020.105528-3-kwolf@redhat.com>
Tested-by: Hiroki Narukawa <hnarukaw@yahoo-corp.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
• coroutine: Rename qemu_coroutine_inc/dec_pool_size() · 98e3ab35
      Kevin Wolf authored
      
      It's true that these functions currently affect the batch size in which
      coroutines are reused (i.e. moved from the global release pool to the
      allocation pool of a specific thread), but this is a bug and will be
      fixed in a separate patch.
      
In fact, the comment in the header file already only promises that they influence the pool size, so reflect this in the function names. As a nice side effect, the shorter names make some line wrapping unnecessary.
      
      Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220510151020.105528-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  20. May 09, 2022
  21. May 04, 2022
• coroutine-win32: use QEMU_DEFINE_STATIC_CO_TLS() · c1fe6943
      Stefan Hajnoczi authored
      
      Thread-Local Storage variables cannot be used directly from coroutine
      code because the compiler may optimize TLS variable accesses across
      qemu_coroutine_yield() calls. When the coroutine is re-entered from
      another thread the TLS variables from the old thread must no longer be
      used.
      
      Use QEMU_DEFINE_STATIC_CO_TLS() for the current and leader variables.
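
The conversion itself is mechanical; a hedged sketch (the accessors follow the get_/set_ naming the macro generates):

  /* before: plain TLS, whose address the compiler may cache across a
   * qemu_coroutine_yield() that resumes the coroutine on another thread */
  static __thread Coroutine *current;

  /* after: declare through the macro and access only via the generated
   * helpers, which stay correct across qemu_coroutine_yield() */
  QEMU_DEFINE_STATIC_CO_TLS(Coroutine *, current)

  set_current(co);
  Coroutine *self = get_current();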
      
      I think coroutine-win32.c could get away with __thread because the
      variables are only used in situations where either the stale value is
      correct (current) or outside coroutine context (loading leader when
      current is NULL). Due to the difficulty of being sure that this is
      really safe in all scenarios it seems worth converting it anyway.
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220307153853.602859-4-stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
• coroutine: use QEMU_DEFINE_STATIC_CO_TLS() · ac387a08
      Stefan Hajnoczi authored
      
      Thread-Local Storage variables cannot be used directly from coroutine
      code because the compiler may optimize TLS variable accesses across
      qemu_coroutine_yield() calls. When the coroutine is re-entered from
      another thread the TLS variables from the old thread must no longer be
      used.
      
      Use QEMU_DEFINE_STATIC_CO_TLS() for the current and leader variables.
      The alloc_pool QSLIST needs a typedef so the return value of
      get_ptr_alloc_pool() can be stored in a local variable.
      
      One example of why this code is necessary: a coroutine that yields
      before calling qemu_coroutine_create() to create another coroutine is
      affected by the TLS issue.
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220307153853.602859-3-stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
• coroutine-ucontext: use QEMU_DEFINE_STATIC_CO_TLS() · 34145a30
      Stefan Hajnoczi authored
      
      Thread-Local Storage variables cannot be used directly from coroutine
      code because the compiler may optimize TLS variable accesses across
      qemu_coroutine_yield() calls. When the coroutine is re-entered from
      another thread the TLS variables from the old thread must no longer be
      used.
      
      Use QEMU_DEFINE_STATIC_CO_TLS() for the current and leader variables.
      
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220307153853.602859-2-stefanha@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  22. May 03, 2022