  9. Feb 11, 2023
    • util/userfaultfd: Support /dev/userfaultfd · c40c0463
      Peter Xu authored
      
      Teach QEMU to use /dev/userfaultfd when it exists, and fall back to the
      system call when the device is either absent or not accessible.
      
      Firstly, as long as the app has permission to access /dev/userfaultfd,
      it always has the ability to trap kernel faults, which is what QEMU
      mostly wants.  Meanwhile, in some contexts (e.g. containers) the
      userfaultfd syscall can be forbidden, so the device can be the main way
      to use postcopy in a restricted environment with a strict seccomp
      setup.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
      c40c0463
  13. Feb 02, 2023
    • util/qht: use striped locks under TSAN · 68f7b2be
      Emilio Cota authored
      
      Fixes this TSan crash, which is easy to reproduce with any sufficiently
      large program:
      
      $ tests/unit/test-qht
      1..2
      ThreadSanitizer: CHECK failed: sanitizer_deadlock_detector.h:67 "((n_all_locks_)) < (((sizeof(all_locks_with_contexts_)/sizeof((all_locks_with_contexts_)[0]))))" (0x40, 0x40) (tid=1821568)
          #0 __tsan::CheckUnwind() ../../../../src/libsanitizer/tsan/tsan_rtl.cpp:353 (libtsan.so.2+0x90034)
          #1 __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) ../../../../src/libsanitizer/sanitizer_common/sanitizer_termination.cpp:86 (libtsan.so.2+0xca555)
          #2 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:67 (libtsan.so.2+0xb3616)
          #3 __sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::addLock(unsigned long, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:59 (libtsan.so.2+0xb3616)
          #4 __sanitizer::DeadlockDetector<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >::onLockAfter(__sanitizer::DeadlockDetectorTLS<__sanitizer::TwoLevelBitVector<1ul, __sanitizer::BasicBitVector<unsigned long> > >*, unsigned long, unsigned int) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector.h:216 (libtsan.so.2+0xb3616)
          #5 __sanitizer::DD::MutexAfterLock(__sanitizer::DDCallback*, __sanitizer::DDMutex*, bool, bool) ../../../../src/libsanitizer/sanitizer_common/sanitizer_deadlock_detector1.cpp:169 (libtsan.so.2+0xb3616)
          #6 __tsan::MutexPostLock(__tsan::ThreadState*, unsigned long, unsigned long, unsigned int, int) ../../../../src/libsanitizer/tsan/tsan_rtl_mutex.cpp:200 (libtsan.so.2+0xa3382)
          #7 __tsan_mutex_post_lock ../../../../src/libsanitizer/tsan/tsan_interface_ann.cpp:384 (libtsan.so.2+0x76bc3)
          #8 qemu_spin_lock /home/cota/src/qemu/include/qemu/thread.h:259 (test-qht+0x44a97)
          #9 qht_map_lock_buckets ../util/qht.c:253 (test-qht+0x44a97)
          #10 do_qht_iter ../util/qht.c:809 (test-qht+0x45f33)
          #11 qht_iter ../util/qht.c:821 (test-qht+0x45f33)
          #12 iter_check ../tests/unit/test-qht.c:121 (test-qht+0xe473)
          #13 qht_do_test ../tests/unit/test-qht.c:202 (test-qht+0xe473)
          #14 qht_test ../tests/unit/test-qht.c:240 (test-qht+0xe7c1)
          #15 test_default ../tests/unit/test-qht.c:246 (test-qht+0xe828)
          #16 <null> <null> (libglib-2.0.so.0+0x7daed)
          #17 <null> <null> (libglib-2.0.so.0+0x7d80a)
          #18 <null> <null> (libglib-2.0.so.0+0x7d80a)
          #19 g_test_run_suite <null> (libglib-2.0.so.0+0x7dfe9)
          #20 g_test_run <null> (libglib-2.0.so.0+0x7e055)
          #21 main ../tests/unit/test-qht.c:259 (test-qht+0xd2c6)
          #22 __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 (libc.so.6+0x29d8f)
          #23 __libc_start_main_impl ../csu/libc-start.c:392 (libc.so.6+0x29e3f)
          #24 _start <null> (test-qht+0xdb44)
      
      Signed-off-by: Emilio Cota <cota@braap.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-Id: <20230111151628.320011-5-cota@braap.org>
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Message-Id: <20230124180127.1881110-30-alex.bennee@linaro.org>
      68f7b2be
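TSan's deadlock detector aborts once one thread holds 64 locks (the `0x40` in the CHECK above), and qht's "lock all buckets" path takes one spinlock per bucket, so any sufficiently large table overflows the limit. Lock striping bounds the count: a hedged sketch of the idea, where the names, the pool size, and the use of pthread mutexes are all illustrative rather than QEMU's actual code:

```c
/*
 * Illustrative sketch of lock striping (names and sizes are not
 * QEMU's): many buckets share a small fixed pool of locks, so
 * "lock everything" holds at most QHT_TSAN_BUCKET_LOCKS locks,
 * safely under TSan's 64-held-locks limit.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define QHT_TSAN_BUCKET_LOCKS 16

static pthread_mutex_t stripe[QHT_TSAN_BUCKET_LOCKS] = {
    PTHREAD_MUTEX_INITIALIZER,  /* remaining slots are zero-initialized */
};

static size_t stripe_index(const void *bucket)
{
    /* Shift past alignment bits so adjacent buckets spread over stripes. */
    return ((uintptr_t)bucket >> 4) % QHT_TSAN_BUCKET_LOCKS;
}

static void bucket_lock(const void *bucket)
{
    pthread_mutex_lock(&stripe[stripe_index(bucket)]);
}

static void bucket_unlock(const void *bucket)
{
    pthread_mutex_unlock(&stripe[stripe_index(bucket)]);
}

static void lock_all_buckets(void)
{
    /* Fixed index order gives a global lock order, so no deadlock. */
    for (size_t i = 0; i < QHT_TSAN_BUCKET_LOCKS; i++) {
        pthread_mutex_lock(&stripe[i]);
    }
}

static void unlock_all_buckets(void)
{
    for (size_t i = QHT_TSAN_BUCKET_LOCKS; i-- > 0; ) {
        pthread_mutex_unlock(&stripe[i]);
    }
}
```

One subtlety of striping: two distinct buckets can map to the same stripe, so any "lock both buckets" path has to compare stripe indices and skip the second lock on a collision.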
    • util/qht: add missing atomic_set(hashes[i]) · def48ddd
      Emilio Cota authored
      
      We forgot to add this one in a8906439 ("util/qht: atomically set
      b->hashes").
      
      Detected with tsan.
      
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Emilio Cota <cota@braap.org>
      Message-Id: <20230111151628.320011-3-cota@braap.org>
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Message-Id: <20230124180127.1881110-28-alex.bennee@linaro.org>
      def48ddd
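qatomic_set() is QEMU's relaxed atomic store. The reason the plain assignment is a bug is that readers scan the bucket's hashes[] array without holding the bucket lock, so every writer-side store must be atomic or TSan (correctly) reports a data race. A C11 sketch of the pattern, with an illustrative struct layout rather than qht's real one:

```c
/*
 * Illustrative C11 sketch: readers scan hashes[] without the bucket
 * lock, so writers must publish each hash with an atomic store
 * (QEMU uses qatomic_set(); a plain assignment is a data race).
 */
#include <stdatomic.h>
#include <stdint.h>

#define QHT_BUCKET_ENTRIES 4  /* illustrative size */

struct qht_bucket_sketch {
    _Atomic uint32_t hashes[QHT_BUCKET_ENTRIES];
};

static void bucket_set_hash(struct qht_bucket_sketch *b, int i, uint32_t hash)
{
    /* Relaxed suffices here; in qht, ordering comes from the bucket seqlock. */
    atomic_store_explicit(&b->hashes[i], hash, memory_order_relaxed);
}

static uint32_t bucket_get_hash(struct qht_bucket_sketch *b, int i)
{
    return atomic_load_explicit(&b->hashes[i], memory_order_relaxed);
}
```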
  14. Jan 23, 2023
    • util/aio: Defer disabling poll mode as long as possible · 816a430c
      Chao Gao authored
      
      When we measure FIO read performance (cache=writethrough, bs=4k,
      iodepth=64) in VMs, ~80K/s notifications (e.g., EPT_MISCONFIG) are
      observed from the guest to QEMU.
      
      It turns out those frequent notifications are caused by interference
      from worker threads.  Worker threads queue bottom halves after
      completing IO requests.  Pending bottom halves may cause either
      aio_compute_timeout() to zero the timeout and pass it to
      try_poll_mode(), or run_poll_handlers() to return no progress after
      noticing pending aio_notify() events.  Both cause run_poll_handlers()
      to call poll_set_started(false) to disable poll mode.  However, in
      both cases, since the timeout is already zero, the event loop (i.e.,
      aio_poll()) just processes bottom halves and starts the next
      iteration.  So disabling poll mode has no value and only leads to
      unnecessary notifications from the guest.
      
      To minimize unnecessary notifications from the guest, defer disabling
      poll mode until the event loop is about to block.
      
      With this patch applied, FIO seq-read performance (bs=4k, iodepth=64,
      cache=writethrough) in VMs increases from 330K to 413K IOPS.
      
      Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Chao Gao <chao.gao@intel.com>
      Message-id: 20220710120849.63086-1-chao.gao@intel.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      816a430c
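The deferral described above can be sketched as a toy loop iteration. This is illustrative pseudologic, not QEMU's actual control flow: `aio_poll_iteration()` is a hypothetical stand-in for aio_poll(), and `poll_started` models the state that poll_set_started() toggles.

```c
/*
 * Illustrative sketch (not QEMU's control flow) of deferring
 * poll_set_started(false): keep poll mode on while the computed
 * timeout is zero, and only disable it when the loop is actually
 * about to block.
 */
#include <stdbool.h>
#include <stdint.h>

static bool poll_started;

static void poll_set_started(bool started)
{
    /* In QEMU, this toggles whether the guest kicks the event notifier. */
    poll_started = started;
}

static void aio_poll_iteration(int64_t timeout_ns)
{
    if (timeout_ns == 0) {
        /*
         * Pending bottom halves zeroed the timeout: we will loop right
         * back into polling, so disabling poll mode now would only
         * trigger a burst of guest notifications for nothing.
         */
        return;
    }
    if (poll_started) {
        /* About to block (e.g. in ppoll()): now disabling pays off. */
        poll_set_started(false);
    }
    /* ... block waiting for events ... */
}
```

The point of the patch is the early return: in the timeout==0 case the old code disabled poll mode anyway, gaining nothing while making the guest resume its ~80K/s notifications.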