  Jul 07, 2023
    • qemu_cleanup: begin drained section after vm_shutdown() · ca2a5e63
      Fiona Ebner authored
      
      in order to avoid requests being stuck in a BlockBackend's request
      queue during cleanup. Having such requests can lead to a deadlock [0]
      with a virtio-scsi-pci device using iothread that's busy with IO when
      initiating a shutdown with QMP 'quit'.
      
      There is a race where such a queued request can continue sometime
      (maybe after bdrv_child_free()?) during bdrv_root_unref_child() [1].
      The completion will hold the AioContext lock and wait for the BQL
      during SCSI completion, but the main thread will hold the BQL and
      wait for the AioContext as part of bdrv_root_unref_child(), leading to
      the deadlock [0].
      
      [0]:
      
      > Thread 3 (Thread 0x7f3bbd87b700 (LWP 135952) "qemu-system-x86"):
      > #0  __lll_lock_wait (futex=futex@entry=0x564183365f00 <qemu_global_mutex>, private=0) at lowlevellock.c:52
      > #1  0x00007f3bc1c0d843 in __GI___pthread_mutex_lock (mutex=0x564183365f00 <qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
      > #2  0x0000564182939f2e in qemu_mutex_lock_impl (mutex=0x564183365f00 <qemu_global_mutex>, file=0x564182b7f774 "../softmmu/physmem.c", line=2593) at ../util/qemu-thread-posix.c:94
      > #3  0x000056418247cc2a in qemu_mutex_lock_iothread_impl (file=0x564182b7f774 "../softmmu/physmem.c", line=2593) at ../softmmu/cpus.c:504
      > #4  0x00005641826d5325 in prepare_mmio_access (mr=0x5641856148a0) at ../softmmu/physmem.c:2593
      > #5  0x00005641826d6fe7 in address_space_stl_internal (as=0x56418679b310, addr=4276113408, val=16418, attrs=..., result=0x0, endian=DEVICE_LITTLE_ENDIAN) at /home/febner/repos/qemu/memory_ldst.c.inc:318
      > #6  0x00005641826d7154 in address_space_stl_le (as=0x56418679b310, addr=4276113408, val=16418, attrs=..., result=0x0) at /home/febner/repos/qemu/memory_ldst.c.inc:357
      > #7  0x0000564182374b07 in pci_msi_trigger (dev=0x56418679b0d0, msg=...) at ../hw/pci/pci.c:359
      > #8  0x000056418237118b in msi_send_message (dev=0x56418679b0d0, msg=...) at ../hw/pci/msi.c:379
      > #9  0x0000564182372c10 in msix_notify (dev=0x56418679b0d0, vector=8) at ../hw/pci/msix.c:542
      > #10 0x000056418243719c in virtio_pci_notify (d=0x56418679b0d0, vector=8) at ../hw/virtio/virtio-pci.c:77
      > #11 0x00005641826933b0 in virtio_notify_vector (vdev=0x5641867a34a0, vector=8) at ../hw/virtio/virtio.c:1985
      > #12 0x00005641826948d6 in virtio_irq (vq=0x5641867ac078) at ../hw/virtio/virtio.c:2461
      > #13 0x0000564182694978 in virtio_notify (vdev=0x5641867a34a0, vq=0x5641867ac078) at ../hw/virtio/virtio.c:2473
      > #14 0x0000564182665b83 in virtio_scsi_complete_req (req=0x7f3bb000e5d0) at ../hw/scsi/virtio-scsi.c:115
      > #15 0x00005641826670ce in virtio_scsi_complete_cmd_req (req=0x7f3bb000e5d0) at ../hw/scsi/virtio-scsi.c:641
      > #16 0x000056418266736b in virtio_scsi_command_complete (r=0x7f3bb0010560, resid=0) at ../hw/scsi/virtio-scsi.c:712
      > #17 0x000056418239aac6 in scsi_req_complete (req=0x7f3bb0010560, status=2) at ../hw/scsi/scsi-bus.c:1526
      > #18 0x000056418239e090 in scsi_handle_rw_error (r=0x7f3bb0010560, ret=-123, acct_failed=false) at ../hw/scsi/scsi-disk.c:242
      > #19 0x000056418239e13f in scsi_disk_req_check_error (r=0x7f3bb0010560, ret=-123, acct_failed=false) at ../hw/scsi/scsi-disk.c:265
      > #20 0x000056418239e482 in scsi_dma_complete_noio (r=0x7f3bb0010560, ret=-123) at ../hw/scsi/scsi-disk.c:340
      > #21 0x000056418239e5d9 in scsi_dma_complete (opaque=0x7f3bb0010560, ret=-123) at ../hw/scsi/scsi-disk.c:371
      > #22 0x00005641824809ad in dma_complete (dbs=0x7f3bb000d9d0, ret=-123) at ../softmmu/dma-helpers.c:107
      > #23 0x0000564182480a72 in dma_blk_cb (opaque=0x7f3bb000d9d0, ret=-123) at ../softmmu/dma-helpers.c:127
      > #24 0x00005641827bf78a in blk_aio_complete (acb=0x7f3bb00021a0) at ../block/block-backend.c:1563
      > #25 0x00005641827bfa5e in blk_aio_write_entry (opaque=0x7f3bb00021a0) at ../block/block-backend.c:1630
      > #26 0x000056418295638a in coroutine_trampoline (i0=-1342102448, i1=32571) at ../util/coroutine-ucontext.c:177
      > #27 0x00007f3bc0caed40 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
      > #28 0x00007f3bbd8757f0 in ?? ()
      > #29 0x0000000000000000 in ?? ()
      >
      > Thread 1 (Thread 0x7f3bbe3e9280 (LWP 135944) "qemu-system-x86"):
      > #0  __lll_lock_wait (futex=futex@entry=0x5641856f2a00, private=0) at lowlevellock.c:52
      > #1  0x00007f3bc1c0d8d1 in __GI___pthread_mutex_lock (mutex=0x5641856f2a00) at ../nptl/pthread_mutex_lock.c:115
      > #2  0x0000564182939f2e in qemu_mutex_lock_impl (mutex=0x5641856f2a00, file=0x564182c0e319 "../util/async.c", line=728) at ../util/qemu-thread-posix.c:94
      > #3  0x000056418293a140 in qemu_rec_mutex_lock_impl (mutex=0x5641856f2a00, file=0x564182c0e319 "../util/async.c", line=728) at ../util/qemu-thread-posix.c:149
      > #4  0x00005641829532d5 in aio_context_acquire (ctx=0x5641856f29a0) at ../util/async.c:728
      > #5  0x000056418279d5df in bdrv_set_aio_context_commit (opaque=0x5641856e6e50) at ../block.c:7493
      > #6  0x000056418294e288 in tran_commit (tran=0x56418630bfe0) at ../util/transactions.c:87
      > #7  0x000056418279d880 in bdrv_try_change_aio_context (bs=0x5641856f7130, ctx=0x56418548f810, ignore_child=0x0, errp=0x0) at ../block.c:7626
      > #8  0x0000564182793f39 in bdrv_root_unref_child (child=0x5641856f47d0) at ../block.c:3242
      > #9  0x00005641827be137 in blk_remove_bs (blk=0x564185709880) at ../block/block-backend.c:914
      > #10 0x00005641827bd689 in blk_remove_all_bs () at ../block/block-backend.c:583
      > #11 0x0000564182798699 in bdrv_close_all () at ../block.c:5117
      > #12 0x000056418248a5b2 in qemu_cleanup () at ../softmmu/runstate.c:821
      > #13 0x0000564182738603 in qemu_default_main () at ../softmmu/main.c:38
      > #14 0x0000564182738631 in main (argc=30, argv=0x7ffd675a8a48) at ../softmmu/main.c:48
      >
      > (gdb) p *((QemuMutex*)0x5641856f2a00)
      > $1 = {lock = {__data = {__lock = 2, __count = 2, __owner = 135952, ...
      > (gdb) p *((QemuMutex*)0x564183365f00)
      > $2 = {lock = {__data = {__lock = 2, __count = 0, __owner = 135944, ...
      
      [1]:
      
      > Thread 1 "qemu-system-x86" hit Breakpoint 5, bdrv_drain_all_end () at ../block/io.c:551
      > #0  bdrv_drain_all_end () at ../block/io.c:551
      > #1  0x00005569810f0376 in bdrv_graph_wrlock (bs=0x0) at ../block/graph-lock.c:156
      > #2  0x00005569810bd3e0 in bdrv_replace_child_noperm (child=0x556982e2d7d0, new_bs=0x0) at ../block.c:2897
      > #3  0x00005569810bdef2 in bdrv_root_unref_child (child=0x556982e2d7d0) at ../block.c:3227
      > #4  0x00005569810e8137 in blk_remove_bs (blk=0x556982e42880) at ../block/block-backend.c:914
      > #5  0x00005569810e7689 in blk_remove_all_bs () at ../block/block-backend.c:583
      > #6  0x00005569810c2699 in bdrv_close_all () at ../block.c:5117
      > #7  0x0000556980db45b2 in qemu_cleanup () at ../softmmu/runstate.c:821
      > #8  0x0000556981062603 in qemu_default_main () at ../softmmu/main.c:38
      > #9  0x0000556981062631 in main (argc=30, argv=0x7ffd7a82a418) at ../softmmu/main.c:48
      > [Switching to Thread 0x7fe76dab2700 (LWP 103649)]
      >
      > Thread 3 "qemu-system-x86" hit Breakpoint 4, blk_inc_in_flight (blk=0x556982e42880) at ../block/block-backend.c:1505
      > #0  blk_inc_in_flight (blk=0x556982e42880) at ../block/block-backend.c:1505
      > #1  0x00005569810e8f36 in blk_wait_while_drained (blk=0x556982e42880) at ../block/block-backend.c:1312
      > #2  0x00005569810e9231 in blk_co_do_pwritev_part (blk=0x556982e42880, offset=3422961664, bytes=4096, qiov=0x556983028060, qiov_offset=0, flags=0) at ../block/block-backend.c:1402
      > #3  0x00005569810e9a4b in blk_aio_write_entry (opaque=0x556982e2cfa0) at ../block/block-backend.c:1628
      > #4  0x000055698128038a in coroutine_trampoline (i0=-2090057872, i1=21865) at ../util/coroutine-ucontext.c:177
      > #5  0x00007fe770f50d40 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
      > #6  0x00007ffd7a829570 in ?? ()
      > #7  0x0000000000000000 in ?? ()
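      
      The fix itself is a reordering in qemu_cleanup(): the guest is shut
      down before the drained section begins. A minimal sketch of the idea
      (the real call sequence in softmmu/runstate.c contains more steps):
      
          /* Sketch only: stop the guest before draining, so no device can
           * queue requests that would later resume under
           * blk_wait_while_drained() in the middle of bdrv_close_all(). */
          void qemu_cleanup(void)
          {
              /* ... */
              vm_shutdown();           /* flush in-flight guest I/O first */
              bdrv_drain_all_begin();  /* only then begin the drained section */
              bdrv_close_all();        /* no queued requests left to resume */
              /* ... */
          }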
      
      Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
      Message-ID: <20230706131418.423713-1-f.ebner@proxmox.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Jun 13, 2023
    • exec/memory: Introduce RAM_NAMED_FILE flag · b0182e53
      Steve Sistare authored
      
      migrate_ignore_shared() is an optimization that avoids copying memory
      that is visible and can be mapped on the target.  However, a
      memory-backend-ram or a memory-backend-memfd block with the RAM_SHARED
      flag set is not migrated when migrate_ignore_shared() is true.  This is
      wrong, because the block has no named backing store, and its contents will
      be lost. To fix, ignore shared memory only if it is backed by a named
      file. Define a new flag RAM_NAMED_FILE to distinguish this case.
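      
      A hedged sketch of the adjusted check, assuming helpers along the
      lines of the qemu_ram_is_named_file() accessor this patch introduces
      and the existing migrate_ignore_shared() capability test:
      
          /* Sketch: a block may be skipped only when the optimization is on
           * AND the block is backed by a named file the target can map.
           * A RAM_SHARED memory-backend-ram/-memfd no longer qualifies. */
          static bool ramblock_is_ignored(RAMBlock *block)
          {
              return !qemu_ram_is_migratable(block) ||
                     (migrate_ignore_shared() && qemu_ram_is_named_file(block));
          }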
      
      Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Message-Id: <1686151116-253260-1-git-send-email-steven.sistare@oracle.com>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
  Jun 06, 2023
    • atomics: eliminate mb_read/mb_set · 06831001
      Paolo Bonzini authored
      
      qatomic_mb_read and qatomic_mb_set were the very first atomic primitives
      introduced for QEMU; their semantics are unclear and they provide a false
      sense of safety.
      
      The last use of qatomic_mb_read() has been removed, so delete it.
      qatomic_mb_set() instead can survive as an optimized
      qatomic_set()+smp_mb(), similar to Linux's smp_store_mb(), but
      rename it to qatomic_set_mb() to match the order of the two
      operations.
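      
      A sketch of the surviving primitive under its new name, mirroring the
      qatomic_set()+smp_mb() equivalence stated above rather than QEMU's
      exact (possibly further optimized) definition in include/qemu/atomic.h:
      
          /* store, then full memory barrier -- analogous to Linux's
           * smp_store_mb(); the name now matches the operation order */
          #define qatomic_set_mb(ptr, i) \
              ({ qatomic_set(ptr, i); smp_mb(); })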
      
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  May 25, 2023
    • softmmu/ioport.c: make MemoryRegionPortioList owner of portio_list MemoryRegions · 690705ca
      Mark Cave-Ayland authored
      
      Currently, when portio_list MemoryRegions are freed using portio_list_destroy(),
      the RCU thread segfaults, generating a backtrace similar to the one below:
      
          #0 0x5555599a34b6 in phys_section_destroy ../softmmu/physmem.c:996
          #1 0x5555599a37a3 in phys_sections_free ../softmmu/physmem.c:1011
          #2 0x5555599b24aa in address_space_dispatch_free ../softmmu/physmem.c:2430
          #3 0x55555996a283 in flatview_destroy ../softmmu/memory.c:292
          #4 0x55555a2cb9fb in call_rcu_thread ../util/rcu.c:284
          #5 0x55555a29b71d in qemu_thread_start ../util/qemu-thread-posix.c:541
          #6 0x7ffff4a0cea6 in start_thread nptl/pthread_create.c:477
          #7 0x7ffff492ca2e in __clone (/lib/x86_64-linux-gnu/libc.so.6+0xfca2e)
      
      The problem here is that portio_list_destroy() unparents the portio_list
      MemoryRegions, causing them to be freed immediately. However, the flatview
      still holds a reference to the MemoryRegion, so a use-after-free segfault
      occurs when the RCU thread next updates the flatview.
      
      Solve the lifetime issue by making MemoryRegionPortioList the owner of the
      portio_list MemoryRegions, and then reparenting them to the portio_list
      owner. This ensures that they can be accessed as QOM children via the
      portio_list owner, yet the MemoryRegionPortioList owns the refcount.
      
      Update portio_list_destroy() to unparent the MemoryRegion from the
      portio_list owner (while keeping mrpio->mr live until finalization of the
      MemoryRegionPortioList), so that the portio_list MemoryRegions remain
      allocated until flatview_destroy() removes the final refcount upon the
      next flatview update.
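      
      A hedged sketch of the ownership and reparenting dance; variable and
      property names here are illustrative, not the exact code in
      softmmu/ioport.c:
      
          /* Create the region with mrpio as QOM owner, so the refcount
           * follows the MemoryRegionPortioList rather than the device. */
          memory_region_init_io(&mrpio->mr, OBJECT(mrpio), &portio_ops,
                                mrpio, "portio-list", size);
      
          /* Then move the region under the portio_list owner in the QOM
           * composition tree, keeping it visible as a child of the device
           * while mrpio continues to hold the owning reference. */
          object_ref(OBJECT(&mrpio->mr));
          object_unparent(OBJECT(&mrpio->mr));
          object_property_add_child(owner, "portio-list", OBJECT(&mrpio->mr));
          object_unref(OBJECT(&mrpio->mr));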
      
      Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Message-Id: <20230419151652.362717-4-mark.cave-ayland@ilande.co.uk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/ioport.c: QOMify MemoryRegionPortioList · 28770689
      Mark Cave-Ayland authored
      
      The aim of QOMification is to allow the lifetime of the MemoryRegionPortioList
      structure to be managed using QOM's built-in refcounting instead of having to
      handle it manually.
      
      Due to the use of an opaque pointer, it isn't possible to model the new
      TYPE_MEMORY_REGION_PORTIO_LIST directly using QOM properties. However, since
      use of the new object is restricted to the portio API, we can simply set the
      opaque pointer (and the heap-allocated port list) internally.
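      
      The QOM boilerplate implied above, as a sketch (the actual TypeInfo in
      softmmu/ioport.c may differ in details such as the finalizer name):
      
          #define TYPE_MEMORY_REGION_PORTIO_LIST "memory-region-portio-list"
      
          static const TypeInfo memory_region_portio_list_info = {
              .parent            = TYPE_OBJECT,
              .name              = TYPE_MEMORY_REGION_PORTIO_LIST,
              .instance_size     = sizeof(MemoryRegionPortioList),
              .instance_finalize = memory_region_portio_list_finalize,
          };
      
          static void memory_region_portio_list_register_types(void)
          {
              type_register_static(&memory_region_portio_list_info);
          }
      
          type_init(memory_region_portio_list_register_types)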
      
      Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Message-Id: <20230419151652.362717-3-mark.cave-ayland@ilande.co.uk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/ioport.c: allocate MemoryRegionPortioList ports on the heap · d2f07b75
      Mark Cave-Ayland authored
      
      In order to facilitate a conversion of MemoryRegionPortioList to a QOM
      object, move the allocation of the MemoryRegionPortioList ports to the
      heap instead of using a variable-length member at the end of the
      MemoryRegionPortioList structure.
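      
      Schematically (field names are illustrative, not the exact struct),
      the change swaps a flexible array member, which forces a variable
      instance size, for a separately allocated pointer, since QOM objects
      need a fixed instance_size:
      
          typedef struct MemoryRegionPortioList {
              MemoryRegion mr;
              void *portio_opaque;
              MemoryRegionPortio *ports;  /* was: MemoryRegionPortio ports[]; */
          } MemoryRegionPortioList;
      
          /* allocation becomes two steps instead of one over-sized g_malloc0() */
          mrpio = g_malloc0(sizeof(*mrpio));
          mrpio->ports = g_new0(MemoryRegionPortio, n);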
      
      Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Message-Id: <20230419151652.362717-2-mark.cave-ayland@ilande.co.uk>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  May 18, 2023
    • migration: Add last stage indicator to global dirty log · 1e493be5
      Gavin Shan authored
      
      The global dirty log synchronization is used when KVM and dirty ring
      are enabled. There is a particularity for ARM64 where the backup
      bitmap is used to track dirty pages in non-running-vcpu situations.
      It means the dirty ring works with the combination of ring buffer
      and backup bitmap. The dirty bits in the backup bitmap need to be
      collected in the last stage of live migration.
      
      In order to identify the last stage of live migration and pass it
      down, an extra parameter is added to the relevant functions and
      callbacks. This last stage indicator isn't used until the dirty
      ring is enabled in the subsequent patches.
      
      No functional change intended.
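      
      A sketch of the signature change being threaded through (illustrative;
      the real patch touches several callbacks in the memory API and the
      KVM listener):
      
          /* before */ void memory_global_dirty_log_sync(void);
          /* after  */ void memory_global_dirty_log_sync(bool last_stage);
      
          static void migration_bitmap_sync(RAMState *rs, bool last_stage)
          {
              /* ... */
              /* Pass the indicator down so the KVM backend can fold the
               * ARM64 backup bitmap into the result only on the final sync. */
              memory_global_dirty_log_sync(last_stage);
              /* ... */
          }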
      
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
      Message-Id: <20230509022122.20888-2-gshan@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>