  1. Jun 29, 2021
  2. Jun 25, 2021
    • KVM: Fix dirty ring mmap incorrect size due to renaming accident · dcafa248
      Peter Xu authored
      Found this when I wanted to try the per-vcpu dirty rate series out; I then
      found that it was not really working, and that it could quickly hang a
      guest to death.  Strange errors (e.g. guest crash after migration) happened
      even without the per-vcpu dirty rate series.
      
      When the dirty ring was merged, probably no one noticed that the trivial
      renaming diff [1] missed two existing references of kvm_dirty_ring_sizes;
      they do matter, since without them we mmap() a shorter range of memory
      after the renaming.
      
      I think it didn't SIGBUS for me easily simply because some other qemu data
      happened to be mmap()ed right after the dirty rings (e.g. when testing 4096
      slots, it aligned with one small page on x86), so when accessing the rings
      we had been reading/writing random memory elsewhere in qemu.
      
      Fix the two sizes when mapping/unmapping the shared dirty gfn memory.
      
      [1] https://lore.kernel.org/qemu-devel/dac5f0c6-1bca-3daf-e5d2-6451dbbaca93@redhat.com/
      
      
      
      Cc: Hyman Huang <huangy81@chinatelecom.cn>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210609014355.217110-1-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      dcafa248
  3. Jun 19, 2021
  4. Jun 14, 2021
  5. Jun 11, 2021
  6. Jun 03, 2021
  7. Jun 02, 2021
  8. May 26, 2021
    • accel/tcg: Keep TranslationBlock headers local to TCG · e5ceadff
      Philippe Mathieu-Daudé authored
      
      Only the TCG accelerator uses the TranslationBlock API.
      Move the tb-context.h / tb-hash.h / tb-lookup.h from the
      global namespace to the TCG one (in accel/tcg).
      
      Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Message-Id: <20210524170453.3791436-3-f4bug@amsat.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
      e5ceadff
    • accel/tcg: Reduce 'exec/tb-context.h' inclusion · 824f4bac
      Philippe Mathieu-Daudé authored
      
      Only two files require "exec/tb-context.h".  Instead of having all
      files that include "exec/exec-all.h" also include it, include it
      directly where it is required:
      - accel/tcg/cpu-exec.c
      - accel/tcg/translate-all.c
      
      For plugins/plugin.h, we were implicitly relying on
        exec/exec-all.h -> exec/tb-context.h -> qemu/qht.h
      which is now included directly.
      
      Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Message-Id: <20210524170453.3791436-2-f4bug@amsat.org>
      [rth: Fix plugins/plugin.h compilation]
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
      824f4bac
    • KVM: Dirty ring support · b4420f19
      Peter Xu authored
      
      The KVM dirty ring is a new interface for passing dirty-page information
      from the kernel to userspace.  Instead of using a bitmap for each memory
      region, the dirty ring contains an array of dirtied GPAs to fetch (in the
      form of offsets within slots).  For each vcpu there is one dirty ring
      bound to it.
      
      kvm_dirty_ring_reap() is the major function for collecting dirty rings.
      It can be called either by a standalone reaper thread that runs in the
      background, collecting dirty pages for the whole VM, or directly by any
      thread that holds the BQL.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-11-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b4420f19
    • KVM: Disable manual dirty log when dirty ring enabled · a81a5926
      Peter Xu authored
      
      KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is for KVM_CLEAR_DIRTY_LOG, which is
      only useful together with KVM_GET_DIRTY_LOG.  Skip enabling it when the
      kvm dirty ring is in use.
      
      More importantly, KVM_DIRTY_LOG_INITIALLY_SET will not wr-protect all the
      pages initially, which conflicts with how the kvm dirty ring works: the
      dirty ring has no way to re-protect a page before it is first reported as
      written via a GFN entry in the ring!  So if KVM_DIRTY_LOG_INITIALLY_SET
      is enabled together with the dirty ring, we'd see silent data loss after
      migration.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-10-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a81a5926
    • KVM: Add dirty-ring-size property · 2ea5cb0a
      Peter Xu authored
      
      Add a parameter for the dirty gfn count of the dirty rings.  If zero, the
      dirty ring is disabled.  Otherwise the dirty ring will be enabled with
      the specified per-vcpu gfn count.  If the dirty ring cannot be enabled
      due to an unsupported kernel or an illegal parameter, it will fall back
      to dirty logging.

      By default, the dirty ring is not enabled (dirty-gfn-count defaults to 0).
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-9-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2ea5cb0a
    • KVM: Cache kvm slot dirty bitmap size · 563d32ba
      Peter Xu authored
      
      Cache it too because we'll reference it more frequently in the future.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-8-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      563d32ba
    • KVM: Simplify dirty log sync in kvm_set_phys_mem · 29b7e8be
      Peter Xu authored
      
      Calling kvm_physical_sync_dirty_bitmap() on the whole section is
      inaccurate, because the section can be a superset of the memslot that
      we're working on.  As a result, if the section covers multiple kvm
      memslots, we could be doing the synchronization multiple times for each
      memslot in the section.
      
      With the two helpers that we just introduced, it is now easy to do this
      correctly by simply calling them.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-7-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      29b7e8be
    • KVM: Provide helper to sync dirty bitmap from slot to ramblock · 2c20b27e
      Peter Xu authored
      
      kvm_physical_sync_dirty_bitmap() calculates the ramblock offset in an
      awkward way from the MemoryRegionSection passed in by the caller.  The
      truth is that for each KVMSlot the ramblock offset never changes during
      its lifetime.  Cache the ramblock offset in the structure when the
      KVMSlot is created.

      With that, we can further simplify kvm_physical_sync_dirty_bitmap() with
      a helper that syncs a specific KVMSlot's dirty bitmap into the ramblock
      dirty bitmap.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-6-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2c20b27e
    • KVM: Provide helper to get kvm dirty log · e65e5f50
      Peter Xu authored
      
      Provide a helper kvm_slot_get_dirty_log() to make
      kvm_physical_sync_dirty_bitmap() clearer.  We can even cache the as_id
      in the KVMSlot when it is created, so that we don't need to pass it down
      every time.

      While at it, remove the return value of kvm_physical_sync_dirty_bitmap(),
      because it should never fail.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-5-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e65e5f50
    • KVM: Create the KVMSlot dirty bitmap on flag changes · ea776d15
      Peter Xu authored
      
      Previously we had two places that create the per-KVMSlot dirty bitmap:

        1. When a newly created KVMSlot has dirty logging enabled,
        2. When the first log_sync() happens for a memory slot.

      The 2nd case is lazy init, while the 1st case is not (it is a fix for
      what the 2nd case missed).
      
      To initialize the dirty bitmaps explicitly, what's missing is creating
      the dirty bitmap when a slot changes from not-dirty-tracked to
      dirty-tracked.  Do that in kvm_slot_update_flags().
      
      With that, we can safely remove the 2nd lazy-init.
      
      This change is needed for the kvm dirty ring, because the dirty ring
      does not use the log_sync() interface at all.
      
      Also move all the pre-checks into kvm_slot_init_dirty_bitmap().
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-4-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ea776d15
    • KVM: Use a big lock to replace per-kml slots_lock · a2f77862
      Peter Xu authored
      
      The per-kml slots_lock brings trouble if we want to take the slots_lock
      of all the KMLs, especially in a context where we may already hold some
      of the KML slots_locks; then we'd even have to figure out which ones we
      have taken and which ones we still need to take.

      Make this simple by merging all KML slots_locks into a single slots lock.

      The per-kml slots_lock isn't that helpful anyway: so far only x86 has
      two address spaces (and thus two slots_locks).  All other architectures
      always have a single address space, which means there is effectively one
      slots_lock already, so behavior is the same as before.
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210506160549.130416-3-peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a2f77862
    • KVM: do not allow setting properties at runtime · 70cbae42
      Paolo Bonzini authored
      
      Only allow accelerator properties to be set when the
      accelerator is being created.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      70cbae42
  9. May 25, 2021