  13. Nov 01, 2021
      migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination · 9470c5e0
      David Hildenbrand authored
      
      Currently, when someone (i.e., the VM) accesses discarded parts inside a
      RAMBlock with a RamDiscardManager managing the corresponding mapped memory
      region, postcopy will request migration of the corresponding page from the
      source. The source, however, will never answer, because it refuses to
      migrate such pages with undefined content ("logically unplugged"): the
      pages are never dirty, and get_queued_page() will consequently skip
      processing these postcopy requests.
      
      In particular, reading discarded ("logically unplugged") ranges is
      supposed to work in some setups (for example, with current virtio-mem),
      although it rarely happens in practice: still, not placing a page would
      currently stall the VM, as it could not make forward progress.
      
      Let's check the state via the RamDiscardManager (the state e.g.,
      of virtio-mem is migrated during precopy) and avoid sending a request
      that will never get answered. Place a fresh zero page instead to keep
      the VM working. This is the same behavior that would happen
      automatically without userfaultfd being active, when accessing virtual
      memory regions without populated pages -- "populate on demand".
      
      For now, there are valid cases (as documented in the virtio-mem spec) where
      a VM might read discarded memory; in the future, we will disallow that.
      Then, we might want to handle that case differently, e.g., by warning
      the user that the VM seems to be misbehaving.
      
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
  18. Mar 13, 2020
      COLO: Optimize memory back-up process · 0393031a
      Hailiang Zhang authored
      
      This patch reduces VM downtime during the initial COLO process.
      Previously, all of this memory was copied in the COLO preparation
      stage, during which the VM had to be stopped, which is time-consuming.
      Optimize this by backing up every page as it is received during the
      migration process while COLO is enabled. Although this slows down the
      migration somewhat, it clearly reduces the downtime of backing up all
      of the SVM's memory in the COLO preparation stage.
      
      Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
      Message-Id: <20200224065414.36524-5-zhang.zhanghailiang@huawei.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
        minor typo fixes