  1. Jul 05, 2021
    • migration: Allow reset of postcopy_recover_triggered when failed · b7f9afd4
      Peter Xu authored
      
      qemu_start_incoming_migration() can fail at any point; when that
      happens, we should reset postcopy_recover_triggered to false so that
      the user can still retry with a saner incoming port.
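
      A minimal sketch of the shape of this fix (the surrounding checks in
      qmp_migrate_recover() are elided, and the local variable names are
      assumptions based on the text above, not the exact diff):

          Error *local_err = NULL;

          qemu_start_incoming_migration(uri, &local_err);

          if (local_err) {
              /* Undo the trigger flag so the user can retry with a saner
               * incoming port. */
              mis->postcopy_recover_triggered = false;
              error_propagate(errp, local_err);
          }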
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210629181356.217312-3-peterx@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    • migration: Move yank outside qemu_start_incoming_migration() · cc48c587
      Peter Xu authored
      
      Starting from commit b5eea99e, qmp_migrate_recover() calls
      yank_unregister_instance() before calling
      qemu_start_incoming_migration().  I believe the intent was to make room
      for the upcoming yank_register_instance() call, but I think that's
      wrong.
      
      Firstly, during recovery we should keep the yank instance in place,
      not quickly remove it and add it back.
      
      Meanwhile, with b5eea99e, calling qmp_migrate_recover() twice will
      directly crash the destination qemu (a second call isn't possible right
      now, but it becomes so right after the next patch): the 1st call of
      qmp_migrate_recover() unregisters the instance permanently when the
      channel fails to establish, so the 2nd call crashes at
      yank_unregister_instance().
      
      This patch fixes it by moving the yank operations out of
      qemu_start_incoming_migration() into qmp_migrate_incoming().  For
      qmp_migrate_recover(), drop the unregister of the yank instance too,
      since we keep it in place during the recovery phase; see the sketch
      below.
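
      A hedged sketch of the resulting qmp_migrate_incoming() shape
      (MIGRATION_YANK_INSTANCE and the error-path details are assumptions
      drawn from the surrounding migration code, not the exact diff):

          void qmp_migrate_incoming(const char *uri, Error **errp)
          {
              Error *local_err = NULL;

              /* ... state checks elided ... */

              /* The yank registration now lives here rather than inside
               * qemu_start_incoming_migration(). */
              if (!yank_register_instance(MIGRATION_YANK_INSTANCE, errp)) {
                  return;
              }

              qemu_start_incoming_migration(uri, &local_err);

              if (local_err) {
                  /* Channel failed to establish: undo the registration so
                   * a later attempt can register again. */
                  yank_unregister_instance(MIGRATION_YANK_INSTANCE);
                  error_propagate(errp, local_err);
              }
          }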
      
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Message-Id: <20210629181356.217312-2-peterx@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    • migration: fix the memory overwriting risk in add_to_iovec · c00d434a
      Feng Lin authored
      
      When testing migration, a qemu core dump with a segmentation fault was
      generated:
      0  error_free (err=0x1)
      1  0x00007f8b862df647 in qemu_fclose (f=f@entry=0x55e06c247640)
      2  0x00007f8b8516d59a in migrate_fd_cleanup (s=s@entry=0x55e06c0e1ef0)
      3  0x00007f8b8516d66c in migrate_fd_cleanup_bh (opaque=0x55e06c0e1ef0)
      4  0x00007f8b8626a47f in aio_bh_poll (ctx=ctx@entry=0x55e06b5a16d0)
      5  0x00007f8b8626e71f in aio_dispatch (ctx=0x55e06b5a16d0)
      6  0x00007f8b8626a33d in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>)
      7  0x00007f8b866bdba4 in g_main_context_dispatch ()
      8  0x00007f8b8626cde9 in glib_pollfds_poll ()
      9  0x00007f8b8626ce62 in os_host_main_loop_wait (timeout=<optimized out>)
      10 0x00007f8b8626cffd in main_loop_wait (nonblocking=nonblocking@entry=0)
      11 0x00007f8b862ef01f in main_loop ()
      Printing the struct with gdb shows QEMUFile f = {
        ...,
        iovcnt = 65, last_error = 21984,
        last_error_obj = 0x1, shutdown = true
      }
      Clearly iovcnt has overflowed, since MAX_IOV_SIZE is 64:
      struct QEMUFile {
          ...;
          struct iovec iov[MAX_IOV_SIZE];
          unsigned int iovcnt;
          int last_error;
          Error *last_error_obj;
          bool shutdown;
      };
      iovcnt and last_error were overwritten by add_to_iovec().
      Right now, add_to_iovec() increases iovcnt before checking the limit,
      and it seems to assume that iovcnt will be reset to zero in
      qemu_fflush(). But qemu_fflush() returns immediately when f->shutdown
      is true.
      
      The situation can occur when libvirtd restarts during migration, after
      f->shutdown has been set but before qemu_file_set_error() is called in
      qemu_file_shutdown().
      
      So the safest fix is to check iovcnt before increasing it, as sketched
      below.
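
      A minimal sketch of that ordering (simplified: the real add_to_iovec()
      also coalesces adjacent buffers and tracks a may_free bitmap, all
      elided here):

          static int add_to_iovec(QEMUFile *f, const uint8_t *buf, size_t size)
          {
              if (f->iovcnt >= MAX_IOV_SIZE) {
                  /* Check BEFORE writing: only reachable when a previous
                   * qemu_fflush() could not drain the array, e.g. because
                   * f->shutdown is set. */
                  return -1;
              }

              f->iov[f->iovcnt].iov_base = (uint8_t *)buf;
              f->iov[f->iovcnt++].iov_len = size;

              if (f->iovcnt >= MAX_IOV_SIZE) {
                  qemu_fflush(f);
              }
              return 0;
          }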
      
      Signed-off-by: Feng Lin <linfeng23@huawei.com>
      Message-Id: <20210625062138.1899-1-linfeng23@huawei.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
        Fix typo: 'writeable' corrected to the actual spelling, 'writable'.
  2. Jun 15, 2021
    • memory: Introduce RAM_NORESERVE and wire it up in qemu_ram_mmap() · 8dbe22c6
      David Hildenbrand authored
      
      Let's introduce RAM_NORESERVE, allowing mmap'ing with MAP_NORESERVE. The
      new flag has the following semantics:
      
      "
      RAM is mmap-ed with MAP_NORESERVE. When set, reserving swap space (or huge
      pages if applicable) is skipped: will bail out if not supported. When not
      set, the OS will do the reservation, if supported for the memory type.
      "
      
      Allow passing it into:
      - memory_region_init_ram_nomigrate()
      - memory_region_init_resizeable_ram()
      - memory_region_init_ram_from_file()
      
      ... and teach qemu_ram_mmap() and qemu_anon_ram_alloc() about the flag.
      Bail out if the flag is not supported, which is right now the case for
      both POSIX and win32. We will add Linux support next and allow
      specifying RAM_NORESERVE via memory backends.
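
      For illustration only (this is not the qemu_ram_mmap() implementation,
      and the function name here is made up), the Linux semantics boil down
      to conditionally OR-ing MAP_NORESERVE into the mapping flags:

          #include <stdbool.h>
          #include <stddef.h>
          #include <sys/mman.h>

          static void *alloc_ram_sketch(size_t size, bool noreserve)
          {
              int flags = MAP_PRIVATE | MAP_ANONYMOUS;

              if (noreserve) {
                  /* Skip reservation of swap space (or huge pages); the OS
                   * may then fail to back the memory on first touch. */
                  flags |= MAP_NORESERVE;
              }
              /* Returns MAP_FAILED on error. */
              return mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
          }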
      
      The target use case is virtio-mem, which dynamically exposes memory
      inside a large, sparse memory area to the VM.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210510114328.21835-9-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>