  Jun 15, 2021
    • util/mmap-alloc: Support RAM_NORESERVE via MAP_NORESERVE under Linux · d94e0bc9
      David Hildenbrand authored
      Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
      effect on most shared mappings - except for hugetlbfs and anonymous memory.
      
      Linux man page:
        "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
        space is reserved, one has the guarantee that it is possible to modify
        the mapping. When swap space is not reserved one might get SIGSEGV
        upon a write if no physical memory is available. See also the discussion
        of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
        2.6, this flag had effect only for private writable mappings."
      
      Note that the "guarantee" part is wrong with memory overcommit in Linux.
      
      Also, in Linux hugetlbfs is treated differently - we configure reservation
      of huge pages from the pool, not reservation of swap space (huge pages
      cannot be swapped).
      
      The rough behavior is [1]:
      a) !Hugetlbfs:
      
        1) Without MAP_NORESERVE *or* with memory overcommit under Linux
           disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
           accounting/reservation happens:
            For a file backed map
             SHARED or READ-only - 0 cost (the file is the map not swap)
             PRIVATE WRITABLE - size of mapping per instance
      
            For an anonymous or /dev/zero map
             SHARED   - size of mapping
             PRIVATE READ-only - 0 cost (but of little use)
             PRIVATE WRITABLE - size of mapping per instance
      
        2) With MAP_NORESERVE, no accounting/reservation happens.
      
      b) Hugetlbfs:
      
        1) Without MAP_NORESERVE, huge pages are reserved.
      
        2) With MAP_NORESERVE, no huge pages are reserved.
      
      Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
      to configure this for !hugetlbfs globally; the new toggle allows
      configuring it at a finer granularity, per mapping rather than for the
      whole system.
      
      The target use case is virtio-mem, which dynamically exposes memory
      inside a large, sparse memory area to the VM.
      
      [1] https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
      
      
      
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210510114328.21835-10-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • memory: Introduce RAM_NORESERVE and wire it up in qemu_ram_mmap() · 8dbe22c6
      David Hildenbrand authored
      
      Let's introduce RAM_NORESERVE, allowing mmap'ing with MAP_NORESERVE. The
      new flag has the following semantics:
      
      "
      RAM is mmap-ed with MAP_NORESERVE. When set, reserving swap space (or huge
      pages if applicable) is skipped: will bail out if not supported. When not
      set, the OS will do the reservation, if supported for the memory type.
      "
      
      Allow passing it into:
      - memory_region_init_ram_nomigrate()
      - memory_region_init_resizeable_ram()
      - memory_region_init_ram_from_file()
      
      ... and teach qemu_ram_mmap() and qemu_anon_ram_alloc() about the flag.
      Bail out if the flag is not supported, which is currently the case for
      both POSIX and win32. We will add Linux support next and allow specifying
      RAM_NORESERVE via memory backends.
      
      The target use case is virtio-mem, which dynamically exposes memory
      inside a large, sparse memory area to the VM.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210510114328.21835-9-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • util/mmap-alloc: Pass flags instead of separate bools to qemu_ram_mmap() · b444f5c0
      David Hildenbrand authored
      
      Let's pass flags instead of bools to prepare for passing other flags and
      update the documentation of qemu_ram_mmap(). Introduce new QEMU_MAP_
      flags that abstract the mmap() PROT_ and MAP_ flag handling and simplify
      it.
      
      We expose only flags that are currently supported by qemu_ram_mmap().
      A more general qemu_mmap() that implements these flags may follow in
      the future.
      
      Note: We don't use MAP_ flags as some flags (e.g., MAP_SYNC) are only
      defined for some systems and we want to always be able to identify
      these flags reliably inside qemu_ram_mmap() -- for example, to properly
      warn when some future flags are not available or effective on a system.
      Also, this way we can simplify PROT_ handling as well.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210510114328.21835-8-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/memory: Pass ram_flags to qemu_ram_alloc() and qemu_ram_alloc_internal() · ebef62d0
      David Hildenbrand authored
      
      Let's pass ram_flags to qemu_ram_alloc() and qemu_ram_alloc_internal(),
      preparing for passing additional flags.
      
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210510114328.21835-7-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/physmem: Fix qemu_ram_remap() to handle shared anonymous memory · dbb92eea
      David Hildenbrand authored
      
      RAM_SHARED now also properly indicates shared anonymous memory. Let's check
      that flag for anonymous memory as well, to restore the proper mapping.
      
      Fixes: 06329cce ("mem: add share parameter to memory-backend-ram")
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210406080126.24010-4-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/physmem: Fix ram_block_discard_range() to handle shared anonymous memory · cdfa56c5
      David Hildenbrand authored
      
      We can create shared anonymous memory via
          "-object memory-backend-ram,share=on,..."
      which is, for example, required by PVRDMA for mremap() to work.
      
      Shared anonymous memory is weird, though. Instead of MADV_DONTNEED, we
      have to use MADV_REMOVE: MADV_DONTNEED will only remove / zap all
      relevant page table entries of the current process, the backend storage
      will not get removed, resulting in no reduced memory consumption and
      a repopulation of previous content on next access.
      
      Shared anonymous memory is internally really just shmem without an
      exposed fd. As we cannot use fallocate() to discard the backing storage
      without an fd, MADV_REMOVE gets the same job done without one, as
      documented in "man 2 madvise". Removing the backing storage implicitly
      invalidates all page table entries with relevant mappings; an additional
      MADV_DONTNEED is not required.
      
      Fixes: 06329cce ("mem: add share parameter to memory-backend-ram")
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210406080126.24010-3-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/physmem: Mark shared anonymous memory RAM_SHARED · 7ce18ca0
      David Hildenbrand authored
      
      Let's drop the "shared" parameter from ram_block_add() and properly
      store it in the flags of the ram block instead, such that
      qemu_ram_is_shared() properly succeeds on all ram blocks that were mapped
      MAP_SHARED.
      
      We'll use this information next to fix some cases with shared anonymous
      memory.
      
      Reviewed-by: Igor Kotrasinski <i.kotrasinsk@partner.samsung.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210406080126.24010-2-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Mar 16, 2021
    • fuzz: move some DMA hooks · 7cac7fea
      Alexander Bulekov authored
      
      For the sparse-mem device, we want the fuzzer to populate entire DMA
      reads from sparse-mem, rather than hooking into the individual MMIO
      memory_region_dispatch_read operations. Otherwise, the fuzzer will treat
      each sequential read separately (and populate it with a separate
      pattern). Work around this by rearranging some DMA hooks. Since the
      fuzzer has its own logic to skip accidentally writing to MMIO regions,
      we can call the DMA callback outside the flatview_translate loop.
      
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>