  1. Sep 23, 2020
    • qemu/atomic.h: rename atomic_ to qatomic_ · d73415a3
      Stefan Hajnoczi authored
      
      clang's C11 atomic_fetch_*() functions only take a C11 atomic type
      pointer argument. QEMU uses direct types (int, etc) and this causes a
      compiler error when QEMU code calls these functions in a source file
      that also includes <stdatomic.h> via a system header file:
      
        $ CC=clang CXX=clang++ ./configure ... && make
        ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
      
      Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
      used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
      and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
      searched GitHub for existing "qatomic_" users but there seem to be none.
      
      This patch was generated using:
      
        $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
          sort -u >/tmp/changed_identifiers
        $ for identifier in $(</tmp/changed_identifiers); do
              sed -i "s%\<$identifier\>%q$identifier%g" \
                  $(git grep -I -l "\<$identifier\>")
          done
      
      I manually fixed line-wrap issues and misaligned rST tables.
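
      A minimal sketch of the clash and the renamed API side by side
      (illustrative, not QEMU source; qatomic_fetch_inc() is reduced
      here to the __atomic builtin that QEMU's atomic.h wraps):

        /* clash.c: compile with `clang -std=c11 -c clash.c` */
        #include <stdatomic.h>

        static unsigned counter;   /* plain, non-_Atomic type */

        /* Simplified stand-in for QEMU's qatomic_fetch_inc(): the
         * __atomic builtins accept plain types, unlike the C11
         * atomic_fetch_*() generics claimed by <stdatomic.h>. */
        #define qatomic_fetch_inc(ptr) \
                __atomic_fetch_add(ptr, 1, __ATOMIC_SEQ_CST)

        unsigned bump(void)
        {
            /* atomic_fetch_add(&counter, 1) would fail here under
             * clang: "address argument to atomic operation must be
             * a pointer to _Atomic type" */
            return qatomic_fetch_inc(&counter);
        }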
      
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
  2. May 27, 2020
    • cpus-common: ensure auto-assigned cpu_indexes don't clash · 716386e3
      Alex Bennée authored
      
      Basing the cpu_index on the number of currently allocated vCPUs fails
      when vCPUs aren't removed in a LIFO manner. This is especially true
      when we are allocating a cpu_index for each guest thread in
      linux-user where there is no ordering constraint on their allocation
      and de-allocation.
      
      [I've dropped the assert that guarded against out-of-order
      removal, as this should probably be caught higher up the stack.
      Maybe we could just ifdef it under CONFIG_SOFTMMU?]
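
      A sketch of the resulting allocation scheme (illustrative and
      simplified; CPUState and CPU_FOREACH are QEMU's):

        /* Pick the next cpu_index by scanning live vCPUs for the
         * largest index in use and adding one.  Count-based
         * allocation breaks on non-LIFO removal: with indexes
         * {0, 1, 2}, removing vCPU 1 leaves a count of 2, so the
         * next vCPU would reuse the still-live index 2; max-plus-one
         * returns 3 instead. */
        static int cpu_get_free_index(void)
        {
            CPUState *some_cpu;
            int max_cpu_index = 0;

            CPU_FOREACH(some_cpu) {
                if (some_cpu->cpu_index >= max_cpu_index) {
                    max_cpu_index = some_cpu->cpu_index + 1;
                }
            }
            return max_cpu_index;
        }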
      
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Acked-by: Igor Mammedov <imammedo@redhat.com>
      Cc: Nikolay Igotti <igotti@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Message-Id: <20200520140541.30256-13-alex.bennee@linaro.org>
  3. Aug 20, 2019
    • cpus-common: nuke finish_safe_work · e533f45d
      Roman Kagan authored
      
      It was introduced in commit ab129972,
      with the following motivation:
      
        Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
        qemu_cpu_list_lock: together with a call to exclusive_idle (via
        cpu_exec_start/end) in cpu_list_add, this protects exclusive work
        against concurrent CPU addition and removal.
      
      However, it seems to be redundant, because the cpu-exclusive
      infrastructure provides sufficient protection against the newly added
      CPU starting execution while the cpu-exclusive work is running, and the
      aforementioned traversing of the cpu list is protected by
      qemu_cpu_list_lock.
      
      Besides, this appears to be the only place where the cpu-exclusive
      section is entered with the BQL taken, which has been found to
      trigger an AB-BA deadlock as follows:
      
          vCPU thread                             main thread
          -----------                             -----------
      async_safe_run_on_cpu(self,
                            async_synic_update)
      ...                                         [cpu hot-add]
      process_queued_cpu_work()
        qemu_mutex_unlock_iothread()
                                                  [grab BQL]
        start_exclusive()                         cpu_list_add()
        async_synic_update()                        finish_safe_work()
          qemu_mutex_lock_iothread()                  cpu_exec_start()
      
      So remove it.  This paves the way to establishing a strict nesting rule
      of never entering the exclusive section with the BQL taken.
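
      A sketch of that nesting rule (illustrative; run_excl_work() is
      a hypothetical wrapper, while the lock and section calls are
      QEMU's):

        /* The BQL is dropped before entering the exclusive section
         * and re-acquired only after leaving it, so the two are
         * always taken in a single order and the AB-BA cycle shown
         * above cannot form. */
        static void run_excl_work(void (*work)(void))
        {
            qemu_mutex_unlock_iothread();  /* give up the BQL first */
            start_exclusive();             /* wait for vCPUs to exit */
            work();                        /* cpu-exclusive work */
            end_exclusive();
            qemu_mutex_lock_iothread();    /* re-take the BQL */
        }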
      
      Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
      Message-Id: <20190523105440.27045-2-rkagan@virtuozzo.com>
  4. Aug 23, 2018
    • qom: convert the CPU list to RCU · 068a5ea0
      Emilio G. Cota authored
      
      Iterating over the list without using atomics is undefined behaviour,
      since the list can be modified concurrently by other threads (e.g.
      every time a new thread is created in user-mode).
      
      Fix it by implementing the CPU list as an RCU QTAILQ. This requires
      a little extra work to traverse the list in reverse order (see the
      previous patch), but other than that the conversion is trivial.
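
      A sketch of the read side after the conversion (illustrative;
      the list macros follow QEMU's rcu_queue.h, and use_cpu() is a
      hypothetical stand-in):

        CPUState *cpu;

        rcu_read_lock();
        QTAILQ_FOREACH_RCU(cpu, &cpus, node) {
            /* entries cannot be freed out from under us while the
             * read-side critical section is open */
            use_cpu(cpu);
        }
        rcu_read_unlock();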
      
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <20180819091335.22863-12-cota@braap.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>