- May 25, 2021
-
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-13-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Split these operations out into a header that can be shared between Neon and SVE. The "sat" pointer acts both as a boolean controlling the saturating behavior and as a selector for the difference in behavior between Neon and SVE -- QC bit or no QC bit.

Widen the shift operand in the new helpers, as the SVE2 insns treat the whole input element as significant. For the Neon uses, truncate the shift to int8_t while passing the parameter.

Implement right-shift rounding as

    tmp = src >> (shift - 1);
    dst = (tmp >> 1) + (tmp & 1);

This is the same number of instructions as the current

    tmp = 1 << (shift - 1);
    dst = (src + tmp) >> shift;

without any possibility of intermediate overflow.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
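To make the overflow argument concrete, here is a small self-contained C sketch (function names are made up for illustration; this is not the QEMU helper code) comparing the two formulations:

    #include <assert.h>
    #include <stdint.h>

    /* New formulation: tmp never exceeds the range of src, so no
     * intermediate overflow is possible. */
    static int64_t rshr_round_safe(int64_t src, unsigned shift)
    {
        int64_t tmp = src >> (shift - 1);
        return (tmp >> 1) + (tmp & 1);
    }

    /* Old formulation: src + (1 << (shift - 1)) can overflow when src
     * is close to the top of its type. */
    static int64_t rshr_round_naive(int64_t src, unsigned shift)
    {
        int64_t tmp = (int64_t)1 << (shift - 1);
        return (src + tmp) >> shift;
    }

    int main(void)
    {
        /* Both agree when the addition cannot overflow... */
        assert(rshr_round_safe(1000, 3) == rshr_round_naive(1000, 3));
        /* ...but only the first remains valid near INT64_MAX, where the
         * old form's addition would overflow. */
        assert(rshr_round_safe(INT64_MAX, 1) == (INT64_MAX >> 1) + 1);
        return 0;
    }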
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
For MUL, we can rely on generic support. For SMULH and UMULH, create some trivial helpers. For PMUL, back in a21bb78e, we organized helper_gvec_pmul_b in preparation for this use.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
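As a rough idea of what such a "trivial helper" computes (hypothetical names and layout, not the actual QEMU gvec helper signature), a signed high-half multiply over 32-bit lanes looks like:

    #include <stddef.h>
    #include <stdint.h>

    /* SMULH keeps the high half of the widened product of each lane. */
    static void smulh_s32(int32_t *d, const int32_t *n, const int32_t *m,
                          size_t elements)
    {
        for (size_t i = 0; i < elements; i++) {
            d[i] = (int32_t)(((int64_t)n[i] * (int64_t)m[i]) >> 32);
        }
    }

UMULH is the same with unsigned types.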
-
Richard Henderson authored
This will be used for SVE2 ISA subset enablement.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525010358.152808-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Philippe Mathieu-Daudé authored
When selecting an ARM target on Debian unstable, we get:

  Compiling C++ object libcommon.fa.p/disas_libvixl_vixl_utils.cc.o
  FAILED: libcommon.fa.p/disas_libvixl_vixl_utils.cc.o
  c++ -Ilibcommon.fa.p -I. -I.. [...] -o libcommon.fa.p/disas_libvixl_vixl_utils.cc.o -c ../disas/libvixl/vixl/utils.cc
  In file included from /home/philmd/qemu/disas/libvixl/vixl/utils.h:30,
                   from ../disas/libvixl/vixl/utils.cc:27:
  /usr/include/string.h:36:43: error: missing binary operator before token "("
     36 | #if defined __cplusplus && (__GNUC_PREREQ (4, 4) \
        |                                           ^
  /usr/include/string.h:53:62: error: missing binary operator before token "("
     53 | #if defined __USE_MISC || defined __USE_XOPEN || __GLIBC_USE (ISOC2X)
        |                                                              ^
  /usr/include/string.h:165:21: error: missing binary operator before token "("
    165 |      || __GLIBC_USE (LIB_EXT2) || __GLIBC_USE (ISOC2X))
        |                     ^
  /usr/include/string.h:174:43: error: missing binary operator before token "("
    174 | #if defined __USE_XOPEN2K8 || __GLIBC_USE (LIB_EXT2) || __GLIBC_USE (ISOC2X)
        |                                           ^
  /usr/include/string.h:492:19: error: missing binary operator before token "("
    492 | #if __GNUC_PREREQ (3,4)
        |                   ^

Relevant information from the host:

  $ lsb_release -d
  Description:    Debian GNU/Linux 11 (bullseye)
  $ gcc --version
  gcc (Debian 10.2.1-6) 10.2.1 20210110
  $ dpkg -S /usr/include/string.h
  libc6-dev: /usr/include/string.h
  $ apt-cache show libc6-dev
  Package: libc6-dev
  Version: 2.31-11

Partially cherry-pick vixl commit 78973f258039f6e96 [*]:

  "Refactor VIXL to use `extern` block when including C header
   that do not have a C++ counterpart."

which is similar to commit 875df03b ('osdep: protect qemu/osdep.h with extern "C"').

[*] https://git.linaro.org/arm/vixl.git/commit/?id=78973f258039f6e96

Buglink: https://bugs.launchpad.net/qemu/+bug/1914870
Suggested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20210516171023.510778-1-f4bug@amsat.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Rebecca Cran authored
Indicate support for FEAT_TLBIOS and FEAT_TLBIRANGE by setting ID_AA64ISAR0.TLB to 2 for the 'max' AArch64 CPU type.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-4-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
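For reference, ID_AA64ISAR0_EL1.TLB lives in bits [59:56], where 1 advertises FEAT_TLBIOS only and 2 advertises both FEAT_TLBIOS and FEAT_TLBIRANGE. A self-contained sketch of what setting the field amounts to (the helper name is made up; QEMU's own code uses its register-field macros instead):

    #include <stdint.h>
    #include <stdio.h>

    #define ID_AA64ISAR0_TLB_SHIFT 56
    #define ID_AA64ISAR0_TLB_MASK  (0xfull << ID_AA64ISAR0_TLB_SHIFT)

    /* Replace the 4-bit TLB field of an ID_AA64ISAR0 value. */
    static uint64_t isar0_set_tlb(uint64_t isar0, uint64_t value)
    {
        return (isar0 & ~ID_AA64ISAR0_TLB_MASK) |
               ((value & 0xf) << ID_AA64ISAR0_TLB_SHIFT);
    }

    int main(void)
    {
        uint64_t isar0 = isar0_set_tlb(0, 2); /* TLBIOS + TLBIRANGE */
        printf("ID_AA64ISAR0 = 0x%016llx\n", (unsigned long long)isar0);
        return 0;
    }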
-
Rebecca Cran authored
ARMv8.4 adds the mandatory FEAT_TLBIOS. It provides TLBI maintenance instructions that extend to the Outer Shareable domain.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-3-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Rebecca Cran authored
ARMv8.4 adds the mandatory FEAT_TLBIRANGE. It provides TLBI maintenance instructions that apply to a range of input addresses.

Signed-off-by: Rebecca Cran <rebecca@nuviainc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210512182337.18563-2-rebecca@nuviainc.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-9-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Rename to match tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-8-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Forward tlb_flush_page_bits_by_mmuidx_all_cpus_synced to tlb_flush_range_by_mmuidx_all_cpus_synced passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-7-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Forward tlb_flush_page_bits_by_mmuidx_all_cpus to tlb_flush_range_by_mmuidx_all_cpus passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-6-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Forward tlb_flush_page_bits_by_mmuidx to tlb_flush_range_by_mmuidx passing TARGET_PAGE_SIZE.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-5-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
We will not be able to fit address + length into a 64-bit packet. Drop this optimization before re-organizing this code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-10-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMM: Moved patch earlier in the series]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Rename the structure to match the rename of tlb_flush_range_locked.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-4-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Rename tlb_flush_page_bits_locked() -> tlb_flush_range_locked(), and have callers pass a length argument (currently TARGET_PAGE_SIZE) via the TLBFlushPageBitsByMMUIdxData structure.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-3-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Using g_memdup is a bit more compact than g_new + memcpy.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20210509151618.2331764-2-f4bug@amsat.org
Message-Id: <20210508201640.1045808-1-richard.henderson@linaro.org>
[PMD: Split from bigger patch]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
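The two patterns, side by side, with a placeholder struct (glib API; g_memdup takes the source pointer and a byte count):

    #include <glib.h>
    #include <string.h>

    typedef struct {
        int addr;
        int len;
    } Data;

    /* Older pattern: allocate, then copy. */
    static Data *copy_with_gnew(const Data *src)
    {
        Data *d = g_new(Data, 1);
        memcpy(d, src, sizeof(*d));
        return d;
    }

    /* g_memdup allocates and copies in one call. */
    static Data *copy_with_gmemdup(const Data *src)
    {
        return g_memdup(src, sizeof(*src));
    }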
-
Peter Maydell authored
When an M-profile CPU is restoring registers from the stack on exception return, the stack pointer to use is determined based on bits in the magic exception return type value. We were not getting this logic entirely correct.

Whether we use one of the Secure stack pointers or one of the Non-Secure stack pointers depends on the EXCRET.S bit. However, whether we use the MSP or the PSP then depends on the SPSEL bit in either the CONTROL_S or CONTROL_NS register. We were incorrectly selecting MSP vs PSP based on the EXCRET.SPSEL bit. (In the pseudocode this is in the PopStack() function, which calls LookUpSp_with_security_mode(), which in turn looks at the relevant CONTROL.SPSEL bit.)

The buggy behaviour wasn't noticeable in most cases, because we write EXCRET.SPSEL to the CONTROL.SPSEL bit for the S/NS register selected by EXCRET.ES, so we only do the wrong thing when EXCRET.S and EXCRET.ES are different. This will happen when secure code takes a secure exception, which then tail-chains to a non-secure exception which finally returns to the original secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520130905.2049-1-peter.maydell@linaro.org
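A toy model of the selection rule described above (stand-in types and names, ignoring handler-mode and the other details of the real PopStack() logic):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t msp_s, psp_s, msp_ns, psp_ns;
        bool control_s_spsel, control_ns_spsel;
    } Banked;

    /* Security domain comes from EXCRET.S; MSP vs PSP comes from that
     * domain's CONTROL.SPSEL bit, not from EXCRET.SPSEL. */
    static uint32_t pop_stack_pointer(const Banked *b, bool excret_s)
    {
        bool spsel = excret_s ? b->control_s_spsel : b->control_ns_spsel;
        if (excret_s) {
            return spsel ? b->psp_s : b->msp_s;
        }
        return spsel ? b->psp_ns : b->msp_ns;
    }

    int main(void)
    {
        Banked b = { 0x1000, 0x2000, 0x3000, 0x4000, true, false };
        printf("secure return pops from 0x%x\n", pop_stack_pointer(&b, true));
        printf("non-secure return pops from 0x%x\n", pop_stack_pointer(&b, false));
        return 0;
    }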
-
Peter Maydell authored
The SSE-300 has an ITCM at 0x0000_0000 and a DTCM at 0x2000_0000. Currently we model these in the AN547 board, but this is conceptually wrong, because they are a part of the SSE-300 itself. Move the modelling of the TCMs out of mps2-tz.c into sse300.c. This has no guest-visible effects.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-7-peter.maydell@linaro.org
-
Peter Maydell authored
Currently we model the ITCM in the AN547's RAMInfo list. This is incorrect because this RAM is really a part of the SSE-300. We can't just delete it from the RAMInfo list, though, because this would make boot_ram_size() assert because it wouldn't be able to find an entry in the list covering guest address 0.

Allow a board to specify a boot RAM size manually if it doesn't have any RAM itself at address 0 and is relying on the SSE for that, and set the correct value for the AN547. The other boards can continue to use the "look it up from the RAMInfo list" logic.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-6-peter.maydell@linaro.org
-
Peter Maydell authored
Convert armsse_realize() to use ERRP_GUARD(), following the rules in include/qapi/error.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-5-peter.maydell@linaro.org
-
Peter Maydell authored
The SSE-300 was not correctly modelling its internal SRAMs:
 * the SRAM address width default is 18
 * the SRAM is mapped at 0x2100_0000, not 0x2000_0000 like the SSE-200 and IoTKit

The default address width is no longer guest-visible since our only SSE-300 board sets it explicitly to a non-default value, but following the hardware's default will help for any future boards we need to model.

Reported-by: Devaraj Ranganna <devaraj.ranganna@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-4-peter.maydell@linaro.org
-
Peter Maydell authored
The AN547 sets the SRAM_ADDR_WIDTH for the SSE-300 to 21; since this is not the default value for the SSE-300, model this in mps2-tz.c as a per-board value.

Reported-by: Devaraj Ranganna <devaraj.ranganna@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-3-peter.maydell@linaro.org
-
Peter Maydell authored
The SRAM at 0x2000_0000 is part of the SSE-200 itself, and we model it that way in hw/arm/armsse.c (along with the associated MPCs). We incorrectly also added an entry to the RAMInfo array for the AN524 in hw/arm/mps2-tz.c, which was pointless because the CPU would never see it. Delete it.

The bug had no guest-visible effect because devices in the SSE-200 take priority over those in the board model (armsse.c maps s->board_memory at priority -2).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510190844.17799-2-peter.maydell@linaro.org
-
Peter Maydell authored
In icc_eoir_write() we assume that we can identify the group of the IRQ being completed based purely on which register is being written to and the current CPU state, and that "CPU state matches group indicated by register" is the only necessary access check.

This isn't correct: if the CPU is not in Secure state then EOIR1 will only complete Group 1 NS IRQs, but if the CPU is in EL3 it can complete both Group 1 S and Group 1 NS IRQs. (The pseudocode ICC_EOIR1_EL1 makes this clear.) We were also missing the logic to prevent EOIR0 writes completing G0 IRQs when they should not.

Rearrange the logic to first identify the group of the current highest priority interrupt and then look at whether we should complete it or ignore the access based on which register was accessed and the state of the CPU. The resulting behavioural change is:
 * EL3 can now complete G1NS interrupts
 * G0 interrupt completion is now ignored if the GIC and the CPU have the security extension enabled and the CPU is not secure

Reported-by: Chan Kim <ckim@etri.re.kr>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210510150016.24910-1-peter.maydell@linaro.org
-
Eric Auger authored
6d9cd115 ("hw/arm/smmuv3: Enforce invalidation on a power of two range") failed to completely fix misalignment issues with range invalidation. For instance invalidations patterns like "invalidate 32 4kB pages starting from 0xff395000 are not correctly handled" due to the fact the previous fix only made sure the number of invalidated pages were a power of 2 but did not properly handle the start address was not aligned with the range. This can be noticed when boothing a fedora 33 with protected virtio-blk-pci. Signed-off-by:
Eric Auger <eric.auger@redhat.com> Fixes: 6d9cd115 ("hw/arm/smmuv3: Enforce invalidation on a power of two range") Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
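What properly handling an unaligned start amounts to, roughly, is breaking the request into naturally aligned power-of-two runs of pages, each of which can be invalidated on its own. A standalone sketch of that splitting (illustrative only, not the actual smmuv3.c code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    static uint64_t pow2floor(uint64_t v)
    {
        return v ? 1ull << (63 - __builtin_clzll(v)) : 0;
    }

    /* Emit invalidations covering num_pages pages at addr (page aligned),
     * each chunk a power of two in size and aligned to its own size. */
    static void invalidate_range(uint64_t addr, uint64_t num_pages)
    {
        while (num_pages) {
            uint64_t chunk = pow2floor(num_pages);
            if (addr) {
                uint64_t align = 1ull << (__builtin_ctzll(addr) - PAGE_SHIFT);
                if (align < chunk) {
                    chunk = align;
                }
            }
            printf("  invalidate %" PRIu64 " page(s) at 0x%" PRIx64 "\n",
                   chunk, addr);
            addr += chunk << PAGE_SHIFT;
            num_pages -= chunk;
        }
    }

    int main(void)
    {
        /* The pattern from the commit message: 32 4kB pages at 0xff395000. */
        invalidate_range(0xff395000ull, 32);
        return 0;
    }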
-
- May 24, 2021
-
-
Peter Maydell authored
Pull request

(Resent due to an email preparation mistake.)

# gpg: Signature made Mon 24 May 2021 14:01:42 BST
# gpg:                using RSA key 8695A8BFD3F97CDAAC35775A9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" [full]
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>" [full]
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8

* remotes/stefanha-gitlab/tags/block-pull-request:
  coroutine-sleep: introduce qemu_co_sleep
  coroutine-sleep: replace QemuCoSleepState pointer with struct in the API
  coroutine-sleep: move timer out of QemuCoSleepState
  coroutine-sleep: allow qemu_co_sleep_wake that wakes nothing
  coroutine-sleep: disallow NULL QemuCoSleepState** argument
  coroutine-sleep: use a stack-allocated timer
  bitops.h: Improve find_xxx_bit() documentation
  multi-process: Initialize variables declared with g_auto*

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
target/xtensa updates for v6.1:

- don't generate extra EXCP_DEBUG on exception
- fix l32ex access ring
- clean up unaligned access

# gpg: Signature made Fri 21 May 2021 14:59:30 BST
# gpg:                using RSA key 2B67854B98E5327DCDEB17D851F9CC91F83FA044
# gpg:                issuer "jcmvbkbc@gmail.com"
# gpg: Good signature from "Max Filippov <filippov@cadence.com>" [unknown]
# gpg:                 aka "Max Filippov <max.filippov@cogentembedded.com>" [full]
# gpg:                 aka "Max Filippov <jcmvbkbc@gmail.com>" [full]
# Primary key fingerprint: 2B67 854B 98E5 327D CDEB 17D8 51F9 CC91 F83F A044

* remotes/xtensa/tags/20210521-xtensa:
  target/xtensa: clean up unaligned access
  target/xtensa: fix access ring in l32ex
  target/xtensa: don't generate extra EXCP_DEBUG on exception

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- May 21, 2021
-
-
Paolo Bonzini authored
Allow using QemuCoSleep to sleep forever until woken by qemu_co_sleep_wake. This makes the logic of qemu_co_sleep_ns_wakeable easy to understand.

In the future we will introduce an API that can work even if the sleep and wake happen from different threads. For now, initializing w->to_wake after timer_mod is fine because the timer can only fire in the same AioContext.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210517100548.28806-7-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Paolo Bonzini authored
Right now, users of qemu_co_sleep_ns_wakeable are simply passing a pointer to QemuCoSleepState by reference to the function. But QemuCoSleepState really is just a Coroutine*; making the content of the struct public is just as efficient and lets us skip the user_state_pointer indirection.

Since the usage is changed, take the occasion to rename the struct to QemuCoSleep.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210517100548.28806-6-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
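After this change the public struct is little more than the coroutine to wake; a sketch of the shape (the to_wake field name is taken from the commit messages above, while the exact qemu_co_sleep_ns_wakeable argument order is an assumption):

    typedef struct Coroutine Coroutine;   /* opaque, as in the real headers */

    /* The sleep state reduces to "which coroutine, if any, to wake". */
    typedef struct QemuCoSleep {
        Coroutine *to_wake;
    } QemuCoSleep;

    /*
     * Callers can then keep it by value on their own stack, for example:
     *
     *     QemuCoSleep w = { 0 };
     *     qemu_co_sleep_ns_wakeable(&w, QEMU_CLOCK_REALTIME, 10 * SCALE_MS);
     *     ...
     *     qemu_co_sleep_wake(&w);   // from the same AioContext
     */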
-
Paolo Bonzini authored
This simplification is enabled by the previous patch. Now aio_co_wake will only be called once, therefore we do not care about a spurious firing of the timer after a qemu_co_sleep_wake.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210517100548.28806-5-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
Paolo Bonzini authored
All callers of qemu_co_sleep_wake are checking whether they are passing a NULL argument inside the pointer-to-pointer: do the check in qemu_co_sleep_wake itself.

As a side effect, qemu_co_sleep_wake can be called more than once and it will only wake the coroutine once; after the first time, the argument will be set to NULL via *sleep_state->user_state_pointer. However, this would not be safe unless co_sleep_cb keeps using the QemuCoSleepState* directly, so make it go through the pointer-to-pointer instead.

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20210517100548.28806-4-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
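The clear-on-wake idea is easiest to see in isolation; a stand-in sketch of the pattern (made-up types and names, not QEMU's coroutine API):

    #include <stdio.h>

    typedef struct SleepState SleepState;
    struct SleepState {
        const char *who;
        SleepState **user_state_pointer;  /* points back at the caller's pointer */
    };

    static void sleep_wake(SleepState *state)
    {
        if (!state) {
            return;                       /* NULL check now lives in the callee */
        }
        *state->user_state_pointer = NULL; /* makes a second wake a no-op */
        printf("waking %s\n", state->who);
    }

    int main(void)
    {
        SleepState *sleeper = NULL;
        SleepState s = { "test coroutine", &sleeper };
        sleeper = &s;

        sleep_wake(sleeper);   /* wakes once and clears 'sleeper' */
        sleep_wake(sleeper);   /* sleeper is now NULL: silently ignored */
        return 0;
    }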
-