- Sep 16, 2023
-
Richard Henderson authored
Split out int_st_mmio_leN, to be used by both do_st_mmio_leN and do_st16_mmio_leN. Move the locks down into the two functions, since each one now covers all accesses to one page.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
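A minimal self-contained sketch of the refactor's shape: the stand-in page buffer, the pthread mutex, and the byte-loop bodies are assumptions for illustration; only the function names come from the commit.

```c
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;

/* Shared little-endian store loop; the caller must hold io_lock. */
static void int_st_mmio_leN(uint8_t *page, uint64_t val, int size)
{
    for (int i = 0; i < size; i++) {
        page[i] = val >> (i * 8);
    }
}

/* Each public entry point takes the lock once, covering all of its
 * accesses to the one page. */
static void do_st_mmio_leN(uint8_t *page, uint64_t val, int size)
{
    pthread_mutex_lock(&io_lock);
    int_st_mmio_leN(page, val, size);
    pthread_mutex_unlock(&io_lock);
}

static void do_st16_mmio_leN(uint8_t *page, uint64_t lo, uint64_t hi)
{
    pthread_mutex_lock(&io_lock);
    int_st_mmio_leN(page, lo, 8);       /* low half first: little endian */
    int_st_mmio_leN(page + 8, hi, 8);
    pthread_mutex_unlock(&io_lock);
}
```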
-
Richard Henderson authored
Split out int_ld_mmio_beN, to be used by both do_ld_mmio_beN and do_ld16_mmio_beN. Move the locks down into the two functions, since each one now covers all accesses to one page.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Avoid multiple calls to io_prepare for unaligned accesses. One call to do_st_mmio_leN will never cross pages.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Avoid multiple calls to io_prepare for unaligned accesses. One call to do_ld_mmio_beN will never cross pages.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Push computation down into the if statements, to the point where the data is used.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
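A generic before/after illustration of the pattern, with hypothetical names rather than the commit's actual code:

```c
/* Before: the value is computed up front, even on the branch that
 * never uses it. */
int before(int cond, int a, int b)
{
    int sum = a + b;
    if (cond) {
        return sum;
    }
    return 0;
}

/* After: the computation is pushed down to the point the data is used. */
int after(int cond, int a, int b)
{
    if (cond) {
        return a + b;
    }
    return 0;
}
```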
-
Richard Henderson authored
Rather than saving MemoryRegionSection and offset, save phys_addr and MemoryRegion. This matches up much more closely with the plugin API.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Since the introduction of CPUTLBEntryFull, we can recover the full CPU address space physical address without having to examine the MemoryRegionSection.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
This is common code extracted from io_readx and io_writex.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Now that we defer the address space update and tlb_flush until the next async_run_on_cpu, the plugin run at the end of the instruction no longer has to contend with a flushed tlb. Therefore, delete SavedIOTLB entirely. Properly return false from tlb_plugin_lookup when we do not have a tlb match. This fixes a bug in which SavedIOTLB had stale data, because there were multiple i/o accesses within a single insn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230831030904.1194667-2-richard.henderson@linaro.org>
-
- Sep 15, 2023
-
LIU Zhiwei authored
When the memory region is RAM, the lower TARGET_PAGE_BITS are not the physical section number; instead, their value is always 0. Add a comment and an assert to make this clear.
Signed-off-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <20230901060118.379-1-zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
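A self-contained sketch of the packing idea behind the assert, using stand-in macros and a hypothetical pack() helper, not QEMU's code: a page-aligned address leaves TARGET_PAGE_BITS low bits free for a section index, and per this commit that field is always 0 for RAM.

```c
#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

/* Store a small section index in the low bits of a page-aligned address. */
static uint64_t pack(uint64_t page_addr, uint64_t section_idx)
{
    assert((page_addr & ~TARGET_PAGE_MASK) == 0);           /* page aligned */
    assert(section_idx < ((uint64_t)1 << TARGET_PAGE_BITS)); /* fits below */
    return page_addr | section_idx;
}
```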
-
Nicholas Piggin authored
mttcg asserts that an execution ending with EXCP_HALTED must have cpu->halted set. However, between the event or instruction that sets cpu->halted and requests exit, and the assertion here, an asynchronous event could clear cpu->halted. This leads to crashes running AIX on ppc/pseries, because it uses the H_CEDE/H_PROD hcalls: H_CEDE sets self->halted = 1 and H_PROD sets the other cpu->halted = 0 and kicks it. H_PROD could be turned into an interrupt to wake, but several other places in ppc, sparc, and semihosting follow what looks like a similar pattern, setting halted = 0 directly. So remove this assertion.
Reported-by: Ivan Warren <ivan@vmfacility.fr>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20230829010658.8252-1-npiggin@gmail.com>
[rth: Keep the case label and adjust the comment.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Sep 07, 2023
-
Paolo Bonzini authored
While the option still needs to be parsed in the configure script (it's needed by tests/tcg, and also to decide about recursing into contrib/plugins), passing it to Meson can be done with -D instead of using config-host.mak.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Aug 31, 2023
-
Michael Tokarev authored
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Message-ID: <20230823065335.1919380-18-mjt@tls.msk.ru>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-ID: <20230823065335.1919380-19-mjt@tls.msk.ru>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
-
- Aug 29, 2023
-
Richard Henderson authored
After system startup, run the update to memory_dispatch and the tlb_flush on the cpu. This eliminates a race, wherein a running cpu sees the memory_dispatch change but has not yet seen the tlb_flush. Since the update now happens on the cpu, we need not use qatomic_rcu_read to protect the read of memory_dispatch.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Aug 24, 2023
-
Anton Johansson authored
As we are now using vaddr for representing guest addresses, update the static assert to check that vaddr fits in the run_on_cpu_data union.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-10-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
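The check itself is tiny; here is a stand-alone sketch with simplified stand-in types (QEMU's real vaddr and run_on_cpu_data differ, and QEMU uses its own build-assert macro rather than plain C11 _Static_assert):

```c
#include <stdint.h>

typedef uint64_t vaddr;          /* stand-in for QEMU's vaddr */

typedef union {                  /* simplified stand-in union */
    void *host_ptr;
    uint64_t host_u64;
} run_on_cpu_data;

_Static_assert(sizeof(vaddr) <= sizeof(run_on_cpu_data),
               "vaddr must fit in run_on_cpu_data");
```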
-
Anton Johansson authored
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-9-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Anton Johansson authored
Changes the address type of the guest memory read/write functions from target_ulong to abi_ptr. (abi_ptr is currently typedef'd to target_ulong but that will change in a following commit.) This will reduce the coupling between accel/ and target/. Note: Function pointers that point to cpu_[st|ld]*() in target/riscv and target/rx are also updated in this commit.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-6-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Aug 10, 2023
-
Richard Henderson authored
When load_atom_extract_al16_or_al8 is inexpensive, we want to use it early, in order to avoid the overhead of required_atomicity. However, we must not read past the end of the page. If there are more than 8 bytes remaining, then both the "aligned 16" and "aligned 8" paths align down so that the read has at least 16 bytes remaining on the page.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
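The page arithmetic can be checked exhaustively on a toy page. This sketch uses a hypothetical 256-byte page (any multiple of 16 behaves the same) to verify that whenever more than 8 bytes remain, an aligned-down 16-byte read never crosses the page end:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE 256u   /* stand-in page size; must be a multiple of 16 */

int main(void)
{
    for (uint32_t addr = 0; addr < PAGE; addr++) {
        uint32_t remaining = PAGE - addr;
        if (remaining > 8) {
            uint32_t aligned = addr & ~15u;   /* align down to 16 */
            assert(aligned + 16 <= PAGE);     /* stays on the page */
        }
    }
    return 0;
}
```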
-
- Aug 06, 2023
-
Mikhail Tyutin authored
Apply save_iotlb_data() to io_readx() as well as to io_writex(). This fixes a SEGFAULT when plugins call qemu_plugin_hwaddr_phys_addr() for addresses inside an MMIO region.
Signed-off-by: Dmitriy Solovev <d.solovev@yadro.com>
Signed-off-by: Mikhail Tyutin <m.tyutin@yadro.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230804110903.19968-1-m.tyutin@yadro.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Aug 05, 2023
-
Richard Henderson authored
In the single-page case we were issuing misaligned i/o to the memory subsystem, which does not handle it properly. Split such accesses via do_{ld,st}_mmio_*.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1800
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
If the address and size are aligned, send larger chunks to the memory subsystem. This will be required to make more use of these helpers.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
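A hedged sketch of the chunk-selection idea with a hypothetical helper, not QEMU's code: pick the largest power-of-two access, up to 8 bytes, that both the address alignment and the remaining length allow.

```c
#include <stdint.h>

static int next_chunk(uint64_t addr, int remaining)
{
    int size = 8;
    /* Shrink until the access is aligned and fits what is left. */
    while (size > 1 && ((addr & (size - 1)) || remaining < size)) {
        size >>= 1;
    }
    return size;
}
```

For example, next_chunk(0x1004, 12) returns 4: an 8-byte access would be misaligned, but a 4-byte one is both aligned and within bounds.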
-
Richard Henderson authored
Replace MMULookupPageData* with CPUTLBEntryFull, addr, size. Move QEMU_IOTHREAD_LOCK_GUARD to the caller. This simplifies the usage from do_ld16_beN and do_st16_leN, where we weren't locking the entire operation, and required hoop jumping for passing addr and size.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Jul 31, 2023
-
Richard Henderson authored
On overflow of code_gen_buffer, we unlock the guest pages we had been translating, but failed to clear gen_tb. On restart, if we cannot allocate a TB, we exit to the main loop to perform the flush of all TBs as soon as possible. With garbage in gen_tb, we hit an assert:
../src/accel/tcg/tb-maint.c:348:page_unlock__debug: assertion failed: (page_is_locked(pd))
Fixes: deba7870 ("accel/tcg: Always lock pages before translation")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Jul 24, 2023
-
Luca Bonissi authored
These should match 'start' as target_ulong, not target_long. On 32-bit targets, the parameter was sign-extended to uint64_t, so only the first mmap within the upper 2GB memory can succeed.
Signed-off-by: Luca Bonissi <qemu@bonslack.org>
Message-Id: <327460e2-0ebd-9edb-426b-1df80d16c32a@bonslack.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
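The underlying C pitfall is easy to demonstrate stand-alone (illustrative values; int32_t/uint32_t stand in for target_long/target_ulong on a 32-bit target):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t start = 0x80001000u;   /* an address in the upper 2GB */

    uint64_t as_signed   = (uint64_t)(int32_t)start;   /* sign-extended */
    uint64_t as_unsigned = (uint64_t)start;            /* zero-extended */

    printf("via target_long:  0x%016" PRIx64 "\n", as_signed);   /* 0xffffffff80001000 */
    printf("via target_ulong: 0x%016" PRIx64 "\n", as_unsigned); /* 0x0000000080001000 */
    return 0;
}
```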
-
Anton Johansson authored
In replacing target_ulong with vaddr and TARGET_FMT_lx with VADDR_PRIx, the zero-padding of TARGET_FMT_lx got lost. Re-add 16-wide zero-padding for logging consistency.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anton Johansson <anjo@rev.ng>
Message-Id: <20230713120746.26897-1-anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Jul 23, 2023
-
Richard Henderson authored
For user-only, the probe for page writability may race with another thread's mprotect. Take the mmap_lock around the operation. This is still faster than the start/end_exclusive fallback.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
In the initial commit, cdfac37b, the sense of the test is incorrect, as the -1/0 return was confusing. In bef6f008, we mechanically invert all callers while changing to false/true return, preserving the incorrectness of the test. Now that the return sense is sane, it's easy to see that if !write, then the page is not modifiable (i.e. most likely read-only, with PROT_NONE handled via SIGSEGV).
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Jul 17, 2023
-
Peter Maydell authored
In commit f0a08b09 we changed the type of the PC from target_ulong to vaddr. In doing so we inadvertently dropped the zero-padding on the PC in trace lines (the second item inside the [] in these lines). They used to look like this on AArch64, for instance:
Trace 0: 0x7f2260000100 [00000000/0000000040000000/00000061/ff200000]
and now they look like this:
Trace 0: 0x7f4f50000100 [00000000/40000000/00000061/ff200000]
and if the PC happens to be somewhere low like 0x5000 then the field is shown as /5000/. This is because TARGET_FMT_lx is a "%08x" or "%016x" specifier, depending on TARGET_LONG_SIZE, whereas VADDR_PRIx is just PRIx64 with no width specifier. Restore the zero-padding by adding an 016 width specifier to this tracing and a couple of others that were similarly recently changed to use VADDR_PRIx without a width specifier. We can't unfortunately restore the "32-bit guests are padded to 8 hex digits and 64-bit guests to 16 hex digits" behaviour so easily.
Fixes: f0a08b09 ("accel/tcg/cpu-exec.c: Widen pc to vaddr")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Anton Johansson <anjo@rev.ng>
Message-id: 20230711165434.4123674-1-peter.maydell@linaro.org
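The fix is just a width specifier; a minimal demonstration of the difference:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t pc = 0x5000;

    printf("[%" PRIx64 "]\n", pc);      /* PRIx64 alone       -> [5000] */
    printf("[%016" PRIx64 "]\n", pc);   /* with the 016 width -> [0000000000005000] */
    return 0;
}
```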
-
- Jul 15, 2023
-
Richard Henderson authored
We adjust CONFIG_ATOMIC128 and CONFIG_CMPXCHG128 with CONFIG_ATOMIC128_OPT in atomic128.h. It is difficult to tell when those changes have been applied with the ifdef we must use with CONFIG_CMPXCHG128. So instead use HAVE_CMPXCHG128, which triggers -Werror=undef when the proper header has not been included. This improves tcg_gen_atomic_cmpxchg_i128 for an s390x host, which requires CONFIG_ATOMIC128_OPT. Without this we fall back to EXCP_ATOMIC to single-step 128-bit atomics, which is slow enough to cause some tests to time out.
Reported-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
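Why #if is preferable to #ifdef here, as a stand-alone sketch; the macro is defined inline below, whereas in QEMU it comes from the proper header:

```c
#include <stdio.h>

#define HAVE_CMPXCHG128 1   /* normally provided by the atomic128 header */

int main(void)
{
/* With -Werror=undef, forgetting that header makes the #if below a
 * compile error; an "#ifdef CONFIG_CMPXCHG128" would just silently
 * evaluate false and take the slow fallback. */
#if HAVE_CMPXCHG128
    puts("128-bit cmpxchg fast path");
#else
    puts("EXCP_ATOMIC single-step fallback");
#endif
    return 0;
}
```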
-
Richard Henderson authored
We had done this for user-mode by invoking page_protect within the translator loop. Extend this to handle system mode as well. Move page locking out of tb_link_page.
Reported-by: Liren Wei <lrwei@bupt.edu.cn>
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
-
Richard Henderson authored
Replace the 0/-1 result with true/false. Invert the sense of the test of all callers. Document the function. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-Id: <20230707204054.8792-25-richard.henderson@linaro.org>
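A generic illustration of why the boolean return reads better; probe_old/probe_new are hypothetical names, not the real function:

```c
#include <stdbool.h>

/* Old convention: 0 on success, -1 on failure. "if (probe_old(x))"
 * looks like a success test but actually takes the failure path. */
static int probe_old(bool ok) { return ok ? 0 : -1; }

/* New convention: the test reads the way it behaves. */
static bool probe_new(bool ok) { return ok; }
```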
-
Richard Henderson authored
Only PAGE_WRITE needs special attention; all others can be handled as we do for PAGE_READ. Adjust the mask.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230707204054.8792-24-richard.henderson@linaro.org>
-
Richard Henderson authored
Use the interval tree to locate an unused range in the VM.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-17-richard.henderson@linaro.org>
-
Richard Henderson authored
Examine the interval tree to validate that a region has no existing mappings.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-10-richard.henderson@linaro.org>
-
Richard Henderson authored
Share the setjmp cleanup between cpu_exec_step_atomic and cpu_exec_setjmp.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Jul 03, 2023
-
Alex Bennée authored
The lack of SVE memory instrumentation has been an omission in plugin handling since it was introduced. Fortunately we can utilise the probe_* functions to force all memory accesses to follow the slow path. We do this by checking the access type and the presence of plugin memory callbacks and, if set, returning the TLB_MMIO flag. We have to jump through a few hoops in user mode to re-use the flag, but it has the desired effect:
./qemu-system-aarch64 -display none -serial mon:stdio \
  -M virt -cpu max -semihosting-config enable=on \
  -kernel ./tests/tcg/aarch64-softmmu/memory-sve \
  -plugin ./contrib/plugins/libexeclog.so,ifilter=st1w,afilter=0x40001808 -d plugin
gives (disas doesn't currently understand st1w):
0, 0x40001808, 0xe54342a0, ".byte 0xa0, 0x42, 0x43, 0xe5", store, 0x40213010, RAM, store, 0x40213014, RAM, store, 0x40213018, RAM
And for user-mode:
./qemu-aarch64 \
  -plugin contrib/plugins/libexeclog.so,afilter=0x4007c0 \
  -d plugin \
  ./tests/tcg/aarch64-linux-user/sha512-sve
gives:
1..10
ok 1 - do_test(&tests[i])
0, 0x4007c0, 0xa4004b80, ".byte 0x80, 0x4b, 0x00, 0xa4", load, 0x5500800370, load, 0x5500800371, load, 0x5500800372, load, 0x5500800373, load, 0x5500800374, load, 0x5500800375, load, 0x5500800376, load, 0x5500800377, load, 0x5500800378, load, 0x5500800379, load, 0x550080037a, load, 0x550080037b, load, 0x550080037c, load, 0x550080037d, load, 0x550080037e, load, 0x550080037f, load, 0x5500800380, load, 0x5500800381, load, 0x5500800382, load, 0x5500800383, load, 0x5500800384, load, 0x5500800385, load, 0x5500800386, load, 0x5500800387, load, 0x5500800388, load, 0x5500800389, load, 0x550080038a, load, 0x550080038b, load, 0x550080038c, load, 0x550080038d, load, 0x550080038e, load, 0x550080038f, load, 0x5500800390, load, 0x5500800391, load, 0x5500800392, load, 0x5500800393, load, 0x5500800394, load, 0x5500800395, load, 0x5500800396, load, 0x5500800397, load, 0x5500800398, load, 0x5500800399, load, 0x550080039a, load, 0x550080039b, load, 0x550080039c, load, 0x550080039d, load, 0x550080039e, load, 0x550080039f, load, 0x55008003a0, load, 0x55008003a1, load, 0x55008003a2, load, 0x55008003a3, load, 0x55008003a4, load, 0x55008003a5, load, 0x55008003a6, load, 0x55008003a7, load, 0x55008003a8, load, 0x55008003a9, load, 0x55008003aa, load, 0x55008003ab, load, 0x55008003ac, load, 0x55008003ad, load, 0x55008003ae, load, 0x55008003af
(4007c0 is the ld1b in the sha512-sve)
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Robert Henry <robhenry@microsoft.com>
Cc: Aaron Lindsay <aaron@os.amperecomputing.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230630180423.558337-20-alex.bennee@linaro.org>
-
- Jul 01, 2023
-
Mark Cave-Ayland authored
Ensure that both the start and last addresses are within the same guest page.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230629082522.606219-3-mark.cave-ayland@ilande.co.uk>
[rth: Use tcg_debug_assert, simplify the expression]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
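The check reduces to comparing page-mask bits; a sketch with stand-in macros (QEMU's version uses tcg_debug_assert and TARGET_PAGE_MASK):

```c
#include <assert.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

/* start and last are on the same guest page iff their page bits agree. */
static void assert_same_page(uint64_t start, uint64_t last)
{
    assert((start & TARGET_PAGE_MASK) == (last & TARGET_PAGE_MASK));
}
```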
-
Mark Cave-Ayland authored
Due to a copy-paste error in tb_invalidate_phys_range, the wrong start address was passed to tb_invalidate_phys_page_range__locked. The correct behaviour is to use the start of each page in turn.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Fixes: e506ad6a ("accel/tcg: Pass last not end to tb_invalidate_phys_range")
Message-Id: <20230629082522.606219-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-