- May 17, 2023
-
Richard Henderson authored
Expand from TCGv to TCGTemp inline in the translators, and validate that the size matches tcg_ctx->addr_type. These inlines will eventually be seen only by target-specific code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
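A minimal standalone sketch of the shape such an inline might take; the type and context definitions here are simplified stand-ins, not QEMU's actual declarations:

    #include <assert.h>

    typedef enum { TCG_TYPE_I32, TCG_TYPE_I64 } TCGType;
    typedef struct { TCGType type; } TCGTemp;          /* stand-in */
    typedef struct { TCGType addr_type; } TCGContext;  /* stand-in */

    static TCGContext ctx = { TCG_TYPE_I64 };
    static TCGContext *tcg_ctx = &ctx;

    /* Translator-facing inline: expand an address temp to the generic
     * TCGTemp, validating its size against tcg_ctx->addr_type. */
    static inline TCGTemp *addr_temp(TCGTemp *addr)
    {
        assert(addr->type == tcg_ctx->addr_type);
        return addr;
    }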
-
- May 16, 2023
-
Richard Henderson authored
This will enable replacement of TARGET_LONG_BITS within tcg/.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Since we do this inside gen_empty_mem_cb anyway, do it earlier, during tcg expansion.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We only need to make copies for loads, when the destination overlaps the address. For now, only eliminate the copy for stores and 128-bit loads. Rename plugin_prep_mem_callbacks to plugin_maybe_preserve_addr, returning NULL if no copy is made.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
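A hedged sketch of the control flow described above, with QEMU's types reduced to opaque stand-ins; the parameter names are invented, and the copy/no-copy split follows the message (copy for loads, skip for stores and 128-bit loads):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct TCGv_i64_d *TCGv_i64;            /* opaque stand-in */
    extern TCGv_i64 tcg_temp_ebb_new_i64(void);     /* assumed QEMU API */
    extern void tcg_gen_mov_i64(TCGv_i64 d, TCGv_i64 s);

    /* Copy the address only for loads that may clobber it; return NULL
     * when the caller can keep using the original address. */
    static TCGv_i64 plugin_maybe_preserve_addr(TCGv_i64 addr,
                                               bool is_load, bool is_128)
    {
        if (is_load && !is_128) {
            TCGv_i64 copy = tcg_temp_ebb_new_i64();
            tcg_gen_mov_i64(copy, addr);
            return copy;
        }
        return NULL;   /* stores and 128-bit loads: no copy made */
    }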
-
Richard Henderson authored
As do_gen_mem_cb is called once, merge it into gen_empty_mem_cb.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
As gen_mem_wrapped is only used in plugin_gen_empty_mem_callback, we can avoid the curiosity of union mem_gen_fn by inlining it.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Always pass the target address as uint64_t.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Always pass the target address as uint64_t.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Always pass the target address as uint64_t. Adjust tcg_out_{ld,st}_helper_args to match.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
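These three patches converge on a single calling convention, which a tiny standalone model can show (the helper name is invented for illustration): a 32-bit guest address is zero-extended exactly once at the call boundary, and everything downstream traffics only in uint64_t:

    #include <inttypes.h>
    #include <stdio.h>

    /* Hypothetical helper: takes the guest address as uint64_t
     * regardless of the guest's register width. */
    static void do_access(uint64_t addr)
    {
        printf("access at 0x%016" PRIx64 "\n", addr);
    }

    int main(void)
    {
        uint32_t guest_addr = 0xfffff000u;
        do_access((uint64_t)guest_addr);   /* zero-extend once, here */
        return 0;
    }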
-
Richard Henderson authored
We already pass uint64_t to restore_state_to_opc; this changes all of the other uses from insn_start through the encoding to decoding.
Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
No change to the ultimate load/store routines yet, so some atomicity conditions are not yet honored, but this plumbs the alignment change through the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
No change to the ultimate load/store routines yet, so some atomicity conditions are not yet honored, but this plumbs the alignment change through the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Examine MemOp for atomicity and alignment, adjusting alignment as required to implement atomicity on the host.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
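A rough standalone model of that adjustment (the function and parameter names are illustrative, not QEMU's): when the operation requires atomicity and the host only guarantees it at natural alignment, the required alignment is raised to the access size:

    #include <stdbool.h>

    /* All sizes are log2 of a byte count. */
    static unsigned adjust_alignment(unsigned size_lg2, unsigned align_lg2,
                                     bool need_atomic,
                                     bool host_atomic_when_unaligned)
    {
        if (need_atomic && !host_atomic_when_unaligned
            && align_lg2 < size_lg2) {
            align_lg2 = size_lg2;   /* force natural alignment */
        }
        return align_lg2;
    }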
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Now that tcg_out_helper_load_regs is not recursive, we can merge it into its only caller, tcg_out_helper_load_slots.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
With x86_64 as host, we do not have any temporaries with which to resolve cycles, but we do have xchg. As a side bonus, the set of graphs that can be made with 3 nodes and all nodes conflicting is small: two. We can solve the cycle with a single temp. This is required for x86_64 to handle stores of i128: 1 address register and 2 data registers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
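The graph claim is easy to check with a standalone model: among permutations of three mutually conflicting registers, the only non-trivial shapes are a 2-cycle and a 3-cycle, and both resolve with exchanges alone, one xchg or two, with no scratch register. This is an illustrative model, not the backend code:

    #include <stdio.h>

    static void xchg(long *a, long *b) { long t = *a; *a = *b; *b = t; }

    /* dst_of[i] is the slot where the value starting in slot i must end
     * up.  Resolve the permutation using exchanges only. */
    static void apply_perm(long reg[3], int dst_of[3])
    {
        for (int i = 0; i < 3; i++) {
            while (dst_of[i] != i) {
                int j = dst_of[i];
                xchg(&reg[i], &reg[j]);
                int t = dst_of[i]; dst_of[i] = dst_of[j]; dst_of[j] = t;
            }
        }
    }

    int main(void)
    {
        long reg[3] = { 'A', 'B', 'C' };
        int perm[3] = { 1, 2, 0 };           /* a full 3-cycle */
        apply_perm(reg, perm);               /* exactly two exchanges */
        printf("%c %c %c\n", (char)reg[0], (char)reg[1], (char)reg[2]);
        return 0;                            /* prints: C A B */
    }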
-
Richard Henderson authored
Add opcodes for backend support for 128-bit memory operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Replace the unparameterized TCG_TARGET_HAS_MEMORY_BSWAP macro with a function taking a memop argument.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
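The hook becomes a per-operation decision instead of a compile-time constant; a sketch of the shape, where the sample policy (byte-swap up to 64-bit, punt on 128-bit) is a plausible example rather than any particular backend's answer:

    #include <stdbool.h>

    typedef unsigned MemOp;    /* stand-in for QEMU's MemOp */
    #define MO_SIZE 0x07       /* low bits encode log2(size) */
    #define MO_64   3

    /* Replaces: #define TCG_TARGET_HAS_MEMORY_BSWAP 0-or-1 */
    static bool tcg_target_has_memory_bswap(MemOp memop)
    {
        return (memop & MO_SIZE) <= MO_64;
    }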
-
Richard Henderson authored
The system is required to emulate unaligned accesses, even if the hardware does not support it. The resulting trap may or may not be more efficient than the QEMU slow path. There are Linux kernel patches in flight to allow userspace to query hardware support; we can re-evaluate whether to enable this by default after that. In the meantime, softmmu now matches user-only, where we already assumed that unaligned accesses are supported.
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Test the final byte of an unaligned access. Use BSTRINS.D to clear the range of bits, rather than AND.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
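A standalone model of both ideas (page_bits and the function names are illustrative): the TLB comparison value is derived from the last byte of the access, and the page-offset bits are cleared by depositing zeroes into a bit range, which LoongArch's BSTRINS.D does in one instruction where an AND would first have to materialize the mask:

    #include <stdint.h>

    /* Emulate BSTRINS.D rd, $zero, msb, lsb: zero bits [msb:lsb]. */
    static uint64_t bstrins_d_zero(uint64_t x, unsigned msb, unsigned lsb)
    {
        uint64_t mask = (~(uint64_t)0 >> (63 - msb + lsb)) << lsb;
        return x & ~mask;
    }

    static uint64_t tlb_compare(uint64_t addr, unsigned size,
                                unsigned page_bits)
    {
        uint64_t last = addr + size - 1;    /* final byte of the access */
        return bstrins_d_zero(last, page_bits - 1, 0);
    }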
-
Richard Henderson authored
This should be true of all loongarch64 hosts running Linux.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
These functions are now unused.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Drop the target-specific trampolines for the standard slow path. This lets us use tcg_out_helper_{ld,st}_args, and handles the new atomicity bits within MemOp. At the same time, use the full load/store helpers for user-only mode. Drop inline unaligned access support for user-only mode, as it does not handle atomicity. Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2]. This allows the constraints to be simplified.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Emphasize that the constant is unsigned.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Shuffle the order in tcg_out_movi_int to check s13 first, and drop this check from tcg_out_movi_imm32. This might make the sequence for in_prologue larger, but that is not worth worrying about.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
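The reordering amounts to classifying the constant against the 13-bit signed immediate range before any 32-bit sequence is considered; a sketch with an illustrative range check, not the backend code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Fits SPARC's 13-bit signed immediate field? */
    static bool check_fit_s13(int64_t v)
    {
        return v >= -0x1000 && v <= 0xfff;
    }
    /* In a movi expansion (sketch): test check_fit_s13() first so small
     * constants take the single-instruction path, then fall through to
     * the 32-bit and 64-bit sequences. */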
-
Richard Henderson authored
Emphasize that the constant is signed.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Instead of using helper_unaligned_{ld,st}, use the full load/store helpers. This will allow the fast path to increase alignment to implement atomicity while not immediately raising an alignment exception.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-