Improvements to qemu/int128
Fixes for 128/64 division.
Cleanup tcg/optimize.c
Optimize redundant sign extensions
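As a rough illustration of the 128/64 division mentioned above, the standalone C sketch below shows the quotient/remainder computation involved. It uses the GCC/Clang __uint128_t extension rather than QEMU's own Int128 or host-utils helpers, whose exact interfaces are not part of this log:

    /* Illustrative only: what a 128-by-64-bit division computes. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* dividend = 2^65 + 1 */
        __uint128_t dividend = ((__uint128_t)0x0000000000000002ULL << 64)
                             | 0x0000000000000001ULL;
        uint64_t divisor = 3;

        __uint128_t quotient = dividend / divisor;
        uint64_t remainder = (uint64_t)(dividend % divisor);

        printf("quotient hi=%016llx lo=%016llx remainder=%llu\n",
               (unsigned long long)(quotient >> 64),
               (unsigned long long)quotient,
               (unsigned long long)remainder);
        return 0;
    }

The fixes in this pull request concern QEMU's own implementation of this operation; the sketch only shows the expected quotient and remainder of a 128-by-64 division.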
# gpg: Signature made Thu 28 Oct 2021 09:06:00 PM PDT
# gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg: issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
* remotes/rth/tags/pull-tcg-20211028: (60 commits)
softmmu: fix for "after access" watchpoints
softmmu: remove useless condition in watchpoint check
softmmu: fix watchpoint processing in icount mode
tcg/optimize: Propagate sign info for shifting
tcg/optimize: Propagate sign info for bit counting
tcg/optimize: Propagate sign info for setcond
tcg/optimize: Propagate sign info for logical operations
tcg/optimize: Optimize sign extensions
tcg/optimize: Use fold_xx_to_i for rem
tcg/optimize: Use fold_xi_to_x for div
tcg/optimize: Use fold_xi_to_x for mul
tcg/optimize: Use fold_xx_to_i for orc
tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
tcg: Extend call args using the correct opcodes
tcg/optimize: Sink commutative operand swapping into fold functions
tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
tcg/optimize: Split out fold_masks
tcg/optimize: Split out fold_ix_to_i
tcg/optimize: Split out fold_xi_to_x
...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
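For context on the sign-extension folding listed above ("Optimize sign extensions", "Propagate sign info for ..."), here is a minimal, self-contained C sketch of the property such an optimizer can rely on. The ext32s helper is made up for this illustration and is not the tcg/optimize.c code:

    /* Illustrative only: if a 64-bit value is already the sign extension
     * of its low 32 bits, extending it again changes nothing, so an
     * optimizer that tracks sign information can replace the second
     * extension with a plain move. */
    #include <assert.h>
    #include <stdint.h>

    static int64_t ext32s(int64_t v)
    {
        return (int64_t)(int32_t)v;   /* sign-extend bits [31:0] to 64 bits */
    }

    int main(void)
    {
        int64_t raw   = 0x123456789abcdef0LL;
        int64_t once  = ext32s(raw);    /* 0xffffffff9abcdef0 */
        int64_t twice = ext32s(once);   /* identical: second ext is redundant */

        assert(once == twice);
        return 0;
    }

Because the second extension cannot change any bit of an already sign-extended value, it can be dropped once the optimizer knows the sign information of its input.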