- Oct 29, 2021
-
Chenyi Qiang authored
Because core-capability related features are model-specific and KVM won't support them, remove core-capability from the CPU model to avoid the warning message.
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
Message-Id: <20210827064818.4698-3-chenyi.qiang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
Chih-Min Chao authored
The sNaN propagation behavior has been changed since commit cd20cee7 in https://github.com/riscv/riscv-isa-manual. In Priv spec v1.10, RVF is v2.0: fmin.s and fmax.s are implemented with the IEEE 754-2008 minNum and maxNum operations. In Priv spec v1.11, RVF is v2.2: fmin.s and fmax.s are amended to implement the IEEE 754-2019 minimumNumber and maximumNumber operations. To avoid introducing yet another version variable (an extra *fext_ver*), we tie the RVF version to the Priv version. This is not completely accurate, but it is close enough.
Signed-off-by: Chih-Min Chao <chihmin.chao@sifive.com>
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20211021160847.2748577-3-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
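
A minimal illustrative sketch (plain C, not QEMU's softfloat code) of the behavioral difference for a signaling-NaN operand. The `x_is_snan`/`y_is_snan` flags stand in for real NaN classification, and the invalid-flag side effects are omitted:

```c
#include <math.h>

/* IEEE 754-2008 minNum: an sNaN operand propagates as a quiet NaN. */
static float minnum_2008(float x, float y, int x_is_snan, int y_is_snan)
{
    if (x_is_snan || y_is_snan) {
        return nanf("");          /* result is a qNaN (invalid is signaled) */
    }
    return x < y ? x : y;         /* qNaN-vs-number cases elided */
}

/* IEEE 754-2019 minimumNumber: the numeric operand wins even against an sNaN. */
static float minimumnumber_2019(float x, float y, int x_is_snan, int y_is_snan)
{
    if (x_is_snan && !y_is_snan) {
        return y;                 /* invalid is still signaled, but y is returned */
    }
    if (y_is_snan && !x_is_snan) {
        return x;
    }
    return x < y ? x : y;         /* remaining NaN cases elided */
}
```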
-
Jose Martins authored
There is no need to "force an hs exception": the current privilege level, the state of the global ie bits, and the delegation registers should be enough to route the interrupt to the appropriate privilege level in riscv_cpu_do_interrupt. This is true for both asynchronous and synchronous exceptions, specifically guest page faults, which must be hardwired to zero in hedeleg. As such, the hs_force_except mechanism can be removed.
Signed-off-by: Jose Martins <josemartins90@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211026145126.11025-3-josemartins90@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Jose Martins authored
VS interrupts (2, 6, 10) were not correctly forwarded to HS-mode when not delegated in hideleg (which was not being taken into account). This was mainly because the HS-level sie was not always considered enabled when it should be. The spec states that "Interrupts for higher-privilege modes, y>x, are always globally enabled regardless of the setting of the global yIE bit for the higher-privilege mode." and also "For purposes of interrupt global enables, HS-mode is considered more privileged than VS-mode, and VS-mode is considered more privileged than VU-mode". Also, VS-level interrupts were not being taken into account unless V=1, but they should be unless delegated. Finally, there is no need for a special case to handle VS interrupts: the current privilege level, the state of the global ie bits, and the delegation registers should be enough to route all interrupts to the appropriate privilege level in riscv_cpu_do_interrupt.
Signed-off-by: Jose Martins <josemartins90@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211026145126.11025-2-josemartins90@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
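
A simplified sketch of the delegation-based routing described above (an assumption-level illustration, not QEMU's riscv_cpu_do_interrupt). Whether the trap is actually taken still depends on the current privilege mode and the xIE rules quoted in the message:

```c
/* Route an interrupt cause purely from the delegation registers:
 * not in mideleg -> M-mode; in mideleg but not hideleg -> HS-mode;
 * in both -> VS-mode. */
typedef enum { PRV_VS, PRV_HS, PRV_M } target_priv;

static target_priv route_irq(int cause, unsigned long mideleg, unsigned long hideleg)
{
    if (!(mideleg & (1UL << cause))) {
        return PRV_M;      /* not delegated at all: machine mode takes it */
    }
    if (!(hideleg & (1UL << cause))) {
        return PRV_HS;     /* delegated to supervisor/HS, not to the guest */
    }
    return PRV_VS;         /* delegated through to VS-mode */
}
```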
-
Taylor Simpson authored
Change SET_USR_FIELD to write to hex_new_value[HEX_REG_USR] instead of hex_gpr[HEX_REG_USR]. Then, we need code to mark the instructions that can implicitly set USR:
- Macros added to hex_common.py
- A_FPOP added in translate.c
Test case added in tests/tcg/hexagon/overflow.c
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
-
Taylor Simpson authored
Change additional uses of tcg_const_tl to tcg_constant_tl. Note that gen_pred_cancel had slot_mask initialized with tcg_const_tl. However, it is not constant throughout, so we initialize it with tcg_temp_new and replace the first use with the constant value.
Inspired-by: Richard Henderson <richard.henderson@linaro.org>
Inspired-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Taylor Simpson <tsimpson@quicinc.com>
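
A minimal sketch of the general pattern, assuming the TCG API of this period (tcg_const_tl / tcg_constant_tl / tcg_temp_free); `dst` and `src` are placeholders for translator-provided values, and this is not the Hexagon change itself:

```c
#include "tcg/tcg-op.h"   /* QEMU-internal header; translator context assumed */

static void sketch_and_imm(TCGv dst, TCGv src)
{
    /* Before: a mutable temp created just to hold a constant, freed afterwards. */
    TCGv mask = tcg_const_tl(0xff);
    tcg_gen_and_tl(dst, src, mask);
    tcg_temp_free(mask);

    /* After: a read-only, cached constant temp; never written, never freed. */
    tcg_gen_and_tl(dst, src, tcg_constant_tl(0xff));
}
```

Where the value is later overwritten (as the message notes for slot_mask in gen_pred_cancel), a real tcg_temp_new() temp is kept and only its first use folds in the constant.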
-
- Oct 28, 2021
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211025173609.2724490-9-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Anatoly Parshintsev authored
Signed-off-by: Anatoly Parshintsev <kupokupokupopo@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-8-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-7-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-6-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-5-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-4-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211025173609.2724490-3-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alexey Baturo authored
Signed-off-by: Alexey Baturo <space.monkey.delivers@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20211025173609.2724490-2-space.monkey.delivers@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Luis Pires authored
These will be used to implement new decimal floating point instructions from Power ISA 3.1. The remainder is now returned directly by divu128/divs128, freeing up phigh to receive the high 64 bits of the quotient.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-4-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
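
A self-contained stand-in illustrating the convention described above (quotient through *plow/*phigh, remainder as the return value); the real prototype lives in QEMU's host-utils and may differ in detail, and the __int128 shortcut here is purely for the demo:

```c
#include <stdint.h>
#include <stdio.h>

/* Demo stand-in, not QEMU's divu128: 128-bit dividend in *phigh:*plow,
 * quotient written back in place, 64-bit remainder returned. */
static uint64_t divu128_demo(uint64_t *plow, uint64_t *phigh, uint64_t divisor)
{
    unsigned __int128 dividend = ((unsigned __int128)*phigh << 64) | *plow;
    unsigned __int128 quot = dividend / divisor;
    uint64_t rem = (uint64_t)(dividend % divisor);

    *plow = (uint64_t)quot;
    *phigh = (uint64_t)(quot >> 64);
    return rem;
}

int main(void)
{
    uint64_t lo = 0x0123456789abcdefULL, hi = 1;   /* 128-bit dividend */
    uint64_t rem = divu128_demo(&lo, &hi, 1000000007ULL);
    printf("quot = 0x%016llx%016llx, rem = %llu\n",
           (unsigned long long)hi, (unsigned long long)lo, (unsigned long long)rem);
    return 0;
}
```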
-
Luis Pires authored
In preparation for changing the divu128/divs128 implementations to allow for quotients larger than 64 bits, move the div-by-zero and overflow checks to the callers.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211025191154.350831-2-luis.pires@eldorado.org.br>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Oct 22, 2021
-
Philippe Mathieu-Daudé authored
Since commit 12b6e9b2 ("disas: Clean up CPUDebug initialization") the disassemble_info->bfd_endian enum is set for all targets in target_disas(). We can directly call print_insn_nios2() and simplify.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210807110939.95853-3-f4bug@amsat.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
-
Richard Henderson authored
The position of this read-only field is dependent on the current xlen. Rather than having to compute that difference in many places, compute it only on read.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-16-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
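
An illustration of the idea (the field name is an assumption for the example; mstatus.SD at bit XLEN-1 is the usual case). Computing the bit at read time means no stored copy has to move when the effective XLEN changes:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: a read-only status bit that lives at bit XLEN-1.
 * The stored value never contains it; the reader places it for the
 * current XLEN. */
static uint64_t status_read(uint64_t stored_status, int xlen, bool dirty)
{
    uint64_t val = stored_status;
    if (dirty) {
        val |= 1ULL << (xlen - 1);   /* position depends on the current XLEN */
    }
    return val;
}
```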
-
Richard Henderson authored
Use the official debug read interface to the csrs, rather than referencing the env slots directly. Put the list of csrs to dump into a table.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-15-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
Most shift instructions require a separate implementation for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-14-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
The count zeros instructions require a separate implementation for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-13-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
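
A plain-C sketch of why the 32-bit case needs its own handling on a 64-bit target: a 64-bit count-leading-zeros of a zero-extended value over-counts by 32. The clz64() helper here follows the usual convention that clz64(0) == 64 (assumed, matching QEMU's host-utils behaviour):

```c
#include <stdint.h>

/* clz64 with a defined result for zero (clz64(0) == 64). */
static inline int clz64(uint64_t v)
{
    return v ? __builtin_clzll(v) : 64;
}

/* 32-bit clz built from the 64-bit primitive: zero-extend, then subtract
 * the 32 leading zeros contributed by the upper half; clz32(0) == 32. */
static inline int clz32_via_clz64(uint32_t x)
{
    return clz64((uint64_t)x) - 32;
}
```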
-
Richard Henderson authored
When target_long is 64-bit, we still want a 32-bit bswap for rev8. Since this opcode is specific to RV32, we need not conditionalize.
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-12-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
The multiply high-part instructions require a separate implementation for RV32 when TARGET_LONG_BITS == 64.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-11-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
- Oct 21, 2021
-
Richard Henderson authored
In preparation for RV128, consider more than just "w" for operand size modification. This will be used for the "d" insns from RV128 as well. Rename oper_len to get_olen to better match get_xlen.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-10-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
In preparation for RV128, replace a simple predicate with a more versatile test.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-9-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
We're currently assuming SEW <= 3, and the "else" branch of the SEW == 3 check must be less. Use a switch and explicitly bound both SEW and SEQ for all cases.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-8-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
Use the same REQUIRE_64BIT check that we use elsewhere, rather than open-coding the use of is_32bit.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-7-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
Begin adding support for switching XLEN at runtime. Extract the effective XLEN from MISA and MSTATUS and store it for use during translation.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-6-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
Shortly, the set of supported XL will not be just 32 and 64, and representing that properly using the enumeration will be imperative. Two places, booting and gdb, intentionally use misa_mxl_max to emphasize the use of the reset value of misa.mxl, and not the current cpu state.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-5-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
The hw representation of misa.mxl is at the high bits of the misa csr. Representing this in the same way inside QEMU results in overly complex code trying to check that field.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-4-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
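
An illustration of the awkwardness (not QEMU's actual code): with the hardware layout, merely locating MXL requires knowing the current XLEN, whereas a separately stored field makes the common check trivial.

```c
#include <stdint.h>

/* MXL encoding in misa[XLEN-1:XLEN-2]: 1 = RV32, 2 = RV64, 3 = RV128. */
enum { MXL_RV32 = 1, MXL_RV64 = 2, MXL_RV128 = 3 };

/* Hardware layout: every check needs the current XLEN just to find the field. */
static inline int mxl_from_csr(uint64_t misa, int xlen)
{
    return (int)((misa >> (xlen - 2)) & 0x3);
}

/* With mxl stored in its own field, the same test is simply:
 *     if (env->misa_mxl == MXL_RV32) { ... }
 */
```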
-
Richard Henderson authored
Move the MXL_RV* defines to enumerators.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-3-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Richard Henderson authored
Move the function to cpu_helper.c, as it is large and growing.
Reviewed-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211020031709.359469-2-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Alistair Francis authored
Organise the CPU properties so that standard extensions come first, followed by experimental extensions.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: b6598570f60c5ee7f402be56d837bb44b289cc4d.1634531504.git.alistair.francis@wdc.com
-
Alistair Francis authored
Since commit 1a9540d1 ("target/riscv: Drop support for ISA spec version 1.09.1") these definitions are unused, remove them.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: f4d8a7a035f39c0a35d44c1e371c5c99cc2fa15a.1634531504.git.alistair.francis@wdc.com
-
Frank Chang authored
The TB_FLAGS mem_idx bits were extended from 2 bits to 3 bits in commit c445593d, but the other TB_FLAGS bits for rvv and rvh were not shifted as well, so these bits may overlap with each other when rvv is enabled.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20211015074627.3957162-2-frank.chang@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
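
A hypothetical illustration of the failure mode (field names and positions below are made up for the example and are not QEMU's real TB_FLAGS layout): when one field grows but its neighbour's shift is not updated, writing the wider field silently sets the neighbour.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: MEM_IDX grew from bits [1:0] to [2:0],
 * but the next flag was left at bit 2. */
#define MEM_IDX_SHIFT 0
#define MEM_IDX_MASK  0x7u          /* now 3 bits wide            */
#define VILL_SHIFT    2             /* stale: overlaps MEM_IDX[2] */

int main(void)
{
    uint32_t tb_flags = 0;
    tb_flags |= (4u & MEM_IDX_MASK) << MEM_IDX_SHIFT;   /* mem_idx = 4 (0b100) */
    /* The overlapping definition now reads back a bit that was never set: */
    printf("VILL appears %s\n",
           (tb_flags >> VILL_SHIFT) & 1 ? "set (corrupted)" : "clear");
    return 0;
}
```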
-
Philipp Tomsich authored
The earlier implementation fell into a corner case for bytes that were 0x01, giving a wrong result (but not affecting our application test cases for strings, as an ASCII value 0x01 is rare in those...). This changes the algorithm to:
1. Mask out the high bit of each byte (so that each byte is <= 127).
2. Add 127 to each byte (i.e. if the low 7 bits are not 0, this will overflow into the highest bit of each byte).
3. Bitwise-or the original value back in (to cover those cases where the source byte was exactly 128) to saturate the high bit.
4. Shift-and-mask (implemented as a mask-and-shift) to extract the MSB of each byte into its LSB.
5. Multiply with 0xff to fan out the LSB to all bits of each byte.
Fixes: d7a4fcb0 ("target/riscv: Add orc.b instruction for Zbb, removing gorc/gorci")
Signed-off-by: Philipp Tomsich <philipp.tomsich@vrull.eu>
Reported-by: Vincent Palatin <vpalatin@rivosinc.com>
Tested-by: Vincent Palatin <vpalatin@rivosinc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211013184125.2010897-1-philipp.tomsich@vrull.eu
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
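
A plain-C rendering of the five steps above for a 64-bit value (illustration only, not the TCG code from the patch). Each result byte is 0xff if the corresponding input byte is non-zero, else 0x00:

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t orc_b_sketch(uint64_t x)
{
    const uint64_t ones = 0x0101010101010101ULL;     /* LSB of every byte */
    uint64_t low7 = x & ~(ones << 7);                /* 1. clear each byte's MSB */
    uint64_t sat  = low7 + (ones * 127);             /* 2. +127 overflows into the MSB if low 7 bits != 0 */
    sat |= x;                                        /* 3. or the original back in (covers 0x80 bytes) */
    uint64_t msb  = (sat & (ones << 7)) >> 7;        /* 4. mask-and-shift each byte's MSB into its LSB */
    return msb * 0xff;                               /* 5. fan the LSB out to all bits of the byte */
}

int main(void)
{
    /* 0x01 bytes were the corner case the old code got wrong. */
    printf("%016llx\n", (unsigned long long)orc_b_sketch(0x0100800000ff0001ULL));
    return 0;   /* prints ff00ff0000ff00ff */
}
```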
-
Travis Geiselbrecht authored
Ensure the columns for all of the register names and values line up. No functional change, just a minor tweak to the output.
Signed-off-by: Travis Geiselbrecht <travisg@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20211009055019.545153-1-travisg@gmail.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Frank Chang authored
oprsz and maxsz are passed with the same value in commit eee2d61e. However, vmv.v.v was missed in that commit and should pass the same value as well in its tcg_gen_gvec_2_ptr() call.
Signed-off-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20211007081803.1705656-1-frank.chang@sifive.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Daniel Henrique Barboza authored
Problem state needs to be able to read and write the PMU counters, otherwise it won't be aware of any sampling result that the PMU produces after a Perf run. This patch does that in a similar fashion as already done in the previous patches. PMCs 5 and 6 have a special condition, aside from the constraints that are common with PMCs 1-4: they are not part of the PMU if MMCR0_PMCC is 0b11.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-5-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
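
A small sketch of the rule stated above, not the QEMU implementation; the PMCC bit position below is a placeholder chosen for illustration only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustration: PMCC is a 2-bit field in MMCR0; when it reads 0b11,
 * PMC5 and PMC6 are not part of the PMU, so problem-state access to
 * them is filtered like the other group A SPR cases. */
#define MMCR0_PMCC_SHIFT 12        /* placeholder position, not the real one */
#define MMCR0_PMCC_MASK  0x3u

static bool pmc_in_pmu(int pmc, uint32_t mmcr0)
{
    unsigned pmcc = (mmcr0 >> MMCR0_PMCC_SHIFT) & MMCR0_PMCC_MASK;

    if (pmc == 5 || pmc == 6) {
        return pmcc != 0x3;        /* PMC5/6 drop out when PMCC == 0b11 */
    }
    return true;                   /* PMC1-4 constraints elided */
}
```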
-
Daniel Henrique Barboza authored
Similar to the previous patch, let's add problem state read/write access to the MMCR2 SPR, which is also a group A PMU SPR that needs to be filtered to be read/written by userspace.
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20211018010133.315842-4-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-