- Nov 05, 2020
-
Pavel Dovgalyuk authored
This patch adds some gen_io_start() calls to allow execution of s390x targets in icount mode with -smp 1. It enables deterministic timers and record/replay features.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Hildenbrand <david@redhat.com>
Message-Id: <160455551747.32240.17074484658979970129.stgit@pasha-ThinkPad-X280>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
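For context, a minimal sketch of the generic icount pattern such patches apply (the concrete s390x call sites differ; this is the shape, not the committed hunk):

```c
/* Sketch: before translating an instruction whose helper reads or arms
 * a timer, tell icount so the access lands on a deterministic
 * instruction boundary. */
if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
    gen_io_start();
}
/* ... then emit the helper call that performs the timed access ... */
```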
-
Chen Qun authored
When using -Wimplicit-fallthrough in our CFLAGS, the compiler showed a warning:

    ../target/ppc/excp_helper.c: In function ‘powerpc_excp’:
    ../target/ppc/excp_helper.c:529:13: warning: this statement may fall through [-Wimplicit-fallthrough=]
      529 |             msr |= env->error_code;
          |             ~~~~^~~~~~~~~~~~~~~~~~
    ../target/ppc/excp_helper.c:530:5: note: here
      530 |     case POWERPC_EXCP_HDECR: /* Hypervisor decrementer exception */
          |     ^~~~

Add the corresponding "fall through" comment to fix it.

Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Message-Id: <20201028055107.2170401-1-kuhn.chenqun@huawei.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
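A compilable sketch of the pattern (names are illustrative, not the actual excp_helper.c code): GCC's -Wimplicit-fallthrough accepts a "fall through" comment placed immediately before the next case label.

```c
/* Illustrative sketch, not the actual QEMU hunk. */
enum excp { EXCP_DECR, EXCP_HDECR };

static unsigned build_msr(enum excp e, unsigned msr, unsigned error_code)
{
    switch (e) {
    case EXCP_DECR:
        msr |= error_code;
        /* fall through */      /* silences -Wimplicit-fallthrough */
    case EXCP_HDECR:
        msr |= 1u;              /* handling shared by both exceptions */
        break;
    }
    return msr;
}
```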
-
- Nov 03, 2020
-
Huacai Chen authored
MIPSR6 (not only MIPS32R6) processors support unaligned access in hardware, so set MO_UNALN in their default_tcg_memop_mask. Newer Loongson-3 processors (such as the Loongson-3A4000) also support unaligned access; since old and new Loongson-3 run the same binaries, we can simply set MO_UNALN for all Loongson-3 processors.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1604053541-27822-3-git-send-email-chenhc@lemote.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
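A hedged sketch of where this lands in the translator (the flag names follow target/mips conventions but the exact condition in the patch may differ):

```c
/* Sketch: pick the default alignment constraint per CPU family;
 * ISA/INSN flag names here are assumptions. */
ctx->default_tcg_memop_mask =
    (ctx->insn_flags & (ISA_MIPS32R6 | ISA_MIPS64R6 | INSN_LOONGSON3A))
    ? MO_UNALN : MO_ALIGN;
```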
-
Chetan Pant authored
There is no "version 2" of the "Lesser" General Public License. It is either "GPL version 2.0" or "Lesser GPL version 2.1". This patch replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL version 2.1" in comment section. Signed-off-by:
Chetan Pant <chetan4windows@gmail.com> Reviewed-by:
Thomas Huth <thuth@redhat.com> Message-Id: <20201016143509.26692-1-chetan4windows@gmail.com> [PMD: Split hw/ vs target/] Signed-off-by:
Philippe Mathieu-Daudé <f4bug@amsat.org>
-
Xinhao Zhang authored
Fix code style. Space required before the open parenthesis '('.

Signed-off-by: Xinhao Zhang <zhangxinhao1@huawei.com>
Signed-off-by: Kai Deng <dengkai1@huawei.com>
Reported-by: Euler Robot <euler.robot@huawei.com>
Reviewed-by: Bin Meng <bin.meng@windriver.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201030004815.4172849-1-zhangxinhao1@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Yifei Jiang authored
When the V extension is supported, add a description of its state to vmstate_riscv_cpu.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-6-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Yifei Jiang authored
When the H extension is supported, add a description of its state to vmstate_riscv_cpu.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-5-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Yifei Jiang authored
When the PMP feature is supported, add a description of the PMP state to vmstate_riscv_cpu. 'vmstate_pmp_addr' and 'num_rules' can be regenerated by pmp_update_rule(), but pmp_update_rule() would then update 'num_rules' repeatedly. So extract pmp_update_rule_addr() and pmp_update_rule_nums() to update 'vmstate_pmp_addr' and 'num_rules' respectively.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-4-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
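A hedged sketch of the refactor's shape (function bodies and field details are illustrative, not the actual pmp.c code):

```c
/* Sketch: split rule regeneration so address ranges can be rebuilt per
 * entry while the active-rule count is recomputed exactly once. */
static void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
{
    /* recompute the decoded address range for this one PMP entry */
}

static void pmp_update_rule_nums(CPURISCVState *env)
{
    /* walk all entries once and recount the active rules */
}

static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index)
{
    pmp_update_rule_addr(env, pmp_index);
    pmp_update_rule_nums(env);   /* previously re-counted on every call */
}
```

Migration restore can then call pmp_update_rule_addr() per restored entry and pmp_update_rule_nums() once at the end.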
-
Yifei Jiang authored
Add basic CPU state description to the newly created machine.c.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-3-jiangyifei@huawei.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Yifei Jiang authored
mstatus/mstatush and vsstatus/vsstatush are two halves of one register on RISCV32. This patch expands mstatus and vsstatus to uint64_t instead of target_ulong, so that each can be saved as one unit and some ifdefs in the code can be dropped.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201026115530.304-2-jiangyifei@huawei.com
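A hedged sketch of the payoff on the migration side (the field list is illustrative, not the actual machine.c one): a 64-bit field is described once, with no separate mstatush half and no RV32/RV64 ifdef.

```c
/* Sketch only: field list is illustrative. */
static const VMStateDescription vmstate_riscv_cpu_sketch = {
    .name = "cpu/sketch",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_UINT64(env.mstatus, RISCVCPU),   /* one unit, no *h half */
        VMSTATE_UINT64(env.vsstatus, RISCVCPU),
        VMSTATE_END_OF_LIST()
    }
};
```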
-
- Nov 02, 2020
-
Peter Maydell authored
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el(). This is incorrect when the security state being queried is not the current one, because arm_current_el() uses the current security state to determine which of the banked CONTROL.nPRIV bits to look at. The effect was that if (for instance) Secure state was in privileged mode but Non-Secure was not, then we would return the wrong MMU index.

The only places where we use this function in a way that could trigger this bug are the stack loads during a v8M function-return and the instruction fetch of a v8M SG insn.

Fix the bug by expanding out the M-profile version of the arm_current_el() logic inline so it can use the passed-in secstate rather than env->v7m.secure.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
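A sketch of the shape of the fix, assuming QEMU's M-profile helpers (close to, but not guaranteed to match, the committed hunk):

```c
/* Sketch: derive privilege from the *queried* security state, not the
 * current one, so the right bank of CONTROL.nPRIV is consulted. */
ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
{
    bool priv = arm_v7m_is_handler_mode(env) ||
        !(env->v7m.control[secstate] & R_V7M_CONTROL_NPRIV_MASK);

    return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
}
```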
-
Rémi Denis-Courmont authored
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Rémi Denis-Courmont authored
HCR should be applied when NS is set, not when it is cleared.

Signed-off-by: Rémi Denis-Courmont <remi.denis.courmont@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
The helper functions for performing the udot/sdot operations against a scalar were not using an address-swizzling macro when converting the index of the scalar element into a pointer into the vm array. This had no effect on little-endian hosts but meant we generated incorrect results on big-endian hosts. For these insns, the index selects among groups of four 8-bit values, i.e. 32 bits per indexed entity, so H4() is what we want. (For Neon the only possible input indexes are 0 and 1.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
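Both this fix and the float16 one below turn on QEMU's host-endian element swizzlers. A sketch of the idea (the definitions match the common pattern in QEMU's vector code, but treat them as illustrative):

```c
#include <stdint.h>

/* Vectors are stored as host-order 64-bit chunks; on a big-endian host
 * the logical index of an element inside its chunk must be flipped. */
#ifdef HOST_WORDS_BIGENDIAN
#define H2(x)  ((x) ^ 3)   /* 16-bit elements: four per 64-bit chunk */
#define H4(x)  ((x) ^ 1)   /* 32-bit elements: two per 64-bit chunk  */
#else
#define H2(x)  (x)         /* no-ops on little-endian hosts */
#define H4(x)  (x)
#endif

/* e.g. fetching the index'th 32-bit group of the vm operand: */
static inline uint32_t vm_group32(const uint32_t *vm, int index)
{
    return vm[H4(index)];
}
```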
-
Peter Maydell authored
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error meant we were using the H4() address swizzler macro rather than the H2() which is required for 2-byte data. This had no effect on little-endian hosts but meant we put the result data into the destination Dreg in the wrong order on big-endian hosts.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
-
Richard Henderson authored
We can use proper widening loads to extend 32-bit inputs, and skip the "widenfn" step.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
In both cases, we can sink the write-back and perform the accumulate into the normal destination temps.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
The only uses of this function are for loading VFP double-precision values, and have nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
The only uses of this function are for loading VFP single-precision values, and have nothing to do with NEON.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
We can then use this to improve VMOV (scalar to gp) and VMOV (gp to scalar) so that we simply perform the memory operation that we wanted, rather than inserting or extracting from a 32-bit quantity. These were the last uses of neon_load/store_reg, so remove them.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Model these off the aa64 read/write_vec_element functions, and use them within translate-neon.c.inc. The new functions do not allocate or free temps, so this rearranges the calling code a bit.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
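A hedged usage sketch of the new aa32 element accessors (signatures as the series describes them; the helper call is hypothetical). Since the accessors don't allocate temps, the caller owns the lifetime:

```c
/* Sketch: caller allocates and frees the temp around the accessors. */
TCGv_i32 tmp = tcg_temp_new_i32();

read_neon_element32(tmp, a->vm, 0, MO_32);   /* element 0 of Vm */
gen_helper_do_something(tmp, tmp);           /* hypothetical operation */
write_neon_element32(tmp, a->vd, 0, MO_32);  /* back into Vd */
tcg_temp_free_i32(tmp);
```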
-
Richard Henderson authored
This seems a bit more readable than using offsetof CPU_DoubleU.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
These are the only users of neon_reg_offset, so remove that.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
This will shortly have users outside of translate-neon.c.inc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
This function makes it clear that we're talking about the whole register, and not the 32-bit piece at index 0. This fixes a bug when running on a big-endian host.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201030022618.785675-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- Oct 27, 2020
-
zhaolichang authored
I found many spelling errors in the comments of qemu/target/ppc by running a spellchecker over the folder; this patch fixes them.

Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Message-Id: <20201009064449.2336-3-zhaolichang@huawei.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Greg Kurz authored
If kvmppc_load_htab_chunk() fails, its return value is propagated up to vmstate_load(). It should thus be a negative errno, not -1 (which maps to EPERM and would lure the user into thinking that the problem is necessarily related to a lack of privilege). Return the error reported by KVM, or ENOSPC in case of a short write. While here, propagate the error message through an @errp argument and have the caller print it with error_report_err() instead of relying on fprintf().

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160371604713.305923.5264900354159029580.stgit@bahia.lan>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
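A hedged sketch of the error contract (illustrative, not the actual kvm.c hunk; error_setg_errno() and error_setg() are QEMU's standard error APIs):

```c
/* Sketch: return a negative errno and fill @errp, so vmstate_load()
 * sees the real cause and the caller can use error_report_err(). */
static int load_htab_chunk_sketch(int fd, const void *buf, size_t len,
                                  Error **errp)
{
    ssize_t rc = write(fd, buf, len);

    if (rc < 0) {
        int err = errno;   /* save before any call can clobber it */
        error_setg_errno(errp, err, "Error writing the KVM hash table");
        return -err;
    }
    if ((size_t)rc != len) {
        error_setg(errp, "Short write while restoring the KVM hash table");
        return -ENOSPC;    /* short write mapped to a meaningful errno */
    }
    return 0;
}
```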
-
Greg Kurz authored
Since we introduced CPU hot-unplug in sPAPR, we don't unrealize the vCPU objects explicitly. Instead, we let QOM handle that for us under object_property_del_all() when the CPU core object is finalized. The only thing we do is call cpu_remove_sync() to tear the vCPU thread down. This happens to work but it is ugly because:

- we call qdev_realize() but the corresponding qdev_unrealize() is buried deep in the QOM code
- we call cpu_remove_sync() to undo qemu_init_vcpu(), called by ppc_cpu_realize() in target/ppc/translate_init.c.inc
- the CPU init and teardown paths aren't really symmetrical

The latter hasn't bitten us so far, but a future patch that greatly simplifies the CPU core realize path needs it to avoid a crash in QOM. For all these reasons, have ppc_cpu_unrealize() undo the changes of ppc_cpu_realize() by calling cpu_remove_sync() at the right place, and have the sPAPR CPU core code call qdev_unrealize(). This requires adding a missing stub because translate_init.c.inc is also compiled for user mode. A sketch of the restored symmetry follows.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <160279671236.1808373.14732005038172874990.stgit@bahia.lan>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
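A hedged sketch of the symmetry being restored (the body is illustrative, not the actual translate_init.c.inc code):

```c
/* Sketch: teardown mirrors realize instead of leaning on QOM
 * finalization side effects. */
static void ppc_cpu_unrealize(DeviceState *dev)
{
    CPUState *cs = CPU(dev);

    cpu_remove_sync(cs);   /* undo qemu_init_vcpu() from realize */
    /* ... free other structures set up at realize time ... */
}
```

The sPAPR core unplug path then simply calls qdev_unrealize(DEVICE(cpu)) rather than tearing the vCPU thread down by hand.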
-
Richard Henderson authored
Transform the prot bit to a qemu internal page bit, and save it in the page tables.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20201021173749.111103-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
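A hedged sketch of the mechanism (PAGE_TARGET_1 is one of QEMU's reserved target-specific page bits; the alias name and the condition below are assumptions):

```c
/* Sketch: alias the architecture-specific prot bit onto a reserved
 * target page bit so it travels with QEMU's page-table flags. */
#define PAGE_BTI  PAGE_TARGET_1

/* at mapping/translation time: */
if (guarded_page) {             /* hypothetical arch-level condition */
    page_flags |= PAGE_BTI;     /* saved in QEMU's page tables */
}
```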
-
- Oct 26, 2020
-
Chetan Pant authored
There is no "version 2" of the "Lesser" General Public License. It is either "GPL version 2.0" or "Lesser GPL version 2.1". This patch replaces all occurrences of "Lesser GPL version 2" with "Lesser GPL version 2.1" in comment section. Signed-off-by:
Chetan Pant <chetan4windows@gmail.com> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20201023123840.19988-1-chetan4windows@gmail.com> Signed-off-by:
Philippe Mathieu-Daudé <f4bug@amsat.org>
-
Lichang Zhao authored
There are many spelling errors in the comments of target/rx. Use a spellchecker to find them, then fix them.

Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201009064449.2336-5-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
-
Lichang Zhao authored
There are many spelling errors in the comments of target/sh4. Use a spellchecker to find them, then fix them.

Signed-off-by: zhaolichang <zhaolichang@huawei.com>
Reviewed-by: David Edmondson <david.edmondson@oracle.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20201009064449.2336-10-zhaolichang@huawei.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
-
Philippe Mathieu-Daudé authored
Avoid checkpatch.pl warnings in the next commit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
-
Max Filippov authored
Linux userspace always sees coprocessors as enabled. The CPENABLE register and coprocessor exceptions are used internally by the kernel to manage lazy coprocessor context switch; none of that is needed for linux-user. Always enable all coprocessors for user emulation.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200829104758.22337-1-jcmvbkbc@gmail.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
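A hedged sketch of the user-emulation behaviour (the placement in the reset path and the exact value are assumptions; xtensa's special registers live in env->sregs[]):

```c
/* Sketch: with no kernel to lazily switch coprocessor state,
 * user-mode reset simply leaves every coprocessor enabled. */
#ifdef CONFIG_USER_ONLY
    env->sregs[CPENABLE] = 0xff;   /* all eight coprocessors on */
#endif
```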
-
- Oct 22, 2020
-
Yifei Jiang authored
VS-stage translation in get_physical_address() needs to translate the PTE address via G-stage translation, but a G-stage translation error cannot be distinguished from a VS-stage translation error in riscv_cpu_tlb_fill. On migration, the destination needs to rebuild the PTE, and this G-stage translation error must be handled by HS-mode. So introduce TRANSLATE_STAGE2_FAIL, so that riscv_cpu_tlb_fill can distinguish the two and raise the error to HS-mode.

Signed-off-by: Yifei Jiang <jiangyifei@huawei.com>
Signed-off-by: Yipeng Yin <yinyipeng1@huawei.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201014101728.848-1-jiangyifei@huawei.com
[ Change by AF:
 - Clarify the fault_pte_addr shift
]
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
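A hedged sketch of the new result code (QEMU's actual codes are plain integer defines; the enum form and ordering here are illustrative):

```c
/* Sketch: a distinct code lets the tlb_fill path tell a G-stage
 * failure during a VS-stage PTE walk apart from an ordinary fault. */
typedef enum {
    TRANSLATE_SUCCESS,
    TRANSLATE_FAIL,
    TRANSLATE_PMP_FAIL,
    TRANSLATE_STAGE2_FAIL,   /* G-stage fault while walking a VS-stage PTE */
} TranslateResult;
```

On TRANSLATE_STAGE2_FAIL, riscv_cpu_tlb_fill can then raise a guest-page-fault to HS-mode carrying the faulting PTE address.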
-
Georg Kotheimer authored
The HLVX.WU instruction is supposed to read a machine word, but prior to this change it read a byte instead.

Fixes: 8c5362ac ("target/riscv: Allow generating hlv/hlvx/hsv instructions")
Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013172223.443645-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
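A hedged sketch of the width fix (the real helper and its hypervisor MMU-index handling differ; cpu_ld*_data_ra are QEMU's standard guest-memory accessors):

```c
uint32_t val;

/* before: a byte load, wrong for HLVX.WU
 *   val = cpu_ldub_data_ra(env, addr, GETPC());
 */
val = cpu_ldl_data_ra(env, addr, GETPC());   /* 32-bit word, as specified */
```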
-
Georg Kotheimer authored
The hstatus.GVA bit was not set if the faulting guest virtual address was zero.

Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013173054.451135-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
-
Georg Kotheimer authored
When trapping from virt into HS mode, hstatus.SPVP was set to the value of sstatus.SPP, since according to the specification both flags should be set to the same value. However, the assignment of SPVP took place before SPP itself was updated, which resulted in SPVP having an outdated value.

Signed-off-by: Georg Kotheimer <georg.kotheimer@kernkonzept.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20201013151054.396481-1-georg.kotheimer@kernkonzept.com
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
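A hedged sketch of the ordering fix (get_field/set_field and the CSR field names follow target/riscv style; treat the exact lines as illustrative): derive SPVP from the pre-trap privilege directly instead of copying SPP after SPP has been overwritten.

```c
/* Sketch: both fields come from the pre-trap privilege, so the order
 * of the two assignments no longer matters. */
env->hstatus = set_field(env->hstatus, HSTATUS_SPVP, env->priv);
env->mstatus = set_field(env->mstatus, MSTATUS_SPP, env->priv);
```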
-
Alistair Francis authored
Currently we log interrupts and exceptions using the trace backend in riscv_cpu_do_interrupt(). We also log exceptions using the interrupt log mask (-d int) in riscv_raise_exception(). This patch converts riscv_cpu_do_interrupt() to log both interrupts and exceptions with the interrupt log mask, so that both are printed when a user runs QEMU with -d int.

Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 29a8c766c7c4748d0f2711c3a0abb81208138c5e.1601652179.git.alistair.francis@wdc.com
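A hedged sketch of the conversion (qemu_log_mask() and CPU_LOG_INT are QEMU's standard logging API; the message format is illustrative):

```c
/* Sketch: report the trap via the -d int log mask instead of (only)
 * a trace event. */
qemu_log_mask(CPU_LOG_INT,
              "%s: hart:%d, async:%d, cause:0x%" PRIx64 ", epc:0x%" PRIx64 "\n",
              __func__, cs->cpu_index, async,
              (uint64_t)cause, (uint64_t)env->pc);
```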
-