  1. Jan 27, 2024
    • target/arm: Fix incorrect aa64_tidcp1 feature check · 45b3ce5e
      Peter Maydell authored
      A typo in the implementation of isar_feature_aa64_tidcp1() means we
      were checking the field in the wrong ID register, so we might have
      provided the feature on CPUs that don't have it and not provided
      it on CPUs that should have it. Correct this bug.
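
      A minimal sketch of the corrected predicate, in QEMU's usual
      isar_feature style (FEAT_TIDCP1 is reported in ID_AA64MMFR1_EL1, so
      the field must be read from that register; the exact surrounding
      code is assumed):

          static inline bool isar_feature_aa64_tidcp1(const ARMISARegisters *id)
          {
              /* Read TIDCP1 from ID_AA64MMFR1, not a neighbouring ID register */
              return FIELD_EX64(id->id_aa64mmfr1, ID_AA64MMFR1, TIDCP1) != 0;
          }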
      
      Cc: qemu-stable@nongnu.org
      Fixes: 9cd0c0de ("target/arm: Implement FEAT_TIDCP1")
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2120

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20240123160333.958841-1-peter.maydell@linaro.org
      (cherry picked from commit ee0a2e3c9d2991a11c13ffadb15e4d0add43c257)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
    • target/arm: Fix A64 scalar SQSHRN and SQRSHRN · 570e6244
      Peter Maydell authored
      In commit 1b7bc9b5 we changed handle_vec_simd_sqshrn() so that,
      instead of starting with a 0 value and depositing each new
      element from the narrowing operation into it, it starts with the raw
      result of the narrowing operation of the first element.
      
      This is fine in the vector case, because the deposit operations for
      the second and subsequent elements will always overwrite any higher
      bits that might have been in the first element's result value in
      tcg_rd.  However in the scalar case we only go through this loop
      once.  The effect is that for a signed narrowing operation, if the
      result is negative then we will now return a value where the bits
      above the first element are incorrectly 1 (because the narrowfn
      returns a sign-extended result, not one that is truncated to the
      element size).
      
      Fix this by using an extract operation to get exactly the correct
      bits of the output of the narrowfn for element 1, instead of a
      plain move.
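
      A hedged sketch of the shape of the fix, using TCG's extract op so
      that only the low esize bits of the (possibly sign-extended) narrowfn
      output reach tcg_rd for the first element; the variable names are
      assumed from the surrounding loop:

          if (i == 0) {
              /* extract, not mov: drop any sign-extended bits above esize */
              tcg_gen_extract_i64(tcg_rd, tcg_rd_narrowed, 0, esize);
          } else {
              tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_rd_narrowed,
                                  i * esize, esize);
          }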
      
      Cc: qemu-stable@nongnu.org
      Fixes: 1b7bc9b5 ("target/arm: Avoid tcg_const_ptr in handle_vec_simd_sqshrn")
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2089

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20240123153416.877308-1-peter.maydell@linaro.org
      (cherry picked from commit 6fffc8378562c7fea6290c430b4f653f830a4c1a)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
    • target/xtensa: fix OOB TLB entry access · 553e53b4
      Max Filippov authored
      
      r[id]tlb[01], [iw][id]tlb opcodes use a TLB way index passed in a register
      by the guest. The host uses 3 bits of the index for ITLB indexing and 4
      bits for DTLB, but there are only 7 entries in the ITLB array and 10 in
      the DTLB array, so a malicious guest may trigger out-of-bounds access to
      these arrays.
      
      Change the split_tlb_entry_spec return type to bool to indicate whether
      the TLB way passed to it is valid. Change get_tlb_entry to return NULL
      when an invalid TLB way is requested. Add an assertion to
      xtensa_tlb_get_entry that the requested TLB way and entry indices are
      valid. Add checks to the [rwi]tlb helpers that the requested TLB way is
      valid, and return 0 or do nothing when it is not.
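
      A simplified standalone sketch of the guard (the array sizes come
      from the text above; the names here are illustrative, not the actual
      xtensa helpers):

          #include <stdbool.h>
          #include <stdint.h>

          #define ITLB_WAYS 7
          #define DTLB_WAYS 10

          /* Reject guest-supplied way indices outside the arrays. */
          static bool tlb_way_is_valid(bool dtlb, uint32_t wi)
          {
              return wi < (dtlb ? DTLB_WAYS : ITLB_WAYS);
          }

          /* Pattern used in the [rwi]tlb helpers: bail out early. */
          static uint32_t rtlb_helper(bool dtlb, uint32_t wi)
          {
              if (!tlb_way_is_valid(dtlb, wi)) {
                  return 0;   /* instead of reading past the array end */
              }
              /* ... normal TLB entry lookup ... */
              return 1;
          }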
      
      Cc: qemu-stable@nongnu.org
      Fixes: b67ea0cd ("target-xtensa: implement memory protection options")
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Message-id: 20231215120307.545381-1-jcmvbkbc@gmail.com
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      (cherry picked from commit 604927e357c2b292c70826e4ce42574ad126ef32)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
  2. Jan 20, 2024
    • target/i386: pcrel: store low bits of physical address in data[0] · c46f68bd
      Paolo Bonzini authored
      For PC-relative translation blocks, env->eip changes during the
      execution of a translation block. Therefore, QEMU must be able to
      recover an instruction's PC just from the TranslationBlock struct and
      the instruction data.  Because a TB will not span two pages, QEMU
      stores all the low bits of EIP in the instruction data and replaces them
      in x86_restore_state_to_opc.  Bits 12 and higher (which may vary between
      executions of a PCREL TB, since these only use the physical address in
      the hash key) are kept unmodified from env->eip.  The assumption is that
      these bits of EIP, unlike bits 0-11, will not change as the translation
      block executes.
      
      Unfortunately, this is incorrect when the CS base is not aligned to a page.
      Then the linear address of the instructions (i.e. the one with the
      CS base added) indeed will never span two pages, but bits 12+ of EIP
      can actually change.  For example, if CS base is 0x80262200 and EIP =
      0x6FF4, the first instruction in the translation block will be at linear
      address 0x802691F4.  Even a very small TB will cross to EIP = 0x7xxx,
      while the linear addresses will remain comfortably within a single page.
      
      The fix is simply to use the low bits of the linear address for data[0],
      since those don't change.  Then x86_restore_state_to_opc uses tb->cs_base
      to compute a temporary linear address (referring to some unknown
      instruction in the TB, but with the correct values of bits 12 and higher);
      the low bits are replaced with data[0], and EIP is obtained by subtracting
      again the CS base.
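
      The arithmetic can be checked with the numbers from the example above
      (a standalone sketch; the masks assume 4 KiB pages):

          #include <inttypes.h>
          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              uint32_t cs_base = 0x80262200, eip = 0x6FF4;
              uint32_t linear = cs_base + eip;       /* 0x802691F4 */
              uint32_t data0  = linear & 0xFFF;      /* stored per instruction */

              /* Restore: bits 12+ come from a temporary linear address that
               * is valid for the whole TB, the low bits come from data[0],
               * and subtracting the CS base yields EIP again. */
              uint32_t tmp_linear = linear & ~0xFFFu;
              uint32_t restored_eip = (tmp_linear | data0) - cs_base;

              /* prints linear=0x802691F4 eip=0x6FF4 */
              printf("linear=0x%08" PRIX32 " eip=0x%04" PRIX32 "\n",
                     linear, restored_eip);
              return 0;
          }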
      
      Huge thanks to Mark Cave-Ayland for the image and initial debugging,
      and to Gitlab user @kjliew for help with bisecting another occurrence
      of (hopefully!) the same bug.
      
      It should be relatively easy to write a testcase that performs MMIO on
      an EIP with different bits 12+ than the first instruction of the translation
      block; any help is welcome.
      
      Fixes: e3a79e0e ("target/i386: Enable TARGET_TB_PCREL", 2022-10-11)
      Cc: qemu-stable@nongnu.org
      Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Cc: Richard Henderson <richard.henderson@linaro.org>
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1759
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1964
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2012

      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      (cherry picked from commit 729ba8e933f8af5800c3a92b37e630e9bdaa9f1e)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
    • target/i386: fix incorrect EIP in PC-relative translation blocks · 652c34cb
      guoguangyao authored
      
      The PCREL patches introduced a bug when updating EIP in the !CF_PCREL
      case. Using s->pc in the function gen_update_eip_next() solves the
      problem.
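
      A hedged sketch of the repaired helper (the CF_PCREL branch and the
      exact TCG calls are assumed from context; the point is that the
      !CF_PCREL branch derives the new EIP from s->pc, the address of the
      next instruction):

          static void gen_update_eip_next(DisasContext *s)
          {
              if (tb_cflags(s->base.tb) & CF_PCREL) {
                  tcg_gen_addi_tl(cpu_eip, cpu_eip, s->pc - s->pc_save);
              } else {
                  /* derive the next EIP from s->pc, per the fix */
                  tcg_gen_movi_tl(cpu_eip, (uint32_t)(s->pc - s->cs_base));
              }
              s->pc_save = s->pc;
          }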
      
      Cc: qemu-stable@nongnu.org
      Fixes: b5e0d5d2 ("target/i386: Fix 32-bit wrapping of pc/eip computation")
      Signed-off-by: guoguangyao <guoguangyao18@mails.ucas.ac.cn>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-ID: <20240115020804.30272-1-guoguangyao18@mails.ucas.ac.cn>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      (cherry picked from commit 2926eab8969908bc068629e973062a0fb6ff3759)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
    • target/i386: Do not re-compute new pc with CF_PCREL · 6e8e580e
      Richard Henderson authored
      
      With PCREL, we have a page-relative view of EIP, and an
      approximation of PC = EIP+CSBASE that is good enough to
      detect page crossings.  If we try to recompute PC after
      masking EIP, we will mess up that approximation and write
      a corrupt value to EIP.
      
      We already handled masking properly for PCREL, so the
      fix in b5e0d5d2 was only needed for the !PCREL path.
      
      Cc: qemu-stable@nongnu.org
      Fixes: b5e0d5d2 ("target/i386: Fix 32-bit wrapping of pc/eip computation")
      Reported-by: Michael Tokarev <mjt@tls.msk.ru>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
      Message-ID: <20240101230617.129349-1-richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      (cherry picked from commit a58506b748b8988a95f4fa1a2420ac5c17038b30)
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
  3. Dec 06, 2023
    • i386/sev: Avoid SEV-ES crash due to missing MSR_EFER_LMA bit · 5746f70d
      Michael Roth authored
      
      Commit 7191f24c ("accel/kvm/kvm-all: Handle register access errors")
      added error checking for KVM_SET_SREGS/KVM_SET_SREGS2. In doing so, it
      exposed a long-running bug in current KVM support for SEV-ES where the
      kernel assumes that MSR_EFER_LMA will be set explicitly by the guest
      kernel, in which case EFER write traps would result in KVM eventually
      seeing MSR_EFER_LMA get set and recording it in such a way that it would
      be subsequently visible when accessing it via KVM_GET_SREGS/etc.
      
      However, guest kernels currently rely on MSR_EFER_LMA getting set
      automatically when MSR_EFER_LME is set and paging is enabled via
      CR0_PG_MASK. As a result, the EFER write traps don't actually expose the
      MSR_EFER_LMA bit, even though it is set internally, and when QEMU
      subsequently tries to pass this EFER value back to KVM via
      KVM_SET_SREGS* it will fail various sanity checks and return -EINVAL,
      which is now considered fatal due to the aforementioned QEMU commit.
      
      This can be addressed by inferring the MSR_EFER_LMA bit being set when
      paging is enabled and MSR_EFER_LME is set, and synthesizing it to ensure
      the expected bits are all present in subsequent handling on the host
      side.
      
      Ultimately, this handling will be implemented in the host kernel, but to
      avoid breaking QEMU's SEV-ES support when using older host kernels, the
      same handling can be done in QEMU just after fetching the register
      values via KVM_GET_SREGS*. Implement that here.
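
      A standalone sketch of that workaround (the architectural bit
      positions are real; the function name and where it hooks into the
      KVM_GET_SREGS* path are assumptions):

          #include <stdint.h>

          #define CR0_PG_MASK   (1ull << 31)  /* CR0.PG: paging enabled */
          #define MSR_EFER_LME  (1ull << 8)   /* EFER.LME: long mode enable */
          #define MSR_EFER_LMA  (1ull << 10)  /* EFER.LMA: long mode active */

          /* Synthesize EFER.LMA the way hardware would report it, so the
           * value later passed back via KVM_SET_SREGS* passes the kernel's
           * sanity checks. */
          static uint64_t sev_es_fixup_efer(uint64_t efer, uint64_t cr0)
          {
              if ((cr0 & CR0_PG_MASK) && (efer & MSR_EFER_LME)) {
                  efer |= MSR_EFER_LMA;
              }
              return efer;
          }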
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
      Cc: Philippe Mathieu-Daudé <philmd@linaro.org>
      Cc: Lara Lazier <laramglazier@gmail.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Cc: <kvm@vger.kernel.org>
      Fixes: 7191f24c ("accel/kvm/kvm-all: Handle register access errors")
      Signed-off-by: Michael Roth <michael.roth@amd.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-ID: <20231206155821.1194551-1-michael.roth@amd.com>
  4. Dec 04, 2023
    • target/riscv/kvm: fix shadowing in kvm_riscv_(get|put)_regs_csr · 560b8e1d
      Daniel Henrique Barboza authored
      
      KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() use an 'int ret' variable
      and do an early 'return' if ret is non-zero. Both are called in
      functions that also declare a 'ret' integer, initialized to '0', which
      is used as the return value of the function.
      
      The result is that the compiler is less than pleased and points out
      the shadowing errors:
      
      ../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_get_regs_csr':
      ../target/riscv/kvm/kvm-cpu.c:90:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
         90 |         int ret = kvm_get_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
            |             ^~~
      ../target/riscv/kvm/kvm-cpu.c:539:5: note: in expansion of macro 'KVM_RISCV_GET_CSR'
        539 |     KVM_RISCV_GET_CSR(cs, env, sstatus, env->mstatus);
            |     ^~~~~~~~~~~~~~~~~
      ../target/riscv/kvm/kvm-cpu.c:536:9: note: shadowed declaration is here
        536 |     int ret = 0;
            |         ^~~
      
      ../target/riscv/kvm/kvm-cpu.c: In function 'kvm_riscv_put_regs_csr':
      ../target/riscv/kvm/kvm-cpu.c:98:13: error: declaration of 'ret' shadows a previous local [-Werror=shadow=compatible-local]
         98 |         int ret = kvm_set_one_reg(cs, RISCV_CSR_REG(env, csr), &reg); \
            |             ^~~
      ../target/riscv/kvm/kvm-cpu.c:556:5: note: in expansion of macro 'KVM_RISCV_SET_CSR'
        556 |     KVM_RISCV_SET_CSR(cs, env, sstatus, env->mstatus);
            |     ^~~~~~~~~~~~~~~~~
      ../target/riscv/kvm/kvm-cpu.c:553:9: note: shadowed declaration is here
        553 |     int ret = 0;
            |         ^~~
      
      The macros do early returns on non-zero return values, and the local
      'ret' variable in both functions is used just to 'return 0', so remove
      it from kvm_riscv_get_regs_csr() and kvm_riscv_put_regs_csr() and do a
      straight 'return 0' at the end.
      
      For good measure, let's also rename the 'ret' variables in
      KVM_RISCV_GET_CSR() and KVM_RISCV_SET_CSR() to '_ret' to make them
      more resilient to this kind of error.
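
      A sketch of the reworked macro, following the description above and
      the macro body visible in the compiler output (the do/while wrapper
      is an assumption):

          #define KVM_RISCV_GET_CSR(cs, env, csr, reg)                     \
              do {                                                         \
                  int _ret = kvm_get_one_reg(cs, RISCV_CSR_REG(env, csr),  \
                                             &reg);                        \
                  if (_ret) {                                              \
                      return _ret;                                         \
                  }                                                        \
              } while (0)

      With the early returns inside the macro, the callers simply end with
      a plain 'return 0;'.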
      
      Fixes: 937f0b45 ("target/riscv: Implement kvm_arch_get_registers")
      Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Tested-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
      Message-ID: <20231123101338.1040134-1-dbarboza@ventanamicro.com>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
    • sh4: Coding style: Remove tabs · 55339361
      Yihuan Pan authored
      Replace TABs with spaces to ensure a consistent coding style with an
      indentation of 4 spaces in the SH4 subsystem.
      
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/376

      Signed-off-by: Yihuan Pan <xun794@gmail.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Message-ID: <20231124044554.513752-1-xun794@gmail.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
    • target/arm: Disable SME if SVE is disabled · f7767ca3
      Peter Maydell authored
      There is no architectural requirement that SME implies SVE, but
      our implementation currently assumes it. (FEAT_SME_FA64 does
      imply SVE.) So if you try to run a CPU with e.g. "-cpu max,sve=off"
      you quickly run into an assert when the guest tries to write to
      SMCR_EL1:
      
      #6  0x00007ffff4b38e96 in __GI___assert_fail
          (assertion=0x5555566e69cb "sm", file=0x5555566e5b24 "../../target/arm/helper.c", line=6865, function=0x5555566e82f0 <__PRETTY_FUNCTION__.31> "sve_vqm1_for_el_sm") at ./assert/assert.c:101
      #7  0x0000555555ee33aa in sve_vqm1_for_el_sm (env=0x555557d291f0, el=2, sm=false) at ../../target/arm/helper.c:6865
      #8  0x0000555555ee3407 in sve_vqm1_for_el (env=0x555557d291f0, el=2) at ../../target/arm/helper.c:6871
      #9  0x0000555555ee3724 in smcr_write (env=0x555557d291f0, ri=0x555557da23b0, value=2147483663) at ../../target/arm/helper.c:6995
      #10 0x0000555555fd1dba in helper_set_cp_reg64 (env=0x555557d291f0, rip=0x555557da23b0, value=2147483663) at ../../target/arm/tcg/op_helper.c:839
      #11 0x00007fff60056781 in code_gen_buffer ()
      
      Avoid this unsupported and slightly odd combination by
      disabling SME when SVE is not present.
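
      A hedged sketch of the fixup (FIELD_DP64 and the ID_AA64PFR1.SME
      field follow QEMU/architecture naming; the exact placement in the CPU
      finalize code is assumed):

          if (!cpu_isar_feature(aa64_sve, cpu)) {
              /* Without SVE, report SME as not implemented as well */
              cpu->isar.id_aa64pfr1 =
                  FIELD_DP64(cpu->isar.id_aa64pfr1, ID_AA64PFR1, SME, 0);
          }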
      
      Cc: qemu-stable@nongnu.org
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2005

      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20231127173318.674758-1-peter.maydell@linaro.org
  5. Nov 27, 2023
    • target/arm: Handle overflow in calculation of next timer tick · 8d37a142
      Peter Maydell authored
      In commit edac4d8a back in 2015 when we added support for
      the virtual timer offset CNTVOFF_EL2, we didn't correctly update
      the timer-recalculation code that figures out when the timer
      interrupt is next going to change state. We got it wrong in
      two ways:
       * for the 0->1 transition, we didn't notice that gt->cval + offset
         can overflow a uint64_t
       * for the 1->0 transition, we didn't notice that the transition
         might now happen before the count rolls over, if offset > count
      
      In the former case, we end up trying to set the next interrupt
      for a time in the past, which results in QEMU hanging as the
      timer fires continuously.
      
      In the latter case, we would fail to update the interrupt
      status when we are supposed to.
      
      Fix the calculations in both cases.
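
      A standalone sketch of the 0->1 case: the candidate next tick is
      gt->cval + offset, and the sum has to be overflow-checked before it
      is used to arm the timer (the names here are illustrative):

          #include <stdbool.h>
          #include <stdint.h>

          /* Compute when (count - offset) next reaches cval. A wrapped sum
           * means the transition lies beyond the 64-bit count, so there is
           * no future tick to arm the timer for. */
          static bool next_transition_tick(uint64_t cval, uint64_t offset,
                                           uint64_t *nexttick)
          {
              if (cval > UINT64_MAX - offset) {
                  return false;   /* cval + offset would overflow */
              }
              *nexttick = cval + offset;
              return true;
          }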
      
      The test case is Alex Bennée's from the bug report, and tests
      the 0->1 transition overflow case.
      
      Fixes: edac4d8a ("target-arm: Add CNTVOFF_EL2")
      Cc: qemu-stable@nongnu.org
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/60

      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20231120173506.3729884-1-peter.maydell@linaro.org
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    • target/arm: Set IL bit for pauth, SVE access, BTI trap syndromes · 11a3c4a2
      Peter Maydell authored
      
      The syndrome register value always has an IL field at bit 25, which
      is 0 for a trap on a 16 bit instruction, and 1 for a trap on a 32
      bit instruction (or for exceptions which aren't traps on a known
      instruction, like PC alignment faults). This means that our
      syn_*() functions should always either take an is_16bit argument to
      determine whether to set the IL bit, or else unconditionally set it.
      
      We missed setting the IL bit for the syndrome for three kinds of trap:
       * an SVE access exception
       * a pointer authentication check failure
       * a BTI (branch target identification) check failure
      
      All of these traps are AArch64 only, and so the instruction causing
      the trap is always 64 bit. This means we can unconditionally set
      the IL bit in the syn_*() function.
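
      A sketch of the pattern (the constants follow QEMU's syndrome
      conventions, with the EC field in the top bits and IL at bit 25; the
      EC enumerator is assumed from context):

          #define ARM_EL_EC_SHIFT  26
          #define ARM_EL_IL        (1 << 25)  /* 32-bit instruction length */

          static inline uint32_t syn_sve_access_trap(void)
          {
              /* AArch64-only trap: the IL bit can be set unconditionally */
              return (EC_SVEACCESSTRAP << ARM_EL_EC_SHIFT) | ARM_EL_IL;
          }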
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20231120150121.3458408-1-peter.maydell@linaro.org
      Cc: qemu-stable@nongnu.org
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>