  Oct 06, 2017
    • Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging · 530049bc
      Peter Maydell authored
      
      Block layer patches
      
      # gpg: Signature made Fri 06 Oct 2017 16:52:59 BST
      # gpg:                using RSA key 0x7F09B272C88F2FD6
      # gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
      # Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6
      
      * remotes/kevin/tags/for-upstream: (54 commits)
        block/mirror: check backing in bdrv_mirror_top_flush
        qcow2: truncate the tail of the image file after shrinking the image
        qcow2: fix return error code in qcow2_truncate()
        iotests: Fix 195 if IMGFMT is part of TEST_DIR
        block/mirror: check backing in bdrv_mirror_top_refresh_filename
        block: support passthrough of BDRV_REQ_FUA in crypto driver
        block: convert qcrypto_block_encrypt|decrypt to take bytes offset
        block: convert crypto driver to bdrv_co_preadv|pwritev
        block: fix data type casting for crypto payload offset
        crypto: expose encryption sector size in APIs
        block: use 1 MB bounce buffers for crypto instead of 16KB
        iotests: Add test 197 for covering copy-on-read
        block: Perform copy-on-read in loop
        block: Add blkdebug hook for copy-on-read
        iotests: Restore stty settings on completion
        block: Uniform handling of 0-length bdrv_get_block_status()
        qemu-io: Add -C for opening with copy-on-read
        commit: Remove overlay_bs
        qemu-iotests: Test commit block job where top has two parents
        qemu-iotests: Allow QMP pretty printing in common.qemu
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20171006' into staging · 5121d81e
      Peter Maydell authored
      
      target-arm:
       * v8M: more preparatory work
       * nvic: reset properly rather than leaving the nvic in a weird state
        * xlnx-zynqmp: Mark the "xlnx,zynqmp" device with user_creatable = false
       * sd: fix out-of-bounds check for multi block reads
       * arm: Fix SMC reporting to EL2 when QEMU provides PSCI
      
      # gpg: Signature made Fri 06 Oct 2017 16:58:15 BST
      # gpg:                using RSA key 0x3C2525ED14360CDE
      # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
      # gpg:                 aka "Peter Maydell <pmaydell@gmail.com>"
      # gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
      # Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE
      
      * remotes/pmaydell/tags/pull-target-arm-20171006:
        nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit
        target/arm: Factor out "get mmuidx for specified security state"
        target/arm: Fix calculation of secure mm_idx values
        target/arm: Implement security attribute lookups for memory accesses
        nvic: Implement Security Attribution Unit registers
        target/arm: Add v8M support to exception entry code
        target/arm: Add support for restoring v8M additional state context
        target/arm: Update excret sanity checks for v8M
        target/arm: Add new-in-v8M SFSR and SFAR
        target/arm: Don't warn about exception return with PC low bit set for v8M
        target/arm: Warn about restoring to unaligned stack
        target/arm: Check for xPSR mismatch usage faults earlier for v8M
        target/arm: Restore SPSEL to correct CONTROL register on exception return
        target/arm: Restore security state on exception return
        target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode
        target/arm: Don't switch to target stack early in v7M exception return
        nvic: Clear the vector arrays and prigroup on reset
         hw/arm/xlnx-zynqmp: Mark the "xlnx,zynqmp" device with user_creatable = false
        hw/sd: fix out-of-bounds check for multi block reads
        arm: Fix SMC reporting to EL2 when QEMU provides PSCI
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • nvic: Add missing code for writing SHCSR.HARDFAULTPENDED bit · 04829ce3
      Peter Maydell authored
      
      When we added support for the new SHCSR bits in v8M in commit
      437d59c1 the code to support writing to the new HARDFAULTPENDED
      bit was accidentally only added for non-secure writes; the
      secure banked version of the bit should also be writable.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-21-git-send-email-peter.maydell@linaro.org
    • target/arm: Factor out "get mmuidx for specified security state" · b81ac0eb
      Peter Maydell authored
      
      For the SG instruction and secure function return we are going
      to want to do memory accesses using the MMU index of the CPU
      in secure state, even though the CPU is currently in non-secure
      state. Write arm_v7m_mmu_idx_for_secstate() to do this job,
      and use it in cpu_mmu_index().
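
      As a rough, hypothetical sketch of the shape such a helper could take
      (illustrative only, not the exact QEMU implementation; it ignores any
      negative-priority handling the real code may need):

          ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
          {
              /* Pick the M-profile MMU index for the requested security
               * state, regardless of the CPU's current security state. */
              bool priv = arm_current_el(env) != 0;

              if (secstate) {
                  return priv ? ARMMMUIdx_MSPriv : ARMMMUIdx_MSUser;
              }
              return priv ? ARMMMUIdx_MPriv : ARMMMUIdx_MUser;
          }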
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org
    • target/arm: Fix calculation of secure mm_idx values · fe768788
      Peter Maydell authored
      
      In cpu_mmu_index() we try to do this:
              if (env->v7m.secure) {
                  mmu_idx += ARMMMUIdx_MSUser;
              }
      but it will give the wrong answer, because ARMMMUIdx_MSUser
      includes the 0x40 ARM_MMU_IDX_M field, and so does the
      mmu_idx we're adding to, and we'll end up with 0x8n rather
      than 0x4n. This error is then nullified by the call to
      arm_to_core_mmu_idx() which masks out the high part, but
      we're about to factor out the code that calculates the
      ARMMMUIdx values so it can be used without passing it through
      arm_to_core_mmu_idx(), so fix this bug first.
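
      A small, self-contained illustration of the arithmetic (the index
      values are made up for the example; only the shared 0x40 flag matters):

          #include <stdio.h>

          enum {
              MMU_IDX_M_FLAG = 0x40,                   /* flag carried by every M-profile index */
              IDX_MUser      = 0x0 | MMU_IDX_M_FLAG,   /* 0x40 */
              IDX_MSUser     = 0x4 | MMU_IDX_M_FLAG,   /* 0x44 */
          };

          int main(void)
          {
              int mmu_idx = IDX_MUser;

              /* buggy: both operands carry the 0x40 flag, so it is counted twice */
              printf("buggy: 0x%x\n", mmu_idx + IDX_MSUser);                /* 0x84 */
              /* one way to avoid the double count: add only the offset
               * between the secure and non-secure index bases */
              printf("fixed: 0x%x\n", mmu_idx + (IDX_MSUser - IDX_MUser));  /* 0x44 */
              return 0;
          }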
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
    • target/arm: Implement security attribute lookups for memory accesses · 35337cc3
      Peter Maydell authored
      
      Implement the security attribute lookups for memory accesses
      in the get_phys_addr() functions, causing these to generate
      various kinds of SecureFault for bad accesses.
      
      The major subtlety in this code relates to handling of the
      case when the security attributes the SAU assigns to the
      address don't match the current security state of the CPU.
      
      In the ARM ARM pseudocode for validating instruction
      accesses, the security attributes of the address determine
      whether the Secure or NonSecure MPU state is used. At face
      value, handling this would require us to encode the relevant
      bits of state into mmu_idx for both S and NS at once, which
      would result in our needing 16 mmu indexes. Fortunately we
      don't actually need to do this because a mismatch between
      address attributes and CPU state means either:
       * some kind of fault (usually a SecureFault, but in theory
         perhaps a UserFault for unaligned access to Device memory)
       * execution of the SG instruction in NS state from a
         Secure & NonSecure code region
      
      The purpose of SG is simply to flip the CPU into Secure
      state, so we can handle it by emulating execution of that
      instruction directly in arm_v7m_cpu_do_interrupt(), which
      means we can treat all the mismatch cases as "throw an
      exception" and we don't need to encode the state of the
      other MPU bank into our mmu_idx values.
      
      This commit doesn't include the actual emulation of SG;
      it also doesn't include implementation of the IDAU, which
      is a per-board way to specify hard-coded memory attributes
      for addresses, which override the CPU-internal SAU if they
      specify a more secure setting than the SAU is programmed to.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
    • nvic: Implement Security Attribution Unit registers · 9901c576
      Peter Maydell authored
      
      Implement the register interface for the SAU: SAU_CTRL,
      SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the
      actual behaviour is implemented here; registers just
      read back as written.
      
      When the CPU definition for Cortex-M33 is eventually
      added, its initfn will set cpu->sau_sregion, in the same
      way that we currently set cpu->pmsav7_dregion for the
      M3 and M4.
      
      The number of SAU regions is typically a configurable
      CPU parameter, but this patch doesn't provide a
      QEMU CPU property for it. We can easily add one when
      we have a board that requires it.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
    • target/arm: Add v8M support to exception entry code · d3392718
      Peter Maydell authored
      
      Add support for v8M and in particular the security extension
      to the exception entry code. This requires changes to:
       * calculation of the exception-return magic LR value
       * push the callee-saves registers in certain cases
       * clear registers when taking non-secure exceptions to avoid
         leaking information from the interrupted secure code
       * switch to the correct security state on entry
       * use the vector table for the security state we're targeting
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
    • target/arm: Add support for restoring v8M additional state context · 907bedb3
      Peter Maydell authored
      
      For v8M, exceptions from Secure to Non-Secure state will save
      callee-saved registers to the exception frame as well as the
      caller-saved registers. Add support for unstacking these
      registers in exception exit when necessary.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
    • target/arm: Update excret sanity checks for v8M · bfb2eb52
      Peter Maydell authored
      
      In v8M, more bits are defined in the exception-return magic
      values; update the code that checks these so we accept
      the v8M values when the CPU permits them.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
    • target/arm: Add new-in-v8M SFSR and SFAR · bed079da
      Peter Maydell authored
      
      Add the new M profile Secure Fault Status Register
      and Secure Fault Address Register.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
    • target/arm: Don't warn about exception return with PC low bit set for v8M · 4e4259d3
      Peter Maydell authored
      
      In the v8M architecture, return from an exception to a PC which
      has bit 0 set is not UNPREDICTABLE; it is defined that bit 0
      is discarded [R_HRJH]. Restrict our complaint about this to v7M.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
    • target/arm: Warn about restoring to unaligned stack · cb484f9a
      Peter Maydell authored
      
      Attempting to do an exception return with an exception frame that
      is not 8-aligned is UNPREDICTABLE in v8M; warn about this.
      (It is not UNPREDICTABLE in v7M, and our implementation can
      handle the merely-4-aligned case fine, so we don't need to
      do anything except warn.)
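
      A rough sketch of the kind of check this implies, assuming a local
      frame-pointer variable (hypothetical name frameptr) that holds the
      address of the exception frame being restored:

          if (frameptr & 7) {
              qemu_log_mask(LOG_GUEST_ERROR,
                            "M profile exception return with non-8-aligned "
                            "SP for destination state is UNPREDICTABLE\n");
          }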
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
    • target/arm: Check for xPSR mismatch usage faults earlier for v8M · 224e0c30
      Peter Maydell authored
      
      ARM v8M specifies that the INVPC usage fault for mismatched
      xPSR exception field and handler mode bit should be checked
      before updating the PSR and SP, so that the fault is taken
      with the existing stack frame rather than by pushing a new one.
      Perform this check in the right place for v8M.
      
      Since v7M specifies in its pseudocode that this usage fault
      check should happen later, we have to retain the original
      code for that check rather than being able to merge the two.
      (The distinction is architecturally visible but only in
      very obscure corner cases like attempting an invalid exception
      return with an exception frame in read only memory.)
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
    • target/arm: Restore SPSEL to correct CONTROL register on exception return · 3f0cddee
      Peter Maydell authored
      
      On exception return for v8M, the SPSEL bit in the EXC_RETURN magic
      value should be restored to the SPSEL bit in the banked CONTROL
      register specified by the EXC_RETURN.ES bit.
      
      Add write_v7m_control_spsel_for_secstate() which behaves like
      write_v7m_control_spsel() but allows the caller to specify which
      CONTROL bank to use, reimplement write_v7m_control_spsel() in
      terms of it, and use it in exception return.
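
      A sketch of the wrapper pattern described above (signatures and field
      names are assumptions rather than the exact QEMU code):

          static void write_v7m_control_spsel_for_secstate(CPUARMState *env,
                                                           bool new_spsel,
                                                           bool secstate)
          {
              /* Update the SPSEL bit in the CONTROL register of the
               * requested security bank; only when that bank belongs to the
               * CPU's current security state does the live SP need switching. */
          }

          static void write_v7m_control_spsel(CPUARMState *env, bool new_spsel)
          {
              /* The old entry point now simply targets the current state. */
              write_v7m_control_spsel_for_secstate(env, new_spsel, env->v7m.secure);
          }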
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
    • target/arm: Restore security state on exception return · 3919e60b
      Peter Maydell authored
      
      Now that we can handle the CONTROL.SPSEL bit not necessarily being
      in sync with the current stack pointer, we can restore the correct
      security state on exception return. This happens before we start
      to read registers off the stack frame, but after we have taken
      possible usage faults for bad exception return magic values and
      updated CONTROL.SPSEL.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
    • target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode · de2db7ec
      Peter Maydell authored
      
      In the v7M architecture, there is an invariant that if the CPU is
      in Handler mode then the CONTROL.SPSEL bit cannot be nonzero.
      This in turn means that the current stack pointer is always
      indicated by CONTROL.SPSEL, even though Handler mode always uses
      the Main stack pointer.
      
      In v8M, this invariant is removed, and CONTROL.SPSEL may now
      be nonzero in Handler mode (though Handler mode still always
      uses the Main stack pointer). In preparation for this change,
      change how we handle this bit: rename switch_v7m_sp() to
      the now more accurate write_v7m_control_spsel(), and make it
      check both the handler mode state and the SPSEL bit.
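
      A hypothetical helper capturing the combined check (the names and the
      SPSEL bit position are assumptions for illustration):

          static bool v7m_using_psp(CPUARMState *env)
          {
              /* Handler mode always runs on the Main stack; in Thread mode
               * CONTROL.SPSEL (bit 1) selects the Process vs Main stack. */
              return !arm_v7m_is_handler_mode(env) &&
                     (env->v7m.control[env->v7m.secure] & (1 << 1));
          }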
      
      Note that this implicitly changes the point at which we switch
      active SP on exception exit from before we pop the exception
      frame to after it.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
    • target/arm: Don't switch to target stack early in v7M exception return · 5b522399
      Peter Maydell authored
      
      Currently our M profile exception return code switches to the
      target stack pointer relatively early in the process, before
      it tries to pop the exception frame off the stack. This is
      awkward for v8M for two reasons:
       * in v8M the process vs main stack pointer is not selected
         purely by the value of CONTROL.SPSEL, so updating SPSEL
         and relying on that to switch to the right stack pointer
         won't work
       * the stack we should be reading the stack frame from and
         the stack we will eventually switch to might not be the
         same if the guest is doing strange things
      
      Change our exception return code to use a 'frame pointer'
      to read the exception frame rather than assuming that we
      can switch the live stack pointer this early.
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org