- Nov 02, 2020
-
-
Peter Maydell authored
In arm_v7m_mmu_idx_for_secstate() we get the 'priv' level to pass to armv7m_mmu_idx_for_secstate_and_priv() by calling arm_current_el(). This is incorrect when the security state being queried is not the current one, because arm_current_el() uses the current security state to determine which of the banked CONTROL.nPRIV bits to look at. The effect was that if (for instance) Secure state was in privileged mode but Non-Secure was not then we would return the wrong MMU index. The only places where we are using this function in a way that could trigger this bug are for the stack loads during a v8M function-return and for the instruction fetch of a v8M SG insn. Fix the bug by expanding out the M-profile version of the arm_current_el() logic inline so it can use the passed in secstate rather than env->v7m.secure. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201022164408.13214-1-peter.maydell@linaro.org
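A minimal sketch of the corrected lookup described above, assuming QEMU's usual M-profile helpers (arm_v7m_is_handler_mode(), the banked env->v7m.control[] array and the R_V7M_CONTROL_NPRIV_MASK field macro); the real patch may differ in detail:

    ARMMMUIdx arm_v7m_mmu_idx_for_secstate(CPUARMState *env, bool secstate)
    {
        /* Handler mode is always privileged; otherwise consult the
         * CONTROL.nPRIV bit banked for the *requested* security state,
         * not the current one as arm_current_el() would. */
        bool priv = arm_v7m_is_handler_mode(env) ||
                    !(env->v7m.control[secstate] & R_V7M_CONTROL_NPRIV_MASK);

        return arm_v7m_mmu_idx_for_secstate_and_priv(env, secstate, priv);
    }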
-
Alex Chen authored
In exynos4210_fimd_update(), the pointer s is dereferenced before being checked for validity, which may lead to a NULL pointer dereference. So move the assignment to global_width to after the check that s is valid. Reported-by:
Euler Robot <euler.robot@huawei.com> Signed-off-by:
Alex Chen <alex.chen@huawei.com> Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 5F9F8D88.9030102@huawei.com Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
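The shape of the fix above, as a self-contained sketch with placeholder types and field names rather than the exact exynos4210_fimd code:

    #include <stddef.h>

    /* Placeholder state structure, purely for illustration. */
    struct fimd_state {
        int enabled;
        unsigned int vidtcon2;
    };

    static void fimd_update(struct fimd_state *s)
    {
        unsigned int global_width;

        /* Check the pointer first... */
        if (s == NULL || !s->enabled) {
            return;
        }

        /* ...and only dereference it once it is known to be valid. */
        global_width = (s->vidtcon2 & 0x7ff) + 1;
        (void)global_width;
    }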
-
Alex Chen authored
In omap_lcd_interrupts(), the pointer omap_lcd is dereferenced before being checked for validity, which may lead to a NULL pointer dereference. So move the assignment to surface to after the check that omap_lcd is valid, and move surface_bits_per_pixel(surface) to after the surface assignment. Reported-by:
Euler Robot <euler.robot@huawei.com> Signed-off-by:
AlexChen <alex.chen@huawei.com> Message-id: 5F9CDB8A.9000001@huawei.com Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Rémi Denis-Courmont authored
When booting a CPU with EL3 using the -kernel flag, set up CPTR_EL3 so that SVE will not trap to EL3. Signed-off-by:
Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030151541.11976-1-remi@remlab.net Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
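A hedged sketch of what the boot-time reset would do, assuming QEMU's CPTR_EZ definition for the CPTR_EL3.EZ bit and the aa64_sve ISAR feature test; the actual patch may differ:

    /* In the EL3 reset path used for direct -kernel boot: leave SVE
     * untrapped so the kernel code started by the loader can use it. */
    if (cpu_isar_feature(aa64_sve, cpu)) {
        env->cp15.cptr_el[3] |= CPTR_EZ;
    }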
-
Philippe Mathieu-Daudé authored
Use the BIT_ULL() macro to ensure we use 64-bit arithmetic. This fixes the following Coverity issue (OVERFLOW_BEFORE_WIDEN): CID 1432363 (#1 of 1): Unintentional integer overflow: overflow_before_widen: Potentially overflowing expression 1 << scale with type int (32 bits, signed) is evaluated using 32-bit arithmetic, and then used in a context that expects an expression of type hwaddr (64 bits, unsigned). Signed-off-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Acked-by:
Eric Auger <eric.auger@redhat.com> Message-id: 20201030144617.1535064-1-philmd@redhat.com Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
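The general pattern of the fix, shown as a standalone sketch (BIT_ULL() here is spelled out the way qemu/bitops.h defines it):

    #include <stdint.h>

    #define BIT_ULL(nr) (1ULL << (nr))   /* as in qemu/bitops.h */

    uint64_t scaled_length(unsigned int scale)
    {
        /* Buggy: '1 << scale' is evaluated as a 32-bit signed int and can
         * overflow before being widened to the 64-bit return type:
         *     return 1 << scale;
         * Fixed: perform the shift in 64-bit arithmetic from the start. */
        return BIT_ULL(scale);
    }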
-
Peter Maydell authored
If we're using the capstone disassembler, disassembly of a run of instructions more than 32 bytes long disassembles the wrong data for instructions beyond the 32 byte mark: (qemu) xp /16x 0x100 0000000000000100: 0x00000005 0x54410001 0x00000001 0x00001000 0000000000000110: 0x00000000 0x00000004 0x54410002 0x3c000000 0000000000000120: 0x00000000 0x00000004 0x54410009 0x74736574 0000000000000130: 0x00000000 0x00000000 0x00000000 0x00000000 (qemu) xp /16i 0x100 0x00000100: 00000005 andeq r0, r0, r5 0x00000104: 54410001 strbpl r0, [r1], #-1 0x00000108: 00000001 andeq r0, r0, r1 0x0000010c: 00001000 andeq r1, r0, r0 0x00000110: 00000000 andeq r0, r0, r0 0x00000114: 00000004 andeq r0, r0, r4 0x00000118: 54410002 strbpl r0, [r1], #-2 0x0000011c: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c 0x00000120: 54410001 strbpl r0, [r1], #-1 0x00000124: 00000001 andeq r0, r0, r1 0x00000128: 00001000 andeq r1, r0, r0 0x0000012c: 00000000 andeq r0, r0, r0 0x00000130: 00000004 andeq r0, r0, r4 0x00000134: 54410002 strbpl r0, [r1], #-2 0x00000138: 3c000000 .byte 0x00, 0x00, 0x00, 0x3c 0x0000013c: 00000000 andeq r0, r0, r0 Here the disassembly of 0x120..0x13f is using the data that is in 0x104..0x123. This is caused by passing the wrong value to the read_memory_func(). The intention is that at this point in the loop the 'cap_buf' buffer already contains 'csize' bytes of data for the instruction at guest addr 'pc', and we want to read in an extra 'tsize' bytes. Those extra bytes are therefore at 'pc + csize', not 'pc'. On the first time through the loop 'csize' happens to be zero, so the initial read of 32 bytes into cap_buf is correct and as long as the disassembly never needs to read more data we return the correct information. Use the correct guest address in the call to read_memory_func(). Cc: qemu-stable@nongnu.org Fixes: https://bugs.launchpad.net/qemu/+bug/1900779 Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201022132445.25039-1-peter.maydell@linaro.org
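An illustrative (not verbatim) rendering of the corrected refill logic; 'refill' and its parameters are hypothetical names used only to show the offset fix:

    #include <stdint.h>
    #include <stddef.h>

    /* 'cap_buf' already holds 'csize' bytes of guest code starting at
     * address 'pc'; the next 'tsize' bytes must therefore be read from
     * 'pc + csize', not from 'pc' again. */
    static size_t refill(uint64_t pc, uint8_t *cap_buf, size_t csize, size_t tsize,
                         void (*read_memory)(uint64_t addr, uint8_t *dst, size_t len))
    {
        /* Buggy: read_memory(pc, cap_buf + csize, tsize); */
        read_memory(pc + csize, cap_buf + csize, tsize);   /* fixed */
        return csize + tsize;
    }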
-
Rémi Denis-Courmont authored
Secure mode is not exempted from checking SCR_EL3.TLOR, and in the future HCR_EL2.TLOR when S-EL2 is enabled. Signed-off-by:
Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
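A hedged sketch of the resulting access check, using identifier names as they appear in QEMU's target/arm/helper.c (assumed from the commit description, not verified against the patch): the SCR_EL3.TLOR trap is evaluated for Secure EL1 as well instead of being skipped in Secure state.

    static CPAccessResult access_lor_ng(CPUARMState *env, const ARMCPRegInfo *ri,
                                        bool isread)
    {
        int el = arm_current_el(env);

        /* HCR_EL2.TLOR only matters below EL2 (and, later, with S-EL2). */
        if (el < 2 && (arm_hcr_el2_eff(env) & HCR_TLOR)) {
            return CP_ACCESS_TRAP_EL2;
        }
        /* SCR_EL3.TLOR applies below EL3 regardless of security state. */
        if (el < 3 && (env->cp15.scr_el3 & SCR_TLOR)) {
            return CP_ACCESS_TRAP_EL3;
        }
        return CP_ACCESS_OK;
    }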
-
Rémi Denis-Courmont authored
HCR should be applied when NS is set, not when it is cleared. Signed-off-by:
Rémi Denis-Courmont <remi.denis.courmont@huawei.com> Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
The helper functions for performing the udot/sdot operations against a scalar were not using an address-swizzling macro when converting the index of the scalar element into a pointer into the vm array. This had no effect on little-endian hosts but meant we generated incorrect results on big-endian hosts. For these insns, the index is indexing over groups of four 8-bit values, so 32 bits per indexed entity, and H4() is therefore what we want. (For Neon the only possible input indexes are 0 and 1.) Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-3-peter.maydell@linaro.org
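A fragment-level sketch of the indexing fix (names follow the commit description, not necessarily the complete helper): with four bytes per indexed group, the scalar operand's offset goes through H4().

    /* index comes from the instruction encoding (0 or 1 for Neon). */
    intptr_t index = simd_data(desc);
    /* Each indexed entity is a group of four 8-bit values, i.e. 32 bits,
     * so the host-endian swizzle for the base of that group is H4(). */
    int8_t *m_indexed = (int8_t *)vm + H4(index) * 4;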
-
Peter Maydell authored
In the neon_padd/pmax/pmin helpers for float16, a cut-and-paste error meant we were using the H4() address swizzler macro rather than the H2() which is required for 2-byte data. This had no effect on little-endian hosts but meant we put the result data into the destination Dreg in the wrong order on big-endian hosts. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20201028191712.4910-2-peter.maydell@linaro.org
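For reference, a simplified rendering of the Hn() host-order macros (modeled on QEMU's definitions in the vector helpers; HOST_WORDS_BIGENDIAN is assumed to be QEMU's big-endian-host define): the swizzle granule has to match the element size, H2() for 16-bit data and H4() for 32-bit data.

    #ifdef HOST_WORDS_BIGENDIAN
    #define H1(x)  ((x) ^ 7)   /* 8-bit elements  */
    #define H2(x)  ((x) ^ 3)   /* 16-bit elements */
    #define H4(x)  ((x) ^ 1)   /* 32-bit elements */
    #else
    #define H1(x)  (x)
    #define H2(x)  (x)
    #define H4(x)  (x)
    #endif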
-
Richard Henderson authored
We can use proper widening loads to extend 32-bit inputs, and skip the "widenfn" step. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-12-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
In both cases, we can sink the write-back and perform the accumulate into the normal destination temps. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-11-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
The only uses of this function are for loading VFP double-precision values, and nothing to do with NEON. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-10-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Replace all uses of neon_load/store_reg64 within translate-neon.c.inc. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-9-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
The only uses of this function are for loading VFP single-precision values, and nothing to do with NEON. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-8-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
We can then use this to improve VMOV (scalar to gp) and VMOV (gp to scalar) so that we simply perform the memory operation that we wanted, rather than inserting or extracting from a 32-bit quantity. These were the last uses of neon_load/store_reg, so remove them. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-7-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
Model these on the aa64 read/write_vec_element functions. Use them within translate-neon.c.inc. The new functions do not allocate or free temps, so this rearranges the calling code a bit. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-6-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
This seems a bit more readable than using offsetof CPU_DoubleU. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-5-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
These are the only users of neon_reg_offset, so remove that. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-4-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
This will shortly have users outside of translate-neon.c.inc. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-3-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Richard Henderson authored
This function makes it clear that we're talking about the whole register, and not the 32-bit piece at index 0. This fixes a bug when running on a big-endian host. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20201030022618.785675-2-richard.henderson@linaro.org Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
9pfs: only test case changes this time * Fix occasional test failures with parallel tests. * Fix coverity error in test code. * Avoid error when auto removing test directory if it disappeared for some reason. * Refactor: Rename functions to make top-level test functions fs_*() easily distinguishable from utility test functions do_*(). * Refactor: Drop unnecessary function arguments in utility test functions. * More test cases using the 9pfs 'local' filesystem driver backend, namely for the following 9p requests: Tunlinkat, Tlcreate, Tsymlink and Tlink. # gpg: Signature made Mon 02 Nov 2020 09:31:35 GMT # gpg: using RSA key 96D8D110CF7AF8084F88590134C2B58765A47395 # gpg: issuer "qemu_oss@crudebyte.com" # gpg: Good signature from "Christian Schoenebeck <qemu_oss@crudebyte.com>" [unknown] # gpg: WARNING: This key is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. # Primary key fingerprint: ECAB 1A45 4014 1413 BA38 4926 30DB 47C3 A012 D5F4 # Subkey fingerprint: 96D8 D110 CF7A F808 4F88 5901 34C2 B587 65A4 7395 * remotes/cschoenebeck/tags/pull-9p-20201102: tests/9pfs: add local Tunlinkat hard link test tests/9pfs: add local Tlink test tests/9pfs: add local Tunlinkat symlink test tests/9pfs: add local Tsymlink test tests/9pfs: add local Tunlinkat file test tests/9pfs: add local Tlcreate test tests/9pfs: add local Tunlinkat directory test tests/9pfs: simplify do_mkdir() tests/9pfs: Turn fs_mkdir() into a helper tests/9pfs: Turn fs_readdir_split() into a helper tests/9pfs: Factor out do_attach() helper tests/9pfs: Set alloc in fs_create_dir() tests/9pfs: Factor out do_version() helper tests/9pfs: Force removing of local 9pfs test directory tests/9pfs: fix coverity error in create_local_test_dir() tests/9pfs: fix test dir for parallel tests tests/9pfs: make create/remove test dir public Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
VFIO update 2020-11-01 * Migration support (Kirti Wankhede) * s390 DMA limiting (Matthew Rosato) * zPCI hardware info (Matthew Rosato) * Lock guard (Amey Narkhede) * Print fixes (Zhengui li) * Warning/build fixes # gpg: Signature made Sun 01 Nov 2020 20:38:10 GMT # gpg: using RSA key 239B9B6E3BB08B22 # gpg: Good signature from "Alex Williamson <alex.williamson@redhat.com>" [full] # gpg: aka "Alex Williamson <alex@shazbot.org>" [full] # gpg: aka "Alex Williamson <alwillia@redhat.com>" [full] # gpg: aka "Alex Williamson <alex.l.williamson@gmail.com>" [full] # Primary key fingerprint: 42F6 C04E 540B D1A9 9E7B 8A90 239B 9B6E 3BB0 8B22 * remotes/awilliam/tags/vfio-update-20201101.0: (32 commits) vfio: fix incorrect print type hw/vfio: Use lock guard macros s390x/pci: get zPCI function info from host vfio: Add routine for finding VFIO_DEVICE_GET_INFO capabilities s390x/pci: use a PCI Function structure s390x/pci: clean up s390 PCI groups s390x/pci: use a PCI Group structure s390x/pci: create a header dedicated to PCI CLP s390x/pci: Honor DMA limits set by vfio s390x/pci: Add routine to get the vfio dma available count vfio: Find DMA available capability vfio: Create shared routine for scanning info capabilities s390x/pci: Move header files to include/hw/s390x linux-headers: update against 5.10-rc1 update-linux-headers: Add vfio_zdev.h qapi: Add VFIO devices migration stats in Migration stats vfio: Make vfio-pci device migration capable vfio: Add ioctl to get dirty pages bitmap during dma unmap vfio: Dirty page tracking when vIOMMU is enabled vfio: Add vfio_listener_log_sync to mark dirty pages ... Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
- Nov 01, 2020
-
-
Zhengui Li authored
The type of the input variable is unsigned int while the print format specifier expects int. So fix the incorrect print type. Signed-off-by:
Zhengui li <lizhengui@huawei.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
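The general shape of the fix in plain C (not the vfio code itself):

    #include <stdio.h>

    int main(void)
    {
        unsigned int count = 4294967295u;

        /* Buggy: '%d' reinterprets the unsigned value as signed. */
        printf("count = %d\n", count);   /* typically prints -1 */

        /* Fixed: the conversion specifier matches the unsigned type. */
        printf("count = %u\n", count);   /* prints 4294967295 */
        return 0;
    }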
-
Amey Narkhede authored
Use the QEMU LOCK_GUARD macros in hw/vfio. This saves manual unlock calls. Signed-off-by:
Amey Narkhede <ameynarkhede03@gmail.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
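A brief sketch of the transformation (qemu/lockable.h provides the guard macros; 'vdev->lock' and 'do_work()' are placeholders):

    #include "qemu/lockable.h"

    /* Before: every exit path must remember to unlock. */
    qemu_mutex_lock(&vdev->lock);
    do_work(vdev);
    qemu_mutex_unlock(&vdev->lock);

    /* After: the mutex is released automatically when the block is left,
     * including on early returns. */
    WITH_QEMU_LOCK_GUARD(&vdev->lock) {
        do_work(vdev);
    }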
-
Matthew Rosato authored
We use the capability chains of the VFIO_DEVICE_GET_INFO ioctl to retrieve the CLP information that the kernel exports. To stay compatible with older kernel versions, we fall back on the previously predefined values (the same as the emulation values) when the ioctl does not support capability chains. If individual CLP capabilities are not found, we fall back on default values for only those capabilities missing from the chain. This patch is based on work previously done by Pierre Morel. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> [aw: non-Linux build fixes] Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
Now that VFIO_DEVICE_GET_INFO supports capability chains, add a helper function to find specific capabilities in the chain. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
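A hedged sketch of such a helper, modeled on QEMU's existing region-info capability walk (the function name and the VFIO_DEVICE_FLAGS_CAPS check are assumptions based on the commit description):

    struct vfio_info_cap_header *
    vfio_get_device_info_cap(struct vfio_device_info *info, uint16_t id)
    {
        void *ptr = info;
        struct vfio_info_cap_header *hdr;

        if (!(info->flags & VFIO_DEVICE_FLAGS_CAPS)) {
            return NULL;
        }

        /* 'next' is an offset from the start of the info struct; a zero
         * offset (i.e. hdr == ptr) terminates the chain. */
        for (hdr = ptr + info->cap_offset; hdr != ptr; hdr = ptr + hdr->next) {
            if (hdr->id == id) {
                return hdr;
            }
        }
        return NULL;
    }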
-
Pierre Morel authored
We use a ClpRspQueryPci structure to hold the information related to a zPCI Function. This allows us to be ready to support different zPCI functions and to retrieve the zPCI function information from the host. Signed-off-by:
Pierre Morel <pmorel@linux.ibm.com> Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
Add a step to remove all stashed PCI groups to avoid stale data between machine resets. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Pierre Morel authored
We use a S390PCIGroup structure to hold the information related to a zPCI Function group. This allows us to be ready to support multiple groups and to retrieve the group information from the host. Signed-off-by:
Pierre Morel <pmorel@linux.ibm.com> Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Pierre Morel authored
To have a clean separation between s390-pci-bus.h and s390-pci-inst.h headers we export the PCI CLP instructions in a dedicated header. Signed-off-by:
Pierre Morel <pmorel@linux.ibm.com> Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
When an s390 guest is using lazy unmapping, it can result in a very large number of outstanding DMA requests, far beyond the default limit configured for vfio. Let's track DMA usage similarly to how vfio does in the host, and trigger the guest to flush its DMA mappings before vfio runs out. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> [aw: non-Linux build fixes] Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
Create new files for separating out vfio-specific work for s390 pci. Add the first such routine, which issues the VFIO_IOMMU_GET_INFO ioctl to collect the current DMA available count. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> [aw: Fix non-Linux build with CONFIG_LINUX] Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
The underlying host may be limiting the number of outstanding DMA requests for type 1 IOMMU. Add helper functions to check for the DMA available capability and retrieve the current number of DMA mappings allowed. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> [aw: vfio_get_info_dma_avail moved inside CONFIG_LINUX] Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
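A hedged sketch of such a helper (the capability and structure names follow the Linux 5.10 UAPI; vfio_get_iommu_info_cap() is assumed from the neighbouring "shared routine for scanning info capabilities" patch):

    bool vfio_get_info_dma_avail(struct vfio_iommu_type1_info *info,
                                 unsigned int *avail)
    {
        struct vfio_info_cap_header *hdr;
        struct vfio_iommu_type1_info_dma_avail *cap;

        hdr = vfio_get_iommu_info_cap(info, VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL);
        if (!hdr) {
            return false;   /* no capability reported: assume no DMA limit */
        }

        cap = (void *)hdr;
        if (avail) {
            *avail = cap->avail;
        }
        return true;
    }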
-
Matthew Rosato authored
Rather than duplicating the same loop in multiple locations, create a static function to do the work. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
Seems a more appropriate location for them. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
commit 3650b228f83adda7e5ee532e2b90429c03f7b9ec Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> [aw: drop pvrdma_ring.h changes to avoid revert of d73415a3 ("qemu/atomic.h: rename atomic_ to qatomic_")] Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Matthew Rosato authored
vfio_zdev.h is used by s390x zPCI support to pass device-specific CLP information between host and userspace. Signed-off-by:
Matthew Rosato <mjrosato@linux.ibm.com> Acked-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Kirti Wankhede authored
Added the number of bytes transferred to the VM at the destination by all VFIO devices. Signed-off-by:
Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-
Kirti Wankhede authored
If the device is not a failover primary device, call vfio_migration_probe() and vfio_migration_finalize() to enable migration support for those devices that support it, and to tear it down again, respectively. Remove the migration blocker from the VFIO PCI device-specific structure and use the migration blocker from the generic VFIO device structure. Signed-off-by:
Kirti Wankhede <kwankhede@nvidia.com> Reviewed-by:
Neo Jia <cjia@nvidia.com> Reviewed-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by:
Cornelia Huck <cohuck@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com>
-