- Jun 25, 2021
-
-
Paolo Bonzini authored
In order to make SMP configuration a Machine property, we need a getter as well as a setter. To simplify the implementation, put everything that the getter needs in the CpuTopology struct. Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20210617155308.928754-7-pbonzini@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
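A minimal sketch of the kind of topology state such a struct holds, so the getter can read back exactly what the setter stored; the field list below is an assumption for illustration and the real CpuTopology in hw/boards.h may differ:

```c
/* Illustrative only: the actual CpuTopology layout in hw/boards.h may differ. */
typedef struct CpuTopology {
    unsigned int cpus;     /* number of initially online CPUs */
    unsigned int sockets;  /* sockets in the machine */
    unsigned int cores;    /* cores per socket */
    unsigned int threads;  /* threads per core */
    unsigned int max_cpus; /* upper bound including hotpluggable CPUs */
} CpuTopology;
```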
-
Paolo Bonzini authored
Similar to other handle_aiocb_* functions, handle_aiocb_ioctl needs to cater for the possibility that ioctl is interrupted by a signal. Otherwise, the I/O is incorrectly reported as a failure to the guest. Reported-by:
Gordon Watson <gwatson@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
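The usual fix for this class of bug is an EINTR retry loop around the ioctl; a minimal sketch under that assumption, with 'fd', 'cmd' and 'buf' standing in for the fields the real aiocb carries:

```c
#include <sys/ioctl.h>
#include <errno.h>

/* Retry the ioctl if a signal interrupts it instead of reporting a
 * spurious failure to the guest. */
static int ioctl_eintr_safe(int fd, unsigned long cmd, void *buf)
{
    int ret;

    do {
        ret = ioctl(fd, cmd, buf);
    } while (ret == -1 && errno == EINTR);

    return ret == -1 ? -errno : ret;
}
```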
-
Joelle van Dyne authored
iOS hosts do not have these defined, so we fall back to the default behaviour. Co-authored-by:
Warner Losh <imp@bsdimp.com> Signed-off-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Try all the possible ioctls for disk size as long as they are supported, to keep the #if ladder simple. Extracted and cleaned up from a patch by Joelle van Dyne and Warner Losh. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
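A hedged sketch of the pattern the commit describes: each probe runs only if the earlier ones failed and its ioctl is defined, which keeps the #if ladder flat. The macro names are the usual platform ones; the actual ladder in QEMU may differ.

```c
#include <stdint.h>
#include <sys/ioctl.h>
/* Sketch only: needs <linux/fs.h> on Linux and <sys/disk.h> on the BSDs
 * and macOS for the respective ioctl definitions. */

static int64_t probe_device_size(int fd)
{
    int64_t size = -1;
#if defined(BLKGETSIZE64)                                     /* Linux */
    if (size < 0) {
        uint64_t bytes;
        if (ioctl(fd, BLKGETSIZE64, &bytes) == 0) {
            size = bytes;
        }
    }
#endif
#if defined(DIOCGMEDIASIZE)                                   /* FreeBSD */
    if (size < 0) {
        off_t bytes;
        if (ioctl(fd, DIOCGMEDIASIZE, &bytes) == 0) {
            size = bytes;
        }
    }
#endif
#if defined(DKIOCGETBLOCKCOUNT) && defined(DKIOCGETBLOCKSIZE) /* macOS */
    if (size < 0) {
        uint64_t blocks;
        uint32_t blksize;
        if (ioctl(fd, DKIOCGETBLOCKCOUNT, &blocks) == 0 &&
            ioctl(fd, DKIOCGETBLOCKSIZE, &blksize) == 0) {
            size = (int64_t)(blocks * blksize);
        }
    }
#endif
    return size;
}
```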
-
Joelle van Dyne authored
Some BSD platforms do not have this header. Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Signed-off-by:
Joelle van Dyne <j@getutm.app> Message-Id: <20210315180341.31638-3-j@getutm.app> Reviewed-by:
Max Reitz <mreitz@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Joelle van Dyne authored
On Darwin (iOS), there are no system level APIs for directly accessing host block devices. We detect this at configure time. Signed-off-by:
Joelle van Dyne <j@getutm.app> Message-Id: <20210315180341.31638-2-j@getutm.app> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
bs->sg is only true for character devices, but block devices can also be used with scsi-block and scsi-generic. Unfortunately BLKSECTGET returns bytes in an int for /dev/sgN devices, and sectors in a short for block devices, so account for that in the code. The maximum transfer also need not be a power of 2 (for example I have seen disks with 1280 KiB maximum transfer) so there's no need to pass the result through pow2floor. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
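An illustration of the asymmetry described above, written as a standalone sketch rather than the actual QEMU code in block/file-posix.c: the same ioctl yields bytes in an int on /dev/sgN and 512-byte sectors in a short on block devices.

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKSECTGET */

static int64_t get_max_transfer_bytes(int fd, bool is_sg)
{
    if (is_sg) {
        int max_bytes;                    /* sg driver reports bytes */
        if (ioctl(fd, BLKSECTGET, &max_bytes) == 0) {
            return max_bytes;
        }
    } else {
        unsigned short max_sectors;       /* block layer reports 512-byte sectors */
        if (ioctl(fd, BLKSECTGET, &max_sectors) == 0) {
            return (int64_t)max_sectors * 512;
        }
    }
    return -1;
}
```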
-
Paolo Bonzini authored
For block host devices, I/O can happen through either the kernel file descriptor I/O system calls (preadv/pwritev, io_submit, io_uring) or the SCSI passthrough ioctl SG_IO. In the latter case, the size of each transfer can be limited by the HBA, while for file descriptor I/O the kernel is able to split and merge I/O in smaller pieces as needed. Applying the HBA limits to file descriptor I/O results in more system calls and suboptimal performance, so this patch splits the max_transfer limit in two: max_transfer remains valid and is used in general, while max_hw_transfer is limited to the maximum hardware size. max_hw_transfer can then be included by the scsi-generic driver in the block limits page, to ensure that the stricter hardware limit is used. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
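A sketch of how the new limit can be exposed alongside the existing one, assuming QEMU's MIN_NON_ZERO and ROUND_DOWN helpers; the real blk_get_max_hw_transfer may differ in detail.

```c
/* Sketch: the hardware limit falls back to the generic limit when the
 * driver did not set one, and is still aligned to the request alignment. */
uint64_t blk_get_max_hw_transfer(BlockBackend *blk)
{
    BlockDriverState *bs = blk_bs(blk);
    uint64_t max = INT_MAX;

    if (bs) {
        max = MIN_NON_ZERO(max, bs->bl.max_hw_transfer);
        max = MIN_NON_ZERO(max, bs->bl.max_transfer);
    }
    return ROUND_DOWN(max, blk_get_request_alignment(blk));
}
```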
-
Paolo Bonzini authored
Block device requests must be aligned to bs->bl.request_alignment. It makes sense for drivers to align bs->bl.max_transfer the same way; however when there is no specified limit, blk_get_max_transfer just returns INT_MAX. Since the contract of the function does not specify that INT_MAX means "no maximum", just align the outcome of the function (whether INT_MAX or bs->bl.max_transfer) before returning it. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
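A minimal sketch of the change described, assuming the ROUND_DOWN macro from osdep.h; the surrounding function body is abbreviated and may not match the real one exactly.

```c
uint32_t blk_get_max_transfer(BlockBackend *blk)
{
    BlockDriverState *bs = blk_bs(blk);
    uint32_t max = INT_MAX;

    if (bs) {
        max = MIN_NON_ZERO(max, bs->bl.max_transfer);
    }
    /* INT_MAX is not itself aligned, so align whatever we return. */
    return ROUND_DOWN(max, blk_get_request_alignment(blk));
}
```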
-
Paolo Bonzini authored
osdep.h provides a ROUND_UP macro to hide bitwise operations for the purpose of rounding a number up to a multiple of a power of two; add a ROUND_DOWN macro that does the same with truncation towards zero. While at it, change the formatting of some comments. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
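A hedged sketch of what such a pair of macros typically looks like; QEMU's actual definitions in osdep.h also guard against integer-promotion pitfalls, which this simplified version omits.

```c
/* d must be a power of two; ROUND_DOWN truncates towards zero. */
#define ROUND_UP(n, d)    (((n) + (d) - 1) & -(d))
#define ROUND_DOWN(n, d)  ((n) & -(d))

/* Example: ROUND_UP(1000, 512) == 1024, ROUND_DOWN(1000, 512) == 512. */
```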
-
Paolo Bonzini authored
I/O to a disk via read/write is not limited by the number of segments allowed by the host adapter; the kernel can split requests if needed, and the limit imposed by the host adapter can be very low (256k or so) to avoid SG_IO returning EINVAL if memory is heavily fragmented. Since this value is only interesting for SG_IO-based I/O, do not include it in max_transfer and only take it into account when patching the block limits VPD page in the scsi-generic device. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Reviewed-by:
Max Reitz <mreitz@redhat.com>
-
Paolo Bonzini authored
Even though it was only called for devices that have bs->sg set (which must be character devices), sg_get_max_segments looked at /sys/dev/block which only works for block devices. On Linux the sg driver has its own way to provide the maximum number of iovecs in a scatter/gather list, so add support for it. The block device path is kept because it will be reinstated in the next patches. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Reviewed-by:
Max Reitz <mreitz@redhat.com>
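The sg driver's own interface mentioned here is an ioctl on the /dev/sgN file descriptor; a minimal sketch under that assumption, with error handling abbreviated:

```c
#include <sys/ioctl.h>
#include <scsi/sg.h>    /* SG_GET_SG_TABLESIZE */

/* Ask the sg driver itself for the maximum scatter/gather table size
 * instead of poking around /sys/dev/block, which only exists for
 * block devices. */
static int sg_get_max_segments(int fd)
{
    int max_segments;

    if (ioctl(fd, SG_GET_SG_TABLESIZE, &max_segments) == 0) {
        return max_segments;
    }
    return -1;
}
```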
-
Peter Xu authored
Found this when I wanted to try the per-vcpu dirty rate series out, then I found that it's not really working and it can quickly hang a guest. I found that strange errors (e.g. guest crash after migration) happen even without the per-vcpu dirty rate series. When merging dirty ring, probably no one noticed that the trivial renaming diff [1] missed two existing references of kvm_dirty_ring_sizes; they do matter since otherwise we'll mmap() a shorter range of memory after the renaming. I think it didn't SIGBUS for me easily simply because some other stuff within qemu mmap()ed right after the dirty rings (e.g. when testing 4096 slots, it aligned with one small page on x86), so when we access the rings we've been reading/writing random memory elsewhere in qemu. Fix the two sizes when mapping/unmapping the shared dirty gfn memory. [1] https://lore.kernel.org/qemu-devel/dac5f0c6-1bca-3daf-e5d2-6451dbbaca93@redhat.com/ Cc: Hyman Huang <huangy81@chinatelecom.cn> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by:
Peter Xu <peterx@redhat.com> Message-Id: <20210609014355.217110-1-peterx@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
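An illustrative sketch of the count-vs-bytes distinction the fix restores; the function, parameter and variable names are placeholders, while struct kvm_dirty_gfn and KVM_DIRTY_LOG_PAGE_OFFSET come from the KVM uapi headers.

```c
#include <stddef.h>
#include <sys/mman.h>
#include <linux/kvm.h>   /* struct kvm_dirty_gfn, KVM_DIRTY_LOG_PAGE_OFFSET */

/* ring_entries, vcpu_fd and page_size stand in for the per-vCPU state; the
 * point is that mmap() (and the later munmap()) must use the ring size in
 * bytes, not the entry count. */
static void *map_dirty_ring(int vcpu_fd, size_t ring_entries, size_t page_size)
{
    size_t ring_bytes = ring_entries * sizeof(struct kvm_dirty_gfn);

    return mmap(NULL, ring_bytes, PROT_READ | PROT_WRITE, MAP_SHARED,
                vcpu_fd, (off_t)KVM_DIRTY_LOG_PAGE_OFFSET * page_size);
}
```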
-
Paolo Bonzini authored
Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Make it depend on gnutls too, since it is only used as part of gnutls tests. Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
meson.build already decides whether it is possible to build the TLS test suite. There is no need to include that in the source as well. The dummy tests in fact are broken because they do not produce valid TAP output (empty output is rejected by scripts/tap-driver.pl). Cc: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Meson is more verbose than the configure script; the outcome of the preadv test can be found in its output and it is not worth including it again in the summary. Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
All XTS configuration uses qemu_private_xts. Drop other variables as they have only ever been used to generate the summary (which has since been moved to meson.build). Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
CONFIG_GCRYPT_HMAC has been removed now that all supported distros have it. Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Linux 5.14 will add support for nested TSC scaling. Add the corresponding feature in QEMU; to keep support for existing kernels, do not add it to any processor yet. The handling of the VMCS enumeration MSR is ugly; once we have more than one case, we may want to add a table to check VMX features against. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- Jun 24, 2021
-
-
Peter Maydell authored
target-arm queue:
 * Don't require 'virt' board to be compiled in for ACPI GHES code
 * docs: Document which architecture extensions we emulate
 * Fix bugs in M-profile FPCXT_NS accesses
 * First slice of MVE patches
 * Implement MTE3
 * docs/system: arm: Add nRF boards description
# gpg: Signature made Thu 24 Jun 2021 14:59:16 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20210624: (57 commits)
  docs/system: arm: Add nRF boards description
  target/arm: Implement MTE3
  target/arm: Make VMOV scalar <-> gpreg beatwise for MVE
  target/arm: Implement MVE VADDV
  target/arm: Implement MVE VHCADD
  target/arm: Implement MVE VCADD
  target/arm: Implement MVE VADC, VSBC
  target/arm: Implement MVE VRHADD
  target/arm: Implement MVE VQDMULL (vector)
  target/arm: Implement MVE VQDMLSDH and VQRDMLSDH
  target/arm: Implement MVE VQDMLADH and VQRDMLADH
  target/arm: Implement MVE VRSHL
  target/arm: Implement MVE VSHL insn
  target/arm: Implement MVE VQRSHL
  target/arm: Implement MVE VQSHL (vector)
  target/arm: Implement MVE VQADD, VQSUB (vector)
  target/arm: Implement MVE VQDMULH, VQRDMULH (vector)
  target/arm: Implement MVE VQDMULL scalar
  target/arm: Implement MVE VQDMULH and VQRDMULH (scalar)
  target/arm: Implement MVE VQADD and VQSUB
  ...
Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Alexandre Iooss authored
This adds the target guide for BBC Micro:bit. Information is taken from https://wiki.qemu.org/Features/MicroBit and from hw/arm/nrf51_soc.c. Signed-off-by:
Alexandre Iooss <erdnaxe@crans.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by:
Joel Stanley <joel@jms.id.au> Message-id: 20210621075625.540471-1-erdnaxe@crans.org Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Peter Collingbourne authored
MTE3 introduces an asymmetric tag checking mode, in which loads are checked synchronously and stores are checked asynchronously. Add support for it. Signed-off-by:
Peter Collingbourne <pcc@google.com> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210616195614.11785-1-pcc@google.com [PMM: Add line to emulation.rst] Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
-
Peter Maydell authored
In a CPU with MVE, the VMOV (vector lane to general-purpose register) and VMOV (general-purpose register to vector lane) insns are not predicated, but they are subject to beatwise execution if they are not in an IT block. Since our implementation always executes all 4 beats in one tick, this means only that we need to handle PSR.ECI: * we must do the usual check for bad ECI state * we must advance ECI state if the insn succeeds * if ECI says we should not be executing the beat corresponding to the lane of the vector register being accessed then we should skip performing the move Note that if PSR.ECI is non-zero then we cannot be in an IT block. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-45-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VADDV insn, which performs an addition across vector lanes. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-44-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VHCADD insn, which is similar to VCADD but performs a halving step. This one overlaps with VADC. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-43-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VCADD insn, which performs a complex add with rotate. Note that the size=0b11 encoding is VSBC. The architecture grants some leeway for the "destination and Vm source overlap" case for the size MO_32 case, but we choose not to make use of it, instead always calculating all 16 bytes worth of results before setting the destination register. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-42-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VADC and VSBC insns. These perform an add-with-carry or subtract-with-carry of the 32-bit elements in each lane of the input vectors, where the carry-out of each add is the carry-in of the next. The initial carry input is either 1 or is from FPSCR.C; the carry out at the end is written back to FPSCR.C. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-41-peter.maydell@linaro.org
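A conceptual sketch in plain C (not the QEMU helper) of the lane-wise carry chain described above, for the add case:

```c
#include <stdint.h>

/* d, n, m are the four 32-bit lanes of destination and sources; carry_in is
 * the fixed initial value or FPSCR.C as described above. The returned
 * carry-out is what gets written back to FPSCR.C. */
static uint32_t vadc_lanes(uint32_t d[4], const uint32_t n[4],
                           const uint32_t m[4], uint32_t carry_in)
{
    uint32_t carry = carry_in;

    for (int e = 0; e < 4; e++) {
        uint64_t r = (uint64_t)n[e] + m[e] + carry;
        d[e] = (uint32_t)r;
        carry = (uint32_t)(r >> 32);
    }
    return carry;
}
```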
-
Peter Maydell authored
Implement the MVE VRHADD insn, which performs a rounded halving addition. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-40-peter.maydell@linaro.org
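The rounded halving addition per lane is the usual (a + b + 1) >> 1 computed in a wider type; an illustrative C expression for the unsigned byte case:

```c
#include <stdint.h>

/* Rounded halving add of two unsigned byte lanes; widen first so the
 * carry of a + b + 1 is not lost. */
static inline uint8_t rhadd_u8(uint8_t a, uint8_t b)
{
    return (uint8_t)(((unsigned)a + b + 1) >> 1);
}
```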
-
Peter Maydell authored
Implement the vector form of the MVE VQDMULL insn. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-39-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VQDMLSDH and VQRDMLSDH insns, which are like VQDMLADH and VQRDMLADH except that products are subtracted rather than added. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-38-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VQDMLADH and VQRDMLADH insns. These multiply elements, and then add pairs of products, double, possibly round, saturate and return the high half of the result. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-37-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VRSHL insn (vector form). Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-36-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VSHL insn (vector form). Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-35-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VQRSHL (vector) insn. Again, the code to perform the actual shifts is borrowed from neon_helper.c. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-34-peter.maydell@linaro.org
-
Peter Maydell authored
Implement the MVE VQSHL insn (encoding T4, which is the vector-shift-by-vector version). The DO_SQSHL_OP and DO_UQSHL_OP macros here are derived from the neon_helper.c code for qshl_u{8,16,32} and qshl_s{8,16,32}. Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-id: 20210617121628.20116-33-peter.maydell@linaro.org
-