- Dec 10, 2021
-
-
Maxim Levitsky authored
Use the KVM_GUESTDBG_BLOCKIRQ debug flag if supported.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
[Extracted from Maxim's patch into a separate commit. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211111110604.207376-6-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
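For context, a minimal hedged sketch of how a VMM might probe for and request this flag through the Linux KVM UAPI. The vm_fd/vcpu_fd plumbing is assumed, and this is illustrative rather than QEMU's actual code; KVM_CAP_SET_GUEST_DEBUG2 and KVM_GUESTDBG_BLOCKIRQ need suitably recent kernel headers.

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Request single-stepping and, if the kernel supports it, block interrupt
     * injection while stepping (KVM_GUESTDBG_BLOCKIRQ, advertised through
     * KVM_CAP_SET_GUEST_DEBUG2). */
    static int set_singlestep_blockirq(int vm_fd, int vcpu_fd)
    {
        struct kvm_guest_debug dbg = {
            .control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP,
        };
        int caps = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_SET_GUEST_DEBUG2);

        /* KVM_CAP_SET_GUEST_DEBUG2 returns a bitmask of supported control bits. */
        if (caps > 0 && (caps & KVM_GUESTDBG_BLOCKIRQ)) {
            dbg.control |= KVM_GUESTDBG_BLOCKIRQ;
        }
        return ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg);
    }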
-
Maxim Levitsky authored
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
[Extracted from Maxim's patch into a separate commit. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20211111110604.207376-5-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Nov 29, 2021
-
-
Alex Bennée authored
When we set cpu->cflags_next_tb it is because we want to carefully control the execution of the next TB. Currently there is a race that causes the second stage of watchpoint handling to get ignored if an IRQ is processed before we finish executing the instruction that triggers the watchpoint. Use the new CF_NOIRQ facility to avoid the race. We also suppress IRQs when handling precise self-modifying code to avoid unnecessary bouncing.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru>
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/245
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211129140932.4115115-3-alex.bennee@linaro.org>
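A rough sketch of the mechanism, assuming QEMU's internal cflags API (CF_NOIRQ, curr_cflags(), cflags_next_tb); illustrative only, not the patch itself.

    #include "qemu/osdep.h"
    #include "hw/core/cpu.h"      /* CPUState, cflags_next_tb */
    #include "exec/exec-all.h"    /* CF_NOIRQ, curr_cflags() */

    /* Pin the next TB to a single instruction with interrupt processing
     * suppressed, so nothing can preempt the second stage of watchpoint
     * handling before it runs. */
    static void rerun_one_insn_without_irq(CPUState *cpu)
    {
        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
        /* ...the caller then exits back to the main execution loop, which
         * picks up cflags_next_tb when it translates the next TB. */
    }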
-
- Nov 16, 2021
-
-
Paolo Bonzini authored
dlopen is never used after it is sought via cc.find_library, because plugins use gmodule instead; remove the test.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20211110092454.30916-1-pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211115142915.3797652-5-alex.bennee@linaro.org>
-
- Nov 10, 2021
-
-
Greg Kurz authored
A TCG vCPU doing a busy loop systematically hangs the QEMU monitor if the user passes 'device_add' without argument. This is because drain_cpu_all(), which is called from qmp_device_add(), cannot return if readers don't exit read-side critical sections. That is typically what busy-looping TCG vCPUs do:

    int cpu_exec(CPUState *cpu)
    {
        [...]
        rcu_read_lock();
        [...]
        while (!cpu_handle_exception(cpu, &ret)) {
            // Busy loop keeps vCPU here
        }
        [...]
        rcu_read_unlock();
        return ret;
    }

For MTTCG, have all vCPU threads register a force_rcu notifier that will kick them out of the loop using async_run_on_cpu(). The notifier is called with the rcu_registry_lock mutex held; using async_run_on_cpu() ensures there are no deadlocks. For RR, a single thread runs all vCPUs. Just register a single notifier that kicks the current vCPU to the next one.
For MTTCG: Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
For RR: Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Fixes: 7bed8995 ("device_core: use drain_call_rcu in in qmp_device_add")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/650
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211109183523.47726-3-groug@kaod.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
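A sketch of the MTTCG side of the idea. The notifier registration helpers (rcu_add_force_rcu_notifier) are assumed from the same series, so treat this as illustrative rather than the patch itself.

    #include "qemu/osdep.h"
    #include "qemu/notify.h"
    #include "qemu/rcu.h"
    #include "hw/core/cpu.h"

    typedef struct ForceRcuNotifier {
        Notifier notifier;
        CPUState *cpu;
    } ForceRcuNotifier;

    static void do_nothing(CPUState *cpu, run_on_cpu_data data)
    {
    }

    /* Called with rcu_registry_lock held; async_run_on_cpu() queues work and
     * kicks the vCPU, forcing it out of its read-side critical section
     * without risking a deadlock. */
    static void force_rcu(Notifier *notifier, void *data)
    {
        CPUState *cpu = container_of(notifier, ForceRcuNotifier, notifier)->cpu;

        async_run_on_cpu(cpu, do_nothing, RUN_ON_CPU_NULL);
    }

    /* Each MTTCG vCPU thread registers one of these around its execution loop. */
    static void register_force_rcu(ForceRcuNotifier *n, CPUState *cpu)
    {
        n->notifier.notify = force_rcu;
        n->cpu = cpu;
        rcu_add_force_rcu_notifier(&n->notifier);
    }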
-
- Nov 04, 2021
-
-
Alex Bennée authored
Currently we assume that the guest frontend loads all opcode bytes sequentially. This mostly holds for regular fixed encodings, but some architectures such as s390x like to re-read the instruction, which causes weirdness. Rather than changing the frontends, make the plugin API a little more ergonomic and able to handle the re-read case. Things will still get strange if we read ahead of the opcode, but so far no frontends have done that, and this patch asserts the case so we can catch it early if they do.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211026102234.3961636-21-alex.bennee@linaro.org>
-
- Nov 02, 2021
-
-
Daniel P. Berrangé authored
This is a counterpart to the HMP "info opcount" command. It is being added with an "x-" prefix because this QMP command is intended as an ad hoc debugging tool and will thus not be modelled in QAPI as fully structured data, nor will it have long term guaranteed stability. The existing HMP command is rewritten to call the QMP command.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Daniel P. Berrangé authored
This is a counterpart to the HMP "info jit" command. It is being added with an "x-" prefix because this QMP command is intended as an ad hoc debugging tool and will thus not be modelled in QAPI as fully structured data, nor will it have long term guaranteed stability. The existing HMP command is rewritten to call the QMP command.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
-
Alexander Graf authored
HVF has generic memory listener code that adds all RAM regions as HVF RAM regions. However, HVF can only handle page-aligned, page-granule regions. So let's ignore regions that are not page-aligned and page-sized. They will be trapped as MMIO instead.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211025132147.28308-1-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
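The check amounts to something like the sketch below; QEMU_IS_ALIGNED, int128_get64 and the qemu_real_host_page_size symbol are assumptions about the surrounding code, not a quote of the patch.

    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* Only page-aligned, page-sized sections can be handed to HVF as RAM;
     * anything else falls back to MMIO emulation. */
    static bool hvf_section_mappable(MemoryRegionSection *section)
    {
        return QEMU_IS_ALIGNED(int128_get64(section->size),
                               qemu_real_host_page_size) &&
               QEMU_IS_ALIGNED(section->offset_within_address_space,
                               qemu_real_host_page_size);
    }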
-
Richard Henderson authored
To be called from TCG-generated code on hosts that support unaligned accesses natively, in response to an access that is supposed to be aligned.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
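A hedged sketch of the intended call pattern. The signature is inferred from the description (CPU, faulting guest address, access type, host return address) and the header location is assumed, so the real declaration may differ.

    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "hw/core/cpu.h"      /* MMUAccessType */
    #include "exec/exec-all.h"    /* cpu_loop_exit_sigbus() (assumed location) */

    /* On a host that silently tolerates unaligned loads, an access that the
     * guest ISA requires to be aligned must still raise SIGBUS explicitly. */
    static void check_required_alignment(CPUState *cs, target_ulong addr,
                                         int size, MMUAccessType access_type,
                                         uintptr_t retaddr)
    {
        if (unlikely(addr & (size - 1))) {
            cpu_loop_exit_sigbus(cs, addr, access_type, retaddr);  /* noreturn */
        }
    }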
-
Richard Henderson authored
Use the new cpu_loop_exit_sigbus for cpu_mmu_lookup.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Use the new cpu_loop_exit_sigbus for atomic_mmu_lookup, which has access to complete alignment info from the TCGMemOpIdx arg.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
This is a new interface to be provided by the OS emulator for raising SIGSEGV on fault. Use the new record_sigsegv target hook.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
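As an illustration of what such a per-host header boils down to, here is a hedged sketch for an x86_64 Linux host; the host arch covered by this particular commit is not stated, and QEMU's real header may differ in detail.

    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <stdint.h>
    #include <signal.h>
    #include <ucontext.h>

    /* Where did the host fault? */
    static inline uintptr_t host_signal_pc(ucontext_t *uc)
    {
        return uc->uc_mcontext.gregs[REG_RIP];
    }

    /* Was it a write?  Bit 1 of the x86 page-fault error code says so. */
    static inline bool host_signal_write(siginfo_t *info, ucontext_t *uc)
    {
        return uc->uc_mcontext.gregs[REG_ERR] & 0x2;
    }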
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c. Drop the *BSD code, to be re-created under bsd-user/ later.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c. Drop the *BSD code, to be re-created under bsd-user/ later.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c. Drop the *BSD code, to be re-created under bsd-user/ later. Drop the Solaris code as completely unused.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c. Drop the *BSD code, to be re-created under bsd-user/ later.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split host_signal_pc and host_signal_write out of user-exec.c. Drop the *BSD code, to be re-created under bsd-user/ later.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Nov 01, 2021
-
-
Hyman Huang authored
dirty_pages is used to calculate the dirty rate via the dirty ring; when enabled, the kvm-reaper increases the dirty page count after gfns are dirtied. kvm_dirty_ring_enabled shows whether the kvm-reaper is working, so the dirtyrate thread can use it to check whether the measurement can be based on the dirty ring feature.
Signed-off-by: Hyman Huang(黄勇) <huangy81@chinatelecom.cn>
Message-Id: <fee5fb2ab17ec2159405fc54a3cff8e02322f816.1624040308.git.huangy81@chinatelecom.cn>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
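A minimal sketch of how the dirtyrate code might gate on this; the helper name comes from the commit message, while the kvm_enabled() guard and the header location are assumptions.

    #include "qemu/osdep.h"
    #include "sysemu/kvm.h"    /* kvm_enabled(), kvm_dirty_ring_enabled() */

    /* Dirty-ring based sampling only makes sense when KVM is in use and the
     * dirty-ring reaper is actually running. */
    static bool dirtyrate_can_use_dirty_ring(void)
    {
        return kvm_enabled() && kvm_dirty_ring_enabled();
    }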
-
- Oct 30, 2021
-
-
Richard Henderson authored
Remove the comment about siglongjmp. We do use sigsetjmp in the main cpu loop, but we do not save the signal mask, as most exits from the cpu loop do not require it.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
This is the major portion of handle_cpu_signal which is specific to TCG, handling the page protections for the translations. Most of the rest will migrate to linux-user/ shortly.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
--- v2: Pass guest address to handle_sigsegv_accerr_write.
-
Richard Henderson authored
Currently there are only two places that require we reset this value before exiting to the main loop, but that will change.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Split out a function to adjust the raw signal pc into a value that could be passed to cpu_restore_state.
Reviewed-by: Warner Losh <imp@bsdimp.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
--- v2: Adjust pc in place; return MMUAccessType.
-
- Oct 15, 2021
-
-
Richard Henderson authored
Currently the change in cpu_tb_exec is masked by the debug exception being raised by the translators. But this allows us to remove that code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Oct 13, 2021
-
-
Richard Henderson authored
These functions have been replaced by cpu_*_mmu as the proper interface to use from target code. Hide these declarations from code that should not use them.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
These functions are much closer to the softmmu helper functions, in that they take the complete MemOpIdx, and from that they may enforce required alignment. The previous cpu_ldst.h functions did not have alignment info, and so did not enforce it. Retain this by adding MO_UNALN to the MemOp that we create in calling the new functions. Note that we are not yet enforcing alignment for user-only, but we now have the information with which to do so.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
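A hedged sketch of a target helper using the new interface, with the alignment requirement carried in the MemOpIdx; the exact function names and header locations are assumptions based on the description.

    #include "qemu/osdep.h"
    #include "cpu.h"
    #include "exec/cpu_ldst.h"
    #include "exec/memopidx.h"

    /* Unlike the old cpu_ldl_*() helpers, the MemOpIdx carries MO_ALIGN, so
     * the load itself can enforce the alignment requirement. */
    static uint32_t load_u32_aligned(CPUArchState *env, abi_ptr addr,
                                     int mmu_idx, uintptr_t retaddr)
    {
        MemOpIdx oi = make_memop_idx(MO_LEUL | MO_ALIGN, mmu_idx);

        return cpu_ldl_le_mmu(env, addr, oi, retaddr);
    }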
-
Alexander Graf authored
We can handle up to a static number of memory slots, capped by the size of an internal array. Let's make sure that the array size is the only source of truth for the number of elements in that array.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20211008054616.43828-1-agraf@csgraf.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
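The pattern in question, as a generic hedged sketch; the structure and names here are hypothetical, not HVF's actual slot bookkeeping.

    #include "qemu/osdep.h"    /* ARRAY_SIZE() */

    typedef struct ExampleSlot {
        uint64_t start;
        uint64_t size;
    } ExampleSlot;

    static ExampleSlot slots[32];

    static ExampleSlot *find_free_slot(void)
    {
        /* The array declaration above is the single source of truth; there is
         * no separate "number of slots" constant to drift out of sync. */
        for (size_t i = 0; i < ARRAY_SIZE(slots); i++) {
            if (slots[i].size == 0) {
                return &slots[i];
            }
        }
        return NULL;
    }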
-
Philippe Mathieu-Daudé authored
SEV is x86-specific; there is no need to add its stub to other architectures. Move the stub file to target/i386/kvm/.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20211007161716.453984-5-philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Oct 12, 2021
-
-
Alex Bennée authored
Coverity doesn't know enough about how we have arranged our plugin TCG ops to know we will always have incremented insn_idx before injecting the callback. Let us assert it for the benefit of Coverity and protect ourselves from accidentally breaking the assumption and triggering harder-to-grok errors deeper in the code if we attempt a negative-indexed array lookup. However, to get to this point we refactor the code and remove the second-hand instruction boundary detection in favour of scanning the full set of ops and using the existing INDEX_op_insn_start to cleanly detect when the instruction has started. As we no longer need the plugin-specific list of ops, we delete that. My initial benchmarks show no discernible impact of dropping the plugin-specific ops list.
Fixes: Coverity 1459509
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20210917162332.3511179-12-alex.bennee@linaro.org>
-
- Oct 05, 2021
-
-
Richard Henderson authored
There is no point in encoding load/store within a bit of the memory trace info operand. Represent atomic operations as a single read-modify-write tracepoint. Use MemOpIdx instead of inventing a form specifically for traces.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Use the MemOpIdx directly, rather than the rearrangement of the same bits currently done by the trace infrastructure. Pass in enum qemu_plugin_mem_rw so that we are able to treat read-modify-write operations as a single operation.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
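From the plugin-facing side, the read/write classification is the qemu_plugin_mem_rw enum; below is a minimal hedged example of subscribing to both directions and asking whether an access was a store. It uses the standard public qemu-plugin.h API, not code from this commit.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <qemu-plugin.h>

    QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

    static void mem_cb(unsigned int vcpu_index, qemu_plugin_meminfo_t info,
                       uint64_t vaddr, void *udata)
    {
        /* With the RMW change, an atomic access arrives once, flagged as both
         * a read and a write, instead of as two separate events. */
        if (qemu_plugin_mem_is_store(info)) {
            /* ...count a store at vaddr... */
        }
    }

    static void tb_trans_cb(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
    {
        size_t n = qemu_plugin_tb_n_insns(tb);

        for (size_t i = 0; i < n; i++) {
            struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, i);

            qemu_plugin_register_vcpu_mem_cb(insn, mem_cb, QEMU_PLUGIN_CB_NO_REGS,
                                             QEMU_PLUGIN_MEM_RW, NULL);
        }
    }

    QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
                                               const qemu_info_t *info,
                                               int argc, char **argv)
    {
        qemu_plugin_register_vcpu_tb_trans_cb(id, tb_trans_cb);
        return 0;
    }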
-
Richard Henderson authored
We will shortly use the MemOpIdx directly, but in the meantime re-compute the trace meminfo.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We (will) often have the complete MemOpIdx handy, so use that.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We're about to move this out of tcg.h, so rename it as we did when moving MemOp.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We are already inconsistent about whether or not MO_SIGN is set in trace_mem_get_info. Dropping it entirely allows some simplification.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
-
- Sep 30, 2021
-
-
Peter Xu authored
Provide a name field for all the memory listeners. It can be used to identify which memory listener is which.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210817013553.30584-2-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
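A minimal hedged sketch of the new field in use; the listener name, the callback, and the choice of address space are hypothetical.

    #include "qemu/osdep.h"
    #include "exec/memory.h"
    #include "exec/address-spaces.h"

    static void example_region_add(MemoryListener *listener,
                                   MemoryRegionSection *section)
    {
        /* react to a newly added region */
    }

    static MemoryListener example_listener = {
        .name = "example",                 /* the new identification field */
        .region_add = example_region_add,
    };

    static void example_listener_init(void)
    {
        memory_listener_register(&example_listener, &address_space_memory);
    }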
-