- Jan 07, 2021
-
-
Richard Henderson authored
Now that all native tcg hosts support splitwx, remove the define. Replace the one use with a test for CONFIG_TCG_INTERPRETER. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Re-use the 256MiB region handling from alloc_code_gen_buffer_anon, and replace that with the shared file mapping. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
This produces a small pc-relative displacement within the generated code to the TB structure that precedes it. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
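As background for the remark about the TB structure preceding its code, here is a conceptual sketch (not the actual QEMU code) of how a code pointer plus a small displacement known at translation time is enough to recover the TB:

```c
#include <stddef.h>

typedef struct TranslationBlock TranslationBlock;

/* Conceptual sketch only: because the TB header sits directly in front of
 * the code generated for it, generated code can refer to its own TB via a
 * small negative, pc-relative offset rather than a 64-bit absolute pointer. */
static inline const TranslationBlock *tb_from_code_ptr(const void *tc_ptr,
                                                       ptrdiff_t disp)
{
    /* 'disp' is the (negative) displacement emitted into the code. */
    return (const TranslationBlock *)((const char *)tc_ptr + disp);
}
```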
-
Richard Henderson authored
Cribbed from code posted by Joelle van Dyne <j@getutm.app>, and rearranged to a cleaner structure. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
We cannot use a real temp file, because we would need to find a filesystem that does not have noexec enabled. However, a memfd is not associated with any filesystem. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
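A minimal sketch of the dual-mapping idea described above, assuming a Linux host with memfd_create(2); function name and error handling are simplified and are not QEMU's actual implementation:

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stddef.h>

/* Map one memfd twice: a writable view for the translator and an
 * executable view for running the generated code.  Because the memfd is
 * not backed by any filesystem, a mount-level noexec flag cannot apply. */
static int alloc_splitwx_buffer(size_t size, void **rw, void **rx)
{
    int fd = memfd_create("tcg-jit", 0);
    if (fd < 0) {
        return -1;
    }
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return -1;
    }

    *rw = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    *rx = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);

    close(fd);  /* both mappings keep the memory alive */
    return (*rw == MAP_FAILED || *rx == MAP_FAILED) ? -1 : 0;
}
```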
-
Richard Henderson authored
Plumb the value through to alloc_code_gen_buffer. This is not supported by any os or tcg backend, so for now enabling it will result in an error. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
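A sketch of the interim state this patch describes (parameter name and message are illustrative, not the exact QEMU code): the split-wx request is plumbed down to the allocator, which rejects it until OS and backend support land later in the series.

```c
/* Illustrative only: split-wx reaches the allocator but is still refused. */
static bool alloc_code_gen_buffer(size_t size, int splitwx, Error **errp)
{
    if (splitwx > 0) {
        error_setg(errp, "jit split-wx is not supported");
        return false;
    }
    /* ... fall through to the existing anonymous-mmap allocation ... */
    return true;
}
```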
-
Richard Henderson authored
Report better error messages than just "could not allocate". Let alloc_code_gen_buffer set ctx->code_gen_buffer_size and ctx->code_gen_buffer, and simply return a bool. Reviewed-by:
Joelle van Dyne <j@getutm.app> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
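A hedged sketch of the reworked contract: instead of handing back a buffer pointer and leaving the caller with a generic "could not allocate" message, the allocator fills in the context and reports a precise error. Field and helper names are assumptions based on the commit message.

```c
static bool alloc_code_gen_buffer_anon(size_t size, int prot, int flags,
                                       Error **errp)
{
    void *buf = mmap(NULL, size, prot, flags, -1, 0);

    if (buf == MAP_FAILED) {
        /* The caller now gets the real errno, not "could not allocate". */
        error_setg_errno(errp, errno,
                         "allocate %zu bytes for jit buffer", size);
        return false;
    }

    tcg_ctx->code_gen_buffer = buf;
    tcg_ctx->code_gen_buffer_size = size;
    return true;
}
```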
-
Richard Henderson authored
There is nothing within the translators that ought to be changing the TranslationBlock data, so make it const. This does not actually use the read-only copy of the data structure that exists within the rx region. Reviewed-by:
Joelle van Dyne <j@getutm.app> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Richard Henderson authored
Pass both rx and rw addresses to tb_target_set_jmp_target. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
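The point of carrying both addresses is that, with a split rw/rx buffer, the patch must be written through the writable alias while the instruction cache is maintained at the executable alias. A hedged sketch (parameter order assumed; make_branch_insn is a hypothetical helper standing in for the backend's encoder):

```c
/* Sketch: patch a direct branch in a translated block.  'jmp_rw' is where
 * we can write, 'jmp_rx' is where the CPU will fetch from. */
void tb_target_set_jmp_target(uintptr_t tc_ptr, uintptr_t jmp_rx,
                              uintptr_t jmp_rw, uintptr_t addr)
{
    uint32_t insn = make_branch_insn(jmp_rx, addr); /* hypothetical encoder */

    qatomic_set((uint32_t *)jmp_rw, insn);          /* store via rw alias */
    flush_idcache_range(jmp_rx, jmp_rw, 4);         /* sync caches at rx  */
}
```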
-
Richard Henderson authored
Add two helper functions, using a global variable to hold the displacement. The displacement is currently always 0, so no change in behaviour. Begin using the functions in tcg common code only. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
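A sketch of the shape of the two helpers and the global displacement described above (shown as an illustration, not verbatim QEMU code). While the displacement is zero, both conversions are identities, which is why behaviour does not change yet.

```c
/* Displacement between the writable (rw) and executable (rx) views of the
 * code buffer; 0 until split-wx is actually enabled. */
extern uintptr_t tcg_splitwx_diff;

/* Convert a pointer into the rw view to the matching rx address. */
static inline const void *tcg_splitwx_to_rx(void *rw)
{
    return (const char *)rw + tcg_splitwx_diff;
}

/* Convert a pointer into the rx view back to the writable address. */
static inline void *tcg_splitwx_to_rw(const void *rx)
{
    return (void *)((uintptr_t)rx - tcg_splitwx_diff);
}
```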
-
Richard Henderson authored
Create a function to determine if a pointer is within the buffer. Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
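A sketch of such a predicate (context and field names assumed). The unsigned subtraction deliberately accepts a pointer one byte past the end of the buffer, mirroring the usual C array convention:

```c
static inline bool in_code_gen_buffer(const void *p)
{
    const TCGContext *s = &tcg_init_ctx;

    /* Unsigned subtraction: anything below the buffer wraps to a huge
     * value, so a single comparison covers both bounds. */
    return (size_t)((const char *)p - (const char *)s->code_gen_buffer)
           <= s->code_gen_buffer_size;
}
```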
-
Richard Henderson authored
This value is constant across all thread-local copies of TCGContext, so we might as well move it out of thread-local storage. Reviewed-by:
Joelle van Dyne <j@getutm.app> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
- Jan 04, 2021
-
-
Richard Henderson authored
In f47db80c, we handled odd-sized tail clearing for the case of hosts that have vector operations, but did not handle the case of hosts that do not have vector ops. This was ok until e2e7168a, which changed the encoding of simd_desc such that the odd sizes are impossible. Add memset as a tcg helper, and use that for all out-of-line byte stores to vectors. This includes, but is not limited to, the tail clearing operation in question. Cc: qemu-stable@nongnu.org Buglink: https://bugs.launchpad.net/bugs/1907817 Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
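The helper itself is tiny; a hedged sketch of its shape (the TCG helper-registration macros and declarations are omitted here):

```c
#include <string.h>

/* Out-of-line byte store: let the host libc memset clear the odd-sized
 * tail instead of emitting a sequence of single-byte vector stores. */
void *HELPER(memset)(void *ptr, int c, intptr_t len)
{
    return memset(ptr, c, len);
}
```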
-
- Jan 02, 2021
-
-
Paolo Bonzini authored
Build the array of command line arguments coming from config_host once for all targets. Add all accelerators to accel/Kconfig so that the command line arguments for accelerators can be computed easily in the existing "foreach sym: accelerators" loop. Reviewed-by:
Marc-André Lureau <marcandre.lureau@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Enable removing tcg/$tcg_arch from the include path when TCG is disabled. Move translate-all.h to include/exec, since stubs exist for the functions defined therein. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Daniele Buono authored
LLVM/Clang supports runtime checks for forward-edge Control-Flow Integrity (CFI). CFI on indirect function calls (cfi-icall) ensures that, in indirect function calls, the function called is of the right signature for the pointer type defined at compile time. For this check to work, the code must always respect the function signature when using function pointers, the function must be defined at compile time, and be compiled with link-time optimization. This rules out, for example, shared libraries that are dynamically loaded (given that functions are not known at compile time), and code that is dynamically generated at run-time. This patch: 1) Introduces the CONFIG_CFI flag to support cfi in QEMU 2) Introduces a decorator to allow the definition of "sensitive" functions, where a non-instrumented function may be called at runtime through a pointer. The decorator will take care of disabling cfi-icall checks on such functions, when cfi is enabled. 3) Marks functions currently in QEMU that exhibit such behavior, in particular: - The function in TCG that calls pre-compiled TBs - The function in TCI that interprets instructions - Functions in the plugin infrastructures that jump to callbacks - Functions in util that directly call a signal handler Signed-off-by:
Daniele Buono <dbuono@linux.vnet.ibm.com> Acked-by:
Alex Bennée <alex.bennee@linaro.org> Message-Id: <20201204230615.2392-3-dbuono@linux.vnet.ibm.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
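A hedged sketch of the decorator from point 2 together with an illustrative caller of the kind listed in point 3. The attribute spelling matches Clang's cfi-icall sanitizer; the surrounding function and typedef names are invented for the example.

```c
#include <stdint.h>

/* Suppress Clang's forward-edge CFI check on a function that legitimately
 * calls code the sanitizer cannot model (e.g. JIT-generated code). */
#ifdef CONFIG_CFI
# define QEMU_DISABLE_CFI __attribute__((no_sanitize("cfi-icall")))
#else
# define QEMU_DISABLE_CFI
#endif

/* Illustrative caller: a dispatcher that jumps into generated code through
 * a function pointer.  The generated code is not instrumented, so a
 * cfi-icall check here would abort at runtime. */
typedef uintptr_t (*tb_entry_fn)(void *env, const void *tb_ptr);

static uintptr_t QEMU_DISABLE_CFI run_generated_code(void *env,
                                                     const void *tb_ptr)
{
    tb_entry_fn fn = (tb_entry_fn)tb_ptr;
    return fn(env, tb_ptr);
}
```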
-
- Dec 18, 2020
-
-
Chen Qun authored
When using -Wimplicit-fallthrough in our CFLAGS, the compiler showed a warning: ../accel/tcg/user-exec.c: In function ‘handle_cpu_signal’: ../accel/tcg/user-exec.c:169:13: warning: this statement may fall through [-Wimplicit-fallthrough=] 169 | cpu_exit_tb_from_sighandler(cpu, old_set); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../accel/tcg/user-exec.c:172:9: note: here 172 | default: Mark the cpu_exit_tb_from_sighandler() function with QEMU_NORETURN to fix it. Reported-by:
Euler Robot <euler.robot@huawei.com> Signed-off-by:
Chen Qun <kuhn.chenqun@huawei.com> Signed-off-by:
Thomas Huth <thuth@redhat.com> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by:
Thomas Huth <thuth@redhat.com> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Message-Id: <20201211152426.350966-8-thuth@redhat.com> Signed-off-by:
Thomas Huth <thuth@redhat.com>
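A sketch of the fix: since the function never returns, declaring it with QEMU_NORETURN (QEMU's wrapper around __attribute__((noreturn))) tells the compiler the fall-through into the default label cannot happen. The body shown is a simplified approximation.

```c
static void QEMU_NORETURN cpu_exit_tb_from_sighandler(CPUState *cpu,
                                                      sigset_t *old_set)
{
    /* Restore the signal mask and longjmp back into the cpu_exec loop;
     * control never returns to the signal handler. */
    sigprocmask(SIG_SETMASK, old_set, NULL);
    cpu_loop_exit_noexc(cpu);
}
```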
-
- Dec 16, 2020
-
-
Eduardo Habkost authored
Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com> Signed-off-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Message-Id: <20201212155530.23098-12-cfontana@suse.de> Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com>
-
Eduardo Habkost authored
This will let us simplify the code that initializes CPU class methods, when we move cpu_exec_*() to a separate struct. Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com> Signed-off-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Message-Id: <20201212155530.23098-11-cfontana@suse.de> Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com>
-
Eduardo Habkost authored
Move invocation of CPUClass.cpu_exec_*() to separate helpers, to make it easier to refactor that code later. Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com> Signed-off-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-Id: <20201212155530.23098-10-cfontana@suse.de> Signed-off-by:
Eduardo Habkost <ehabkost@redhat.com>
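A sketch of the kind of wrapper the commit introduces (one shown; the exit counterpart is symmetrical). The name follows the commit message; the details are assumptions.

```c
/* Thin wrapper so later refactoring only has to touch one place when the
 * cpu_exec_*() hooks move out of CPUClass into a separate struct. */
static void cpu_exec_enter(CPUState *cpu)
{
    CPUClass *cc = CPU_GET_CLASS(cpu);

    if (cc->cpu_exec_enter) {
        cc->cpu_exec_enter(cpu);
    }
}
```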
-
- Dec 15, 2020
-
-
Philippe Mathieu-Daudé authored
Since commit efc6c070 ("configure: Add a test for the minimum compiler version") the minimum compiler version required for GCC is 4.8. We can safely remove the special case for GCC 4.6 introduced in commit 0448f5f8 ("cpu-exec: Fix compiler warning (-Werror=clobbered)"). No change for Clang as we don't know. Signed-off-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Marc-André Lureau <marcandre.lureau@redhat.com> Message-Id: <20201210134752.780923-3-marcandre.lureau@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Zenghui Yu authored
The kernel KVM_CLEAR_DIRTY_LOG interface has an alignment requirement on both the start and the size of the given range of pages. We have been careful to handle the unaligned cases when performing CLEAR on one slot, but it seems that we forgot to take the unaligned *size* case into account when preparing the bitmap for the interface, and we may end up clearing dirty status for pages outside of [start, start + size). If the size is unaligned, let's go through the slow path and manipulate a temp bitmap for the interface so that we won't bother with those unaligned bits at the end of the bitmap. I don't think this can happen in practice since the upper layer would provide us with the alignment guarantee. I'm not sure if kvm-all could rely on it. And this patch is mainly intended to address the correctness of the specific algorithm used inside kvm_log_clear_one_slot(). Signed-off-by:
Zenghui Yu <yuzenghui@huawei.com> Message-Id: <20201208114013.875-1-yuzenghui@huawei.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
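A small sketch of the alignment rule being enforced (the 64-page granule comes from the ioctl's documented requirement; function and parameter names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* KVM_CLEAR_DIRTY_LOG wants both the first page and the page count to be
 * multiples of 64; otherwise fall back to a temporary bitmap so that bits
 * beyond 'size' are never cleared by accident. */
static bool clear_dirty_needs_temp_bitmap(uint64_t start, uint64_t size,
                                          uint64_t page_size)
{
    uint64_t granule = 64 * page_size;

    return (start % granule) != 0 || (size % granule) != 0;
}
```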
-
Pavel Dovgalyuk authored
cpu-exec tries to execute a TB without caching when the current icount budget is exhausted. But sometimes the refilled budget is big enough to try executing cached blocks. This patch checks that the instruction budget is big enough for the next block's execution instead of just running cpu_exec_nocache. It halves the number of calls to the cpu_exec_nocache function during the tested OS boot scenario. Signed-off-by:
Pavel Dovgalyuk <pavel.dovgalyuk@ispras.ru> Message-Id: <160741865825.348476.7169239332367828943.stgit@pasha-ThinkPad-X280> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Philippe Mathieu-Daudé authored
The '-tb-size' option (replaced by '-accel tcg,tb-size') is deprecated since 5.0 (commit fe174132). Remove it. Signed-off-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20201202112714.1223783-1-philmd@redhat.com> Reviewed-by:
Ján Tomko <jtomko@redhat.com> Signed-off-by:
Thomas Huth <thuth@redhat.com> Message-Id: <20201210155808.233895-2-thuth@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Machine options can be retrieved as properties of the machine object. Encourage that by removing the "easy" accessor to machine options. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- Dec 10, 2020
-
-
Claudio Fontana authored
Signed-off-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20201015143217.29337-4-cfontana@suse.de> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Claudio Fontana authored
After the initial split into three tcg variants, we proceed to also split tcg_start_vcpu_thread. We actually split it in two this time, since the icount variant just uses the round-robin function. Suggested-by:
Richard Henderson <richard.henderson@linaro.org> Signed-off-by:
Claudio Fontana <cfontana@suse.de> Message-Id: <20201015143217.29337-3-cfontana@suse.de> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Claudio Fontana authored
Split up the CpusAccel tcg_cpus into three TCG variants: tcg_cpus_rr (single-threaded, round-robin cpus), tcg_cpus_icount (same as rr, but with instruction counting enabled), and tcg_cpus_mttcg (multi-threaded cpus). Suggested-by:
Richard Henderson <richard.henderson@linaro.org> Signed-off-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20201015143217.29337-2-cfontana@suse.de> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
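A sketch of the resulting ops tables (the callback names are assumptions based on the description; the icount variant reuses the round-robin thread functions, which is why it could not simply be folded into tcg_cpus_rr):

```c
const CpusAccel tcg_cpus_mttcg = {
    .create_vcpu_thread = mttcg_start_vcpu_thread, /* one thread per vCPU */
    .kick_vcpu_thread   = mttcg_kick_vcpu_thread,
    .handle_interrupt   = tcg_handle_interrupt,
};

const CpusAccel tcg_cpus_rr = {
    .create_vcpu_thread = rr_start_vcpu_thread,    /* single thread, RR  */
    .kick_vcpu_thread   = rr_kick_vcpu_thread,
    .handle_interrupt   = tcg_handle_interrupt,
};

const CpusAccel tcg_cpus_icount = {
    .create_vcpu_thread = rr_start_vcpu_thread,    /* shares the RR thread */
    .kick_vcpu_thread   = rr_kick_vcpu_thread,
    .handle_interrupt   = icount_handle_interrupt,
};
```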
-
- Nov 16, 2020
-
-
Alex Bennée authored
Signed-off-by:
Alex Bennée <alex.bennee@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20201110192316.26397-7-alex.bennee@linaro.org>
-
- Nov 03, 2020
-
-
Elena Afanasova authored
Signed-off-by:
Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by:
Elena Afanasova <eafanasova@gmail.com> Message-Id: <20201017210102.26036-1-eafanasova@gmail.com> Signed-off-by:
Stefan Hajnoczi <stefanha@redhat.com>
-
- Oct 27, 2020
-
-
Peter Maydell authored
When using -icount, it's useful for the CPU_LOG_EXEC logging to include information about when cpu_io_recompile() was called, because it alerts the reader of the log that the tracing of a previous TB execution may not correspond to an instruction that was actually executed. For instance, if you're using -icount and also -singlestep, then a guest instruction that makes an IO access appears in two "Trace" lines: once in a TB that triggers the cpu_io_recompile() and then again in the TB that actually executes. (This is a similar reason to why the "Stopped execution of TB chain before..." logging in cpu_tb_exec() is helpful when trying to track execution flow in the logs.) Signed-off-by:
Peter Maydell <peter.maydell@linaro.org> Message-Id: <20201013122658.4620-1-peter.maydell@linaro.org> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
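A sketch of the extra log line, in the spirit of QEMU's CPU_LOG_EXEC tracing (the exact message format is an assumption):

```c
/* Called from cpu_io_recompile(): warns the log reader that the previous
 * "Trace" line for this TB did not correspond to completed execution. */
static void log_io_recompile(const TranslationBlock *tb)
{
    qemu_log_mask_and_addr(CPU_LOG_EXEC, tb->pc,
                           "cpu_io_recompile: rewound execution of TB to "
                           TARGET_FMT_lx "\n", tb->pc);
}
```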
-
Greg Kurz authored
Since we introduced CPU hot-unplug in sPAPR, we don't unrealize the vCPU objects explicitly. Instead, we let QOM handle that for us under object_property_del_all() when the CPU core object is finalized. The only thing we do is call cpu_remove_sync() to tear the vCPU thread down. This happens to work but it is ugly because: - we call qdev_realize() but the corresponding qdev_unrealize() is buried deep in the QOM code - we call cpu_remove_sync() to undo qemu_init_vcpu() called by ppc_cpu_realize() in target/ppc/translate_init.c.inc - the CPU init and teardown paths aren't really symmetrical The latter hasn't bitten us so far, but a future patch that greatly simplifies the CPU core realize path needs it to avoid a crash in QOM. For all these reasons, have ppc_cpu_unrealize() undo the changes of ppc_cpu_realize() by calling cpu_remove_sync() at the right place, and have the sPAPR CPU core code call qdev_unrealize(). This requires adding a missing stub because translate_init.c.inc is also compiled for user mode. Signed-off-by:
Greg Kurz <groug@kaod.org> Message-Id: <160279671236.1808373.14732005038172874990.stgit@bahia.lan> Signed-off-by:
David Gibson <david@gibson.dropbear.id.au>
-
- Oct 24, 2020
-
-
Jason Andryuk authored
Xen was broken by commit 1583a389 ("cpus: extract out qtest-specific code to accel/qtest"). Xen relied on qemu_init_vcpu() calling qemu_dummy_start_vcpu() in the default case, but that was replaced by g_assert_not_reached(). Add a minimal "CpusAccel" for Xen using the dummy-cpus implementation used by qtest. Signed-off-by:
Jason Andryuk <jandryuk@gmail.com> Message-Id: <20201013140511.5681-4-jandryuk@gmail.com> Acked-by:
Paolo Bonzini <pbonzini@redhat.com> Reviewed-by:
Claudio Fontana <cfontana@suse.de> Acked-by:
Anthony PERARD <anthony.perard@citrix.com> Signed-off-by:
Thomas Huth <thuth@redhat.com>
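The minimal ops table this amounts to, sketched: Xen does not run vCPUs inside QEMU itself, so a dummy thread that merely waits for work is enough to satisfy qemu_init_vcpu() (hook name assumed from the description).

```c
/* Xen guest vCPUs run in the hypervisor; QEMU only needs placeholder
 * threads, shared with the qtest accelerator via dummy-cpus.c. */
const CpusAccel xen_cpus = {
    .create_vcpu_thread = dummy_start_vcpu_thread,
};
```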
-
Jason Andryuk authored
Move and rename accel/qtest/qtest-cpus.c files to accel/dummy-cpus.c so it can be re-used by Xen. Signed-off-by:
Jason Andryuk <jandryuk@gmail.com> Message-Id: <20201013140511.5681-3-jandryuk@gmail.com> Reviewed-by:
Claudio Fontana <cfontana@suse.de> Acked-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Huth <thuth@redhat.com>
-
Jason Andryuk authored
dummy-cpus.c is only compiled with CONFIG_POSIX, so the _WIN32 condition will never evaluate true. Remove it. Signed-off-by:
Jason Andryuk <jandryuk@gmail.com> Message-Id: <20201013140511.5681-2-jandryuk@gmail.com> Acked-by:
Paolo Bonzini <pbonzini@redhat.com> Reviewed-by:
Claudio Fontana <cfontana@suse.de> Reviewed-by:
Thomas Huth <thuth@redhat.com> Signed-off-by:
Thomas Huth <thuth@redhat.com>
-
- Oct 21, 2020
-
-
Philippe Mathieu-Daudé authored
Restricting the xen-set-global-dirty-log and xen-load-devices-state commands to migration.json pulls slightly less QAPI-generated code into user-mode and tools. Acked-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by:
Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20201012121536.3381997-6-philmd@redhat.com> Reviewed-by:
Eduardo Habkost <ehabkost@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
- Oct 20, 2020
-
-
Richard Henderson authored
On ARM, the Top Byte Ignore feature means that only 56 bits of the address are significant in the virtual address. We are required to give the entire 64-bit address to FAR_ELx on fault, which means that we do not "clean" the top byte early in TCG. This new interface allows us to flush all 256 possible aliases for a given page, currently missed by tlb_flush_page*. Signed-off-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by:
Peter Maydell <peter.maydell@linaro.org> Message-id: 20201016210754.818257-2-richard.henderson@linaro.org Signed-off-by:
Peter Maydell <peter.maydell@linaro.org>
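The shape of the new interface, sketched (prototype assumed from the description): the extra 'bits' argument says how many low address bits are significant, so with ARM Top Byte Ignore a flush with bits=56 covers all 256 top-byte aliases of the page in one call.

```c
/* Flush 'addr' from the TLBs named in 'idxmap', matching entries on only
 * the low 'bits' bits of the address (e.g. 56 when the top byte is not
 * part of the translated address). */
void tlb_flush_page_bits_by_mmuidx(CPUState *cpu, target_ulong addr,
                                   uint16_t idxmap, unsigned bits);
```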
-
- Oct 08, 2020
-
-
Kele Huang authored
Detect all MIPS store instructions in cpu_signal_handler for all available MIPS versions, and set is_write when encountering such store instructions. This fixes the error seen when dealing with self-modifying code on MIPS. Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Signed-off-by:
Kele Huang <kele.hwang@gmail.com> Signed-off-by:
Xu Zou <iwatchnima@gmail.com> Message-Id: <20201002081420.10814-1-kele.hwang@gmail.com> [rth: Use uintptr_t for pc to fix n32 build error.] Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
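A sketch of the kind of decode this adds to cpu_signal_handler (only a few base-ISA major opcodes shown; the real patch covers many more store encodings):

```c
#include <stdint.h>

/* Return nonzero if the faulting instruction word is a store, so the
 * SIGSEGV handler can treat the access as a write and invalidate any
 * translated code on that page. */
static int mips_insn_is_store(uint32_t insn)
{
    switch (insn >> 26) {       /* MIPS major opcode field */
    case 0x28: /* SB  */
    case 0x29: /* SH  */
    case 0x2a: /* SWL */
    case 0x2b: /* SW  */
    case 0x2e: /* SWR */
    case 0x3f: /* SD  */
        return 1;
    }
    return 0;
}
```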
-
- Oct 06, 2020
-
-
Pavel Dovgalyuk authored
GDB remote protocol supports two reverse debugging commands: reverse step and reverse continue. This patch adds support of the first one to the gdbstub. Reverse step is intended to step one instruction in the backwards direction. This is not possible in regular execution. But replayed execution is deterministic, therefore we can load one of the prior snapshots and proceed to the desired step. It is equivalent to stepping one instruction back. There should be at least one snapshot preceding the debugged part of the replay log. Signed-off-by:
Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> -- v4 changes: - inverted condition in cpu_handle_guest_debug (suggested by Alex Bennée) Message-Id: <160174522341.12451.1498758422543765253.stgit@pasha-ThinkPad-X280> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Pavel Dovgalyuk authored
Interrupt poll is not a real interrupt event. It is needed only for thread safety. This interrupt is used for i386 and converted to a hardware interrupt by the cpu_handle_interrupt function. Therefore it does not need to be recorded, because the hardware interrupt will be recorded after the conversion. Signed-off-by:
Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Reviewed-by:
Philippe Mathieu-Daudé <philmd@redhat.com> -- v4 changes: - Condition check refactoring (suggested by Alex Bennée) Message-Id: <160174517124.12451.12983410242461131737.stgit@pasha-ThinkPad-X280> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
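A sketch of the resulting filter (function name is illustrative; CPU_INTERRUPT_POLL is the real i386 flag): the poll pseudo-interrupt is skipped, and the hardware interrupt it is converted into is what gets recorded.

```c
static bool need_replay_interrupt(int interrupt_request)
{
#if defined(TARGET_I386)
    /* CPU_INTERRUPT_POLL only exists to poke the vCPU thread safely; the
     * real hardware interrupt is recorded after cpu_handle_interrupt()
     * converts it, so recording the poll itself would be redundant. */
    return !(interrupt_request & CPU_INTERRUPT_POLL);
#else
    return true;
#endif
}
```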
-