- Jun 05, 2016
-
-
Richard Henderson authored
The arm target was handled by 06486077, but other targets were ignored. This handles all the rest which actually support disassembly (that is, skipping moxie and tilegx).
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
- May 30, 2016
-
-
Benjamin Herrenschmidt authored
This will enable decoding of hrfid.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
Otherwise tight loops at smt_low priority (which OPAL does, for example) eat so much CPU that we can't boot a kernel anymore. With that, I can boot 8 CPUs just fine with powernv.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Michael Neuling authored
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
Otherwise it will trip on the forms used in recent versions of the architecture. Ideally, we should have different handlers for different architecture levels, but our current implementation of TLB flushing is dumb enough that this will do for now.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
Not that anything remotely recent supports tlbia, but ...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
On ppc64 especially, we flush the tlb on any slbie or tlbie instruction. However, those instructions often come in bursts of 3 or more (a context switch will favor a series of slbie's over an slbia if the SLB has fewer than a certain number of entries in it, and tlbie's can happen in series; with PAPR, H_BULK_REMOVE can remove up to 4 entries at a time). Doing a tlb_flush() each time is a waste of time. We end up doing a memset of the whole TLB, reloading it for the next instruction, memset'ing again, etc... Those instructions don't have to take effect immediately. For slbie, they can wait for the next context-synchronizing event. For tlbie, the next tlbsync. This implements batching by keeping a flag that indicates that we have a TLB in need of flushing. We check it on interrupts, rfi's, isync's and tlbsync and flush the TLB if needed. This reduces the number of tlb_flush() calls on a boot to a Ubuntu installer's first dialog screen from roughly 360K down to 36K.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: added a 'CPUPPCState *' variable in h_remove() and h_bulk_remove() ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
[dwg: removed spurious whitespace change, use 0/1 not true/false consistently, since tlb_need_flush has int type]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
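A minimal sketch of the batching scheme this commit describes, in C. The flag name tlb_need_flush is taken from the commit notes; the handler and helper names here are illustrative assumptions, not QEMU's exact code:

```c
/* Sketch only: tlb_need_flush is from the commit; other names are assumed. */

/* slbie/tlbie handlers no longer flush immediately; they mark the TLB dirty. */
static void helper_slbie_sketch(CPUPPCState *env)
{
    /* ... invalidate the targeted SLB entry itself ... */
    env->tlb_need_flush = 1;   /* defer the expensive tlb_flush() */
}

/* Checked at context-synchronizing points: interrupts, rfi, isync, tlbsync. */
static void check_tlb_flush_sketch(CPUPPCState *env)
{
    if (env->tlb_need_flush) {
        env->tlb_need_flush = 0;
        tlb_flush(CPU(ppc_env_get_cpu(env)), 1);  /* one flush per burst */
    }
}
```

This way a burst of slbie's costs a single flush at the next synchronizing event instead of one whole-TLB memset per instruction.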
-
Benjamin Herrenschmidt authored
We rework the way the MMU indices are calculated, providing separate indices for I and D side based on MSR:IR and MSR:DR respectively, and thus no longer need to flush the TLB on context changes. This also adds correct support for HV as a separate address space.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
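A sketch of the idea, with an index encoding that is an assumption of mine rather than QEMU's actual layout: instruction fetches select an index from MSR:IR, data accesses from MSR:DR, and HV gets its own space, so a context change just selects a different index instead of flushing.

```c
/* Illustrative only: the encoding below is assumed, not QEMU's.
 * Pass MSR:IR for instruction fetches and MSR:DR for data accesses. */
static int mmu_idx_sketch(int relocation_on, int msr_hv)
{
    return (relocation_on ? 1 : 0) | (msr_hv ? 2 : 0);
}
```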
-
Benjamin Herrenschmidt authored
We don't use the resulting accessors and this gets in the way of the split I/D TLB work.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- May 26, 2016
-
-
Alexey Kardashevskiy authored
6a81dd17 "spapr_iommu: Rename vfio_accel parameter" renamed vfio_accel flag everywhere but one spot was missed. Signed-off-by:
Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by:
David Gibson <david@gibson.dropbear.id.au> Signed-off-by:
David Gibson <david@gibson.dropbear.id.au>
-
Greg Kurz authored
The KVM API restricts vcpu ids to be < KVM_CAP_MAX_VCPUS. On PowerPC targets, depending on the number of threads per core in the host and in the guest, some topologies can actually generate higher vcpu ids. When this happens, QEMU bails out with the following error:

kvm_init_vcpu failed: Invalid argument

The KVM_CREATE_VCPU ioctl has several EINVAL return paths, so it is not possible to fully disambiguate. This patch adds a check in the code that computes vcpu ids, so that we can detect the error earlier and print a friendlier message instead of calling KVM_CREATE_VCPU with an obviously bogus vcpu id.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
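A hedged sketch of the kind of early check described; kvm_check_extension() is a real QEMU helper, but the function name, call site, and error wording below are assumptions:

```c
/* Sketch: validate a computed vcpu id before KVM_CREATE_VCPU can reject it. */
static void validate_vcpu_id_sketch(KVMState *s, int vcpu_id)
{
    int max_vcpus = kvm_check_extension(s, KVM_CAP_MAX_VCPUS);

    if (max_vcpus > 0 && vcpu_id >= max_vcpus) {
        error_report("vcpu id %d is too big for this KVM (max %d); "
                     "try fewer cores or threads per core",
                     vcpu_id, max_vcpus);
        exit(1);
    }
}
```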
-
Richard Henderson authored
Mirror the cleanups just done to rlwinm, rlwnm and rlwimi. This adds use of deposit to rldimi.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Richard Henderson authored
A 32-bit rotate insn is more common on hosts than a deposit insn, and if the host has neither, the result is truly horrific. At the same time, tidy up the temporaries within these functions, drop the over-use of "likely", drop some checks for identity that will also be checked by tcg-op.c functions, and special-case mask-without-rotate within rlwinm.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
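For reference, the rlwinm semantics being regenerated: rotate the low 32 bits left, then AND with the MB..ME mask; with a rotate count of zero the whole thing degenerates to a plain AND, which is the special case mentioned. A plain-C reference model (not the TCG emission itself):

```c
#include <stdint.h>

/* Reference semantics of rlwinm: 32-bit rotate-left, then mask.
 * The (32 - sh) & 31 keeps the right shift defined when sh == 0. */
static uint32_t rlwinm_ref(uint32_t rs, unsigned sh, uint32_t mask)
{
    uint32_t rotated = (rs << sh) | (rs >> ((32 - sh) & 31));
    return rotated & mask;   /* sh == 0 reduces to rs & mask */
}
```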
-
Richard Henderson authored
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
David Gibson authored
ppc_hash64_set_external_hpt() was added in e5c0d3ce "target-ppc: Add helpers for updating a CPU's SDR1 and external HPT". This helper contains a cpu_synchronize_state() since it may need to push state back to KVM afterwards. This turns out to break things when it is used in the reset path, which is the only current user. It appears that kvm_vcpu_dirty is not being set early in the reset path, so the cpu_synchronize_state() clobbers state set up by the early part of the cpu reset path with stale state from KVM. This may require some changes to the generic cpu reset path to fix properly, but as a short-term fix we can just remove the cpu_synchronize_state() from ppc_hash64_set_external_hpt() and require any non-reset-path callers to do that manually.
Reported-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- May 19, 2016
-
-
Paolo Bonzini authored
exec-all.h contains TCG-specific definitions. It is not needed outside TCG-specific files such as translate.c, exec.c or *helper.c. One generic function had snuck into include/exec/exec-all.h; move it to include/qom/cpu.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Remove usage of NEED_CPU_H from hw/hw.h.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
This changes a cpu.h dependency for hw/ppc/ppc.h into a cpu-qom.h dependency. For it to compile we also need to clean up a few unused definitions.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Make PowerPCCPU an opaque type within cpu-qom.h, and move all definitions of private methods, as well as all type definitions that require knowledge of the layout, to cpu.h. Conversely, move all definitions needed to define a class to cpu-qom.h. This helps make files independent of NEED_CPU_H if they only need to pass around CPU pointers.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Just leave some members in even if they are unused on e.g. 32-bit PPC or user-mode emulation. This avoids complications when using PowerPCCPUClass in code that is compiled just once (because it applies to both 32-bit and 64-bit PPC for example) but still needs to peek at PPC-specific members.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Bring the PowerPCCPUClass handle_mmu_fault method type into line with the one in CPUClass. Using vaddr also makes the cpu-qom.h file target independent.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Arrange for cpu-qom.h to be included only from cpu.h. Then there is no need for it to include cpu.h again. Later we will make cpu-qom.h target independent and we will _want_ to include it from elsewhere, but for now reduce the number of cases to handle.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- May 13, 2016
-
-
Sergey Fedorov authored
In user mode, there's only static address translation, TBs are always invalidated properly, and direct jumps are reset when mappings change. Thus the destination address is always valid for direct jumps and there's no need to restrict it to the pages the TB resides in.
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Cc: Riku Voipio <riku.voipio@iki.fi>
Cc: Blue Swirl <blauwirbel@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
-
Emilio G. Cota authored
We are inconsistent with the type of tb->flags: usage varies loosely between int and uint64_t. Settle on uint32_t everywhere, which is superior to both: at least one target (aarch64) uses the most significant bit in the u32, and uint64_t is wasteful. Compile-tested for all targets.
Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Suggested-by: Richard Henderson <rth@twiddle.net>
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1460049562-23517-1-git-send-email-cota@braap.org>
-
- Apr 18, 2016
-
-
Thomas Huth authored
env->xer only holds the lower bits of the XER register nowadays; the SO, OV and CA bits are stored in separate variables (see the function cpu_write_xer() for details). Since the migration code currently only reads the "xer" variable, the upper bits are lost during migration. Fix it by using cpu_read_xer() instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
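The fix boils down to serializing the reassembled architected value rather than the partial env->xer field. A sketch of what cpu_read_xer() computes, assuming QEMU's XER_SO/XER_OV/XER_CA bit-position macros and the separate so/ov/ca fields from cpu.h:

```c
/* Sketch: rebuild the full architected XER from the separately stored
 * flag fields. Field and macro names are assumed from QEMU's cpu.h. */
static target_ulong cpu_read_xer_sketch(const CPUPPCState *env)
{
    return env->xer | (env->so << XER_SO)    /* summary overflow */
                    | (env->ov << XER_OV)    /* overflow */
                    | (env->ca << XER_CA);   /* carry */
}
```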
-
Thomas Huth authored
The range checks in the LSWX instruction are completely insufficient: they do not take the wrap-around case into account, and the check "reg < rx" should be "reg <= rx" instead. Fix it by using the new lsw_reg_in_range() helper function that is already used for LSWI, too. Then there is a second problem: in case the INVAL exception is generated, the NIP value is wrong; it currently points to the instruction before the LSWX instruction. This is because gen_lswx() already decreases the NIP value by 4 (to be prepared for page fault exceptions), and powerpc_excp() later decreases it again by 4 while handling the program exception. So to get this right, we've got to undo the "- 4" from gen_lswx() here before calling helper_raise_exception_err().
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Thomas Huth authored
There are two issues: first, the number of registers that are used has to be calculated with "(nb + 3) / 4" (i.e. always round up, not down). Second, the "start <= ra && (start + nr - 32) > ra" condition for the wrap-around case is wrong: it has to be tested with "||" instead of "&&". Since we can reuse this check later for the LSWX instruction, let's place the fixed code into a helper function, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
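A sketch of the shared helper described above (also used by the LSWX fix in the previous entry). For nb bytes, nregs = (nb + 3) / 4 registers are touched starting at 'start', wrapping from r31 back to r0, so rx can collide with either segment of a wrapped range, hence the "||":

```c
#include <stdbool.h>

/* Sketch of lsw_reg_in_range(): does register rx fall inside the nregs
 * registers starting at 'start', taking the r31 -> r0 wrap into account? */
static bool lsw_reg_in_range(int start, int nregs, int rx)
{
    return (start + nregs <= 32 && rx >= start && rx < start + nregs) ||
           (start + nregs > 32 && (rx >= start || rx < start + nregs - 32));
}
```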
-
- Apr 05, 2016
-
-
Cédric Le Goater authored
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This patch fixes the current AIL implementation for POWER8. The interrupt vector address can be calculated directly from LPCR when the exception is handled. The excp_prefix update becomes useless and we can clean up the H_SET_MODE hcall.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: Removed LPES0/1 handling for HV vs. !HV; Fixed LPCR_ILE case for POWERPC_EXCP_POWER8 ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
[dwg: This was written as a cleanup, but it also fixes a real bug where setting an alternative interrupt location would not be correctly migrated]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
- Mar 24, 2016
-
-
Cédric Le Goater authored
commit fce55481360d "ppc: A couple more dummy POWER8 Book4 regs" too hastily squashed in a set of POWER8 Book4 regs in the wrong routine. This patch introduces the missing gen_spr_power8_book4() routine to fix their location.
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: squashed in patch 'ppc: Add dummy ACOP SPR' ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
We should implement HW breakpoints/watchpoints; qemu supports them...
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
With appropriate AMR-like masks. Not actually used by the translation logic at that point.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: changed spr_register_hv(SPR_IAMR) to spr_register_kvm_hv(SPR_IAMR); changed gen_spr_amr() prototype ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
The masks were neither chosen nor applied properly. The architecture specifies that writes to AMR are masked by UAMOR when PR=1, otherwise by AMOR when HV=0. Writes to UAMOR are masked by AMOR when HV=0.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: moved gen_spr_amr() prototype change to next patch ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
Make sure we give the guest full authorization.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
It's supposed to be an instruction counter. For now, make sure we don't crash when accessing it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
And move the code adjusting the MSR mask and calling kvmppc_set_papr() to it. This allows us to add a few more things, such as disabling the setting of MSR:HV and appropriate LPCR bits, which will be used when fixing the exception model.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: removed LPCR setting ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
We don't give a KVM reg number to most of the registers yet, as no current KVM version supports HV mode. For DAWR and DAWRX, the KVM reg number is needed since these registers can be set by the guest via the H_SET_MODE hypercall.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: squashed in patch 'ppc: Add KVM numbers to some P8 SPRs'; changed the commit log with a proposal of Thomas Huth; removed all hunks except those related to AMOR and DAWR* ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-
Benjamin Herrenschmidt authored
The current set of spr_register_* macros only take the user and supervisor function pointers. To make the transition easy, we don't change that, but we add "_hv" variants that can be used to register all 3 sets. To simplify the transition, users of the "old" macro will set the hypervisor callback to be the same as the supervisor one. The new registration function only needs to be used for registers that are either hypervisor-only or behave differently in HV mode.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[clg: fixed else if condition in gen_op_mfspr() ]
Signed-off-by: Cédric Le Goater <clg@fr.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
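An illustrative shape of the compatibility shim described, with argument lists simplified relative to QEMU's actual macro signatures: the legacy macro forwards its supervisor callbacks as the hypervisor set, so existing registrations behave unchanged.

```c
/* Illustrative only: argument lists are simplified, not QEMU's exact ones. */
#define spr_register(env, num, name, uea_read, uea_write,              \
                     oea_read, oea_write, initial)                     \
    spr_register_hv(env, num, name,                                    \
                    uea_read, uea_write,   /* user (problem state) */  \
                    oea_read, oea_write,   /* supervisor */            \
                    oea_read, oea_write,   /* HV mirrors supervisor */ \
                    initial)
```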
-
Benjamin Herrenschmidt authored
Add definitions for additional SPR numbers and SPR bit definitions that will be relevant for subsequent improvements to POWER8 emulation. Also fix the definition of LPIDR, which was incorrect (and is different for server and embedded).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
-