From: David Gibson
Subject: [PULL 08/45] target/ppc: Correct handling of real mode accesses with vhyp on hash MMU
Date: Tue, 17 Mar 2020 21:03:46 +1100
On ppc we have the concept of virtual hypervisor ("vhyp") mode, where we
only model the non-hypervisor-privileged parts of the cpu. Essentially we
model the hypervisor's behaviour from the point of view of a guest OS, but
we don't model the hypervisor's execution.
In particular, in this mode, qemu's notion of target physical address is
a guest physical address from the vcpu's point of view. So accesses in
guest real mode don't require translation. If we were modelling the
hypervisor itself, we'd need to translate the guest physical address into
a host physical address.
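
To make that concrete, here is a rough sketch (a hypothetical helper, not
part of the patch) of what the vhyp real-mode case amounts to: the top 4
effective address bits are ignored and the result is used directly as a
guest physical address, with no relocation applied.

    /* Illustrative only: in vhyp mode, real-mode "translation" is nothing
     * more than masking the top 4 EA bits; EA == GPA == qemu guest address. */
    static hwaddr vhyp_real_mode_addr(vaddr eaddr)
    {
        return eaddr & 0x0FFFFFFFFFFFFFFFULL;
    }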
Currently, we handle this sloppily: we rely on setting up the virtual LPCR
and RMOR registers so that GPAs are simply HPAs plus an offset, which we
set to zero. This is already conceptually dubious, since the LPCR and RMOR
registers don't exist in the non-hypervisor portion of the CPU. It gets
worse with POWER9, where RMOR and LPCR[VPM0] no longer exist at all.
Clean this up by explicitly handling the vhyp case. While we're there,
remove some unnecessary nesting of if statements that made the logic to
select the correct real mode behaviour a bit less clear than it could be.
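
In outline, the real-mode selection logic after this change looks roughly
like the following (an illustrative sketch only; the authoritative version
is in the diff below):

    raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
    if (cpu->vhyp) {
        /* Virtual hypervisor: EA == GPA, nothing more to do */
    } else if (msr_hv || !env->has_hv_mode) {
        /* Hypervisor real mode: apply HRMOR if the top EA bit is clear */
        if (!(eaddr >> 63)) {
            raddr |= env->spr[SPR_HRMOR];
        }
    } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
        /* Emulated VRMA: translate through the virtual SLB entry */
    } else {
        /* Emulated old-style RMO: bounds check against RMLS, then add RMOR */
        if (raddr >= env->rmls) {
            /* access fails: raise the appropriate ISI or DSI */
        } else {
            raddr |= env->spr[SPR_RMOR];
        }
    }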
Signed-off-by: David Gibson <address@hidden>
Reviewed-by: Cédric Le Goater <address@hidden>
Reviewed-by: Greg Kurz <address@hidden>
---
target/ppc/mmu-hash64.c | 60 ++++++++++++++++++++++++-----------------
1 file changed, 35 insertions(+), 25 deletions(-)
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 3e0be4d55f..392f90e0ae 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -789,27 +789,30 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
*/
raddr = eaddr & 0x0FFFFFFFFFFFFFFFULL;
- /* In HV mode, add HRMOR if top EA bit is clear */
- if (msr_hv || !env->has_hv_mode) {
+ if (cpu->vhyp) {
+ /*
+ * In virtual hypervisor mode, there's nothing to do:
+ * EA == GPA == qemu guest address
+ */
+ } else if (msr_hv || !env->has_hv_mode) {
+ /* In HV mode, add HRMOR if top EA bit is clear */
if (!(eaddr >> 63)) {
raddr |= env->spr[SPR_HRMOR];
}
- } else {
- /* Otherwise, check VPM for RMA vs VRMA */
- if (env->spr[SPR_LPCR] & LPCR_VPM0) {
- slb = &env->vrma_slb;
- if (slb->sps) {
- goto skip_slb_search;
- }
- /* Not much else to do here */
+ } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+ /* Emulated VRMA mode */
+ slb = &env->vrma_slb;
+ if (!slb->sps) {
+ /* Invalid VRMA setup, machine check */
cs->exception_index = POWERPC_EXCP_MCHECK;
env->error_code = 0;
return 1;
- } else if (raddr < env->rmls) {
- /* RMA. Check bounds in RMLS */
- raddr |= env->spr[SPR_RMOR];
- } else {
- /* The access failed, generate the approriate interrupt */
+ }
+
+ goto skip_slb_search;
+ } else {
+ /* Emulated old-style RMO mode, bounds check against RMLS */
+ if (raddr >= env->rmls) {
if (rwx == 2) {
ppc_hash64_set_isi(cs, SRR1_PROTFAULT);
} else {
@@ -821,6 +824,8 @@ int ppc_hash64_handle_mmu_fault(PowerPCCPU *cpu, vaddr eaddr,
}
return 1;
}
+
+ raddr |= env->spr[SPR_RMOR];
}
tlb_set_page(cs, eaddr & TARGET_PAGE_MASK, raddr & TARGET_PAGE_MASK,
PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx,
@@ -953,22 +958,27 @@ hwaddr ppc_hash64_get_phys_page_debug(PowerPCCPU *cpu, target_ulong addr)
/* In real mode the top 4 effective address bits are ignored */
raddr = addr & 0x0FFFFFFFFFFFFFFFULL;
- /* In HV mode, add HRMOR if top EA bit is clear */
- if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+ if (cpu->vhyp) {
+ /*
+ * In virtual hypervisor mode, there's nothing to do:
+ * EA == GPA == qemu guest address
+ */
+ return raddr;
+ } else if ((msr_hv || !env->has_hv_mode) && !(addr >> 63)) {
+ /* In HV mode, add HRMOR if top EA bit is clear */
return raddr | env->spr[SPR_HRMOR];
- }
-
- /* Otherwise, check VPM for RMA vs VRMA */
- if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+ } else if (env->spr[SPR_LPCR] & LPCR_VPM0) {
+ /* Emulated VRMA mode */
slb = &env->vrma_slb;
if (!slb->sps) {
return -1;
}
- } else if (raddr < env->rmls) {
- /* RMA. Check bounds in RMLS */
- return raddr | env->spr[SPR_RMOR];
} else {
- return -1;
+ /* Emulated old-style RMO mode, bounds check against RMLS */
+ if (raddr >= env->rmls) {
+ return -1;
+ }
+ return raddr | env->spr[SPR_RMOR];
}
} else {
slb = slb_lookup(cpu, addr);
--
2.24.1