Re: [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
From: Cédric Le Goater
Subject: Re: [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
Date: Wed, 18 Mar 2020 17:45:50 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0
On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
>
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on whether it removed valid entries.
>
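To make the gap concrete, here is a minimal self-contained C sketch (illustrative stand-ins only, not QEMU's actual structures or API): slbmte overwrites the only valid entry without flushing, so the pre-patch slbia finds nothing valid and skips the flush, leaving stale lookaside information behind. The patch below makes the flush unconditional, which closes exactly this hole.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative stand-ins, not QEMU code: a 32-entry SLB, a valid bit,
 * and a flag standing in for env->tlb_need_flush.
 */
#define SLB_SIZE   32
#define SLB_ESID_V 0x0000000008000000ULL

struct slb_entry { unsigned long long esid; };
static struct slb_entry slb[SLB_SIZE];
static bool tlb_flush_requested;

/* slbmte overwrites an entry without touching lookaside information. */
static void slbmte(int n, unsigned long long esid, bool valid)
{
    slb[n].esid = valid ? (esid | SLB_ESID_V) : esid;
}

/* Pre-patch behaviour: flush only when a valid entry is cleared. */
static void slbia_old(void)
{
    /* entry 0 is never invalidated by slbia */
    for (int n = 1; n < SLB_SIZE; n++) {
        if (slb[n].esid & SLB_ESID_V) {
            slb[n].esid &= ~SLB_ESID_V;
            tlb_flush_requested = true;
        }
    }
}

int main(void)
{
    slbmte(1, 0x100, true);   /* valid entry; its translations get cached */
    slbmte(1, 0x100, false);  /* overwrite with an invalid entry: no flush */
    slbia_old();              /* sees no valid entry, so it never flushes */

    /* Stale lookaside information for the old entry survives. */
    printf("flush requested: %s\n", tlb_flush_requested ? "yes" : "no");
    return 0;
}
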
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because Linux never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to userspace
> regions. However, other operating systems which modify SLB entry 0
> or do more fancy things with segments might be affected.
Did you hit the bug on another OS?
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
>
> Signed-off-by: Nicholas Piggin <address@hidden>
Looks correct.
Reviewed-by: Cédric Le Goater <address@hidden>
Thanks,
C.
> ---
>  target/ppc/mmu-hash64.c | 21 +++++++++++++++------
>  1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
>      PowerPCCPU *cpu = env_archcpu(env);
>      int n;
>
> +    /*
> +     * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> +     * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> +     * can overwrite a valid SLB without flushing its lookaside information.
> +     *
> +     * It would be possible to keep the TLB in synch with the SLB by flushing
> +     * when a valid entry is overwritten by slbmte, and therefore slbia would
> +     * not have to flush unless it evicts a valid SLB entry. However it is
> +     * expected that slbmte is more common than slbia, and slbia is usually
> +     * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> +     * good one.
> +     */
> +
>      /* XXX: Warning: slbia never invalidates the first segment */
>      for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
>          ppc_slb_t *slb = &env->slb[n];
>
>          if (slb->esid & SLB_ESID_V) {
>              slb->esid &= ~SLB_ESID_V;
> -            /*
> -             * XXX: given the fact that segment size is 256 MB or 1TB,
> -             * and we still don't have a tlb_flush_mask(env, n, mask)
> -             * in QEMU, we just invalidate all TLBs
> -             */
> -            env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>          }
>      }
> +
> +    env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
>  }
>
>  static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>
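For contrast, the tradeoff weighed in the new comment can also be sketched (again with the illustrative stand-ins from the sketch above, not QEMU's API): the rejected alternative would flush eagerly in slbmte whenever it overwrites a valid entry, which would let slbia keep its conditional flush. The patch avoids that because slbmte is expected to execute far more often than slbia.

/*
 * Rejected alternative sketched from the comment above: flush lookaside
 * information in slbmte itself whenever a valid entry is overwritten,
 * so slbia could flush only when it clears a valid entry.
 */
static void slbmte_eager(int n, unsigned long long esid, bool valid)
{
    if (slb[n].esid & SLB_ESID_V) {
        tlb_flush_requested = true; /* stands in for env->tlb_need_flush |= ... */
    }
    slb[n].esid = valid ? (esid | SLB_ESID_V) : esid;
}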