From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH 22/36] cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
Date: Fri, 6 Sep 2019 10:58:00 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

On 9/6/19 7:02 AM, Peter Maydell wrote:
> On Tue, 3 Sep 2019 at 17:09, Richard Henderson
> <address@hidden> wrote:
>>
>> We had two different mechanisms to force a recheck of the tlb.
>>
>> Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
>> that would immediately set TLB_INVALID_MASK, which automatically
>> means that a second check of the tlb entry fails.
>>
>> We can use the same mechanism to handle small pages.
>> Conserve TLB_* bits by removing TLB_RECHECK.
>>
>> Reviewed-by: David Hildenbrand <address@hidden>
>> Signed-off-by: Richard Henderson <address@hidden>
>> ---
> 
>> @@ -1265,27 +1269,6 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>>          if ((addr & (size - 1)) != 0) {
>>              goto do_unaligned_access;
>>          }
>> -
>> -        if (tlb_addr & TLB_RECHECK) {
>> -            /*
>> -             * This is a TLB_RECHECK access, where the MMU protection
>> -             * covers a smaller range than a target page, and we must
>> -             * repeat the MMU check here. This tlb_fill() call might
>> -             * longjump out if this access should cause a guest exception.
>> -             */
>> -            tlb_fill(env_cpu(env), addr, size,
>> -                     access_type, mmu_idx, retaddr);
>> -            index = tlb_index(env, mmu_idx, addr);
>> -            entry = tlb_entry(env, mmu_idx, addr);
>> -
>> -            tlb_addr = code_read ? entry->addr_code : entry->addr_read;
>> -            tlb_addr &= ~TLB_RECHECK;
>> -            if (!(tlb_addr & ~TARGET_PAGE_MASK)) {
>> -                /* RAM access */
>> -                goto do_aligned_access;
>> -            }
>> -        }
>> -
>>          return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
>>                          mmu_idx, addr, retaddr, access_type, op);
>>      }
> 
> In the old version of this code, we do the "tlb fill if TLB_RECHECK
> is set", and then we say "now we've done the refill have we actually
> got RAM", and we avoid calling io_readx() if that is the case.


I don't think that's the case, since:

        if (!victim_tlb_hit(env, mmu_idx, index, tlb_off,
                            addr & TARGET_PAGE_MASK)) {
            tlb_fill(env_cpu(env), addr, size,
                     access_type, mmu_idx, retaddr);
            index = tlb_index(env, mmu_idx, addr);
            entry = tlb_entry(env, mmu_idx, addr);
        }
        tlb_addr = code_read ? entry->addr_code : entry->addr_read;
        tlb_addr &= ~TLB_INVALID_MASK;
    }

The last line here clears INVALID.  The only bits that could remain are
WATCHPOINT and MMIO.  (NOTDIRTY can only be set for entry->addr_write, not for
addr_read/addr_code.)

And for that matter, once we've processed the watchpoint we remove
TLB_WATCHPOINT as well, so that we only enter io_readx() if MMIO is set.

> This is necessary because io_readx() will misbehave if you try to
> call it on RAM (notably if what we have is notdirty-mem then we
> need to do the read-from-actual-host-ram because the IO ops backing
> notdirty-mem are intended for writes only).
> 
> With this patch applied, we seem to have lost the handling for
> if the tlb_fill in a TLB_RECHECK case gives us back some real RAM.
> (Similarly for store_helper().)

Again, I disagree.  I think there must be some other explanation.

> More generally, I don't really understand why this merging
> is correct -- "TLB needs a recheck" is not the same thing as
> "TLB is invalid" and I don't think we can merge the two
> bits.

"TLB is invalid" means that we cannot use an existing tlb entry, therefore we
must go back to tlb_fill.  "TLB needs a recheck" means we must go back to
tlb_fill -- exactly the same.

The only odd bit about "TLB is invalid" is that it applies to the *next*
lookup.  If we have just returned from tlb_fill, then the tlb entry *must* be
valid.  If it were not valid, then tlb_fill would not return at all.

So, on the paths that use tlb_fill, we clear TLB_INVALID_MASK, indicating that
the lookup has just been done.

Which, honestly, ought to have happened with TLB_RECHECK, because it was not
uncommon to perform two tlb_fill calls in a row -- the first because of a true
tlb miss and the second because the entry supplied by the fill has TLB_RECHECK
set.


r~


