
[Qemu-ppc] Re: [qemu-ppc] question about kvm e500 tlb search method


From: Wangkai (Kevin,C)
Subject: [Qemu-ppc] Re: [qemu-ppc] question about kvm e500 tlb search method
Date: Thu, 24 Jan 2013 14:36:28 +0000

Hi Alex,

In reply to your mail:

> Could you please point me to the respective part of the documentation?

Sorry, I should have been clearer: I am referring to Figure 12-4, the L2 MMU
lookup flow, in the PowerPC e500 Core Family Reference Manual (PDF page 12-8).

I checked it again, and it says:

"Additionally, Figure 12-4 shows that when the L2 MMU is checked for a TLB
entry, both TLB1 and TLB0 are checked in parallel."
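
(In software terms, my reading of that figure is roughly the following; this
is only an illustration with made-up names, not real hardware behaviour or
KVM code:)

/*
 * Illustration only: in hardware both arrays are probed at the same
 * time, and on a hit the TLB1 result is the one that is taken, while a
 * sequential software search has to pick one order or the other.
 */
static int l2mmu_pick_result(int tlb1_hit, int tlb1_entry,
                             int tlb0_hit, int tlb0_entry)
{
        if (tlb1_hit)           /* TLB1 hit preferred */
                return tlb1_entry;
        if (tlb0_hit)
                return tlb0_entry;
        return -1;              /* miss in both TLBs */
}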

> Let me try to understand what you're trying to fix. Are you seeing performance
> issues? What kernel are you running on? Are you using hugetlbfs? There are a
> lot of things that improve performance a lot more than this change would.

Yes, I am using a very old kernel, Linux 2.6.34, and the performance is not
good enough. That version has no hugetlbfs support, and its L2 MMU lookup code
just walks through all of the TLB entries. Later we will try to merge the new
fetcher into this version.


Below is Alex's original mail for reference:

Hi Kevin,

On 24.01.2013, at 14:15, Wangkai (Kevin,C) wrote:

> Dear,
>  
> I checked the e500 core reference manual: when the e500 core looks up the
> L2 MMU entries, TLB1 is preferred to TLB0.

Could you please point me to the respective part of the documentation?



> And for the e500 KVM L2 MMU lookup, I find that TLB0 is searched first, and
> it is done by software.
>  
> I suggest we search TLB1 first, because TLB0 has more entries than TLB1, and
> this can improve the guest kernel performance very much.

Actually, TLB1 has more entries to check per lookup, because we only search one
set at a time. For TLB1, one set means "all entries", which on e500mc would
mean 64. For TLB0, we only need to check 4 entries. So looking at TLB0 is a lot
faster.
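
(To make that concrete: below is a simplified sketch of the two software
searches, with made-up sketch_* names and page sizes ignored; it is not the
actual KVM code, it only shows why a TLB0 probe touches one 4-way set while
a TLB1 probe has to touch every entry.)

struct sketch_tlbe {
        int valid;
        unsigned int pid;
        int as;
        unsigned long epn;      /* effective page number */
};

static int sketch_match(const struct sketch_tlbe *e, unsigned long epn,
                        unsigned int pid, int as)
{
        return e->valid && e->pid == pid && e->as == as && e->epn == epn;
}

/* TLB0: the EA selects one set, so only that set's 4 ways are compared. */
static int sketch_tlb0_search(struct sketch_tlbe tlb0[][4], int nsets,
                              unsigned long epn, unsigned int pid, int as)
{
        int set = epn & (nsets - 1);    /* assumes power-of-two set count */
        int way;

        for (way = 0; way < 4; way++)
                if (sketch_match(&tlb0[set][way], epn, pid, as))
                        return way;
        return -1;
}

/* TLB1: fully associative, so all entries (64 on e500mc) are compared. */
static int sketch_tlb1_search(struct sketch_tlbe tlb1[], int nentries,
                              unsigned long epn, unsigned int pid, int as)
{
        int i;

        for (i = 0; i < nentries; i++)
                if (sketch_match(&tlb1[i], epn, pid, as))
                        return i;
        return -1;
}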

Also, most misses in "normal" real world workloads should happen in user space.

> Can I make this change to the kvm code? Is there some other effect?

Let me try to understand what you're trying to fix. Are you seeing performance 
issues? What kernel are you running on? Are you using hugetlbfs? There are a 
lot of things that improve performance a lot more than this change would.

yes


Also, let me CC address@hidden, since you're really asking a KVM question here 
:).


Alex

>  
> Thanks!
> Wangkai
>  
>  
> int kvmppc_e500_tlb_search(struct kvm_vcpu *vcpu,
>                             gva_t eaddr, unsigned int pid, int as)
> {
>        struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
>        int esel, tlbsel;
>  
>        for (tlbsel = 0; tlbsel < 2; tlbsel++) {  // first tlb0, and then tlb1
>               esel = kvmppc_e500_tlb_index(vcpu_e500, eaddr, tlbsel, pid, as);
>               if (esel >= 0)
>                      return index_of(tlbsel, esel);
>        }
>  
>        return -1;
> }
>  
>  
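
For reference, the change I have in mind would look roughly like the
following; this is only a sketch of the idea (the same loop with the order
reversed), not a tested patch:

int kvmppc_e500_tlb_search(struct kvm_vcpu *vcpu,
                            gva_t eaddr, unsigned int pid, int as)
{
       struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
       int esel, tlbsel;

       /* Walk TLB1 (tlbsel == 1) before TLB0 (tlbsel == 0). */
       for (tlbsel = 1; tlbsel >= 0; tlbsel--) {
              esel = kvmppc_e500_tlb_index(vcpu_e500, eaddr, tlbsel, pid, as);
              if (esel >= 0)
                     return index_of(tlbsel, esel);
       }

       return -1;
}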

