Re: [Qemu-devel] vhost, iova, and dirty page tracking


From: Jason Wang
Subject: Re: [Qemu-devel] vhost, iova, and dirty page tracking
Date: Wed, 25 Sep 2019 11:46:13 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0


On 2019/9/24 10:02 AM, Tian, Kevin wrote:
> From: Jason Wang [mailto:address@hidden]
> Sent: Friday, September 20, 2019 9:19 AM

>> On 2019/9/20 6:54 AM, Tian, Kevin wrote:
>>> From: Paolo Bonzini [mailto:address@hidden]
>>> Sent: Thursday, September 19, 2019 7:14 PM

>>>> On 19/09/19 09:16, Tian, Kevin wrote:
>>>>>>>> why GPA1 and GPA2 should be both dirty?
>>>>>>>
>>>>>>> even if they have the same HVA due to overlapping virtual address
>>>>>>> space in two processes, they still correspond to two physical pages.
>>>>>> don't get what's your meaning :)
>>>>>>
>>>>>> The point is not to leave any corner case that is hard to debug or
>>>>>> fix in the future.

>>>>>> Let's just start with a single process: the API allows userspace to
>>>>>> map an HVA to both GPA1 and GPA2. Since it knows GPA1 and GPA2 are
>>>>>> equivalent, it's ok to sync just through GPA1. That means if you
>>>>>> only log GPA2, it won't work.
>>>>> I noted KVM itself doesn't consider such a situation (one HVA mapped
>>>>> to multiple GPAs) when doing its dirty page tracking. If you look at
>>>>> kvm_vcpu_mark_page_dirty, it simply finds the unique memslot which
>>>>> contains the dirty gfn and then sets the dirty bit within that slot.
>>>>> It doesn't attempt to walk all memslots to find any other GPA which
>>>>> may be mapped to the same HVA.
>>>>>
>>>>> So there must be some disconnect here. Let's hear from Paolo first
>>>>> and understand the rationale behind such a situation.
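Kevin's reading of kvm_vcpu_mark_page_dirty can be illustrated with a small sketch (hypothetical structures and names, not KVM's real code): each memslot owns a dirty bitmap, and marking a dirty gfn touches only the unique slot containing it, so an alias of the same HVA in another slot stays clean.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of KVM-style per-memslot dirty tracking (hypothetical
 * names and layout; not KVM's actual data structures). */
struct memslot {
    uint64_t base_gfn;   /* first guest frame number in the slot */
    uint64_t npages;     /* pages covered, npages <= 64 here */
    uint64_t hva;        /* host virtual address backing the slot */
    uint8_t  dirty[8];   /* one dirty bit per page */
};

/* Find the unique slot containing gfn and set its dirty bit. Like
 * kvm_vcpu_mark_page_dirty, it stops at the one containing slot and
 * never searches for other slots aliasing the same HVA. */
static void mark_page_dirty(struct memslot *slots, int n, uint64_t gfn)
{
    for (int i = 0; i < n; i++) {
        struct memslot *s = &slots[i];
        if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages) {
            uint64_t rel = gfn - s->base_gfn;
            s->dirty[rel / 8] |= 1u << (rel % 8);
            return;
        }
    }
}

static int test_dirty(const struct memslot *s, uint64_t gfn)
{
    uint64_t rel = gfn - s->base_gfn;
    return !!(s->dirty[rel / 8] & (1u << (rel % 8)));
}
```

With two slots backed by the same HVA (GPA1 in one, GPA2 in the other), dirtying a page through GPA1 leaves the GPA2 alias clean, which is exactly the aliasing gap under discussion.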
>>>> In general, userspace cannot assume that it's okay to sync just
>>>> through GPA1. It must sync the host page if *either* GPA1 or GPA2 is
>>>> marked dirty.
>>> Agree. In this case the kernel only needs to track whether GPA1 or
>>> GPA2 is dirtied by guest operations.

>> Not necessarily guest operations.


>>> The reason why vhost has to set both GPA1 and GPA2 is due to its own
>>> design: it maintains IOVA->HVA and GPA->HVA mappings, so given an IOVA
>>> you have to do a reverse lookup in the GPA->HVA memTable, which gives
>>> multiple possible GPAs.
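The reverse lookup Kevin describes can be sketched as follows (a hypothetical region table for illustration, not vhost's actual memory-table code): given an HVA, every region whose HVA range contains it yields a candidate GPA, so an aliased HVA produces several.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical GPA->HVA region table, loosely modelled on a vhost-style
 * memory table (not the real struct vhost_memory). */
struct mem_region {
    uint64_t gpa;
    uint64_t hva;
    uint64_t size;
};

/* Reverse lookup: collect every GPA aliasing the given HVA.
 * Returns the number of matches written to out[]. */
static int hva_to_gpas(const struct mem_region *mem, int n,
                       uint64_t hva, uint64_t *out, int max)
{
    int found = 0;
    for (int i = 0; i < n && found < max; i++) {
        if (hva >= mem[i].hva && hva < mem[i].hva + mem[i].size) {
            out[found++] = mem[i].gpa + (hva - mem[i].hva);
        }
    }
    return found;
}
```

If userspace must sync on either alias anyway, a logger built on this lookup could stop at the first match instead of collecting all of them, which is the shortcut discussed below.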

>> So if userspace needs to track both GPA1 and GPA2, vhost can just stop
>> when it finds one HVA->GPA mapping there.


>>> But in concept, if vhost can maintain an IOVA->GPA mapping, then it is
>>> straightforward to set the right GPA every time an IOVA is tracked.

>> That means the translation is done twice in software, IOVA->GPA and
>> then GPA->HVA, for each packet.
>>
>> Thanks
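Jason's cost argument can be made concrete with a minimal sketch, assuming simple linear-scan tables (hypothetical, purely for illustration): once an IOVA->GPA mapping is maintained, the datapath pays for two chained lookups per packet instead of one direct IOVA->HVA lookup.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical translation tables for the two-step scheme: IOVA->GPA
 * (from the vIOMMU) followed by GPA->HVA (the memory map). */
struct range_map { uint64_t from, to, size; };

static int lookup(const struct range_map *m, int n, uint64_t addr,
                  uint64_t *out)
{
    for (int i = 0; i < n; i++) {
        if (addr >= m[i].from && addr < m[i].from + m[i].size) {
            *out = m[i].to + (addr - m[i].from);
            return 0;
        }
    }
    return -1;
}

/* Per packet: two software translations instead of one. The exact GPA
 * falls out as a side effect, which is what dirty logging wants. */
static int iova_to_hva(const struct range_map *iova_gpa, int n1,
                       const struct range_map *gpa_hva, int n2,
                       uint64_t iova, uint64_t *gpa, uint64_t *hva)
{
    if (lookup(iova_gpa, n1, iova, gpa))
        return -1;
    return lookup(gpa_hva, n2, *gpa, hva);
}
```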

> Yes, it's not necessary if we care only about the content of the dirty
> GPA, as in live migration. In that case, just setting the first GPA in
> the loop is sufficient, as you pointed out. However, there is one corner
> case which I'm not sure about. What about a usage (e.g. VM introspection)
> which cares only about the guest access pattern, i.e. which GPA is
> dirtied, instead of poking its content? Neither setting the first GPA nor
> setting all the aliasing GPAs can provide accurate info if no explicit
> IOVA->GPA mapping is maintained inside vhost. But I cannot tell whether
> maintaining such accuracy for aliasing GPAs is really necessary. +VM
> introspection guys in case they have some opinions.


Interesting. For vhost, the vIOMMU can actually pass IOVA->GPA, and vhost can keep that and just do the GPA->HVA translation in the map command. So it can have both IOVA->GPA and IOVA->HVA mappings.
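A minimal sketch of this scheme, assuming a hypothetical device-IOTLB entry layout (not vhost's real structures): the vIOMMU map event supplies IOVA->GPA, vhost resolves GPA->HVA once at map time, and the entry caches both, so the datapath needs one lookup and dirty logging gets the exact GPA with no reverse search.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical IOTLB entry caching both translations at map time. */
struct iotlb_entry {
    uint64_t iova, size;
    uint64_t gpa;   /* exact alias, kept for dirty logging */
    uint64_t hva;   /* kept for the datapath */
};

struct mem_region { uint64_t gpa, hva, size; };

/* Fill an entry on a map event: one GPA->HVA walk per map, not per
 * packet. Returns -1 if the GPA range is not in the memory table. */
static int iotlb_map(struct iotlb_entry *e, uint64_t iova,
                     uint64_t gpa, uint64_t size,
                     const struct mem_region *mem, int n)
{
    for (int i = 0; i < n; i++) {
        if (gpa >= mem[i].gpa &&
            gpa + size <= mem[i].gpa + mem[i].size) {
            e->iova = iova;
            e->size = size;
            e->gpa  = gpa;
            e->hva  = mem[i].hva + (gpa - mem[i].gpa);
            return 0;
        }
    }
    return -1;
}
```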

Thanks



> Thanks
> Kevin



