
Re: [RFC v2 1/1] memory: Delete assertion in memory_region_unregister_iommu_notifier


From: Jason Wang
Subject: Re: [RFC v2 1/1] memory: Delete assertion in memory_region_unregister_iommu_notifier
Date: Thu, 9 Jul 2020 13:58:33 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0


On 2020/7/8 10:16 PM, Peter Xu wrote:
On Wed, Jul 08, 2020 at 01:42:30PM +0800, Jason Wang wrote:
So it should be functionally equivalent to vtd_as_has_notifier().
For example: in vtd_iommu_replay() we'll skip the replay if vhost has
registered the iommu notifier because vtd_as_has_map_notifier() will return
false.
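
(For reference, a rough sketch of the helper being discussed, from my reading
of hw/i386/intel_iommu.c around this version; not a verbatim copy.  It only
tests the MAP bit of the flags a notifier registered with, so an UNMAP-only
notifier such as vhost's makes it return false:)

    static inline bool vtd_as_has_map_notifier(VTDAddressSpace *as)
    {
        /* True only if some notifier on this AS asked for MAP events */
        return as->notifier_flags & IOMMU_NOTIFIER_MAP;
    }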

Two questions:

- Do we care about the performance here? If not, vhost may just ignore the MAP
event?
I think we care, because vtd_page_walk() can be expensive.


Ok.



- If we care about the performance, it's better to implement the MAP event for
vhost; otherwise there could be a lot of IOTLB misses
I feel like these are two things.

So far what we are talking about is whether vt-d should have knowledge about
what kind of events one iommu notifier is interested in.  I still think we
should keep this as answered in question 1.
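
(For context, the "kind of events" is declared when the notifier is set up.
A slightly abridged sketch of the relevant bits, as I remember them from
include/exec/memory.h; the comments are mine:)

    typedef enum {
        IOMMU_NOTIFIER_NONE  = 0,
        IOMMU_NOTIFIER_UNMAP = 0x1,   /* invalidations / unmaps */
        IOMMU_NOTIFIER_MAP   = 0x2,   /* new mappings */
    } IOMMUNotifierFlag;

    /* Declared (as a static inline) in include/exec/memory.h */
    void iommu_notifier_init(IOMMUNotifier *n, IOMMUNotify fn,
                             IOMMUNotifierFlag flags,
                             hwaddr start, hwaddr end, int iommu_idx);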

The other question is whether we want to switch vhost from UNMAP to MAP/UNMAP
events even without vDMA, so that vhost can establish the mapping even before
IO starts.  IMHO it's doable, but only if the guest runs DPDK workloads.  When
the guest is using dynamic iommu page mappings, I feel like that can be even
slower, because in the worst case each IO will need two vmexits:

   - The first vmexit caused by an invalidation to MAP the page tables, so vhost
     will set up the page table before IO starts

   - IO/DMA triggers and completes

   - The second vmexit caused by another invalidation to UNMAP the page tables

So it seems to be worse than now, when vhost only uses UNMAP: at least then we
only have one vmexit (for the UNMAP).  We'll also have a vhost translate()
request from kernel to userspace, but IMHO that's cheaper than the vmexit.
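
(Concretely, today vhost asks for UNMAP only — roughly what
vhost_iommu_region_add() in hw/virtio/vhost.c does, abridged from memory, so
the exact arguments may differ.  Switching to MAP/UNMAP would mean passing
both flags here and handling the MAP events in the callback:)

    /* vhost registers an UNMAP-only notifier for the IOMMU region */
    iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                        IOMMU_NOTIFIER_UNMAP,
                        section->offset_within_region,
                        int128_get64(end), iommu_idx);
    /* ... followed by memory_region_register_iommu_notifier(section->mr,
     * &iommu->n, ...) */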


Right, but then I would still prefer to have another notifier.

vtd_page_walk() has nothing to do with the device IOTLB; the IOMMU has a
dedicated command for flushing the device IOTLB.  But the
vtd_as_has_map_notifier() check is used to skip devices that can do demand
paging via ATS or in some device-specific way.  If we have two different
notifiers, vhost will be on the device IOTLB notifier, so we don't need the
check at all?
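
(To make that concrete — a purely hypothetical sketch, the new flag name is
made up: a third notifier class for device-IOTLB invalidations that vhost
would register for, while MAP/UNMAP stay with vfio-style consumers.  The
device-IOTLB invalidation path in intel_iommu.c would then fire only this
flag, and any check for MAP notifiers would naturally skip vhost:)

    /* Hypothetical only -- the DEVIOTLB name does not exist today */
    typedef enum {
        IOMMU_NOTIFIER_NONE           = 0,
        IOMMU_NOTIFIER_UNMAP          = 0x1,
        IOMMU_NOTIFIER_MAP            = 0x2,
        IOMMU_NOTIFIER_DEVIOTLB_UNMAP = 0x4,  /* device-IOTLB (ATS) invalidations */
    } IOMMUNotifierFlag;

    /* vhost would then register with the new flag instead of UNMAP: */
    iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                        IOMMU_NOTIFIER_DEVIOTLB_UNMAP,
                        section->offset_within_region,
                        int128_get64(end), iommu_idx);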

Thanks






