qemu-devel
Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask


From: Jean-Philippe Brucker
Subject: Re: [PATCH v10 09/10] virtio-iommu: Set supported page size mask
Date: Fri, 23 Oct 2020 09:48:58 +0200

On Thu, Oct 22, 2020 at 04:56:16PM -0400, Peter Xu wrote:
> On Thu, Oct 22, 2020 at 06:39:37PM +0200, Jean-Philippe Brucker wrote:
> > So what I'd like to do for next version:
> > 
> > * Set qemu_real_host_page_mask as the default page mask, instead of the
> >   rather arbitrary TARGET_PAGE_MASK.
> 
> Oh, I thought TARGET_PAGE_MASK was intended - kernel commit 39b3b3c9cac1
> ("iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE", 2020-03-27)
> explicitly introduced a check so that the virtio-iommu kernel driver fails
> directly if this page size is bigger than PAGE_SIZE in the guest.  So it
> sounds reasonable to have the default value be PAGE_SIZE (if it's the same
> as TARGET_PAGE_SIZE in QEMU, which seems true?).
> 
> For example, I'm thinking whether qemu_real_host_page_mask could be bigger
> than PAGE_SIZE in the guest in some environments; if so, it seems
> virtio-iommu won't boot anymore without assigned devices, because that
> extra check above will always fail.

Right, I missed this problem again. Switching to qemu_real_host_page_mask
is probably not the best idea until we solve the host64k-guest4k problem.

> 
> >   Otherwise we cannot hotplug assigned
> >   devices on a 64kB host, since TARGET_PAGE_MASK is pretty much always
> >   4kB.
> > 
> > * Disallow changing the page size. It's simpler and works in
> >   practice if we default to qemu_real_host_page_mask.
> > 
> > * For non-hotplug devices, allow changing the rest of the mask. For
> >   hotplug devices, only warn about it.
> 
> Could I ask what's "the rest of the mask"?

The LSB in the mask defines the page size. The other bits define which
block sizes are supported, for example 2MB and 1GB blocks with a 4kB page
size. These are only an optimization; the upper bits of the mask could
also be all 1s. If the guest aligns its mappings to those block sizes,
then the host can use intermediate levels in the page tables, resulting in
fewer IOTLB entries.

> On the driver side, I see that
> viommu_domain_finalise() will pick the largest supported page size to use, if
> so then we seem to be quite restricted on what page size we can use.

In Linux, iommu_dma_alloc_remap() tries to allocate blocks based on the
page mask (copied by viommu_domain_finalise() into domain->pgsize_bitmap).

> I'm also a bit curious about what scenario we plan to support in this initial
> version, especially for ARM.  For x86, I think it's probably always 4k
> everywhere so it's fairly simple.  Know little on ARM side...

Arm CPUs and SMMUs support 4k, 16k and 64k page sizes. I don't think 16k
is used anywhere, but some distributions chose 64k (RHEL, I think?) and
others 4k, so we need to support both.

Unfortunately as noted above host64k-guest4k is not possible without
adding a negotiation mechanism to virtio-iommu, host VFIO and IOMMU
driver.

Thanks,
Jean


