
Re: [PATCH v16 00/10] VIRTIO-IOMMU device


From: Auger Eric
Subject: Re: [PATCH v16 00/10] VIRTIO-IOMMU device
Date: Tue, 3 Mar 2020 10:40:59 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

Hi Zhangfei,
On 3/3/20 4:23 AM, Zhangfei Gao wrote:
> Hi Eric
> 
> On Thu, Feb 27, 2020 at 9:50 PM Auger Eric <address@hidden> wrote:
>>
>> Hi Daniel,
>>
>> On 2/27/20 12:17 PM, Daniel P. Berrangé wrote:
>>> On Fri, Feb 14, 2020 at 02:27:35PM +0100, Eric Auger wrote:
>>>> This series implements the QEMU virtio-iommu device.
>>>>
>>>> This matches the v0.12 spec (voted) and the corresponding
>>>> virtio-iommu driver upstreamed in 5.3. All kernel dependencies
>>>> are resolved for DT integration. The virtio-iommu can be
>>>> instantiated in ARM virt using:
>>>>
>>>> "-device virtio-iommu-pci".
>>>
>>> Is there any more documentation besides this ?
>>
>> Not yet in QEMU.
>>>
>>> I'm wondering about the intended usage of this, and its relation
>>> or pros/cons vs other iommu devices
>>
>> Maybe if you want to catch up on the topic, looking at the very first
>> kernel RFC may be a good starting point. Motivation, pros & cons were
>> discussed in that thread (hey, April 2017!)
>> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021217.html
>>
>> On ARM we have SMMUv3 emulation, but VFIO integration is not
>> possible because the SMMU does not have any "caching mode" and my
>> nested paging kernel series is blocked. So the only remaining
>> solution to integrate with VFIO is virtio-iommu.
>>
>> In general the pros that were put forward are: virtio-iommu is
>> architecture agnostic, it removes the burden of accurately modeling
>> complex device state, and the driver can support
>> virtualization-specific optimizations without being constrained by
>> production driver maintenance. The cons are performance and memory
>> footprint if we do not consider any optimization.
>>
>> You can have a look at
>>
>> http://events17.linuxfoundation.org/sites/events/files/slides/viommu_arm.pdf
>>
> Thanks for the patches.
> 
> Could I ask one question?
> To support vSVA and PASID in the guest, which direction do you
> recommend: virtio-iommu or vSMMU (your nested paging)?
> 
> Do we still have any obstacles?
You can ask the question, but I am not sure I can answer ;-)

1) The SMMUv3 2-stage (nested paging) implementation is blocked by Will
at the kernel level.

Despite this situation I may respin, as Marvell said they were
interested in this effort. If you are also interested (I know you
tested it several times and I am grateful to you for that), please
reply to:
[PATCH v9 00/14] SMMUv3 Nested Stage Setup (IOMMU part)
(https://patchwork.kernel.org/cover/11039871/) and say you are
interested in that work so that maintainers are aware there are
potential users.

At the moment I have not added support for multiple CDs (Context
Descriptors, needed for PASID) because it introduced other
dependencies.

2) virtio-iommu

So only virtio-iommu DT boot on machvirt is currently supported (see
the command sketch below). For non-DT, Jean respun his kernel series
"[PATCH v2 0/3] virtio-iommu on x86 and non-devicetree platforms", as
you may know. However, non-DT integration is still controversial.
Michael is pushing for putting the binding info into the PCI config
space. Joerg yesterday challenged this solution and said he would
prefer ACPI integration. ACPI support depends on an ACPI spec update &
vote anyway.
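
For reference, a minimal DT-boot invocation on machvirt looks something
like this (the kernel image, disk path and the virtio-blk wiring are
placeholders of mine, not a tested command line):

  qemu-system-aarch64 -M virt -cpu host -enable-kvm -m 4G -nographic \
      -kernel Image -append "console=ttyAMA0 root=/dev/vda" \
      -device virtio-iommu-pci \
      -device virtio-blk-pci,drive=hd0,iommu_platform=on,disable-legacy=on \
      -drive if=none,file=guest.img,id=hd0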

To support PASID at the virtio-iommu level you also need virtio-iommu
API extensions to be proposed and written, plus kernel work. So that's
a long road. I will let Jean-Philippe comment on that.
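
For context, today's virtio-iommu interface only covers attach/detach,
map/unmap and probe. For instance, the MAP request in the upstream
driver UAPI (include/uapi/linux/virtio_iommu.h) looks roughly like this
(from memory; check the header for the authoritative layout):

  struct virtio_iommu_req_map {
          struct virtio_iommu_req_head    head;
          __le32                          domain;
          __le64                          virt_start;
          __le64                          virt_end;
          __le64                          phys_start;
          __le32                          flags;  /* READ/WRITE/MMIO */
          struct virtio_iommu_req_tail    tail;
  };

PASID/vSVA support would need new request types and per-endpoint PASID
table handling on top of this.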

I would just say that Intel is working on a nested paging solution with
their emulated intel-iommu. We can help them get that upstream and
partly benefit from this work.
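
For comparison, the emulated intel-iommu already integrates with VFIO
through its caching mode; on q35 it is instantiated along these lines
(illustrative only, not part of this series):

  qemu-system-x86_64 -M q35,kernel-irqchip=split \
      -device intel-iommu,intremap=on,caching-mode=on ...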

> Would you mind giving some breakdown?
> Jean mentioned PASID is still not supported in QEMU.
Do you mean support for multiple CDs in the emulated SMMU? That's
something I could implement quite easily. What is trickier is how to
test it.

Thanks

Eric
> 
> Thanks
> 



