Re: [PATCH v2 12/12] vfio: Remove 64-bit IOVA address space assumption


From: Eric Auger
Subject: Re: [PATCH v2 12/12] vfio: Remove 64-bit IOVA address space assumption
Date: Wed, 11 Oct 2023 19:32:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0

Hi Alex,

On 9/20/23 22:02, Alex Williamson wrote:
> On Wed, 13 Sep 2023 10:01:47 +0200
> Eric Auger <eric.auger@redhat.com> wrote:
>
>> Now that we retrieve the usable IOVA ranges from the host,
>> we know the physical IOMMU aperture and we can remove
>> the assumption of a 64-bit IOVA space when calling
>> vfio_host_win_add().
>>
>> This works fine in general but in the case of an IOMMU memory
>> region this becomes more tricky. For instance the virtio-iommu
>> MR has a 64-bit aperture by default. If the physical IOMMU has a
>> smaller aperture (typically the case for VT-d), this means we
>> would need to resize the IOMMU MR when the latter is linked
>> to a container. However this happens in vfio_listener_region_add()
>> when calling the IOMMU MR set_iova_ranges() callback, and this
>> would mean a recursive call to vfio_listener_region_add(). This
>> looks like a wrong usage of the memory API, causing duplicate
>> IOMMU MR notifier registration for instance.
>>
>> Until we find a better solution, make sure vfio_find_hostwin()
>> is not called anymore for IOMMU regions.
> Thanks for your encouragement to double check this, it does seem like
> there are some gaps in the host window support.  First I guess I don't
> understand why the last chunk here assumes a contiguous range.
> Shouldn't we call vfio_host_win_add() for each IOVA range?
I acknowledge I am not very familiar with the VFIOHostDMAWindow
semantics. If it matches the available IOVA windows I am collecting in
this series - which sounds sensible - let's create as many of them as
there are valid IOVA ranges.
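Something along these lines, assuming the iova_ranges[]/nr_iovas fields
from this series and a .start member next to the .end used in the last
hunk (the exact layout is an assumption):

    /* register one host DMA window per usable IOVA range instead of a
     * single [0, last.end] window */
    for (int i = 0; i < container->nr_iovas; i++) {
        vfio_host_win_add(container,
                          container->iova_ranges[i].start,
                          container->iova_ranges[i].end,
                          container->pgsizes);
    }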
>
> But then we have a problem that we don't necessarily get positive
> feedback from memory_region_iommu_set_iova_ranges().  Did the vIOMMU
If memory_region_iommu_set_iova_ranges() fails,
vfio_listener_region_add() does enter the error handling path and we
will never reach vfio_connect_container().
> accept the ranges or not?  Only one vIOMMU implements the callback.
> Should we only call memory_region_iommu_set_iova_ranges() if the range
> doesn't align to a host window and should the wrapper return -ENOTSUP
> if there is no vIOMMU support to poke holes in the range?  Thanks,
On one side, there is the capability to retrieve the usable IOVA ranges
from the host.
On the other side, there is the capability for the vIOMMU to use this
info and also propagate it to the driver.

Currently, if the vIOMMU is not able to use that info,
memory_region_iommu_set_iova_ranges() returns 0, meaning we don't
regress the current code.
Then shall we use the info retrieved from the host to build the
VFIOHostDMAWindows and check the IOVAs even in that case? I guess yes,
because mapping IOVAs within reserved regions is wrong anyway. This
would fail later on in the dma_map().
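To make the "returns 0" point concrete, this is roughly the dispatch
shape I have in mind - names and signature are approximations of this
series, not the exact v2 code, and the -ENOTSUP return is only your
proposal:

int memory_region_iommu_set_iova_ranges(IOMMUMemoryRegion *iommu_mr,
                                        GList *iova_ranges, Error **errp)
{
    IOMMUMemoryRegionClass *imrc = memory_region_get_iommu_class_nocheck(iommu_mr);

    if (imrc->iommu_set_iova_ranges) {
        /* only virtio-iommu implements this callback today */
        return imrc->iommu_set_iova_ranges(iommu_mr, iova_ranges, errp);
    }
    /* current behaviour: silently accept when there is no callback */
    return 0;
    /* alternative you suggest: return -ENOTSUP so the caller knows the
     * ranges were not taken into account */
}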

Besides, won't the host windows be built on top of the available IOVA
regions instead?
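If so, the existing lookup already gives us the check for free - roughly
(paraphrasing vfio_find_hostwin() in hw/vfio/common.c):

/* an IOVA range is only accepted if it fits entirely within one
 * registered host window, so an IOVA falling in a reserved gap is
 * rejected before we ever attempt vfio_dma_map() */
QLIST_FOREACH(hostwin, &container->hostwin_list, hostwin_next) {
    if (hostwin->min_iova <= iova && end <= hostwin->max_iova) {
        return hostwin;
    }
}
return NULL;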

Thanks

Eric
>
> Alex
>
>  
>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>>
>> ---
>>
>> I have not found any working solution to the IOMMU MR resizing.
>> So I can remove this patch or remove the check for IOMMU MR. Maybe
>> this is an issue which can be handled separately?
>> ---
>>  hw/vfio/common.c | 25 ++++++++++++-------------
>>  1 file changed, 12 insertions(+), 13 deletions(-)
>>
>> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
>> index 26da38de05..40cac1ca91 100644
>> --- a/hw/vfio/common.c
>> +++ b/hw/vfio/common.c
>> @@ -1112,13 +1112,6 @@ static void vfio_listener_region_add(MemoryListener *listener,
>>  #endif
>>      }
>>  
>> -    hostwin = vfio_find_hostwin(container, iova, end);
>> -    if (!hostwin) {
>> -        error_setg(&err, "Container %p can't map guest IOVA region"
>> -                   " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx, container, iova, end);
>> -        goto fail;
>> -    }
>> -
>>      memory_region_ref(section->mr);
>>  
>>      if (memory_region_is_iommu(section->mr)) {
>> @@ -1177,6 +1170,14 @@ static void vfio_listener_region_add(MemoryListener *listener,
>>          return;
>>      }
>>  
>> +    hostwin = vfio_find_hostwin(container, iova, end);
>> +    if (!hostwin) {
>> +        error_setg(&err, "Container %p can't map guest IOVA region"
>> +                   " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx, container, iova, end);
>> +        goto fail;
>> +    }
>> +
>> +
>>      /* Here we assume that memory_region_is_ram(section->mr)==true */
>>  
>>      /*
>> @@ -2594,12 +2595,10 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>>          vfio_get_iommu_info_migration(container, info);
>>          g_free(info);
>>  
>> -        /*
>> -         * FIXME: We should parse VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
>> -         * information to get the actual window extent rather than assume
>> -         * a 64-bit IOVA address space.
>> -         */
>> -        vfio_host_win_add(container, 0, (hwaddr)-1, container->pgsizes);
>> +        g_assert(container->nr_iovas);
>> +        vfio_host_win_add(container, 0,
>> +                          container->iova_ranges[container->nr_iovas - 1].end,
>> +                          container->pgsizes);
>>  
>>          break;
>>      }



