From: Paolo Bonzini
Subject: Re: [PATCH 1/1] Skip flatview_simplify() for cpu vendor zhaoxin
Date: Tue, 20 Oct 2020 11:24:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.3.1

On 19/10/20 21:02, Alex Williamson wrote:
>> For KVM we were thinking of changing the whole
>> memory map with a single ioctl, but that's much easier because KVM
>> builds its page tables lazily. It would be possible for the IOMMU too
>> but it would require a relatively complicated comparison of the old and
>> new memory maps in the kernel.
>
> We can only build IOMMU page tables lazily if we get faults, which we
> generally don't.  We also cannot atomically update IOMMU page tables
> relative to a device,

Yeah, I didn't mean building IOMMU page tables lazily, but rather replacing
the whole VFIO memory map with a single ioctl.

I don't think that requires atomic updates of the IOMMU page table root,
but it would require atomic updates of IOMMU page table entries; VFIO
would compare the old and new memory maps and modify the page tables when
it sees a difference.  Is that possible?
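
A rough sketch of the kind of comparison I have in mind (purely
illustrative, not the real VFIO uAPI or kernel code; map_entry,
apply_map_diff, iommu_map and iommu_unmap are made-up names, and the two
maps are assumed to be IOVA-sorted and non-overlapping):

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct map_entry {
    uint64_t iova;      /* guest IOVA of the range */
    uint64_t size;      /* length in bytes */
    uint64_t host_addr; /* backing host virtual address */
};

/* Placeholder hooks standing in for the real IOMMU page-table updates. */
static void iommu_unmap(const struct map_entry *e)
{
    printf("unmap iova=0x%" PRIx64 " size=0x%" PRIx64 "\n", e->iova, e->size);
}

static void iommu_map(const struct map_entry *e)
{
    printf("map   iova=0x%" PRIx64 " size=0x%" PRIx64 " hva=0x%" PRIx64 "\n",
           e->iova, e->size, e->host_addr);
}

static int entries_equal(const struct map_entry *a, const struct map_entry *b)
{
    return a->iova == b->iova && a->size == b->size &&
           a->host_addr == b->host_addr;
}

/*
 * Walk the old and new maps in lockstep: ranges only in the old map are
 * unmapped, ranges only in the new map are mapped, identical ranges are
 * left alone, so translations stay valid for everything that didn't
 * actually change.
 */
static void apply_map_diff(const struct map_entry *old_map, size_t n_old,
                           const struct map_entry *new_map, size_t n_new)
{
    size_t i = 0, j = 0;

    while (i < n_old && j < n_new) {
        if (entries_equal(&old_map[i], &new_map[j])) {
            i++;                          /* unchanged: no IOMMU work */
            j++;
        } else if (old_map[i].iova < new_map[j].iova) {
            iommu_unmap(&old_map[i++]);   /* stale range, drop it */
        } else if (new_map[j].iova < old_map[i].iova) {
            iommu_map(&new_map[j++]);     /* brand-new range */
        } else {
            /* same IOVA, different size or backing: remap it */
            iommu_unmap(&old_map[i++]);
            iommu_map(&new_map[j++]);
        }
    }
    while (i < n_old) {
        iommu_unmap(&old_map[i++]);
    }
    while (j < n_new) {
        iommu_map(&new_map[j++]);
    }
}

int main(void)
{
    const struct map_entry old_map[] = {
        { 0x00000, 0x20000, 0x7f0000000000 },
        { 0x40000, 0x10000, 0x7f0000040000 },
    };
    const struct map_entry new_map[] = {
        { 0x00000, 0x20000, 0x7f0000000000 },  /* unchanged, untouched */
        { 0x40000, 0x20000, 0x7f0000040000 },  /* grown: unmap + map */
    };

    apply_map_diff(old_map, 2, new_map, 2);
    return 0;
}

The point is just that identical entries cost nothing, so only the ranges
that really changed ever see an unmap/map cycle.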

Paolo

> so "housekeeping" updates of mappings to (I
> assume) consolidate KVM memory slots don't work so well when the
> device is still running.  Stopping the device via something like the
> bus-master enable bit also sounds like a whole set of problems itself.
> I assume these simplified mappings also reduce our resolution for later
> unmaps, which isn't necessarily a win for an assigned device either if
> it exposes the race again each boot.
> 
> Maybe the question is why we don't see these errors more regularly.  Is
> there something unique about the memory layout of this platform versus
> others that causes larger memory regions to be coalesced together only
> to be later unmapped, providing more exposure to this issue?  Thanks,



