qemu-s390x
Re: [PATCH 1/2] s390x/pci: add support for guests that request direct mapping


From: David Hildenbrand
Subject: Re: [PATCH 1/2] s390x/pci: add support for guests that request direct mapping
Date: Mon, 9 Dec 2024 23:09:01 +0100
User-agent: Mozilla Thunderbird

On 09.12.24 22:45, Matthew Rosato wrote:
> On 12/9/24 4:01 PM, David Hildenbrand wrote:
>> On 09.12.24 20:29, Matthew Rosato wrote:
>>
>> Hi,
>>
>> Trying to wrap my head around that ... you mention that "pin the entirety
>> of guest memory".
>>
>> Do you mean that we will actually end up longterm pinning all guest RAM in
>> the kernel, similar to what vfio ends up doing?

> Yes.  Actually, the use case here is specifically PCI passthrough via
> vfio-pci on s390.  Unlike other platforms, the default s390 approach only
> pins on-demand and doesn't longterm pin all guest RAM, which is nice from a
> memory footprint perspective but pays a price via all those guest-2 RPCIT
> instructions.  The goal here is now to provide the optional alternative to
> longterm pin like other platforms.

Okay, thanks for confirming. One more question: who will trigger this longterm-pinning? Is it vfio?

(the code flow from your code to the pinning code would be nice)



>> In that case, it would be incompatible with virtio-balloon (and without
>> modifications with upcoming virtio-mem). Is there already a mechanism in
>> place to handle that -- a call to ram_block_discard_disable() -- or even a
>> way to support coordinated discarding of RAM (e.g., virtio-mem + vfio)?

> Good point, should be adding a call to ram_block_discard_disable(true) on
> set register + a corresponding (false) call during deregister...  Will add
> for v2.

> As for supporting coordinated discard, I was hoping to subsequently look at
> virtio-mem for this.

As long as discarding is blocked for now, we're good. To support it, the RAMDiscardManager would have to be wired up, similar to vfio.

I think the current way of handling it via

+    IOMMUTLBEvent event = {
+        .type = IOMMU_NOTIFIER_MAP,
+        .entry = {
+            .target_as = &address_space_memory,
+            .translated_addr = 0,
+            .perm = IOMMU_RW,
+        },
+    };


is probably not ideal: it cannot cope with memory holes (which virtio-mem would create).

Likely, you'd instead want an address space notifier, and really only map the memory region sections you get notified about.

There, you can test for RAMDiscardManager and handle it like vfio does.

--
Cheers,

David / dhildenb



