Re: [PATCH v1] vhost-vdpa: Set discarding of RAM broken when initializing the backend


From: Jason Wang
Subject: Re: [PATCH v1] vhost-vdpa: Set discarding of RAM broken when initializing the backend
Date: Thu, 4 Mar 2021 17:32:09 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.16; rv:78.0) Gecko/20100101 Thunderbird/78.8.0


On 2021/3/3 6:26 PM, David Hildenbrand wrote:
On 03.03.21 03:53, Jason Wang wrote:

On 2021/3/3 12:21 AM, David Hildenbrand wrote:
Similar to VFIO, vDPA will go ahead and map+pin all guest memory. Memory
that used to be discarded will get re-populated, and if we
discard+re-access memory after mapping+pinning, the pages mapped into the
vDPA IOMMU will go out of sync with the actual pages mapped into the user
space page tables.

Set discarding of RAM broken such that:
- virtio-mem and vhost-vdpa are mutually exclusive
- virtio-balloon is inhibited and no memory discards will get issued

In the future, we might be able to support coordinated discarding of RAM
as used by virtio-mem and as planned for VFIO.

Cc: Jason Wang <jasowang@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Cindy Lu <lulu@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>


Acked-by: Jason Wang <jasowang@redhat.com>


---

Note: I was not actually able to reproduce/test this, as I failed to get
vdpa_sim/vdpa_sim_net running on upstream Linux (whichever vdpa, vhost_vdpa,
vdpa_sim, vdpa_sim_net modules I probe, and in whichever order, no vdpa
devices appear under /sys/bus/vdpa/devices/ or /dev/).


Device creation was switched to the vdpa tool that is integrated into
iproute2 [1].

[1]
https://git.kernel.org/pub/scm/network/iproute2/iproute2-next.git/commit/?id=143610383da51e1f868c6d5a2a5e2fb552293d18
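
For reference, a minimal sketch of the tool-based flow (module and device
names are the usual simulator defaults, not something specific to this
thread):

    # load the networking simulator; pulls in vdpa and vdpa_sim
    modprobe vdpa_sim_net
    # load the vhost-vdpa bus driver so a /dev/vhost-vdpa-* node can appear
    modprobe vhost_vdpa

    # list the management devices the kernel exposes
    vdpa mgmtdev show

    # create a device instance on top of the simulator
    vdpa dev add name vdpa0 mgmtdev vdpasim_net

    # verify: the instance should now be visible
    vdpa dev show vdpa0
    ls /sys/bus/vdpa/devices/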

It would be great to document that somewhere, if that hasn't been done already. I only found older RH documentation that was not aware of that. I'll give it a try - thanks!


Will think about this. Which RH doc do you refer to here? Is it the Red Hat blog?

---
   hw/virtio/vhost-vdpa.c | 13 +++++++++++++
   1 file changed, 13 insertions(+)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 01d2101d09..86058d4041 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -278,6 +278,17 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque)
       uint64_t features;
       assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
       trace_vhost_vdpa_init(dev, opaque);
+    int ret;
+
+    /*
+     * Similar to VFIO, we end up pinning all guest memory and have to
+     * disable discarding of RAM.
+     */
+    ret = ram_block_discard_disable(true);
+    if (ret) {
+        error_report("Cannot set discarding of RAM broken");
+        return ret;
+    }
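
FWIW, the hunk above accounts for 11 of the 13 insertions; the balancing
re-enable on the teardown path is not quoted here, but since
ram_block_discard_disable() is refcounted, every disable must be paired
with an enable. A sketch only, with the elided teardown steps omitted:

    static int vhost_vdpa_cleanup(struct vhost_dev *dev)
    {
        /* ... existing teardown of the device fd etc. ... */
        dev->opaque = NULL;
        /* balance the disable from init, or discards stay blocked */
        ram_block_discard_disable(false);
        return 0;
    }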


vDPA will support a non-pinning (shared VM) backend soon [2]. So I guess we
need a flag to be advertised to userspace; then we can conditionally enable
discards here.
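
Roughly like this (a sketch only; the helper name is invented for
illustration, and the real flag would first have to be defined in the uapi):

    /* hypothetical: only inhibit discards for backends that pin pages */
    if (!vhost_vdpa_backend_can_share(dev)) {
        ret = ram_block_discard_disable(true);
        if (ret) {
            error_report("Cannot set discarding of RAM broken");
            return ret;
        }
    }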

I thought that was already the default (because I stumbled over the code enforcing a guest IOMMU), but was surprised when I had a look at the implementation.

Having a flag sounds good.

BTW: I assume IOMMU support is not fully working yet, right? I don't see any special-casing for IOMMU regions, including registering the listener and updating the mapping.


It's not yet implemented. Yes, it will be something like what VFIO does right now, e.g. using IOMMU notifiers.

Thanks
