From: Eugenio Perez Martin
Subject: Re: [RFC v2 7/7] vhost: Route host->guest notification through shadow virtqueue
Date: Tue, 2 Mar 2021 17:30:08 +0100

On Mon, Mar 1, 2021 at 7:24 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2021/2/9 11:37 PM, Eugenio Pérez wrote:
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> >   hw/virtio/vhost-shadow-virtqueue.h |  2 ++
> >   hw/virtio/vhost-shadow-virtqueue.c | 49 ++++++++++++++++++++++++++++++
> >   hw/virtio/vhost.c                  |  5 ++-
> >   3 files changed, 55 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > index c45035c4b7..210133434c 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > @@ -17,6 +17,8 @@
> >
> >   typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
> >
> > +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq);
> > +
> >   bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >                                  unsigned idx,
> >                                  VhostShadowVirtqueue *svq);
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index 01f282d434..61d98ae652 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -24,6 +24,8 @@ typedef struct VhostShadowVirtqueue {
> >
> >       /* Borrowed virtqueue's guest to host notifier. */
> >       EventNotifier host_notifier;
> > +    /* Host to guest notifier */
> > +    EventNotifier *guest_notifier;
> >
> >       /* Virtio queue shadowing */
> >       VirtQueue *vq;
> > @@ -40,6 +42,26 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> >       }
> >   }
> >
> > +/* Forward vhost notifications */
> > +static void vhost_handle_call(EventNotifier *n)
> > +{
> > +    VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> > +                                             call_notifier);
> > +
> > +    if (event_notifier_test_and_clear(n)) {
> > +        event_notifier_set(svq->guest_notifier);
> > +    }
> > +}
>
>
> So I wonder how this is synchronized with virtqueue mask/unmask. Or is
> the masking totally transparent to the shadow virtqueue?
>

Hi Jason.

Thanks for pointing it out. Actually, the design proposed in this
series is wrong.

In this series, when the guest masks a virtqueue, qemu still sends the
masked notifier to the vhost device as the call fd. While that ioctl
acts as a synchronization point, it also disables shadow virtqueue
polling and used-buffer forwarding entirely. However, the guest could be
actively polling the virtqueue's used ring, waiting for used buffers
before unmasking, and as far as I can tell that is valid behavior.
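
To illustrate, this is roughly what the mask path in hw/virtio/vhost.c
does with this series applied (just an annotation of the existing code,
not anything new):

    /* vhost_virtqueue_mask(), as it behaves with this series applied */
    if (mask) {
        /*
         * The device signals the masked notifier directly, so
         * svq->call_notifier never fires, vhost_handle_call() never
         * runs, and no used buffers are forwarded while masked.
         */
        file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
    }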

On unmasking, either the guest's notifier or the shadow virtqueue's call
notifier is installed as the call fd, and the masked status is checked
as usual. Again, this is transparent to the dataplane (vhost_handle_call
in this series).

We could have a race in vhost_shadow_vq_stop_rcu, since the mask/unmask
status is set by the vcpu thread while these functions are called from
the QMP thread here [1]. As Stefan pointed out, vhost_shadow_vq_start_rcu
is also wrong because it unmasks unconditionally (it always installs the
shadow call notifier as the call fd).

In the next revision, the masked status will also be tracked by the
shadow virtqueue, but it will only affect qemu->guest used-buffer
notifications.
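
Roughly, I am thinking of something like this for the call handler
(just a sketch; svq->masked and svq->pending_notification are
placeholder names, and the unmask path would need to test-and-clear the
pending flag):

    static void vhost_handle_call(EventNotifier *n)
    {
        VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
                                                 call_notifier);

        if (event_notifier_test_and_clear(n)) {
            if (svq->masked) {
                /* Hold the notification and raise it at unmask time */
                svq->pending_notification = true;
            } else {
                event_notifier_set(svq->guest_notifier);
            }
        }
    }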

> Thanks
>
>
> > +
> > +/*
> > + * Get the vhost call notifier of the shadow vq
> > + * @vq Shadow virtqueue
> > + */
> > +EventNotifier *vhost_shadow_vq_get_call_notifier(VhostShadowVirtqueue *vq)
> > +{
> > +    return &vq->call_notifier;
> > +}
> > +
> >   /*
> >    * Start shadow virtqueue operation.
> >    * @dev vhost device
> > @@ -57,6 +79,10 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >           .index = idx,
> >           .fd = event_notifier_get_fd(&svq->kick_notifier),
> >       };
> > +    struct vhost_vring_file call_file = {
> > +        .index = idx,
> > +        .fd = event_notifier_get_fd(&svq->call_notifier),
> > +    };
> >       int r;
> >
> >       /* Check that notifications are still going directly to vhost dev */
> > @@ -66,6 +92,7 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >                              event_notifier_get_fd(vq_host_notifier));
> >       event_notifier_set_handler(&svq->host_notifier, vhost_handle_guest_kick);
> >
> > +    svq->guest_notifier = virtio_queue_get_guest_notifier(svq->vq);
> >       r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> >       if (unlikely(r != 0)) {
> >           error_report("Couldn't set kick fd: %s", strerror(errno));
> > @@ -75,8 +102,19 @@ bool vhost_shadow_vq_start_rcu(struct vhost_dev *dev,
> >       /* Check for pending notifications from the guest */
> >       vhost_handle_guest_kick(&svq->host_notifier);
> >
> > +    r = dev->vhost_ops->vhost_set_vring_call(dev, &call_file);
> > +    if (r != 0) {
> > +        error_report("Couldn't set call fd: %s", strerror(errno));
> > +        goto err_set_vring_call;
> > +    }
> > +
> >       return true;
> >
> > +err_set_vring_call:
> > +    kick_file.fd = event_notifier_get_fd(vq_host_notifier);
> > +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &kick_file);
> > +    assert(r == 0);
> > +
> >   err_set_vring_kick:
> >       event_notifier_set_handler(&svq->host_notifier, NULL);
> >
> > @@ -108,6 +146,16 @@ void vhost_shadow_vq_stop_rcu(struct vhost_dev *dev,
> >       assert(r == 0);
> >
> >       event_notifier_set_handler(&svq->host_notifier, NULL);
> > +
> > +    if (!dev->vqs[idx].notifier_is_masked) {
> > +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> > +
> > +        /* Restore vhost call */
> > +        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
> > +
> > +        /* Check for pending calls */
> > +        vhost_handle_call(e);
> > +    }

[1] We could have a race condition if a vcpu masks or unmasks the
virtqueue just between reading the status and calling
vhost_virtqueue_mask: vhost_shadow_vq_stop_rcu would override whatever
the guest set. This will be fixed in the next revision.
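
To make the window explicit (annotating the code above, not proposing
the fix here):

    /* vhost_shadow_vq_stop_rcu(), running in the QMP thread */
    if (!dev->vqs[idx].notifier_is_masked) {
        /*
         * A vcpu can mask the virtqueue right here, after the check;
         * the unconditional unmask below then overrides the guest's
         * choice.
         */
        vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx, false);
    }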

> >   }
> >
> >   /*
> > @@ -136,6 +184,7 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
> >           goto err_init_call_notifier;
> >       }
> >
> > +    event_notifier_set_handler(&svq->call_notifier, vhost_handle_call);
> >       return g_steal_pointer(&svq);
> >
> >   err_init_call_notifier:
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index 9d4728e545..0dc95679e9 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -975,7 +975,6 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
> >           for (idx = 0; idx < dev->nvqs; ++idx) {
> >               bool ok = vhost_shadow_vq_start_rcu(dev, idx,
> >                                                   dev->shadow_vqs[idx]);
> > -
> >               if (!ok) {
> >                   int stop_idx = idx;
> >
> > @@ -1608,6 +1607,10 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
> >       if (mask) {
> >           assert(vdev->use_guest_notifier_mask);
> >           file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
> > +    } else if (hdev->sw_lm_enabled) {
> > +        VhostShadowVirtqueue *svq = hdev->shadow_vqs[n];
> > +        EventNotifier *e = vhost_shadow_vq_get_call_notifier(svq);
> > +        file.fd = event_notifier_get_fd(e);
> >       } else {
> >           file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
> >       }
>



