
Re: [PATCH RESEND] vhost: fix possible wrap in SVQ descriptor ring


From: Lei Yang
Subject: Re: [PATCH RESEND] vhost: fix possible wrap in SVQ descriptor ring
Date: Wed, 10 May 2023 09:32:54 +0800

QE applied this patch and ran sanity testing on vhost-net; no
regression problems were found.

Tested-by: Lei Yang <leiyang@redhat.com>



On Tue, May 9, 2023 at 1:28 AM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> On Sat, May 6, 2023 at 5:01 PM Hawkins Jiawei <yin31149@gmail.com> wrote:
> >
> > QEMU invokes vhost_svq_add() when adding a guest's element into SVQ.
> > In vhost_svq_add(), it uses vhost_svq_available_slots() to check
> > whether QEMU can add the element into the SVQ. If there is
> > enough space, then QEMU combines some out descriptors and
> > some in descriptors into one descriptor chain, and adds it into
> > svq->vring.desc by vhost_svq_vring_write_descs().
> >
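As a side note, here is a minimal sketch of that chaining step, assuming a bare
split-vring descriptor struct and linear allocation (struct split_desc and
write_chain are made-up names for this sketch; the real
vhost_svq_vring_write_descs() allocates from the SVQ free list and translates
guest addresses to IOVA first):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

#define DESC_F_NEXT  1   /* chain continues; value per the virtio spec */
#define DESC_F_WRITE 2   /* device-writable ("in") buffer */

struct split_desc { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };

/* Link out_num read-only and in_num write-only buffers into one chain,
 * consuming out_num + in_num descriptors starting at `head`. */
static void write_chain(struct split_desc *ring, uint16_t head,
                        const struct iovec *out_sg, size_t out_num,
                        const struct iovec *in_sg, size_t in_num)
{
    size_t total = out_num + in_num;
    for (size_t n = 0; n < total; n++) {
        bool in = n >= out_num;
        const struct iovec *sg = in ? &in_sg[n - out_num] : &out_sg[n];
        uint16_t i = head + n;
        ring[i].addr  = (uintptr_t)sg->iov_base;   /* real code uses an IOVA */
        ring[i].len   = sg->iov_len;
        ring[i].flags = (in ? DESC_F_WRITE : 0) |
                        (n + 1 < total ? DESC_F_NEXT : 0);
        ring[i].next  = i + 1;
    }
}

The point to note is that a single guest element consumes out_num + in_num
descriptors, not one.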
> > Yet the problem is that `svq->shadow_avail_idx - svq->shadow_used_idx`
> > in vhost_svq_available_slots() returns the number of occupied elements,
> > that is, the number of descriptor chains, instead of the number of
> > occupied descriptors, which may cause a wrap in the SVQ descriptor ring.
> >
> > Here is an example. In vhost_handle_guest_kick(), QEMU forwards
> > as many available buffers as possible to the device by virtqueue_pop()
> > and vhost_svq_add_element(). virtqueue_pop() returns a guest's element,
> > and vhost_svq_add_element(), a wrapper around vhost_svq_add(), adds
> > this element into the SVQ. If QEMU invokes virtqueue_pop() and
> > vhost_svq_add_element() `svq->vring.num` times, vhost_svq_available_slots()
> > thinks QEMU just ran out of slots and everything should work fine.
> > But in fact, virtqueue_pop() returns `svq->vring.num` elements, i.e.
> > descriptor chains, which can occupy more than `svq->vring.num` descriptors
> > due to guest memory fragmentation, and this causes a wrap in the SVQ
> > descriptor ring.
> >
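To make the arithmetic concrete, here is a small standalone model of that
scenario. It is not QEMU code; the variable names only mirror the SVQ fields,
and every element is assumed to be fragmented into a two-descriptor chain:

#include <stdint.h>
#include <stdio.h>

#define VRING_NUM 256   /* plays the role of svq->vring.num */

int main(void)
{
    uint16_t shadow_avail_idx = 0, shadow_used_idx = 0;
    unsigned descs_written = 0;

    for (int i = 0; i < VRING_NUM; i++) {
        /* The old check: counts chains, not descriptors. */
        uint16_t slots = VRING_NUM - (shadow_avail_idx - shadow_used_idx);
        if (slots == 0) {
            break;              /* never reached before the overrun below */
        }
        descs_written += 2;     /* the chain really occupies two descriptors */
        shadow_avail_idx += 1;  /* but the index only advances by one */
    }

    /* Prints 512 vs. 256: the ring was already full after 128 elements,
     * so the remaining chains wrapped and overwrote live descriptors. */
    printf("descriptors written: %u, ring size: %u\n",
           descs_written, (unsigned)VRING_NUM);
    return 0;
}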
>
> The bug is valid even before marking the descriptors used. If the
> guest memory is fragmented, SVQ must add descriptor chains, so it may
> try to add more descriptors than the ring has room for.
>
> > Therefore, this patch adds a `num_free` field to the VhostShadowVirtqueue
> > structure and updates it in vhost_svq_add() and vhost_svq_get_buf()
> > to record the number of free descriptors. Then we can avoid the wrap
> > in the SVQ descriptor ring by refactoring vhost_svq_available_slots().
> >
> > Fixes: 100890f7ca ("vhost: Shadow virtqueue buffers forwarding")
> > Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
> > ---
> >  hw/virtio/vhost-shadow-virtqueue.c | 9 ++++++++-
> >  hw/virtio/vhost-shadow-virtqueue.h | 3 +++
> >  2 files changed, 11 insertions(+), 1 deletion(-)
> >
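In other words, the accounting the diff below introduces looks roughly like the
following simplified model (not the actual QEMU functions; SvqModel and the
helper names are made up, only the fields mirror the patch):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t vring_num;  /* svq->vring.num */
    uint16_t num_free;   /* initialized to vring_num in vhost_svq_start() */
} SvqModel;

static uint16_t available_slots(const SvqModel *svq)
{
    return svq->num_free;            /* counts descriptors, not chains */
}

static bool svq_add(SvqModel *svq, uint16_t ndescs)
{
    if (ndescs > available_slots(svq)) {
        return false;                /* would wrap: refuse instead */
    }
    svq->num_free -= ndescs;         /* the chain occupies ndescs descriptors */
    return true;
}

static void svq_get_buf(SvqModel *svq, uint16_t ndescs)
{
    svq->num_free += ndescs;         /* descriptors returned by the device */
}

With this accounting, adding the 129th two-descriptor chain from the example
above fails cleanly instead of wrapping the ring.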
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > index 8361e70d1b..e1c6952b10 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > @@ -68,7 +68,7 @@ bool vhost_svq_valid_features(uint64_t features, Error **errp)
> >   */
> >  static uint16_t vhost_svq_available_slots(const VhostShadowVirtqueue *svq)
> >  {
> > -    return svq->vring.num - (svq->shadow_avail_idx - svq->shadow_used_idx);
> > +    return svq->num_free;
> >  }
> >
> >  /**
> > @@ -263,6 +263,9 @@ int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
> >          return -EINVAL;
> >      }
> >
> > +    /* Update the size of SVQ vring free descriptors */
> > +    svq->num_free -= ndescs;
> > +
> >      svq->desc_state[qemu_head].elem = elem;
> >      svq->desc_state[qemu_head].ndescs = ndescs;
> >      vhost_svq_kick(svq);
> > @@ -450,6 +453,9 @@ static VirtQueueElement *vhost_svq_get_buf(VhostShadowVirtqueue *svq,
> >      svq->desc_next[last_used_chain] = svq->free_head;
> >      svq->free_head = used_elem.id;
> >
> > +    /* Update the size of SVQ vring free descriptors */
>
> No need for this comment.
>
> Apart from that,
>
> Acked-by: Eugenio Pérez <eperezma@redhat.com>
>
> > +    svq->num_free += num;
> > +
> >      *len = used_elem.len;
> >      return g_steal_pointer(&svq->desc_state[used_elem.id].elem);
> >  }
> > @@ -659,6 +665,7 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> >      svq->iova_tree = iova_tree;
> >
> >      svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
> > +    svq->num_free = svq->vring.num;
> >      driver_size = vhost_svq_driver_area_size(svq);
> >      device_size = vhost_svq_device_area_size(svq);
> >      svq->vring.desc = qemu_memalign(qemu_real_host_page_size(), driver_size);
> > diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> > index 926a4897b1..6efe051a70 100644
> > --- a/hw/virtio/vhost-shadow-virtqueue.h
> > +++ b/hw/virtio/vhost-shadow-virtqueue.h
> > @@ -107,6 +107,9 @@ typedef struct VhostShadowVirtqueue {
> >
> >      /* Next head to consume from the device */
> >      uint16_t last_used_idx;
> > +
> > +    /* Size of SVQ vring free descriptors */
> > +    uint16_t num_free;
> >  } VhostShadowVirtqueue;
> >
> >  bool vhost_svq_valid_features(uint64_t features, Error **errp);
> > --
> > 2.25.1
> >
>
>



