From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Mon, 18 Feb 2019 11:39:25 -0500

On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <address@hidden> wrote:
> >
> >
> > On 2019/2/13 10:51 PM, Yuri Benditovich wrote:
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > On startup/link-up in a multiqueue configuration, virtio-net
> > > tries to start all the queues, including those that the guest
> > > will not enable by VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > If the guest driver does not allocate queues that it will not
> > > use (for example, the Windows driver does not) and the number of
> > > actually used queues is less than the maximal number supported
> > > by the device,
> >
> >
> > Is this a requirement of e.g. NDIS? If not, could we simply allocate all
> > queues in this case? This is usually what a normal Linux driver does.
> >
> >
> > > this causes vhost_net_start to fail and actually disables vhost
> > > for all the queues, reducing performance.
> > > This commit fixes that: initially only the first queue is started;
> > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues requested
> > > by the guest are started.
> > >
> > > Signed-off-by: Yuri Benditovich <address@hidden>
> > > ---
> > >   hw/net/virtio-net.c | 7 +++++--
> > >   1 file changed, 5 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 3f319ef723..d3b1ac6d3a 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > >   {
> > >       VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > >       NetClientState *nc = qemu_get_queue(n->nic);
> > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > >
> > >       if (!get_vhost_net(nc->peer)) {
> > >           return;
> > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > >           return VIRTIO_NET_ERR;
> > >       }
> > >
> > > -    n->curr_queues = queues;
> > >       /* stop the backend before changing the number of queues to avoid handling a
> > >        * disabled queue */
> > > +    virtio_net_set_status(vdev, 0);
> >
> >
> > Any reason for doing this?
> 
> I think there are two reasons:
> 1. The spec does not require guest SW to allocate unused queues.
> 2. We would spend the guest's physical memory just to make vhost happy
> when it touches queues that it should not use.
> 
> Thanks,
> Yuri Benditovich

The spec also says:

        queue_enable: The driver uses this to selectively prevent the device
        from executing requests from this virtqueue. 1 - enabled; 0 - disabled.

While this is not a conformance clause, it strongly implies that
queues which are not enabled are never accessed by the device.

Yuri, I am guessing you are not enabling these unused queues, right?



> >
> > Thanks
> >
> >
> > > +
> > > +    n->curr_queues = queues;
> > > +
> > >       virtio_net_set_status(vdev, vdev->status);
> > >       virtio_net_set_queues(n);
> > >


