

From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Tue, 19 Feb 2019 09:19:45 -0500

On Tue, Feb 19, 2019 at 02:27:35PM +0800, Jason Wang wrote:
> 
> > On 2019/2/19 7:34 AM, Michael S. Tsirkin wrote:
> > On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> > > On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <address@hidden> wrote:
> > > > On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > > > > On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <address@hidden> wrote:
> > > > > > 
> > > > > > On 2019/2/13 10:51 PM, Yuri Benditovich wrote:
> > > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > > > > > On startup/link-up in a multiqueue configuration, virtio-net
> > > > > > > tries to start all the queues, including those that the guest
> > > > > > > will not enable via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > > > > > If the guest driver does not allocate queues that it will not
> > > > > > > use (for example, the Windows driver does not) and the number of
> > > > > > > actually used queues is less than the maximal number supported by
> > > > > > > the device,
> > > > > > 
> > > > > > Is this a requirement of e.g. NDIS? If not, could we simply
> > > > > > allocate all queues in this case? This is usually what the normal
> > > > > > Linux driver does.
> > > > > > 
> > > > > > 
> > > > > > > this causes vhost_net_start to fail and in effect disables vhost
> > > > > > > for all the queues, reducing performance.
> > > > > > > This commit fixes it: initially only the first queue is started;
> > > > > > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues requested by
> > > > > > > the guest are started.
> > > > > > > 
> > > > > > > Signed-off-by: Yuri Benditovich <address@hidden>
> > > > > > > ---
> > > > > > >    hw/net/virtio-net.c | 7 +++++--
> > > > > > >    1 file changed, 5 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > > > index 3f319ef723..d3b1ac6d3a 100644
> > > > > > > --- a/hw/net/virtio-net.c
> > > > > > > +++ b/hw/net/virtio-net.c
> > > > > > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > > > > >    {
> > > > > > >        VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > > > >        NetClientState *nc = qemu_get_queue(n->nic);
> > > > > > > -    int queues = n->multiqueue ? n->max_queues : 1;
> > > > > > > +    int queues = n->multiqueue ? n->curr_queues : 1;
> > > > > > > 
> > > > > > >        if (!get_vhost_net(nc->peer)) {
> > > > > > >            return;
> > > > > > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > > > > >            return VIRTIO_NET_ERR;
> > > > > > >        }
> > > > > > > 
> > > > > > > -    n->curr_queues = queues;
> > > > > > >        /* stop the backend before changing the number of queues to avoid handling a
> > > > > > >         * disabled queue */
> > > > > > > +    virtio_net_set_status(vdev, 0);
> > > > > > 
> > > > > > Any reason for doing this?
> > > > > I think there are 2 reasons:
> > > > > 1. The spec does not require guest SW to allocate unused queues.
> > > > > 2. We would spend the guest's physical memory just to make vhost happy
> > > > > when it touches queues that it should not use.
> > > > > 
> > > > > Thanks,
> > > > > Yuri Benditovich
> > > > The spec also says:
> > > >          queue_enable The driver uses this to selectively prevent the device
> > > >          from executing requests from this virtqueue. 1 - enabled; 0 - disabled.
> > > > 
> > > > While this is not a conformance clause, it strongly implies that
> > > > queues which are not enabled are never accessed by the device.
> > > > 
> > > > Yuri, I am guessing you are not enabling these unused queues, right?
> > > Of course, we (the Windows driver) do not.
> > > The virtio-net code passes max_queues to vhost, and this causes vhost
> > > to try accessing all the queues, fail on the unused ones, and finally
> > > leave vhost disabled entirely.
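For illustration, a minimal standalone sketch of the failure mode described above, assuming hypothetical start_one_queue()/stop_one_queue() helpers (not the real vhost_net API): starting the backend is all-or-nothing, so one queue pair the guest never set up unwinds everything.

    /* Standalone sketch (not QEMU code): starting vhost is all-or-nothing.
     * If any of the 'queues' pairs fails to start -- e.g. because the guest
     * never allocated its rings -- everything already started is rolled
     * back and vhost stays disabled for every queue. */
    struct sketch_queue { int started; };

    int start_one_queue(struct sketch_queue *q);    /* hypothetical helper */
    void stop_one_queue(struct sketch_queue *q);    /* hypothetical helper */

    int vhost_start_sketch(struct sketch_queue *qs, int queues)
    {
        int i, r;

        for (i = 0; i < queues; i++) {
            r = start_one_queue(&qs[i]);
            if (r < 0) {
                goto err;                   /* one bad queue fails them all */
            }
        }
        return 0;

    err:
        while (--i >= 0) {
            stop_one_queue(&qs[i]);         /* undo queues already started */
        }
        return r;                           /* caller falls back to non-vhost */
    }
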
> > 
> > Jason, at least for 1.0, accessing disabled queues looks like a spec
> > violation. What do you think?
> 
> 
> Yes, but there are some issues:
> 
> - How to detect a disabled queue for a 0.9x device? Looks like there's no
> way according to the spec, so the device must assume all queues were enabled.

Traditionally, devices assumed that a queue address of 0 implies not enabled.
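A rough sketch of that legacy convention, with an illustrative helper that is not the actual QEMU function: lacking a queue_enable field, a 0.9x device can only infer "not enabled" from a descriptor-table address that was never programmed.

    #include <stdint.h>
    #include <stdbool.h>

    /* Legacy (0.9x) convention only -- not the actual QEMU helper: there is
     * no queue_enable register, so a queue whose descriptor-table address
     * was never programmed by the driver (still 0) is treated as disabled. */
    static bool legacy_queue_enabled(uint64_t desc_table_addr)
    {
        return desc_table_addr != 0;
    }
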

> - For 1.0, if we depend on queue_enable, we should implement the callback
> for vhost, I think. Otherwise it's still buggy.
> 
> So it looks tricky to enable and disable queues through set_status.
> 
> Thanks


Do you agree it's a compliance issue though?
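To illustrate the queue_enable point, a minimal sketch with a hypothetical per-queue state and accessor (not the QEMU vhost API): a 1.0 device could consult queue_enable when deciding which queue pairs the backend may touch, while a legacy device can only fall back to the address-is-zero convention above.

    #include <stdbool.h>

    /* Hypothetical state and accessor, not the QEMU vhost API: for a 1.0
     * device the per-queue queue_enable field can drive the decision of
     * which queue pairs the backend may touch; a legacy device can only
     * fall back to the "address still 0 means never set up" convention. */
    struct vq_state {
        bool enabled;              /* mirrors the 1.0 queue_enable field      */
        unsigned long legacy_pfn;  /* legacy: 0 means the ring was never set  */
    };

    static bool queue_usable(const struct vq_state *vq, bool modern)
    {
        return modern ? vq->enabled : vq->legacy_pfn != 0;
    }
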

> 
> > 
> > > > 
> > > > 
> > > > > > Thanks
> > > > > > 
> > > > > > 
> > > > > > > +
> > > > > > > +    n->curr_queues = queues;
> > > > > > > +
> > > > > > >        virtio_net_set_status(vdev, vdev->status);
> > > > > > >        virtio_net_set_queues(n);
> > > > > > > 


