
Re: [Qemu-devel] [PATCH v4 09/11] virtio-net: update the head descriptor in a chain lastly


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH v4 09/11] virtio-net: update the head descriptor in a chain lastly
Date: Wed, 20 Feb 2019 10:34:32 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0


On 2019/2/20 9:54 AM, Wei Xu wrote:
On Tue, Feb 19, 2019 at 09:09:33PM +0800, Jason Wang wrote:
On 2019/2/19 6:51 PM, Wei Xu wrote:
On Tue, Feb 19, 2019 at 03:23:01PM +0800, Jason Wang wrote:
On 2019/2/14 12:26 PM, address@hidden wrote:
From: Wei Xu <address@hidden>

This is a helper for packed ring.

To support the packed ring, the head descriptor in a chain should be updated
last: unlike the split ring, the packed ring has no index such as 'avail_idx'
to explicitly tell the driver side that the whole payload is ready once the
chain is done, so the head descriptor becomes visible to the driver
immediately after it is updated.
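
As a rough illustration of that ordering constraint (a sketch only, assuming
the virtio 1.1 packed descriptor layout; publish_used_chain() is hypothetical,
not QEMU code):

#include <stdatomic.h>
#include <stdint.h>

/* Packed-ring descriptor as described by the virtio 1.1 spec; the
 * AVAIL/USED bits in 'flags' control per-descriptor visibility. */
struct packed_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t id;
    uint16_t flags;
};

/* Mark every descriptor of a used chain except the head, then flip the
 * head last: the driver polls the head's flags, so that final store is
 * the commit point for the whole chain. */
static void publish_used_chain(struct packed_desc *ring,
                               const uint16_t *slots, int n,
                               uint16_t used_flags)
{
    for (int k = 1; k < n; k++) {
        ring[slots[k]].flags = used_flags;     /* tail descriptors first */
    }
    atomic_thread_fence(memory_order_release); /* order stores before head */
    ring[slots[0]].flags = used_flags;         /* head last: now visible */
}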

This patch fills in the head descriptor only after all the other descriptors
in the chain have been filled.

Signed-off-by: Wei Xu <address@hidden>
It's really odd to work around an API issue in the device implementation.
Please introduce batched used-ring updating helpers instead.
Can you elaborate a bit more? I don't quite get it.

Exact batching as done by vhost-net or the DPDK PMD is not supported by the
userspace backend. The change here just keeps the head descriptor updated
last in the case of chained descriptors, so the helper might not help much.

Wei

Of course we can add batching support, why not?
It is always good to improve performance, but that could probably be done in
a separate series. We also need to bear in mind that the QEMU userspace
backend is usually not the first choice for performance-oriented users.


The point is to hide layout-specific things from the device emulation. If it
helps performance, that can be treated as a nice byproduct.



AFAICT, virtqueue_fill() is a generic API used by all relevant userspace
virtio devices, and it does not support batching. Supporting batching without
touching virtqueue_fill() changes the meaning of the 'idx' parameter, which
should be kept consistent overall.

To fix it, I have two proposals so far:
1). batching support (two APIs needed to keep compatibility);
2). save a head elem per vq instead of caching an array of elems like vhost,
     and introduce a new API, virtqueue_chain_fill(), which adds a 'more'
     parameter to the current virtqueue_fill() to indicate whether more
     descriptors are coming in the chain (see the sketch below).

Either way changes the API somehow, and neither seems as clean and clear as
wanted.
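
A rough sketch of what 2). might look like on top of the existing
virtqueue_fill(); virtqueue_chain_fill() and the chain_head/chain_head_len
fields on VirtQueue are hypothetical, not current QEMU code:

/* Hypothetical chain-aware fill: cache the head element instead of
 * publishing it, and only expose it once the caller signals the end of
 * the chain ('more' == false). */
void virtqueue_chain_fill(VirtQueue *vq, const VirtQueueElement *elem,
                          unsigned int len, unsigned int idx, bool more)
{
    if (idx == 0) {
        vq->chain_head = *elem;        /* shallow copy; lifetime elided */
        vq->chain_head_len = len;
    } else {
        virtqueue_fill(vq, elem, len, idx);
    }
    if (!more) {
        /* Chain complete: publish the head last, so the driver never
         * sees a partially filled chain. */
        virtqueue_fill(vq, &vq->chain_head, vq->chain_head_len, 0);
    }
}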


It's as simple as accepting an array of elems in e.g. virtqueue_fill_batched()?
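
For instance (a sketch only; virtqueue_fill_batched() does not exist in QEMU
today, and the split-ring body below just loops over the current
virtqueue_fill()):

/* Hypothetical batched variant of virtqueue_fill(): the device hands over
 * all completed elements at once and the virtqueue layer decides how to
 * publish them for the ring layout in use. */
void virtqueue_fill_batched(VirtQueue *vq, VirtQueueElement **elems,
                            const unsigned int *lens, unsigned int count)
{
    /* Split ring: order does not matter. A packed ring implementation
     * would fill elems[1..count-1] first and publish elems[0] (the
     * head) last, hiding that ordering from the device. */
    for (unsigned int i = 0; i < count; i++) {
        virtqueue_fill(vq, elems[i], lens[i], i);
    }
}

This keeps virtio-net oblivious to the ring layout: it only collects the
completed elements and hands them over in one call.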



Any better idea?

Your code assumes the device knows about virtio layout-specific details,
which breaks the layering. A device should not care about the actual ring
layout.

Good point, but anyway, changes to the virtio-net receive code path are
unavoidable to support both split and packed rings, and batching is somewhat
of a new feature.


It's OK to change the code as a result of introducing a generic helper, but it's bad to change the code to work around a bad API.

Thanks



Wei
Thanks


Thanks


---
  hw/net/virtio-net.c | 11 ++++++++++-
  1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 3f319ef..330abea 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1251,6 +1251,8 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
      struct virtio_net_hdr_mrg_rxbuf mhdr;
      unsigned mhdr_cnt = 0;
      size_t offset, i, guest_offset;
+    VirtQueueElement head;
+    int head_len = 0;
      if (!virtio_net_can_receive(nc)) {
          return -1;
@@ -1328,7 +1330,13 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
          }
          /* signal other side */
-        virtqueue_fill(q->rx_vq, elem, total, i++);
+        if (i == 0) {
+            head_len = total;
+            head = *elem;
+        } else {
+            virtqueue_fill(q->rx_vq, elem, len, i);
+        }
+        i++;
          g_free(elem);
      }
@@ -1339,6 +1347,7 @@ static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
                       &mhdr.num_buffers, sizeof mhdr.num_buffers);
      }
+    virtqueue_fill(q->rx_vq, &head, head_len, 0);
      virtqueue_flush(q->rx_vq, i);
      virtio_notify(vdev, q->rx_vq);


