From: Michael S. Tsirkin
Subject: Re: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and IN_ORDER feature bits to vdpa_feature_bits
Date: Tue, 12 Mar 2024 11:51:17 -0400
On Mon, Mar 11, 2024 at 09:32:53AM +0100, Eugenio Perez Martin wrote:
> On Fri, Mar 8, 2024 at 2:39 PM Srujana Challa <schalla@marvell.com> wrote:
> >
> > Hi Michael,
> >
> > VIRTIO_F_NOTIFICATION_DATA needs to be exposed to make Marvell's device
> > work with QEMU. Is there any better way to expose the
> > VIRTIO_F_NOTIFICATION_DATA feature bit for the vhost-vdpa use case?
> >
>
> Hi!
>
> Jonah Palmer is working on implementing notification_data [1]. He's
> implementing it on emulated devices first, but the resulting QEMU
> should be able to enable it for vdpa devices too. Would it be possible
> for you to review and/or test the series against your devices,
> and check that everything works on emulated devices too?
>
> Thanks!
>
> [1] https://lists.nongnu.org/archive/html/qemu-devel/2024-03/msg00755.html
Yes, please do. And if anything is missing in his patches, you can post
a patch on top.
> > Thanks,
> > Srujana.
> >
> > > Subject: RE: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> > > IN_ORDER feature bits to vdpa_feature_bits
> > >
> > > Ping.
> > >
> > > > Subject: RE: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> > > > IN_ORDER feature bits to vdpa_feature_bits
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: Monday, February 19, 2024 3:15 PM
> > > > > To: Srujana Challa <schalla@marvell.com>
> > > > > Cc: qemu-devel@nongnu.org; Vamsi Krishna Attunuru
> > > > > <vattunuru@marvell.com>; Jerin Jacob <jerinj@marvell.com>; Jason
> > > > > Wang <jasowang@redhat.com>
> > > > > Subject: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> > > > > IN_ORDER feature bits to vdpa_feature_bits
> > > > >
> > > > > External Email
> > > > >
> > > > > ----------------------------------------------------------------------
> > > > > On Tue, Jan 02, 2024 at 04:44:32PM +0530, Srujana Challa wrote:
> > > > > > Enables the VIRTIO_F_NOTIFICATION_DATA and VIRTIO_F_IN_ORDER
> > > > > > feature bits for the vhost vdpa backend. Also adds code to consider
> > > > > > all feature bits supported by the vhost net client type during
> > > > > > feature negotiation, so that all features supported by the vhost
> > > > > > backend device can be negotiated with the guest.
> > > > > >
> > > > > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > > > > > ---
> > > > > > hw/net/vhost_net.c | 10 ++++++++++
> > > > > > net/vhost-vdpa.c | 2 ++
> > > > > > 2 files changed, 12 insertions(+)
> > > > > >
> > > > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > > > > index e8e1661646..65ae8bcece 100644
> > > > > > --- a/hw/net/vhost_net.c
> > > > > > +++ b/hw/net/vhost_net.c
> > > > > > @@ -117,6 +117,16 @@ static const int *vhost_net_get_feature_bits(struct vhost_net *net)
> > > > > >
> > > > > >  uint64_t vhost_net_get_features(struct vhost_net *net, uint64_t features)
> > > > > >  {
> > > > > > +    const int *bit = vhost_net_get_feature_bits(net);
> > > > > > +
> > > > > > +    /*
> > > > > > +     * Consider all feature bits for feature negotiation with vhost
> > > > > > +     * backend, so that all backend device supported features can be
> > > > > > +     * negotiated.
> > > > > > +     */
> > > > > > +    while (*bit != VHOST_INVALID_FEATURE_BIT) {
> > > > > > +        features |= (1ULL << *bit);
> > > > > > +        bit++;
> > > > > > +    }
> > > > > >      return vhost_get_features(&net->dev, vhost_net_get_feature_bits(net),
> > > > > >                                features);
> > > > > >  }
> > > > >
> > > > > I don't think we should do this part. With vdpa, QEMU is in control
> > > > > of which features are exposed, and that is intentional, since features
> > > > > are often tied to other behaviour.
> > > >
> > > > vdpa QEMU can negotiate all the features that the vdpa backend
> > > > device supports with the guest, right?
> > > > Guest drivers (userspace or kernel) will negotiate their own
> > > > feature set, so the features supported by the frontend take
> > > > precedence.
> > > >
> > > > >
> > > > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > > > index 3726ee5d67..51334fcfe2 100644
> > > > > > --- a/net/vhost-vdpa.c
> > > > > > +++ b/net/vhost-vdpa.c
> > > > > > @@ -57,7 +57,9 @@ typedef struct VhostVDPAState {
> > > > > >   */
> > > > > >  const int vdpa_feature_bits[] = {
> > > > > >      VIRTIO_F_ANY_LAYOUT,
> > > > > > +    VIRTIO_F_IN_ORDER,
> > > > > >      VIRTIO_F_IOMMU_PLATFORM,
> > > > > > +    VIRTIO_F_NOTIFICATION_DATA,
> > > > > >      VIRTIO_F_NOTIFY_ON_EMPTY,
> > > > > >      VIRTIO_F_RING_PACKED,
> > > > > >      VIRTIO_F_RING_RESET,
> > > > > > --
> > > > > > 2.25.1
> >
> >