From: Srujana Challa
Subject: RE: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and IN_ORDER feature bits to vdpa_feature_bits
Date: Fri, 8 Mar 2024 13:37:58 +0000

Hi Michael,

VIRTIO_F_NOTIFICATION_DATA needs to be exposed to make Marvell's device work
with QEMU. Is there a better way to expose the VIRTIO_F_NOTIFICATION_DATA
feature bit for the vhost-vdpa use case?

Thanks,
Srujana.

> Subject: RE: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> IN_ORDER feature bits to vdpa_feature_bits
> 
> Ping.
> 
> > Subject: RE: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> > IN_ORDER feature bits to vdpa_feature_bits
> >
> >
> >
> > > -----Original Message-----
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: Monday, February 19, 2024 3:15 PM
> > > To: Srujana Challa <schalla@marvell.com>
> > > Cc: qemu-devel@nongnu.org; Vamsi Krishna Attunuru
> > > <vattunuru@marvell.com>; Jerin Jacob <jerinj@marvell.com>; Jason
> > > Wang <jasowang@redhat.com>
> > > Subject: [EXT] Re: [PATCH] vhost_net: add NOTIFICATION_DATA and
> > > IN_ORDER feature bits to vdpa_feature_bits
> > >
> > > External Email
> > >
> > > ----------------------------------------------------------------------
> > > On Tue, Jan 02, 2024 at 04:44:32PM +0530, Srujana Challa wrote:
> > > > Enables VIRTIO_F_NOTIFICATION_DATA and VIRTIO_F_IN_ORDER feature
> > > > bits for vhost vdpa backend. Also adds code to consider all feature
> > > > bits supported by vhost net client type for feature negotiation,
> > > > so that vhost backend device supported features can be negotiated
> > > > with guest.
> > > >
> > > > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > > > ---
> > > >  hw/net/vhost_net.c | 10 ++++++++++
> > > >  net/vhost-vdpa.c   |  2 ++
> > > >  2 files changed, 12 insertions(+)
> > > >
> > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > > index e8e1661646..65ae8bcece 100644
> > > > --- a/hw/net/vhost_net.c
> > > > +++ b/hw/net/vhost_net.c
> > > > @@ -117,6 +117,16 @@ static const int *vhost_net_get_feature_bits(struct vhost_net *net)
> > > >
> > > >  uint64_t vhost_net_get_features(struct vhost_net *net, uint64_t features)
> > > >  {
> > > > +    const int *bit = vhost_net_get_feature_bits(net);
> > > > +
> > > > +    /*
> > > > +     * Consider all feature bits for feature negotiation with vhost backend,
> > > > +     * so that all backend device supported features can be negotiated.
> > > > +     */
> > > > +    while (*bit != VHOST_INVALID_FEATURE_BIT) {
> > > > +        features |= (1ULL << *bit);
> > > > +        bit++;
> > > > +    }
> > > >      return vhost_get_features(&net->dev, vhost_net_get_feature_bits(net),
> > > >              features);
> > > >  }
> > >
> > > I don't think we should do this part. With vdpa, QEMU is in control
> > > of which features are exposed, and that is intentional, since features
> > > are often tied to other behaviour.
> >
> > With vdpa, QEMU can negotiate with the guest all the features that the
> > vdpa backend device supports, right?
> > Guest drivers (whether userspace or kernel drivers) will negotiate
> > their own features, so the frontend-supported features take precedence.
> >
> > >
> > > > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > index 3726ee5d67..51334fcfe2 100644
> > > > --- a/net/vhost-vdpa.c
> > > > +++ b/net/vhost-vdpa.c
> > > > @@ -57,7 +57,9 @@ typedef struct VhostVDPAState {
> > > >   */
> > > >  const int vdpa_feature_bits[] = {
> > > >      VIRTIO_F_ANY_LAYOUT,
> > > > +    VIRTIO_F_IN_ORDER,
> > > >      VIRTIO_F_IOMMU_PLATFORM,
> > > > +    VIRTIO_F_NOTIFICATION_DATA,
> > > >      VIRTIO_F_NOTIFY_ON_EMPTY,
> > > >      VIRTIO_F_RING_PACKED,
> > > >      VIRTIO_F_RING_RESET,
> > > > --
> > > > 2.25.1



