qemu-devel

Re: [PATCH] vhost-user: add support for VHOST_USER_SET_STATUS


From: Maxime Coquelin
Subject: Re: [PATCH] vhost-user: add support for VHOST_USER_SET_STATUS
Date: Thu, 14 May 2020 17:38:09 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.7.0


On 5/14/20 4:20 PM, Michael S. Tsirkin wrote:
> On Thu, May 14, 2020 at 04:12:32PM +0200, Maxime Coquelin wrote:
>>
>>
>>
>> On 5/14/20 12:44 PM, Michael S. Tsirkin wrote:
>>> On Thu, May 14, 2020 at 09:33:32AM +0200, Maxime Coquelin wrote:
>>>> It is useful for the Vhost-user backend to know
>>>> about the Virtio device status updates,
>>>> especially when the driver sets the DRIVER_OK bit.
>>>>
>>>> With that information, the backend no longer needs
>>>> to make risky assumptions about when the driver is
>>>> done with the device configuration.
>>>>
>>>> Signed-off-by: Maxime Coquelin <address@hidden>
>>>> ---
>>>>
>>>> This patch applies on top of Cindy's "vDPA support in qemu"
>>>> series, which introduces the .vhost_set_state vhost-backend
>>>> ops.
>>>>
>>>>  docs/interop/vhost-user.rst | 12 ++++++++++++
>>>>  hw/net/vhost_net.c          | 10 +++++-----
>>>>  hw/virtio/vhost-user.c      | 35 +++++++++++++++++++++++++++++++++++
>>>>  3 files changed, 52 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
>>>> index 3b1b6602c7..f108de7458 100644
>>>> --- a/docs/interop/vhost-user.rst
>>>> +++ b/docs/interop/vhost-user.rst
>>>> @@ -815,6 +815,7 @@ Protocol features
>>>>    #define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD       12
>>>>    #define VHOST_USER_PROTOCOL_F_RESET_DEVICE         13
>>>>    #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
>>>> +  #define VHOST_USER_PROTOCOL_F_STATUS               15
>>>>  
>>>>  Master message types
>>>>  --------------------
>>>> @@ -1263,6 +1264,17 @@ Master message types
>>>>  
>>>>    The state.num field is currently reserved and must be set to 0.
>>>>  
>>>> +``VHOST_USER_SET_STATUS``
>>>> +  :id: 36
>>>> +  :equivalent ioctl: VHOST_VDPA_SET_STATUS
>>>> +  :slave payload: N/A
>>>> +  :master payload: ``u64``
>>>> +
>>>> +  When the ``VHOST_USER_PROTOCOL_F_STATUS`` protocol feature has been
>>>> +  successfully negotiated, this message is submitted by the master to
>>>> +  notify the backend with updated device status as defined in the Virtio
>>>> +  specification.
>>>> +
>>>>  Slave message types
>>>>  -------------------
>>>>  
>>>> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>>>> index 463e333531..37f3156dbc 100644
>>>> --- a/hw/net/vhost_net.c
>>>> +++ b/hw/net/vhost_net.c
>>>> @@ -517,10 +517,10 @@ int vhost_set_state(NetClientState *nc, int state)
>>>>  {
>>>>      struct vhost_net *net = get_vhost_net(nc);
>>>>      struct vhost_dev *hdev = &net->dev;
>>>> -    if (nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>>>> -        if (hdev->vhost_ops->vhost_set_state) {
>>>> -                return hdev->vhost_ops->vhost_set_state(hdev, state);
>>>> -             }
>>>> -        }
>>>> +
>>>> +    if (hdev->vhost_ops->vhost_set_state) {
>>>> +        return hdev->vhost_ops->vhost_set_state(hdev, state);
>>>> +    }
>>>> +
>>>>      return 0;
>>>>  }
>>>> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>>>> index ec21e8fbe8..b7e52d97fc 100644
>>>> --- a/hw/virtio/vhost-user.c
>>>> +++ b/hw/virtio/vhost-user.c
>>>> @@ -59,6 +59,7 @@ enum VhostUserProtocolFeature {
>>>>      VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
>>>>      VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
>>>>      VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13,
>>>> +    VHOST_USER_PROTOCOL_F_STATUS = 15,
>>>>      VHOST_USER_PROTOCOL_F_MAX
>>>>  };
>>>>  
>>>> @@ -100,6 +101,7 @@ typedef enum VhostUserRequest {
>>>>      VHOST_USER_SET_INFLIGHT_FD = 32,
>>>>      VHOST_USER_GPU_SET_SOCKET = 33,
>>>>      VHOST_USER_RESET_DEVICE = 34,
>>>> +    VHOST_USER_SET_STATUS = 36,
>>>>      VHOST_USER_MAX
>>>>  } VhostUserRequest;
>>>>  
>>>> @@ -1886,6 +1888,38 @@ static int vhost_user_set_inflight_fd(struct vhost_dev *dev,
>>>>      return 0;
>>>>  }
>>>>  
>>>> +static int vhost_user_set_state(struct vhost_dev *dev, int state)
>>>> +{
>>>> +    bool reply_supported = virtio_has_feature(dev->protocol_features,
>>>> +                                              VHOST_USER_PROTOCOL_F_REPLY_ACK);
>>>> +
>>>> +    VhostUserMsg msg = {
>>>> +        .hdr.request = VHOST_USER_SET_STATUS,
>>>> +        .hdr.flags = VHOST_USER_VERSION,
>>>> +        .hdr.size = sizeof(msg.payload.u64),
>>>> +        .payload.u64 = (uint64_t)state,
>>>> +    };
>>>> +
>>>> +    if (!virtio_has_feature(dev->protocol_features,
>>>> +                VHOST_USER_PROTOCOL_F_STATUS)) {
>>>> +        return -1;
>>>> +    }
>>>> +
>>>> +    if (reply_supported) {
>>>> +        msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
>>>> +    }
>>>> +
>>>> +    if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
>>>> +        return -1;
>>>> +    }
>>>> +
>>>> +    if (reply_supported) {
>>>> +        return process_message_reply(dev, &msg);
>>>> +    }
>>>
>>> So since we are doing this anyway, let's give backend the opportunity
>>> to validate features and fail FEATURES_OK?
>>
>> Just to be sure I understand, the feature negotiation happens,
>> then when the backend receives FEATURES_OK, it uses the REPLY_ACK
>> protocol feature to NACK?
> 
> Or ack but clear FEATURES_OK in the response.

Ok, so it means that instead of using reply-ack, either the SET_STATUS
request expects the status as its reply, or we add a GET_STATUS request
as Jason proposes for the vDPA backend.

Not directly related to this, but we could also think of adding the
SET_STATUS request on the slave channel, so that the backend can set the
DEVICE_NEEDS_RESET bit if something goes wrong on the backend side that
cannot be recovered. Not sure this is useful, but if we want to support
it, we may want to do it now, so that it is backed by the same
VHOST_USER_PROTOCOL_F_STATUS bit.

> 
>>
>> In DPDK backend, we already check the feature set by SET_FEATURES are
>> supported by the backend.
> 
> But that assumes all possible combinations of features in the bitmap
> always work. That might not be the case.

Yes, this is a (too) simple check; we should do better.

>> If it is not the case, either the master did
>> set NEED_REPLY flag and error would be reported to the master, or the
>> NEED_REPLY flag isn't set in the message and the backend disconnects.
>>
>> Looking at Qemu side, it does not seem to set the NEED_REPLY flag on
>> SET_FEATURES, so basically the backend close the sockets if it happened.
> 
> That is not the best way to handle that, imho.

I agree, but there's not much the backend can do if the master does not
set NEED_REPLY on SET_FEATURES.

>>
>> I wonder whether it would be better for Qemu to set the NEED_REPLY flag
>> on SET_FEATURES, so that when the backend is running and VHOST_F_LOG_ALL
>> feature bit is set, we are sure the master starts the live-migration
>> only once the SET_FEATURES is fully handled on backend side.
>>
>> What do you think?
> 
> features are set before backend is started so there's nothing to
> migrate really.

Sorry, I was not clear. I meant: why not set NEED_REPLY in
SET_FEATURES, as it would have two advantages:
1. Ability for the backend to nack if the features being set aren't
   compatible.
2. On live-migration, reply-ack with SET_FEATURES would ensure that
   the VHOST_F_LOG_ALL bit has been taken into account by the backend
   (SET_FEATURES is sent to set VHOST_F_LOG_ALL while the backend is
   already running).

>>
>>>> +
>>>> +    return 0;
>>>> +}
>>>> +
>>>>  bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
>>>>  {
>>>>      if (user->chr) {
>>>> @@ -1947,4 +1981,5 @@ const VhostOps user_ops = {
>>>>          .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
>>>>          .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
>>>>          .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
>>>> +        .vhost_set_state = vhost_user_set_state,
>>>>  };
>>>> -- 
>>>> 2.25.4
>>>
> 



