qemu-devel

Re: [PULL 08/83] vdpa: Restore hash calculation state


From: Hawkins Jiawei
Subject: Re: [PULL 08/83] vdpa: Restore hash calculation state
Date: Fri, 20 Oct 2023 08:58:26 +0800

On 2023/10/20 02:07, Michael S. Tsirkin wrote:
> On Thu, Oct 19, 2023 at 09:32:28AM -0700, Stefan Hajnoczi wrote:
>> On Wed, 18 Oct 2023 at 08:56, Michael S. Tsirkin <mst@redhat.com> wrote:
>>>
>>> From: Hawkins Jiawei <yin31149@gmail.com>
>>>
>>> This patch introduces vhost_vdpa_net_load_rss() to restore
>>> the hash calculation state at device's startup.
>>>
>>> Note that vhost_vdpa_net_load_rss() has `do_rss` argument,
>>> which allows future code to reuse this function to restore
>>> the receive-side scaling state when the VIRTIO_NET_F_RSS
>>> feature is enabled in SVQ. Currently, vhost_vdpa_net_load_rss()
>>> could only be invoked when `do_rss` is set to false.
>>>
>>> Signed-off-by: Hawkins Jiawei <yin31149@gmail.com>
>>> Message-Id: <f5ffad10699001107022851e0560cb394039d6b0.1693297766.git.yin31149@gmail.com>
>>> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>>> ---
>>>   net/vhost-vdpa.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>   1 file changed, 91 insertions(+)
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index 4b7c3b81b8..40d0bcbc0b 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -817,6 +817,88 @@ static int vhost_vdpa_net_load_mac(VhostVDPAState *s, const VirtIONet *n,
>>>       return 0;
>>>   }
>>>
>>> +static int vhost_vdpa_net_load_rss(VhostVDPAState *s, const VirtIONet *n,
>>> +                                   struct iovec *out_cursor,
>>> +                                   struct iovec *in_cursor, bool do_rss)
>>> +{
>>> +    struct virtio_net_rss_config cfg;
>>> +    ssize_t r;
>>> +    g_autofree uint16_t *table = NULL;
>>> +
>>> +    /*
>>> +     * According to VirtIO standard, "Initially the device has all hash
>>> +     * types disabled and reports only VIRTIO_NET_HASH_REPORT_NONE.".
>>> +     *
>>> +     * Therefore, there is no need to send this CVQ command if the
>>> +     * driver disables all hash types, which aligns with
>>> +     * the device's defaults.
>>> +     *
>>> +     * Note that the device's defaults can mismatch the driver's
>>> +     * configuration only at live migration.
>>> +     */
>>> +    if (!n->rss_data.enabled ||
>>> +        n->rss_data.hash_types == VIRTIO_NET_HASH_REPORT_NONE) {
>>> +        return 0;
>>> +    }
>>> +
>>> +    cfg.hash_types = cpu_to_le32(n->rss_data.hash_types);
>>> +
>>> +    /*
>>> +     * According to VirtIO standard, "Field reserved MUST contain zeroes.
>>> +     * It is defined to make the structure to match the layout of
>>> +     * virtio_net_rss_config structure, defined in 5.1.6.5.7.".
>>> +     *
>>> +     * Therefore, we need to zero the fields in
>>> +     * struct virtio_net_rss_config that correspond to the
>>> +     * `reserved` field in struct virtio_net_hash_config.
>>> +     */
>>> +    memset(&cfg.indirection_table_mask, 0,
>>> +           sizeof_field(struct virtio_net_hash_config, reserved));
>>
>> Please take a look at the following CI failure:
>>
>> In file included from /usr/include/string.h:495,
>> from /home/gitlab-runner/builds/-LCfcJ2T/0/qemu-project/qemu/include/qemu/osdep.h:116,
>> from ../net/vhost-vdpa.c:12:
>> In function ‘memset’,
>> inlined from ‘vhost_vdpa_net_load_rss’ at ../net/vhost-vdpa.c:874:9:
>> /usr/include/s390x-linux-gnu/bits/string_fortified.h:71:10: error:
>> ‘__builtin_memset’ offset [7, 12] from the object at ‘cfg’ is out of
>> the bounds of referenced subobject ‘indirection_table_mask’ with type
>> ‘short unsigned int’ at offset 4 [-Werror=array-bounds]
>> 71 | return __builtin___memset_chk (__dest, __ch, __len, __bos0 (__dest));
>> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> cc1: all warnings being treated as errors
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/5329820077
>
> Hmm yes - the trick it's trying to implement is this:
>
>
> struct virtio_net_rss_config {
>          uint32_t hash_types;
>          uint16_t indirection_table_mask;
>          uint16_t unclassified_queue;
>          uint16_t indirection_table[1/* + indirection_table_mask */];
>          uint16_t max_tx_vq;
>          uint8_t hash_key_length;
>          uint8_t hash_key_data[/* hash_key_length */];
> };
>
>
> ...
>
> struct virtio_net_hash_config {
>          uint32_t hash_types;
>          /* for compatibility with virtio_net_rss_config */
>          uint16_t reserved[4];
>          uint8_t hash_key_length;
>          uint8_t hash_key_data[/* hash_key_length */];
> };
>
>
> as you see layout matches.
>
>
>
> Using a union is probably the right way to address this.
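For illustration, the union idea could look something like the sketch
below (simplified local copies of the two layouts quoted above, with
hypothetical names -- not the actual QEMU change). Because the memset
targets the `reserved` array of the hash view, the write stays inside
that member's bounds, so the fortified memset has nothing to warn about,
while the rss view of the same bytes is cleared as a side effect:

```c
#include <stdint.h>
#include <string.h>

/* Simplified header portions of the two structs from the thread. */
struct virtio_net_rss_cfg_hdr {
    uint32_t hash_types;
    uint16_t indirection_table_mask;
    uint16_t unclassified_queue;
    uint16_t indirection_table[1];
    uint16_t max_tx_vq;
};

struct virtio_net_hash_cfg_hdr {
    uint32_t hash_types;
    /* for compatibility with virtio_net_rss_config */
    uint16_t reserved[4];
};

/* Hypothetical union giving both views of the same bytes. */
union virtio_net_cfg_hdr {
    struct virtio_net_rss_cfg_hdr rss;
    struct virtio_net_hash_cfg_hdr hash;
};

static uint16_t zero_reserved_and_read_mask(void)
{
    union virtio_net_cfg_hdr cfg;

    cfg.rss.indirection_table_mask = 0xffff;
    /* In-bounds memset over the whole reserved area via the hash view. */
    memset(cfg.hash.reserved, 0, sizeof(cfg.hash.reserved));
    /* The overlapping rss field is now zero as well. */
    return cfg.rss.indirection_table_mask;
}
```

The key property is that `reserved[4]` at offset 4 in the hash view
covers exactly the four 16-bit rss fields that follow `hash_types`.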

How about manually resetting these fields to zero instead of
using memset()?

This is how it is handled in the Linux kernel's "virtio_net.c", so
perhaps we can maintain consistency this way.
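A minimal sketch of that alternative (struct fields taken from the
layouts quoted above; the helper name is made up for illustration):

```c
#include <stdint.h>

/* Header portion of virtio_net_rss_config as quoted in the thread. */
struct virtio_net_rss_cfg_hdr {
    uint32_t hash_types;
    uint16_t indirection_table_mask;
    uint16_t unclassified_queue;
    uint16_t indirection_table[1];
    uint16_t max_tx_vq;
};

/*
 * Zero each member individually instead of one memset spanning several
 * members: every store stays inside its own subobject, so the fortified
 * memset / -Warray-bounds diagnostic cannot fire.
 */
static void rss_cfg_clear_reserved(struct virtio_net_rss_cfg_hdr *cfg)
{
    cfg->indirection_table_mask = 0;
    cfg->unclassified_queue = 0;
    cfg->indirection_table[0] = 0;
    cfg->max_tx_vq = 0;
}
```

The four explicit stores cover the same eight bytes as the original
memset over `reserved[4]`, but without crossing member boundaries.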

Thanks!


>
>
> Dropped for now, thanks!
>
>
>
>>> +
>>> +    table = g_malloc_n(n->rss_data.indirections_len,
>>> +                       sizeof(n->rss_data.indirections_table[0]));
>>> +    for (int i = 0; i < n->rss_data.indirections_len; ++i) {
>>> +        table[i] = cpu_to_le16(n->rss_data.indirections_table[i]);
>>> +    }
>>> +
>>> +    /*
>>> +     * Consider that virtio_net_handle_rss() currently does not restore the
>>> +     * hash key length parsed from the CVQ command sent from the guest into
>>> +     * n->rss_data and uses the maximum key length in other code, so we also
>>> +     * employ the maximum key length here.
>>> +     */
>>> +    cfg.hash_key_length = sizeof(n->rss_data.key);
>>> +
>>> +    const struct iovec data[] = {
>>> +        {
>>> +            .iov_base = &cfg,
>>> +            .iov_len = offsetof(struct virtio_net_rss_config,
>>> +                                indirection_table),
>>> +        }, {
>>> +            .iov_base = table,
>>> +            .iov_len = n->rss_data.indirections_len *
>>> +                       sizeof(n->rss_data.indirections_table[0]),
>>> +        }, {
>>> +            .iov_base = &cfg.max_tx_vq,
>>> +            .iov_len = offsetof(struct virtio_net_rss_config, hash_key_data) -
>>> +                       offsetof(struct virtio_net_rss_config, max_tx_vq),
>>> +        }, {
>>> +            .iov_base = (void *)n->rss_data.key,
>>> +            .iov_len = sizeof(n->rss_data.key),
>>> +        }
>>> +    };
>>> +
>>> +    r = vhost_vdpa_net_load_cmd(s, out_cursor, in_cursor,
>>> +                                VIRTIO_NET_CTRL_MQ,
>>> +                                VIRTIO_NET_CTRL_MQ_HASH_CONFIG,
>>> +                                data, ARRAY_SIZE(data));
>>> +    if (unlikely(r < 0)) {
>>> +        return r;
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>> +
>>>   static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
>>>                                     const VirtIONet *n,
>>>                                     struct iovec *out_cursor,
>>> @@ -842,6 +924,15 @@ static int vhost_vdpa_net_load_mq(VhostVDPAState *s,
>>>           return r;
>>>       }
>>>
>>> +    if (!virtio_vdev_has_feature(&n->parent_obj, VIRTIO_NET_F_HASH_REPORT)) {
>>> +        return 0;
>>> +    }
>>> +
>>> +    r = vhost_vdpa_net_load_rss(s, n, out_cursor, in_cursor, false);
>>> +    if (unlikely(r < 0)) {
>>> +        return r;
>>> +    }
>>> +
>>>       return 0;
>>>   }
>>>
>>> --
>>> MST
>>>
>>>
>


