Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration

From: Jason Wang
Subject: Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration
Date: Mon, 18 Mar 2024 11:22:17 +0800
On Sat, Mar 16, 2024 at 2:45 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
>
>
> On 3/14/2024 9:03 PM, Jason Wang wrote:
> > On Fri, Mar 15, 2024 at 5:39 AM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
> >> On setups with one or more virtio-net devices with vhost on, the
> >> cost of each dirty tracking iteration increases with the number of
> >> queues that are set up; e.g. on idle guest migration the following
> >> is observed with virtio-net with vhost=on:
> >>
> >> 48 queues -> 78.11% [.] vhost_dev_sync_region.isra.13
> >> 8 queues -> 40.50% [.] vhost_dev_sync_region.isra.13
> >> 1 queue -> 6.89% [.] vhost_dev_sync_region.isra.13
> >> 2 devices, 1 queue -> 18.60% [.] vhost_dev_sync_region.isra.14
> >>
> >> With high memory dirtying rates the symptom is lack of convergence
> >> as soon as there is a vhost device with a sufficiently high number
> >> of queues, or a sufficient number of vhost devices.
> >>
> >> On every migration iteration (every 100 msec) the *shared log* is
> >> redundantly queried once for each queue configured with vhost that
> >> exists in the guest. For the virtqueue data this is necessary, but
> >> not for the memory sections, which are the same for all of them. So
> >> essentially we end up scanning the dirty log too often.
> >>
> >> To fix that, select one vhost device to be responsible for scanning
> >> the log with regard to memory section dirty tracking. It is selected
> >> when we enable the logger (during migration) and cleared when we
> >> disable the logger. If the vhost logger device goes away for some
> >> reason, the logger is re-selected from the remaining vhost devices.
> >>
> >> After making the mem-section logger a singleton instance, a constant
> >> cost of 7%-9% (similar to the 1 queue report) is seen, no matter how
> >> many queues or how many vhost devices are configured:
> >>
> >> 48 queues -> 8.71% [.] vhost_dev_sync_region.isra.13
> >> 2 devices, 8 queues -> 7.97% [.] vhost_dev_sync_region.isra.14
> >>
> >> Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
> >> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> >> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
> >>
> >> ---
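
A minimal standalone sketch of the election scheme the commit message describes may help here: one logger list per backend type, where only the current list head scans the memory sections. The names vhost_log_devs and vhost_dev_should_log follow the patch below; the elect/unelect helpers, the struct layout and the trimmed-down backend enum are illustrative assumptions, not the patch's actual code.

#include <assert.h>
#include <stdbool.h>
#include <sys/queue.h>   /* LIST_* macros; QEMU's QLIST_* behave the same way */

typedef enum {
    VHOST_BACKEND_TYPE_NONE = 0,
    VHOST_BACKEND_TYPE_KERNEL = 1,
    VHOST_BACKEND_TYPE_USER = 2,
    VHOST_BACKEND_TYPE_MAX = 3,
} VhostBackendType;

struct vhost_dev {
    VhostBackendType backend_type;      /* stand-in for dev->vhost_ops->backend_type */
    LIST_ENTRY(vhost_dev) logdev_entry; /* linkage on the per-backend logger list */
};

/* One logger list per backend type, mirroring vhost_log_devs[] in the patch. */
static LIST_HEAD(vhost_log_devs_head, vhost_dev) vhost_log_devs[VHOST_BACKEND_TYPE_MAX];

/* Called when dirty logging is enabled for @dev (e.g. at migration start). */
static void vhost_dev_elect_mem_logger(struct vhost_dev *dev)
{
    LIST_INSERT_HEAD(&vhost_log_devs[dev->backend_type], dev, logdev_entry);
}

/* Called when logging is disabled or the device goes away; the next device
 * on the list, if any, becomes the memory-section logger automatically. */
static void vhost_dev_unelect_mem_logger(struct vhost_dev *dev)
{
    LIST_REMOVE(dev, logdev_entry);
}

/* Only the current head of the per-backend list scans the memory sections;
 * every device still syncs its own virtqueues. */
static bool vhost_dev_should_log(struct vhost_dev *dev)
{
    assert(dev->backend_type > VHOST_BACKEND_TYPE_NONE &&
           dev->backend_type < VHOST_BACKEND_TYPE_MAX);
    return dev == LIST_FIRST(&vhost_log_devs[dev->backend_type]);
}

The intended effect is that memory sections are scanned once per backend type per migration iteration, while each device still syncs its own virtqueue regions.
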
> >> v3 -> v4:
> >> - add comment to clarify effect on cache locality and
> >> performance
> >>
> >> v2 -> v3:
> >> - add after-fix benchmark to commit log
> >> - rename vhost_log_dev_enabled to vhost_dev_should_log
> >> - remove unneeded comparisons for backend_type
> >> - use QLIST array instead of single flat list to store vhost
> >> logger devices
> >> - simplify logger election logic
> >> ---
> >>  hw/virtio/vhost.c         | 67 ++++++++++++++++++++++++++++++++++++++++++-----
> >>  include/hw/virtio/vhost.h |  1 +
> >>  2 files changed, 62 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> >> index 612f4db..58522f1 100644
> >> --- a/hw/virtio/vhost.c
> >> +++ b/hw/virtio/vhost.c
> >> @@ -45,6 +45,7 @@
> >>
> >> static struct vhost_log *vhost_log[VHOST_BACKEND_TYPE_MAX];
> >> static struct vhost_log *vhost_log_shm[VHOST_BACKEND_TYPE_MAX];
> >> +static QLIST_HEAD(, vhost_dev) vhost_log_devs[VHOST_BACKEND_TYPE_MAX];
> >>
> >>  /* Memslots used by backends that support private memslots (without an fd). */
> >> static unsigned int used_memslots;
> >> @@ -149,6 +150,47 @@ bool vhost_dev_has_iommu(struct vhost_dev *dev)
> >> }
> >> }
> >>
> >> +static inline bool vhost_dev_should_log(struct vhost_dev *dev)
> >> +{
> >> +    assert(dev->vhost_ops);
> >> +    assert(dev->vhost_ops->backend_type > VHOST_BACKEND_TYPE_NONE);
> >> +    assert(dev->vhost_ops->backend_type < VHOST_BACKEND_TYPE_MAX);
> >> +
> >> +    return dev == QLIST_FIRST(&vhost_log_devs[dev->vhost_ops->backend_type]);
> > A dumb question: why not simply check
> >
> > dev->log == vhost_log_shm[dev->vhost_ops->backend_type]
> Because we are not sure if the logger comes from vhost_log_shm[] or
> vhost_log[]. Don't want to complicate the check here by calling into
> vhost_dev_log_is_shared() every time .log_sync() is called.
It has very low overhead, doesn't it?
static bool vhost_dev_log_is_shared(struct vhost_dev *dev)
{
    return dev->vhost_ops->vhost_requires_shm_log &&
           dev->vhost_ops->vhost_requires_shm_log(dev);
}
And it helps to simplify the logic.
Thanks
>
> -Siwei
> > ?
> >
> > Thanks
> >
>
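
The simplification suggested above could look roughly like the fragment below: instead of maintaining a per-backend logger list, compare the device's log against the per-backend global log, choosing the pool via vhost_dev_log_is_shared(). This is only an illustration of the idea under discussion; it assumes the vhost_log[] / vhost_log_shm[] arrays and the dev->log field of hw/virtio/vhost.c quoted in this thread, and is not code from the patch.

static bool vhost_dev_should_log(struct vhost_dev *dev)
{
    VhostBackendType backend_type = dev->vhost_ops->backend_type;

    /* Pick the per-backend log pool this device would allocate from ... */
    struct vhost_log *log = vhost_dev_log_is_shared(dev) ?
                            vhost_log_shm[backend_type] :
                            vhost_log[backend_type];

    /* ... and check whether this device currently references that log. */
    return dev->log == log;
}

Compared with the list-based election in the patch, this avoids maintaining vhost_log_devs[] at the cost of an extra vhost_requires_shm_log() indirection on every log_sync, which is the call Si-Wei would rather not make each time and that Jason considers cheap.
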
- [PATCH v4 1/2] vhost: dirty log should be per backend type, Si-Wei Liu, 2024/03/14
- [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/14
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/15
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/15
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang (this message)
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/18
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/19
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/20
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/20
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/21
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/22
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/22
- Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/25
- Re: [External] : Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Si-Wei Liu, 2024/03/25
- Re: [External] : Re: [PATCH v4 2/2] vhost: Perform memory section dirty scans once per iteration, Jason Wang, 2024/03/26