Re: [PATCH] vhost: Perform memory section dirty scans once per iteration
From: Michael S. Tsirkin
Subject: Re: [PATCH] vhost: Perform memory section dirty scans once per iteration
Date: Wed, 18 Oct 2023 01:55:46 -0400
On Tue, Oct 17, 2023 at 05:32:34PM -0700, Si-Wei Liu wrote:
>
>
> On 10/6/2023 2:48 AM, Michael S. Tsirkin wrote:
> > On Fri, Oct 06, 2023 at 09:58:30AM +0100, Joao Martins wrote:
> > > On 03/10/2023 15:01, Michael S. Tsirkin wrote:
> > > > On Wed, Sep 27, 2023 at 12:14:28PM +0100, Joao Martins wrote:
> > > > > On setups with one or more virtio-net devices with vhost on,
> > > > > the cost of each dirty tracking iteration increases with the
> > > > > number of queues that are set up. For example, on an idle guest
> > > > > migration the following is observed with virtio-net with vhost=on:
> > > > >
> > > > > 48 queues -> 78.11% [.] vhost_dev_sync_region.isra.13
> > > > > 8 queues -> 40.50% [.] vhost_dev_sync_region.isra.13
> > > > > 1 queue -> 6.89% [.] vhost_dev_sync_region.isra.13
> > > > > 2 devices, 1 queue -> 18.60% [.] vhost_dev_sync_region.isra.14
> > > > >
> > > > > With high memory update rates the symptom is lack of convergence
> > > > > as soon as there is a vhost device with a sufficiently high number
> > > > > of queues, or a sufficient number of vhost devices.
> > > > >
> > > > > On every migration iteration (every 100 msecs) the *shared log* is
> > > > > redundantly queried once per vhost queue configured in the guest.
> > > > > For the virtqueue data this is necessary, but not for the memory
> > > > > sections, which are the same across devices. So essentially we end
> > > > > up scanning the dirty log far too often.
> > > > >
> > > > > To fix that, select a single vhost device responsible for scanning
> > > > > the log for memory section dirty tracking. It is selected when we
> > > > > enable the logger (during migration) and cleared when we disable
> > > > > the logger.
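> > > > >
> > > > > Roughly, the idea is the following (a minimal sketch only; the
> > > > > helper names are made up here for illustration, they are not the
> > > > > actual patch):
> > > > >
> > > > >     /* elected device that scans memory sections in the shared log */
> > > > >     static struct vhost_dev *mem_logger;
> > > > >
> > > > >     static void vhost_mem_logger_enable(struct vhost_dev *dev)
> > > > >     {
> > > > >         if (!mem_logger) {
> > > > >             mem_logger = dev;   /* first device to enable logging */
> > > > >         }
> > > > >     }
> > > > >
> > > > >     static void vhost_mem_logger_disable(struct vhost_dev *dev)
> > > > >     {
> > > > >         if (mem_logger == dev) {
> > > > >             mem_logger = NULL;  /* cleared when logging stops */
> > > > >         }
> > > > >     }
> > > > >
> > > > >     static void vhost_log_sync(struct vhost_dev *dev)
> > > > >     {
> > > > >         if (dev == mem_logger) {
> > > > >             /* only the elected device scans RAM sections */
> > > > >             vhost_sync_memory_sections(dev);
> > > > >         }
> > > > >         /* every device still scans its own virtqueues */
> > > > >         vhost_sync_virtqueues(dev);
> > > > >     }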
> > > > >
> > > > > The real problem, however, is exactly that: a device per vhost
> > > > > worker/qp, when there should be one device representing a netdev
> > > > > (for N vhost workers). Given this problem exists in any QEMU these
> > > > > days, I figured a simpler solution is better for increasing stable
> > > > > tree coverage; thus don't change the device model of sw vhost to
> > > > > fix this "over log scan" issue.
> > > > >
> > > > > Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> > > > > ---
> > > > > I am not fully sure the heuristic captures the myriad of different
> > > > > vhost devices -- I think so. IIUC, the log is always shared; it's
> > > > > just a question of whether it lives in QEMU heap memory or in
> > > > > /dev/shm when other processes want to access it.
> > > > Thanks for working on this.
> > > >
> > > > I don't think this works like this, because different types of vhost
> > > > devices have different regions - see e.g. vhost_region_add_section.
> > > > I am also not sure all devices are running at the same time - e.g.
> > > > some could be disconnected, and vhost_sync_dirty_bitmap takes this
> > > > into account.
> > > >
> > > Good point. But this all means the logic for selecting the 'logger'
> > > needs to take into consideration whether vhost_dev::log_enabled or
> > > vhost_dev::started is set, right?
> > >
> > > With respect to regions, it seems like this can only change depending
> > > on whether one of the vhost devices' backend_type is
> > > VHOST_BACKEND_TYPE_USER *and* whether the backend sets
> > > vhost_backend_can_merge?
> > >
> > > With respect to 'could be disconnected': devices can't be added or
> > > removed during migration, so that might not be something that occurs
> > > here. I placed this in log_sync exactly to cover just migration, unless
> > > there's some other way that disconnects the vhost and changes these
> > > variables during migration.
> > The *frontend* can't be added or removed (ATM - this is just because we
> > lack good ways to describe devices that can be migrated, so all we came
> > up with is passing the same command line on both sides, and this breaks
> > if you add/remove things in the process).
> > We really shouldn't bake this assumption into code if we can
> > help it though.
> >
> > But I digress.
> >
> > The *backend* can disconnect at any time as this is not guest visible.
> >
> > > > But the idea is I think a good one - I just feel more refactoring is
> > > > needed.
> > > Can you expand on what refactoring you were thinking for this fix?
> > Better to separate the idea of logging from the device. Then we can
> > have a single logger that collects data from devices to decide
> > what needs to be logged.
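> >
> > Something along these lines, say (a sketch only; the struct and field
> > names here are invented for illustration):
> >
> >     /* one logger object, decoupled from any particular vhost_dev */
> >     typedef struct VhostLogger {
> >         QLIST_HEAD(, vhost_dev) devices;   /* devices registered with it */
> >     } VhostLogger;
> >
> >     static void vhost_logger_sync(VhostLogger *logger)
> >     {
> >         struct vhost_dev *dev;
> >
> >         QLIST_FOREACH(dev, &logger->devices, logger_entry) {
> >             /* skip devices that are disconnected or not logging */
> >             if (!dev->started || !dev->log_enabled) {
> >                 continue;
> >             }
> >             /* collect this device's regions; scan each distinct
> >              * region of the shared log exactly once */
> >         }
> >     }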
> Discussion. I think the troublemaker here is the vhost-user clients that
> attempt to round down & up to (huge) page boundaries and then have to merge
> adjacent sections, leading to differing views between vhost devices. While I
> agree it is a great idea to separate logging from the device, it isn't clear
> to me how that helps the case where there is a mix of both vhost-user and
> vhost-kernel clients in the same QEMU process; wouldn't that need at least
> 2 separate vhost loggers, one per vhost type? Or do you think there's value
> in unifying the two distinct subsystems with one single vhost logger
> facility?
Yes - I think we need a logger per backend type. Reference-count them, too.
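
Very roughly something like this (a sketch only; the names are invented
for illustration, not an actual patch):

    typedef struct VhostLogger {
        struct vhost_log *log;   /* shared dirty log for this backend type */
        int refcount;
    } VhostLogger;

    static VhostLogger vhost_loggers[VHOST_BACKEND_TYPE_MAX];

    static VhostLogger *vhost_logger_get(VhostBackendType type)
    {
        VhostLogger *logger = &vhost_loggers[type];

        if (logger->refcount++ == 0) {
            /* first device of this backend type starts logging:
             * allocate/attach the shared log here */
        }
        return logger;
    }

    static void vhost_logger_put(VhostLogger *logger)
    {
        if (--logger->refcount == 0) {
            /* last device of this backend type stopped logging:
             * release the shared log and stop the section scan */
        }
    }
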
> Note that the vhost logging interface (vhost kernel or vhost userspace)
> doesn't support the notion of logging memory buffer sections separately
> from those for VQs; all QEMU can rely on is the various sections in the
> memory table, and basically a single dirty bitmap for both guest buffers
> and VQs is shared indiscriminately by all vhost devices. I am not sure how
> it helps to just refactor the QEMU side of the code on top of today's
> vhost backend interface.
>
> Regardless, IMHO from a stable-fix point of view it might be less risky,
> and still valuable, to just limit the fix to the vhost-kernel case (to be
> more precise, non-vhost-user types without vhost_backend_can_merge
> defined), my 2c.
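>
> For illustration, the gate could look something like the below (the
> function name is made up and the field accesses are approximate; this is
> not a tested patch):
>
>     static bool vhost_dev_should_scan_sections(struct vhost_dev *dev)
>     {
>         /* vhost-user backends may round sections to (huge) page
>          * boundaries and merge adjacent ones, so their view of the
>          * memory table can differ from that of other devices */
>         return dev->vhost_ops->backend_type != VHOST_BACKEND_TYPE_USER &&
>                !dev->vhost_ops->vhost_backend_can_merge;
>     }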
>
>
> Regards,
> -Siwei
> >
> > > My thinking on this bug was mostly to address the inefficiency with the
> > > least intrusive fix (if at all possible!), given that virtually all
> > > QEMU versions with multiqueue vhost support have this problem. And then
> > > move to a 'vhost device for all queues', as it feels like the problem
> > > here is that the 'device per queue pair' model doesn't scale.
> > >
> > > At the end of the day the problem here is the vhost object model in
> > > log_sync not scaling with the number of queues. But you could also
> > > argue that if the log is shared, you can just scan it once for all
> > > devices, plus once more for each deviation from normal behaviour, like
> > > the points you made in the earlier paragraph, and thus the thinking
> > > behind this patch would still apply?
> > The thinking is good, but not the implementation.
> >