Re: [PATCH 0/3] migration: Downtime tracepoints
From: Joao Martins
Subject: Re: [PATCH 0/3] migration: Downtime tracepoints
Date: Thu, 26 Oct 2023 17:06:37 +0100
User-agent: Mozilla Thunderbird
On 26/10/2023 16:53, Peter Xu wrote:
> This small series (actually only the last patch; first two are cleanups)
> wants to improve ability of QEMU downtime analysis similarly to what Joao
> used to propose here:
>
> https://lore.kernel.org/r/20230926161841.98464-1-joao.m.martins@oracle.com
>
Thanks for following up on the idea; it's been hard to find enough bandwidth for
everything over the past few weeks :(
> But with a few differences:
>
> - Nothing exported yet to qapi, all tracepoints so far
>
> - Instead of major checkpoints (stop, iterable, non-iterable, resume-rp),
> finer granule by providing downtime measurements for each vmstate (I
> made microsecond to be the unit to be accurate). So far it seems
> iterable / non-iterable is the core of the problem, and I want to nail
> it to per-device.
>
> - Trace dest QEMU too
>
> For the last bullet: consider the case where a device save() can be super
> fast, while load() can actually be super slow. Both of them will
> contribute to the ultimate downtime, but not a simple summary: when src
> QEMU is save()ing on device1, dst QEMU can be load()ing on device2. So
> they can run in parallel. However the only way to figure all components of
> the downtime is to record both.
>
> Please have a look, thanks.
>
I like your series, as it allows a user to pinpoint one particular bad device,
while covering the load side too. The migration checkpoints, on the other hand,
were useful -- while also a bit ugly -- for the big picture of how downtime
breaks down. Perhaps we could add those /also/ as tracepoints, without
specifically committing to exposing them in QAPI.
More fundamentally, how can one capture the 'stop' part? There's also time spent
there, e.g. quiescing/stopping vhost-net workers, or suspending the VF device.
All of it is likely as significant as the device-state/RAM related parts those
tracepoints cover (the iterable and non-iterable portions).
> Peter Xu (3):
> migration: Set downtime_start even for postcopy
> migration: Add migration_downtime_start|end() helpers
> migration: Add per vmstate downtime tracepoints
>
> migration/migration.c | 38 +++++++++++++++++++++-----------
> migration/savevm.c | 49 ++++++++++++++++++++++++++++++++++++++----
> migration/trace-events | 2 ++
> 3 files changed, 72 insertions(+), 17 deletions(-)
>