Re: [PATCH v1] docs/devel: Add VFIO device migration documentation


From: Cornelia Huck
Subject: Re: [PATCH v1] docs/devel: Add VFIO device migration documentation
Date: Thu, 29 Oct 2020 12:52:21 +0100

On Thu, 29 Oct 2020 11:23:11 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> Document interfaces used for VFIO device migration. Added flow of state
> changes during live migration with VFIO device.
> 
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> ---
>  MAINTAINERS                   |   1 +
>  docs/devel/vfio-migration.rst | 119 ++++++++++++++++++++++++++++++++++++++++++

You probably want to include this in the Developer's Guide via
index.rst.

>  2 files changed, 120 insertions(+)
>  create mode 100644 docs/devel/vfio-migration.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6a197bd358d6..6f3fcffc6b3d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1728,6 +1728,7 @@ M: Alex Williamson <alex.williamson@redhat.com>
>  S: Supported
>  F: hw/vfio/*
>  F: include/hw/vfio/
> +F: docs/devel/vfio-migration.rst
>  
>  vfio-ccw
>  M: Cornelia Huck <cohuck@redhat.com>
> diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
> new file mode 100644
> index 000000000000..dab9127825e4
> --- /dev/null
> +++ b/docs/devel/vfio-migration.rst
> @@ -0,0 +1,119 @@
> +=====================
> +VFIO device Migration
> +=====================
> +
> +VFIO devices use iterative approach for migration because certain VFIO devices

s/use/use an/ ?

> +(e.g. GPU) have large amount of data to be transfered. The iterative pre-copy
> +phase of migration allows for the guest to continue whilst the VFIO device state
> +is transferred to destination, this helps to reduce the total downtime of the

s/to destination,/to the destination;/

> +VM. VFIO devices can choose to skip the pre-copy phase of migration by returning
> +pending_bytes as zero during pre-copy phase.

s/during/during the/

> +
> +Detailed description of UAPI for VFIO device for migration is in the comment
> +above ``vfio_device_migration_info`` structure definition in header file
> +linux-headers/linux/vfio.h.

I think I'd copy that to this file. If I'm looking at the
documentation, I'd rather not go hunting for source code to find out
what structure you are talking about. Plus, as it's UAPI, I don't
expect it to change much, so it should be easy to keep the definitions
in sync (famous last words).
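
For reference, the structure in question looks like this (quoting from
memory of the uapi, so please double-check against the actual
linux-headers/linux/vfio.h before copying anything):

struct vfio_device_migration_info {
        __u32 device_state;         /* VFIO device state */
#define VFIO_DEVICE_STATE_STOP      (0)
#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
        __u32 reserved;
        __u64 pending_bytes;        /* data yet to be saved */
        __u64 data_offset;          /* offset of current chunk in the region */
        __u64 data_size;            /* size of current chunk */
};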

> +
> +VFIO device hooks for iterative approach:
> +-  A ``save_setup`` function that setup migration region, sets _SAVING flag in

s/setup/sets up the/
s/in/in the/

> +VFIO device state and inform VFIO IOMMU module to start dirty page tracking.

s/inform/informs the/

> +
> +- A ``load_setup`` function that setup migration region on the destination and

s/setup/sets up the/

> +sets _RESUMING flag in VFIO device state.

s/in/in the/

> +
> +- A ``save_live_pending`` function that reads pending_bytes from vendor driver
> +that indicate how much more data the vendor driver yet to save for the VFIO
> +device.

"A ``save_live_pending`` function that reads pending_bytes from the
vendor driver, which indicates the amount of data that the vendor
driver has yet to save for the VFIO device." ?

> +
> +- A ``save_live_iterate`` function that reads VFIO device's data from vendor

s/reads/reads the/
s/from/from the/

> +driver through migration region during iterative phase.

s/through/through the/

> +
> +- A ``save_live_complete_precopy`` function that resets _RUNNING flag from VFIO

s/from/from the/

> +device state, saves device config space, if any, and iteratively copies

s/saves/saves the/

> +remaining data for VFIO device till pending_bytes returned by vendor driver
> +is zero.

"...and interactively copies the remaining data for the VFIO device
until the vendor driver indicates that no data remains (pending_bytes
is zero)." ?

> +
> +- A ``load_state`` function loads config section and data sections generated by
> +above save functions.

"A ``load_state`` function that loads the config section and the data
sections that are generated by the save functions above." ?

> +
> +- ``cleanup`` functions for both save and load that unmap migration region.

..."that perform any migration-related cleanup, including unmapping the
migration region." ?
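
It might also help readers to see how these hooks are wired up. IIRC
the series does something roughly like the following in
hw/vfio/migration.c (typed from memory, so treat it as a sketch rather
than the actual code):

static SaveVMHandlers savevm_vfio_handlers = {
    .save_setup = vfio_save_setup,
    .save_cleanup = vfio_save_cleanup,
    .save_live_pending = vfio_save_pending,
    .save_live_iterate = vfio_save_iterate,
    .save_live_complete_precopy = vfio_save_complete_precopy,
    .load_setup = vfio_load_setup,
    .load_cleanup = vfio_load_cleanup,
    .load_state = vfio_load_state,
};

/* registered once per device, keyed by the device's id */
register_savevm_live(id, VMSTATE_INSTANCE_ID_ANY, 1,
                     &savevm_vfio_handlers, vbasedev);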

> +
> +VM state change handler is registered to change VFIO device state based on VM
> +state change.

"A VM state change handler is registered to change the VFIO device
state when the VM state changes." ?

> +
> +Similarly, a migration state change notifier is added to get a notification on

s/added/registered/ ?

> +migration state change. These states are translated to VFIO device state and
> +conveyed to vendor driver.
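
Maybe worth illustrating as well; assuming a helper like
vfio_migration_set_state(vbasedev, mask, value) from this series (names
from memory), the registration boils down to something like:

static void vfio_vmstate_change(void *opaque, int running, RunState state)
{
    VFIODevice *vbasedev = opaque;

    /* set or clear _RUNNING, preserving the other device state bits */
    vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_RUNNING,
                             running ? VFIO_DEVICE_STATE_RUNNING : 0);
}

migration->vm_state = qemu_add_vm_change_state_handler(vfio_vmstate_change,
                                                       vbasedev);
migration->migration_state.notify = vfio_migration_state_notifier;
add_migration_state_change_notifier(&migration->migration_state);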
> +
> +System memory dirty pages tracking
> +----------------------------------
> +
> +A ``log_sync`` memory listener callback is added to mark system memory pages

s/is added to mark/marks those/ 

> +as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried

s/by/by the/
s/Dirty/The dirty/

> +per container. All pages pinned by vendor driver through vfio_pin_pages()

s/by/by the/

> +external API have to be marked as dirty during migration. When there are CPU
> +writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
> +by vendor driver can also be written by device. There is currently no device

s/by/by the/ (x2)

> +which has hardware support for dirty page tracking. So all pages which are
> +pinned by vendor driver are considered as dirty.
> +Dirty pages are tracked when device is in stop-and-copy phase because if pages
> +are marked dirty during pre-copy phase and content is transfered from source to
> +destination, there is no way to know newly dirtied pages from the point they
> +were copied earlier until device stops. To avoid repeated copy of same content,
> +pinned pages are marked dirty only during stop-and-copy phase.

Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?
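
The callback itself is then fairly small; modulo the helpers from this
series (again, names from memory), I'd expect it to look something like:

static void vfio_listener_log_sync(MemoryListener *listener,
                                   MemoryRegionSection *section)
{
    VFIOContainer *container = container_of(listener, VFIOContainer,
                                            listener);

    /* only report pinned pages once we are in the stop-and-copy phase */
    if (vfio_devices_all_stopped_and_saving(container)) {
        vfio_get_dirty_bitmap(container, section);
    }
}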

> +
> +System memory dirty pages tracking when vIOMMU is enabled
> +---------------------------------------------------------
> +With vIOMMU, IO virtual address range can get unmapped while in pre-copy phase

s/IO/an I/O/

> +of migration. In that case, unmap ioctl returns pages pinned in that range and

s/unmap ioctl returns pages pinned/the unmap ioctl returns any pinned pages/ ?

> +QEMU reports corresponding guest physical pages dirty.
> +During stop-and-copy phase, an IOMMU notifier is used to get a callback for
> +mapped pages and then dirty pages bitmap is fetched from VFIO IOMMU modules for
> +those mapped ranges.
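
Perhaps a short example of the unmap flow would help here, too. With
the VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP flag, the ioctl call is along
these lines (a sketch from memory, error handling omitted):

struct vfio_iommu_type1_dma_unmap *unmap;
struct vfio_bitmap *bitmap;
uint64_t pages = size / qemu_real_host_page_size;

unmap = g_malloc0(sizeof(*unmap) + sizeof(*bitmap));
unmap->argsz = sizeof(*unmap) + sizeof(*bitmap);
unmap->flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP;
unmap->iova = iova;
unmap->size = size;

/* the bitmap description is appended to the unmap request */
bitmap = (struct vfio_bitmap *)&unmap->data;
bitmap->pgsize = qemu_real_host_page_size;
bitmap->size = ROUND_UP(pages, sizeof(__u64) * BITS_PER_BYTE) /
               BITS_PER_BYTE;
bitmap->data = g_try_malloc0(bitmap->size);

ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, unmap);
/* bitmap->data now holds the dirty bits for pinned pages in the range */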
> +
> +Flow of state changes during Live migration
> +===========================================
> +Below is the flow of state change during live migration where states in brackets
> +represent VM state, migration state and VFIO device state as:
> +                (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)

"The values in the brackets represent the VM state, the migration
state, and the VFIO device state, respectively." ?

> +
> +Live migration save path
> +------------------------
> +                        QEMU normal running state
> +                        (RUNNING, _NONE, _RUNNING)
> +                                    |
> +                       migrate_init spawns migration_thread
> +                Migration thread then calls each device's .save_setup()
> +                        (RUNNING, _SETUP, _RUNNING|_SAVING)
> +                                    |
> +                        (RUNNING, _ACTIVE, _RUNNING|_SAVING)
> +            If device is active, get pending_bytes by .save_live_pending()
> +         if total pending_bytes >= threshold_size, call .save_live_iterate()
> +                  Data of VFIO device for pre-copy phase is copied
> +     Iterate till total pending bytes converge and are less than threshold
> +                                    |
> +   On migration completion, vCPUs stops and calls .save_live_complete_precopy

s/vCPUs/vCPU/ ?

> +   for each active device. VFIO device is then transitioned in _SAVING state

s/VFIO device/The VFIO device/

s/in/into/

> +                    (FINISH_MIGRATE, _DEVICE, _SAVING)
> +                                    |
> +For VFIO device, iterate in .save_live_complete_precopy until pending data is 0

s/For/For the/

> +                    (FINISH_MIGRATE, _DEVICE, _STOPPED)
> +                                    |
> +                    (FINISH_MIGRATE, _COMPLETED, _STOPPED)
> +                Migraton thread schedule cleanup bottom half and exit

s/schedule/schedules/
s/exit/exits/
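
Nits aside, the iteration the diagram refers to might deserve a code
sketch somewhere in the doc; per the uapi, each round trip is roughly
(error handling and mmap'able sparse areas omitted, names illustrative):

for (;;) {
    __u64 pending_bytes, data_offset, data_size;

    /* reading pending_bytes (re)arms the current data chunk */
    pread(vbasedev->fd, &pending_bytes, sizeof(pending_bytes),
          region->fd_offset + offsetof(struct vfio_device_migration_info,
                                       pending_bytes));
    if (!pending_bytes) {
        break;    /* vendor driver has no more data to save */
    }

    pread(vbasedev->fd, &data_offset, sizeof(data_offset),
          region->fd_offset + offsetof(struct vfio_device_migration_info,
                                       data_offset));
    pread(vbasedev->fd, &data_size, sizeof(data_size),
          region->fd_offset + offsetof(struct vfio_device_migration_info,
                                       data_size));

    /* copy data_size bytes from region->fd_offset + data_offset onto
     * the migration stream */
}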

> +
> +Live migration resume path
> +--------------------------
> +
> +             Incoming migration calls .load_setup for each device
> +                        (RESTORE_VM, _ACTIVE, _STOPPED)
> +                                    |
> +    For each device, .load_state is called for that device section data
> +                        (RESTORE_VM, _ACTIVE, _RESUMING)
> +                                    |
> +    At the end, called .load_cleanup for each device and vCPUs are started                        |

Stray '|', probably should go on the next line?

s/called .load_cleanup/.load_cleanup is called/


> +                        (RUNNING, _NONE, _RUNNING)
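
A matching sketch for the resume side might be nice for symmetry (same
caveats as above): the destination writes each chunk from the stream
into the migration region, and writing data_size signals the vendor
driver to consume it:

__u64 data_offset, data_size = chunk_size;    /* chunk_size from the stream */

pread(vbasedev->fd, &data_offset, sizeof(data_offset),
      region->fd_offset + offsetof(struct vfio_device_migration_info,
                                   data_offset));

/* write chunk_size bytes of stream data at region->fd_offset +
 * data_offset, then report the size to hand the chunk over */
pwrite(vbasedev->fd, &data_size, sizeof(data_size),
       region->fd_offset + offsetof(struct vfio_device_migration_info,
                                    data_size));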
> +
> +
> +Postcopy
> +========
> +Postcopy migration is not supported for VFIO devices.

s/not/not yet/, I think? (Disregarding any time frame here.)

Generally, I think this is useful information to have in the docs.



