[PULL 07/25] migration/multifd: Cleanup src flushes on condition check
From: Fabiano Rosas
Subject: [PULL 07/25] migration/multifd: Cleanup src flushes on condition check
Date: Fri, 10 Jan 2025 09:13:55 -0300
From: Peter Xu <peterx@redhat.com>
The src flush condition check is over-complicated, and it will get even more
out of control once postcopy is involved.
In general, we have two ways to do the sync: the legacy mode and the modern
mode. Legacy uses a per-section flush, while modern uses a per-round flush.
Mapped-ram always uses the modern, per-round mode.
Introduce two helpers, which can greatly simplify the code, and hopefully
make it readable again.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206224755.1108686-7-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/multifd-nocomp.c | 42 ++++++++++++++++++++++++++++++++++++++
migration/multifd.h | 2 ++
migration/ram.c | 10 +++------
3 files changed, 47 insertions(+), 7 deletions(-)
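As a quick aside, the decision logic that the two new helpers encode can be
summarized in a minimal standalone sketch (not QEMU code: plain booleans stand
in for the migrate_multifd(), migrate_mapped_ram() and
migrate_multifd_flush_after_each_section() capability checks, and the
sync_per_section()/sync_per_round() names are illustrative only):

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the real capability checks (illustrative only). */
static bool multifd_enabled;
static bool mapped_ram;
static bool flush_after_each_section;

/* Legacy: one MULTIFD_FLAG_SYNC per RAM_SAVE_FLAG_EOS section. */
static bool sync_per_section(void)
{
    return multifd_enabled && !mapped_ram && flush_after_each_section;
}

/* Modern: one MULTIFD_FLAG_SYNC per round of RAM scan. */
static bool sync_per_round(void)
{
    return multifd_enabled && (mapped_ram || !flush_after_each_section);
}

int main(void)
{
    /* mapped-ram forces the modern, per-round mode. */
    multifd_enabled = true;
    mapped_ram = true;
    flush_after_each_section = true;
    printf("per-section=%d per-round=%d\n",
           sync_per_section(), sync_per_round());
    return 0;
}

At most one of the two predicates can be true at a time, which is what lets
each caller in ram.c replace its multi-line condition with a single helper
call.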
diff --git a/migration/multifd-nocomp.c b/migration/multifd-nocomp.c
index 58372db0f4..c1f686c0ce 100644
--- a/migration/multifd-nocomp.c
+++ b/migration/multifd-nocomp.c
@@ -344,6 +344,48 @@ retry:
     return true;
 }
 
+/*
+ * We have two modes for multifd flushes:
+ *
+ * - Per-section mode: this is the legacy way to flush, it requires one
+ *   MULTIFD_FLAG_SYNC message for each RAM_SAVE_FLAG_EOS.
+ *
+ * - Per-round mode: this is the modern way to flush, it requires one
+ *   MULTIFD_FLAG_SYNC message only for each round of RAM scan. Normally
+ *   it's paired with a new RAM_SAVE_FLAG_MULTIFD_FLUSH message in network
+ *   based migrations.
+ *
+ * One thing to mention is that mapped-ram always uses the modern way to sync.
+ */
+
+/* Do we need a per-section multifd flush (legacy way)? */
+bool multifd_ram_sync_per_section(void)
+{
+    if (!migrate_multifd()) {
+        return false;
+    }
+
+    if (migrate_mapped_ram()) {
+        return false;
+    }
+
+    return migrate_multifd_flush_after_each_section();
+}
+
+/* Do we need a per-round multifd flush (modern way)? */
+bool multifd_ram_sync_per_round(void)
+{
+    if (!migrate_multifd()) {
+        return false;
+    }
+
+    if (migrate_mapped_ram()) {
+        return true;
+    }
+
+    return !migrate_multifd_flush_after_each_section();
+}
+
 int multifd_ram_flush_and_sync(QEMUFile *f)
 {
     MultiFDSyncReq req;
diff --git a/migration/multifd.h b/migration/multifd.h
index 0fef431f6b..bd785b9873 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -355,6 +355,8 @@ static inline uint32_t multifd_ram_page_count(void)
 void multifd_ram_save_setup(void);
 void multifd_ram_save_cleanup(void);
 int multifd_ram_flush_and_sync(QEMUFile *f);
+bool multifd_ram_sync_per_round(void);
+bool multifd_ram_sync_per_section(void);
 size_t multifd_ram_payload_size(void);
 void multifd_ram_fill_packet(MultiFDSendParams *p);
 int multifd_ram_unfill_packet(MultiFDRecvParams *p, Error **errp);
diff --git a/migration/ram.c b/migration/ram.c
index 9eeb77665b..d9336d8a09 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1302,9 +1302,7 @@ static int find_dirty_block(RAMState *rs, PageSearchStatus *pss)
         pss->page = 0;
         pss->block = QLIST_NEXT_RCU(pss->block, next);
         if (!pss->block) {
-            if (migrate_multifd() &&
-                (!migrate_multifd_flush_after_each_section() ||
-                 migrate_mapped_ram())) {
+            if (multifd_ram_sync_per_round()) {
                 QEMUFile *f = rs->pss[RAM_CHANNEL_PRECOPY].pss_channel;
                 int ret = multifd_ram_flush_and_sync(f);
                 if (ret < 0) {
@@ -3178,8 +3176,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
 
 out:
     if (ret >= 0 && migration_is_running()) {
-        if (migrate_multifd() && migrate_multifd_flush_after_each_section() &&
-            !migrate_mapped_ram()) {
+        if (multifd_ram_sync_per_section()) {
             ret = multifd_ram_flush_and_sync(f);
             if (ret < 0) {
                 return ret;
@@ -3252,8 +3249,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
         }
     }
 
-    if (migrate_multifd() &&
-        migrate_multifd_flush_after_each_section()) {
+    if (multifd_ram_sync_per_section()) {
         /*
          * Only the old dest QEMU will need this sync, because each EOS
          * will require one SYNC message on each channel.
--
2.35.3