[Qemu-devel] [PATCH 7/9] migration: Add new migration state wait-unplug
From: Jens Freimann
Subject: [Qemu-devel] [PATCH 7/9] migration: Add new migration state wait-unplug
Date: Fri, 2 Aug 2019 17:06:03 +0200
This patch is not ready for inclusion yet, I'm looking for
feedback/ideas on a particular problem. See below.
This patch adds a new migration state called wait-unplug. It is
entered after the SETUP state and will transition into ACTIVE once all
devices have been successfully unplugged from the guest.
So if a guest doesn't respond, or takes a long time to honor the
unplug request, the user will see the migration state 'wait-unplug'.
It adds a new callback to VMStateDescription which is called for
every device that implements it and reports whether a device unplug
request is still pending.
Now this loop in the migration thread:
while (qemu_savevm_state_guest_unplug_pending()) { continue; }
clearly needs a condition to terminate after a certain amount of time
or number of iterations. But I'm not sure what a good solution is. How
much waiting time is acceptable for a migration?
Signed-off-by: Jens Freimann <address@hidden>
---
include/migration/vmstate.h | 2 ++
migration/migration.c | 14 ++++++++++++++
migration/savevm.c | 18 ++++++++++++++++++
migration/savevm.h | 1 +
qapi/migration.json | 5 ++++-
5 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index c2bfa7a7f0..8b2a125c4c 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -187,6 +187,8 @@ struct VMStateDescription {
int (*pre_save)(void *opaque);
int (*post_save)(void *opaque);
bool (*needed)(void *opaque);
+ bool (*dev_unplug_pending)(void *opaque);
+
const VMStateField *fields;
const VMStateDescription **subsections;
};
diff --git a/migration/migration.c b/migration/migration.c
index 8a607fe1e2..a7d21b73fe 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -946,6 +946,9 @@ static void fill_source_migration_info(MigrationInfo *info)
case MIGRATION_STATUS_CANCELLED:
info->has_status = true;
break;
+ case MIGRATION_STATUS_WAIT_UNPLUG:
+ info->has_status = true;
+ break;
}
info->status = s->state;
}
@@ -1680,6 +1683,7 @@ bool migration_is_idle(void)
case MIGRATION_STATUS_COLO:
case MIGRATION_STATUS_PRE_SWITCHOVER:
case MIGRATION_STATUS_DEVICE:
+ case MIGRATION_STATUS_WAIT_UNPLUG:
return false;
case MIGRATION_STATUS__MAX:
g_assert_not_reached();
@@ -1712,6 +1716,7 @@ void migrate_init(MigrationState *s)
error_free(s->error);
s->error = NULL;
+ /* go to WAIT_UNPLUG first? */
migrate_set_state(&s->state, MIGRATION_STATUS_NONE,
MIGRATION_STATUS_SETUP);
s->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -3218,6 +3223,15 @@ static void *migration_thread(void *opaque)
qemu_savevm_state_setup(s->to_dst_file);
+ migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
+ MIGRATION_STATUS_WAIT_UNPLUG);
+ while (qemu_savevm_state_guest_unplug_pending()) {
+ continue;
+ }
+ migrate_set_state(&s->state, MIGRATION_STATUS_WAIT_UNPLUG,
+ MIGRATION_STATUS_ACTIVE);
+
+
s->setup_time = qemu_clock_get_ms(QEMU_CLOCK_HOST) - setup_start;
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_ACTIVE);
diff --git a/migration/savevm.c b/migration/savevm.c
index 79ed44d475..2bb54b3a8a 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1085,6 +1085,24 @@ void qemu_savevm_state_header(QEMUFile *f)
}
}
+bool qemu_savevm_state_guest_unplug_pending(void)
+{
+ SaveStateEntry *se;
+ bool ret = false;
+
+ QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
+ if (!se->vmsd || !se->vmsd->dev_unplug_pending) {
+ continue;
+ }
+ ret = se->vmsd->dev_unplug_pending(se->opaque);
+ if (ret) {
+ break;
+ }
+ }
+
+ return ret;
+}
+
void qemu_savevm_state_setup(QEMUFile *f)
{
SaveStateEntry *se;
diff --git a/migration/savevm.h b/migration/savevm.h
index 51a4b9caa8..ba64a7e271 100644
--- a/migration/savevm.h
+++ b/migration/savevm.h
@@ -31,6 +31,7 @@
bool qemu_savevm_state_blocked(Error **errp);
void qemu_savevm_state_setup(QEMUFile *f);
+bool qemu_savevm_state_guest_unplug_pending(void);
int qemu_savevm_state_resume_prepare(MigrationState *s);
void qemu_savevm_state_header(QEMUFile *f);
int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
diff --git a/qapi/migration.json b/qapi/migration.json
index d567ac9fc3..c42381a85f 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -133,6 +133,9 @@
# @device: During device serialisation when pause-before-switchover is enabled
# (since 2.11)
#
+# @wait-unplug: wait for device unplug request by guest OS to be completed.
+# (since 4.2)
+#
# Since: 2.3
#
##
@@ -140,7 +143,7 @@
'data': [ 'none', 'setup', 'cancelling', 'cancelled',
'active', 'postcopy-active', 'postcopy-paused',
'postcopy-recover', 'completed', 'failed', 'colo',
- 'pre-switchover', 'device' ] }
+ 'pre-switchover', 'device', 'wait-unplug' ] }
##
# @MigrationInfo:
--
2.21.0