[PATCH 6/7] migration: Fix arrays of pointers in JSON writer
From: Fabiano Rosas
Subject: [PATCH 6/7] migration: Fix arrays of pointers in JSON writer
Date: Tue, 7 Jan 2025 16:50:24 -0300
Currently, if an array of pointers contains a NULL pointer, that
pointer will be encoded as '0' in the stream. Since the JSON writer
doesn't define a "pointer" type, that '0' will now be a uint64, which
is different from the type of the element being pointed to, e.g. a
struct.
Such a mixed-type array must not be compressed, otherwise data is
lost, because the code currently gives the whole array the type of
its first element.
While we could disable array compression whenever a NULL pointer is
found, the JSON part of the stream is still written during downtime,
so we should avoid adding unnecessary bytes to it.
Keep the array compression in place, but break the array into several
type-contiguous pieces if NULL and non-NULL pointers are mixed.
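As an editorial sketch of that idea (not part of the patch; names and
shapes are invented), the Python helper below groups a pointer array
into maximal runs of NULL/non-NULL elements; each run would then get
its own compressed descriptor:

    def split_into_runs(ptrs):
        """Group pointers into maximal all-NULL or all-non-NULL runs.

        Returns (start_index, length, is_null) tuples; each run would
        correspond to one compressed array descriptor in the stream.
        """
        runs = []
        i = 0
        while i < len(ptrs):
            is_null = ptrs[i] is None
            j = i + 1
            while j < len(ptrs) and (ptrs[j] is None) == is_null:
                j += 1
            runs.append((i, j - i, is_null))
            i = j
        return runs

    # A NULL pointer in the middle of the array yields three runs, so
    # three descriptors instead of one, keeping compression per run:
    print(split_into_runs(["devA", "devB", None, "devC"]))
    # -> [(0, 2, False), (2, 1, True), (3, 1, False)]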
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
migration/vmstate.c | 33 ++++++++++++++++++++++++++++++++-
scripts/analyze-migration.py | 9 ++++++++-
2 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/migration/vmstate.c b/migration/vmstate.c
index 52704c822c..a79ccf3875 100644
--- a/migration/vmstate.c
+++ b/migration/vmstate.c
@@ -425,15 +425,19 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
int size = vmstate_size(opaque, field);
uint64_t old_offset, written_bytes;
JSONWriter *vmdesc_loop = vmdesc;
+ bool is_prev_null = false;
trace_vmstate_save_state_loop(vmsd->name, field->name, n_elems);
if (field->flags & VMS_POINTER) {
first_elem = *(void **)first_elem;
assert(first_elem || !n_elems || !size);
}
+
for (i = 0; i < n_elems; i++) {
void *curr_elem = first_elem + size * i;
const VMStateField *inner_field;
+ bool is_null;
+ int max_elems = n_elems - i;
old_offset = qemu_file_transferred(f);
if (field->flags & VMS_ARRAY_OF_POINTER) {
@@ -448,12 +452,39 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
* not follow.
*/
inner_field = vmsd_create_fake_nullptr_field(field);
+ is_null = true;
} else {
inner_field = field;
+ is_null = false;
+ }
+
+ /*
+ * Due to the fake nullptr handling above, if there's mixed
+ * null/non-null data, it doesn't make sense to emit a
+ * compressed array representation spanning the entire array
+ * because the field types will be different (e.g. struct
+ * vs. uint64_t). Search ahead for the next null/non-null
+ * element and start a new compressed array if found.
+ */
+ if (field->flags & VMS_ARRAY_OF_POINTER &&
+ is_null != is_prev_null) {
+
+ is_prev_null = is_null;
+ vmdesc_loop = vmdesc;
+
+ for (int j = i + 1; j < n_elems; j++) {
+ void *elem = *(void **)(first_elem + size * j);
+ bool elem_is_null = !elem && size;
+
+ if (is_null != elem_is_null) {
+ max_elems = j - i;
+ break;
+ }
+ }
}
vmsd_desc_field_start(vmsd, vmdesc_loop, inner_field,
- i, n_elems);
+ i, max_elems);
if (inner_field->flags & VMS_STRUCT) {
ret = vmstate_save_state(f, inner_field->vmsd,
diff --git a/scripts/analyze-migration.py b/scripts/analyze-migration.py
index 4836920ddc..9138e91a11 100755
--- a/scripts/analyze-migration.py
+++ b/scripts/analyze-migration.py
@@ -497,7 +497,14 @@ def read(self):
raise Exception("internal index of data field unmatched (%d/%d)" % (len(a), int(field['index'])))
a.append(field['data'])
else:
- self.data[field['name']] = field['data']
+ # There could be multiple entries for the same field
+ # name, e.g. when a compressed array was broken into
+ # more than one piece.
+ if (field['name'] in self.data and
+ type(self.data[field['name']]) == list):
+ self.data[field['name']].append(field['data'])
+ else:
+ self.data[field['name']] = field['data']
if 'subsections' in self.desc['struct']:
for subsection in self.desc['struct']['subsections']:
--
2.35.3