From: Michael Tokarev
Subject: [Stable-9.2.1 24/41] migration: Fix arrays of pointers in JSON writer
Date: Mon, 27 Jan 2025 17:17:38 +0300
From: Fabiano Rosas <farosas@suse.de>
Currently, if an array of pointers contains a NULL pointer, that
pointer will be encoded as '0' in the stream. Since the JSON writer
doesn't define a "pointer" type, that '0' will now be a uint8, which
is different from the original type being pointed to, e.g. struct.
(We're further calling the uint8 "nullptr", but that's irrelevant to
the issue.)
That mixed-type array shouldn't be compressed, otherwise data is lost
as the code currently makes the whole array have the type of the first
element:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
...,
]}
In the above, the valid pointer at position 254 got lost in the
compressed run of nullptr entries.
While we could disable the array compression when a NULL pointer is
found, the JSON part of the stream is still written during downtime,
so we should avoid adding unnecessary bytes to it.
Keep the array compression in place, but if NULL and non-NULL pointers
are mixed, break the array into several type-contiguous pieces:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
{"name": "css", "type": "struct", "struct": {"vmsd_name": "s390_css_img",
... }, "size": 768},
{"name": "css", "type": "nullptr", "size": 1},
...,
]}
Now each type-discontiguous region will become a new JSON entry. The
reader should interpret this as a concatenation of values, all part of
the same field.
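To make the rule concrete, here is a small illustrative Python sketch
(not part of the patch; split_runs is a made-up name) of how such an
array splits into type-contiguous runs, each of which becomes one
vmdesc entry:

def split_runs(ptrs):
    """Yield (start_index, length, is_null) for each maximal run of
    NULL or non-NULL elements."""
    i = 0
    while i < len(ptrs):
        is_null = ptrs[i] is None
        j = i + 1
        while j < len(ptrs) and (ptrs[j] is None) == is_null:
            j += 1
        yield i, j - i, is_null
        i = j

# The s390_css case from above: one valid pointer at index 254 of 256.
css = [None] * 254 + [0x5555568a7940, None]
print(list(split_runs(css)))
# -> [(0, 254, True), (254, 1, False), (255, 1, True)]
# i.e. three vmdesc entries: 254 x nullptr, 1 struct, 1 nullptr.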
Parsing the JSON with analyze-migration.py now shows the proper data
being pointed to at the places where the pointer is valid, and
"nullptr" where the pointer is NULL:
"s390_css (14)": {
...
"css": [
"nullptr",
"nullptr",
...
"nullptr",
{
"chpids": [
{
"in_use": "0x00",
"type": "0x00",
"is_virtual": "0x00"
},
...
]
},
"nullptr",
}
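For illustration only (not part of the patch; accumulate is a made-up
name), the reader-side rule implemented by the analyze-migration.py
hunk below can be sketched in a few lines of standalone Python:

def accumulate(data, fname, fdata):
    """Merge one stream entry into the parsed result."""
    if fname not in data:
        data[fname] = fdata              # first occurrence: keep as-is
    elif isinstance(data[fname], list):
        data[fname].append(fdata)        # already a list: extend it
    else:
        data[fname] = [data[fname], fdata]  # promote scalar to list
    return data

parsed = {}
for segment in ("nullptr", {"chpids": ["..."]}, "nullptr"):
    accumulate(parsed, "css", segment)
print(parsed)  # {'css': ['nullptr', {'chpids': ['...']}, 'nullptr']}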
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-7-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
(cherry picked from commit 35049eb0d2fc72bb8c563196ec75b4d6c13fce02)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
diff --git a/migration/vmstate.c b/migration/vmstate.c
index 52704c822c..82bd005a83 100644
--- a/migration/vmstate.c
+++ b/migration/vmstate.c
@@ -425,15 +425,19 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
int size = vmstate_size(opaque, field);
uint64_t old_offset, written_bytes;
JSONWriter *vmdesc_loop = vmdesc;
+ bool is_prev_null = false;
trace_vmstate_save_state_loop(vmsd->name, field->name, n_elems);
if (field->flags & VMS_POINTER) {
first_elem = *(void **)first_elem;
assert(first_elem || !n_elems || !size);
}
+
for (i = 0; i < n_elems; i++) {
void *curr_elem = first_elem + size * i;
const VMStateField *inner_field;
+ bool is_null;
+ int max_elems = n_elems - i;
old_offset = qemu_file_transferred(f);
if (field->flags & VMS_ARRAY_OF_POINTER) {
@@ -448,12 +452,39 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
* not follow.
*/
inner_field = vmsd_create_fake_nullptr_field(field);
+ is_null = true;
} else {
inner_field = field;
+ is_null = false;
+ }
+
+ /*
+ * Due to the fake nullptr handling above, if there's mixed
+ * null/non-null data, it doesn't make sense to emit a
+ * compressed array representation spanning the entire array
+ * because the field types will be different (e.g. struct
+ * vs. nullptr). Search ahead for the next null/non-null element
+ * and start a new compressed array if found.
+ */
+ if (field->flags & VMS_ARRAY_OF_POINTER &&
+ is_null != is_prev_null) {
+
+ is_prev_null = is_null;
+ vmdesc_loop = vmdesc;
+
+ for (int j = i + 1; j < n_elems; j++) {
+ void *elem = *(void **)(first_elem + size * j);
+ bool elem_is_null = !elem && size;
+
+ if (is_null != elem_is_null) {
+ max_elems = j - i;
+ break;
+ }
+ }
}
vmsd_desc_field_start(vmsd, vmdesc_loop, inner_field,
- i, n_elems);
+ i, max_elems);
if (inner_field->flags & VMS_STRUCT) {
ret = vmstate_save_state(f, inner_field->vmsd,
diff --git a/scripts/analyze-migration.py b/scripts/analyze-migration.py
index 923f174f1b..8e1fbf4c9d 100755
--- a/scripts/analyze-migration.py
+++ b/scripts/analyze-migration.py
@@ -502,15 +502,25 @@ def read(self):
field['data'] = reader(field, self.file)
field['data'].read()
- if 'index' in field:
- if field['name'] not in self.data:
- self.data[field['name']] = []
- a = self.data[field['name']]
- if len(a) != int(field['index']):
- raise Exception("internal index of data field unmatched
(%d/%d)" % (len(a), int(field['index'])))
- a.append(field['data'])
+ fname = field['name']
+ fdata = field['data']
+
+ # The field could be:
+ # i) a single data entry, e.g. uint64
+ # ii) an array, indicated by it containing the 'index' key
+ #
+ # However, the overall data after parsing the whole
+ # stream, could be a mix of arrays and single data fields,
+ # all sharing the same field name due to how QEMU breaks
+ # up arrays with NULL pointers into multiple compressed
+ # array segments.
+ if fname not in self.data:
+ self.data[fname] = fdata
+ elif type(self.data[fname]) == list:
+ self.data[fname].append(fdata)
else:
- self.data[field['name']] = field['data']
+ tmp = self.data[fname]
+ self.data[fname] = [tmp, fdata]
if 'subsections' in self.desc['struct']:
for subsection in self.desc['struct']['subsections']:
--
2.39.5