qemu-devel

Re: [PATCH v2 6/7] migration: Fix arrays of pointers in JSON writer


From: Fabiano Rosas
Subject: Re: [PATCH v2 6/7] migration: Fix arrays of pointers in JSON writer
Date: Thu, 09 Jan 2025 13:16:37 -0300

Peter Xu <peterx@redhat.com> writes:

> On Thu, Jan 09, 2025 at 11:09:58AM -0300, Fabiano Rosas wrote:
>> Currently, if an array of pointers contains a NULL pointer, that
>> pointer will be encoded as '0' in the stream. Since the JSON writer
>> doesn't define a "pointer" type, that '0' will now be a uint8, which
>> is different from the original type being pointed to, e.g. struct.
>> 
>> (we also label that uint8 as "nullptr", but that's irrelevant to the
>> issue)
>> 
>> That mixed-type array shouldn't be compressed; otherwise data is lost,
>> as the code currently gives the whole array the type of the first
>> element:
>> 
>> css = {NULL, NULL, ..., 0x5555568a7940, NULL};
>> 
>> {"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
>>  "version": 1, "fields": [
>>     ...,
>>     {"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
>>     ...,
>> ]}
>> 
>> In the above, the valid pointer at position 254 got lost among the
>> compressed array of nullptr.
>> 
>> While we could disable the array compression when a NULL pointer is
>> found, the JSON part of the stream still contributes to downtime, so we
>> should avoid writing unnecessary bytes to it.
>> 
>> Keep the array compression in place, but if NULL and non-NULL pointers
>> are mixed, break the array into several type-contiguous pieces:
>> 
>> css = {NULL, NULL, ..., 0x5555568a7940, NULL};
>> 
>> {"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
>>  "version": 1, "fields": [
>>      ...,
>>      {"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
>>      {"name": "css", "type": "struct", "struct": {"vmsd_name": 
>> "s390_css_img", ... }, "size": 768},
>>      {"name": "css", "type": "nullptr", "size": 1},
>>      ...,
>> ]}
>> 
>> Now each type-contiguous region becomes its own JSON entry. The
>> reader should interpret this as a concatenation of values, all part of
>> the same field.
>> 
>> Parsing the JSON with analyze-migration.py now shows the proper data
>> being pointed to at the places where the pointer is valid and
>> "nullptr" where the pointer is NULL:
>> 
>> "s390_css (14)": {
>>     ...
>>     "css": [
>>         "nullptr",
>>         "nullptr",
>>         ...
>>         "nullptr",
>>         {
>>             "chpids": [
>>             {
>>                 "in_use": "0x00",
>>                 "type": "0x00",
>>                 "is_virtual": "0x00"
>>             },
>>             ...
>>             ]
>>         },
>>         "nullptr",
>>     ]
>> }
>> 
>> Signed-off-by: Fabiano Rosas <farosas@suse.de>
>> ---
>>  migration/vmstate.c          | 33 ++++++++++++++++++++++++++++++++-
>>  scripts/analyze-migration.py | 29 +++++++++++++++++++++--------
>>  2 files changed, 53 insertions(+), 9 deletions(-)
>> 
>> diff --git a/migration/vmstate.c b/migration/vmstate.c
>> index 52704c822c..a79ccf3875 100644
>> --- a/migration/vmstate.c
>> +++ b/migration/vmstate.c
>> @@ -425,15 +425,19 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
>>              int size = vmstate_size(opaque, field);
>>              uint64_t old_offset, written_bytes;
>>              JSONWriter *vmdesc_loop = vmdesc;
>> +            bool is_prev_null = false;
>>  
>>              trace_vmstate_save_state_loop(vmsd->name, field->name, n_elems);
>>              if (field->flags & VMS_POINTER) {
>>                  first_elem = *(void **)first_elem;
>>                  assert(first_elem || !n_elems || !size);
>>              }
>> +
>>              for (i = 0; i < n_elems; i++) {
>>                  void *curr_elem = first_elem + size * i;
>>                  const VMStateField *inner_field;
>> +                bool is_null;
>> +                int max_elems = n_elems - i;
>>  
>>                  old_offset = qemu_file_transferred(f);
>>                  if (field->flags & VMS_ARRAY_OF_POINTER) {
>> @@ -448,12 +452,39 @@ int vmstate_save_state_v(QEMUFile *f, const VMStateDescription *vmsd,
>>                       * not follow.
>>                       */
>>                      inner_field = vmsd_create_fake_nullptr_field(field);
>> +                    is_null = true;
>>                  } else {
>>                      inner_field = field;
>> +                    is_null = false;
>> +                }
>> +
>> +                /*
>> +                 * Due to the fake nullptr handling above, if there's mixed
>> +                 * null/non-null data, it doesn't make sense to emit a
>> +                 * compressed array representation spanning the entire array
>> +                 * because the field types will be different (e.g. struct
>> +                 * vs. uint64_t). Search ahead for the next null/non-null
>
> s/uint64_t/nullptr/
>

ok

>> +                 * element and start a new compressed array if found.
>> +                 */
>> +                if (field->flags & VMS_ARRAY_OF_POINTER &&
>> +                    is_null != is_prev_null) {
>> +
>> +                    is_prev_null = is_null;
>> +                    vmdesc_loop = vmdesc;
>> +
>> +                    for (int j = i + 1; j < n_elems; j++) {
>> +                        void *elem = *(void **)(first_elem + size * j);
>> +                        bool elem_is_null = !elem && size;
>> +
>> +                        if (is_null != elem_is_null) {
>> +                            max_elems = j - i;
>> +                            break;
>> +                        }
>> +                    }
>>                  }
>
> This definitely makes this code slightly harder to read.. Let's have
> this first.
>
> If it's proved that the JSON is a measurable overhead, I think we may
> consider turning that off by default even for precopy (postcopy never
> used vmdesc); then maybe we can simplify this chunk (and probably the
> whole compression idea, because when turned on it would be for
> debugging).
>
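For what it's worth, the chunk above boils down to splitting the
pointer array into maximal runs of NULL/non-NULL elements, with each
run getting its own (compressed) JSON entry. A rough Python sketch of
the same scan, purely illustrative and not the actual code:

    def split_runs(ptrs):
        """Split a pointer array into maximal NULL/non-NULL runs.

        Each run corresponds to one compressed JSON entry in the vmdesc.
        """
        runs = []
        i = 0
        while i < len(ptrs):
            is_null = ptrs[i] is None
            j = i + 1
            # search ahead for the next element whose null-ness differs
            while j < len(ptrs) and (ptrs[j] is None) == is_null:
                j += 1
            runs.append((is_null, j - i))  # (run is NULL?, max_elems)
            i = j
        return runs

    # css = {NULL, ..., 0x5555568a7940, NULL} from the commit message:
    css = [None] * 254 + [0x5555568a7940, None]
    print(split_runs(css))  # [(True, 254), (False, 1), (True, 1)]
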
>>  
>>                  vmsd_desc_field_start(vmsd, vmdesc_loop, inner_field,
>> -                                      i, n_elems);
>> +                                      i, max_elems);
>>  
>>                  if (inner_field->flags & VMS_STRUCT) {
>>                      ret = vmstate_save_state(f, inner_field->vmsd,
>> diff --git a/scripts/analyze-migration.py b/scripts/analyze-migration.py
>> index 134c25f20a..7f0797308d 100755
>> --- a/scripts/analyze-migration.py
>> +++ b/scripts/analyze-migration.py
>> @@ -501,15 +501,28 @@ def read(self):
>>              field['data'] = reader(field, self.file)
>>              field['data'].read()
>>  
>> -            if 'index' in field:
>> -                if field['name'] not in self.data:
>> -                    self.data[field['name']] = []
>> -                a = self.data[field['name']]
>> -                if len(a) != int(field['index']):
>> -                    raise Exception("internal index of data field unmatched (%d/%d)" % (len(a), int(field['index'])))
>> -                a.append(field['data'])
>> +            fname = field['name']
>> +            fdata = field['data']
>> +
>> +            # The field could be:
>> +            # i) a single data entry, e.g. uint64
>> +            # ii) an array, indicated by it containing the 'index' key
>> +            #
>> +            # However, the overall data after parsing the whole
>> +            # stream, could be a mix of arrays and single data fields,
>> +            # all sharing the same field name due to how QEMU breaks
>> +            # up arrays with NULL pointers into multiple compressed
>> +            # array segments.
>> +            if fname not in self.data:
>> +                if 'index' in field:
>> +                    self.data[fname] = []
>
> This needs to be:
>
>      self.data[fname] = [fdata]
>
> ?

Yes.

>
> Btw, since the new code will process it correctly with non-array below,
> IIUC here we can make it simple:
>
>   if 'index' in field:
>       self.data[fname] = fdata
>

Sorry, I don't understand what you mean here. I changed it now to:

    if fname not in self.data:
        if 'index' in field:
            self.data[fname] = [fdata]
        else:
            self.data[fname] = fdata
    elif type(self.data[fname]) == list:
        self.data[fname].append(fdata)
    else:
        tmp = self.data[fname]
        self.data[fname] = [tmp, fdata]
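
Just to illustrate how that behaves end to end (made-up values, with
strings standing in for the reader's field objects):

    data = {}
    fields = [
        {'name': 'css', 'index': 0, 'data': 'nullptr x254'},
        {'name': 'css', 'data': 's390_css_img struct'},
        {'name': 'css', 'data': 'nullptr'},
        {'name': 'version', 'data': 1},
    ]

    for field in fields:
        fname = field['name']
        fdata = field['data']
        if fname not in data:
            if 'index' in field:
                data[fname] = [fdata]
            else:
                data[fname] = fdata
        elif type(data[fname]) == list:
            data[fname].append(fdata)
        else:
            tmp = data[fname]
            data[fname] = [tmp, fdata]

    # data == {'css': ['nullptr x254', 's390_css_img struct', 'nullptr'],
    #          'version': 1}

The three "css" segments end up concatenated into one list under the
same field name, which matches how the writer split the array.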

>> +                else:
>> +                    self.data[fname] = fdata
>> +            elif type(self.data[fname]) == list:
>> +                self.data[fname].append(fdata)
>>              else:
>> -                self.data[field['name']] = field['data']
>> +                tmp = self.data[fname]
>> +                self.data[fname] = [tmp, fdata]
>>  
>>          if 'subsections' in self.desc['struct']:
>>              for subsection in self.desc['struct']['subsections']:
>> -- 
>> 2.35.3
>> 


