From: Fabiano Rosas
Subject: Re: [PATCH v3 06/17] hw/sd/sdcard: Do not store vendor data on block drive (CMD56)
Date: Wed, 10 Jul 2024 13:21:51 -0300

Peter Xu <peterx@redhat.com> writes:

> On Wed, Jul 10, 2024 at 11:08:20AM -0300, Fabiano Rosas wrote:
>> >> I think it's ok:
>> >> 
>> >> {
>> >>   "field": "unused",
>> >>   "version_id": 1,
>> >>   "field_exists": false,
>> >>   "size": 512
>> >> },
>> >> 
>> >> vs.
>> >> 
>> >> {
>> >>   "field": "vendor_data",
>> >>   "version_id": 0,
>> >>   "field_exists": false,
>> >>   "num": 512,
>> >>   "size": 1
>> >> },
>> >> 
>> >> The unused field was introduced in 2016 so there's no chance of
>> >> migrating a QEMU that old to/from 9.1.
>> >
>> > What happens if an old QEMU 9.0 sends rubbish here to a new QEMU, while the
>> > new QEMU would consider it meaningful data?
>> 
>> It will send zeros, no? The code will have to cope with that. The
>> alternative is to put the vendor_data in a subsection and the code will
>> also have to cope with the lack of data when the old QEMU doesn't send
>> it.
>
> Ah indeed, that "static const uint8_t buf[1024]" is there at least since
> 2017.  So yes, probably always sending zeros.
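(Aside, for context: the save path for UNUSED fields, put_unused_buffer() in
migration/vmstate-types.c if memory serves, does roughly the following,
paraphrased rather than copied verbatim; the wire always carries zeros,
chunked through that static buffer.)

static int put_unused_buffer(QEMUFile *f, void *pv, size_t size,
                             const VMStateField *field, JSONWriter *vmdesc)
{
    static const uint8_t buf[1024];

    while (size > 0) {
        size_t block_len = MIN(sizeof(buf), size);

        /* Always writes zeros, whatever the destination expects here. */
        qemu_put_buffer(f, buf, block_len);
        size -= block_len;
    }

    return 0;
}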

@Philippe, can vendor_data be 0 after migration? Otherwise 9.0 -> 9.1
migration might crash.
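To make the subsection alternative mentioned above concrete, a minimal sketch
(names like sd_vendor_data_needed and "sd-card/vendor-data" are made up here,
not taken from the patch; vendor_data and its 512-byte size come from the
dump above):

/* Needs "qemu/cutils.h" for buffer_is_zero(). */
static bool sd_vendor_data_needed(void *opaque)
{
    SDState *sd = opaque;

    /* Only send the subsection when there is something to transfer, so
     * a destination that lacks it can still accept the stream. */
    return !buffer_is_zero(sd->vendor_data, sizeof(sd->vendor_data));
}

static const VMStateDescription sd_vmstate_vendor_data = {
    .name = "sd-card/vendor-data",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = sd_vendor_data_needed,
    .fields = (const VMStateField[]) {
        VMSTATE_UINT8_ARRAY(vendor_data, SDState, 512),
        VMSTATE_END_OF_LIST()
    },
};

Either way (subsection or reusing the UNUSED bytes) the receiving side has to
treat an absent or all-zero vendor_data as "nothing to restore".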

>
> Nothing I can think of otherwise indeed, if we want to trust that nothing
> will migrate before 2016.  It's just that we may want to know how that
> "2016" is justified to be safe if we would like to allow that in the
> future.

It's not about trust; we simply don't support migrations other than
n->n+1 and (maybe) n->n-1. So QEMU from 2016 is certainly not included.

>
> One thing _could_ be that the "rule of thumb" is we plan to obsolete machines
> within 6 years, so anything "UNUSED" older than 6 years can be overwritten?



