From: Stefan Berger
Subject: Re: [Qemu-devel] blobstore disk format (was Re: Design of the blobstore)
Date: Fri, 16 Sep 2011 12:46:40 -0400
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) Gecko/20110621 Fedora/3.1.11-1.fc14 Lightning/1.0b3pre Thunderbird/3.1.11
On 09/16/2011 10:44 AM, Michael S. Tsirkin wrote:
> On Thu, Sep 15, 2011 at 10:33:13AM -0400, Stefan Berger wrote:
>> On 09/15/2011 08:28 AM, Michael S. Tsirkin wrote:
>>> So the below is a proposal for a directory scheme for storing
>>> (optionally multiple) nvram images, along with any metadata.
>>> Data is encoded using BER:
>>> http://en.wikipedia.org/wiki/Basic_Encoding_Rules
>>> Specifically, we mostly use the subsets.
>> Would it change anything if we were to think of the NVRAM image as
>> another piece of metadata?
> Yes, we can do that, sure. I had the feeling that it will help to lay
> out the image at the end, to make directory listing more efficient -
> the rest of the metadata is usually small, the image might be somewhat
> large.

Why not let a convenience library handle the metadata on the device
level, having it create the blob that the NVRAM layer ends up writing,
and parse that blob again before the device uses it? Otherwise I should
maybe rename the nvram to metadata_store :-/
>> I am also wondering whether each device shouldn't just handle the
>> metadata itself,
> It could be that just means we will have custom code with different
> bugs in each device. Note that, from experience with formats, the
> problem with time becomes less trivial than it seems, as we need to
> provide forward and backward compatibility guarantees.

Is that guaranteed just by using ASN.1? Do we need to add a revision to
the metadata? How do we handle metadata that changes over time, i.e.,
new attributes/values being added into a finite store?
>> so generate a blob from data structures containing all the metadata it
>> needs, arranging attribute and value pairs itself (maybe using some
>> convenience function for serialization/deserialization), and let the
>> NVRAM layer not handle the metadata at all but only blobs, their
>> maximum sizes, actual sizes
> Actual size seems to be a TPM specific thing.

Yes, it could also be metadata. One should probably always be allowed
to write a shorter blob than registered, but not a longer one. If the
device did that, it should probably prepend a header to the actual blob
indicating the actual size of the data that follows, so that trailing
garbage can be ignored (see the sketch below).
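
A minimal sketch of what such a device-side header could look like; the
struct name and fields are hypothetical and not part of the proposal:

    #include <stdint.h>

    /*
     * Hypothetical header a device could prepend to the blob it hands to
     * the NVRAM layer.  payload_size records how many of the bytes that
     * follow are valid, so a blob shorter than the registered maximum can
     * be read back without interpreting trailing garbage.  A version
     * field is one way for the device to cope with future changes to its
     * own blob layout.
     */
    struct blob_header {
        uint32_t version;       /* revision of the device's blob layout */
        uint32_t payload_size;  /* number of valid payload bytes following */
    } __attribute__((packed));

The fields would of course have to be stored in a fixed endianness so
the blob stays portable across hosts.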
>> encryption, integrity value (crc32 or sha1) and so on. What metadata
>> should there be that really needs to be handled at the NVRAM API level
>> and below, rather than in the device-specific code?
> So checksum (checksum value and type) 'and so on' are what I call
> metadata :) Doing it at device level seems wrong.

You mean doing it at the NVRAM level seems wrong. Of course, this is
again something a device could write into a header prepended to the
actual blob. Maybe every device that needs it should do that, so that
if we were to support encryption of blobs and the decryption key was
wrong, one could detect it early rather than feeding badly decrypted /
corrupted state into the device and seeing what happens (see the sketch
below).
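
A rough sketch of the kind of early sanity check meant here, using
zlib's crc32() purely as an illustration; the helper name and the idea
of keeping the CRC in a device-side header are assumptions, not part of
the proposal:

    #include <stdint.h>
    #include <zlib.h>   /* crc32() */

    /*
     * Hypothetical check a device could run on a blob it reads back, to
     * catch corruption (or a wrong decryption key) before it starts
     * parsing the contained state.
     */
    static int blob_integrity_ok(const uint8_t *payload,
                                 uint32_t payload_size,
                                 uint32_t stored_crc)
    {
        uint32_t crc = crc32(0L, Z_NULL, 0);       /* zlib's initial value */

        crc = crc32(crc, payload, payload_size);   /* checksum the payload */
        return crc == stored_crc;                  /* mismatch: reject blob */
    }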
>>> We use a directory as a SET in a CER format. This allows generating
>>> the directory online, without scanning the entries beforehand.
>> I guess it is the 'unknown' for me... but what is the advantage of
>> using ASN.1 for this rather than just writing out packed and
>> endianness-normalized data structures (with a revision value),
> If you want an example of where this 'custom formats are easy, so let
> us write one' approach leads in the end, look no further than the live
> migration code. It's a mess of hacks that does not even work across
> upstream qemu versions, let alone across downstreams (different Linux
> distros).

So is ASN.1 the answer, or does one still need to add a revision tag to
each blob and put in custom code for parsing the different revisions of
the data structures (I guess) that may be extended/changed over time?
Stefan
>> having them crc32-protected to have some sanity checking in place?
>>
>>    Stefan
> I'm not sure why we want a crc specifically in the TPM. If it is 'just
> because we can', then it probably applies to other non-volatile
> storage? Storage generally?
>>> The rest of the encoding uses a DER format. This makes for fast
>>> parsing, as entries are easy to skip.
>>> Each entry is encoded in DER format. Each entry is a SEQUENCE with
>>> two objects:
>>> 1. nvram
>>> 2. optional name - a UTF8String
>>> Binary data is stored as OCTET-STRING values on disk. Any RW metadata
>>> is stored as an OCTET-STRING value as well. Any RO metadata is stored
>>> in the appropriate universal encoding, by type.
>>> In the context below, an attribute is either an IA5String or a
>>> SEQUENCE. If an IA5String, this is the attribute name, and it has no
>>> value. If a SEQUENCE, the first entry in the sequence is an
>>> IA5String; it is the attribute name. The rest of the entries
>>> represent the attribute value.
>>> Mandatory/optional attributes: depends on the type. tpm will have
>>> realsize as an RW mandatory attribute.
>>> Each nvram is built as a SEQUENCE including 4 objects:
>>> 1. type - an IA5String. downstreams can use other types such as UUIDs
>>>    instead to ensure no conflicts with upstream
>>> 2. SET of mandatory attributes
>>> 3. SET of optional attributes
>>> 4. data - an RW OCTET-STRING
>>> It is envisioned that attributes won't be too large, so they can
>>> easily be kept in memory.
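
To illustrate why DER's definite-length encoding makes entries "easy to
skip": after the tag byte, DER stores the content length either in short
form (one byte, values 0-127) or in long form (0x80 | n, followed by n
big-endian length bytes), so a reader can jump over any entry without
understanding its contents. The following sketch is illustrative only;
the function and variable names are hypothetical and not taken from the
proposal or from QEMU:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Skip over one DER-encoded entry starting at *offset and advance
     * *offset to the next entry.  Assumes single-byte tags, which covers
     * the universal types used above (SEQUENCE, SET, OCTET STRING,
     * IA5String, UTF8String).  Returns 0 on success, -1 on malformed or
     * truncated input.
     */
    static int der_skip_entry(const uint8_t *buf, size_t buflen,
                              size_t *offset)
    {
        size_t pos = *offset;
        size_t len = 0;

        if (pos + 2 > buflen) {
            return -1;                 /* need at least tag + length byte */
        }
        pos++;                         /* skip the tag byte */

        if (buf[pos] < 0x80) {         /* short form: length in 7 bits */
            len = buf[pos++];
        } else {                       /* long form: n big-endian bytes */
            int n = buf[pos++] & 0x7f;

            if (n == 0 || n > (int)sizeof(len) || pos + n > buflen) {
                return -1;             /* indefinite or oversized length */
            }
            while (n--) {
                len = (len << 8) | buf[pos++];
            }
        }

        if (len > buflen - pos) {
            return -1;                 /* content runs past the buffer */
        }
        *offset = pos + len;           /* next entry starts here */
        return 0;
    }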