
From: Alexey Kardashevskiy
Subject: Re: [Qemu-ppc] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
Date: Mon, 03 Jun 2013 15:46:00 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 05/31/2013 08:41 PM, Paolo Bonzini wrote:
> On 31/05/2013 12:25, Alexey Kardashevskiy wrote:
>> On 05/31/2013 08:07 PM, Benjamin Herrenschmidt wrote:
>>> On Fri, 2013-05-31 at 15:58 +1000, Alexey Kardashevskiy wrote:
>>>>
>>>> And another question (sorry, I am not very familiar with the terminology,
>>>> but cc:Ben is :) ) - what happens to indirect requests if migration happens
>>>> in the middle of handling such a request? virtio-scsi does not seem to
>>>> handle this situation in any special way; it just reconstructs the whole
>>>> request and that's it.
>>>
>>> So Paolo, the crux of the question here is really whether we have any
>>> guarantee about the state of the request when this happens (by this I
>>> mean a save happening with requests still "in flight")?
>>>
>>> I.e. can the request be at any stage of processing, with the data
>>> transfer phase halfway through, or do we know for sure that the
>>> request will *not* have started transferring any data?
>>>
>>> This is key, because in the latter case, all we really need to do is
>>> save the request itself, and re-parse it on restore as if it were new
>>> (at least from a DMA descriptor perspective).
>>>
>>> However, if the data transfer is already halfway through, we need to
>>> save the state of the data transfer machinery, i.e. the position of
>>> the "cursor" that follows the guest-provided DMA descriptor list,
>>> etc. (which isn't *that* trivial since we have a concept of indirect
>>> descriptors and we use pointers to follow them, so we'd probably have
>>> to re-walk the whole descriptor list until we reach the same
>>> position).
> 
> It may be halfway through, but it is always restarted on the destination.
> 
> virtio-scsi parses the whole descriptor chain upfront and sends the
> guest addresses in the migration stream.
> 
>> Isn't it the same QEMU thread that handles hcalls and QEMU console
>> commands, so migration cannot interrupt the parsing/handling of a
>> vscsi_req?
> 
> The VM is paused and I/O is flushed at the point when the reqs are sent.
> That's why you cannot get a pending request.  Only failed requests
> remain in the queue.


Ok. I implemented {save|load}_request for IBMVSCSI and started testing. The
destination system behaves very unstably: sometimes it faults in
_raw_spin_lock, or it looks okay but any attempt to read the filesystem
leads to 100% CPU load in the QEMU process and no response from the guest.
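
For reference, the hooks have roughly this shape (a simplified sketch
following the virtio-scsi save/load_request pattern, not the actual patch;
the vscsi_req/VSCSIState layout below is abbreviated and illustrative):

/* Sketch only: SCSIBusInfo save/load hooks for spapr_vscsi. */
static void vscsi_save_request(QEMUFile *f, SCSIRequest *sreq)
{
    vscsi_req *req = sreq->hba_private;

    /* Dump the whole per-request state (parsed SRP IU, tag, DMA
     * "cursor").  The descriptor lists themselves live in guest
     * memory and are migrated with guest RAM. */
    qemu_put_buffer(f, (uint8_t *)req, sizeof(*req));
}

static void *vscsi_load_request(QEMUFile *f, SCSIRequest *sreq)
{
    VSCSIState *s = container_of(sreq->bus, VSCSIState, bus);
    vscsi_req *req;

    assert(sreq->tag < VSCSI_REQ_LIMIT);
    req = &s->reqs[sreq->tag];
    qemu_get_buffer(f, (uint8_t *)req, sizeof(*req));

    /* Host pointers inside the saved blob are stale on the destination:
     * re-attach the SCSIRequest; the core then restarts the command. */
    req->sreq = scsi_req_ref(sreq);
    return req;
}

The hooks are wired up through the .save_request/.load_request members of
the device's SCSIBusInfo, the same way virtio-scsi does it.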

I tried virtio-scsi as well (since it was referred to as a good example);
it fails in exactly the same way. So I started wondering - when did you
last try it? :)

My test is:
1. create an 8GB qcow2 image and put it on a 2GB USB disk.
2. put a 1.8GB "dummy" image onto the same USB disk.
3. run QEMU with the qcow2 image.
4. do "mkfs.ext4 /dev/sda" in the guest. It creates a 300MB file when there
is enough space.
5. wait until the source QEMU is stopped due to an I/O error ("info status"
confirms this).
6. migrate.
7. remove the "dummy" image.
8. "c"ontinue in the destination guest.

Is it good/bad/ugly? What am I missing? Thanks!


-- 
Alexey


