From: Thomas Treutner
Subject: Re: [Qemu-devel] [PATCH] A small patch to introduce stop conditions to the live migration.
Date: Thu, 15 Sep 2011 10:27:45 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.21) Gecko/20110831 Thunderbird/3.1.13
On 14.09.2011 17:45, Anthony Liguori wrote:
> On 09/14/2011 08:18 AM, Thomas Treutner wrote:
>> Currently, it is possible that a live migration never finishes, when
>> the dirty page rate is high compared to the scan/transfer rate. The
>> exact values for MAX_MEMORY_ITERATIONS and
>> MAX_TOTAL_MEMORY_TRANSFER_FACTOR are arguable, but there should be
>> *some* limit to force the final iteration of a live migration that
>> does not converge.
> No, there shouldn't be.
I think there should be. The iterative pre-copy mechanism depends
entirely on the assumption of convergence. Currently, the very real
chance that this assumption does not hold is simply ignored, which
looks like burying one's head in the sand to me.
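To make it concrete, the kind of bound I have in mind looks roughly like
the sketch below (simplified and illustrative, not the literal diff; the
helper name and the constants' values are placeholders):

#include <stdint.h>

/* Illustrative sketch, not the actual patch: force the final stop-and-copy
 * iteration once the pre-copy phase has looped too often or has already
 * pushed several times the guest's RAM over the wire. */
#define MAX_MEMORY_ITERATIONS            30  /* exact value is arguable */
#define MAX_TOTAL_MEMORY_TRANSFER_FACTOR 3   /* exact value is arguable */

static int ram_save_must_finish(uint64_t iterations,
                                uint64_t bytes_transferred,
                                uint64_t ram_bytes_total)
{
    if (iterations > MAX_MEMORY_ITERATIONS) {
        return 1;
    }
    if (bytes_transferred >
        (uint64_t)MAX_TOTAL_MEMORY_TRANSFER_FACTOR * ram_bytes_total) {
        return 1;
    }
    return 0;
}

If either condition triggers, the migration would simply proceed to the
final iteration, exactly as it does today once the dirty set becomes
small enough.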
> A management app
I do not know of any management app that takes care of this. Can you
give an example where management app developers actually knew about this
issue and took care of it? I didn't see any big warning regarding
migration, but just stumbled upon it by coincidence. libvirt just seems
to program around MAX_THROTTLE nowadays, which is another PITA. As a
user, I can and have to assume that a certain function actually does
what it promises, and that if it can't for whatever reason, it throws an
error. Would you be happy with a function that promises to write a file
but, if the given location is not writable, just sits there and waits
forever until you somehow notice, manually, what went wrong and what the remedy is?
> can always stop a guest to force convergence.
What do you mean by stop exactly? Pausing the guest? Is it then
automatically unpaused by qemu again at the destination host?
> If you make migration have unbounded downtime by default
> then you're making migration unsafe for smarter consumers.
I'd prefer that to having the common case unsafe. If migration doesn't
converge, it currently finishes only at some distant point in time, and
only because the VM's service suffers so badly from the migration that
it can dirty fewer and fewer pages. In reality, users would quickly stop
using the service, as response times go through the roof and they run
into network timeouts. A single, longer downtime is better than a
potentially everlasting unresponsive VM.
> You can already set things like maximum downtime to force convergence.
The maximum downtime parameter seems like a nice switch, but it is
another example of surprising behaviour. The value you choose is not
even within an order of magnitude of what actually happens, as the
"bandwidth" used in the calculation seems to be a buffer bandwidth
rather than the real network bandwidth. Even with extremely aggressive
bridge timings, there is a factor of ~20 between the default 30ms
setting and the actual result.
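To illustrate (the formula is my understanding of the logic, not a quote
of qemu's code): the final stop is allowed once the remaining dirty
memory can be sent within max_downtime at the measured bandwidth. If
that bandwidth is really the rate into a local buffer instead of the
rate onto the wire, the estimate is optimistic by exactly that ratio.

#include <stdint.h>

/* My understanding of the estimate, not qemu's exact code: stop-and-copy
 * is started once expected_downtime <= max_downtime. If
 * bandwidth_bytes_per_s is a buffer rate ~20x above the real wire rate,
 * the actual pause ends up ~20x longer than configured. */
static double expected_downtime_s(uint64_t remaining_dirty_bytes,
                                  double bandwidth_bytes_per_s)
{
    return (double)remaining_dirty_bytes / bandwidth_bytes_per_s;
}

With the default 30ms and a buffer rate some 20x above the wire rate,
the real pause comes out around 600ms, which is the factor I am seeing.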
I know the - arguable, in my view - policy is "just give progress info
when requested (although our algorithm strictly requires steady
progress, we do not want to hear about that when things get hot), and
let mgmt apps decide", but even that is not implemented correctly.
First, because of the bandwidth/downtime issue above; second, because of
incorrect memory transfer amounts, where duplicate (unused?) pages are
accounted as 1 byte of transfer. That may be correct from the physical
view, but from a logical, management-app view, the migration has
progressed by a full page, not just 1 byte. It is hard to argue that
mgmt apps should take care of making things work out nicely when the
information given to them is not consistent with each other and the
switches offered do something, but not in any way what they claim to do.
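Just to make the accounting point concrete, a sketch of the two views I
mean (my structure, not qemu's):

#include <stdint.h>
#include <stddef.h>

/* Sketch only: the physical view counts what actually hits the socket (a
 * duplicate page shows up as 1 byte), the logical view counts the full
 * page the migration has nevertheless dealt with. A mgmt app judging
 * progress needs the latter. */
typedef struct {
    uint64_t bytes_on_wire;    /* physical: what was really sent */
    uint64_t bytes_processed;  /* logical: pages handled, duplicate or not */
} MigrationProgress;

static void account_page(MigrationProgress *p, size_t page_size, int duplicate)
{
    p->bytes_processed += page_size;
    p->bytes_on_wire   += duplicate ? 1 : page_size;
}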
> If you wanted to have some logic like an exponentially increasing
> maximum downtime given a fixed timeout, that would be okay provided it
> was an optional feature.
I'm already doing a similar thing using libvirt; I'm only coming back to
this because that approach causes lots of pain and cluttered code, while
the original issue can be solved with 3-4 changed lines of code in qemu.
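For reference, the libvirt-side workaround looks roughly like this
(simplified version of my code; names and values are mine, and
virDomainMigrate() is blocking in another thread while this runs):

#include <libvirt/libvirt.h>
#include <unistd.h>

/* Simplified sketch of my workaround: from a second thread, keep doubling
 * the allowed downtime while the migration job is still running. */
static void force_convergence(virDomainPtr dom, long long start_ms,
                              long long cap_ms, unsigned int poll_s)
{
    long long downtime = start_ms;

    for (;;) {
        virDomainJobInfo info;

        if (virDomainGetJobInfo(dom, &info) < 0 ||
            info.type != VIR_DOMAIN_JOB_UNBOUNDED) {
            break;  /* job finished, failed, or the handle is already gone */
        }
        virDomainMigrateSetMaxDowntime(dom, downtime, 0);
        if (downtime < cap_ms) {
            downtime *= 2;
        }
        sleep(poll_s);
    }
}

This is exactly the kind of polling from a second thread that causes the
trouble described below.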
AFAIK, there is neither a way to synchronize on the actual start of the
migration (so you can start polling and setting a custom downtime value)
nor to synchronize on the end of the migration (so you know when to stop
polling). As a result, one is playing around with crude sleeps, hoping
that the migration, although of course already triggered, has actually
started, and then trying in vain not to step on any invalidated data
structures while monitoring the progress in a second thread, as no one
knows when the main thread with the blocking live migration will pull
the rug out from under the monitoring thread's feet. Lots of code is
then needed to clean up this holy mess, and a SEGV still happens regularly:
http://pastebin.com/jT6sXubu
I don't know of any way to reliably and cleanly solve this issue within
"a management app", as I don't see any mechanism by which the main
thread can signal a monitoring thread to stop monitoring *before* it
pulls the rug. Sending the signal directly after the migration call
unblocks is not enough; I've tried that, and the result is above. There
is still room for two threads in the same critical section.
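Stripped down, the core of the problem looks like this (simplified; the
flag is set by the main thread right after the blocking migration call
returns):

#include <libvirt/libvirt.h>
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool migration_done = false;  /* set by the main thread after migrate returns */

static void *monitor_thread(void *arg)
{
    virDomainPtr dom = arg;

    for (;;) {
        pthread_mutex_lock(&lock);
        bool done = migration_done;
        pthread_mutex_unlock(&lock);
        if (done) {
            break;
        }
        /* The migration can complete right here, before the next call ... */
        virDomainJobInfo info;
        virDomainGetJobInfo(dom, &info);  /* ... which then races with the teardown */
        usleep(500 * 1000);
    }
    return NULL;
}

However carefully the flag is set, there is always a window between the
check and the next call into libvirt, and that is where the SEGV above
comes from.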
regards,
thomas