Re: [PATCH 2/2] util/qemu-sockets: make keep-alive enabled by default


From: Denis V. Lunev
Subject: Re: [PATCH 2/2] util/qemu-sockets: make keep-alive enabled by default
Date: Thu, 9 Jul 2020 11:54:00 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0

On 7/9/20 11:29 AM, Daniel P. Berrangé wrote:
> On Wed, Jul 08, 2020 at 10:15:39PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> Keep-alive won't hurt, let's try to enable it even if not requested by
>> user.
> Keep-alive intentionally breaks TCP connections earlier than normal
> in face of transient networking problems.
>
> The question is more about which type of pain is more desirable. A
> stall in the network connection (for a potentially very long time),
> or an intentionally broken socket.
>
> I'm not at all convinced it is a good idea to intentionally break
> /all/ QEMU sockets in the face of transient problems, even if the
> problems last for 2 hours or more. 
>
> I could see keep-alives being ok on some QEMU socket. For example
> VNC/SPICE clients, as there is no downside to proactively culling
> them as they can trivially reconnect. Migration too is quite
> reasonable to use keep alives, as you generally want migration to
> run to completion in a short amount of time, and aborting migration
> needs to be safe no matter what.
>
> Breaking chardevs or block devices or network devices that use
> QEMU sockets though will be disruptive. The only solution once
> those backends have a dead socket is going to be to kill QEMU
> and cold-boot the VM again.

Nope, and this is exactly what we are trying to achieve.

Let us assume that the QEMU NBD client is connected
to the outside world, e.g. to some HA service running
in another virtual machine. Once that remote VM dies,
it is restarted on some other host with the same IP.

The QEMU NBD client is able to reconnect to this same
endpoint, and the process is transparent to the guest.

This is the workflow we are trying to improve.
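
For illustration only (this is not the code from this
patch series, and the timing values are made up), enabling
keep-alive on a connected client socket on Linux looks
roughly like this; once the probes go unanswered, pending
I/O on the socket fails with ETIMEDOUT and the reconnect
logic gets a chance to run:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Hypothetical sketch: enable keep-alive and shorten the probe
 * timings so a silently-dead peer is reported after roughly
 * idle + cnt * intvl seconds instead of never.  The TCP_KEEP*
 * options are Linux-specific. */
static int enable_keep_alive(int fd)
{
    int on = 1;
    int idle = 30;   /* seconds of idle before the first probe */
    int intvl = 10;  /* seconds between unanswered probes */
    int cnt = 6;     /* unanswered probes before the connection is dropped */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
        return -1;
    }
    /* Without these overrides the Linux defaults apply:
     * tcp_keepalive_time is 7200 seconds, i.e. the
     * "2 hours or more" mentioned above. */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    return 0;
}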

Anyway, sitting on a dead socket is simply not
productive. This is like NFS hard and soft mounts.
In the hypervisor world, hard mounts (the default
before this patch) lead to various undetectable
deadlocks, which is why we are proposing 'soft'
behaviour as the default.

It should also be noted that the new behaviour is
more consistent: even today, a write to the dead
socket may fail, or we may hang forever, so the
problem is already possible with the current state,
just unpredictably. With the new settings we would
observe the problem consistently.
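
To make that concrete, a purely illustrative fragment
(hypothetical, not QEMU code) of the two outcomes at the
socket API level:

#include <errno.h>
#include <sys/socket.h>

/* Hypothetical: the peer of fd has silently died. */
static void poke_dead_peer(int fd)
{
    char byte = 0;

    /* The write may appear to succeed: the data is merely queued
     * in the kernel send buffer of a connection that will never
     * drain it, and an error surfaces only later, if at all. */
    (void)send(fd, &byte, 1, MSG_NOSIGNAL);

    /* The blocking read can hang forever with keep-alive off; with
     * keep-alive on it fails with ETIMEDOUT once the probes give
     * up, i.e. the failure is observed consistently. */
    if (recv(fd, &byte, 1, 0) < 0 && errno == ETIMEDOUT) {
        /* tear down the connection / trigger reconnect here */
    }
}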

Den
