From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2 01/10] migration: Increase default number of multifd channels to 16
Date: Fri, 3 Jan 2020 17:32:44 +0000
User-agent: Mutt/1.13.0 (2019-11-30)

* Daniel P. Berrangé (address@hidden) wrote:
> On Fri, Jan 03, 2020 at 05:01:14PM +0000, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (address@hidden) wrote:
> > > On Wed, Dec 18, 2019 at 03:01:10AM +0100, Juan Quintela wrote:
> > > > We can scale much better with 16, so we can scale to higher numbers.
> > > 
> > > What was the test scenario showing such scaling ?
> > > 
> > > In the real world I'm sceptical that virt hosts will have
> > > 16 otherwise idle CPU cores available that are permissible
> > > to use for migration, or indeed whether they'll have network
> > > bandwidth available to allow 16 cores to saturate the link.
> > 
> > With TLS or compression, the network bandwidth could easily be there.
> 
> Yes, but this constant is setting a default that applies regardless of
> whether TLS / compression is enabled and/or whether CPU cores are idle.
> IOW, there can be cases where using 16 threads will be a perf win, I'm
> just questioning the suitability as a global default out of the box.
> 
> I feel like what's really lacking with migration is documentation
> around the usefulness of the very many parameters, and the various
> interesting combinations & tradeoffs around enabling them. So instead
> of changing the default threads, can we focus on improving
> documentation so that mgmt apps have good information on which to
> make the decision about whether & when to use 2 or 16 or $NNN migration
> threads.

Yes, although the short answer is: increase it if you find your
migration threads are saturated, either due to a very fast network
connection or a CPU-heavy setting (such as TLS or compression).
The answer might also vary if you have compression/encryption
offload engines (which I'd like to try).  Given that this series is
about compression, I guess that's the use case Juan has in mind here.
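For reference, the knob in question is the "multifd" capability plus the
"multifd-channels" parameter, which a mgmt app would set over QMP before
issuing the migrate command.  Something like the following (untested
sketch; the socket path is made up, and the names assume a QEMU new
enough to have dropped the old x-multifd prefix) is what I'd expect:

import json
import socket

def qmp_command(sock_path, execute, arguments=None):
    # One-shot QMP client: connect, do the capabilities handshake,
    # run a single command and return its reply.
    # (Ignores asynchronous QMP events for brevity.)
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    f.readline()                                  # greeting banner
    cmds = [{"execute": "qmp_capabilities"}]
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    cmds.append(cmd)
    reply = None
    for c in cmds:
        f.write(json.dumps(c) + "\n")
        f.flush()
        reply = json.loads(f.readline())
    s.close()
    return reply

# Turn multifd on and bump the channel count before starting the migration.
qmp_command("/tmp/qmp.sock", "migrate-set-capabilities",
            {"capabilities": [{"capability": "multifd", "state": True}]})
qmp_command("/tmp/qmp.sock", "migrate-set-parameters",
            {"multifd-channels": 16})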

On a 100Gbps NIC (which are readily available these days), I managed to
squeeze 70Gbps out of an earlier multifd version with 8 channels, which
beat the RDMA code in throughput (albeit eating CPU).
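(If anyone wants to reproduce that kind of measurement, query-migrate
reports the instantaneous throughput while the migration runs; purely
illustrative, reusing the qmp_command() helper sketched above:)

# Field names ("status", "ram", "mbps") come from the QMP schema.
info = qmp_command("/tmp/qmp.sock", "query-migrate")["return"]
if info.get("status") == "active":
    print("current throughput: %.1f Mbps" % info["ram"]["mbps"])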

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



