From: Daniel P. Berrangé
Subject: Re: [PATCH v2 01/10] migration: Increase default number of multifd channels to 16
Date: Tue, 7 Jan 2020 12:49:34 +0000
User-agent: Mutt/1.12.1 (2019-06-15)

On Fri, Jan 03, 2020 at 07:25:08PM +0100, Juan Quintela wrote:
> Daniel P. Berrangé <address@hidden> wrote:
> > On Wed, Dec 18, 2019 at 03:01:10AM +0100, Juan Quintela wrote:
> >> We can scale much better with 16 channels, so we can reach higher speeds.
> >
> > What was the test scenario showing such scaling ?
> 
> On my test hardware, with 2 channels we can saturate around 8 Gigabit at
> most; beyond that, the migration threads are not fast enough to fill the
> network bandwidth.
> 
> With 8 channels that is enough to fill whatever link we can find.
> We used to have a bug that caused trouble when there were more channels
> than cores.  That was the initial reason why the default was so low.
> 
> So, the pros/cons are:
> - Keep the low value (2).  We stay backwards compatible, but we are not
>   using all the bandwidth.  Note that we will detect the error before 5.0
>   is out and print a good error message.
> 
> - Use a high value (I tested 8 and 16).  I found no performance loss when
>   moving to lower bandwidth limits, and we were clearly able to saturate
>   the higher speeds (I tested on localhost, so I had big enough bandwidth).
> 
> 
> > In the real world I'm sceptical that virt hosts will have
> > 16 otherwise idle CPU cores available that are permissible
> > to use for migration, or indeed whether they'll have network
> > bandwidth available to allow 16 cores to saturate the link.
> 
> The problem here is that if you have such a host, and you want high
> speed migration, you need to configure it.  My measurements are that a
> high number of channels doesn't hurt performance with low bandwidth,
> but a low number of channels does hurt performance at high bandwidth.

I'm not concerned about the impact on performance of migration on a
low bandwidth link, rather I'm concerned about the impact on performance
of other guests on the host. It will cause migration to contend with
other guests' vCPUs and network traffic.

> So, if we want something that works "automatically" everywhere, we need
> to set it to at least 8.  Or we can trust that the management app will
> do the right thing.

Aren't we still setting the bandwidth limit to 32MB out of the box, so
we already require the mgmt app to change settings in order to use more
bandwidth?
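
(For illustration, raising that limit already needs an explicit step from
the mgmt app or user, eg over QMP; the parameter name here is from memory
and the value (~10Gbit/s expressed in bytes/sec) is only an example:

  -> { "execute": "migrate-set-parameters",
       "arguments": { "max-bandwidth": 1250000000 } }
  <- { "return": {} }

so treat this as a sketch rather than a recipe.)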

> If you are using a low bandwidth value, the only difference with 16
> channels is that you use a bit more memory (just the space for the
> stacks) and have less contention on the locks (though with low
> bandwidth you have no contention anyway).
> 
> So, I think that the question is:
> - What does libvirt prefer?

Libvirt doesn't really have an opinion in this case. I believe we'll
always set the number of channels on both src & dst, so we don't
see the defaults.
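
(eg, IIRC since libvirt 5.2 a user can request this explicitly with
something like

  virsh migrate --live --parallel --parallel-connections 8 \
        guestname qemu+ssh://desthost/system

and libvirt then sets the channel count on both sides itself, so the
QEMU default is never consulted. Option names are from memory, so take
this as a sketch.)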

> - What do oVirt/OpenStack prefer?

Libvirt should insulate them from any change in defaults in QEMU
in this case, by always explicitly setting channels on src & dst
to match.

> - Do we really want the user to "have" to configure that value?

Right, so this is the key question: for a user not using libvirt
or a libvirt-based mgmt app, what do we want out-of-the-box
migration to be tuned for?

If we want to maximise migration performance, at the cost of everything
else, then we can change the migration channel count, but we probably
also ought to remove the 32MB bandwidth cap, as no useful guest with
active apps will succeed in migrating under a 32MB cap.
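
(Roughly, getting maximum performance today means doing both by hand,
eg over QMP; the capability/parameter names are from memory and the
numbers are only examples:

  -> { "execute": "migrate-set-capabilities",
       "arguments": { "capabilities": [
           { "capability": "multifd", "state": true } ] } }
  -> { "execute": "migrate-set-parameters",
       "arguments": { "multifd-channels": 16,
                      "max-bandwidth": 1250000000 } }

so bumping the channel default alone wouldn't achieve much without also
lifting the bandwidth cap.)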

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



