From: Juan Quintela
Subject: Re: [PATCH v2 01/10] migration: Increase default number of multifd channels to 16
Date: Tue, 07 Jan 2020 14:32:24 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)

Daniel P. Berrangé <address@hidden> wrote:
> On Fri, Jan 03, 2020 at 07:25:08PM +0100, Juan Quintela wrote:
>> Daniel P. Berrangé <address@hidden> wrote:
>> > On Wed, Dec 18, 2019 at 03:01:10AM +0100, Juan Quintela wrote:
>> >> We can scale much better with 16 channels, so we can reach higher bandwidths.
>> >
>> > What was the test scenario showing such scaling ?
>> 
>> On my test hardware, with 2 channels we can saturate around 8 Gigabit at
>> most; beyond that, the migration thread is not fast enough to fill the
>> network bandwidth.
>> 
>> With 8, that is enough to fill whatever we can find.
>> We used to have a bug where we ran into trouble with more channels
>> than cores.  That was the initial reason why the default was so low.
>> 
>> So, pros/cons are:
>> - have a low value (2).  We are backwards compatible, but we are not using
>>   all the bandwidth.  Notice that we will detect the error before 5.0 is
>>   out and print a good error message.
>> 
>> - have a high value (I tested 8 and 16).  I found no performance loss when
>>   moving to lower bandwidth limits, and clearly we were able to saturate
>>   the higher speeds (I tested on localhost, so I had big enough bandwidth).
>> 
>> 
>> > In the real world I'm sceptical that virt hosts will have
>> > 16 otherwise idle CPU cores available that are permissible
>> > to use for migration, or indeed whether they'll have network
>> > bandwidth available to allow 16 cores to saturate the link.
>> 
>> The problem here is that if you have such a host, and you want to have
>> high speed migration, you need to configure it.  My measurements are
>> that a high number of channels doesn't affect performance at low
>> bandwidth, but a low number of channels does hurt performance at high
>> bandwidth.
>
> I'm not concerned about impact on performance of migration on a
> low bandwidth link, rather I'm concerned about impact on performance
> of other guests on the host. It will cause migration to contend with
> other guests' vCPUs and network traffic.

Two things here:
- vcpus:  If you want migration to consume all the bandwidth, you are
  happy with it using more vcpus.
- bandwidth: It will only consume the bandwidth that the guest's migration
  has been assigned, split (we hope evenly) between all the channels.

My main reasons to have a higher number of channels are:
- to test the code better with more than one channel
- to work "magically" well in all scenarios.  With a low number of
  channels, we are not going to be able to saturate a big network pipe.
  (See the QMP sketch below for how the channel count is raised today.)
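For reference, a minimal sketch of how multifd and the channel count can be
set explicitly today over QMP (the capability has to be enabled on both the
source and the destination before issuing "migrate"; 16 is simply the number
discussed in this thread):

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "multifd", "state": true } ] } }
  { "execute": "migrate-set-parameters",
    "arguments": { "multifd-channels": 16 } }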


>
>> So, if we want to have something that works "automatically" everywhere,
>> we need to set it to at least 8.  Or we can trust that the management app
>> will do the right thing.
>
> Aren't we still setting the bandwidth limit to 32MB out of the
> box, so we already require the mgmt app to change settings to use more
> bandwidth?

Yeap.  This is the default bandwidth.

#define MAX_THROTTLE  (32 << 20)
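For context: 32 << 20 is 33554432 bytes/s, i.e. 32 MiB/s, which (as I read
migration.c) is what the "max-bandwidth" parameter defaults to.  A mgmt app
that wants faster migration already has to raise it explicitly, along these
lines (the 1 GiB/s value is only an example):

  { "execute": "migrate-set-parameters",
    "arguments": { "max-bandwidth": 1073741824 } }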


>> If you are using a low bandwidth value, the only difference with 16
>> channels is that you use a bit more memory (just the space for the
>> stacks) and that you have less contention on the locks (but with
>> low bandwidth you don't have contention anyway).
>> 
>> So, I think that the questions are:

Note that my idea is to make multifd "default" in the near future (5.1
timeframe or so).

>> - What does libvirt prefer
>
> Libvirt doesn't really have an opinion in this case. I believe we'll
> always set the number of channels on both src & dst, so we don't
> see the defaults.

What does libvirt do today for this value?

>> - What do ovirt/openstack prefer
>
> Libvirt should insulate them from any change in defaults in QEMU
> in this case, by always explicitly setting channels on src & dst
> to match.

I agree here, they shouldn't care by default.

>> - Do we really want the user to "have" to configure that value
>
> Right, so this is the key question - for a user not using libvirt
> or a libvirt-based mgmt app, what do we want out-of-the-box
> migration to be tuned for?

In my opinion, we should have something like the following (see the QMP
sketch after the list):
- multifd: enabled by default
- max downtime: 300 ms (current) looks right to me
- max bandwidth: 32MB/s (current) seems a bit low. 100MB/s (i.e. almost
  full gigabit ethernet) seems reasonable to me.  Having a default for
  10Gigabit ethernet or similar seems too high.
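As a sketch of what those values mean as explicit settings today (reading
100MB/s as 100 * 1024 * 1024 = 104857600 bytes/s; downtime-limit is in
milliseconds) - this only illustrates the proposal above, it is not a
committed change:

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "multifd", "state": true } ] } }
  { "execute": "migrate-set-parameters",
    "arguments": { "downtime-limit": 300,
                   "max-bandwidth": 104857600 } }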

> If we want to maximise migration performance, at the cost of anything
> else, then we can change the migration channel count, but we probably
> also ought to remove the 32MB bandwidth cap, as no useful guest with
> active apps will complete migration with a 32MB cap.

I will start another series with the current values to discuss all the
defaults, ok?

thanks for the comments, Juan.



