From: Greg Chicares
Subject: Re: [lmi] [Bulk] KVM performance [Was: Compiling takes longer with gcc-4.9.2]
Date: Mon, 25 Jan 2016 16:23:22 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Icedove/31.3.0

On 2016-01-25 00:47, Greg Chicares wrote:
> On 2016-01-18 14:32, Vadim Zeitlin wrote:
>> On Mon, 18 Jan 2016 05:04:23 +0000 Greg Chicares <address@hidden> wrote:
> [...OLD vs "NEW" machines, msw-xp, gcc-4.9.1 ...]
>> GC> Strangely, OLD beats NEW for building wx and wxPdfDoc, with --jobs=8.
>> GC> Less strangely, NEW beats OLD for lmi makefile targets with --jobs=4;
>> GC> but it's still a bit strange that the difference is so small.
>> 
>>  Yes, both are indeed pretty strange but...
>> 
>> GC> But what I really want to compare is cross compiling on x86_64-linux,
>> 
>> ... it's this set of benchmarks that I'm most eager to see.
> 
> Time to build wx with g++-mingw-w64-i686 gcc-4.9.1 ...
> 
> Debian-8 running on bare metal, "OLD" machine, using a Crucial M500 SSD:
> 
>   make --jobs=16 install
>   2953.35s user 110.49s system 997% cpu 5:07.17 total
>                                         ^^^^^^^ = 307s
> 
> Debian-8 guest, debian-7 host, qemu-kvm, "OLD" machine, WD "RE" HDD:
>   make --jobs=8 install
>   real 7m16.047s user 37m31.684s sys 2m18.176s
>        ^^^^^^^^^ = 436s
> 
> Further data to make those numbers comparable:
>   same machine, debian-7 on bare metal, cross-gcc-4.6:
>   make --jobs=16 install  1673.20s user 139.40s system 1297% cpu 2:19.65 total [140s]
>   make --jobs=8  install  1089.86s user  99.95s system  708% cpu 2:47.96 total [168s]
> which suggests a 140:168 (16-job : 8-job) time ratio. Thus, if I had
> given the VM 16 vCPUs, it might have taken
>   436s * 140/168 = 363s
> and thus (363 / 307) - 1 = 18% is the penalty for (VM, HDD) vs.
> (bare metal, SSD). Sure, chroot would be faster, but it's not
> like night and day.
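
As an aside, the "total" figures above are wall-clock times as printed by
zsh's time (m:ss.cc form) or bash's time (XmY.ZZZs form); converting them
to seconds and redoing the penalty arithmetic is a one-liner. A minimal
sketch, assuming only awk and a POSIX shell; to_seconds is a throwaway
helper, not anything in lmi's makefiles:

  # hypothetical helper: convert an "m:ss.cc" wall-clock figure to whole seconds
  to_seconds() { echo "$1" | awk -F: '{ printf "%.0f\n", $1 * 60 + $2 }'; }
  to_seconds 5:07.17    # -> 307
  to_seconds 7:16.047   # -> 436  (bash's 7m16.047s, rewritten with a colon)
  # extrapolated penalty: (436 * 140/168) / 307 - 1, expressed as a percentage
  awk 'BEGIN { printf "%.0f%%\n", ((436 * 140 / 168) / 307 - 1) * 100 }'   # -> 18%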

More data: I increased the VM's vCPUs to 16 (and its RAM to 7000MB
just in case, but it never used more than 3.2GB)...

/home/greg/build/wx-msw[0]$nproc
16
/home/greg/build/wx-msw[0]$echo $coefficiency
--jobs=16
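
For reference, resizing a libvirt-managed qemu-kvm guest like this can be
done with virsh, roughly as sketched below. This is only a sketch: the
domain name "debian8-guest" is a placeholder, the guest has to be rebooted
for --config changes to take effect, and size-suffix handling ("7000M")
depends on the libvirt version.

  # hypothetical commands; "debian8-guest" is a made-up domain name
  virsh setvcpus debian8-guest 16 --maximum --config
  virsh setvcpus debian8-guest 16 --config
  virsh setmaxmem debian8-guest 7000M --config
  virsh setmem    debian8-guest 7000M --config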

Rebuild wx (as above):

/home/greg/build/wx-msw[0]$make clean >/dev/null
/bin/sh: 1: cd: can't cd to samples
make: [clean] Error 2 (ignored)
/home/greg/build/wx-msw[0]$time make $coefficiency install >/dev/null
make $coefficiency install > /dev/null  3730.90s user 214.05s system 999% cpu 6:34.75 total

395s, not as good as the 363s extrapolated above; the reason is that
the final link takes a long time and isn't parallelized. Thus,
  (395 / 307) - 1 = 29%
is the total penalty for (VM, HDD) vs. (bare metal, SSD). My
impression is that the SSD on this machine doesn't perform much
better than the HDD (except acoustically), because it's SATA II.
Still, even if the pure virtualization penalty is 30%, that's not
terrible; but a chroot removes that penalty entirely, at zero cost
once it's set up.

BTW, to build lmi:

/home/greg/build/lmi-msw[0]$make clean >/dev/null
/home/greg/build/lmi-msw[0]$time make $coefficiency install >/dev/null 2>&1
make $coefficiency install > /dev/null 2>&1  928.11s user 69.48s system 1067% cpu 1:33.43 total

Again, this is gcc-4.9.1, but without '-std=c++11'.
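
To double-check which cross-compiler and language standard a given build
actually used, a couple of generic commands suffice (nothing lmi-specific;
"/path/to/lmi" is a placeholder):

  i686-w64-mingw32-g++ --version    # cross-compiler version; expect 4.9.1 here
  grep -rn -- '-std=' /path/to/lmi  # any explicit -std= flag in the makefiles?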



