Re: [PATCH v8 74/74] cputlb: queue async flush jobs without the BQL


From: Emilio G. Cota
Subject: Re: [PATCH v8 74/74] cputlb: queue async flush jobs without the BQL
Date: Wed, 20 May 2020 00:46:13 -0400

On Mon, May 18, 2020 at 09:46:36 -0400, Robert Foley wrote:
> We re-ran the numbers with the latest re-based series.
> 
> We used an aarch64 Ubuntu VM image; the host CPU was:
> Intel(R) Xeon(R) Silver 4114 @ 2.20GHz, 2 CPUs, 10 cores/CPU,
> 20 threads/CPU.  40 threads total.
> 
> For the bare hardware and KVM tests (first chart) the host CPU was:
> HiSilicon 1620 CPU @ 2600 MHz, 2 CPUs, 64 cores per CPU, 128 CPUs total.
> 
> First, we ran a test of building the kernel in the VM.
> We did not see any major improvements or major regressions.
> Below we show the speedup of building the kernel on bare hardware
> compared with KVM and QEMU (both the baseline and the cpu-locks series).
> 
> 
>                    Speedup vs a single thread for kernel build
> 
>   40 +----------------------------------------------------------------------+
>      |         +         +         +          +         +         +  **     |
>      |                                                bare hardwar********* |
>      |                                                          kvm ####### |
>   35 |-+                                                   baseline $$$$$$$-|
>      |                                                    *cpu lock %%%%%%% |
>      |                                                 ***                  |
>      |                                               **                     |
>   30 |-+                                          ***                     +-|
>      |                                         ***                          |
>      |                                      ***                             |
>      |                                    **                                |
>   25 |-+                               ***                                +-|
>      |                              ***                                     |
>      |                            **                                        |
>      |                          **                                          |
>   20 |-+                      **                                          +-|
>      |                      **                                #########     |
>      |                    **                  ################              |
>      |                  **          ##########                              |
>      |                **         ###                                        |
>   15 |-+             *       ####                                         +-|
>      |             **     ###                                               |
>      |            *    ###                                                  |
>      |           *  ###                                                     |
>   10 |-+       **###                                                      +-|
>      |        *##                                                           |
>      |       ##  $$$$$$$$$$$$$$$$                                           |
>      |     #$$$$$%%%%%%%%%%%%%%%%%%%%                                       |
>    5 |-+  $%%%%%%                    %%%$%$%$%$%$%$%$%$%$%$%$%$%$%$%$%    +-|
>      |   %%                                                           %     |
>      | %%                                                                   |
>      |%        +         +         +          +         +         +         |
>    0 +----------------------------------------------------------------------+
>      0         10        20        30         40        50        60        70
>                                    Guest vCPUs
> 
> 
> After seeing these results and the scaling limits inherent in the build itself,
> we decided to run a test which might show the scaling improvements more clearly.

Thanks for doing these tests. I know from experience that benchmarking
is hard and incredibly time consuming, so please do not be discouraged by
my comments below.

A couple of points:

1. I am not familiar with aarch64 KVM, but I'd expect it to scale almost
as well as the native run. Are you assigning enough RAM to the guest? Also,
it can help to run the kernel build from a ramfs in the guest.
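
   For example, something along these lines in the guest (untested sketch;
   the mount point, size and source path are placeholders):

     mkdir -p /mnt/build
     mount -t tmpfs -o size=16G tmpfs /mnt/build
     cp -r ~/linux /mnt/build/linux
     cd /mnt/build/linux && make -j"$(nproc)" vmlinux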

2. The build itself does not seem to impose a scaling limit, since
it scales very well when run natively (per thread, I presume aarch64 TCG is
still slower than native, even if TCG is run on a faster x86 machine).
The limit here is probably aarch64 TCG. In particular, last time I
checked, aarch64 TCG had room for improvement scalability-wise in its
handling of interrupts and some TLB operations; this likely explains why
we see no benefit from per-CPU locks, i.e. the bottleneck is elsewhere.
This can be confirmed with the sync profiler.
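
   (Rough recipe, from memory -- command names might need double-checking:
   start QEMU with the HMP monitor on stdio, enable the profiler, run the
   workload, then dump the report.)

     qemu-system-aarch64 ... -monitor stdio
     (qemu) sync-profile on
     ... run the kernel build in the guest ...
     (qemu) info sync-profile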

IIRC I originally used ppc64 for this test because ppc64 TCG does not
have any other big bottlenecks scalability-wise. I just checked but
unfortunately I can't find the ppc64 image I used :( What I can offer
is the script I used to run these benchmarks; it is appended below.

Thanks,
                Emilio

---
#!/bin/bash

set -eu

# path to host files
MYHOME=/local/home/cota/src

# guest image
QEMU_INST_PATH=$MYHOME/qemu-inst
IMG=$MYHOME/qemu/img/ppc64/ubuntu.qcow2

ARCH=ppc64
COMMON_ARGS="-M pseries -nodefaults \
                -hda $IMG -nographic -serial stdio \
                -net nic -net user,hostfwd=tcp::2222-:22 \
                -m 48G"

# path to this script's directory, where .txt output will be copied
# from the guest.
QELT=$MYHOME/qelt
HOST_PATH=$QELT/fig/kcomp

# The guest must be able to SSH to the HOST without entering a password.
# The way I set this up is to have a passwordless SSH key in the guest's
# root user, and then copy that key's public key to the host.
# I used the root user because the guest runs on bootup (as root) a
# script that scp's run-guest.sh (see below) from the host, then executes it.
# This is done via a tiny script in the guest invoked from systemd once
# boot-up has completed.
HOST=address@hidden
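# (Illustrative sketch only, with placeholder names -- not necessarily my
#  exact setup: in the guest, as root,
#      ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
#      ssh-copy-id user@host
#  and then a systemd oneshot unit in the guest scp's run-guest.sh from the
#  host and executes it once boot-up has completed.)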

# This is a script in the host to use an appropriate cpumask to
# use cores in the same socket if possible.
# See https://github.com/cota/cputopology-perl
CPUTOPO=$MYHOME/cputopology-perl

# For each run we create this file that then the guest will SCP
# and execute. It is a quick and dirty way of passing arguments to the guest.
create_file () {
    TAG=$1
    CORES=$2
    NAME=$ARCH.$TAG-$CORES.txt

    echo '#!/bin/bash' > run-guest.sh
    echo 'cp -r /home/cota/linux-4.18-rc7 /tmp2/linux' >> run-guest.sh
    echo "cd /tmp2/linux" >> run-guest.sh
    echo "{ time make -j $CORES vmlinux >/dev/null; } 2>>/home/cota/$NAME" >> 
run-guest.sh
    # Output with execution time is then scp'ed to the host.
    echo "ssh $HOST 'cat >> $HOST_PATH/$NAME' < /home/cota/$NAME" >> 
run-guest.sh
    echo "poweroff" >> run-guest.sh
}

# Change THREADS here, and also the TAGS, which point to different QEMU
# installations.
for THREADS in 64 32 16; do
    for TAG in cpu-exclusive-work cputlb-no-bql per-cpu-lock cpu-has-work baseline; do
        QEMU=$QEMU_INST_PATH/$TAG/bin/qemu-system-$ARCH
        CPUMASK=$($CPUTOPO/list.pl --policy=compact-smt $THREADS)

        create_file $TAG $THREADS
        time taskset -c $CPUMASK $QEMU $COMMON_ARGS -smp $THREADS
    done
done


