Scalability / lock contention in HVF support?
From: Michael Pratt
Subject: Scalability / lock contention in HVF support?
Date: Tue, 10 Jan 2023 18:18:52 -0500
Hi all,
I've been investigating slow guest boot times for VMs running on
(amd64) macOS hosts, using accel=hvf. What I've found is that boot
time scales poorly with guest CPU count. Approximate boot times (host
has 12 CPUs):
1 CPU -> ~1m30s
2 CPU -> ~1m15s
4 CPU -> ~1m40s
6 CPU -> ~4m
8 CPU -> ~6m
Profiling qemu reveals quite a bit of time in hvf_vcpu_exec ->
qemu_mutex_lock_iothread / qemu_mutex_unlock_iothread (lock + unlock
was ~15% of all cycles in the 6 CPU case), which looks like lock
contention to me.
Indeed, this lock is pretty much held unconditionally for the duration
of all VM exits:
https://gitlab.com/qemu-project/qemu/-/blob/master/target/i386/hvf/hvf.c#L453.
This seems like a pretty serious scalability bottleneck.
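To illustrate, the vcpu loop there has roughly this shape (a
paraphrase of the structure, not the exact code):

    do {
        /* ... */
        qemu_mutex_unlock_iothread();  /* BQL dropped only while the guest runs */
        hv_vcpu_run(cpu->hvf->fd);     /* enter the guest */
        qemu_mutex_lock_iothread();    /* reacquired on *every* exit */

        switch (exit_reason) {
        /* all exit handling (EPT faults, I/O, MSRs, HLT, ...) runs
         * with the big lock held */
        }
    } while (ret == 0);

So with several vCPUs exiting frequently, they all serialize on the
BQL regardless of exit reason.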
I'm not sure exactly which exit reasons I'm getting most often, but
the next-largest share of CPU time is in
decode_instruction/exec_instruction. I'm not sure whether that is
from MMIO, IO ports, or APIC access.
FWIW, I took a look at the KVM support, which seems to avoid taking
this lock for many exit reasons, including MMIO and IO ports (not sure
about APIC).
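For comparison, kvm_cpu_exec in accel/kvm/kvm-all.c looks roughly
like this (again paraphrased from memory, and simplified):

    run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);

    switch (run->exit_reason) {
    case KVM_EXIT_IO:
        kvm_handle_io(...);            /* handled without the BQL */
        break;
    case KVM_EXIT_MMIO:
        address_space_rw(&address_space_memory, ...);  /* also no BQL */
        break;
    default:
        qemu_mutex_lock_iothread();    /* lock only for the slower paths */
        ret = kvm_arch_handle_exit(cpu, run);
        qemu_mutex_unlock_iothread();
        break;
    }

That keeps the hot MMIO / PIO exit paths off the big lock entirely.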
Am I even looking down the right path? Is scalability a known issue with HVF?
Thanks,
Michael