
Re: [Qemu-discuss] network latency for virtio-net-pci


From: George Shuklin
Subject: Re: [Qemu-discuss] network latency for virtio-net-pci
Date: Tue, 02 Sep 2014 15:03:47 +0300
User-agent: Mozilla/5.0 (X11; Linux i686; rv:31.0) Gecko/20100101 Thunderbird/31.0

On 09/02/2014 06:05 AM, Zhang Haoyu wrote:
We are running qemu 1.5, and one of the users complains about high latency on
the OpenStack overlay network.

I did some research and comparison and found these numbers:

V - virtual machine with qemu/virtio-net-pci tap device.
H - hardware server
S - hardware switch
O - openvswitch bridge (OVS 2.0)

V-O-V - 300 µs
V-H - 180 µs
H-S-H - 140 µs
V-O-S-O-V - 600 µs

After doing some math with linear equations and a few more tests, I found the
following latencies:

internal linux latency - 50 µs
hardware switch latency - 40 µs
openvswitch (gre-mode) latency - 40 µs

and, most important:

QEMU device - 130 µs
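That decomposition can be sanity-checked against the first three measurements with simple arithmetic (a sketch; it ignores queueing effects, and the two-host V-O-S-O-V path clearly carries extra cost beyond the naive sum of its hops):

```shell
# Component latencies derived above, all in microseconds.
QEMU=130; LINUX=50; SWITCH=40; OVS=40

# V-H: one qemu device plus one host network stack
echo "V-H   = $((QEMU + LINUX))"       # measured: 180 µs
# H-S-H: two host stacks plus one hardware switch
echo "H-S-H = $((2*LINUX + SWITCH))"   # measured: 140 µs
# V-O-V: two qemu devices plus one openvswitch bridge
echo "V-O-V = $((2*QEMU + OVS))"       # measured: 300 µs
```

All three sums match the measured paths, which is what makes the 130 µs per-QEMU-device figure credible.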
Did you use vhost-net?
Yep. Here is the command line for the VM (network part):

qemu-system-x86_64 -machine accel=kvm:tcg ... -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:0e:4c:5c,bus=pci.0,addr=0x3 ...
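As a side note, one quick way to confirm that vhost=on actually took effect for a running guest is to look for the /dev/vhost-net fd and the per-VM vhost kernel thread (a hypothetical check, assuming the process name qemu-system-x86_64 from the command line above):

```shell
# Find the first running qemu process, if any (requires root to read its fds).
pid=$(pgrep -o qemu-system-x86_64 || true)

if [ -n "$pid" ]; then
    # A vhost=on netdev holds an open /dev/vhost-net fd in the qemu process...
    ls -l "/proc/$pid/fd" | grep vhost-net
    # ...and the kernel runs a per-VM "vhost-<pid>" worker thread.
    ps -eLo pid,comm | grep "vhost-$pid"
else
    echo "no qemu-system-x86_64 process found"
fi
```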
2. Is there any way to reduce latency for qemu network devices?

1) try to patch vhost to add a polling mode
    N.B., you can also introduce a similar implementation in the guest virtio driver.
2) try to patch in workqueue-based vhost work scheduling
3) tune CFS parameters for lower latency: sched_min_granularity_ns,
sched_latency_ns, sched_wakeup_granularity_ns
4) renice the vhost thread to a higher priority (e.g., -10)
5) bind each vcpu to a pcpu
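Items 3-5 above need no patching and can be sketched as a root shell session (the sysctl values are illustrative examples, not tuned recommendations, and the CPU number passed to taskset is a placeholder):

```shell
# 3) shrink CFS scheduling granularity for lower wakeup latency (example values)
sysctl -w kernel.sched_min_granularity_ns=100000
sysctl -w kernel.sched_latency_ns=1000000
sysctl -w kernel.sched_wakeup_granularity_ns=25000

# 4) raise the priority of the per-VM vhost worker thread,
#    which is named "vhost-<qemu pid>"
pid=$(pgrep -o qemu-system-x86_64)
vhost=$(pgrep -f "vhost-$pid")
renice -n -10 -p "$vhost"

# 5) pin threads to physical cpus; vcpu thread ids can be read from
#    /proc/<pid>/task or the qemu monitor ("info cpus").
#    Here, as an example, pin the vhost thread to cpu 2:
taskset -cp 2 "$vhost"
```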

Thanks, I'll try.


