Re: [Qemu-discuss] network latency for virtio-net-pci
From: Zhang Haoyu
Subject: Re: [Qemu-discuss] network latency for virtio-net-pci
Date: Tue, 2 Sep 2014 11:05:36 +0800
>Hello.
>
>We are running qemu 1.5, and one of our users is complaining about high
>latency on the OpenStack overlay network.
>
>I did some research and comparison and found the following numbers:
>
>V - virtual machine with qemu/virtio-net-pci tap device.
>H - hardware server
>S - hardware switch
>O - openvswitch bridge (OVS 2.0)
>
>V-O-V - 300 µs
>V-H - 180 µs
>H-S-H - 140 µs
>V-O-S-O-V - 600 µs
>
>After doing some math with linear equations and a few more tests, I found the
>following latencies:
>
>internal Linux latency - 50 µs
>hardware switch latency - 40 µs
>openvswitch (GRE mode) latency - 40 µs
>
>and, most important:
>
>QEMU device - 130 µs
>
Did you use vhost-net?
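If not, it is worth trying; with a tap backend it is a per-netdev option on
the QEMU command line, roughly like the following (the id, ifname and mac
values are only placeholders):

  qemu-system-x86_64 ... \
    -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56

You can verify it took effect by looking for vhost-<qemu pid> kernel threads
on the host, e.g. with "ps -e | grep vhost".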
>It looks really slow.
>
>My questions:
>
>1. Do newer versions of qemu show better latency? (If possible, please show
>me your V-to-V ping on the same host.)
I don't think so.
>2. Is there any way to reduce latency for qemu network devices?
>
1) try the "vhost: add polling mode" patch
N.B., a similar polling implementation can also be introduced in the guest virtio driver.
2) try the workqueue-based vhost work scheduling patch
3) tune the CFS parameters for lower latency: sched_min_granularity_ns,
sched_latency_ns, sched_wakeup_granularity_ns (example below)
4) renice the vhost thread to a higher priority (e.g., -10) (example below)
5) bind each vcpu to a pcpu (example below)
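For 3), on hosts where CFS exposes these knobs via sysctl, the tuning looks
like this (the values are only illustrative starting points, not tested
recommendations):

  sysctl -w kernel.sched_min_granularity_ns=1000000
  sysctl -w kernel.sched_latency_ns=6000000
  sysctl -w kernel.sched_wakeup_granularity_ns=1000000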
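For 4), the vhost worker shows up on the host as a kernel thread named
vhost-<qemu pid>, so something like the following should work (assuming pgrep
matches only the threads you intend):

  renice -n -10 -p $(pgrep vhost)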
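For 5), with libvirt/OpenStack the easiest way is virsh vcpupin, e.g. to pin
vcpu 0 of a domain to physical CPU 2 (the domain name and CPU numbers are
just examples):

  virsh vcpupin instance-00000001 0 2

Without libvirt you can read the vcpu thread ids from the QEMU monitor
("info cpus") and pin them with taskset -pc.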
Thanks,
Zhang Haoyu
>Thanks!