[Qemu-discuss] copy of 4G file in qemu fails
From: Roman Mashak
Subject: [Qemu-discuss] copy of 4G file in qemu fails
Date: Mon, 10 Nov 2014 21:41:26 -0500
Hello,
I use qemu-1.6.2 from the ovs-dpdk package available at
https://01.org/sites/default/files/downloads/packet-processing/openvswitchdpdk.l.1.1.0-27.gz
I start qemu as:
% sudo qemu-system-x86_64 -cpu host -boot c -hda fedora.qcow2 \
    -snapshot -m 1024 --enable-kvm -name vm0 -nographic \
    -pidfile /usr/local/ovs_dpdk/var/run/vm0.pid \
    -mem-path /dev/hugepages -mem-prealloc \
    -monitor unix:/usr/local/ovs_dpdk/var/run/vm0monitor,server,nowait \
    -net none \
    -netdev type=tap,id=net0,script=no,downscript=no,ifname=vhost0,vhost=on \
    -device virtio-net-pci,netdev=net0,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
%
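(As an aside: the monitor socket given above can be queried while the VM is
running; a minimal check, assuming socat is available on the host:

% echo "info status" | socat - unix-connect:/usr/local/ovs_dpdk/var/run/vm0monitor
% echo "info block" | socat - unix-connect:/usr/local/ovs_dpdk/var/run/vm0monitor

"info status" reports whether the VM is running or paused, and "info block"
lists the attached drives.)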
With this setup I can ping the other machine from the VM, through the vhost
port and out via a physical interface. However, heavy traffic (an scp of a
4G binary file into the VM) causes the vhost interface to drop off:
% scp address@hidden:/home/user/image.iso .
...
After the transfer is ~90-91% complete, I get this on the qemu console:
[   88.198496] perf samples too long (2506 > 2500), lowering kernel.perf_event_max_sample_rate to 50000
[  117.924805] perf samples too long (5060 > 5000), lowering kernel.perf_event_max_sample_rate to 25000
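(These perf messages are, as far as I can tell, just the guest kernel
throttling its own event sampling under load rather than the failure
itself; the current limits can be inspected inside the guest with:

% sysctl kernel.perf_event_max_sample_rate
% sysctl kernel.perf_cpu_time_max_percent

so I don't think they are the cause, just noise.)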
And shortly after, in the ovs-dpdk console:
APP: (0) Device has been removed from ovdk_pf port vhost0
...
I attached GDB to the running qemu and found that qemu sends an ioctl to
stop vhost:
(gdb) bt
#0 vhost_net_stop (address@hidden, ncs=0x555556851e40,
address@hidden)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/hw/net/vhost_net.c:251
#1 0x00005555557a8c43 in virtio_net_vhost_status (status=7 '\a',
n=0x555556858df8)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/hw/net/virtio-net.c:136
#2 virtio_net_set_status (vdev=<optimized out>, status=<optimized out>)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/hw/net/virtio-net.c:148
#3 0x00005555557b4cb7 in virtio_set_status (vdev=0x555556858df8,
val=<optimized out>)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/hw/virtio/virtio.c:533
#4 0x000055555576433b in vm_state_notify (address@hidden,
address@hidden) at vl.c:1764
#5 0x000055555576a8da in do_vm_stop (state=RUN_STATE_IO_ERROR)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/cpus.c:445
#6 vm_stop (address@hidden)
at /home/rmashak/work/ovs_dpdk_1_1_0/qemu/cpus.c:1119
#7 0x00005555555f4dea in bdrv_error_action (bs=0x55555615a0a0,
address@hidden, address@hidden,
address@hidden) at block.c:2805
#8 0x0000555555688e75 in ide_handle_rw_error (address@hidden,
error=28, op=<optimized out>) at hw/ide/core.c:610
#9 0x00005555556892a7 in ide_dma_cb (opaque=0x55555687bb18,
ret=<optimized out>) at hw/ide/core.c:629
#10 0x000055555562aba1 in dma_complete (dbs=0x7fffe8021490, ret=-28)
at dma-helpers.c:120
#11 0x000055555562ae2a in dma_bdrv_cb (opaque=0x7fffe8021490, ret=-28)
at dma-helpers.c:148
#12 0x00005555555ed472 in bdrv_co_em_bh (opaque=0x7fffe803f310) at block.c:3850
#13 0x00005555555e02d7 in aio_bh_poll (address@hidden)
at async.c:81
#14 0x00005555555dfe58 in aio_poll (ctx=0x55555613ba20,
address@hidden) at aio-posix.c:185
#15 0x00005555555e0190 in aio_ctx_dispatch (source=<optimized out>,
callback=<optimized out>, user_data=<optimized out>) at async.c:194
#16 0x00007ffff76f02a6 in g_main_context_dispatch ()
from /lib64/libglib-2.0.so.0
#17 0x00005555556f549a in glib_pollfds_poll () at main-loop.c:188
#18 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:233
#19 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:465
#20 0x00005555555db710 in main_loop () at vl.c:2089
#21 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
at vl.c:4431
(gdb)
It looks like an error occurred during disk I/O (frame #8,
ide_handle_rw_error with error=28, i.e. ENOSPC / "No space left on device";
the same value appears as ret=-28 in frames #10 and #11), and that this
resulted in the subsequent VM stop.
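Since the VM runs with -snapshot, all guest writes should be going to a
temporary overlay that qemu creates under $TMPDIR (/tmp by default), so my
guess is that the host filesystem holding that overlay filled up during the
copy. A quick way to check while the scp is running (the vl.* name is what
this qemu's block.c uses for the temporary image, so treat that pattern as
an assumption for other versions):

% df -h ${TMPDIR:-/tmp}
% ls -lh ${TMPDIR:-/tmp}/vl.*

If that is the case, freeing space and issuing "cont" on the monitor should
let the stopped guest resume and retry the failed request.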
Is this a known issue? Can disk I/O be tuned to avoid this behaviour when
copying large files?
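If it is indeed ENOSPC, I suppose the drive's error policy could also be
set explicitly instead of relying on the -hda defaults; untested on my
side, but something along these lines:

% sudo qemu-system-x86_64 -cpu host -boot c \
      -drive file=fedora.qcow2,if=ide,werror=report,rerror=report \
      -snapshot -m 1024 --enable-kvm ...

With werror=report the guest would see the write error itself instead of
qemu pausing the whole VM, though that only changes who handles the
failure, not the lack of space.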
Thanks.
--
Roman Mashak