qemu-discuss

Low I/O performance after loadvm


From: address@hidden
Subject: Low I/O performance after loadvm
Date: Sat, 30 Mar 2024 06:57:04 +0000

Hi team,

I’m testing a libvirt feature on Ubuntu and found that I/O performance is low after I revert to a snapshot.
To narrow it down, I started a VM directly with qemu-system-x86_64, applied an I/O throttle (the equivalent of libvirt's blkiotune) to restrict IOPS, then ran savevm followed by loadvm, and got the same result: low I/O performance.
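For context, the libvirt-side test was along these lines (the domain name, disk target, and snapshot name here are illustrative, not my exact values):

   # cap a disk of a running domain at 300 total IOPS via libvirt
   virsh blkdeviotune ubuntu-20.04-vm vdb --total-iops-sec 300 --live

   # take an internal snapshot and revert to it
   virsh snapshot-create-as ubuntu-20.04-vm snapshot1
   virsh snapshot-revert ubuntu-20.04-vm snapshot1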

My QEMU start command:

qemu-system-x86_64 \
  -name ubuntu-20.04-vm,debug-threads=on \
  -machine pc-i440fx-8.2,usb=off,dump-guest-core=off \
  -accel kvm \
  -cpu Broadwell-IBRS,vme=on,ss=on,vmx=on,f16c=on,rdrand=on,hypervisor=on,arat=on,tsc-adjust=on,md-clear=on,stibp=on,ssbd=on,xsaveopt=on,pdpe1gb=on,abm=on,tsx-ctrl=off,hle=off,rtm=off \
  -m 8192 \
  -overcommit mem-lock=off \
  -smp 2,sockets=1,dies=1,cores=1,threads=2 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram \
  -object memory-backend-ram,id=ram,size=8192M \
  -uuid d2d68f5d-bff0-4167-bbc3-643e3566b8fb \
  -display none \
  -nodefaults \
  -monitor stdio \
  -rtc base=utc,driftfix=slew \
  -no-shutdown \
  -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -blockdev '{"driver":"file","filename":"/virt/images/focal-server-cloudimg-amd64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-4-format","read-only":false,"driver":"qcow2","file":"libvirt-4-storage","backing":null}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=libvirt-4-format,id=virtio-disk0,bootindex=1 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_1.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":null}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=libvirt-3-format,id=virtio-disk1 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_2.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x6,drive=libvirt-2-format,id=virtio-disk2 \
  -blockdev '{"driver":"file","filename":"/virt/disks/vm1_disk_3.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
  -blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
  -device virtio-blk-pci,bus=pci.0,addr=0x7,drive=libvirt-1-format,id=virtio-disk3 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0,index=0 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 \
  -msg timestamp=on

Steps to reproduce:

1. Start a VM with the command above. After launch you should see a message
   similar to: "char device redirected to /dev/pts/1 (label charserial0)"

2. Confirm the VM is running:
   (qemu) info status
   VM status: running

3. Throttle each data disk to 300 total IOPS (the argument order is explained
   in the note after these steps):
   (qemu) block_set_io_throttle virtio-disk1/virtio-backend 0 0 0 300 0 0
   (qemu) block_set_io_throttle virtio-disk2/virtio-backend 0 0 0 300 0 0
   (qemu) block_set_io_throttle virtio-disk3/virtio-backend 0 0 0 300 0 0

4. On the host, attach to the guest serial console, e.g. "screen /dev/pts/1"
   (the pts path may differ; use the one reported in step 1).

5. Log in to the VM and run fio against any of the throttled disks.

6. The fio result should show 300 IOPS:

   root@ubuntu:~# fio --name=test_vdb --ioengine=sync --rw=randwrite --bs=4k --size=20G --numjobs=1 --time_based --runtime=30s --filename=/dev/vdb --direct=1

test_vdb: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1

fio-3.16

Starting 1 process

Jobs: 1 (f=1): [w(1)][100.0%][w=1201KiB/s][w=300 IOPS][eta 00m:00s]

test_vdb: (groupid=0, jobs=1): err= 0: pid=1111: Thu Mar 28 09:08:46 2024

...

7. Take an internal snapshot:
   (qemu) savevm snapshot1

8. Revert to it:
   (qemu) loadvm snapshot1

9. Repeat steps 4, 5, and 6.

10. The fio result should again be 300 IOPS, but it is only 70-80:

   root@ubuntu:~# fio --name=test_vdb --ioengine=sync --rw=randwrite --bs=4k --size=20G --numjobs=1 --time_based --runtime=30s --filename=/dev/vdb --direct=1

test_vdb: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1

fio-3.16

Starting 1 process

Jobs: 1 (f=1): [w(1)][100.0%][w=344KiB/s][w=86 IOPS][eta 00m:00s]

...
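A note on step 3: in HMP, block_set_io_throttle takes the device followed by
six positional limits, in this order:

   block_set_io_throttle <device> <bps> <bps_rd> <bps_wr> <iops> <iops_rd> <iops_wr>

so "0 0 0 300 0 0" above caps each device at 300 total IOPS and leaves the
byte-rate and read/write-specific limits at 0, i.e. unlimited.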

 

My environment:

Tested with two QEMU versions:

:~# qemu-system-x86_64 --version
QEMU emulator version 6.2.0 (Debian 1:6.2+dfsg-2ubuntu6.17)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers

:~# qemu-system-x86_64 --version
QEMU emulator version 8.2.0
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers

:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 22.04.4 LTS
Release:    22.04
Codename:    jammy

 

I want to know why the I/O performance drops after loadvm. What should I do to avoid this behaviour?
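In case it is useful, one way to check whether the throttle settings themselves survive loadvm is to query them over QMP. This is a minimal sketch, assuming the VM is additionally started with "-qmp unix:/tmp/qmp.sock,server,nowait" (the socket path is illustrative) and that socat and jq are available on the host:

   # Print the iops/bps limits that query-block reports for each device;
   # the limits live in each entry's "inserted" member.
   (echo '{"execute":"qmp_capabilities"}'; sleep 1; \
    echo '{"execute":"query-block"}'; sleep 1) \
     | socat - unix-connect:/tmp/qmp.sock \
     | jq '.return[]? | {qdev, iops: .inserted.iops, bps: .inserted.bps}'

If the 300 IOPS cap is still reported after loadvm, the throttle configuration itself is intact and the slowdown must come from somewhere else.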

Thanks!

Best Regards,

Kevin Xin

