[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: Applying Throttle Block Filter via QMP Command
From: |
Henry lol |
Subject: |
Re: Applying Throttle Block Filter via QMP Command |
Date: |
Thu, 9 Jan 2025 11:02:46 +0900 |
I'm sorry for giving you the wrong information.
I didn't use the -drive parameter in QEMU, but the -blockdev parameter instead.
Below are the commands I used in the scenario where I/O performance
remains the same
1-1 execute the qemu process with
...
-object throttle-group,id=tg,x-bps-total=10485760 \
-blockdev
'{"driver":"qcow2","node-name":"qcow2-node","file":{"driver":"file","filename":"/path/to/file.qcow2"}}'
\
-device
virtio-blk-pci,scsi=off,drive=qcow2-node,id=did,bootindex=1,bus=pci.0,addr=0x05,serial=1234
1-2 run the blockdev-add command via qmp socket
{
"execute":"blockdev-add",
"arguments":{
"driver": "throttle",
"node-name": "throttle-node",
"throttle-group": "tg",
"file": "qcow2-node"
}}
scenario where throttle works as expected
2-1 execute the qemu process with
...
-object throttle-group,id=tg,x-bps-total=10485760 \
-blockdev
'{"driver":"throttle","throttle-group":"tg","node-name":"throttle-node","file":{
"driver":"qcow2","node-name":"qcow2-node","file":{"driver":"file","filename":"/path/to/file.qcow2"}
}}' \
-device
virtio-blk-pci,scsi=off,drive=qcow2-node,id=did,bootindex=1,bus=pci.0,addr=0x05,serial=1234
2025년 1월 8일 (수) 오후 6:45, Henry lol <pub.virtualization@gmail.com>님이 작성:
>
> Hello,
>
> I want to apply a throttle block filter using the QMP command, but it
> doesn't seem to work as the I/O performance remains the same.
>
> Are there any additional steps I need to follow?
> I predefined the throttle-group object and block device in the QEMU
> parameters and then used the blockdev-add QMP command to apply the
> filter, as described in the link
> - https://github.com/qemu/qemu/blob/master/docs/throttle.txt#L315-L322
>
> Additionally, I’ve confirmed that the filter works well when defined
> in the QEMU -drive parameter instead of using the QMP command.
>
> thanks,