Re: [PATCH v1 0/2] Add timeout mechanism to qmp actions
From: Fam Zheng
Subject: Re: [PATCH v1 0/2] Add timeout mechanism to qmp actions
Date: Thu, 22 Oct 2020 17:29:16 +0100
On Tue, 2020-10-20 at 09:34 +0800, Zhenyu Ye wrote:
> On 2020/10/19 21:25, Paolo Bonzini wrote:
> > On 19/10/20 14:40, Zhenyu Ye wrote:
> > > The kernel backtrace for io_submit in GUEST is:
> > >
> > > guest# ./offcputime -K -p `pgrep -nx fio`
> > > b'finish_task_switch'
> > > b'__schedule'
> > > b'schedule'
> > > b'io_schedule'
> > > b'blk_mq_get_tag'
> > > b'blk_mq_get_request'
> > > b'blk_mq_make_request'
> > > b'generic_make_request'
> > > b'submit_bio'
> > > b'blkdev_direct_IO'
> > > b'generic_file_read_iter'
> > > b'aio_read'
> > > b'io_submit_one'
> > > b'__x64_sys_io_submit'
> > > b'do_syscall_64'
> > > b'entry_SYSCALL_64_after_hwframe'
> > > - fio (1464)
> > > 40031912
> > >
> > > And Linux io_uring can avoid the latency problem.
Thanks for the info. What this tells us is basically that the number of
inflight requests is high: the guest-side io_submit() is sleeping in
blk_mq_get_tag(), waiting for a free request tag because the device
queue is already full. It's sad that linux-aio is in practice
implemented as a blocking API.
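For reference, here is a minimal liburing sketch of the decoupling that
makes io_uring avoid this (assumes liburing is installed; the device
path and buffer size are just examples, not from the report above):

#define _GNU_SOURCE
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    struct io_uring_sqe *sqe;
    struct io_uring_cqe *cqe;
    static char buf[4096] __attribute__((aligned(4096)));
    int fd;

    /* O_DIRECT read from a raw block device, as in the fio test above;
     * the device path is only an example. */
    fd = open("/dev/vdb", O_RDONLY | O_DIRECT);
    if (fd < 0 || io_uring_queue_init(8, &ring, 0) < 0) {
        perror("setup");
        return 1;
    }

    sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);

    /* Submission only queues the request; the submitting thread is not
     * put to sleep in the block layer the way io_submit() can be. */
    io_uring_submit(&ring);

    /* Waiting for the completion is a separate, optional step. */
    io_uring_wait_cqe(&ring, &cqe);
    printf("res = %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);
    io_uring_queue_exit(&ring);
    return 0;
}

Builds with something like: gcc sketch.c -luring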
A host-side backtrace would be of more help. Can you get that too?
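For example, the same tool run against the QEMU process should work
(assuming a single QEMU instance; adjust the pgrep pattern to your
setup):

host# ./offcputime -K -p `pgrep -n qemu`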
Fam
> >
> > What filesystem are you using?
> >
>
> On the host, the VM image and disk images are stored on an ext4
> filesystem. In the guest, '/' uses xfs, and the data disks are raw
> devices.
>
> guest# df -hT
> Filesystem              Type       Size  Used Avail Use% Mounted on
> devtmpfs                devtmpfs    16G     0   16G   0% /dev
> tmpfs                   tmpfs       16G     0   16G   0% /dev/shm
> tmpfs                   tmpfs       16G  976K   16G   1% /run
> /dev/mapper/fedora-root xfs        8.0G  3.2G  4.9G  40% /
> tmpfs                   tmpfs       16G     0   16G   0% /tmp
> /dev/sda1               xfs       1014M  181M  834M  18% /boot
> tmpfs                   tmpfs      3.2G     0  3.2G   0% /run/user/0
>
> guest# lsblk
> NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda               8:0    0  10G  0 disk
> ├─sda1            8:1    0   1G  0 part /boot
> └─sda2            8:2    0   9G  0 part
>   ├─fedora-root 253:0    0   8G  0 lvm  /
>   └─fedora-swap 253:1    0   1G  0 lvm  [SWAP]
> vda             252:0    0  10G  0 disk
> vdb             252:16   0  10G  0 disk
> vdc             252:32   0  10G  0 disk
> vdd             252:48   0  10G  0 disk
>
> Thanks,
> Zhenyu
>