
Re: [PATCH v17 6/8] softmmu/dirtylimit: Implement virtual CPU throttle


From: Jason Wang
Subject: Re: [PATCH v17 6/8] softmmu/dirtylimit: Implement virtual CPU throttle
Date: Thu, 26 May 2022 10:51:55 +0800

On Wed, May 25, 2022 at 11:56 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, May 25, 2022 at 11:38:26PM +0800, Hyman Huang wrote:
> > > 2. Also, this algorithm only limits the dirty rate of guest
> > > writes. Some memory dirtying is done by virtio-based devices and
> > > is accounted for only at the QEMU level, so it may not be captured
> > > by the dirty rings. Do we have a plan for that in the future? This
> > > is not an issue for auto-converge, since it slows the whole VM,
> > > whereas the dirty rate limit only slows guest writes.
> > >
> > From the migration point of view, the time spent migrating memory is far
> > greater than the time spent migrating devices emulated by QEMU. I think we
> > can deal with that once migrating devices costs time of the same magnitude
> > as migrating memory.
> >
> > As for auto-converge, it throttles vCPUs by kicking them and forcing them
> > to sleep periodically. The two approaches do not differ much in their
> > internal mechanism, but auto-converge is somewhat "aggressive" in how it
> > applies the restraint. I'll read the auto-converge implementation code and
> > look into the problem you pointed out.
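For reference, the kick-and-sleep throttling works roughly like the sketch
below. This is a minimal illustration in the spirit of QEMU's auto-converge,
not the actual softmmu/cpu-throttle.c code; the function name, the 10 ms
timeslice, and the plain nanosleep() are assumptions made for the example.

    /*
     * Minimal sketch of kick-and-sleep vCPU throttling.  A periodic
     * timer kicks the vCPU out of guest mode and puts its thread to
     * sleep, so that sleep/(sleep + run) equals the throttle fraction.
     */
    #include <stdint.h>
    #include <time.h>

    #define THROTTLE_TIMESLICE_NS 10000000LL    /* 10 ms of run time */

    /* Invoked from a timer once per run+sleep period, per vCPU. */
    static void vcpu_throttle_tick(double pct)  /* 0.0 < pct < 1.0 */
    {
        /* sleep/(sleep + run) == pct  =>  sleep = run * pct/(1 - pct) */
        int64_t sleep_ns =
            (int64_t)(pct / (1.0 - pct) * THROTTLE_TIMESLICE_NS);
        struct timespec ts = {
            .tv_sec  = sleep_ns / 1000000000LL,
            .tv_nsec = sleep_ns % 1000000000LL,
        };
        nanosleep(&ts, NULL);   /* the vCPU makes no progress here */
    }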
>
> This seems not to be virtio-specific; it applies to any device DMA
> writing to guest memory (vfio aside).  But indeed virtio can normally
> be faster.
>
> I'm also curious how fast device DMA could dirty memory.  This is a
> question to answer for all vCPU-based throttling approaches (including
> the quota-based approach that was proposed on the KVM list).  Maybe for
> in-kernel virtio drivers we can make an easier estimation?

As you said below, it really depends on the speed of the backend.

> My guess is it'll be much harder for DPDK-in-guest (i.e. userspace
> drivers), because IIUC that could use a large chunk of guest memory.

Probably; for a vhost-user backend it could be ~20 Mpps or even higher.
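
For a sense of scale, some rough arithmetic of my own (assuming MTU-sized
packets of ~1500 bytes, 4 KiB guest pages, and every packet written into
guest memory):

    20 Mpps * 1500 B/pkt                ~= 30 GB/s of bytes written
    20 Mpps * one 4 KiB page per packet ~= 80 GB/s of pages dirtied
                                           (worst case; in practice many
                                           buffers share pages)

Even the conservative figure is well beyond typical migration bandwidth.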

Thanks

>
> [copy Jason too]
>
> --
> Peter Xu
>



