Re: [PATCH] coroutine: cap per-thread local pool size
From: Daniel P. Berrangé
Subject: Re: [PATCH] coroutine: cap per-thread local pool size
Date: Wed, 20 Mar 2024 14:09:32 +0000
User-agent: Mutt/2.2.12 (2023-09-09)
On Wed, Mar 20, 2024 at 09:35:39AM -0400, Stefan Hajnoczi wrote:
> On Tue, Mar 19, 2024 at 08:10:49PM +0000, Daniel P. Berrangé wrote:
> > On Tue, Mar 19, 2024 at 01:55:10PM -0400, Stefan Hajnoczi wrote:
> > > On Tue, Mar 19, 2024 at 01:43:32PM +0000, Daniel P. Berrangé wrote:
> > > > On Mon, Mar 18, 2024 at 02:34:29PM -0400, Stefan Hajnoczi wrote:
> > > > > diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
> > > > > index 5fd2dbaf8b..2790959eaf 100644
> > > > > --- a/util/qemu-coroutine.c
> > > > > +++ b/util/qemu-coroutine.c
> > > >
> > > > > +static unsigned int get_global_pool_hard_max_size(void)
> > > > > +{
> > > > > +#ifdef __linux__
> > > > > +    g_autofree char *contents = NULL;
> > > > > +    int max_map_count;
> > > > > +
> > > > > +    /*
> > > > > +     * Linux processes can have up to max_map_count virtual memory areas
> > > > > +     * (VMAs). mmap(2), mprotect(2), etc fail with ENOMEM beyond this limit. We
> > > > > +     * must limit the coroutine pool to a safe size to avoid running out of
> > > > > +     * VMAs.
> > > > > +     */
> > > > > +    if (g_file_get_contents("/proc/sys/vm/max_map_count", &contents, NULL,
> > > > > +                            NULL) &&
> > > > > +        qemu_strtoi(contents, NULL, 10, &max_map_count) == 0) {
> > > > > +        /*
> > > > > +         * This is a conservative upper bound that avoids exceeding
> > > > > +         * max_map_count. Leave half for non-coroutine users like library
> > > > > +         * dependencies, vhost-user, etc. Each coroutine takes up 2 VMAs so
> > > > > +         * halve the amount again.
> >
> > Leaving half for loaded libraries, etc. is quite conservative
> > if max_map_count is the small-ish 64k default.
> >
> > That reservation could perhaps be a fixed number like 5,000?
>
> While I don't want QEMU to abort, once this heuristic is in the code it
> will be scary to make it more optimistic and we may never change it. So
> now is the best time to try 5,000.
>
> I'll send a follow-up patch that reserves 5,000 mappings. If that turns
> out to be too optimistic we can increase the reservation.
BTW, I suggested 5,000 because I looked at a few QEMU processes I have
running on Fedora and saw just under 1,000 lines in /proc/$PID/maps,
of which only a subset is library mappings. So multiplying that by 5 felt
like a fairly generous overhead for more complex build configurations.
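For illustration only, here is a rough standalone sketch of how the cap
could be derived with a fixed reservation instead of reserving half. This
is not the follow-up patch itself; the helper name and the 5,000 figure
are just the numbers discussed above, and it reads max_map_count with
plain stdio rather than QEMU's helpers:

  #include <stdio.h>

  /* Hypothetical fixed reservation for non-coroutine users (libraries,
   * vhost-user, etc.) -- the value floated in this thread, not one taken
   * from any merged patch. */
  #define NON_COROUTINE_VMA_RESERVE 5000

  /* The patch comment above notes each coroutine takes up 2 VMAs. */
  #define VMAS_PER_COROUTINE 2

  static unsigned int pool_hard_max_from(int max_map_count)
  {
      if (max_map_count <= NON_COROUTINE_VMA_RESERVE) {
          return 0; /* no headroom left for coroutines at all */
      }
      return (max_map_count - NON_COROUTINE_VMA_RESERVE) / VMAS_PER_COROUTINE;
  }

  int main(void)
  {
      int max_map_count = 65530; /* common Linux default, used as fallback */
      FILE *f = fopen("/proc/sys/vm/max_map_count", "r");

      if (f) {
          if (fscanf(f, "%d", &max_map_count) != 1) {
              max_map_count = 65530;
          }
          fclose(f);
      }

      printf("max_map_count=%d -> pool hard max=%u coroutines\n",
             max_map_count, pool_hard_max_from(max_map_count));
      return 0;
  }

With the common default max_map_count of 65530, a fixed 5,000 reservation
gives roughly (65530 - 5000) / 2 ~= 30,265 coroutines, versus about 16,382
when half of the mappings are set aside as in the patch quoted above.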
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|