qemu-devel
Re: [PATCH v2 06/12] accel/tcg: better handle memory constrained systems


From: Alex Bennée
Subject: Re: [PATCH v2 06/12] accel/tcg: better handle memory constrained systems
Date: Thu, 23 Jul 2020 11:06:43 +0100
User-agent: mu4e 1.5.5; emacs 28.0.50

Daniel P. Berrangé <berrange@redhat.com> writes:

> On Thu, Jul 23, 2020 at 10:22:25AM +0100, Alex Bennée wrote:
>> 
>> Daniel P. Berrangé <berrange@redhat.com> writes:
>> 
>> > On Wed, Jul 22, 2020 at 12:02:59PM -0700, Richard Henderson wrote:
>> >> On 7/22/20 9:44 AM, Daniel P. Berrangé wrote:
>> >> > OpenStack uses TCG in a lot of their CI infrastructure, for example,
>> >> > and runs multiple VMs. If there's 4 VMs, that's another 4 GB of
>> >> > RAM usage just silently added on top of the explicit -m value.
>> >> > 
>> >> > I wouldn't be surprised if this pushes CI into OOM, even without
>> >> > containers or cgroups being involved, as they have plenty of other
>> >> > services consuming RAM in the CI VMs.
>> >> 
>> >> I would hope that CI would also supply a -tb_size to go along with that -m
>> >> value.  Because we really can't guess on their behalf.
>> >
>> > I've never even seen mention of the -tb_size argument before myself,
>> > nor seen anyone else using it, and libvirt doesn't set it, so I think
>> > this is not a valid assumption.
>> >
>> >
>> >> > The commit 600e17b261555c56a048781b8dd5ba3985650013 talks about this
>> >> > minimizing codegen cache flushes, but doesn't mention the real world
>> >> > performance impact of eliminating those flushes ?
>> >> 
>> >> Somewhere on the mailing list was this info.  It was so dreadfully slow
>> >> it was *really* noticeable.  Timeouts everywhere.
>> >> 
>> >> > Presumably this makes the guest OS boot faster, but what's the before
>> >> > and after time ?  And what's the time like for values in between the
>> >> > original 32mb and the new 1 GB ?
>> >> 
>> >> But it wasn't "the original 32MB".
>> >> It was the original "ram_size / 4", until that broke due to argument
>> >> parsing ordering.
>> >
>> > Hmm, 600e17b261555c56a048781b8dd5ba3985650013 says it was 32 MB as the
>> > default in its commit message, which seems to match the code doing
>> >
>> >  #define DEFAULT_CODE_GEN_BUFFER_SIZE_1 (32 * MiB)
>> 
>> You need to look earlier in the sequence (see the tag pull-tcg-20200228):
>> 
>>   47a2def4533a2807e48954abd50b32ecb1aaf29a
>> 
>> so when the argument ordering broke the guest ram_size heuristic we
>> started getting reports of performance regressions because we fell back
>> to that size. Before then it was always based on guest ram size within
>> the min/max bounds set by those defines.
>
> Ah I see. That's a shame, as something based on guest RAM size feels like
> a much safer bet for a default heuristic than basing it on host RAM
> size.

It was a poor heuristic because the amount of code generation space you
need really depends on the amount of code being executed, and that is
determined more by workload than by RAM size. You may have 4 GB of RAM
running a single program with a large block cache, or 128 MB of RAM but
constantly swapping code from a block store, which triggers a
re-translation every time.

Also, as the translation cache is mmap'ed, it doesn't all have to get
used. Having spare cache isn't too wasteful.

> I'd probably say that the original commit which changed the argument
> processing is flawed, and could/should be fixed.

I'd say not - we are not trying to replace or fix the original heuristic
but to introduce a new one to finesse behaviour on relatively
resource-constrained machines. Nothing we do can cope with the full
range of ways people might invoke QEMU. For that the user will have to
look at their workload and tweak the tb-size control. The default was
chosen to make the "common" case of running a single guest on a user's
desktop perform reasonably. You'll see we make that distinction in the
comments between system emulation and, for example, linux-user, where
it's much more reasonable to expect multiple QEMU invocations.

> The problem that commit was trying to solve was to do validation of the
> value passed to -m. In fixing that it also moved the parsing. The key
> problem here is that we need to do parsing and validation at different
> points in the startup procedure.  IOW, we need to split the logic, not
> simply move the CLI parsing to the place that makes validation work.
>
> Regards,
> Daniel


-- 
Alex Bennée


