emacs-devel

Re: Indentation and gc


From: Ihor Radchenko
Subject: Re: Indentation and gc
Date: Sun, 12 Mar 2023 15:04:30 +0000

Eli Zaretskii <eliz@gnu.org> writes:

>> I used 0.7%*RAM because total RAM is the only reasonable metrics. What
>> else can we use to avoid memory over-consumption on low-end machines?
>
> It could be the amount of VM on low-memory machines, but something
> else on high-memory machines.  No one said it has to be derived from
> the total VM on both systems with 2 GiB and systems with 128 GiB.

For high-memory machines, the aim is to use the limit (100MB, in what I
proposed), or something close to it. I see no point in trying to be
precise here: even what I propose is better than the current default
when memory is not low, if we aim for an increased, yet safe,
gc-cons-threshold.

>> Of course, I used implicit assumption that memory usage will scale with
>> gc-cons-threshold linearly. IMHO, it is a safe assumption - the real
>> memory usage increase is slower than linear. For example, see my Emacs
>> loading data for different threshold values:
>
> We are talking about changing the threshold for the session itself,
> not just for the initial load.  So the statistics of the initial load
> is not what is needed.

Session statistics are much harder to gather.
It is more realistic to ask people to benchmark their init.el if we
want to be serious about bumping gc-cons-threshold.

At least then we can get a good limit for init.el.
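
For example, a hook along these lines (just a sketch; the exact report
format is not important) would give comparable numbers across different
gc-cons-threshold values, set in early-init.el or via --eval:

;; Report init time and GC statistics once startup finishes.
;; All the variables used here are built into Emacs.
(add-hook 'emacs-startup-hook
          (lambda ()
            (message "init: %.2fs, %d GCs, %.2fs in GC (gc-cons-threshold %d)"
                     (float-time (time-subtract after-init-time before-init-time))
                     gcs-done gc-elapsed gc-cons-threshold)))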

>> The 0.7% is to ensure safe 800kb lower bound on low-end computers.
>
> I don't see why it would be the value we need to adhere to.  That it's
> the current default doesn't make it sacred, and using it as basis for
> relative figures such as 0.7% has no real basis.

We do not have to adhere to this value, but we know for sure that it is
100% safe. And we probably cannot easily get data from low-end
machines: far fewer users run them.

So, instead of arguing about the lower limit as well, let's just use the
safe one. We can always bump it later, if we wish to bother.
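
To make it concrete, the heuristic I have in mind is roughly the
following (a sketch only, assuming `memory-info' is available; it
reports sizes in units of 1024 bytes, and the 0.7%/800kB/100MB figures
are the ones discussed above, not settled values):

;; 0.7% of total RAM, clamped between the current 800kB default
;; and the proposed 100MB cap.
(defun suggested-gc-cons-threshold ()
  "Return 0.7% of total RAM, clamped between 800kB and 100MB."
  (let* ((total-ram (* 1024 (car (memory-info)))) ; total RAM in bytes
         (candidate (floor (* 0.007 total-ram))))
    (min (* 100 1000 1000)           ; upper cap: 100MB
         (max 800000 candidate))))   ; lower bound: current default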

>> > Anyway, how about if you try running with the threshold you think we
>> > should adopt, and report back after a month or so, say?
>> 
>> I am using 250Mb threshold for the last 3 years or so.
>> GCs are sometimes noticeable, but not annoying:
>> 
>> - gc-elapsed 297 sec / gcs-done 290 -> ~1 sec per GC
>
> IMO, 1 sec per GC is pretty annoying.  It's around 0.165 sec in my
> production session, and it's still quite noticeable.  I'd be interested
> to hear what others think.

The 1 sec has little to do with my gc-cons-threshold, I am afraid; it is
down to the combination of packages I use. I have also seen worse memory
consumption, in particular when using spell-fu, which copies the local
dictionary into every single buffer.

That's why I see reducing the frequency of GCs as more important than
trying to reduce GC time, which cannot easily be even halved.
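
For reference, that ~1 sec per GC is simply the ratio of the two
built-in counters:

(/ gc-elapsed gcs-done)   ; => ~1.02 sec per GC in my session (297 / 290)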

>> - memory-limit 6,518,516, stable
>
> ??? That's 6 GiB.  Didn't you say your memory footprint stabilizes at
> 1 GiB?

memory-limit is a natively compiled function defined in subr.el.

Signature
(memory-limit)

Documentation
Return an estimate of Emacs virtual memory usage, divided by 1024.

It is different from the Lisp object storage size, which is about
1.3GB. And Emacs memory usage as shown in the system monitor is about
1.7GB.
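
That is, memory-limit counts virtual memory in 1024-byte units, so the
figure above amounts to roughly

(/ 6518516 1024.0 1024.0)   ; => ~6.2, i.e. about 6.2GB of virtual memory

which covers all mappings of the process, not just live Lisp data.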

> Anyway, we need such statistics from many people and many different
> values of the threshold, and then we will be in a position to decide
> on better default values, and perhaps also on some more dynamic
> adjustments to it.  We are not ready for that yet.

Shall we ask people to benchmark their init.el with different
gc-cons-threshold values as a start?

-- 
Ihor Radchenko // yantar92,
Org mode contributor,
Learn more about Org mode at <https://orgmode.org/>.
Support Org development at <https://liberapay.com/org-mode>,
or support my work at <https://liberapay.com/yantar92>


