From: Dmitry Gutov
Subject: bug#64735: 29.0.92; find invocations are ~15x slower because of ignores
Date: Tue, 12 Sep 2023 16:11:01 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0
On 12/09/2023 14:39, Eli Zaretskii wrote:
>> Date: Tue, 12 Sep 2023 02:06:50 +0300
>> Cc: luangruo@yahoo.com, sbaugh@janestreet.com, yantar92@posteo.net,
>>  64735@debbugs.gnu.org
>> From: Dmitry Gutov <dmitry@gutov.dev>
>>
>>> No, we don't wait until it's zero, we perform GC on the first
>>> opportunity that we _notice_ that it crossed zero.  So examining how
>>> negative the value of consing_until_gc is when GC is actually
>>> performed could tell us whether we checked the threshold with high
>>> enough frequency, and comparing these values between different runs
>>> could tell us whether the shorter time spent in GC means really less
>>> garbage or less frequent checks for the need to GC.
>>
>> Good point, I'm attaching the same outputs with "last value of
>> consing_until_gc" added to every line.
>>
>> There are some pretty low values in the "read-process-output-max
>> 409600" part of the experiment, which probably means the runtime is
>> staying in C accumulating the output into the (now larger) buffer?
>> Not sure.
>
> No, I think this means we really miss some GC opportunities, and we
> cons quite a lot more strings between GC cycles due to that.

Or possibly the same number of strings, but longer ones?

> I guess this happens because we somehow cons many strings in code that
> doesn't call maybe_gc or something.

Yes, staying in some C code that doesn't call maybe_gc for a while.

I think we're describing the same thing, only I was doing it from the positive side (less frequent GCs = better performance in this scenario), and you from the negative one (less frequent GCs = more chances for an OOM to happen in some related but different scenario).