
bug#64735: 29.0.92; find invocations are ~15x slower because of ignores


From: Dmitry Gutov
Subject: bug#64735: 29.0.92; find invocations are ~15x slower because of ignores
Date: Wed, 13 Sep 2023 17:27:49 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0

On 13/09/2023 14:38, Eli Zaretskii wrote:
>> Date: Tue, 12 Sep 2023 23:27:49 +0300
>> Cc: luangruo@yahoo.com, sbaugh@janestreet.com, yantar92@posteo.net,
>>   64735@debbugs.gnu.org
>> From: Dmitry Gutov <dmitry@gutov.dev>

>>> That's why I said "or writing a new filter function".
>>> read_and_dispose_of_process_output will have to call this new filter
>>> differently, passing it the raw text read from the subprocess, where
>>> read_and_dispose_of_process_output currently first decodes the text and
>>> produces a Lisp string from it.  Then the filter would need to do
>>> something similar to what insert-file-contents does: insert the raw
>>> input into the gap, then call decode_coding_gap to decode that
>>> in-place.

>> Does the patch from my last patch-bearing email look similar enough to
>> what you're describing?
>>
>> The one called read_and_insert_process_output.diff

> No, not entirely: it still produces a Lisp string when decoding is
> needed, and then inserts that string into the buffer.

Are you sure? IIUC the fact that it passes 'curbuf' as the last argument to 'decode_coding_c_string' means that decoding happens inside the buffer. This has been my explanation for the performance improvement anyway.

If it still generated a Lisp string, I think that would mean that we could keep the general shape of internal-default-process-filter and just improve its implementation for the same measured benefit.
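
To make that concrete, here is a rough Lisp-level sketch of the shape being discussed: the filter receives undecoded bytes, inserts them at the process mark, and decodes them in place, similar in spirit to what insert-file-contents does. The function name is made up, the hard-coded coding system and the missing handling of multibyte sequences split across chunk boundaries are simplifications, and the real gain would of course come from doing this in C without consing a Lisp string per chunk:

  (defun my-insert-and-decode-filter (proc string)
    ;; STRING arrives undecoded because of the 'binary coding below.
    (when (buffer-live-p (process-buffer proc))
      (with-current-buffer (process-buffer proc)
        (save-excursion
          (goto-char (process-mark proc))
          (let ((start (point)))
            (insert string)
            ;; Decode in place, analogous to what decode_coding_gap
            ;; does for insert-file-contents.
            (decode-coding-region start (point) 'utf-8)
            (set-marker (process-mark proc) (point)))))))

  (make-process :name "find"
                :command '("find" "." "-type" "f")
                :buffer (generate-new-buffer "*find-output*")
                :coding '(binary . binary)
                :filter #'my-insert-and-decode-filter)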

> Did you look at what insert-file-contents does?  If not, I suggest
> having a look, starting from this comment:
>
>    /* Here, we don't do code conversion in the loop.  It is done by
>       decode_coding_gap after all data are read into the buffer.  */
>
> and ending here:
>
>    if (CODING_MAY_REQUIRE_DECODING (&coding)
>        && (inserted > 0 || CODING_REQUIRE_FLUSHING (&coding)))
>      {
>        /* Now we have all the new bytes at the beginning of the gap,
>           but `decode_coding_gap` can't have them at the beginning of the gap,
>           so we need to move them.  */
>        memmove (GAP_END_ADDR - inserted, GPT_ADDR, inserted);
>        decode_coding_gap (&coding, inserted);
>        inserted = coding.produced_char;
>        coding_system = CODING_ID_NAME (coding.id);
>      }
>    else if (inserted > 0)
>      {
>        /* Make the text read part of the buffer.  */
>        eassert (NILP (BVAR (current_buffer, enable_multibyte_characters)));
>        insert_from_gap_1 (inserted, inserted, false);
>
>        invalidate_buffer_caches (current_buffer, PT, PT + inserted);
>        adjust_after_insert (PT, PT_BYTE, PT + inserted, PT_BYTE + inserted,
>                            inserted);
>      }

That does look different. I'm not sure how long it would take me to adapt this code (if you have an alternative patch to suggest right away, please go ahead), but if this method turns out to be faster, it sounds like we could improve the performance of 'call_process' the same way. That would be a win-win.
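
For reference, call-process already delivers output straight into a buffer at the Lisp level, so any such improvement would be internal to its decoding step rather than a change to the interface. A minimal illustration (assuming find is installed):

  (with-temp-buffer
    ;; t as the third argument means "insert the output into the
    ;; current buffer".
    (call-process "find" nil t nil "." "-maxdepth" "1" "-type" "f")
    (buffer-string))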

>> The result there, though, is that a "filter" (in the sense that
>> make-process uses that term) is not used at all.

> Sure, but in this case we don't need any filtering.  It's basically
> the same idea as internal-default-process-filter: we just need to
> insert the process output into a buffer, and optionally decode it.

Pretty much. But that raises the question of what to do with the existing function internal-default-process-filter.

Looking around, it doesn't seem to be used with advice (a good thing: the proposed change would break that), but it is called directly in some packages like magit-blame, org-assistant, with-editor, wisi, sweeprolog, etc. I suppose we'd just keep it around unchanged.
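
For illustration, a common pattern in such packages is to do their own processing on each chunk and then delegate the actual insertion to the default filter; a simplified, hypothetical example (the helper is invented):

  (defun my-package--record-output (string)
    ;; Hypothetical bookkeeping, standing in for whatever the package
    ;; really does with each chunk.
    (message "received %d characters" (length string)))

  (defun my-package-process-filter (proc string)
    (my-package--record-output string)
    ;; Let the default filter handle insertion into the process buffer
    ;; and updating the process mark.
    (internal-default-process-filter proc string))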

>>> And if we want to use it in production, we should
>>> probably work on adding that special default filter which inserts and
>>> decodes directly into the buffer, because that will probably lower the
>>> GC pressure and thus has hope of being faster.  Or even replace the
>>> default filter implementation with that new one.

>> But a filter must be a Lisp function, which can't help but accept only
>> Lisp strings (not C strings) as argument. Isn't that right?

> We can provide a special filter identified by a symbol.  Such a filter
> will not be Lisp-callable, it will exist for the cases where we need
> to insert the output into the process buffer.
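
Purely as an illustration of that interface (the symbol name below is invented; nothing like it exists today), a caller would then request the built-in inserting filter by name instead of passing a function:

  ;; Hypothetical: the symbol is not a callable function, just a
  ;; designator the C code would recognize and handle by inserting and
  ;; decoding the output directly in the process buffer.
  (make-process :name "find"
                :command '("find" "." "-type" "f")
                :buffer (get-buffer-create "*find-output*")
                :filter 'internal-insert-process-output)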

That would be the safest alternative. OTOH, this way we'd pass up on the opportunity to make all existing asynchronous processes without custom filters a little bit faster in one fell swoop.

> Any Lisp callback could
> then access the process output as the text of that buffer, no Lisp
> strings needed.  I thought this was a worthy goal; apologies if I
> misunderstood.

Sorry, I was just quibbling about the terminology, to make sure we are on the same page about what is being proposed. If the patch and evidence look good to people, that is. And I'd like to explore that improvement avenue to the max.

But note that it has limitations as well (e.g. a filter is the only way to get in-process callbacks from the process, and avoiding it for best performance will require external callbacks such as timers), so if someone has any better ideas for how to improve GC time to a comparable extent while keeping the design unchanged, that's also welcome.
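
To sketch that trade-off: with no per-chunk Lisp filter, code would react to the output through the sentinel (or a polling timer) and read it from the process buffer afterwards, along these lines:

  (make-process :name "find"
                :command '("find" "." "-name" "*.el")
                :buffer (generate-new-buffer "*find-output*")
                :sentinel (lambda (proc _event)
                            ;; Called on status changes only, not per
                            ;; chunk of output.
                            (unless (process-live-p proc)
                              (with-current-buffer (process-buffer proc)
                                (message "find produced %d characters"
                                         (buffer-size))))))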

Should we also discuss increasing the default of read-process-output-max? Even increasing it 10x (not necessarily 100x) creates a noticeable difference, especially combined with the proposed change.
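
For context, the default value is 4096 bytes, so the 10x variant mentioned above amounts to:

  ;; Default is 4096; read up to ~40 KB from the subprocess per read.
  (setq read-process-output-max (* 10 4096))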




