bug-gnu-emacs

bug#64735: 29.0.92; find invocations are ~15x slower because of ignores


From: Eli Zaretskii
Subject: bug#64735: 29.0.92; find invocations are ~15x slower because of ignores
Date: Tue, 12 Sep 2023 22:35:37 +0300

> Date: Tue, 12 Sep 2023 21:48:37 +0300
> Cc: luangruo@yahoo.com, sbaugh@janestreet.com, yantar92@posteo.net,
>  64735@debbugs.gnu.org
> From: Dmitry Gutov <dmitry@gutov.dev>
> 
> > then we could try extending
> > internal-default-process-filter (or writing a new filter function
> > similar to it) so that it inserts the stuff into the gap and then uses
> > decode_coding_gap,
> 
> Can that work at all? By the time internal-default-process-filter is 
> called, we have already turned the string from char* into Lisp_Object 
> text, which we then pass to it. So consing has already happened, IIUC.

That's why I said "or writing a new filter function".
read_and_dispose_of_process_output will have to call this new filter
differently, passing it the raw text read from the subprocess, where
read_and_dispose_of_process_output currently first decodes the text and
produces a Lisp string from it.  Then the filter would need to do
something similar to what insert-file-contents does: insert the raw
input into the gap, then call decode_coding_gap to decode that
in-place.

> > which converts inserted bytes in-place -- that, at
> > least, will be correct and will avoid consing intermediate temporary
> > strings from the process output, then decoding them, then inserting
> > them.  Other than that, the -2 and -3 variants are very close
> > runners-up of -5, so maybe I'm missing something, but I see no reason
> > be too excited here?  I mean, 0.89 vs 0.92? really?
> 
> The important part is not 0.89 vs 0.92 (that would be meaningless 
> indeed), but that we have an _asynchronous_ implementation of the feature 
> that works as fast as the existing synchronous one (or faster! if we 
> also bind read-process-output-max to a large value, the time is 0.72).
> 
> The possible applications for that range from simple (printing a progress 
> bar while the scan is happening) to more advanced (launching a 
> concurrent process where we pipe the received file names concurrently to 
> 'xargs grep'), including visuals (xref buffer which shows the 
> intermediate search results right away, updating them gradually, all 
> without blocking the UI).

Hold your horses.  Emacs only reads output from sub-processes when
it's idle.  So printing a progress bar (which makes Emacs not idle)
with the asynchronous implementation is basically the same as having
the synchronous implementation call some callback from time to time
(which will then show the progress).

As for piping to another process, this is best handled by using a
shell pipe, without passing stuff through Emacs.  And even if you do
need to pass it through Emacs, you could do the same with the
synchronous implementation -- only the "xargs" part needs to be
asynchronous, the part that reads file names does not.  Right?

Please note: I'm not saying that the asynchronous implementation is
not interesting.  It might even have advantages in some specific use
cases.  So it is good to have it.  It just isn't a breakthrough,
that's all.  And if we want to use it in production, we should
probably work on adding that special default filter which inserts and
decodes directly into the buffer, because that will probably lower the
GC pressure and thus has hope of being faster.  Or even replace the
default filter implementation with that new one.

> > About inserting into the buffer: what we do is insert into the gap,
> > and when the gap becomes full, we enlarge it.  Enlarging the gap
> > involves: (a) enlarging the chunk of memory allocated to buffer text
> > (which might mean we ask the OS for more memory), and (b) moving the
> > characters after the gap to the right to free space for inserting more
> > stuff.  This is pretty fast, but still, with a large pipe buffer and a
> > lot of output, we do this many times, so it could add up to something
> pretty tangible.  It's hard for me to tell whether this is 
> > significantly faster than consing strings and inserting them, only
> > measurements can tell.
> 
> See the benchmark tables and the POC patch in my previous email. Using a 
> better filter function would be ideal, but it seems like that's not 
> going to fit the current design. Happy to be proven wrong, though.

I see no reason why reading subprocess output couldn't use the same
technique as insert-file-contents does.
