emacs-devel

Re: Improvement proposals for `completing-read'


From: Dmitry Gutov
Subject: Re: Improvement proposals for `completing-read'
Date: Sun, 11 Apr 2021 03:51:43 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1

On 10.04.2021 12:18, Daniel Mendler wrote:
On 4/10/21 4:21 AM, Dmitry Gutov wrote:
These `consult--async-*` functions can be chained together to produce an async pipeline. The goal here was to have reusable functions which I can glue together to create different async backends. See for example the pipeline for asynchronous commands: https://github.com/minad/consult/blob/3121b34e207222b2db6ac96a655d68c0edf1a449/consult.el#L1505-L1513.
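
For illustration, a toy version of the pattern with hypothetical `my-async-*` names (not the actual Consult code, just the general shape):

  ;; Every stage receives the next stage (the "sink") and returns a new
  ;; async function, so the stages compose into a pipeline.
  (defun my-async-sink ()
    "Terminal stage: collect incoming candidates in a list."
    (let (candidates)
      (lambda (action)
        (if (listp action)
            (setq candidates (append candidates action))
          candidates))))

  (defun my-async-transform (sink fun)
    "Stage that applies FUN to every incoming candidate."
    (lambda (action)
      (funcall sink (if (listp action) (mapcar fun action) action))))

  ;; Stages are glued together simply by nesting the calls:
  (let ((pipeline (my-async-transform (my-async-sink) #'upcase)))
    (funcall pipeline '("foo" "bar")) ; push new candidates in
    (funcall pipeline 'get))          ; => ("FOO" "BAR")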


I also like that idea.

How are the backtraces in case of error? Whether they can be made readable enough was one of the sticking points in the (quite heated) discussion of bug#41531.

Backtraces are rather opaque. But having such issues at the language level should not be a roadblock. I think the proper fix would be to improve the debugging infrastructure slightly.

Instead of printing byte codes, make this more accessible: either show some disassembled string or show location information, which should be attached to the lambdas. Elisp retains all of this due to its dynamic nature, so the information is still present and we can print all objects in a nice way.
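
For example (just illustrating the status quo, using the stock `byte-compile` and `disassemble` functions):

  ;; A byte-compiled lambda prints opaquely in a backtrace, roughly as
  ;; #[257 "..." [...] 4], but the object still carries enough
  ;; information to be rendered readably on demand:
  (disassemble (byte-compile (lambda (x) (* x x))))
  ;; pops up a *Disassemble* buffer with a symbolic listing of the
  ;; byte operations instead of the raw escape codes.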

If you just (load ...) the package, there will be no bytecode in the output. I was more concerned with simply how it reads (whether it's easy enough to diagnose a problem by reading the backtrace), or at least whether it's not considerably worse than the alternatives.

Regarding navigation, though, precise symbol locations are an old Emacs problem. IIRC Alan posted about making some progress on it in recent months, and lambdas could indeed be annotated similarly with that solution.

Then there were a few other issues with lambdas. I think the interpreter captures too much in the closures, which can lead to large closures, and that is bad for debuggability. The byte compiler, in contrast, seems to perform an analysis. Is this right? Please correct me if I am wrong. I also wonder why the actual interpreter is still around at all; why is it not possible to pass everything through bytecode? I guess this is a legacy issue and also a bootstrapping issue, since the bytecode compiler is written in Elisp itself.
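
For the first point, an illustration (evaluated with `lexical-binding` enabled; the exact printed shape may differ between Emacs versions):

  ;; The interpreter represents this closure as a list that carries
  ;; the whole enclosing lexical environment, including `big', which
  ;; the body never references:
  (let ((big (make-list 5 'unused))
        (x 1))
    (lambda () x))
  ;; => (closure ((x . 1) (big unused unused unused unused unused) t)
  ;;             nil x)
  ;; The byte compiler, in contrast, performs a free-variable analysis,
  ;; so the compiled closure retains only `x'.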

Speed, probably (all the JS VMs include an interpreter stage, IIRC)?

And if we always worked with byte code directly, stuff like edebug and the backtrace printer would need to be repurposed.

There are others here better qualified to answer, anyway.

Furthermore, I had another issue with lambdas: if you create large closures which capture the whole candidate set, as I am doing with Consult async, then you end up with memory problems if you register these closures as hooks. The problem is that `add-hook/remove-hook` compare using `equal`, and this uses hash tables internally, which can get very expensive. See bug#46326, bug#46407 and bug#46414.
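
A hypothetical illustration (made-up names), together with the usual workaround of registering a named function so that the hook machinery only has to compare a symbol:

  ;; Problematic: the closure captures the candidate list, so
  ;; `add-hook'/`remove-hook' end up `equal'-comparing that whole data
  ;; structure against every member already present on the hook.
  (defvar my-candidates (make-list 100000 "candidate"))
  (let ((candidates my-candidates))
    (add-hook 'minibuffer-exit-hook (lambda () (ignore candidates))))

  ;; Cheaper: register a named function; membership checks then only
  ;; compare the symbol, and the data stays behind an indirection.
  (defun my-minibuffer-exit () (ignore my-candidates))
  (add-hook 'minibuffer-exit-hook #'my-minibuffer-exit)
  (remove-hook 'minibuffer-exit-hook #'my-minibuffer-exit)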

Perhaps that's another reason not to use hooks for this, and instead to attach frontend updater callbacks to the "future" values directly? With lexically scoped callbacks, for example.

I would probably say that a UI should itself know better when to refresh or not, but I'm guessing you have good counter-examples.

One could update the UI using a timer if an async source is used (polling). However, since I am building this on top of the `completing-read' infrastructure, I felt it was better to do it the other way round, since the table is only queried when the user enters new input. I guess for fast sources polling will be just as good, but for slow sources, notifying the UI is better.
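
The polling variant would look roughly like this (illustrative only, made-up names):

  ;; Polling: refresh the UI on a timer while the async source runs,
  ;; whether or not new candidates have actually arrived.
  (defvar my-poll-timer nil)
  (defun my-start-polling (refresh-fn)
    (setq my-poll-timer (run-with-timer 0 0.2 refresh-fn)))
  (defun my-stop-polling ()
    (when my-poll-timer
      (cancel-timer my-poll-timer)
      (setq my-poll-timer nil)))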

Perhaps we're just talking about the same thing, differently.

What I meant is that the source should invoke the callback when it gets new data, and the callback should store the result somewhere and can notify the UI about the possibility of refreshing (the frontend implements the callback, so it knows whether and how to do it). But whether to refresh right away, wait, debounce, or simply discard the output should be the frontend's choice.
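
Roughly like this (a hypothetical sketch, not an existing API):

  ;; Push: the source calls a callback supplied by the frontend; the
  ;; callback stores the new data and merely signals that a refresh is
  ;; now possible.  Whether to redisplay immediately, debounce, or
  ;; ignore the signal remains the frontend's decision.
  (defun my-make-source-callback (store notify)
    (lambda (new-candidates)
      (funcall store new-candidates)
      (funcall notify)))

  ;; Example frontend wiring:
  (let* ((candidates nil)
         (callback (my-make-source-callback
                    (lambda (new)
                      (setq candidates (append candidates new)))
                    (lambda ()
                      (message "%d candidates so far; may refresh now"
                               (length candidates))))))
    (funcall callback '("first" "second")))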

No hurry at all. Sometimes, though, a big feature like that can inform the whole design from the outset.

Yes, sure. When planning a big overhaul, you are certainly right. But currently I am more focused on fixing a few smaller pain points with the API, like retaining text properties and so on.

Sounds good. I just wanted to add some context for completeness, in case the work turns in the direction of the "next completing-read".

Yes, it seems the discussion already went a bit in that direction. I agree that it is good to keep all these points in mind when designing a new `completing-read'. However, from my work on Consult I am actually not that unhappy with `completing-read' as it is. With the handful of small proposals I made in my original mail, the situation will be improved in the places where I had issues. If you look at my `consult--read` wrapper, it has to add some special enhancements (preview, narrowing, async, ...), but I think one can work reasonably well with the `completing-read' API. For now I prefer to work with what exists rather than throwing everything out. At the very least, the Consult/Embark packages show that one can implement more advanced completion features on top of the existing infrastructure, with only a small number of advices/hacks.

I've read it briefly, thanks.

Sounds like you have what is needed to propose an "async" extension to the standard completion tables API?

;-)

(No pressure.)


