From: Max Nikulin
Subject: Re: profiling latency in large org-mode buffers (under both main & org-fold feature)
Date: Wed, 2 Mar 2022 19:23:02 +0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.5.0

On 27/02/2022 13:43, Ihor Radchenko wrote:
> Now, I did an extended profiling of what is happening using perf:
>
>       6.20%   [.] buf_bytepos_to_charpos
Maybe I am interpreting such results wrongly, but it does not look like 
a bottleneck. Anyway, thank you very much for such efforts; however, it is 
unlikely that I will join the profiling effort in the near future.
> buf_bytepos_to_charpos contains the following loop:
>
>    for (tail = BUF_MARKERS (b); tail; tail = tail->next)
>      {
>        CONSIDER (tail->bytepos, tail->charpos);
>
>        /* If we are down to a range of 50 chars,
>           don't bother checking any other markers;
>           scan the intervening chars directly now.  */
>        if (best_above - bytepos < distance
>            || bytepos - best_below < distance)
>          break;
>        else
>          distance += BYTECHAR_DISTANCE_INCREMENT;
>      }
>
> I am not sure if I understand the code correctly, but the running time of
> that loop clearly scales with the number of markers.
I may be terribly wrong, but it looks like an optimization attempt that 
may actually ruin performance. My guess is the following. Due to 
multibyte characters, a position in the buffer counted in characters may 
differ significantly from the index into the underlying byte sequence. 
Since markers store both values, bytepos and charpos, they are used (when 
available) to narrow the initial estimation interval [0, buffer size) 
down to the nearest existing markers. The code below even creates 
temporary markers to make the next call of the function faster.
It seems buffers do not have any additional structure that tracks the 
sizes of spans in bytes and in characters (I would not expect the 
in-memory representation of the whole buffer to be a single contiguous 
byte array). When there are no markers at all, the function has to 
iterate over each character and count its byte length.
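For illustration, here is a minimal standalone sketch of what such a 
character-by-character scan amounts to. It is not the Emacs 
implementation: the contiguous text array, the UTF-8-style length table 
and the function names are assumptions made only to keep the example 
self-contained.

  #include <stddef.h>

  /* Hypothetical helper: byte length of the multibyte character whose
     first byte is C (UTF-8-style lengths, for illustration only).  */
  static size_t
  char_byte_length (unsigned char c)
  {
    if (c < 0x80) return 1;
    if ((c & 0xE0) == 0xC0) return 2;
    if ((c & 0xF0) == 0xE0) return 3;
    return 4;
  }

  /* Count characters in the first NBYTES bytes of TEXT: O(nbytes).  */
  static ptrdiff_t
  bytepos_to_charpos_by_scan (const unsigned char *text, ptrdiff_t nbytes)
  {
    ptrdiff_t bytepos = 0, charpos = 0;
    while (bytepos < nbytes)
      {
        bytepos += char_byte_length (text[bytepos]);
        charpos++;
      }
    return charpos;
  }

With markers available, the same scan only has to cover the bytes 
between the nearest known marker and the requested position.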
The problem is that when the buffer has a lot of markers far away from 
the position passed as an argument, iterating over those markers just 
consumes CPU time without significantly improving the original estimate 
of the boundaries.
If markers were organized in a tree, then the search would be much 
faster (at least for buffers with a lot of markers).
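As a rough illustration of that idea (again not Emacs code; the 
array-based layout and the function name are assumptions), keeping the 
bytepos/charpos pairs sorted by byte position would let the nearest 
bounding marker be found by binary search instead of a full scan of the 
marker list; a balanced tree would give the same O(log n) lookup while 
staying cheap to update on edits:

  #include <stddef.h>

  struct marker_pair { ptrdiff_t bytepos, charpos; };

  /* Return the index of the last marker with bytepos <= TARGET, or -1
     if there is none.  MARKERS must be sorted by bytepos.  */
  static ptrdiff_t
  nearest_marker_below (const struct marker_pair *markers, ptrdiff_t n,
                        ptrdiff_t target)
  {
    ptrdiff_t lo = 0, hi = n - 1, best = -1;
    while (lo <= hi)
      {
        ptrdiff_t mid = lo + (hi - lo) / 2;
        if (markers[mid].bytepos <= target)
          {
            best = mid;
            lo = mid + 1;
          }
        else
          hi = mid - 1;
      }
    return best;
  }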
In some cases such a function could take a hint: a previously known 
bytepos+charpos pair.
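Continuing the earlier sketch (same made-up helper, still not Emacs 
code), such a hint would shrink the scan to the gap between the hinted 
pair and the requested position:

  #include <stddef.h>

  struct byte_char_hint { ptrdiff_t bytepos, charpos; };

  /* UTF-8-style byte length of the character starting with byte C.  */
  static size_t
  char_byte_length (unsigned char c)
  {
    return c < 0x80 ? 1 : (c & 0xE0) == 0xC0 ? 2 : (c & 0xF0) == 0xE0 ? 3 : 4;
  }

  /* Resolve TARGET_BYTEPOS by scanning only the span after HINT, which
     is assumed to lie at or before the target.  */
  static ptrdiff_t
  bytepos_to_charpos_with_hint (const unsigned char *text,
                                ptrdiff_t target_bytepos,
                                struct byte_char_hint hint)
  {
    ptrdiff_t bytepos = hint.bytepos, charpos = hint.charpos;
    while (bytepos < target_bytepos)
      {
        bytepos += char_byte_length (text[bytepos]);
        charpos++;
      }
    return charpos;
  }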
I hope I have missed something, but what I would expect from the code of 
buf_bytepos_to_charpos is that it is necessary to iterate over all 
markers to update their positions after each typed character.
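To make that worry concrete, here is a toy model (made-up types and 
names, not the actual Emacs code) of the kind of per-edit adjustment 
that has to visit the whole marker list, so each keystroke costs time 
proportional to the number of markers in the buffer:

  #include <stddef.h>

  struct toy_marker
  {
    ptrdiff_t charpos, bytepos;
    struct toy_marker *next;
  };

  /* After inserting NCHARS characters (NBYTES bytes) at FROM_CHARPOS,
     walk the buffer's linked list of markers and shift every marker at
     or after the insertion point.  */
  static void
  adjust_markers_for_insert (struct toy_marker *markers,
                             ptrdiff_t from_charpos,
                             ptrdiff_t nchars, ptrdiff_t nbytes)
  {
    for (struct toy_marker *m = markers; m; m = m->next)
      if (m->charpos >= from_charpos)
        {
          m->charpos += nchars;
          m->bytepos += nbytes;
        }
  }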
> Finally, FYI. I plan to work on an alternative mechanism to access Org
> headings - a generic Org query library. It will not use markers and will
> implement ideas from org-ql. org-refile will eventually use that generic
> library instead of the current mechanism.
I suppose that markers might be implemented in an efficient way, and 
much better performance could be achieved when the low-level data 
structures are accessible. I have doubts about attempts to create 
something that resembles markers but is based purely on a high-level API.


