Re: Some experience with the igc branch
From: Gerd Möllmann
Subject: Re: Some experience with the igc branch
Date: Tue, 24 Dec 2024 15:12:40 +0100
User-agent: Gnus/5.13 (Gnus v5.13)
Eli Zaretskii <eliz@gnu.org> writes:
>>
>> I'm using SIGPROF below to make it more concrete. Similar for other
>> signals.
>>
>> The idea is to get the backtrace in the SIGPROF handler, without
>> accessing Lisp data. That can be done, as I've tried to show.
>> Then place that backtrace somewhere.
>
> Let's be more accurate: when I said "Lisp data", I actually meant any
> data that is part of the Lisp machine's global state. That's because
> you cannot safely access that state while the Lisp machine runs (and
> modifies the state). You need the Lisp machine stopped in its tracks.
> Agreed?
Ok, let's use that definition.
> Now, with that definition, isn't specpdl stack part of "Lisp data"?
> If so, and if we can safely access it from a signal handler, why do we
> need to move it aside at all? And how would the "message handler" be
> different in that aspect from a signal handler?
We're coming from the problem that MPS uses signals for memory barriers
on platforms other than macOS, and I am proposing a solution for that.
The SIGPROF handler does two things: (1) get the current backtrace,
which does not trip on memory barriers, and (2) build a summary, i.e.
count identical backtraces using a hash table. Step (2) trips on memory
barriers.
So my proposal is to do (1) in the signal handler and (2) elsewhere,
not in the signal handler. Where (2) is done is a matter of design. If
we use Helmut's work queue, it would be the main thread, I suppose.
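To make the split concrete, here is a rough sketch in C. It is not
Emacs code: capture_raw_backtrace, count_backtrace_in_log, the ring
sizes and so on are invented names standing in for whatever
get_backtrace and the counting part of record_backtrace actually do.
The point is only that the handler fills a preallocated slot and never
allocates or touches MPS-managed memory, while the counting happens
wherever the samples are drained.

#include <signal.h>
#include <stdatomic.h>
#include <stddef.h>

#define MAX_DEPTH 64            /* frames kept per sample (made up) */
#define RING_SIZE 256           /* samples buffered between drains */

struct sample
{
  int depth;
  void *frames[MAX_DEPTH];      /* raw, non-Lisp frame descriptors */
};

static struct sample ring[RING_SIZE];
static atomic_size_t ring_head; /* advanced by the signal handler */
static atomic_size_t ring_tail; /* advanced by the consumer */

/* Stand-in for the barrier-safe walk that get_backtrace would have to
   do; returns the number of frames written.  Purely hypothetical.  */
static int
capture_raw_backtrace (void **frames, int max_depth)
{
  (void) frames;
  (void) max_depth;
  return 0;
}

/* Stand-in for the hash-table counting that record_backtrace does
   after get_backtrace.  Purely hypothetical.  */
static void
count_backtrace_in_log (void **frames, int depth)
{
  (void) frames;
  (void) depth;
}

/* Phase (1): runs in the SIGPROF handler.  It does not allocate and
   does not touch MPS-managed memory; it only fills a slot that already
   exists.  */
static void
sigprof_handler (int sig)
{
  (void) sig;
  size_t head = atomic_load (&ring_head);
  size_t tail = atomic_load (&ring_tail);
  if (head - tail >= RING_SIZE)
    return;                     /* buffer full: drop this sample */

  struct sample *s = &ring[head % RING_SIZE];
  s->depth = capture_raw_backtrace (s->frames, MAX_DEPTH);
  atomic_store (&ring_head, head + 1);
}

/* Phase (2): runs on the main thread (or wherever the work queue is
   processed), where tripping a memory barrier is harmless.  */
static void
drain_profiler_samples (void)
{
  size_t head = atomic_load (&ring_head);
  size_t tail = atomic_load (&ring_tail);
  for (; tail < head; tail++)
    {
      struct sample *s = &ring[tail % RING_SIZE];
      count_backtrace_in_log (s->frames, s->depth);
    }
  atomic_store (&ring_tail, tail);
}

/* Install the handler (simplified; the real profiler arms SIGPROF via
   a timer).  */
static void
install_sigprof_handler (void)
{
  struct sigaction sa;
  sa.sa_handler = sigprof_handler;
  sigemptyset (&sa.sa_mask);
  sa.sa_flags = SA_RESTART;
  sigaction (SIGPROF, &sa, NULL);
}

The single-producer/single-consumer ring keeps the handler
async-signal-safe; the memory-ordering details are simplified here.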
In any case, we're in "normal" multi-threading territory, with the
usual restrictions, but those are restrictions Emacs already has. And
we don't need anything from MPS, which might or might not be possible
to get.
>
>> In an actor-model architecture, one would use a message that contains
>> the backtrace and post it to a message board. I used that architecture
>> just as an example, because I like it a lot. In the same architecture,
>> typically a scheduler thread would then assign a thread to handle the
>> message. The handler handling the profiler message would then do what
>> record_backtrace today does after get_backtrace, i.e. count same
>> backtraces.
>
> What is the purpose of delaying the part of record_backtrace after
> get_backtrace to later? Is the counting it does dangerous when done
> from a signal handler?
That's part (2), which can trip on memory barriers because it accesses
MPS-managed memory, like vectors and so on.
>
>> That's only one example architecture, of course. One can use
>> something else, like queues that are handled by another thread, one
>> doesn't need a scheduler thread, and so on. Pip's work queue is an
>> example.
>
> Doing this from another thread raises the problem I describe above: we
> need the Lisp thread(s) stopped, because you cannot examine the data
> of the Lisp machine while the machine is running. And if we stop the
> Lisp threads, why do we need the other thread at all?
>
> I guess we are tossing ideas without sufficient detail, so each one
> understands something different from each idea (since we have
> different backgrounds and experiences). My suggestion is that to
> describe each idea in enough detail to make the design and its
> implications clear to all. A kind of DR, if you want. Then we will
> be on the same page, and can have an effective discussion of the
> various ideas.
I hope the above helps. Please understand that I'm not proposing a
ready-made design, but mainly recommending moving (2) out of the signal
handler. Sorry if that was too abstract so far; I guess that's just the
way I think.
If it helps, maybe we should concentrate on solving this with Helmut's
work queue. Put the backtrace from (1) in the work queue, then do (2)
where the work queue is processed. Something like that.
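Just to illustrate one possible hand-off (I don't claim this is how
Helmut's work queue works): the classic self-pipe trick. The handler
writes a byte to a pipe, and the main loop drains the sample buffer
when the pipe becomes readable. Again, all names are hypothetical.

#include <fcntl.h>
#include <unistd.h>

/* The phase-(2) drain from the sketch above; declared here only so
   that this fragment reads on its own.  */
extern void drain_profiler_samples (void);

static int profiler_pipe[2];    /* [0] = read end, [1] = write end */

/* Called once at startup, before the profiler timer is armed.  */
static int
init_profiler_pipe (void)
{
  if (pipe (profiler_pipe) != 0)
    return -1;
  for (int i = 0; i < 2; i++)
    {
      int flags = fcntl (profiler_pipe[i], F_GETFL, 0);
      fcntl (profiler_pipe[i], F_SETFL, flags | O_NONBLOCK);
    }
  return 0;
}

/* Last step of the SIGPROF handler, after the sample has been stored:
   write(2) is async-signal-safe, so this is allowed in a handler.  */
static void
notify_main_thread (void)
{
  char b = 0;
  /* A full pipe just means a wake-up is already pending; ignore it.  */
  if (write (profiler_pipe[1], &b, 1) < 0)
    ;
}

/* Main-loop side: called when profiler_pipe[0] is reported readable by
   the existing select/poll machinery.  */
static void
on_profiler_pipe_readable (void)
{
  char buf[64];
  while (read (profiler_pipe[0], buf, sizeof buf) > 0)
    ;                           /* consume all pending wake-ups */
  drain_profiler_samples ();    /* do the hash-table counting here */
}

Any equivalent wake-up (an atomic flag checked at a safe point, an
already existing event pipe, and so on) would do just as well.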