From: Pip Cet
Subject: Re: Some experience with the igc branch
Date: Fri, 27 Dec 2024 16:42:48 +0000
"Eli Zaretskii" <eliz@gnu.org> writes:
>> Date: Fri, 27 Dec 2024 14:34:22 +0000
>> From: Pip Cet <pipcet@protonmail.com>
>> Cc: gerd.moellmann@gmail.com, ofv@wanadoo.es, emacs-devel@gnu.org,
>> eller.helmut@gmail.com, acorallo@gnu.org
>>
>> "Eli Zaretskii" <eliz@gnu.org> writes:
>>
>> > OK, but still, since you wrote the code to implement it, I guess you
>> > have at least some initial design ideas? I hoped you could describe
>> > those ideas, so we could better understand what you have in mind, and
>> > provide a more useful feedback about possible problems, if any, with
>> > those ideas.
>>
>> The idea is that the main thread, after initialization, never calls into
>> MPS itself.
>
> Thanks. I will ask some questions below to understand better what you
> suggest.
Thanks!
>> Instead, we create an allocation thread, reacting to messages from the
>> main thread.
>>
>> The allocation thread never actually does anything in parallel with the
>> main thread: its purpose is to provide a separate stack, not
>> parallelization.
>
> Why is it important to have a separate stack when MPS allocates
> memory?
Because that way, signal handlers can wait for the MPS allocation to
finish. A signal handler waiting for the thread it interrupted
deadlocks. A signal handler waiting for another thread works.
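
To make the shape of this concrete, here is a rough sketch of the kind
of redirection I mean (the names are made up, this is not the actual
igc code, and the handler path would need async-signal-safe primitives
such as semaphores rather than a plain condition variable; the slow
path behind it would be the real MPS allocation-point protocol,
mps_reserve/mps_commit, which is only a placeholder here):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct alloc_request
{
  size_t size;			/* what the caller wants allocated */
  void *result;			/* filled in by the allocation thread */
  bool pending;			/* posted but not yet served */
};

static pthread_mutex_t req_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t req_cond = PTHREAD_COND_INITIALIZER;
static pthread_cond_t done_cond = PTHREAD_COND_INITIALIZER;
static struct alloc_request req;

/* Placeholder for code that actually calls into MPS; it runs only on
   the allocation thread's stack.  */
extern void *alloc_via_mps (size_t size);

/* The allocation thread: the only thread whose stack ever enters MPS
   after initialization.  */
static void *
alloc_thread_fn (void *arg)
{
  pthread_mutex_lock (&req_mutex);
  for (;;)
    {
      while (!req.pending)
	pthread_cond_wait (&req_cond, &req_mutex);
      req.result = alloc_via_mps (req.size);
      req.pending = false;
      pthread_cond_signal (&done_cond);
    }
  return arg;
}

void
alloc_thread_init (void)
{
  pthread_t tid;
  pthread_create (&tid, NULL, alloc_thread_fn, NULL);
}

/* Synchronous redirection: post a request, wait for the reply.  Only
   the main thread posts requests in this sketch.  */
void *
alloc_thread_call (size_t size)
{
  pthread_mutex_lock (&req_mutex);
  req.size = size;
  req.pending = true;
  pthread_cond_signal (&req_cond);
  while (req.pending)
    pthread_cond_wait (&done_cond, &req_mutex);
  void *result = req.result;
  pthread_mutex_unlock (&req_mutex);
  return result;
}

The only property that matters here is that alloc_thread_call blocks on
another thread's progress, which is exactly what a signal handler can
safely do, and what waiting for the interrupted thread itself cannot.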
>> All redirected MPS calls wait synchronously for the allocation thread to
>> respond.
>>
>> This includes the MPS SIGSEGV handler, which calls into MPS, so it must
>> be directed to another thread.
>
> MPS SIGSEGV handler is invoked when the Lisp machine touches objects
> which were relocated by MPS, right?
> What exactly does the allocation thread do when that happens?
Attempt to trigger another fault at the same address, which calls into
MPS, which eventually does whatever is necessary to advance to a state
where there is no longer a memory barrier. Of course we could call the
MPS signal handler directly from the allocation thread rather than
triggering another fault.  (MPS allows for the possibility that the
memory barrier is no longer in place by the time the arena lock has been
acquired; it has to allow for that anyway in multi-threaded operation.)
What precisely MPS does is an implementation detail, and may be
complicated (the instruction emulation code which causes so much trouble
for weak objects, for example).
I also think it's an implementation detail what MPS uses memory barriers
for: I don't think the current code uses superfluous memory barriers to
gather statistics, for example, but we certainly cannot assume that will
never happen.
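
Very roughly, the redirected handler could look like this (a sketch
only, not the actual igc code; alloc_thread_run stands for the
synchronous redirection sketched above, and installing the handler with
sigaction/SA_SIGINFO is omitted):

#include <signal.h>

/* Synchronously run FN (ARG) on the allocation thread.  */
extern void alloc_thread_run (void (*fn) (void *), void *arg);

/* Runs on the allocation thread's stack: touch the address so the
   fault is taken there, where the real MPS handler removes the
   barrier.  A read is enough for a read barrier; a write barrier would
   need the corresponding access.  */
static void
touch_barrier_address (void *addr)
{
  volatile char c = *(volatile char *) addr;
  (void) c;
}

static void
redirecting_segv_handler (int sig, siginfo_t *info, void *ctx)
{
  (void) sig;
  (void) ctx;
  /* MPS must cope with the barrier already being gone by the time it
     takes the arena lock, which it does for multi-threaded programs
     anyway.  When we return, the faulting instruction is retried and
     now succeeds.  */
  alloc_thread_run (touch_barrier_address, info->si_addr);
}

As mentioned above, the allocation thread could instead call the MPS
signal handler directly; the re-fault variant in the sketch needs no
knowledge of MPS internals.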
>> All this makes the previously fast allocation path very slow, and we
>> need a workaround for that:
>>
>> We ensure that we allocate at least 1MB (magic number here) at a time,
>> then split the area into MPS objects when we need to. The assumption
>> that we can split MPS allocations is significant but justifiable,
>> because MPS will be in the same state after two successful back-to-back
>> allocations and a single allocation combining the two.
>
> This seems to rely on some knowledge of MPS internals?
Yes. The assumption is that object sizes are determined by the skip
function, not fixed at allocation time. This must be spelled out
clearly in our code, and ideally it's something which the MPS
documentation should guarantee (AFAIK, it doesn't right now).
> But more worrisome: what about "sudden" needs for more than 1MB of
> memory? For example, C-w in a large buffer needs to allocate a Lisp
> string for the killed text.
That's why I said "at least". If we need more than 1MB we'll allocate
as much as we need.
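
To illustrate the splitting idea, here is a sketch of what the fast
path could look like (names like igc_alloc_slow and igc_pad are
placeholders, and the sketch glosses over keeping the unused remainder
of the batch formatted as padding at all times, which a real
implementation must do so the skip function never sees garbage):

#include <stddef.h>

enum { BATCH_MIN = 1 << 20 };	/* the 1MB magic number */

static char *batch_ptr;		/* next free byte in the current batch */
static char *batch_end;		/* end of the current batch */

/* Slow path: goes through the allocation thread to MPS.  */
extern void *igc_alloc_slow (size_t nbytes);

/* Format the given range as a padding object the skip function steps
   over; placeholder for the real pad format.  */
extern void igc_pad (void *start, size_t nbytes);

void *
igc_alloc (size_t nbytes)
{
  /* NBYTES is assumed to be aligned as the object format requires.  */
  if (batch_ptr == NULL || (size_t) (batch_end - batch_ptr) < nbytes)
    {
      /* Turn the tail of the old batch into padding, then allocate a
	 new batch of at least BATCH_MIN bytes, or more for a big
	 request (the "at least" above).  */
      if (batch_ptr != NULL && batch_ptr != batch_end)
	igc_pad (batch_ptr, batch_end - batch_ptr);
      size_t batch_size = nbytes > BATCH_MIN ? nbytes : BATCH_MIN;
      batch_ptr = igc_alloc_slow (batch_size);
      batch_end = batch_ptr + batch_size;
    }
  void *obj = batch_ptr;
  batch_ptr += nbytes;
  /* The caller initializes OBJ; the splitting is valid only if MPS
     determines object sizes from the skip function rather than from
     the size passed at allocation time.  */
  return obj;
}

The slow path then gets taken once per megabyte or so rather than once
per object, which is what recovers the fast allocation path.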
>> 1. there is no other thread which might trigger a memory barrier (the
>> allocation thread doesn't)
>
> So the allocation thread doesn't GC? If so, who does?
It does GC. It doesn't trigger memory barriers on its own.
> If the allocation thread does GC, then how can you ensure it doesn't
> trigger a barrier?
MPS never triggers memory barriers from MPS code.
>> 3. we don't allocate memory
>
> Why can't GC happen when we don't allocate memory?
>
>> 4. we don't trigger memory barriers
>
> Same question here.
I meant that all four conditions are necessary, not that any one of
them would be sufficient.
GC can happen if another thread triggers a memory barrier OR another
thread allocates OR we hit a memory barrier OR we allocate. The
question is whether it is ever useful to assume that GC can happen ONLY
in these four cases.
Pip