Re: [PATCH 07/10] tcg: implement bulletproof JIT


From: Joelle van Dyne
Subject: Re: [PATCH 07/10] tcg: implement bulletproof JIT
Date: Wed, 14 Oct 2020 15:54:27 -0700

There are about 40 instances of *code_ptr or code_ptr[i] changed to
TCG_CODE_PTR_RW(s, code_ptr). That works out to around 2 instances per
function, so going with a local variable would add ~20 extra LOC.

Another alternative is two separate functions: tcg_code_ptr_insn_rw(),
which returns tcg_insn_unit *, and tcg_code_ptr_rw(), which returns
void *. I'll go that route unless there are any objections.
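
To make that concrete, here is a rough sketch of what the two accessors
could look like, plus the relocation write from the quoted example rewritten
with the typed one. Only the two function names and the deposit32() pattern
come from this thread; the code_rw_offset field, the stand-in type
definitions, and the reloc_pc26_sketch() wrapper are assumptions for
illustration (QEMU already has TCGContext, tcg_insn_unit and deposit32()).

    #include <stddef.h>
    #include <stdint.h>

    /* Stand-ins so the sketch compiles on its own; QEMU already defines
     * these types and deposit32(). */
    typedef uint32_t tcg_insn_unit;
    typedef struct TCGContext {
        ptrdiff_t code_rw_offset;   /* hypothetical RX -> RW alias offset */
    } TCGContext;

    static inline uint32_t deposit32(uint32_t value, int start, int length,
                                     uint32_t fieldval)
    {
        uint32_t mask = (~0u >> (32 - length)) << start;
        return (value & ~mask) | ((fieldval << start) & mask);
    }

    /* Untyped accessor for byte-oriented callers (memcpy targets, etc.). */
    static inline void *tcg_code_ptr_rw(TCGContext *s, const void *code_ptr)
    {
        return (void *)((uintptr_t)code_ptr + s->code_rw_offset);
    }

    /* Typed accessor so relocation sites need no cast at all. */
    static inline tcg_insn_unit *tcg_code_ptr_insn_rw(TCGContext *s,
                                                      const tcg_insn_unit *code_ptr)
    {
        return tcg_code_ptr_rw(s, code_ptr);
    }

    /* The quoted aarch64-style relocation, using the typed accessor. */
    static void reloc_pc26_sketch(TCGContext *s, tcg_insn_unit *code_ptr,
                                  intptr_t offset)
    {
        *tcg_code_ptr_insn_rw(s, code_ptr) =
            deposit32(*tcg_code_ptr_insn_rw(s, code_ptr), 0, 26, offset);
    }

With the typed variant, the relocation sites keep the same shape as today's
*code_ptr writes, and only the handful of byte-level users need the void *
form.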

-j

On Wed, Oct 14, 2020 at 2:49 PM Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 10/14/20 1:58 PM, Joelle van Dyne wrote:
> > Much of the code that uses the macro is like the following (from
> > aarch64/tcg-include.inc.c)
> >
> >         *TCG_CODE_PTR_RW(s, code_ptr) =
> >             deposit32(*TCG_CODE_PTR_RW(s, code_ptr), 0, 26, offset);
> >
> > Before the change, it was just *code_ptr. I'm saying the alternative
> > was to write "tcg_insn_unit *rw_code_ptr = (tcg_insn_unit
> > *)TCG_CODE_PTR_RW(s, code_ptr)" everywhere, or else cast it inline.
> > Making it return tcg_insn_unit * means only three instances of casting
> > to uint8_t *, whereas using void * means casting at every call site.
>
> I should have done more than skim, I suppose.
>
> Well, without going back to look, how many of these are there, really?
> Virtually all of the writes should be via tcg_out32().
>
> If there are < 5 of the above per tcg/foo/ -- particularly if they're all
> restricted to relocations as in the above -- then I'm ok with a local variable
> assignment to "rw_ptr".  Especially since the replacement isn't exactly small,
> and you're having to split it into two separate lines anyway.
>
> I'll have a real look when you've split this into parts, because otherwise
> it's just too big.
>
>
> r~
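
For comparison, the local-variable form Richard describes above (convert once,
then write through "rw_ptr") would look roughly like this, keeping the series'
TCG_CODE_PTR_RW() macro. The macro body and the function name here are
illustrative assumptions, reusing the stand-in definitions from the sketch
earlier in this mail.

    /* Sketch of the rw_ptr alternative, assuming the series' macro maps an
     * RX pointer to its RW alias; hypothetical definition for illustration. */
    #define TCG_CODE_PTR_RW(s, ptr)  tcg_code_ptr_rw((s), (ptr))

    static void reloc_pc26_local_var(TCGContext *s, tcg_insn_unit *code_ptr,
                                     intptr_t offset)
    {
        /* Convert once per relocation, then write through the RW alias. */
        tcg_insn_unit *rw_ptr = (tcg_insn_unit *)TCG_CODE_PTR_RW(s, code_ptr);

        *rw_ptr = deposit32(*rw_ptr, 0, 26, offset);
    }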


