qemu-discuss

Re: spin loop 100x faster in user mode (CPL=3) than superuser (CPL=0)?


From: Garrick Toubassi
Subject: Re: spin loop 100x faster in user mode (CPL=3) than superuser (CPL=0)?
Date: Fri, 19 Nov 2021 12:55:43 -0800

I debugged this a bit more and am closer to an explanation.  Some crude rdtsc cycle profiling led me to the fact that notdirty_write is called a ton in the slow case, and, in aggregate, takes up most of the time.  Running qemu with "-trace memory_notdirty_write_access" shows that in the slow case it is logging this millions of times:

memory_notdirty_write_access 0x7be0 ram_addr 0x7be0 size 4
memory_notdirty_write_access 0x7be0 ram_addr 0x7be0 size 4
memory_notdirty_write_access 0x7be0 ram_addr 0x7be0 size 4
memory_notdirty_write_access 0x7be0 ram_addr 0x7be0 size 4
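
(For context, the "crude rdtsc cycle profiling" above was nothing fancy: just accumulating cycle and call counts around the region of interest, roughly like the sketch below.  The names are illustrative, not the actual instrumentation dropped into QEMU.)

  /* Sketch of crude rdtsc profiling: wrap the region of interest and
     accumulate elapsed cycles and call counts. */
  #include <stdint.h>
  #include <stdio.h>
  #include <x86intrin.h>   /* __rdtsc() with gcc/clang */

  static uint64_t nd_cycles, nd_calls;

  static void region_of_interest(void)
  {
      /* stand-in for the body of notdirty_write() */
  }

  int main(void)
  {
      for (int i = 0; i < 10000000; i++) {
          uint64_t t0 = __rdtsc();
          region_of_interest();
          nd_cycles += __rdtsc() - t0;
          nd_calls++;
      }
      printf("%llu calls, %llu cycles\n",
             (unsigned long long)nd_calls, (unsigned long long)nd_cycles);
      return 0;
  }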

The address 0x7be0 in the trace is almost certainly the stack slot of the local variable used for the spin loop (which iterates 10M times).  Note that 0x7be0 is on the same page as the boot block at 0x7c00.  I am setting the stack to 0x7c00, so it grows down from the boot block, based on a suggestion for early bootstrapping from wiki.osdev.org/MBR_(x86).
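
The spin loop itself is roughly equivalent to the sketch below (not necessarily the exact repro source): a counter in a stack local that gets written back on every iteration, which lines up with the stream of 4-byte writes in the trace.

  /* Sketch of the spin loop: a volatile 32-bit stack-local counter bumped
     10M times.  With the stack pointer starting at 0x7c00, the local ends
     up just below the boot block, around 0x7be0, i.e. on the same page as
     the code itself, and each increment is a 4-byte write to that page. */
  #include <stdint.h>

  void spin(void)
  {
      volatile uint32_t i;
      for (i = 0; i < 10000000; i++) {
          /* busy wait */
      }
  }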

So I am doing a lot of writes against a page that is also executable, which I expect is part of the problem.  The reason it's so fast in user mode has nothing to do with privilege level; it's just that in that case I happen to be using a stack that is not on a page shared with executable code.  In fact, if I change the slow version's stack so it no longer shares a page with executable code (just moving it to 0x6900), it works fine/fast.  Again, this has nothing to do with privilege level.

This is easily avoided on my end, and of course having pages that are both executed and written to is not a common practice.

But the question remains: is it expected behavior that writes to a page which is also executable are persistently slow?  Or is that a possible (fringe) bug?

Thank you!

gt


On Fri, Nov 12, 2021 at 10:16 AM Garrick Toubassi <gtoubassi@gmail.com> wrote:
Thanks Alex!

Thanks for the pointer to gdbstub.  I just single-stepped through the loop and saw it behaving "as expected", which leads me to believe the performance issue doesn't show up in the execution of the client code.  But it sounds like you are saying you see evidence of it executing at 0x9fffc?  Can you elaborate?

Here's what I did; let me know if I'm misunderstanding.  I ran:

% qemu-system-x86_64 -s -S ./kernel.img

Then

% gdb ./kernel.img
(gdb) target remote localhost:1234

Then set a breakpoint at the spin() function (see source)
(gdb) b *0x7e00
(gdb) cont

At that point I stepped through the loop several times and it behaved as expected.  I even "let it rip" with:

(gdb) while 1
 > x/i $rip
 > stepi
 > end

And it stayed well behaved, operating on the "client" code as I'd expect.

My next step would be to step through the emulator itself, but it sounds like you are seeing something that would short-circuit that labor-intensive exercise.  Pointers appreciated!

gt

On Fri, Nov 12, 2021 at 3:06 AM Alex Bennée <alex.bennee@linaro.org> wrote:

Alex Bennée <alex.bennee@linaro.org> writes:

> Garrick Toubassi <gtoubassi@gmail.com> writes:
>
>> I went ahead and created a short repro case which can be found at https://github.com/gtoubassi/qemu-spinrepro.  Would appreciate
>> thoughts from anyone or guidance on how to debug.
>
> Well something weird is going on that is chewing through the code
> generation logic. If you run with:
>
>  ./qemu-system-x86_64 -serial mon:stdio -kernel ~/Downloads/kernel.img
>
> And then C-a c to bring up the monitor, where you can type "info jit" and see:
>
>   (qemu) info jit
>   Translation buffer state:
>   gen code size       1063758051/1073736704
>   TB count            1
>   TB avg target size  1 max=1 bytes
>   TB avg host size    64 bytes (expansion ratio: 64.0)
>   cross page TB count 0 (0%)
>   direct jump count   0 (0%) (2 jumps=0 0%)
>   TB hash buckets     1/8192 (0.01% head buckets used)
>   TB hash occupancy   0.00% avg chain occ. Histogram: [0.0,2.5)%|█        ▁|[22.5,25.0]%
>   TB hash avg chain   1.000 buckets. Histogram: 1|█|1
>
<snip>

Hmm ok that's just a result of the code disappearing down a hole:

  0x0009fffc:  00 00                    addb     %al, (%bx, %si)
  0x0009fffe:  00 00                    addb     %al, (%bx, %si)
  0x000a0000:  ff                       .byte    0xff
  0x000a0001:  ff                       .byte    0xff

and as that code is being executed out of a place without a phys_pc, we
don't cache the TB (by design). Usually this isn't a massive problem, but
obviously something has gone wrong in the code for it to be executing these
junk instructions.

Have you traced the execution of your code via gdbstub?

--
Alex Bennée
