qemu-commits
From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] 886cc6: accel/tcg: fix race in cpu_exec_step_atomic (bug 1...
Date: Mon, 02 Mar 2020 05:00:14 -0800

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 886cc68943ebe8cf7e5f970be33459f95068a441
      https://github.com/qemu/qemu/commit/886cc68943ebe8cf7e5f970be33459f95068a441
  Author: Alex Bennée <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M accel/tcg/cpu-exec.c

  Log Message:
  -----------
  accel/tcg: fix race in cpu_exec_step_atomic (bug 1863025)

The bug describes a race whereby cpu_exec_step_atomic can acquire a TB
which is invalidated by a tb_flush before we execute it. This doesn't
affect the other cpu_exec modes, as a tb_flush by its nature can only
occur on a quiescent system. The race was described as:

  B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
  B3. tcg_tb_alloc obtains a new TB

      C3. TB obtained with tb_lookup__cpu_state or tb_gen_code
          (same TB as B2)

          A3. start_exclusive critical section entered
          A4. do_tb_flush is called, TB memory freed/re-allocated
          A5. end_exclusive exits critical section

  B2. tcg_cpu_exec => cpu_exec => tb_find => tb_gen_code
  B3. tcg_tb_alloc reallocates TB from B2

      C4. start_exclusive critical section entered
      C5. cpu_tb_exec executes the TB code that was free in A4

The simplest fix is to widen the exclusive period to include the TB
lookup. As a result we can drop the complication of checking we are in
the exclusive region before we end it.
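
As a rough sketch (not the actual diff), the reordering described above
looks roughly like this in accel/tcg/cpu-exec.c; the sigsetjmp error
path, the cflags plumbing and the assertions are omitted or simplified:

  /* Simplified sketch of cpu_exec_step_atomic() after the fix. */
  void cpu_exec_step_atomic(CPUState *cpu)
  {
      TranslationBlock *tb;
      target_ulong cs_base, pc;
      uint32_t flags;
      uint32_t cflags = 1;                      /* translate a single insn */
      uint32_t cf_mask = cflags & CF_HASH_MASK;

      /*
       * Widen the exclusive region: enter it *before* the TB lookup so a
       * concurrent tb_flush() cannot free or reallocate the TB between
       * lookup/generation and execution.
       */
      start_exclusive();

      tb = tb_lookup__cpu_state(cpu, &pc, &cs_base, &flags, cf_mask);
      if (tb == NULL) {
          mmap_lock();
          tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
          mmap_unlock();
      }

      /* Execute the TB while we still hold the exclusive section. */
      cpu_tb_exec(cpu, tb);

      /*
       * We now always enter the exclusive region above, so end_exclusive()
       * can be called unconditionally; the old "are we in the exclusive
       * region?" check (cpu_in_exclusive_context()) goes away.
       */
      end_exclusive();
  }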

Cc: Yifan <address@hidden>
Buglink: https://bugs.launchpad.net/qemu/+bug/1863025
Reviewed-by: Paolo Bonzini <address@hidden>
Reviewed-by: Richard Henderson <address@hidden>
Signed-off-by: Alex Bennée <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: b59ea3640cc03f1f609fc5155605be71f43bcd2e
      https://github.com/qemu/qemu/commit/b59ea3640cc03f1f609fc5155605be71f43bcd2e
  Author: Zenghui Yu <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M include/qemu/compiler.h

  Log Message:
  -----------
  compiler.h: Don't use compile-time assert when __NO_INLINE__ is defined

Our robot reported the following compile-time error while building QEMU
with -fno-inline in its cflags:

In function 'load_memop',
    inlined from 'load_helper' at /qemu/accel/tcg/cputlb.c:1578:20,
    inlined from 'full_ldub_mmu' at /qemu/accel/tcg/cputlb.c:1624:12:
/qemu/accel/tcg/cputlb.c:1502:9: error: call to 'qemu_build_not_reached' 
declared with attribute error: code path is reachable
         qemu_build_not_reached();
         ^~~~~~~~~~~~~~~~~~~~~~~~
    [...]

It looks like a false positive, because only (MO_UB ^ MO_BSWAP) can
hit the default case in load_memop(), while need_swap (size > 1) has
already ensured that MO_UB is not involved.

What happens is that with -fno-inline the compiler can no longer
evaluate memop_size(op) at compile time, so qemu_build_not_reached()
is wrongly triggered by the (MO_UB ^ MO_BSWAP) case.  Avoid the
compile-time assert when no functions will be inlined into their
callers.
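
The resulting guard in include/qemu/compiler.h is small; reconstructed
from the description above (the surrounding comments in the real header
differ), it looks something like:

  /*
   * qemu_build_not_reached() relies on the optimizer proving the call
   * dead and eliminating it.  With -fno-inline (__NO_INLINE__ defined),
   * helpers such as load_memop() are no longer folded into their callers,
   * so the compiler cannot prove unreachability; fall back to a run-time
   * assertion instead of the compile-time error attribute.
   */
  #if defined(__OPTIMIZE__) && !defined(__NO_INLINE__)
  extern void QEMU_NORETURN QEMU_ERROR("code path is reachable")
      qemu_build_not_reached(void);
  #else
  #define qemu_build_not_reached()  g_assert_not_reached()
  #endif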

Reported-by: Euler Robot <address@hidden>
Suggested-by: Richard Henderson <address@hidden>
Signed-off-by: Zenghui Yu <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: d6fa50f536b2e4422f3065b6a11c4f8e42676f47
      https://github.com/qemu/qemu/commit/d6fa50f536b2e4422f3065b6a11c4f8e42676f47
  Author: Richard Henderson <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M tcg/arm/tcg-target.inc.c

  Log Message:
  -----------
  tcg/arm: Split out tcg_out_epilogue

We will shortly use this function from tcg_out_op as well.

Reviewed-by: Philippe Mathieu-Daudé <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: fc1bfdfd0cc49cbe084b1da11a6fadf80b9c95c8
      https://github.com/qemu/qemu/commit/fc1bfdfd0cc49cbe084b1da11a6fadf80b9c95c8
  Author: Richard Henderson <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M tcg/arm/tcg-target.inc.c

  Log Message:
  -----------
  tcg/arm: Expand epilogue inline

It is, after all, just two instructions.

Profiling on a cortex-a15, using -d nochain to increase the number
of exit_tb that are executed, shows a minor improvement of 0.5%.

Signed-off-by: Richard Henderson <address@hidden>


  Commit: a2fa63a8f57b6294009f3db198fa841139051c56
      https://github.com/qemu/qemu/commit/a2fa63a8f57b6294009f3db198fa841139051c56
  Author: Alex Bennée <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M accel/tcg/translate-all.c

  Log Message:
  -----------
  accel/tcg: use units.h for defining code gen buffer sizes

It's easier to read.
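
For illustration only (the real constants live in
accel/tcg/translate-all.c), the change boils down to swapping raw byte
counts for the helpers in "qemu/units.h":

  #include "qemu/units.h"        /* provides KiB, MiB, GiB, ... */

  /* Before: the size is an opaque pile of digits, e.g.
   *     #define DEFAULT_CODE_GEN_BUFFER_SIZE  (32u * 1024 * 1024)
   * After: the same value, readable at a glance. */
  #define DEFAULT_CODE_GEN_BUFFER_SIZE  (32 * MiB)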

Signed-off-by: Alex Bennée <address@hidden>
Reviewed-by: Niek Linnenbank <address@hidden>
Reviewed-by: Richard Henderson <address@hidden>
Reviewed-by: Philippe Mathieu-Daudé <address@hidden>
Tested-by: Philippe Mathieu-Daudé <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: 47a2def4533a2807e48954abd50b32ecb1aaf29a
      https://github.com/qemu/qemu/commit/47a2def4533a2807e48954abd50b32ecb1aaf29a
  Author: Alex Bennée <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M accel/tcg/translate-all.c

  Log Message:
  -----------
  accel/tcg: remove link between guest ram and TCG cache size

Basing the TB cache size on ram_size was always a bit of a heuristic,
and it was broken by a1b18df9a4, which caused ram_size not to be fully
realised at the time we initialise the TCG translation cache.

The current DEFAULT_CODE_GEN_BUFFER_SIZE may still be a little small
but follow-up patches will address that.
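
Very roughly, the change points size_code_gen_buffer() at the static
default instead of scaling with guest RAM; the "before" expression in
the comment below is indicative of the removed heuristic rather than an
exact quote of the old code:

  static size_t size_code_gen_buffer(size_t tb_size)
  {
      if (tb_size == 0) {
          /*
           * Before: derive the TB cache size from the (possibly not yet
           * fully realised) guest ram_size, e.g. something like
           *     tb_size = ram_size / 4;
           * After: just take the compile-time default.
           */
          tb_size = DEFAULT_CODE_GEN_BUFFER_SIZE;
      }
      if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
          tb_size = MIN_CODE_GEN_BUFFER_SIZE;
      }
      if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
          tb_size = MAX_CODE_GEN_BUFFER_SIZE;
      }
      return tb_size;
  }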

Fixes: a1b18df9a4
Cc: Igor Mammedov <address@hidden>
Signed-off-by: Alex Bennée <address@hidden>
Reviewed-by: Richard Henderson <address@hidden>
Tested-by: Philippe Mathieu-Daudé <address@hidden>
Reviewed-by: Niek Linnenbank <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: 21f2f447ad28d099d3ac5adc8ffe8eeaa56604f4
      https://github.com/qemu/qemu/commit/21f2f447ad28d099d3ac5adc8ffe8eeaa56604f4
  Author: Alex Bennée <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M accel/tcg/translate-all.c

  Log Message:
  -----------
  accel/tcg: only USE_STATIC_CODE_GEN_BUFFER on 32 bit hosts

There is no particular reason to use a static codegen buffer on 64 bit
hosts as we have address space to burn. Allow the common CONFIG_USER
case to use the mmap'ed buffers like SoftMMU.
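
The guard change itself is essentially a one-liner in
accel/tcg/translate-all.c; reconstructed from the description (the
comments here are editorial, not the file's own):

  /* Before: every CONFIG_USER build used the static code gen buffer:
   *     #ifdef CONFIG_USER_ONLY
   * After: only 32 bit hosts, where address space is scarce, keep it;
   * 64 bit user-mode builds fall through to the mmap'ed buffer path,
   * just like SoftMMU. */
  #if defined(CONFIG_USER_ONLY) && TCG_TARGET_REG_BITS == 32
  #define USE_STATIC_CODE_GEN_BUFFER
  #endif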

Signed-off-by: Alex Bennée <address@hidden>
Reviewed-by: Richard Henderson <address@hidden>
Reviewed-by: Philippe Mathieu-Daudé <address@hidden>
Tested-by: Philippe Mathieu-Daudé <address@hidden>
Reviewed-by: Niek Linnenbank <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: 600e17b261555c56a048781b8dd5ba3985650013
      https://github.com/qemu/qemu/commit/600e17b261555c56a048781b8dd5ba3985650013
  Author: Alex Bennée <address@hidden>
  Date:   2020-02-28 (Fri, 28 Feb 2020)

  Changed paths:
    M accel/tcg/translate-all.c

  Log Message:
  -----------
  accel/tcg: increase default code gen buffer size for 64 bit

While 32 MiB is certainly usable, a full system boot ends up flushing
the codegen buffer nearly 100 times. Increase the default on 64 bit
hosts to take advantage of all that spare memory. After this change I
can boot my test system without any TB flushes.

As we usually run many more CONFIG_USER binaries at a time in typical
usage, we aren't quite as profligate with user-mode code generation. We
also bring the static code gen defines to the same place, to keep all
the reasoning in the comments together.
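
The resulting defaults take roughly the following shape; the split
between 32 bit, user-mode and system emulation follows the reasoning
above, but the concrete sizes here are placeholders for illustration,
not the committed values:

  #if TCG_TARGET_REG_BITS == 32
  /* 32 bit hosts: keep the existing modest default. */
  #define DEFAULT_CODE_GEN_BUFFER_SIZE  (32 * MiB)
  #elif defined(CONFIG_USER_ONLY)
  /* Many user-mode binaries run concurrently: stay comparatively small.
   * (Placeholder size.) */
  #define DEFAULT_CODE_GEN_BUFFER_SIZE  (128 * MiB)
  #else
  /* System emulation usually runs one or two guests per host: spend the
   * spare memory to avoid TB flushes during boot.  (Placeholder size.) */
  #define DEFAULT_CODE_GEN_BUFFER_SIZE  (1 * GiB)
  #endif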

Signed-off-by: Alex Bennée <address@hidden>
Tested-by: Niek Linnenbank <address@hidden>
Reviewed-by: Niek Linnenbank <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>


  Commit: 9f1750ed68911bef069b5d9ba5cef8150972bcf1
      https://github.com/qemu/qemu/commit/9f1750ed68911bef069b5d9ba5cef8150972bcf1
  Author: Peter Maydell <address@hidden>
  Date:   2020-03-02 (Mon, 02 Mar 2020)

  Changed paths:
    M accel/tcg/cpu-exec.c
    M accel/tcg/translate-all.c
    M include/qemu/compiler.h
    M tcg/arm/tcg-target.inc.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20200228' into staging

Fix race in cpu_exec_step_atomic.
Work around compile failure with -fno-inline.
Expand tcg/arm epilogue inline.
Adjustments to the default code gen buffer size.

# gpg: Signature made Sat 29 Feb 2020 02:13:43 GMT
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "address@hidden"
# gpg: Good signature from "Richard Henderson <address@hidden>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20200228:
  accel/tcg: increase default code gen buffer size for 64 bit
  accel/tcg: only USE_STATIC_CODE_GEN_BUFFER on 32 bit hosts
  accel/tcg: remove link between guest ram and TCG cache size
  accel/tcg: use units.h for defining code gen buffer sizes
  tcg/arm: Expand epilogue inline
  tcg/arm: Split out tcg_out_epilogue
  compiler.h: Don't use compile-time assert when __NO_INLINE__ is defined
  accel/tcg: fix race in cpu_exec_step_atomic (bug 1863025)

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/a4c7ed8b89e8...9f1750ed6891


