qemu-devel

From: Richard Henderson
Subject: Re: [PATCH 3/3] tcg: Avoid double lock if page tables happen to be in mmio memory.
Date: Thu, 15 Feb 2024 09:30:27 -1000
User-agent: Mozilla Thunderbird

On 2/15/24 05:01, Jonathan Cameron wrote:
> On i386, after fixing the page walking code to work with pages in
> MMIO memory (specifically CXL emulated interleaved memory),
> a crash was seen in an interrupt handling path.
>
> Useful part of bt
>
> Peter identified this as being due to the BQL already being
> held when the page table walker encounters MMIO memory and attempts
> to take the lock again.  There are other examples of similar paths
> in TCG, so this follows the approach taken in those: simply check
> whether the lock is already held and, if it is, don't take it again.
>
> Suggested-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
>  accel/tcg/cputlb.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)

> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 047cd2cc0a..3b8d178707 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -2019,6 +2019,7 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
>                                 int mmu_idx, MMUAccessType type, uintptr_t ra)
>  {
>      MemoryRegionSection *section;
> +    bool locked = bql_locked();
>      MemoryRegion *mr;
>      hwaddr mr_offset;
>      MemTxAttrs attrs;
> @@ -2030,10 +2031,14 @@ static uint64_t do_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full,
>      section = io_prepare(&mr_offset, cpu, full->xlat_section, attrs, addr, ra);
>      mr = section->mr;
> -    bql_lock();
> +    if (!locked) {
> +        bql_lock();
> +    }
>      ret = int_ld_mmio_beN(cpu, full, ret_be, addr, size, mmu_idx,
>                            type, ra, mr, mr_offset);
> -    bql_unlock();
> +    if (!locked) {
> +        bql_unlock();
> +    }
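
For reference, the idiom the patch uses is easy to reproduce outside QEMU. A minimal standalone sketch, using plain POSIX plus a per-thread ownership flag in the same spirit as bql_locked() (my_lock, my_lock_held and the wrapper functions are illustrative stand-ins, not QEMU API):

/*
 * Standalone sketch of the conditional-lock idiom above.  In QEMU,
 * bql_locked() reads a per-thread flag maintained by bql_lock() and
 * bql_unlock(); everything named my_* here is an illustrative
 * stand-in, not QEMU code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t my_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread bool my_lock_held;      /* per-thread ownership flag */

static void my_lock_acquire(void)
{
    pthread_mutex_lock(&my_lock);
    my_lock_held = true;
}

static void my_lock_release(void)
{
    my_lock_held = false;
    pthread_mutex_unlock(&my_lock);
}

/* Safe to call whether or not the caller already holds the lock. */
static void do_mmio_access(void)
{
    bool locked = my_lock_held;         /* only this thread can own it, so no race */

    if (!locked) {
        my_lock_acquire();
    }
    printf("MMIO access under the lock\n");
    if (!locked) {
        my_lock_release();
    }
}

int main(void)
{
    do_mmio_access();                   /* not held: takes and drops the lock */

    my_lock_acquire();
    do_mmio_access();                   /* already held: no double lock */
    my_lock_release();
    return 0;
}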

On top of the other comments, I'm never keen on this kind of test/lock/test/unlock pattern. When this kind of thing is encountered, it means we should have been using a recursive lock in the first place.
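
For comparison, a minimal sketch of the recursive-lock alternative, assuming a PTHREAD_MUTEX_RECURSIVE mutex stands in for the BQL (QEMU's BQL is a non-recursive QemuMutex today, so this is only an illustration, not a proposal of real QEMU code). With a recursive lock, callees lock unconditionally and nested acquisition by the same thread is legal:

/*
 * Sketch of the recursive-lock alternative: the same thread may
 * re-acquire the mutex, so the conditional dance above disappears.
 * Plain POSIX for illustration only.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rlock;

static void rlock_init(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&rlock, &attr);
    pthread_mutexattr_destroy(&attr);
}

static void do_mmio_access(void)
{
    pthread_mutex_lock(&rlock);         /* legal even if the caller holds it */
    printf("MMIO access under the lock\n");
    pthread_mutex_unlock(&rlock);       /* recursion count drops by one */
}

int main(void)
{
    rlock_init();

    pthread_mutex_lock(&rlock);         /* outer acquisition */
    do_mmio_access();                   /* nested acquisition, count = 2 */
    pthread_mutex_unlock(&rlock);
    return 0;
}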


r~


