[Stable-8.1.1 12/34] softmmu: Use async_run_on_cpu in tcg_commit
From: Michael Tokarev
Subject: [Stable-8.1.1 12/34] softmmu: Use async_run_on_cpu in tcg_commit
Date: Sat, 9 Sep 2023 13:27:05 +0300
From: Richard Henderson <richard.henderson@linaro.org>
After system startup, run the update to memory_dispatch
and the tlb_flush on the cpu. This eliminates a race,
wherein a running cpu sees the memory_dispatch change
but has not yet seen the tlb_flush.
Since the update now happens on the cpu, we need not use
qatomic_rcu_read to protect the read of memory_dispatch.
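[Editorial note: the sketch below is not part of the patch, and the helper
name is hypothetical. It illustrates the pattern the patch adopts: the
dispatch-pointer update and the TLB flush are queued as a single piece of
work on the target vCPU's own thread, so that vCPU can never observe the
new memory_dispatch without the matching tlb_flush. And because the store
is now performed by the same vCPU that later reads the pointer, plain
accesses suffice and the qatomic_rcu_read()/qatomic_rcu_set() pair can be
dropped.]

    /* Hypothetical sketch of the async_run_on_cpu pattern; the real
     * work function added below is tcg_commit_cpu in softmmu/physmem.c.
     */
    static void update_dispatch_then_flush(CPUState *cpu, run_on_cpu_data data)
    {
        CPUAddressSpace *cpuas = data.host_ptr;

        /* Runs on cpu's own thread while it is outside any TB, so
         * nothing on this vCPU can see a torn combination of the new
         * dispatch pointer and the stale TLB contents.
         */
        cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
        tlb_flush(cpu);
    }

    /* The caller queues the work and returns; the vCPU leaves its
     * current TB and runs the function at a safe point:
     */
    async_run_on_cpu(cpu, update_dispatch_then_flush, RUN_ON_CPU_HOST_PTR(cpuas));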
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1826
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1834
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1846
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 0d58c660689f6da1e3feff8a997014003d928b3b)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
diff --git a/accel/tcg/cpu-exec-common.c b/accel/tcg/cpu-exec-common.c
index 9a5fabf625..7e35d7f4b5 100644
--- a/accel/tcg/cpu-exec-common.c
+++ b/accel/tcg/cpu-exec-common.c
@@ -33,36 +33,6 @@ void cpu_loop_exit_noexc(CPUState *cpu)
     cpu_loop_exit(cpu);
 }
 
-#if defined(CONFIG_SOFTMMU)
-void cpu_reloading_memory_map(void)
-{
-    if (qemu_in_vcpu_thread() && current_cpu->running) {
-        /* The guest can in theory prolong the RCU critical section as long
-         * as it feels like. The major problem with this is that because it
-         * can do multiple reconfigurations of the memory map within the
-         * critical section, we could potentially accumulate an unbounded
-         * collection of memory data structures awaiting reclamation.
-         *
-         * Because the only thing we're currently protecting with RCU is the
-         * memory data structures, it's sufficient to break the critical section
-         * in this callback, which we know will get called every time the
-         * memory map is rearranged.
-         *
-         * (If we add anything else in the system that uses RCU to protect
-         * its data structures, we will need to implement some other mechanism
-         * to force TCG CPUs to exit the critical section, at which point this
-         * part of this callback might become unnecessary.)
-         *
-         * This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
-         * only protects cpu->as->dispatch. Since we know our caller is about
-         * to reload it, it's safe to split the critical section.
-         */
-        rcu_read_unlock();
-        rcu_read_lock();
-    }
-}
-#endif
-
 void cpu_loop_exit(CPUState *cpu)
 {
     /* Undo the setting in cpu_tb_exec. */
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 87dc9a752c..41788c0bdd 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -133,7 +133,6 @@ static inline void cpu_physical_memory_write(hwaddr addr,
 {
     cpu_physical_memory_rw(addr, (void *)buf, len, true);
 }
-void cpu_reloading_memory_map(void);
 void *cpu_physical_memory_map(hwaddr addr,
                               hwaddr *plen,
                               bool is_write);
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 7597dc1c39..18277ddd67 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -680,8 +680,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
     IOMMUTLBEntry iotlb;
     int iommu_idx;
     hwaddr addr = orig_addr;
-    AddressSpaceDispatch *d =
-        qatomic_rcu_read(&cpu->cpu_ases[asidx].memory_dispatch);
+    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
 
     for (;;) {
         section = address_space_translate_internal(d, addr, &addr, plen, false);
@@ -2412,7 +2411,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
-    AddressSpaceDispatch *d = qatomic_rcu_read(&cpuas->memory_dispatch);
+    AddressSpaceDispatch *d = cpuas->memory_dispatch;
     int section_index = index & ~TARGET_PAGE_MASK;
     MemoryRegionSection *ret;
 
@@ -2487,23 +2486,42 @@ static void tcg_log_global_after_sync(MemoryListener *listener)
     }
 }
 
+static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
+{
+    CPUAddressSpace *cpuas = data.host_ptr;
+
+    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
+    tlb_flush(cpu);
+}
+
 static void tcg_commit(MemoryListener *listener)
 {
     CPUAddressSpace *cpuas;
-    AddressSpaceDispatch *d;
+    CPUState *cpu;
 
     assert(tcg_enabled());
     /* since each CPU stores ram addresses in its TLB cache, we must
        reset the modified entries */
     cpuas = container_of(listener, CPUAddressSpace, tcg_as_listener);
-    cpu_reloading_memory_map();
-    /* The CPU and TLB are protected by the iothread lock.
-     * We reload the dispatch pointer now because cpu_reloading_memory_map()
-     * may have split the RCU critical section.
+    cpu = cpuas->cpu;
+
+    /*
+     * Defer changes to as->memory_dispatch until the cpu is quiescent.
+     * Otherwise we race between (1) other cpu threads and (2) ongoing
+     * i/o for the current cpu thread, with data cached by mmu_lookup().
+     *
+     * In addition, queueing the work function will kick the cpu back to
+     * the main loop, which will end the RCU critical section and reclaim
+     * the memory data structures.
+     *
+     * That said, the listener is also called during realize, before
+     * all of the tcg machinery for run-on is initialized: thus halt_cond.
      */
-    d = address_space_to_dispatch(cpuas->as);
-    qatomic_rcu_set(&cpuas->memory_dispatch, d);
-    tlb_flush(cpuas->cpu);
+    if (cpu->halt_cond) {
+        async_run_on_cpu(cpu, tcg_commit_cpu, RUN_ON_CPU_HOST_PTR(cpuas));
+    } else {
+        tcg_commit_cpu(cpu, RUN_ON_CPU_HOST_PTR(cpuas));
+    }
 }
 
 static void memory_map_init(void)
--
2.39.2
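[Editorial note on the halt_cond test in tcg_commit above, under the
assumption that the listener is first invoked via cpu_address_space_init()
during realize: at that point no vCPU thread has been created yet, so the
synchronous fallback cannot race with execution; once the threads exist,
cpu->halt_cond is non-NULL and the work is always deferred.]

    /* Hypothetical summary of the two call sites, simplified:
     *
     * 1. Realize time: the listener is registered and the first commit
     *    fires before any vCPU thread starts -> cpu->halt_cond == NULL
     *    -> direct, race-free call to tcg_commit_cpu().
     *
     * 2. Runtime memory-map change: vCPU threads are live ->
     *    cpu->halt_cond != NULL -> async_run_on_cpu() defers the update
     *    until the target vCPU is quiescent.
     */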