[PATCH v5 32/54] tcg/loongarch64: Support softmmu unaligned accesses
From: Richard Henderson
Subject: [PATCH v5 32/54] tcg/loongarch64: Support softmmu unaligned accesses
Date: Mon, 15 May 2023 07:32:51 -0700
Test the final byte of an unaligned access.
Use BSTRINS.D to clear the range of bits, rather than AND.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/loongarch64/tcg-target.c.inc | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 33d8e67513..7d0165349d 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -848,7 +848,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
int fast_ofs = TLB_MASK_TABLE_OFS(mem_index);
int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask);
int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table);
- tcg_target_long compare_mask;
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
@@ -872,14 +871,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2,
offsetof(CPUTLBEntry, addend));
- /* We don't support unaligned accesses. */
+ /*
+ * For aligned accesses, we check the first byte and include the alignment
+ * bits within the address. For unaligned access, we check that we don't
+ * cross pages using the address of the last byte of the access.
+ */
if (a_bits < s_bits) {
- a_bits = s_bits;
+ unsigned a_mask = (1u << a_bits) - 1;
+ unsigned s_mask = (1u << s_bits) - 1;
+ tcg_out_addi(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg, s_mask - a_mask);
+ } else {
+ tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg);
}
- /* Clear the non-page, non-alignment bits from the address. */
- compare_mask = (tcg_target_long)TARGET_PAGE_MASK | ((1 << a_bits) - 1);
- tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask);
- tcg_out_opc_and(s, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg);
+ tcg_out_opc_bstrins_d(s, TCG_REG_TMP1, TCG_REG_ZERO,
+ a_bits, TARGET_PAGE_BITS - 1);
/* Compare masked address with the TLB entry. */
ldst->label_ptr[0] = s->code_ptr;
--
2.34.1