From: Peter Maydell
Subject: [PATCH v2 04/23] target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
Date: Sun, 11 Jun 2023 17:00:13 +0100
In the recent refactoring we missed a few places which should be
calling finalize_memop_asimd() for ASIMD loads and stores but
instead are just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct()
cases, this is not a behaviour change because there the size
is never MO_128 and the two finalize functions do the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/tcg/translate-a64.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index d271449431a..1108f8287b8 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -3309,6 +3309,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3321,6 +3322,7 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3333,7 +3335,6 @@ static void disas_ldst_reg_roffset(DisasContext *s, uint32_t insn,
tcg_gen_add_i64(dirty_addr, dirty_addr, tcg_rm);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, true, memop);
if (is_vector) {
@@ -3398,6 +3399,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
if (!fp_access_check(s)) {
return;
}
+ memop = finalize_memop_asimd(s, size);
} else {
if (size == 3 && opc == 2) {
/* PRFM - prefetch */
@@ -3410,6 +3412,7 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
is_store = (opc == 0);
is_signed = !is_store && extract32(opc, 1, 1);
is_extended = (size < 3) && extract32(opc, 0, 1);
+ memop = finalize_memop(s, size + is_signed * MO_SIGN);
}
if (rn == 31) {
@@ -3419,7 +3422,6 @@ static void disas_ldst_reg_unsigned_imm(DisasContext *s, uint32_t insn,
offset = imm12 << size;
tcg_gen_addi_i64(dirty_addr, dirty_addr, offset);
- memop = finalize_memop(s, size + is_signed * MO_SIGN);
clean_addr = gen_mte_check1(s, dirty_addr, is_store, rn != 31, memop);
if (is_vector) {
@@ -3861,7 +3863,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
* promote consecutive little-endian elements below.
*/
clean_addr = gen_mte_checkN(s, tcg_rn, is_store, is_postidx || rn != 31,
- total, finalize_memop(s, size));
+ total, finalize_memop_asimd(s, size));
/*
* Consecutive little-endian elements from a single register
@@ -4019,7 +4021,7 @@ static void disas_ldst_single_struct(DisasContext *s, uint32_t insn)
total = selem << scale;
tcg_rn = cpu_reg_sp(s, rn);
- mop = finalize_memop(s, scale);
+ mop = finalize_memop_asimd(s, scale);
clean_addr = gen_mte_checkN(s, tcg_rn, !is_load, is_postidx || rn != 31,
total, mop);
--
2.34.1