qemu-ppc

Re: [PATCH] target/ppc: Fix load endianness for lxvwsx/lxvdsx


From: Richard Henderson
Subject: Re: [PATCH] target/ppc: Fix load endianness for lxvwsx/lxvdsx
Date: Tue, 18 May 2021 05:42:03 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.1

On 5/18/21 4:23 AM, Giuseppe Musacchio wrote:
> TARGET_WORDS_BIGENDIAN may not match the machine endianness if that's a
> runtime-configurable parameter.
>
> Fixes: bcb0b7b1a1c05707304f80ca6f523d557816f85c
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/212
>
> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
> ---
>  target/ppc/translate/vsx-impl.c.inc | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/target/ppc/translate/vsx-impl.c.inc b/target/ppc/translate/vsx-impl.c.inc
> index b817d31260..3e840e756f 100644
> --- a/target/ppc/translate/vsx-impl.c.inc
> +++ b/target/ppc/translate/vsx-impl.c.inc
> @@ -139,7 +139,11 @@ static void gen_lxvwsx(DisasContext *ctx)
>      gen_addr_reg_index(ctx, EA);
>      data = tcg_temp_new_i32();
> -    tcg_gen_qemu_ld_i32(data, EA, ctx->mem_idx, MO_TEUL);
> +    if (ctx->le_mode) {
> +        tcg_gen_qemu_ld_i32(data, EA, ctx->mem_idx, MO_LEUL);
> +    } else {
> +        tcg_gen_qemu_ld_i32(data, EA, ctx->mem_idx, MO_BEUL);
> +    }

Reducing this replication is why we have default_tcg_memop_mask.

This should be ctx->default_tcg_memop_mask | MO_UL.

>      tcg_gen_gvec_dup_i32(MO_UL, vsr_full_offset(xT(ctx->opcode)), 16, 16, data);
>      tcg_temp_free(EA);
> @@ -162,7 +166,11 @@ static void gen_lxvdsx(DisasContext *ctx)
>      gen_addr_reg_index(ctx, EA);
>      data = tcg_temp_new_i64();
> -    tcg_gen_qemu_ld_i64(data, EA, ctx->mem_idx, MO_TEQ);
> +    if (ctx->le_mode) {
> +        tcg_gen_qemu_ld_i64(data, EA, ctx->mem_idx, MO_LEQ);
> +    } else {
> +        tcg_gen_qemu_ld_i64(data, EA, ctx->mem_idx, MO_BEQ);
> +    }

Similarly ... | MO_Q.


r~


