Re: [PULL v2 00/39] tcg patch queue
From: Peter Maydell
Subject: Re: [PULL v2 00/39] tcg patch queue
Date: Tue, 6 Feb 2024 21:24:40 +0000
On Tue, 6 Feb 2024 at 03:22, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> v2: Fix rebase error in patch 38 (tcg/s390x: Support TCG_COND_TST{EQ,NE}).
>
>
> r~
>
>
> The following changes since commit 39a6e4f87e7b75a45b08d6dc8b8b7c2954c87440:
>
>   Merge tag 'pull-qapi-2024-02-03' of https://repo.or.cz/qemu/armbru into
>   staging (2024-02-03 13:31:58 +0000)
>
> are available in the Git repository at:
>
> https://gitlab.com/rth7680/qemu.git tags/pull-tcg-20240205-2
>
> for you to fetch changes up to 23c5692abc3917151dee36c00d751cf5bc46ef19:
>
> tcg/tci: Support TCG_COND_TST{EQ,NE} (2024-02-05 22:45:41 +0000)
>
> ----------------------------------------------------------------
> tcg: Introduce TCG_COND_TST{EQ,NE}
> target/alpha: Use TCG_COND_TST{EQ,NE}
> target/m68k: Use TCG_COND_TST{EQ,NE} in gen_fcc_cond
> target/sparc: Use TCG_COND_TSTEQ in gen_op_mulscc
> target/s390x: Use TCG_COND_TSTNE for CC_OP_{TM,ICM}
> target/s390x: Improve general case of disas_jcc
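(For context, and not part of Richard's pull request text: the new TSTEQ/TSTNE
conditions are "test under mask" comparisons. A minimal sketch of the intended
semantics in plain C, not QEMU source, with made-up helper names:

    #include <stdbool.h>
    #include <stdint.h>

    /* TCG_COND_TSTEQ: true when none of the mask bits are set in the value. */
    static bool tst_eq(uint64_t value, uint64_t mask)
    {
        return (value & mask) == 0;
    }

    /* TCG_COND_TSTNE: true when at least one mask bit is set in the value. */
    static bool tst_ne(uint64_t value, uint64_t mask)
    {
        return (value & mask) != 0;
    }

The point of having these as first-class conditions is that a masked test can
become a single comparison rather than an AND followed by a compare against
zero.)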
This really doesn't want to pass the ubuntu-20.04-s390x-all job:
https://gitlab.com/qemu-project/qemu/-/jobs/6109442678
https://gitlab.com/qemu-project/qemu/-/jobs/6108249863
https://gitlab.com/qemu-project/qemu/-/jobs/6106928534
https://gitlab.com/qemu-project/qemu/-/jobs/6105718495
Now, this has definitely been a flaky job recently, so maybe it's
not this pullreq's fault.
This is a passing job from the last successful merge:
https://gitlab.com/qemu-project/qemu/-/jobs/6089342252
That took 24 minutes to run, and all the failed jobs above
took 70 minutes plus.
TBH I think there is something weird with the runner. Looking
at the timestamps in the log, it seems like the passing job
completed its compile step in about 14 minutes, whereas one
of the failing jobs took about 39 minutes. So the entire
run of the job slowed down by more than 2.5x, which is enough
to put it into the range where either the whole job or
individual tests time out.
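(Quick arithmetic on the numbers above, just as a sanity check: the compile
step went from 14 to 39 minutes, roughly 2.8x, and the whole job from 24 to
70-plus minutes, roughly 2.9x, so "more than 2.5x" holds either way.)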
thuth: any idea why that might happen? (I look in on the
machine from time to time and it doesn't seem to be doing
anything it shouldn't that would be eating CPU.)
Christian: this is on the s390x machine we have. Does the
VM setup for that share IO or CPU with other VMs somehow?
Is there some reason why it might have very variable
performance over time?
thanks
-- PMM