From: Peter Maydell
Subject: Re: [PATCH v8 14/19] arm/hvf: Add a WFI handler
Date: Tue, 15 Jun 2021 11:38:07 +0100

On Wed, 19 May 2021 at 21:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> From: Peter Collingbourne <pcc@google.com>
>
> Sleep on WFI until the VTIMER is due but allow ourselves to be woken
> up on IPI.
>
> In this implementation IPI is blocked on the CPU thread at startup and
> pselect() is used to atomically unblock the signal and begin sleeping.
> The signal is sent unconditionally so there's no need to worry about
> races between actually sleeping and the "we think we're sleeping"
> state. It may lead to an extra wakeup but that's better than missing
> it entirely.
>
> Signed-off-by: Peter Collingbourne <pcc@google.com>
> [agraf: Remove unused 'set' variable, always advance PC on WFX trap]
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Acked-by: Roman Bolshakov <r.bolshakov@yadro.com>
>
> ---
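
For readers who haven't seen the pselect() pattern the commit message
describes: the IPI signal stays blocked everywhere except inside the
pselect() call itself, so a wakeup sent at any moment is either
delivered there or left pending until the thread gets there. A minimal
sketch, assuming SIGUSR1 stands in for the IPI signal and eliding
error checking (ipi_init() and wait_for_ipi() are illustrative names,
not the patch's):

  #include <pthread.h>
  #include <signal.h>
  #include <stddef.h>
  #include <string.h>
  #include <sys/select.h>

  #define SIG_IPI SIGUSR1   /* assumption: any otherwise-unused signal */

  static void ipi_handler(int sig)
  {
      /* No-op: delivery only needs to interrupt pselect(). */
  }

  /* At CPU thread startup: install the handler, then block SIG_IPI so
   * it can only ever be delivered inside pselect() below. */
  static void ipi_init(void)
  {
      struct sigaction sa;
      sigset_t block;

      memset(&sa, 0, sizeof(sa));
      sa.sa_handler = ipi_handler;
      sigaction(SIG_IPI, &sa, NULL);

      sigemptyset(&block);
      sigaddset(&block, SIG_IPI);
      pthread_sigmask(SIG_BLOCK, &block, NULL);
  }

  /* Sleep until 'ts' expires (NULL means wait forever) or SIG_IPI
   * arrives.  pselect() installs the modified mask atomically with
   * starting the wait, so a signal sent just before this call is
   * delivered here rather than lost; the waker simply sends SIG_IPI
   * unconditionally via pthread_kill(), matching the "extra wakeup is
   * fine" note in the commit message. */
  static void wait_for_ipi(const struct timespec *ts)
  {
      sigset_t unblock;

      pthread_sigmask(SIG_SETMASK, NULL, &unblock); /* query mask */
      sigdelset(&unblock, SIG_IPI);                 /* allow SIG_IPI */
      pselect(0, NULL, NULL, NULL, ts, &unblock);   /* EINTR = wakeup */
  }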

> +static void hvf_wfi(CPUState *cpu)
> +{
> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    hv_return_t r;
> +    uint64_t ctl;
> +
> +    if (cpu->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_FIQ)) {
> +        /* Interrupt pending, no need to wait */
> +        return;
> +    }
> +
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CTL_EL0,
> +                            &ctl);
> +    assert_hvf_ok(r);
> +
> +    if (!(ctl & 1) || (ctl & 2)) {
> +        /* Timer disabled or masked, just wait for an IPI. */
> +        hvf_wait_for_ipi(cpu, NULL);
> +        return;
> +    }
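
As an aside, for anyone decoding the magic numbers: bit 0 is
CNTV_CTL_EL0.ENABLE and bit 1 is CNTV_CTL_EL0.IMASK, so the test reads
"timer disabled, or its interrupt masked". Hypothetical named constants
would make that self-documenting:

  #define CNTV_CTL_ENABLE (1ULL << 0)  /* timer enabled */
  #define CNTV_CTL_IMASK  (1ULL << 1)  /* timer interrupt masked */

  if (!(ctl & CNTV_CTL_ENABLE) || (ctl & CNTV_CTL_IMASK)) {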
> +
> +    uint64_t cval;
> +    r = hv_vcpu_get_sys_reg(cpu->hvf->fd, HV_SYS_REG_CNTV_CVAL_EL0,
> +                            &cval);
> +    assert_hvf_ok(r);
> +
> +    int64_t ticks_to_sleep = cval - mach_absolute_time();

This looks odd. The CNTV_CVAL is the compare value against
the CNTVCT (virtual count), which should start at 0 when the
VM starts, pause when the VM is paused, and so on. But here
we are comparing it against what looks like a host absolute
timecount...
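
One sketch of how the two could be reconciled, assuming
mach_absolute_time() is the counter the guest's virtual count is
derived from while the vCPU runs: keep a per-VM offset captured when
the virtual counter was at zero (and adjusted across pauses), and
compare CVAL against the offset-corrected value instead of the raw
host count. The names vtimer_offset and vtimer_val() here are
illustrative, not from the patch:

  #include <stdint.h>
  #include <mach/mach_time.h>

  /* Host counter value at the instant the VM's virtual counter was
   * zero; would need updating across VM pause/resume. */
  static uint64_t vtimer_offset;

  /* Guest-visible virtual count: host ticks elapsed since VM start. */
  static uint64_t vtimer_val(void)
  {
      return mach_absolute_time() - vtimer_offset;
  }

with the comparison above then becoming
"ticks_to_sleep = cval - vtimer_val();".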

> +    if (ticks_to_sleep < 0) {
> +        return;
> +    }
> +
> +    uint64_t seconds = ticks_to_sleep / arm_cpu->gt_cntfrq_hz;
> +    uint64_t nanos =
> +        (ticks_to_sleep - arm_cpu->gt_cntfrq_hz * seconds) *
> +        1000000000 / arm_cpu->gt_cntfrq_hz;

Should this be calling gt_cntfrq_period_ns()?
(If not, please use the NANOSECONDS_PER_SECOND constant instead of
a raw 1000000000.)
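
Either way, a sketch of what the conversion might look like with the
QEMU constant (using % in place of the explicit subtract, which should
behave identically):

  uint64_t seconds = ticks_to_sleep / arm_cpu->gt_cntfrq_hz;
  uint64_t nanos = (ticks_to_sleep % arm_cpu->gt_cntfrq_hz) *
                   NANOSECONDS_PER_SECOND / arm_cpu->gt_cntfrq_hz;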

> +
> +    /*
> +     * Don't sleep for less than the time a context switch would take,
> +     * so that we can satisfy fast timer requests on the same CPU.
> +     * Measurements on M1 show the sweet spot to be ~2ms.
> +     */
> +    if (!seconds && nanos < 2000000) {

"2 * SCALE_MS" is a bit easier to read I think.

> +        return;
> +    }
> +
> +    struct timespec ts = { seconds, nanos };
> +    hvf_wait_for_ipi(cpu, &ts);
> +}

thanks
-- PMM


