From: David Gibson
Subject: Re: [Qemu-ppc] [QEMU-PPC] [PATCH 2/2] target/ppc/spapr: Add SPAPR_CAP_CCF_ASSIST
Date: Mon, 4 Mar 2019 11:52:58 +1100
User-agent: Mutt/1.11.3 (2019-02-01)

On Fri, Mar 01, 2019 at 03:26:45PM +1100, Suraj Jitindar Singh wrote:
> On Fri, 2019-03-01 at 14:19 +1100, Suraj Jitindar Singh wrote:
> > Introduce a new spapr_cap SPAPR_CAP_CCF_ASSIST to be used to indicate
> > the requirement for a hw-assisted version of the count cache flush
> > workaround.
> > 
> > The count cache flush workaround is a software workaround which can be
> > used to flush the count cache on context switch. Some revisions of
> > hardware may have a hardware accelerated flush, in which case the
> > software flush can be shortened. This cap is used to set the
> > availability of such hardware acceleration for the count cache flush
> > routine.
> > 
> > The availability of such hardware acceleration is indicated by the
> > H_CPU_CHAR_BCCTR_FLUSH_ASSIST flag being set in the characteristics
> > returned from the KVM_PPC_GET_CPU_CHAR ioctl.
> > 
> > Signed-off-by: Suraj Jitindar Singh <address@hidden>
> > ---
> >  hw/ppc/spapr.c         |  2 ++
> >  hw/ppc/spapr_caps.c    | 25 +++++++++++++++++++++++++
> >  hw/ppc/spapr_hcall.c   |  3 +++
> >  include/hw/ppc/spapr.h |  5 ++++-
> >  target/ppc/kvm.c       | 14 ++++++++++++++
> >  target/ppc/kvm_ppc.h   |  6 ++++++
> >  6 files changed, 54 insertions(+), 1 deletion(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 1df324379f..708e18dcdf 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -2086,6 +2086,7 @@ static const VMStateDescription vmstate_spapr = {
> >          &vmstate_spapr_cap_nested_kvm_hv,
> >          &vmstate_spapr_dtb,
> >          &vmstate_spapr_cap_large_decr,
> > +        &vmstate_spapr_cap_ccf_assist,
> >          NULL
> >      }
> >  };
> > @@ -4319,6 +4320,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
> >      smc->default_caps.caps[SPAPR_CAP_HPT_MAXPAGESIZE] = 16; /* 64kiB */
> >      smc->default_caps.caps[SPAPR_CAP_NESTED_KVM_HV] = SPAPR_CAP_OFF;
> >      smc->default_caps.caps[SPAPR_CAP_LARGE_DECREMENTER] = SPAPR_CAP_ON;
> > +    smc->default_caps.caps[SPAPR_CAP_CCF_ASSIST] = SPAPR_CAP_OFF;
> >      spapr_caps_add_properties(smc, &error_abort);
> >      smc->irq = &spapr_irq_xics;
> >      smc->dr_phb_enabled = true;
> > diff --git a/hw/ppc/spapr_caps.c b/hw/ppc/spapr_caps.c
> > index 74a48a423a..f03f2f64e7 100644
> > --- a/hw/ppc/spapr_caps.c
> > +++ b/hw/ppc/spapr_caps.c
> > @@ -436,6 +436,21 @@ static void cap_large_decr_cpu_apply(sPAPRMachineState *spapr,
> >      ppc_store_lpcr(cpu, lpcr);
> >  }
> >  
> > +static void cap_ccf_assist_apply(sPAPRMachineState *spapr, uint8_t val,
> > +                                 Error **errp)
> > +{
> > +    uint8_t kvm_val = kvmppc_get_cap_count_cache_flush_assist();
> > +
> > +    if (tcg_enabled() && val) {
> > +        /* TODO - for now only allow broken for TCG */
> > +        error_setg(errp,
> > +"Requested count cache flush assist capability level not supported by tcg, try cap-ccf-assist=off");
> > +    } else if (kvm_enabled() && (val > kvm_val)) {
> > +        error_setg(errp,
> > +"Requested count cache flush assist capability level not supported by kvm, try cap-ccf-assist=off");
> > +    }
> 
> Actually, this should probably be non-fatal if the count cache flush
> routine isn't enabled.

Since the new cap values aren't enabled by default, I've applied
anyway.  You can make this error non-fatal in a followup.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


