From: Greg Kurz
Subject: Re: [PATCH 1/7] spapr, xics: Get number of servers with a XICSFabricClass method
Date: Thu, 3 Oct 2019 15:41:55 +0200

On Thu, 3 Oct 2019 15:19:22 +0200
Cédric Le Goater <address@hidden> wrote:

> On 03/10/2019 15:02, Greg Kurz wrote:
> > On Thu, 3 Oct 2019 14:58:45 +0200
> > Cédric Le Goater <address@hidden> wrote:
> > 
> >> On 03/10/2019 14:49, Greg Kurz wrote:
> >>> On Thu, 3 Oct 2019 14:24:06 +0200
> >>> Cédric Le Goater <address@hidden> wrote:
> >>>
> >>>> On 03/10/2019 14:00, Greg Kurz wrote:
> >>>>> The number of servers, i.e. the upper bound of the highest VCPU id, is
> >>>>> currently only needed to generate the "interrupt-controller" node
> >>>>> in the DT. Soon it will be needed to inform the XICS-on-XIVE KVM
> >>>>> device that it can allocate fewer resources in the XIVE HW.
> >>>>>
> >>>>> Add a method to XICSFabricClass for this purpose. 
> >>>>
> >>>> This is sPAPR code and PowerNV does not care.
> >>>>
> >>>
> >>> Then PowerNV doesn't need to implement the method.
> >>>
> >>>> why can't we simply call spapr_max_server_number(spapr) ?
> >>>>
> >>>
> >>> Because the backend shouldn't reach out to sPAPR machine
> >>> internals. XICSFabric is the natural interface for ICS/ICP
> >>> if they need something from the machine.
> >>
> >> From what I can see, xics_nr_servers() is only called by : 
> >>
> >>   spapr_dt_xics(SpaprMachineState *spapr ...)
> >>   xics_kvm_connect(SpaprMachineState *spapr ...)
> >>
> > 
> > Yes... and ?
> 
> so it is spapr only and we can use spapr_max_server_number(spapr)
> without needing a new XICSFabric handler.
> 

Yet we did take care to have an nr_servers argument to spapr_dt_xics(), even
though we could have called spapr_max_server_number() from the
beginning. And we also took care not to call spapr_max_server_number()
to set up nr_ends in XIVE. Right ?

Ideally spapr_max_server_number() should even be local to spapr.c
IMHO. It happens to be extern because spapr_irq_init() needs it for
sPAPR XIVE, but I think it should rather be passed as an argument.

Anyway, if this patch doesn't reach consensus, I'll switch to using
spapr_max_server_number()... and do the same in sPAPR XIVE for
consistency ;-)

> C. 
> 
> >> C. 
> >>
> >>>>
> >>>>> Implement it
> >>>>> for sPAPR and use it to generate the "interrupt-controller" node.
> >>>>>
> >>>>> Signed-off-by: Greg Kurz <address@hidden>
> >>>>> ---
> >>>>>  hw/intc/xics.c        |    7 +++++++
> >>>>>  hw/intc/xics_spapr.c  |    3 ++-
> >>>>>  hw/ppc/spapr.c        |    8 ++++++++
> >>>>>  include/hw/ppc/xics.h |    2 ++
> >>>>>  4 files changed, 19 insertions(+), 1 deletion(-)
> >>>>>
> >>>>> diff --git a/hw/intc/xics.c b/hw/intc/xics.c
> >>>>> index dfe7dbd254ab..f82072935266 100644
> >>>>> --- a/hw/intc/xics.c
> >>>>> +++ b/hw/intc/xics.c
> >>>>> @@ -716,6 +716,13 @@ ICPState *xics_icp_get(XICSFabric *xi, int server)
> >>>>>      return xic->icp_get(xi, server);
> >>>>>  }
> >>>>>  
> >>>>> +uint32_t xics_nr_servers(XICSFabric *xi)
> >>>>> +{
> >>>>> +    XICSFabricClass *xic = XICS_FABRIC_GET_CLASS(xi);
> >>>>> +
> >>>>> +    return xic->nr_servers(xi);
> >>>>> +}
> >>>>> +
> >>>>>  void ics_set_irq_type(ICSState *ics, int srcno, bool lsi)
> >>>>>  {
> >>>>>      assert(!(ics->irqs[srcno].flags & XICS_FLAGS_IRQ_MASK));
> >>>>> diff --git a/hw/intc/xics_spapr.c b/hw/intc/xics_spapr.c
> >>>>> index 6e5eb24b3cca..aa568ed0dc0d 100644
> >>>>> --- a/hw/intc/xics_spapr.c
> >>>>> +++ b/hw/intc/xics_spapr.c
> >>>>> @@ -311,8 +311,9 @@ static void ics_spapr_realize(DeviceState *dev, Error **errp)
> >>>>>  void spapr_dt_xics(SpaprMachineState *spapr, uint32_t nr_servers, void *fdt,
> >>>>>                     uint32_t phandle)
> >>>>>  {
> >>>>> +    ICSState *ics = spapr->ics;
> >>>>>      uint32_t interrupt_server_ranges_prop[] = {
> >>>>> -        0, cpu_to_be32(nr_servers),
> >>>>> +        0, cpu_to_be32(xics_nr_servers(ics->xics)),
> >>>>>      };
> >>>>>      int node;
> >>>>>  
> >>>>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> >>>>> index 514a17ae74d6..b8b9796c88e4 100644
> >>>>> --- a/hw/ppc/spapr.c
> >>>>> +++ b/hw/ppc/spapr.c
> >>>>> @@ -4266,6 +4266,13 @@ static ICPState *spapr_icp_get(XICSFabric *xi, int vcpu_id)
> >>>>>      return cpu ? spapr_cpu_state(cpu)->icp : NULL;
> >>>>>  }
> >>>>>  
> >>>>> +static uint32_t spapr_nr_servers(XICSFabric *xi)
> >>>>> +{
> >>>>> +    SpaprMachineState *spapr = SPAPR_MACHINE(xi);
> >>>>> +
> >>>>> +    return spapr_max_server_number(spapr);
> >>>>> +}
> >>>>> +
> >>>>>  static void spapr_pic_print_info(InterruptStatsProvider *obj,
> >>>>>                                   Monitor *mon)
> >>>>>  {
> >>>>> @@ -4423,6 +4430,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
> >>>>>      xic->ics_get = spapr_ics_get;
> >>>>>      xic->ics_resend = spapr_ics_resend;
> >>>>>      xic->icp_get = spapr_icp_get;
> >>>>> +    xic->nr_servers = spapr_nr_servers;
> >>>>>      ispc->print_info = spapr_pic_print_info;
> >>>>>      /* Force NUMA node memory size to be a multiple of
> >>>>>       * SPAPR_MEMORY_BLOCK_SIZE (256M) since that's the granularity
> >>>>> diff --git a/include/hw/ppc/xics.h b/include/hw/ppc/xics.h
> >>>>> index 1e6a9300eb2b..e6bb1239e8f8 100644
> >>>>> --- a/include/hw/ppc/xics.h
> >>>>> +++ b/include/hw/ppc/xics.h
> >>>>> @@ -151,9 +151,11 @@ typedef struct XICSFabricClass {
> >>>>>      ICSState *(*ics_get)(XICSFabric *xi, int irq);
> >>>>>      void (*ics_resend)(XICSFabric *xi);
> >>>>>      ICPState *(*icp_get)(XICSFabric *xi, int server);
> >>>>> +    uint32_t (*nr_servers)(XICSFabric *xi);
> >>>>>  } XICSFabricClass;
> >>>>>  
> >>>>>  ICPState *xics_icp_get(XICSFabric *xi, int server);
> >>>>> +uint32_t xics_nr_servers(XICSFabric *xi);
> >>>>>  
> >>>>>  /* Internal XICS interfaces */
> >>>>>  void icp_set_cppr(ICPState *icp, uint8_t cppr);
> >>>>>
> >>>>
> >>>
> >>
> > 
> 
