Re: [Qemu-ppc] [RFC PATCH v3 05/24] spapr: Reorganize CPU dt generation
From: David Gibson
Subject: Re: [Qemu-ppc] [RFC PATCH v3 05/24] spapr: Reorganize CPU dt generation code
Date: Mon, 4 May 2015 22:01:56 +1000
User-agent: Mutt/1.5.23 (2014-03-12)
On Mon, Apr 27, 2015 at 11:06:07AM +0530, Bharata B Rao wrote:
> On Sun, Apr 26, 2015 at 05:17:48PM +0530, Bharata B Rao wrote:
> > On Fri, Apr 24, 2015 at 12:17:27PM +0530, Bharata B Rao wrote:
> > > Reorganize CPU device tree generation code so that it can be reused from
> > > the hotplug path. CPU dt entries are now generated from spapr_finalize_fdt()
> > > instead of spapr_create_fdt_skel().
> >
> > Creating CPU DT entries from spapr_finalize_fdt() instead of
> > spapr_create_fdt_skel() has an interesting side effect.
> >
> > <snip>
> >
> > In both cases, I am adding CPU DT nodes from QEMU in the same order,
> > but I am not sure why the guest kernel discovers them in different orders
> > in each case.
>
> Nikunj and I tracked this down to the difference between the device tree
> APIs we are using in the two cases.
>
> When CPU DT nodes are created from spapr_create_fdt_skel(), we are using
> the fdt_begin_node() API, which does sequential writes, and hence the CPU
> DT nodes end up in the same order in which they are created.
>
> However, in my patch, when I create CPU DT entries in spapr_finalize_fdt(),
> I am using fdt_add_subnode(), which writes each CPU DT node at the same
> parent offset for all the CPUs. This results in the CPU DT nodes appearing
> in reverse order in the FDT.
>
> >
> > > +static void spapr_populate_cpus_dt_node(void *fdt, sPAPREnvironment *spapr)
> > > +{
> > > + CPUState *cs;
> > > + int cpus_offset;
> > > + char *nodename;
> > > + int smt = kvmppc_smt_threads();
> > > +
> > > + cpus_offset = fdt_add_subnode(fdt, 0, "cpus");
> > > + _FDT(cpus_offset);
> > > + _FDT((fdt_setprop_cell(fdt, cpus_offset, "#address-cells", 0x1)));
> > > + _FDT((fdt_setprop_cell(fdt, cpus_offset, "#size-cells", 0x0)));
> > > +
> > > + CPU_FOREACH(cs) {
> > > + PowerPCCPU *cpu = POWERPC_CPU(cs);
> > > + int index = ppc_get_vcpu_dt_id(cpu);
> > > + DeviceClass *dc = DEVICE_GET_CLASS(cs);
> > > + int offset;
> > > +
> > > + if ((index % smt) != 0) {
> > > + continue;
> > > + }
> > > +
> > > + nodename = g_strdup_printf("%s@%x", dc->fw_name, index);
> > > + offset = fdt_add_subnode(fdt, cpus_offset, nodename);
> > > + g_free(nodename);
> > > + _FDT(offset);
> > > + spapr_populate_cpu_dt(cs, fdt, offset);
> > > + }
> >
> > I can simply fix this by walking the CPUs in reverse order in the above
> > code, which makes the guest kernel discover the CPU DT nodes in the
> > right order.
> >
> > s/CPU_FOREACH(cs)/CPU_FOREACH_REVERSE(cs)/ will solve this problem. Would
> > this be the right approach, or should we just leave it to the guest kernel
> > to discover and enumerate CPUs in whatever order it finds the DT nodes in
> > the FDT?
>
> So using CPU_FOREACH_REVERSE(cs) appears to be the right way to handle this.
Yes, I think so. In theory it shouldn't matter, but I think it's
safer to retain the device tree order.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson