qemu-ppc

Re: [Qemu-ppc] [RFC/PATCH] monitor/ppc: Access all SPRs from the monitor


From: Benjamin Herrenschmidt
Subject: Re: [Qemu-ppc] [RFC/PATCH] monitor/ppc: Access all SPRs from the monitor
Date: Wed, 30 Sep 2015 16:24:07 +1000

On Wed, 2015-09-30 at 16:03 +1000, David Gibson wrote:
> On Sun, Sep 27, 2015 at 04:31:16PM +1000, Benjamin Herrenschmidt wrote:
> > We already have a table with all supported SPRs along with their names,
> > so let's use that rather than a duplicate table that is perpetually
> > out of sync in the monitor code.
> > 
> > This adds a new monitor hook target_extra_monitor_def() which is called
> > if nothing is found in the normal table. We still use the old mechanism
> > for anything that isn't an SPR.
> > 
> > Signed-off-by: Benjamin Herrenschmidt <address@hidden>
> 
> This looks like a good idea, but it seems to be a slightly different
> approach from the one taken by some rather similar patches Alexey
> posted recently.
> 
> Would you care to co-ordinate on which of those approaches to go ahead
> with?

The code upstream has changed quite a bit...
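
For readers following along, the intended call flow on the monitor side is: try the static MonitorDef table returned by target_monitor_defs() first, and only fall back to the new hook when that turns up nothing. A rough sketch, purely illustrative (the get_monitor_def() shape and the MonitorDef handling are simplified here, not the actual monitor code):

static int get_monitor_def(uint64_t *pval, const char *name)
{
    const MonitorDef *md;

    /* Walk the target-provided static table first. */
    for (md = target_monitor_defs(); md && md->name; md++) {
        if (strcmp(md->name, name) == 0) {
            /* ... existing handling via md->offset / md->get_value ... */
            return 0;
        }
    }

    /* Nothing in the static table: give the target a chance (SPRs on ppc). */
    return target_extra_monitor_def(pval, name);
}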

> [snip]
> > @@ -253,3 +180,23 @@ const MonitorDef *target_monitor_defs(void)
> >  {
> >      return monitor_defs;
> >  }
> > +
> > +int target_extra_monitor_def(uint64_t *pval, const char *name)
> > +{
> > +     /* On ppc, search through the SPRs so we can print any of them */
> > +    {
>        ^
> Also, this appears to be a redundant set of braces.

Ah right, that used to be inside the caller (monitor_defs()) and I
moved it to a hook and forgot to take out the extra braces.

I'll respin.

> > +        CPUArchState *env = mon_get_cpu_env();
> > +        ppc_spr_t *spr_cb = env->spr_cb;
> > +        int i;
> > +
> > +        for (i = 0; i < 1024; i++) {
> > +            if (!spr_cb[i].name || strcasecmp(name, spr_cb[i].name)) {
> > +                continue;
> > +            }
> > +            *pval = env->spr[i];
> > +            return 0;
> > +        }
> > +    }
> > +    return -1;
> > +}
> > +
> > 
> > 
> > 
> 
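
For reference, the respin would essentially just drop that inner set of braces and keep the lookup as-is. A sketch of the cleaned-up hook (same logic as the hunk above, bounded by the 1024 SPR slots in env->spr_cb, and reusing mon_get_cpu_env() and ppc_spr_t as in the posted patch; this is not the actual respun patch):

int target_extra_monitor_def(uint64_t *pval, const char *name)
{
    CPUArchState *env = mon_get_cpu_env();
    ppc_spr_t *spr_cb = env->spr_cb;
    int i;

    /* On ppc, search through the SPRs so we can print any of them */
    for (i = 0; i < 1024; i++) {
        if (!spr_cb[i].name || strcasecmp(name, spr_cb[i].name)) {
            continue;
        }
        *pval = env->spr[i];
        return 0;
    }

    return -1;
}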


