qemu-devel

Re: Slow down with: 'Make "info qom-tree" show children sorted'


From: David Gibson
Subject: Re: Slow down with: 'Make "info qom-tree" show children sorted'
Date: Thu, 16 Jul 2020 09:59:26 +1000

On Mon, 13 Jul 2020 18:13:42 +0200
Markus Armbruster <armbru@redhat.com> wrote:

> David Gibson <dgibson@redhat.com> writes:
> 
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> The surprising part is that n turns out to be large enough for n^2 to
> >> matter *that* much.  
> >
> > Is this another consequence of the ludicrous number of QOM objects we
> > create for LMB DRCs (one for every 256MiB of guest RAM)?  Avoiding that
> > is on my list.  
> 
> You're talking about machine pseries, I presume.

Yes.

>  With
> print_qom_composition() patched to print the number of children, I get
> 
>     $ echo -e 'info qom-tree\nq' | \
>           ../qemu/bld/ppc64-softmmu/qemu-system-ppc64 -S -display none \
>           -M pseries -accel qtest -monitor stdio | \
>           grep '###' | sort | uniq -c | sort -k 3n
>         360 ### 0 children
>           5 ### 1 children
>           5 ### 2 children
>           2 ### 3 children
>           1 ### 4 children
>           1 ### 15 children
>           1 ### 16 children
>           1 ### 18 children
>           1 ### 37 children
>           1 ### 266 children
> 
> The outlier is
> 
>         /device[5] (spapr-pci-host-bridge)
> 
> due to its 256 spapr-drc-pci children.

Right, that's one for each possible PCI slot on the bus.  That will be
reduced by the idea I have in mind for this, but...

> I found quite a few machines with similar outliers.  ARM machines nuri
> and smdkc210 together take the cake: they each have a node with 513
> children.
> 
> My stupid n^2 sort is unnoticeable in normal, human usage even for n=513.

... as you say, 256 shouldn't really be a problem.  I was concerned
about LMB DRCs rather than PCI DRCs.  To make that show up, you'd
need to create a machine with a large difference between initial memory
and maxmem - I think you get a DRC object for every 256MiB in that
range, which can easily run into the thousands for VMs with a large
(potential) memory.
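
Back of the envelope, assuming the one-DRC-per-256MiB figure above
(the -m/maxmem values here are just illustrative, not from the original
report):

```shell
# Hypothetical sketch: estimate LMB DRC object count for a pseries
# guest, assuming one DRC per 256MiB of hot-pluggable memory
# (maxmem minus initial RAM), as described above.
initial_mib=4096            # e.g. -m 4G
maxmem_mib=$((1024 * 1024)) # e.g. maxmem=1T
lmb_mib=256                 # assumed LMB size from the discussion
drcs=$(( (maxmem_mib - initial_mib) / lmb_mib ))
echo "$drcs"                # 4080 DRC children under one parent
```

At n=4080 an n^2 insertion is ~16 million comparisons, which would
explain the slowdown far better than the 256- or 513-child outliers.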

I don't know what the config was that showed this problem up in the
first place, or whether that could be the case there.

> >                 Though avoiding a n^2 behaviour here is probably a good
> > idea anyway.  
> 
> Agreed.

-- 
David Gibson <dgibson@redhat.com>
Principal Software Engineer, Virtualization, Red Hat


