
bug#17535: 24.3.91; Problems with profiling memory


From: Eli Zaretskii
Subject: bug#17535: 24.3.91; Problems with profiling memory
Date: Wed, 28 May 2014 19:49:28 +0300

> From: Stefan Monnier <monnier@iro.umontreal.ca>
> Cc: 17535@debbugs.gnu.org
> Date: Tue, 20 May 2014 16:16:13 -0400
> 
> >> > 2. How to interpret the "memory profile"?  What does a line such as
> >> >    this in the profile mean:
> >> >   - execute-extended-command                                  973,272  20%
> >> >    How were the 973,272 bytes counted, and what are they 20% of?  The
> >> >    ELisp manual, where this facility is described, does not explain
> >> >    how to interpret the profiles, and neither can I find anything
> >> >    about that in the doc strings.
> >> It's the number of bytes "allocated from the system" during execution of
> >> this function.
> > But these numbers are huge.  I have a hard time believing that all
> > those bytes were allocated in just a few minutes of an almost-idle Emacs.
> 
> It only counts allocation, so "alloc+free+alloc" counts as 2 allocs.

Why doesn't it count a call to 'free' as a deallocation?  Ignoring
freed memory makes the memory profiler much less useful than it could
be; for example, it is impossible to look for leaks when releases of
memory are never recorded.  Isn't it possible to call malloc_probe
with a negative argument?

Moreover, I don't understand this part of garbage-collect:

  /* Collect profiling data.  */
  if (profiler_memory_running)
    {
      size_t swept = 0;
      size_t tot_after = total_bytes_of_live_objects ();
      if (tot_before > tot_after)
        swept = tot_before - tot_after;
      malloc_probe (swept);
    }

This looks like we count memory we just swept (i.e., released) as an
allocation: if the GC frees 3 MB of Lisp data, those 3 MB are passed
to malloc_probe as though they had just been allocated.  Unless I'm
missing something, that makes no sense.

> >> This "allocation" is poorly defined: we don't track allocation of
> >> individual objects but of things like cons_blocks.
> > Do we only track allocations of Lisp objects, or just any calls to
> > xmalloc?
> 
> Can't remember.

I can now answer this myself: we track all calls to xmalloc, and also
all the allocating functions, like lisp_malloc, which call malloc
directly.  IOW, we track all memory allocations except direct calls to
mmap.

> >> >   etc.: I see no percentage numbers except 1%, 0%, and -1%.
> >> This is just most likely a wrap-around due to too-large integers.
> > Definitely.  I thought these were already fixed, but it looks they
> > aren't.  I will try to take a better look.  Are there any reasons not
> > to do this calculation in floating-point?
> 
> The raw counts need to be integers because we can't allocate during the
> sampling, but all the Elisp code could use floating point, I think.

Could someone please fix this?  Without a fix, the memory profiling is
simply useless.  It's too bad we released Emacs 24.3 like that, but at
least let's fix this in 24.4.
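
For the percentage itself, the arithmetic could presumably be done in
floating point and only rounded for display; here's a minimal sketch
of that idea (the function and argument names are mine, not anything
from profiler.el):

  ;; Sketch only: COUNT is an entry's byte count, TOTAL the overall
  ;; total; do the division in floats so large counts don't wrap.
  (defun my-profiler-percentage (count total)
    "Return COUNT as an integer percentage of TOTAL, computed in floats."
    (if (zerop total)
        0
      (round (* 100.0 (/ (float count) total)))))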

I tried a few relatively simple tricks, but couldn't make it work,
because using floats screws up the report formatting.  Which brings me
to this snippet from profiler.el:

  (defvar profiler-report-cpu-line-format
    '((50 left)
      (24 right ((19 right)
                 (5 right)))))

  (defvar profiler-report-memory-line-format
    '((55 left)
      (19 right ((14 right profiler-format-number)
                 (5 right)))))

Is it possible to have doc strings for these variables, or at least a
comment that explains this data structure, the meaning of each field,
and its possible values?  Otherwise, there's no way of adjusting them
when the report format is changed in some way.
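
For what it's worth, here's my guess at the semantics, written as a
strawman doc string; someone who actually knows this code should
correct it:

  ;; My reading of `profiler-format', to be verified:
  (defvar profiler-report-memory-line-format
    '((55 left)
      (19 right ((14 right profiler-format-number)
                 (5 right))))
    "Format spec for the lines of memory profiler reports.
Each element seems to be (WIDTH ALIGNMENT &optional FORMATTER), where
WIDTH is a column width in characters, ALIGNMENT is `left' or `right',
and FORMATTER is either a function that formats the value (such as
`profiler-format-number') or a nested spec for sub-columns.")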

Finally, the situation with the doc strings (the first issue I
mentioned in my original report) is worse than I thought.  A lot of
the macros in profiler.el are defined via cl-macs.el, and they all
share the same problem: they have no doc strings, and "C-h f" cannot
find their sources, which makes reading the code unbearably
complicated.

TIA




