Re: profiling, what takes so long?


From: Paul D. Smith
Subject: Re: profiling, what takes so long?
Date: Wed, 1 Feb 2006 18:17:38 -0500

%% Torsten Mohr <address@hidden> writes:

  tm> For a project in the office I wrote a Makefile.  There we use
  tm> ClearCase for version control.  We can get a copy of the project
  tm> to the local hard drive, and then the Makefile works quite fast and
  tm> without problems.

  tm> We can also get a faked network drive from ClearCase with the
  tm> project files in it.  Both have the same directory structure.

  tm> When we try to compile the sources on the faked network drive,
  tm> everything takes incredibly long.  Previously, a quite simple
  tm> script was used in that project to stupidly compile all the
  tm> sources.  People now complain that it worked better than the
  tm> Makefile.

  tm> Is there a way for me to test now what takes so long?

I'm assuming, from your discussion of "faked network drives", that
you're on a Windows platform.

Since profiling tools are very platform-specific, I suggest you ask this
question on the address@hidden mailing list instead of here.


If you are building GNU make with GCC, you can compile it with profiling
enabled by adding -pg to the compile and link flags.  Then run make and it'll
leave a file called "gmon.out" behind.  You can use the GNU gprof
program (and/or one of its GUI interfaces) to examine that data.  I
don't know if gprof is ported to Windows but I would assume so.
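
For example (a rough sketch assuming a GCC toolchain and a GNU make
source tree; the paths here are only placeholders):

  # build an instrumented copy of GNU make from its source tree
  cd /path/to/make-src
  ./configure
  make CFLAGS="-g -pg" LDFLAGS="-pg"

  # drive your project's build with the instrumented make; gmon.out is
  # written in the directory the profiled make was started from
  cd /path/to/project
  /path/to/make-src/make

  # turn the raw data into a readable report
  gprof /path/to/make-src/make gmon.out > profile.txt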

If you're using GNU/Linux, one of the best profilers out there is
provided with valgrind.
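The tool there is callgrind; a minimal invocation looks something like
this (the output file name is whatever valgrind generates, so treat it
as a sketch):

  # profile make itself; valgrind does not follow the child compilers
  # unless asked to, which is what we want here
  valgrind --tool=callgrind make

  # summarize the hottest functions, or load the file into kcachegrind
  callgrind_annotate callgrind.out.<pid>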

  tm> Could you give me any hints on what could make sense to measure
  tm> and where (by changing the sources) to measure it?

It will be hard to change the source to measure it.  Surely there is
some sort of performance tool you can use.

  tm> - the "stat"s
  tm> - the commands started from "make"
  tm> - searching for dependencies
  tm> - internal calculations

Since you're running into this problem with ClearCase, which is
essentially a replacement filesystem, I'd say that your best bet is to
look at filesystem interactions.  There's absolutely no question that
any filesystem access (including builds) in a ClearCase view will be
SIGNIFICANTLY slower than normal filesystem access.
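
On GNU/Linux you could get a rough count of those interactions with
strace (purely illustrative; it won't run on Windows, but any
system-call tracer you have can do the same job):

  # count the system calls a do-nothing run of make issues; a huge
  # number of stat/open calls points straight at the filesystem
  strace -f -c -o strace.summary make -n
  grep -E 'stat|open' strace.summary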

The concerning thing is that you said that a stupid compile-everything
script inside the ClearCase view worked much faster than make.  If so,
that points to some issue with make rather than ClearCase.

So the first thing to do is test that hypothesis (just because the
developers say it doesn't make it true :-)): run a "clean" build using
make, and then run the same commands outside of make using a script (you
can capture make's output, or make -n's output, to get a jump on writing
that script).  Compare the runtimes.
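
Something along these lines (a rough sketch; the file names are made
up):

  # inside the ClearCase view:
  make clean
  make -n > build.sh        # capture the commands make would have run
  make clean
  time sh ./build.sh        # the "dumb script" build
  make clean
  time make                 # the same build driven by make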

If they're more-or-less the same, then you're done.

If they're very different, it's time to get into profiling make.

-- 
-------------------------------------------------------------------------------
 Paul D. Smith <address@hidden>          Find some GNU make tips at:
 http://www.gnu.org                      http://make.paulandlesley.org
 "Please remain calm...I may be mad, but I am a professional." --Mad Scientist



