gcl-devel

From: Mike Thomas
Subject: RE: [Gcl-devel] STABLE, WINDOWS: read_fasd1() and alloc_relblock()
Date: Wed, 21 Apr 2004 16:14:19 +1000

Hi Camm.

Camm wrote:

| Fantastic!!  Was it the -fno-inline-functions or the getc redefine?
| I'm guessing the former.  In fact, with the former, why do you need
| the latter?

My thoughts exactly, but neither option by itself is sufficient to get a
complete "make test-unixport" - the former is needed to get through the
remainder of the test run once the latter fixes the "rt.o" load hang.
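
In case the getc redefine needs spelling out, the idea is roughly this (a
sketch only - I'm assuming the simplest form here, which just bypasses the
stdio macro so the compiler can't inline the FILE internals it touches):

#include <stdio.h>

/* Sketch: replace the getc macro with a plain call to fgetc so
   that nothing about the FILE structure gets inlined at the call
   sites.  */
#ifdef getc
#undef getc
#endif
#define getc(fp) fgetc(fp)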

As reported yesterday, the same set of flags was sufficient to compile
Maxima with gcc 3.3.1/GCL, overcoming the so-called "ignore-errors" bug.
That is all good.

Unfortunately the random tester died after about 400 iterations.

Worse, I spent last night trying various combinations of optimisation flags
and am entirely unable to come up with a set which can do both of:

  1. Survive more than 700 cycles of the random tester and
  2. Overcome the "ignore-errors" bug in Maxima.

I can only achieve one or the other.  Many combinations cause a recurrence
of the Maxima path mangling bug as well.

To satisfy my curiosity I tried the older MinGW32 gcc 3.2.1 and 3.2.3 with
whatever binutils came therewith, and in both cases got a continuous cyclical
rebuild of PCL caused by a memory error compiling "pcl/gazonk1.lsp".

I have yet to try a gcc 2.95 as I haven't got one lying around.


| > 2. GCL/gcc 3.3.3/binutils 2.15.90 still causes the Maxima crash.  It now
| > crashes consistently (which is also good news) while loading
|
|                        ^^^^^^^^^^^^^^^^^^^^^^^^
|
| Indeed.  Any idea of why the inline function optimization produced
| *irregular* crashes?

None other than the guess that memory was being corrupted randomly?


|
| > "binary-gcl/specfn.o" so if we still have the fortitude and time we can
| > probably track it down.
| >
| > The problem occurs in the Maxima source file "src/clmacs.lisp", function
| > "aset-by-cursor" called in "fillarray":
| >
|
| OK, my suggestion here would be to configure gcl with debugging, and
| build maxima with --enable-gcl --enable-gcl-alt-link.  This will give
| you an image fully addressable within gdb.  If the build doesn't
| complete, as is likely, just link in the clmacs.o file with
|
| (compiler::link (list "clmacs.o") "new_gcl")
|
| and restart the problem command with new_gcl under gdb, adjusting the
| command so as not to reload clmacs.o.
|
| You might also want to just try the vanilla maxima build under gdb and
| get a backtrace to the location within aset-by-cursor that causes the
| crash.

Will try this later.

| Separately, it appears from Vadim's email that I've been mistaken in
| thinking gcc 3.3.3 was the latest official gcc on mingw, in which case
| chasing down this problem now would be critical.  Is gcc 3.3.3 on
| mingw only a 'candidate'?  If so, the bug is still important, but not
| so critical as to further delay the release IMHO if the information
| gleaned from the above steps reveals a time consuming job ahead of us.

In principle I agree, except that the problems with 3.3.1 appear to be just
as great as with 3.3.3 if we expect to make a release which passes all the
tests - there is something (at least one thing) fundamentally wrong and we
haven't found it yet.

Incidentally, I note that there are many functions in "h/cmpinclude.h" whose
declarations include a return value type but not the types of their
arguments.  Thinking that the path coercion functions might be at the heart
of the Maxima path problem, I changed the relevant declarations, but to no
avail:

object coerce_to_pathname(object);
/* object default_device(); */
object merge_pathnames(object,object,object);
object namestring(object);
object coerce_to_namestring(object);
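
To make the concern concrete, here is a sketch (with "object" stubbed in -
this is not GCL's actual code) of how an unprototyped declaration lets a
bad call through silently:

typedef void *object;   /* stub; in GCL this is the Lisp object type */

object coerce_to_pathname();   /* old style: call sites go unchecked */

int main(void)
{
    /* Under the old-style declaration above this would compile
       without complaint, passing an int where an object is
       expected.  With the full prototype
       "object coerce_to_pathname(object);" gcc warns about it.  */
    /* coerce_to_pathname(42); */
    return 0;
}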

However, there are many functions still to go, and I am curious whether you
think this might be confusing the compiler or linker.

| Thanks again to both of you for your *fantastic* work here!

Thanks.

Cheers

Mike Thomas.





