[Gcl-devel] Re: GCL allocation


From: Camm Maguire
Subject: [Gcl-devel] Re: GCL allocation
Date: 27 Aug 2003 16:17:54 -0400
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Greetings!

OK, here's the short version:

It was broken.  Now it's fixed :-).

The slightly longer version: current GCL never marks contiguous pages as
SGC pages, yet when SGC is on it sweeps only SGC pages, so freed
contiguous blocks are never reclaimed.  That is the source of the
massive leak; the extra reloc pages in the example put it over the top.

One can of course address this in several ways.  One is to turn off
SGC on contiguous pages with (si::allocate-sgc 'contiguous 0 3000 0).
But this is obviously not optimal.
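
For example (just a sketch, using the exact allocate-sgc form above; the
call would presumably need to come before sgc-on is invoked):

(si::allocate-sgc 'contiguous 0 3000 0) ; turn off SGC on contiguous pages
(si::sgc-on t)                          ; then enable SGC as usual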

Instead I've constructed a patch which implements SGC for contiguous
pages.  It's quite tricky, being close to the most involved change I've
yet made to GCL.  I've tried to document all the details in the
comments; you can read them in the patch below if you'd like.

A patch of this import of course needs to be well tested.  All goes
well with maxima, self-build, and ansi thus far.  I'm presently testing
the acl2 book certification, and then it probably needs to be run
against axiom as well.  I've tried it on the test below using quite a
few permutations of (allocate, allocate-sgc) over
(contiguous, relblock, cfun(==bignum)), together with sgc-on, and even
si::SET-GMP-ALLOCATE-RELOCATABLE, all successfully, although nothing
exhaustive as yet.

Just as a reminder, gmp bignums are allocated on contiguous pages by
default, as these reproduce malloc semantics (i.e. they don't move),
and one is thus assured that no caching in the external gmp library
will be corrupted.  Dr. Schelter apparently audited the gmp code at
the point when support for it was added, identifying precisely one
malloc in a bad place and replacing it with a safe alloca, thereby
allowing bignums to be allocated on the faster relocatable pages
instead.  I have never repeated this analysis, but we do overwrite said
malloc with the new alloca even when linking gmp in dynamically.  gmp
could conceivably introduce another bad malloc without our noticing,
but as of right now relocatable bignums work fine, at least in this
test.  Of course, building GCL with its own copy of gmp will always
work as it ever has.  Run (si::set-gmp-allocate-relocatable t) to try
it out.
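
For instance, to exercise relocatable bignums on the same test (a
sketch only; testfun comes from test3.lisp in Matt's message below):

(si::set-gmp-allocate-relocatable t) ; bignums on relocatable rather than contiguous pages
(load "test3")
(testfun 1000000 3)                  ; bignum-heavy loop from Matt's report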

Separately, several page types, including bignums, had no SGC pages
allocated by default, leading to thrashing between sgc-on and sgc-off
in the test below when the bignum header underwent GC.  I've remedied
these defaults here as well.
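
Should anyone want to tune this by hand rather than rely on the new
defaults, the same allocate-sgc interface applies, e.g. (the numbers
here are purely illustrative):

(si::allocate-sgc 'bignum 1 3000 10) ; give bignum pages some SGC pages by hand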

I've not even committed this change yet, as it still might need a few
minor adjustments, but it basically appears to be working.  Feedback
from GC gurus is of course appreciated as always :-).  Hammer on it and
find the bugs if you are so inclined!

To the list -- sorry about being delayed on this time-consuming
project, but I feel it takes precedence over things I'd rather get to,
like ansi support.

Take care,

=============================================================================
Index: h/object.h
===================================================================
RCS file: /cvsroot/gcl/gcl/h/object.h,v
retrieving revision 1.18.4.1
diff -u -r1.18.4.1 object.h
--- h/object.h  16 Jul 2003 02:02:49 -0000      1.18.4.1
+++ h/object.h  27 Aug 2003 19:21:52 -0000
@@ -759,6 +759,7 @@
        short   tm_max_grow;    /* max amount to grow when growing */
        short   tm_growth_percent;  /* percent to increase maxpages */
        short   tm_percent_free;  /* percent which must be free after a gc for this type */
+        short   tm_distinct;       /* pages of this type are distinct */
 
 };
 
Index: h/page.h
===================================================================
RCS file: /cvsroot/gcl/gcl/h/page.h,v
retrieving revision 1.4.4.1
diff -u -r1.4.4.1 page.h
--- h/page.h    21 Aug 2003 04:17:47 -0000      1.4.4.1
+++ h/page.h    27 Aug 2003 19:21:52 -0000
@@ -29,6 +29,12 @@
 #define ROUND_UP_PTR(n)        (((long)(n) + (PTR_ALIGN-1)) & ~(PTR_ALIGN-1))
 #define ROUND_DOWN_PTR(n) (((long)(n)  & ~(PTR_ALIGN-1)))
 
+/* alignment required for contiguous pointers */
+#define CPTR_ALIGN (PTR_ALIGN < sizeof(struct contblock) ? sizeof(struct contblock) : PTR_ALIGN)
+
+#define ROUND_UP_PTR_CONT(n)   (((long)(n) + (CPTR_ALIGN-1)) & ~(CPTR_ALIGN-1))
+#define ROUND_DOWN_PTR_CONT(n) (((long)(n)  & ~(CPTR_ALIGN-1)))
+
 
 #ifdef SGC
 
Index: o/alloc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/alloc.c,v
retrieving revision 1.19
diff -u -r1.19 alloc.c
--- o/alloc.c   1 Mar 2003 22:37:37 -0000       1.19
+++ o/alloc.c   27 Aug 2003 19:21:52 -0000
@@ -425,9 +425,19 @@
 /*
        printf("allocating %d-byte contiguous block...\n", n);
 */
+        /* SGC cont pages: contiguous pointers must be aligned at
+           CPTR_ALIGN, no smaller than sizeof (struct contblock).
+           Here we allocate a bigger block, and rely on the fact that
+           allocate_page returns pointers appropriately aligned,
+           being also aligned on page boundaries.  Protection against
+           a too small contblock was afforded before by a minimum
+           contblock size enforced by CBMINSIZE in insert_contblock.
+           However, this leads to a leak when many small cont blocks
+           are allocated, e.g. with bignums, so is now removed.  CM
+           20030827 */
 
        g = FALSE;
-       n = ROUND_UP_PTR(n);
+       n = ROUND_UP_PTR_CONT(n);
 
 ONCE_MORE:
         CHECK_INTERRUPT;
@@ -472,8 +482,16 @@
       }
        p = alloc_page(m);
 
-       for (i = 0;  i < m;  i++)
+       for (i = 0;  i < m;  i++) {
                type_map[page(p) + i] = (char)t_contiguous;
+
+               /* SGC cont pages: Before this point, GCL never marked contiguous
+                  pages for SGC, causing no contiguous pages to be
+                  swept when SGC was on.  Here we follow the behavior
+                  for other pages in add_to_freelist. CM 20030827  */
+               if (sgc_enabled && tm_table[t_contiguous].tm_sgc)
+                 sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
+       }
        ncbpage += m;
        insert_contblock(p+n, PAGESIZE*m - n);
        return(p);
@@ -484,11 +502,17 @@
 
   struct contblock **cbpp, *cbp;
   
-  if (s < CBMINSIZE)
+  /* SGC cont pages: This used to return when s<CBMINSIZE, but we need
+     to be able to sweep small (e.g. bignum) contblocks.  FIXME:
+     should never be called with s<=0 to begin with.  CM 20030827*/
+  if (s<=0)
     return;
   ncb++;
   cbp = (struct contblock *)p;
-  cbp->cb_size = s;
+  /* SGC cont pages: allocated sizes may not be zero mod CPTR_SIZE,
+     e.g. string fillp, but alloc_contblock rounded up the allocation
+     like this, which we follow here.  CM 20030827 */
+  cbp->cb_size = ROUND_UP_PTR_CONT(s);
   for (cbpp = &cb_pointer;  *cbpp;  cbpp = &((*cbpp)->cb_link))
     if ((*cbpp)->cb_size >= s) {
       cbp->cb_link = *cbpp;
@@ -568,19 +592,30 @@
        return(p);
 }
 
+/* Add a tm_distinct field to prevent page type sharing if desired.
+   Not used now, as its never desirable from an efficiency point of
+   view, and as the only known place one must separate is cons and
+   fixnum, which are of different sizes unless PTR_ALIGN is set too
+   high (e.g. 16 on a 32bit machine).  See the ordering of init_tm
+   calls for these types below -- reversing would wind up merging the
+   types with the current algorithm.  CM 20030827 */
+
 static void
-init_tm(enum type t, char *name, int elsize, int nelts, int sgc) {
+init_tm(enum type t, char *name, int elsize, int nelts, int sgc,int distinct) {
 
   int i, j;
   int maxpage;
   /* round up to next number of pages */
   maxpage = (((nelts * elsize) + PAGESIZE -1)/PAGESIZE);
   tm_table[(int)t].tm_name = name;
-  for (j = -1, i = 0;  i < (int)t_end;  i++)
-    if (tm_table[i].tm_size != 0 &&
-       tm_table[i].tm_size >= elsize &&
-       (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
-      j = i;
+  j=-1;
+  if (!distinct)
+    for (i = 0;  i < (int)t_end;  i++)
+      if (tm_table[i].tm_size != 0 &&
+         tm_table[i].tm_size >= elsize &&
+         !tm_table[i].tm_distinct &&
+         (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
+       j = i;
   if (j >= 0) {
     tm_table[(int)t].tm_type = (enum type)j;
     tm_table[j].tm_maxpage += maxpage;
@@ -598,6 +633,7 @@
   /*tm_table[(int)t].tm_npage = 0; */  /* dont zero nrbpage.. */
   tm_table[(int)t].tm_maxpage = maxpage;
   tm_table[(int)t].tm_gbccount = 0;
+  tm_table[(int)t].tm_distinct=distinct;
 #ifdef SGC     
   tm_table[(int)t].tm_sgc = sgc;
   tm_table[(int)t].tm_sgc_max = 3000;
@@ -688,40 +724,46 @@
   for (i = 0;  i < MAXPAGE;  i++)
     type_map[i] = (char)t_other;
   
+  /* Unused (at present) tm_distinct flag added.  Note that if cons
+     and fixnum share page types, errors will be introduced.
+
+     Gave each page type at least some sgc pages by default.  Of
+     course changeable by allocate-sgc.  CM 20030827 */
+
   init_tm(t_fixnum, "NFIXNUM",
-         sizeof(struct fixnum_struct), 8192,20);
-  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50 );
-  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,0 );
-  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,0  );
-  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,0 );
-  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1  );
-  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1 );
-  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1 );
-  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,0 );
-  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,0 );
+         sizeof(struct fixnum_struct), 8192,20,0);
+  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50,0 );
+  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,1,0 );
+  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,1,0  );
+  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,1,0 );
+  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1,0  );
+  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1,0 );
+  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1,0 );
+  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,1,0 );
+  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,1,0 );
   init_tm(t_shortfloat, "FSHORT-FLOAT",
-         sizeof(struct shortfloat_struct), 256 ,1);
+         sizeof(struct shortfloat_struct), 256 ,1,0);
   init_tm(t_longfloat, "LLONG-FLOAT",
-         sizeof(struct longfloat_struct), 170 ,0);
-  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,0);
-  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,0);
-  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / sizeof(struct package),0);
-  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,0 );
-  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,0);
-  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,0);
-  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,0);
-  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,0);
-  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,0);
-  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,0);
-  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,0);
-  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,0);
-  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,0);
-  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,0);
-  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,0);
-  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,0);
-  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,0);
-  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20);
-  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20);
+         sizeof(struct longfloat_struct), 170 ,1,0);
+  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,1,0);
+  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,1,0);
+  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / sizeof(struct package),1,0);
+  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,1,0 );
+  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,1,0);
+  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,1,0);
+  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,1,0);
+  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,1,0);
+  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,1,0);
+  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,1,0);
+  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,1,0);
+  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,1,0);
+  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,1,0);
+  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,1,0);
+  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,1,0);
+  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,1,0);
+  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,1,0);
+  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20,0);
+  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20,0);
   tm_table[t_relocatable].tm_nppage = PAGESIZE;
   tm_table[t_contiguous].tm_nppage = PAGESIZE;
   
Index: o/gbc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/gbc.c,v
retrieving revision 1.13.4.1
diff -u -r1.13.4.1 gbc.c
--- o/gbc.c     30 Jul 2003 15:11:12 -0000      1.13.4.1
+++ o/gbc.c     27 Aug 2003 19:21:52 -0000
@@ -1012,19 +1012,24 @@
     e = pagetochar(j);
     for (p = s;  p < e;) {
       if (get_mark_bit((int *)p)) {
-       p += PTR_ALIGN;
+       /* SGC cont pages: cont blocks must be no smaller than
+          sizeof(struct contblock), and must not have a sweep
+          granularity greater than this amount (e.g. CPTR_ALIGN) if
+          contblock leaks are to be avoided.  Used to be aligned at
+          PTR_ALIGN. CM 20030827 */
+       p += CPTR_ALIGN;
        continue;
       }
-      q = p + PTR_ALIGN;
+      q = p + CPTR_ALIGN;
       while (q < e) {
        if (!get_mark_bit((int *)q)) {
-         q += PTR_ALIGN;
+         q += CPTR_ALIGN;
          continue;
        }
        break;
       }
       insert_contblock(p, q - p);
-      p = q + PTR_ALIGN;
+      p = q + CPTR_ALIGN;
     }
     i = j + 1;
   }
@@ -1067,8 +1072,8 @@
     if(sgc_enabled) sgc_quit();
     
     }
-  
-  
+
+
 #ifdef DEBUG
   debug = symbol_value(sSAgbc_messageA) != Cnil;
 #endif
@@ -1423,8 +1428,10 @@
   if (!MAYBE_DATA_P(p) || (enum type)type_map[page(p)] != t_contiguous)
     return;
   q = p + s;
-  x = (int *)ROUND_DOWN_PTR(p);
-  y = (int *)ROUND_UP_PTR(q);
+  /* SGC cont pages: contblock pages must be no smaller than
+     sizeof(struct contblock).  CM 20030827 */
+  x = (int *)ROUND_DOWN_PTR_CONT(p);
+  y = (int *)ROUND_UP_PTR_CONT(q);
   for (;  x < y;  x++)
     set_mark_bit(x);
 }
Index: o/sgbc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/sgbc.c,v
retrieving revision 1.9
diff -u -r1.9 sgbc.c
--- o/sgbc.c    15 Feb 2003 00:38:28 -0000      1.9
+++ o/sgbc.c    27 Aug 2003 19:21:53 -0000
@@ -887,19 +887,24 @@
     e = pagetochar(j);
     for (p = s;  p < e;) {
       if (get_mark_bit((int *)p)) {
-       p += PTR_ALIGN;
+       /* SGC cont pages: cont blocks must be no smaller than
+          sizeof(struct contblock), and must not have a sweep
+          granularity greater than this amount (e.g. CPTR_ALIGN) if
+          contblock leaks are to be avoided.  Used to be aligned at
+          PTR_ALIGN. CM 20030827 */
+       p += CPTR_ALIGN;
        continue;
       }
-      q = p + PTR_ALIGN;
+      q = p + CPTR_ALIGN;
       while (q < e) {
        if (!get_mark_bit((int *)q)) {
-         q += PTR_ALIGN;
+         q += CPTR_ALIGN;
          continue;
        }
        break;
       }
       insert_contblock(p, q - p);
-      p = q + PTR_ALIGN;
+      p = q + CPTR_ALIGN;
     }
     i = j + 1;
   }
@@ -961,6 +966,11 @@
   return count;
 }
 
+   /* SGC cont pages: After SGC_start, old_cb_pointer will be a linked
+   list of free blocks on non-SGC pages, and cb_pointer will be
+   likewise for SGC pages.  CM 20030827*/
+static struct contblock *old_cb_pointer;
+
 int
 sgc_start(void) {
 
@@ -1005,7 +1015,10 @@
               count);fflush(stdout);
 #endif       
       for(j=0,count=0; j <= maxp ;j++) {
-       if (free_map[j] >= minfree) {
+       /* SGC cont pages: This used to be >=, which is a definite
+          bug, as minfree could be zero, leading this type to claim
+          SGC pages not of its type in type_map.  CM 20030827*/
+       if (free_map[j] > minfree) {
          sgc_type_map[j] |= (SGC_PAGE_FLAG | SGC_TEMP_WRITABLE);
          ++count;
          if (count >= tm->tm_sgc_max)
@@ -1031,6 +1044,101 @@
          goto FIND_FREE_PAGES;  
       }
     }
+
+/* SGC cont pages: Here we implement the contblock page division into
+   SGC and non-SGC types.  Unlike the other types, we need *whole*
+   free pages for contblock SGC, as there is no persistent data
+   element (e.g. .m) on an allocated block itself which can indicate
+   its live status.  If anything on a page which is to be marked
+   read-only points to a live object on an SGC cont page, it will
+   never be marked and will be erroneously swept.  It is also possible
+   for dead objects to unnecessarily mark dead regions on SGC pages
+   and delay sweeping until the pointing type is GC'ed if SGC is
+   turned off for the pointing type, e.g. tm_sgc=0. (This was so by
+   default for a number of types, including bignums, and has now been
+   corrected in init_alloc in alloc.c.) We can't get around this
+   AFAICT, as old data on (writable) SGC pages must be marked lest it
+   is lost, and (old) data on now writable non-SGC pages might point
+   to live regions on SGC pages, yet might not themselves be reachable
+   from the mark origin through an unbroken chain of writable pages.
+   In any case, the possibility of a lot of garbage marks on contblock
+   pages, especially when the blocks are small as in bignums, makes
+   necessary the sweeping of minimal contblocks to prevent leaks. CM
+   20030827 */
+  {
+    void *p=NULL;
+    unsigned i,j,k,count;
+    struct contblock *new_cb_pointer=NULL,*tmp_cb_pointer=NULL,**cbpp;
+
+    tm=tm_of(t_contiguous);
+
+    /* SGC cont pages:  First count whole free pages available.  CM 20030827 */
+    for (cbpp=&cb_pointer,count=0;*cbpp;cbpp=&(*cbpp)->cb_link) {
+      p=PAGE_ROUND_UP((void *)(*cbpp));
+      k=p-((void *)(*cbpp));
+      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) 
+       continue;
+      i=((*cbpp)->cb_size-k)/PAGESIZE;
+      count+=i;
+    }
+    count=tm->tm_sgc>count ? tm->tm_sgc - count : 0;
+
+    if (count>0) {
+      /* SGC cont pages: allocate more if necessary, dumping possible
+        GBC freed pages onto the old contblock list.  CM 20030827*/
+      p=alloc_contblock(count*PAGESIZE);
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=new_cb_pointer;
+      /* SGC cont pages: add new pages to new contblock list. p is not
+        already on any list as ensured by alloc_contblock.  CM
+        20030827 */
+      insert_contblock(p,PAGESIZE*count);
+      new_cb_pointer=cb_pointer;
+      cb_pointer=tmp_cb_pointer;
+      for (i=0;i<count;i++) 
+       sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
+    }
+
+    for (cbpp=&cb_pointer;*cbpp;) {
+      p=PAGE_ROUND_UP((void *)(*cbpp));
+      k=p-((void *)(*cbpp));
+      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) {
+       cbpp=&(*cbpp)->cb_link;
+       continue;
+      }
+      i=((*cbpp)->cb_size-k)/PAGESIZE;
+      i*=PAGESIZE;
+      j=(*cbpp)->cb_size-i-k;
+      /* SGC contblock pages:  remove this block from old list CM 20030827 */
+      *cbpp=(*cbpp)->cb_link;
+      /* SGC contblock pages:  add fragments to the old list CM 20030827 */
+      if (k) {
+       ncb--;
+       insert_contblock(p-k,k);
+      }
+      if (j) {
+       ncb--;
+       insert_contblock(p+i,j);
+      }
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=new_cb_pointer;
+      /* SGC contblock pages: add whole pages to new list; p, p-k, and
+        p+i are guaranteed to be distinct when used. CM 20030827 */
+      insert_contblock(p,i);
+      new_cb_pointer=cb_pointer;
+      cb_pointer=tmp_cb_pointer;
+      i/=PAGESIZE;
+      for (j=0;j<i;j++)
+       sgc_type_map[page(p)+j]|= SGC_PAGE_FLAG;
+    }
+
+    /* SGC contblock pages: switch to new free SGC contblock list. CM
+       20030827 */
+    old_cb_pointer=cb_pointer;
+    cb_pointer=new_cb_pointer;
+
+  }
+
   /* Now  allocate the sgc relblock.   We do this as the tail
      end of the ordinary rb.     */  
   {
@@ -1117,6 +1225,22 @@
     return 0;
   sgc_enabled=0;
   rb_start = old_rb_start;
+
+  /* SGC cont pages: restore contblocks, each tmp_cb_pointer coming
+     from the new list is guaranteed not to be on the old. Need to
+     grab 'next' before insert_contblock writes it.  CM 20030827 */
+  {
+    struct contblock *tmp_cb_pointer,*next;
+    if (old_cb_pointer) {
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=old_cb_pointer;
+      for (;tmp_cb_pointer;  tmp_cb_pointer=next) {
+       next=tmp_cb_pointer->cb_link;
+       insert_contblock((void *)tmp_cb_pointer,tmp_cb_pointer->cb_size);
+      }
+    }
+  }
+
   for (i= t_start; i < t_contiguous ; i++)
     if (TM_BASE_TYPE_P(i)) {
       tm=tm_of(i);
=============================================================================

Matt Kaufmann <address@hidden> writes:

> Hi, Camm --
> 
> Below is an example where GCL 2.5.0 reports the following:
> 
>   Error: Contiguous blocks exhausted.
>        Currently, 29486 pages are allocated.
>        Use ALLOCATE-CONTIGUOUS-PAGES to expand the space.
>   Fast links are on: do (si::use-fast-links nil) for debugging
> 
> In fact, GCL appears to go into an infinite loop at this point, until getting
> to this:
> 
>   Error: Caught fatal error [memory may be damaged]
>   Fast links are on: do (si::use-fast-links nil) for debugging
>   Error signalled by SYSTEM:UNIVERSAL-ERROR-HANDLER.
>   Broken at SYSTEM:UNIVERSAL-ERROR-HANDLER.  Type :H for Help.
>   >>
> 
> The following six forms cause the error to happen.  However, if either of the
> first two forms is omitted, then the error goes away.  Is this expected
> behavior?  This came up because an ACL2 user got the above error using the file
> test3.lisp shown below.  It turns out that GCL si::sgc-on is called before the
> ACL2 image is saved, and that si::*top-level-hook* is set to call
> si::allocate-relocatable-pages when ACL2 is started up.
> 
> (si::sgc-on t)
> (si::allocate-relocatable-pages 500)
> (in-package "USER")
> (compile-file "test3.lisp") ; test3.lisp is shown below
> (load "test3")
> (testfun 1000000 3)
> 
> ++++++++++++++++++++++++++++++ test3.lisp ++++++++++++++++++++++++++++++
> 
> (in-package 'user)
> (defconstant *A* #x5A39BFA0E42A3D15)
> (defconstant *M* (expt 2 63))
> (defconstant *C* 1)
> 
> 
> (defun genseed (seed)
>   (mod (+ (* *A* seed) *C*) *M*))
> 
> 
> (defun testfun (n seed)
>   (if (or (not (integerp n)) (<= n 0))
>       seed
>       (let* ((s0 (genseed seed))
>            (s1 (genseed s0)))
>       (testfun (1- n) s1))))
> 
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Thanks --
> -- Matt
> 
> 
> 

-- 
Camm Maguire                                            address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



