Re: [Gcl-devel] Re: GCL allocation


From: Camm Maguire
Subject: Re: [Gcl-devel] Re: GCL allocation
Date: 28 Aug 2003 19:41:32 -0400
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Greetings!  I see it too -- perhaps I should have waited until my acl2
tests finished.  I found an artifact when saving the system in which
the newly added code could push heap_end past core_end.  I'm testing a
fix now.  The saving_system code really needs to be thought through,
IMHO, but as I'm short on time, I'm keeping the existing algorithm.
Basically, I just moved the sgc_start call in GBC ahead of the funky
saving_system manipulations of heap_end and core_end (see the gbc.c
hunk in the patch below).  This fixes the bugs I've seen so far.
Comments are of course always welcome.

As for your performance observations, as you know I still have a bit
of profiling on my todo list concerning acl2, so a definitive
statement will have to wait until then.  But I noticed in the existing
acl2 code a comment in which SGC is turned on "at the suggestion of
wfs" at a certain point.  Just to make sure we all understand: SGC is
a GC *write barrier*; it is only efficient if most of the data behind
the barrier (i.e. allocated before executing (sgc-on t)) is static.
With your enormous image, you should make sure that SGC is not turned
on too early.  As my patches only affect SGC contiguous pages, and as
these seem to affect your results, this may be a factor in the poor
performance you are seeing.
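
To make the timing concrete, here is a minimal sketch of the ordering I
have in mind.  The file name is only an illustrative placeholder;
si::sgc-on and si::allocate-sgc are the interfaces already discussed in
this thread:

    ;; Build/load the large, mostly-static part of the image *first*,
    ;; so that it ends up behind the write barrier as old, static data.
    (load "build-static-image.lisp")   ; hypothetical placeholder file

    ;; For comparison, SGC on contiguous pages can be disabled entirely,
    ;; as mentioned in my earlier message quoted below:
    ;;   (si::allocate-sgc 'contiguous 0 3000 0)

    ;; Only now enable the write barrier.
    (si::sgc-on t)

The point is just the ordering: everything allocated before the
(si::sgc-on t) call sits behind the barrier and is assumed to be
essentially static.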

Here is what I'm testing now.  If you'd like to test with me, that's
great!  If you don't have time for stuff converging back from the
bleeding edge, you can wait a few days until my tests finish.

Take care,

=============================================================================
Index: h/object.h
===================================================================
RCS file: /cvsroot/gcl/gcl/h/object.h,v
retrieving revision 1.18.4.1
diff -u -r1.18.4.1 object.h
--- h/object.h  16 Jul 2003 02:02:49 -0000      1.18.4.1
+++ h/object.h  28 Aug 2003 23:27:29 -0000
@@ -759,6 +759,7 @@
        short   tm_max_grow;    /* max amount to grow when growing */
        short   tm_growth_percent;  /* percent to increase maxpages */
        short   tm_percent_free;  /* percent which must be free after a gc for this type */
+        short   tm_distinct;       /* pages of this type are distinct */
 
 };
 
Index: h/page.h
===================================================================
RCS file: /cvsroot/gcl/gcl/h/page.h,v
retrieving revision 1.4.4.1
diff -u -r1.4.4.1 page.h
--- h/page.h    21 Aug 2003 04:17:47 -0000      1.4.4.1
+++ h/page.h    28 Aug 2003 23:27:29 -0000
@@ -29,6 +29,12 @@
 #define ROUND_UP_PTR(n)        (((long)(n) + (PTR_ALIGN-1)) & ~(PTR_ALIGN-1))
 #define ROUND_DOWN_PTR(n) (((long)(n)  & ~(PTR_ALIGN-1)))
 
+/* alignment required for contiguous pointers */
+#define CPTR_ALIGN (PTR_ALIGN < sizeof(struct contblock) ? sizeof(struct contblock) : PTR_ALIGN)
+
+#define ROUND_UP_PTR_CONT(n)   (((long)(n) + (CPTR_ALIGN-1)) & ~(CPTR_ALIGN-1))
+#define ROUND_DOWN_PTR_CONT(n) (((long)(n)  & ~(CPTR_ALIGN-1)))
+
 
 #ifdef SGC
 
Index: o/alloc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/alloc.c,v
retrieving revision 1.19
diff -u -r1.19 alloc.c
--- o/alloc.c   1 Mar 2003 22:37:37 -0000       1.19
+++ o/alloc.c   28 Aug 2003 23:27:29 -0000
@@ -425,9 +425,19 @@
 /*
        printf("allocating %d-byte contiguous block...\n", n);
 */
+        /* SGC cont pages: contiguous pointers must be aligned at
+           CPTR_ALIGN, no smaller than sizeof (struct contblock).
+           Here we allocate a bigger block, and rely on the fact that
+           allocate_page returns pointers appropriately aligned,
+           being also aligned on page boundaries.  Protection against
+           a too small contblock was afforded before by a minimum
+           contblock size enforced by CBMINSIZE in insert_contblock.
+           However, this leads to a leak when many small cont blocks
+           are allocated, e.g. with bignums, so is now removed.  CM
+           20030827 */
 
        g = FALSE;
-       n = ROUND_UP_PTR(n);
+       n = ROUND_UP_PTR_CONT(n);
 
 ONCE_MORE:
         CHECK_INTERRUPT;
@@ -472,8 +482,16 @@
       }
        p = alloc_page(m);
 
-       for (i = 0;  i < m;  i++)
+       for (i = 0;  i < m;  i++) {
                type_map[page(p) + i] = (char)t_contiguous;
+
+               /* SGC cont pages: Before this point, GCL never marked contiguous
+                  pages for SGC, causing no contiguous pages to be
+                  swept when SGC was on.  Here we follow the behavior
+                  for other pages in add_to_freelist. CM 20030827  */
+               if (sgc_enabled && tm_table[t_contiguous].tm_sgc)
+                 sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
+       }
        ncbpage += m;
        insert_contblock(p+n, PAGESIZE*m - n);
        return(p);
@@ -484,13 +502,27 @@
 
   struct contblock **cbpp, *cbp;
   
-  if (s < CBMINSIZE)
+  /* SGC cont pages: This used to return when s<CBMINSIZE, but we need
+     to be able to sweep small (e.g. bignum) contblocks.  FIXME:
+     should never be called with s<=0 to begin with.  CM 20030827*/
+  if (s<=0)
     return;
   ncb++;
   cbp = (struct contblock *)p;
-  cbp->cb_size = s;
+  /* SGC cont pages: allocated sizes may not be zero mod CPTR_ALIGN,
+     e.g. string fillp, but alloc_contblock rounded up the allocation
+     like this, which we follow here.  CM 20030827 */
+  cbp->cb_size = ROUND_UP_PTR_CONT(s);
   for (cbpp = &cb_pointer;  *cbpp;  cbpp = &((*cbpp)->cb_link))
     if ((*cbpp)->cb_size >= s) {
+#undef DEBUG
+#define DEBUG
+#ifdef DEBUG
+      if (*cbpp==cbp) {
+       fprintf(stderr,"Trying to install a circle at %p\n",cbp);
+       exit(1);
+      }
+#endif
       cbp->cb_link = *cbpp;
       *cbpp = cbp;
       return;
@@ -568,19 +600,30 @@
        return(p);
 }
 
+/* Add a tm_distinct field to prevent page type sharing if desired.
+   Not used now, as it's never desirable from an efficiency point of
+   view, and as the only known place one must separate is cons and
+   fixnum, which are of different sizes unless PTR_ALIGN is set too
+   high (e.g. 16 on a 32bit machine).  See the ordering of init_tm
+   calls for these types below -- reversing would wind up merging the
+   types with the current algorithm.  CM 20030827 */
+
 static void
-init_tm(enum type t, char *name, int elsize, int nelts, int sgc) {
+init_tm(enum type t, char *name, int elsize, int nelts, int sgc,int distinct) {
 
   int i, j;
   int maxpage;
   /* round up to next number of pages */
   maxpage = (((nelts * elsize) + PAGESIZE -1)/PAGESIZE);
   tm_table[(int)t].tm_name = name;
-  for (j = -1, i = 0;  i < (int)t_end;  i++)
-    if (tm_table[i].tm_size != 0 &&
-       tm_table[i].tm_size >= elsize &&
-       (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
-      j = i;
+  j=-1;
+  if (!distinct)
+    for (i = 0;  i < (int)t_end;  i++)
+      if (tm_table[i].tm_size != 0 &&
+         tm_table[i].tm_size >= elsize &&
+         !tm_table[i].tm_distinct &&
+         (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
+       j = i;
   if (j >= 0) {
     tm_table[(int)t].tm_type = (enum type)j;
     tm_table[j].tm_maxpage += maxpage;
@@ -598,6 +641,7 @@
   /*tm_table[(int)t].tm_npage = 0; */  /* dont zero nrbpage.. */
   tm_table[(int)t].tm_maxpage = maxpage;
   tm_table[(int)t].tm_gbccount = 0;
+  tm_table[(int)t].tm_distinct=distinct;
 #ifdef SGC     
   tm_table[(int)t].tm_sgc = sgc;
   tm_table[(int)t].tm_sgc_max = 3000;
@@ -688,40 +732,46 @@
   for (i = 0;  i < MAXPAGE;  i++)
     type_map[i] = (char)t_other;
   
+  /* Unused (at present) tm_distinct flag added.  Note that if cons
+     and fixnum share page types, errors will be introduced.
+
+     Gave each page type at least some sgc pages by default.  Of
+     course changeable by allocate-sgc.  CM 20030827 */
+
   init_tm(t_fixnum, "NFIXNUM",
-         sizeof(struct fixnum_struct), 8192,20);
-  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50 );
-  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,0 );
-  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,0  );
-  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,0 );
-  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1  );
-  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1 );
-  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1 );
-  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,0 );
-  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,0 );
+         sizeof(struct fixnum_struct), 8192,20,0);
+  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50,0 );
+  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,1,0 );
+  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,1,0  );
+  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,1,0 );
+  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1,0  );
+  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1,0 );
+  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1,0 );
+  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,1,0 );
+  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,1,0 );
   init_tm(t_shortfloat, "FSHORT-FLOAT",
-         sizeof(struct shortfloat_struct), 256 ,1);
+         sizeof(struct shortfloat_struct), 256 ,1,0);
   init_tm(t_longfloat, "LLONG-FLOAT",
-         sizeof(struct longfloat_struct), 170 ,0);
-  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,0);
-  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,0);
-  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / sizeof(struct package),0);
-  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,0 );
-  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,0);
-  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,0);
-  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,0);
-  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,0);
-  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,0);
-  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,0);
-  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,0);
-  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,0);
-  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,0);
-  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,0);
-  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,0);
-  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,0);
-  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,0);
-  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20);
-  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20);
+         sizeof(struct longfloat_struct), 170 ,1,0);
+  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,1,0);
+  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,1,0);
+  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / sizeof(struct package),1,0);
+  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,1,0 );
+  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,1,0);
+  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,1,0);
+  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,1,0);
+  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,1,0);
+  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,1,0);
+  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,1,0);
+  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,1,0);
+  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,1,0);
+  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,1,0);
+  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,1,0);
+  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,1,0);
+  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,1,0);
+  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,1,0);
+  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20,0);
+  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20,0);
   tm_table[t_relocatable].tm_nppage = PAGESIZE;
   tm_table[t_contiguous].tm_nppage = PAGESIZE;
   
@@ -895,8 +945,15 @@
     FEerror("Can't allocate ~D pages for contiguous blocks.",
            1, make_fixnum(npages));
 
-  for (i = 0;  i < m;  i++)
+  for (i = 0;  i < m;  i++) {
     type_map[page(p + PAGESIZE*i)] = (char)t_contiguous;
+    /* SGC cont pages: Before this point, GCL never marked contiguous
+       pages for SGC, causing no contiguous pages to be
+       swept when SGC was on.  Here we follow the behavior
+       for other pages in add_to_freelist. CM 20030827  */
+    if (sgc_enabled && tm_table[t_contiguous].tm_sgc)
+      sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
+  }
 
   ncbpage += m;
   insert_contblock(p, PAGESIZE*m);
@@ -1140,8 +1197,11 @@
 #endif 
        for (p = &malloc_list; *p && !endp(*p);  p = &((*p)->c.c_cdr))
                if ((*p)->c.c_car->st.st_self == ptr) {
-                       insert_contblock((*p)->c.c_car->st.st_self,
-                                        (*p)->c.c_car->st.st_dim);
+/* SGC contblock pages: leave sweeping to GBC.  Could also try
+   protecting this with sgc_enabled && strm->d.s==SGC_NORMAL and a
+   switch to old_cb_pointer as appropriate  */
+/*                     insert_contblock((*p)->c.c_car->st.st_self, */
+/*                                      (*p)->c.c_car->st.st_dim); */
                        (*p)->c.c_car->st.st_self = NULL;
                        *p = (*p)->c.c_cdr;
                        return ;
@@ -1189,7 +1249,10 @@
        x->st.st_fillp = x->st.st_dim = size;
        for (i = 0;  i < size;  i++)
          x->st.st_self[i] = ((char *)ptr)[i];
-       insert_contblock(ptr, j);
+/* SGC contblock pages: leave sweeping to GBC.  Could also try
+   protecting this with sgc_enabled && strm->d.s==SGC_NORMAL and a
+   switch to old_cb_pointer as appropriate  */
+/*     insert_contblock(ptr, j); */
        return(x->st.st_self);
       }
     }
Index: o/file.d
===================================================================
RCS file: /cvsroot/gcl/gcl/o/file.d,v
retrieving revision 1.21
diff -u -r1.21 file.d
--- o/file.d    18 Feb 2003 02:32:03 -0000      1.21
+++ o/file.d    28 Aug 2003 23:27:30 -0000
@@ -303,11 +303,16 @@
 deallocate_stream_buffer(strm)
 object strm;
 {
-  if (strm->sm.sm_buffer)
-    {insert_contblock(strm->sm.sm_buffer, BUFSIZ);
-     strm->sm.sm_buffer = 0;}
- else
-    printf("no buffer? %p  \n",strm->sm.sm_fp);
+
+/* SGC contblock pages: leave sweeping to GBC.  Could also try
+   protecting this with sgc_enabled && strm->d.s==SGC_NORMAL and a
+   switch to old_cb_pointer as appropriate  */
+
+/*   if (strm->sm.sm_buffer) */
+/*     {insert_contblock(strm->sm.sm_buffer, BUFSIZ); */
+/*      strm->sm.sm_buffer = 0;} */
+/*  else */
+/*     printf("no buffer? %p  \n",strm->sm.sm_fp); */
 
 #ifndef FCLOSE_SETBUF_OK
   strm->sm.sm_fp->_base = NULL;
Index: o/gbc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/gbc.c,v
retrieving revision 1.13.4.1
diff -u -r1.13.4.1 gbc.c
--- o/gbc.c     30 Jul 2003 15:11:12 -0000      1.13.4.1
+++ o/gbc.c     28 Aug 2003 23:27:30 -0000
@@ -1012,19 +1012,24 @@
     e = pagetochar(j);
     for (p = s;  p < e;) {
       if (get_mark_bit((int *)p)) {
-       p += PTR_ALIGN;
+       /* SGC cont pages: cont blocks must be no smaller than
+          sizeof(struct contblock), and must not have a sweep
+          granularity greater than this amount (e.g. CPTR_ALIGN) if
+          contblock leaks are to be avoided.  Used to be aligned at
+          PTR_ALIGN. CM 20030827 */
+       p += CPTR_ALIGN;
        continue;
       }
-      q = p + PTR_ALIGN;
+      q = p + CPTR_ALIGN;
       while (q < e) {
        if (!get_mark_bit((int *)q)) {
-         q += PTR_ALIGN;
+         q += CPTR_ALIGN;
          continue;
        }
        break;
       }
       insert_contblock(p, q - p);
-      p = q + PTR_ALIGN;
+      p = q + CPTR_ALIGN;
     }
     i = j + 1;
   }
@@ -1067,8 +1072,8 @@
     if(sgc_enabled) sgc_quit();
     
     }
-  
-  
+
+
 #ifdef DEBUG
   debug = symbol_value(sSAgbc_messageA) != Cnil;
 #endif
@@ -1278,6 +1283,9 @@
   
   interrupt_enable = TRUE;
   
+  if (in_sgc && sgc_enabled==0)
+    sgc_start();
+  
   if (saving_system) {
     j = (rb_pointer-rb_start+PAGESIZE-1) / PAGESIZE;
     
@@ -1323,10 +1331,6 @@
   if (GBC_exit_hook != NULL)
     (*GBC_exit_hook)();
   
-  
-  if (in_sgc && sgc_enabled==0)
-    sgc_start();
-  
   if(gc_time>=0 && !--gc_recursive) {gc_time=gc_time+(gc_start=(runtime()-gc_start));}
   
   if (sSAnotify_gbcA->s.s_dbind != Cnil) {
@@ -1423,8 +1427,10 @@
   if (!MAYBE_DATA_P(p) || (enum type)type_map[page(p)] != t_contiguous)
     return;
   q = p + s;
-  x = (int *)ROUND_DOWN_PTR(p);
-  y = (int *)ROUND_UP_PTR(q);
+  /* SGC cont pages: contblock pages must be no smaller than
+     sizeof(struct contblock).  CM 20030827 */
+  x = (int *)ROUND_DOWN_PTR_CONT(p);
+  y = (int *)ROUND_UP_PTR_CONT(q);
   for (;  x < y;  x++)
     set_mark_bit(x);
 }
Index: o/gmp.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/gmp.c,v
retrieving revision 1.3
diff -u -r1.3 gmp.c
--- o/gmp.c     15 Feb 2003 00:38:28 -0000      1.3
+++ o/gmp.c     28 Aug 2003 23:27:30 -0000
@@ -15,7 +15,10 @@
   old = oldmem;
   bcopy(MP_SELF(big_gcprotect),new,oldsize);
   MP_SELF(big_gcprotect)=0;
-  if (inheap(oldmem)) insert_contblock(oldmem,oldsize);
+/* SGC contblock pages: leave sweeping to GBC.  Could also try
+   protecting this with sgc_enabled && strm->d.s==SGC_NORMAL and a
+   switch to old_cb_pointer as appropriate  */
+/*   if (inheap(oldmem)) insert_contblock(oldmem,oldsize); */
   return new;
 }
 
Index: o/sgbc.c
===================================================================
RCS file: /cvsroot/gcl/gcl/o/sgbc.c,v
retrieving revision 1.9
diff -u -r1.9 sgbc.c
--- o/sgbc.c    15 Feb 2003 00:38:28 -0000      1.9
+++ o/sgbc.c    28 Aug 2003 23:27:31 -0000
@@ -887,19 +887,24 @@
     e = pagetochar(j);
     for (p = s;  p < e;) {
       if (get_mark_bit((int *)p)) {
-       p += PTR_ALIGN;
+       /* SGC cont pages: cont blocks must be no smaller than
+          sizeof(struct contblock), and must not have a sweep
+          granularity greater than this amount (e.g. CPTR_ALIGN) if
+          contblock leaks are to be avoided.  Used to be aligned at
+          PTR_ALIGN. CM 20030827 */
+       p += CPTR_ALIGN;
        continue;
       }
-      q = p + PTR_ALIGN;
+      q = p + CPTR_ALIGN;
       while (q < e) {
        if (!get_mark_bit((int *)q)) {
-         q += PTR_ALIGN;
+         q += CPTR_ALIGN;
          continue;
        }
        break;
       }
       insert_contblock(p, q - p);
-      p = q + PTR_ALIGN;
+      p = q + CPTR_ALIGN;
     }
     i = j + 1;
   }
@@ -961,6 +966,56 @@
   return count;
 }
 
+   /* SGC cont pages: After SGC_start, old_cb_pointer will be a linked
+   list of free blocks on non-SGC pages, and cb_pointer will be
+   likewise for SGC pages.  CM 20030827*/
+static struct contblock *old_cb_pointer;
+
+#undef MDEBUG
+#define MDEBUG
+#ifdef MDEBUG
+static void
+overlap_check(struct contblock *t1,struct contblock *t2) {
+
+  struct contblock *p;
+
+  for (;t1;t1=t1->cb_link) {
+
+    if (!inheap(t1)) {
+      fprintf(stderr,"%p not in heap\n",t1);
+      exit(1);
+    }
+
+    for (p=t2;p;p=p->cb_link) {
+
+      if (!inheap(p)) {
+       fprintf(stderr,"%p not in heap\n",p);
+       exit(1);
+      }
+
+      if ((p<=t1 && (void *)p+p->cb_size>(void *)t1) ||
+         (t1<=p && (void *)t1+t1->cb_size>(void *)p)) {
+       fprintf(stderr,"Overlap %u %p  %u %p\n",t1->cb_size,t1,p->cb_size,p);
+       exit(1);
+      }
+      
+      if (p==p->cb_link) {
+       fprintf(stderr,"circle detected at %p\n",p);
+       exit(1);
+      }
+
+    }
+       
+    if (t1==t1->cb_link) {
+      fprintf(stderr,"circle detected at %p\n",t1);
+      exit(1);
+    }
+
+  }
+
+}
+#endif   
+
 int
 sgc_start(void) {
 
@@ -985,7 +1040,11 @@
     {
       int maxp=0;
       int j;
-      int minfree = tm->tm_sgc_minfree;
+      /* SGC cont pages: This used to be simply set to tm_sgc_minfree,
+        which is a definite bug, as minfree could then be zero,
+        leading this type to claim SGC pages not of its type as
+        specified in type_map.  CM 20030827*/
+      int minfree = tm->tm_sgc_minfree > 0 ? tm->tm_sgc_minfree : 1 ;
       int count;
       bzero(free_map,npages*sizeof(short));
       f = tm->tm_free;
@@ -1031,6 +1090,105 @@
          goto FIND_FREE_PAGES;  
       }
     }
+
+/* SGC cont pages: Here we implement the contblock page division into
+   SGC and non-SGC types.  Unlike the other types, we need *whole*
+   free pages for contblock SGC, as there is no persistent data
+   element (e.g. .m) on an allocated block itself which can indicate
+   its live status.  If anything on a page which is to be marked
+   read-only points to a live object on an SGC cont page, it will
+   never be marked and will be erroneously swept.  It is also possible
+   for dead objects to unnecessarily mark dead regions on SGC pages
+   and delay sweeping until the pointing type is GC'ed if SGC is
+   turned off for the pointing type, e.g. tm_sgc=0. (This was so by
+   default for a number of types, including bignums, and has now been
+   corrected in init_alloc in alloc.c.) We can't get around this
+   AFAICT, as old data on (writable) SGC pages must be marked lest it
+   is lost, and (old) data on now writable non-SGC pages might point
+   to live regions on SGC pages, yet might not themselves be reachable
+   from the mark origin through an unbroken chain of writable pages.
+   In any case, the possibility of a lot of garbage marks on contblock
+   pages, especially when the blocks are small as in bignums, makes
+   necessary the sweeping of minimal contblocks to prevent leaks. CM
+   20030827 */
+  {
+    void *p=NULL;
+    unsigned i,j,k,count;
+    struct contblock *new_cb_pointer=NULL,*tmp_cb_pointer=NULL,**cbpp;
+
+    tm=tm_of(t_contiguous);
+
+    /* SGC cont pages:  First count whole free pages available.  CM 20030827 */
+    for (cbpp=&cb_pointer,count=0;*cbpp;cbpp=&(*cbpp)->cb_link) {
+      p=PAGE_ROUND_UP((void *)(*cbpp));
+      k=p-((void *)(*cbpp));
+      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) 
+       continue;
+      i=((*cbpp)->cb_size-k)/PAGESIZE;
+      count+=i;
+    }
+    count=tm->tm_sgc>count ? tm->tm_sgc - count : 0;
+
+    if (count>0) {
+      /* SGC cont pages: allocate more if necessary, dumping possible
+        GBC freed pages onto the old contblock list.  CM 20030827*/
+      p=alloc_contblock(count*PAGESIZE);
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=new_cb_pointer;
+      /* SGC cont pages: add new pages to new contblock list. p is not
+        already on any list as ensured by alloc_contblock.  CM
+        20030827 */
+      insert_contblock(p,PAGESIZE*count);
+      new_cb_pointer=cb_pointer;
+      cb_pointer=tmp_cb_pointer;
+      for (i=0;i<count;i++) 
+       sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
+    }
+
+    for (cbpp=&cb_pointer;*cbpp;) {
+      p=PAGE_ROUND_UP((void *)(*cbpp));
+      k=p-((void *)(*cbpp));
+      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) {
+       cbpp=&(*cbpp)->cb_link;
+       continue;
+      }
+      i=((*cbpp)->cb_size-k)/PAGESIZE;
+      i*=PAGESIZE;
+      j=(*cbpp)->cb_size-i-k;
+      /* SGC contblock pages:  remove this block from old list CM 20030827 */
+      *cbpp=(*cbpp)->cb_link;
+      /* SGC contblock pages:  add fragments to old list CM 20030827 */
+      if (k) {
+       ncb--;
+       insert_contblock(p-k,k);
+      }
+      if (j) {
+       ncb--;
+       insert_contblock(p+i,j);
+      }
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=new_cb_pointer;
+      /* SGC contblock pages: add whole pages to new list; p, p-k, and
+        p+i are guaranteed to be distinct when used. CM 20030827 */
+      insert_contblock(p,i);
+      new_cb_pointer=cb_pointer;
+      cb_pointer=tmp_cb_pointer;
+      i/=PAGESIZE;
+      for (j=0;j<i;j++)
+       sgc_type_map[page(p)+j]|= SGC_PAGE_FLAG;
+    }
+
+    /* SGC contblock pages: switch to new free SGC contblock list. CM
+       20030827 */
+    old_cb_pointer=cb_pointer;
+    cb_pointer=new_cb_pointer;
+
+#ifdef MDEBUG
+    overlap_check(old_cb_pointer,cb_pointer);
+#endif
+
+  }
+
   /* Now  allocate the sgc relblock.   We do this as the tail
      end of the ordinary rb.     */  
   {
@@ -1117,6 +1275,25 @@
     return 0;
   sgc_enabled=0;
   rb_start = old_rb_start;
+
+  /* SGC cont pages: restore contblocks, each tmp_cb_pointer coming
+     from the new list is guaranteed not to be on the old. Need to
+     grab 'next' before insert_contblock writes it.  CM 20030827 */
+  {
+    struct contblock *tmp_cb_pointer,*next;
+#ifdef MDEBUG
+    overlap_check(old_cb_pointer,cb_pointer);
+#endif
+    if (old_cb_pointer) {
+      tmp_cb_pointer=cb_pointer;
+      cb_pointer=old_cb_pointer;
+      for (;tmp_cb_pointer;  tmp_cb_pointer=next) {
+       next=tmp_cb_pointer->cb_link;
+       insert_contblock((void *)tmp_cb_pointer,tmp_cb_pointer->cb_size);
+      }
+    }
+  }
+
   for (i= t_start; i < t_contiguous ; i++)
     if (TM_BASE_TYPE_P(i)) {
       tm=tm_of(i);
=============================================================================
"Matt Kaufmann" <address@hidden> writes:

> Hi, Camm --
> 
> I applied your patches to the GCL version we have at AMD (which incorporates
> the other patches you've sent) and got a segmentation violation during GC.  
> The
> last few lines are as shown below.  I'm afraid I can't send out the source
> files, but if there's some way you'd like me to re-run this test, let me know.
> (Maybe you want to send me a tarball of gcl, or point to it on the web for me
> to fetch, in case I messed up in applying the patches, and in case you've made
> other patches that I don't have.)  Interestingly, the wall times for the first
> two parts of the test were significantly different between this run and the
> latest one before the new patches were applied.
> 
> In minutes,
> new vs. old:
> 
>  9 vs. 18 [model-raw]
> 26 vs. 16 [bvecp-raw]
> 
> Here are those last few lines.
> 
> [SGC for 58 STRING pages..(3398 writable)..(T=5).GC finished]
> [SGC for 58 STRING pages..(3399 writable)..(T=5).GC finished]
> [SGC for 53 CONTIGUOUS-BLOCKS pages..(3400 writable)..(T=5).GC finished]
> [SGC for 58 STRING pages..(3405 writable)..(T=5).GC finished]
> [SGC for 58 STRING pages..(3406 writable)..(T=5).GC finished]
> [SGC for 58 STRING pages..(3406 writable)..(T=6).GC finished]
> [SGC for 58 STRING pages..(3426 writable)..(T=5).GC finished]
> [SGC for 918 CONS pages..(3443 writable)..(T=6).GC finished]
> [SGC for 918 CONS pages..(3443 writable)..(T=6).GC finished]
> [SGC for 918 CONS pages..(3444 writable)..(T=6).GC finished]
> [SGC for 53 CONTIGUOUS-BLOCKS pages..(3445 writable)..(T=7).GC finished]
> [SGC for 58 STRING pages..(3556 writable)..(T=7).GC finished]
> [SGC for 58 STRING pages..(3592 writable)..(T=7).GC finished]
> [SGC for 58 STRING pages..(3627 writable)..(T=6).GC finished]
> [SGC for 58 STRING pages..(3663 writable)..(T=7).GC finished]
> [SGC for 95 SYMBOL pages..(3664 writable)..(T=7).GC finished]
> [SGC for 58 STRING pages..(3726 writable)..(T=7).GC finished]
> [SGC for 53 CONTIGUOUS-BLOCKS pages..(3764 writable)..(T=7).GC finished]
> [SGC for 58 STRING pages..(3814 writable)..(T=8).GC finished]
> [SGC off][GC for 500 RELOCATABLE-BLOCKS pages..
> Unrecoverable error: Segmentation violation..
> 
> -- Matt
>    Resent-From: address@hidden
>    Resent-To: address@hidden
>    cc: address@hidden, address@hidden, address@hidden
>    From: "Camm Maguire" <address@hidden>
>    Date: 27 Aug 2003 16:17:54 -0400
>    User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2
>    X-WSS-ID: 1350D47E1239945-01-01
>    Content-Type: text/plain;
>     charset=us-ascii
> 
>    Greetings!
> 
>    OK, here's the short version:
> 
>    It was broken. Now it's fixed :-).
> 
>    Slightly longer than this, current GCL never marks contiguous pages as
>    SGC pages, and only sweeps the latter when SGC is on, leading to the
>    massive leak.  The extra reloc pages in the example put it over the
>    top.  
> 
>    One can of course address this in several ways.  One is to turn off
>    SGC on contiguous pages with (si::allocate-sgc 'contiguous 0 3000 0).
>    But this is obviously not optimal.
> 
>    Instead I've constructed a patch which implements SGC for contiguous
>    pages.  It's quite tricky, being close to the most involved change yet
>    I've made to GCL.  I've tried to document all the details in the
>    comments.  You can read them in the patch below if you'd like.
> 
>    A patch of this import of course needs to be well tested.  All goes
>    well with maxima, self-build, and ansi thus far.  Am presently testing
>    the acl2 book certification.  Then it probably needs to be run by
>    axiom.  I've tried it on the test below using quite a few permutations
>    of (allocate, allocate-sgc) (contiguous,relblock,cfun(==bignum)),
>    sgc-on, and even si::SET-GMP-ALLOCATE-RELOCATABLE successfully,
>    although nothing exhaustive as yet.  
> 
>    Just as a reminder, gmp bignums are allocated on contiguous pages by
>    default, as these reproduce malloc semantics (i.e. they don't move),
>    and one is thus assured that no caching in the external gmp library
>    will be corrupted.  Dr. Schelter apparently audited the gmp code at
>    the point when support for it was added, identifying and replacing
>    precisely one malloc in a bad place with a safe alloca, allowing
>    bignums to be allocated on faster relocatable pages instead.  I have
>    never repeated this analysis, but we do overwrite said malloc with the
>    new alloca even when linking gmp in dynamically.  gmp could introduce
>    another bad malloc without our noticing conceivably, but as of right
>    now, relocatable bignums work fine at least in this test.  Of course
>    building GCL with its own copy of gmp will always work as it ever
>    has.  (si::set-gmp-allocate-relocatable t) to try it out.
> 
>    Separately, several page types had no SGC pages allocated by default,
>    including bignums, leading to a thrashing of sgc-on, sgc-off in the
>    test below when the bignum header underwent GC.  I've remedied this
>    default situation here as well.
> 
>    I've not even committed this change yet as it still might need
>    a few minor adjustments, but it basically appears to be working.
>    Feedback from GC gurus of course appreciated as always :-).  Hammer on
>    it and find the bugs if you are so inclined!
> 
>    To the list -- sorry about being delayed on this time consuming
>    project, but I feel it takes precedence over things I'd rather get to,
>    like ansi support.
> 
>    Take care,
> 
>    
> =============================================================================
>    Index: h/object.h
>    ===================================================================
>    RCS file: /cvsroot/gcl/gcl/h/object.h,v
>    retrieving revision 1.18.4.1
>    diff -u -r1.18.4.1 object.h
>    --- h/object.h     16 Jul 2003 02:02:49 -0000      1.18.4.1
>    +++ h/object.h     27 Aug 2003 19:21:52 -0000
>    @@ -759,6 +759,7 @@
>          short   tm_max_grow;    /* max amount to grow when growing */
>          short   tm_growth_percent;  /* percent to increase maxpages */
>          short   tm_percent_free;  /* percent which must be free after a gc 
> for this type */
>    +        short   tm_distinct;       /* pages of this type are distinct */
> 
>     };
> 
>    Index: h/page.h
>    ===================================================================
>    RCS file: /cvsroot/gcl/gcl/h/page.h,v
>    retrieving revision 1.4.4.1
>    diff -u -r1.4.4.1 page.h
>    --- h/page.h       21 Aug 2003 04:17:47 -0000      1.4.4.1
>    +++ h/page.h       27 Aug 2003 19:21:52 -0000
>    @@ -29,6 +29,12 @@
>     #define ROUND_UP_PTR(n)   (((long)(n) + (PTR_ALIGN-1)) & ~(PTR_ALIGN-1))
>     #define ROUND_DOWN_PTR(n) (((long)(n)  & ~(PTR_ALIGN-1)))
> 
>    +/* alignment required for contiguous pointers */
>    +#define CPTR_ALIGN (PTR_ALIGN < sizeof(struct contblock) ? sizeof(struct 
> contblock) : PTR_ALIGN)
>    +
>    +#define ROUND_UP_PTR_CONT(n)      (((long)(n) + (CPTR_ALIGN-1)) & 
> ~(CPTR_ALIGN-1))
>    +#define ROUND_DOWN_PTR_CONT(n) (((long)(n)  & ~(CPTR_ALIGN-1)))
>    +
> 
>     #ifdef SGC
> 
>    Index: o/alloc.c
>    ===================================================================
>    RCS file: /cvsroot/gcl/gcl/o/alloc.c,v
>    retrieving revision 1.19
>    diff -u -r1.19 alloc.c
>    --- o/alloc.c      1 Mar 2003 22:37:37 -0000       1.19
>    +++ o/alloc.c      27 Aug 2003 19:21:52 -0000
>    @@ -425,9 +425,19 @@
>     /*
>          printf("allocating %d-byte contiguous block...\n", n);
>     */
>    +   /* SGC cont pages: contiguous pointers must be aligned at
>    +      CPTR_ALIGN, no smaller than sizeof (struct contblock).
>    +      Here we allocate a bigger block, and rely on the fact that
>    +      allocate_page returns pointers appropriately aligned,
>    +      being also aligned on page boundaries.  Protection against
>    +      a too small contblock was aforded before by a minimum
>    +      contblock size enforced by CBMINSIZE in insert_contblock.
>    +      However, this leads to a leak when many small cont blocks
>    +      are allocated, e.g. with bignums, so is now removed.  CM
>    +      20030827 */
> 
>          g = FALSE;
>    -  n = ROUND_UP_PTR(n);
>    +  n = ROUND_UP_PTR_CONT(n);
> 
>     ONCE_MORE:
>           CHECK_INTERRUPT;
>    @@ -472,8 +482,16 @@
>         }
>          p = alloc_page(m);
> 
>    -  for (i = 0;  i < m;  i++)
>    +  for (i = 0;  i < m;  i++) {
>                  type_map[page(p) + i] = (char)t_contiguous;
>    +
>    +          /* SGC cont pages: Before this point, GCL never marked 
> contiguous
>    +             pages for SGC, causing no contiguous pages to be
>    +             swept when SGC was on.  Here we follow the behavior
>    +             for other pages in add_to_freelist. CM 20030827  */
>    +          if (sgc_enabled && tm_table[t_contiguous].tm_sgc)
>    +            sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
>    +  }
>          ncbpage += m;
>          insert_contblock(p+n, PAGESIZE*m - n);
>          return(p);
>    @@ -484,11 +502,17 @@
> 
>       struct contblock **cbpp, *cbp;
> 
>    -  if (s < CBMINSIZE)
>    +  /* SGC cont pages: This used to return when s<CBMINSIZE, but we need
>    +     to be able to sweep small (e.g. bignum) contblocks.  FIXME:
>    +     should never be called with s<=0 to begin with.  CM 20030827*/
>    +  if (s<=0)
>       return;
>       ncb++;
>       cbp = (struct contblock *)p;
>    -  cbp->cb_size = s;
>    +  /* SGC cont pages: allocated sizes may not be zero mod CPTR_SIZE,
>    +     e.g. string fillp, but alloc_contblock rounded up the allocation
>    +     like this, which we follow here.  CM 20030827 */
>    +  cbp->cb_size = ROUND_UP_PTR_CONT(s);
>       for (cbpp = &cb_pointer;  *cbpp;  cbpp = &((*cbpp)->cb_link))
>       if ((*cbpp)->cb_size >= s) {
>         cbp->cb_link = *cbpp;
>    @@ -568,19 +592,30 @@
>          return(p);
>     }
> 
>    +/* Add a tm_distinct field to prevent page type sharing if desired.
>    +   Not used now, as its never desirable from an efficiency point of
>    +   view, and as the only known place one must separate is cons and
>    +   fixnum, which are of different sizes unless PTR_ALIGN is set too
>    +   high (e.g. 16 on a 32bit machine).  See the ordering of init_tm
>    +   calls for these types below -- reversing would wind up merging the
>    +   types with the current algorithm.  CM 20030827 */
>    +
>     static void
>    -init_tm(enum type t, char *name, int elsize, int nelts, int sgc) {
>    +init_tm(enum type t, char *name, int elsize, int nelts, int sgc,int 
> distinct) {
> 
>       int i, j;
>       int maxpage;
>       /* round up to next number of pages */
>       maxpage = (((nelts * elsize) + PAGESIZE -1)/PAGESIZE);
>       tm_table[(int)t].tm_name = name;
>    -  for (j = -1, i = 0;  i < (int)t_end;  i++)
>    -    if (tm_table[i].tm_size != 0 &&
>    -  tm_table[i].tm_size >= elsize &&
>    -  (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
>    -      j = i;
>    +  j=-1;
>    +  if (!distinct)
>    +    for (i = 0;  i < (int)t_end;  i++)
>    +      if (tm_table[i].tm_size != 0 &&
>    +    tm_table[i].tm_size >= elsize &&
>    +    !tm_table[i].tm_distinct &&
>    +    (j < 0 || tm_table[j].tm_size > tm_table[i].tm_size))
>    +  j = i;
>       if (j >= 0) {
>       tm_table[(int)t].tm_type = (enum type)j;
>       tm_table[j].tm_maxpage += maxpage;
>    @@ -598,6 +633,7 @@
>       /*tm_table[(int)t].tm_npage = 0; */  /* dont zero nrbpage.. */
>       tm_table[(int)t].tm_maxpage = maxpage;
>       tm_table[(int)t].tm_gbccount = 0;
>    +  tm_table[(int)t].tm_distinct=distinct;
>     #ifdef SGC        
>       tm_table[(int)t].tm_sgc = sgc;
>       tm_table[(int)t].tm_sgc_max = 3000;
>    @@ -688,40 +724,46 @@
>       for (i = 0;  i < MAXPAGE;  i++)
>       type_map[i] = (char)t_other;
> 
>    +  /* Unused (at present) tm_distinct flag added.  Note that if cons
>    +     and fixnum share page types, errors will be introduced.
>    +
>    +     Gave each page type at least some sgc pages by default.  Of
>    +     course changeable by allocate-sgc.  CM 20030827 */
>    +
>       init_tm(t_fixnum, "NFIXNUM",
>    -    sizeof(struct fixnum_struct), 8192,20);
>    -  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50 );
>    -  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,0 );
>    -  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,0  );
>    -  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,0 );
>    -  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1  );
>    -  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1 );
>    -  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1 );
>    -  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,0 );
>    -  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,0 );
>    +    sizeof(struct fixnum_struct), 8192,20,0);
>    +  init_tm(t_cons, ".CONS", sizeof(struct cons), 65536 ,50,0 );
>    +  init_tm(t_structure, "SSTRUCTURE", sizeof(struct structure), 5461,1,0 );
>    +  init_tm(t_cfun, "fCFUN", sizeof(struct cfun), 4096,1,0  );
>    +  init_tm(t_sfun, "gSFUN", sizeof(struct sfun),409,1,0 );
>    +  init_tm(t_string, "\"STRING", sizeof(struct string), 5461,1,0  );
>    +  init_tm(t_array, "aARRAY", sizeof(struct array), 4681,1,0 );
>    +  init_tm(t_symbol, "|SYMBOL", sizeof(struct symbol), 3640,1,0 );
>    +  init_tm(t_bignum, "BBIGNUM", sizeof(struct bignum), 2730,1,0 );
>    +  init_tm(t_ratio, "RRATIONAL", sizeof(struct ratio), 170,1,0 );
>       init_tm(t_shortfloat, "FSHORT-FLOAT",
>    -    sizeof(struct shortfloat_struct), 256 ,1);
>    +    sizeof(struct shortfloat_struct), 256 ,1,0);
>       init_tm(t_longfloat, "LLONG-FLOAT",
>    -    sizeof(struct longfloat_struct), 170 ,0);
>    -  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,0);
>    -  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,0);
>    -  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / 
> sizeof(struct package),0);
>    -  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,0 );
>    -  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,0);
>    -  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,0);
>    -  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,0);
>    -  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,0);
>    -  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,0);
>    -  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,0);
>    -  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,0);
>    -  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,0);
>    -  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,0);
>    -  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,0);
>    -  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,0);
>    -  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,0);
>    -  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,0);
>    -  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20);
>    -  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20);
>    +    sizeof(struct longfloat_struct), 170 ,1,0);
>    +  init_tm(t_complex, "CCOMPLEX", sizeof(struct complex), 170 ,1,0);
>    +  init_tm(t_character,"#CHARACTER",sizeof(struct character), 256 ,1,0);
>    +  init_tm(t_package, ":PACKAGE", sizeof(struct package), 2*PAGESIZE / 
> sizeof(struct package),1,0);
>    +  init_tm(t_hashtable, "hHASH-TABLE", sizeof(struct hashtable), 78,1,0 );
>    +  init_tm(t_vector, "vVECTOR", sizeof(struct vector), 146 ,1,0);
>    +  init_tm(t_bitvector, "bBIT-VECTOR", sizeof(struct bitvector), 73 ,1,0);
>    +  init_tm(t_stream, "sSTREAM", sizeof(struct stream), 78 ,1,0);
>    +  init_tm(t_random, "$RANDOM-STATE", sizeof(struct random), 256 ,1,0);
>    +  init_tm(t_readtable, "rREADTABLE", sizeof(struct readtable), 256 ,1,0);
>    +  init_tm(t_pathname, "pPATHNAME", sizeof(struct pathname), 73 ,1,0);
>    +  init_tm(t_cclosure, "cCCLOSURE", sizeof(struct cclosure), 85 ,1,0);
>    +  init_tm(t_closure, "cCLOSURE", sizeof(struct cclosure), 85 ,1,0);
>    +  init_tm(t_vfun, "VVFUN", sizeof(struct vfun), 102 ,1,0);
>    +  init_tm(t_gfun, "gGFUN", sizeof(struct sfun), 0 ,1,0);
>    +  init_tm(t_afun, "AAFUN", sizeof(struct sfun), 0 ,1,0);
>    +  init_tm(t_cfdata, "cCFDATA", sizeof(struct cfdata), 102 ,1,0);
>    +  init_tm(t_spice, "!SPICE", sizeof(struct spice), 4096 ,1,0);
>    +  init_tm(t_relocatable, "%RELOCATABLE-BLOCKS", 1000,0,20,0);
>    +  init_tm(t_contiguous, "_CONTIGUOUS-BLOCKS", 1001,0,20,0);
>       tm_table[t_relocatable].tm_nppage = PAGESIZE;
>       tm_table[t_contiguous].tm_nppage = PAGESIZE;
> 
>    Index: o/gbc.c
>    ===================================================================
>    RCS file: /cvsroot/gcl/gcl/o/gbc.c,v
>    retrieving revision 1.13.4.1
>    diff -u -r1.13.4.1 gbc.c
>    --- o/gbc.c        30 Jul 2003 15:11:12 -0000      1.13.4.1
>    +++ o/gbc.c        27 Aug 2003 19:21:52 -0000
>    @@ -1012,19 +1012,24 @@
>       e = pagetochar(j);
>       for (p = s;  p < e;) {
>         if (get_mark_bit((int *)p)) {
>    -  p += PTR_ALIGN;
>    +  /* SGC cont pages: cont blocks must be no smaller than
>    +     sizeof(struct contblock), and must not have a sweep
>    +     granularity greater than this amount (e.g. CPTR_ALIGN) if
>    +     contblock leaks are to be avoided.  Used to be aligned at
>    +     PTR_ALIGN. CM 20030827 */
>    +  p += CPTR_ALIGN;
>          continue;
>         }
>    -      q = p + PTR_ALIGN;
>    +      q = p + CPTR_ALIGN;
>         while (q < e) {
>          if (!get_mark_bit((int *)q)) {
>    -    q += PTR_ALIGN;
>    +    q += CPTR_ALIGN;
>            continue;
>          }
>          break;
>         }
>         insert_contblock(p, q - p);
>    -      p = q + PTR_ALIGN;
>    +      p = q + CPTR_ALIGN;
>       }
>       i = j + 1;
>       }
>    @@ -1067,8 +1072,8 @@
>       if(sgc_enabled) sgc_quit();
> 
>       }
>    -  
>    -  
>    +
>    +
>     #ifdef DEBUG
>       debug = symbol_value(sSAgbc_messageA) != Cnil;
>     #endif
>    @@ -1423,8 +1428,10 @@
>       if (!MAYBE_DATA_P(p) || (enum type)type_map[page(p)] != t_contiguous)
>       return;
>       q = p + s;
>    -  x = (int *)ROUND_DOWN_PTR(p);
>    -  y = (int *)ROUND_UP_PTR(q);
>    +  /* SGC cont pages: contblock pages must be no smaller than
>    +     sizeof(struct contblock).  CM 20030827 */
>    +  x = (int *)ROUND_DOWN_PTR_CONT(p);
>    +  y = (int *)ROUND_UP_PTR_CONT(q);
>       for (;  x < y;  x++)
>       set_mark_bit(x);
>     }
>    Index: o/sgbc.c
>    ===================================================================
>    RCS file: /cvsroot/gcl/gcl/o/sgbc.c,v
>    retrieving revision 1.9
>    diff -u -r1.9 sgbc.c
>    --- o/sgbc.c       15 Feb 2003 00:38:28 -0000      1.9
>    +++ o/sgbc.c       27 Aug 2003 19:21:53 -0000
>    @@ -887,19 +887,24 @@
>       e = pagetochar(j);
>       for (p = s;  p < e;) {
>         if (get_mark_bit((int *)p)) {
>    -  p += PTR_ALIGN;
>    +  /* SGC cont pages: cont blocks must be no smaller than
>    +     sizeof(struct contblock), and must not have a sweep
>    +     granularity greater than this amount (e.g. CPTR_ALIGN) if
>    +     contblock leaks are to be avoided.  Used to be aligned at
>    +     PTR_ALIGN. CM 20030827 */
>    +  p += CPTR_ALIGN;
>          continue;
>         }
>    -      q = p + PTR_ALIGN;
>    +      q = p + CPTR_ALIGN;
>         while (q < e) {
>          if (!get_mark_bit((int *)q)) {
>    -    q += PTR_ALIGN;
>    +    q += CPTR_ALIGN;
>            continue;
>          }
>          break;
>         }
>         insert_contblock(p, q - p);
>    -      p = q + PTR_ALIGN;
>    +      p = q + CPTR_ALIGN;
>       }
>       i = j + 1;
>       }
>    @@ -961,6 +966,11 @@
>       return count;
>     }
> 
>    +   /* SGC cont pages: After SGC_start, old_cb_pointer will be a linked
>    +   list of free blocks on non-SGC pages, and cb_pointer will be
>    +   likewise for SGC pages.  CM 20030827*/
>    +static struct contblock *old_cb_pointer;
>    +
>     int
>     sgc_start(void) {
> 
>    @@ -1005,7 +1015,10 @@
>                 count);fflush(stdout);
>     #endif       
>         for(j=0,count=0; j <= maxp ;j++) {
>    -  if (free_map[j] >= minfree) {
>    +  /* SGC cont pages: This used to be >=, which is a definite
>    +     bug, as minfree could be zero, leading this type to claim
>    +     SGC pages not of its type in type_map.  CM 20030827*/
>    +  if (free_map[j] > minfree) {
>            sgc_type_map[j] |= (SGC_PAGE_FLAG | SGC_TEMP_WRITABLE);
>            ++count;
>            if (count >= tm->tm_sgc_max)
>    @@ -1031,6 +1044,101 @@
>            goto FIND_FREE_PAGES;       
>         }
>       }
>    +
>    +/* SGC cont pages: Here we implement the contblock page division into
>    +   SGC and non-SGC types.  Unlike the other types, we need *whole*
>    +   free pages for contblock SGC, as there is no psersistent data
>    +   element (e.g. .m) on an allocated block itself which can indicate
>    +   its live status.  If anything on a page which is to be marked
>    +   read-only points to a live object on an SGC cont page, it will
>    +   never be marked and will be erroneously swept.  It is also possible
>    +   for dead objects to unnecessarily mark dead regions on SGC pages
>    +   and delay sweeping until the pointing type is GC'ed if SGC is
>    +   turned off for the pointing type, e.g. tm_sgc=0. (This was so by
>    +   default for a number of types, including bignums, and has now been
>    +   corrected in init_alloc in alloc.c.) We can't get around this
>    +   AFAICT, as old data on (writable) SGC pages must be marked lest it
>    +   is lost, and (old) data on now writable non-SGC pages might point
>    +   to live regions on SGC pages, yet might not themselves be reachable
>    +   from the mark origin through an unbroken chain of writable pages.
>    +   In any case, the possibility of a lot of garbage marks on contblock
>    +   pages, especially when the blocks are small as in bignums, makes
>    +   necessary the sweeping of minimal contblocks to prevent leaks. CM
>    +   20030827 */
>    +  {
>    +    void *p=NULL;
>    +    unsigned i,j,k,count;
>    +    struct contblock *new_cb_pointer=NULL,*tmp_cb_pointer=NULL,**cbpp;
>    +
>    +    tm=tm_of(t_contiguous);
>    +
>    +    /* SGC cont pages:  First count whole free pages available.  CM 
> 20030827 */
>    +    for (cbpp=&cb_pointer,count=0;*cbpp;cbpp=&(*cbpp)->cb_link) {
>    +      p=PAGE_ROUND_UP((void *)(*cbpp));
>    +      k=p-((void *)(*cbpp));
>    +      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) 
>    +  continue;
>    +      i=((*cbpp)->cb_size-k)/PAGESIZE;
>    +      count+=i;
>    +    }
>    +    count=tm->tm_sgc>count ? tm->tm_sgc - count : 0;
>    +
>    +    if (count>0) {
>    +      /* SGC cont pages: allocate more if necessary, dumping possible
>    +   GBC freed pages onto the old contblock list.  CM 20030827*/
>    +      p=alloc_contblock(count*PAGESIZE);
>    +      tmp_cb_pointer=cb_pointer;
>    +      cb_pointer=new_cb_pointer;
>    +      /* SGC cont pages: add new pages to new contblock list. p is not
>    +   already on any list as ensured by alloc_contblock.  CM
>    +   20030827 */
>    +      insert_contblock(p,PAGESIZE*count);
>    +      new_cb_pointer=cb_pointer;
>    +      cb_pointer=tmp_cb_pointer;
>    +      for (i=0;i<count;i++) 
>    +  sgc_type_map[page(p)+i]|= SGC_PAGE_FLAG;
>    +    }
>    +
>    +    for (cbpp=&cb_pointer;*cbpp;) {
>    +      p=PAGE_ROUND_UP((void *)(*cbpp));
>    +      k=p-((void *)(*cbpp));
>    +      if ((*cbpp)->cb_size<k || (*cbpp)->cb_size-k<PAGESIZE) {
>    +  cbpp=&(*cbpp)->cb_link;
>    +  continue;
>    +      }
>    +      i=((*cbpp)->cb_size-k)/PAGESIZE;
>    +      i*=PAGESIZE;
>    +      j=(*cbpp)->cb_size-i-k;
>    +      /* SGC contblock pages:  remove this block from old list CM 
> 20030827 */
>    +      *cbpp=(*cbpp)->cb_link;
>    +      /* SGC contblock pages:  add fragments old list CM 20030827 */
>    +      if (k) {
>    +  ncb--;
>    +  insert_contblock(p-k,k);
>    +      }
>    +      if (j) {
>    +  ncb--;
>    +  insert_contblock(p+i,j);
>    +      }
>    +      tmp_cb_pointer=cb_pointer;
>    +      cb_pointer=new_cb_pointer;
>    +      /* SGC contblock pages: add whole pages to new list, p p-k, and
>    +   p+i are guaranteed to be distinct when used. CM 20030827 */
>    +      insert_contblock(p,i);
>    +      new_cb_pointer=cb_pointer;
>    +      cb_pointer=tmp_cb_pointer;
>    +      i/=PAGESIZE;
>    +      for (j=0;j<i;j++)
>    +  sgc_type_map[page(p)+j]|= SGC_PAGE_FLAG;
>    +    }
>    +
>    +    /* SGC contblock pages: switch to new free SGC contblock list. CM
>    +       20030827 */
>    +    old_cb_pointer=cb_pointer;
>    +    cb_pointer=new_cb_pointer;
>    +
>    +  }
>    +
>       /* Now  allocate the sgc relblock.   We do this as the tail
>        end of the ordinary rb.     */  
>       {
>    @@ -1117,6 +1225,22 @@
>       return 0;
>       sgc_enabled=0;
>       rb_start = old_rb_start;
>    +
>    +  /* SGC cont pages: restore contblocks, each tmp_cb_pointer coming
>    +     from the new list is guaranteed not to be on the old. Need to
>    +     grab 'next' before insert_contblock writes is.  CM 20030827 */
>    +  {
>    +    struct contblock *tmp_cb_pointer,*next;
>    +    if (old_cb_pointer) {
>    +      tmp_cb_pointer=cb_pointer;
>    +      cb_pointer=old_cb_pointer;
>    +      for (;tmp_cb_pointer;  tmp_cb_pointer=next) {
>    +  next=tmp_cb_pointer->cb_link;
>    +  insert_contblock((void *)tmp_cb_pointer,tmp_cb_pointer->cb_size);
>    +      }
>    +    }
>    +  }
>    +
>       for (i= t_start; i < t_contiguous ; i++)
>       if (TM_BASE_TYPE_P(i)) {
>         tm=tm_of(i);
>    
> =============================================================================
> 
>    Matt Kaufmann <address@hidden> writes:
> 
>    > Hi, Camm --
>    > 
>    > Below is an example where GCL 2.5.0 reports the following:
>    > 
>    >   Error: Contiguous blocks exhausted.
>    >   Currently, 29486 pages are allocated.
>    >   Use ALLOCATE-CONTIGUOUS-PAGES to expand the space.
>    >   Fast links are on: do (si::use-fast-links nil) for debugging
>    > 
>    > In fact, GCL appears to go into an infinite loop at this point, until 
> getting
>    > to this:
>    > 
>    >   Error: Caught fatal error [memory may be damaged]
>    >   Fast links are on: do (si::use-fast-links nil) for debugging
>    >   Error signalled by SYSTEM:UNIVERSAL-ERROR-HANDLER.
>    >   Broken at SYSTEM:UNIVERSAL-ERROR-HANDLER.  Type :H for Help.
>    >   >>
>    > 
>    > The following six forms cause the error to happen.  However, if either 
> of the
>    > first two forms is omitted, then the error goes away.  Is this expected
>    > behavior?  This came up because an ACL2 user got the above error using 
> the file
>    > test3.lisp shown below.  It turns out that GCL si::sgc-on is called 
> before the
>    > ACL2 image is saved, and that si::*top-level-hook* is set to call
>    > si::allocate-relocatable-pages when ACL2 is started up.
>    > 
>    > (si::sgc-on t)
>    > (si::allocate-relocatable-pages 500)
>    > (in-package "USER")
>    > (compile-file "test3.lisp") ; test3.lisp is shown below
>    > (load "test3")
>    > (testfun 1000000 3)
>    > 
>    > ++++++++++++++++++++++++++++++ test3.lisp ++++++++++++++++++++++++++++++
>    > 
>    > (in-package 'user)
>    > (defconstant *A* #x5A39BFA0E42A3D15)
>    > (defconstant *M* (expt 2 63))
>    > (defconstant *C* 1)
>    > 
>    > 
>    > (defun genseed (seed)
>    >   (mod (+ (* *A* seed) *C*) *M*))
>    > 
>    > 
>    > (defun testfun (n seed)
>    >   (if (or (not (integerp n)) (<= n 0))
>    >       seed
>    >       (let* ((s0 (genseed seed))
>    >            (s1 (genseed s0)))
>    >       (testfun (1- n) s1))))
>    > 
>    > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>    > 
>    > Thanks --
>    > -- Matt
>    > 
>    > 
>    > 
> 
>    -- 
>    Camm Maguire                                               address@hidden
>    ==========================================================================
>    "The earth is but one country, and mankind its citizens."  --  Baha'u'llah
> 
> 
> 
> _______________________________________________
> Gcl-devel mailing list
> address@hidden
> http://mail.gnu.org/mailman/listinfo/gcl-devel
> 
> 
> 

-- 
Camm Maguire                                            address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



