From: Per Kreuger
Subject: Re: [Help-gnunet] Inserting large amounts of content?
Date: Mon, 21 Apr 2003 14:14:55 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030313
OK, I've tried the CVS version (of Friday the 18th) and adjusted only the
disk quota, to 5G.
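
(For reference, that was a one-line change. The sketch below shows roughly
how it looks on my system; I believe the option lives in the [AFS] section
of gnunet.conf and takes a value in megabytes, so 5G is about 5120, but
check the comments in your own config if your CVS snapshot differs:)

    $ grep -i diskquota /etc/gnunet.conf
    DISKQUOTA = 5120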
Still, I failed after inserting about 3.5G in 946 files. At that point the
size of /var/lib/GNUnet/data was about 2.5G, i.e. half of the allowed disk
quota.
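
(The 2.5G figure is simply what du reports for the data directory:)

    $ du -sh /var/lib/GNUnet/data
    2.5G    /var/lib/GNUnet/data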
The error reported at failure was:
WARNING readTCPResult failed server, closed connection
WARNING server did not send confirmation of insertion
WARNING child->insert failed on level 1, pos 0, aborting
WARNING child->insert failed on level 2, pos 0, aborting
WARNING child->insert failed on level 3, pos 0, aborting
Error inserting the file (-1)
What is probably worse is a very distinct degradation of insertion speed.
I enclose a PostScript file with the insertion rate listed and graphed
over the several hours it took to insert the 946 files.
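
(The rate data was collected with a small shell wrapper of roughly this
shape; the file list and log name are made up for illustration, and
gnunet-insert runs with its defaults:)

    #!/bin/sh
    # Insert each file in turn, recording wall-clock time per file.
    # files.list is a hypothetical list of the 946 file names.
    while read f; do
        printf '%s ' "$f" >> insert-times.log
        ( time gnunet-insert "$f" ) 2>> insert-times.log
    done < files.list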
Since I failed to insert the last file, I tried a gnunet-check -a on the
resulting database. Somewhere in the index check part it started to
report failures of the form:
"PIdx database corrupt (could not unlink ...) from low DB ..."
After some 40 hours I interrupted this process and tried to restart
gnunetd. It appears to come up fine, and I can find and download at least
some of the files I've inserted. I don't see any other hosts, and I get a
timeout from ovmj over HTTP when (I guess) downloading the node list, but
this could have other causes.
Trying to insert additional files after restarting gnunetd gives the
same error as above, but it also fills the log file with "PIdx database
corrupt ..." messages and then crashes gnunetd.
It would be interesting to know why the insertion speed degrades. I
noted that the size of gnunetd grew to about 60M during the insertion
process, and that the distribution of sizes of the DB buckets was quite
uneven all through the process. In the end there are 20 content buckets
(bucket.5120.0-19), and their sizes vary from 27M to 444M. The index
buckets are all approximately the same size (3-4M).
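
(The bucket numbers come from a plain directory listing sorted by size;
paths and file names as on my system:)

    $ cd /var/lib/GNUnet/data
    $ ls -lhS bucket.5120.*   # the 20 content buckets, largest first
    $ du -csh bucket.5120.*   # per-bucket sizes plus a total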
Why are there content buckets at all? I had content migration disabled
in the config file.
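
(For the record, the migration switch in my gnunet.conf is off. I believe
the option is the one below, but the exact name may differ in your
snapshot, so check the comments in the shipped config:)

    $ grep -i migration /etc/gnunet.conf
    ACTIVEMIGRATION = NO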
Hope this helps
piak
Christian Grothoff wrote:
> On Thursday 17 April 2003 02:21 am, Per Kreuger wrote:
>> I'm experimenting with inserting large amounts of content (about 20G)
>> but fail (as expected) at about 8G. The config file mentions changing
>> parameters in src/include/config.h, but I found no such parameter in
>> that file.
>> What to do?
> If you are using current CVS, you only have to edit the diskquota option
> in gnunet.conf. All other places have been 'resolved'. Note that if you
> change the quota (or the database), you need to run gnunet-convert to
> convert the database over (to the new size or type).
>
> Christian
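
(Put together, the procedure described above would be roughly the
following sketch; the quota value assumes the megabytes reading from
earlier, and gnunet-convert's exact arguments, if any, are whatever your
build documents:)

    # 1. stop gnunetd
    # 2. raise the quota in gnunet.conf, e.g. DISKQUOTA = 5120
    # 3. convert the existing database to the new size/type:
    $ gnunet-convert
    # 4. start gnunetd again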
> _______________________________________________
> Help-gnunet mailing list
> address@hidden
> http://mail.gnu.org/mailman/listinfo/help-gnunet
--
Per Kreuger, SICS, PO Box 1263, S-164 28 KISTA, Sweden
Email: address@hidden My home page is: http://www.sics.se/~piak
PGP Public Key at: http://www.sics.se/~piak/pgpkey.html
Tel: +46 8 633 15 22 Mob: +46 70 566 37 15 Fax: +46 8 751 72 30
[Attachment: gnunet-insert-result.ps (PostScript document)]