Re: [GNUnet-developers] Reproduceable failure
From: Christian Grothoff
Subject: Re: [GNUnet-developers] Reproduceable failure
Date: Wed, 2 Apr 2003 09:36:37 -0500
User-agent: KMail/1.5
On Tuesday 01 April 2003 19:42, Julia Wolf wrote:
> Perhaps unrelated, and not unexpected: if you use 'directory' for
> afs and insert a few gigs of content, the CPU load goes up to a constant
> load of 2.5 (in my case), at which point gnunetd starts dropping stuff as
> per the setting in gnunet.conf. Consequently you can't actually use
> gnunet anymore, because gnunetd itself is eating up all the CPU time and
> starving everything else. (I mean it'll just sit there and thrash the
> drive.) 'gdbm' and 'mysql' don't have this problem. (I'm guessing all the
> disk thrashing happens every time there's a hash lookup.) (Oh, and the
> directory was sitting on a reiserfs filesystem.)
Yes, the directory backend is simply not meant to handle that many files. The
CVS code uses multiple directories, but that won't really make the problem go
away: directories are a stop-gap measure for users who do not have gdbm, tdb,
bdb or mysql installed, and it is generally NOT a good idea to use that type of
database for this purpose. The only good thing about directories is that they
are easier to inspect and thus easier to debug for developers. For machines
with gigabytes of content, they are a really bad idea (TM).
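
For what it's worth, here is a minimal C sketch of what "multiple directories"
can look like in practice: content files spread over subdirectories keyed by
the first two hex characters of the query hash, so no single directory has to
hold every file. The path layout, the build_shard_path() helper, and the sample
hash below are illustrative assumptions, not the actual CVS code.

    /* Sketch only: shard content files by hash prefix.
     * Layout assumed here: <base>/<first two hex chars>/<full hash>. */
    #include <stdio.h>
    #include <string.h>

    static int build_shard_path(char *out, size_t out_len,
                                const char *base, const char *hex_hash)
    {
      int n;

      if (strlen(hex_hash) < 2)
        return -1; /* need at least two hex characters for the prefix */
      n = snprintf(out, out_len, "%s/%.2s/%s", base, hex_hash, hex_hash);
      return (n < 0 || (size_t) n >= out_len) ? -1 : 0; /* -1 on truncation */
    }

    int main(void)
    {
      char path[512];

      /* Hypothetical base directory and hash, just for the demo. */
      if (0 == build_shard_path(path, sizeof path,
                                "/var/lib/gnunet/data",
                                "ab54d286e9f0c3"))
        printf("%s\n", path); /* /var/lib/gnunet/data/ab/ab54d286e9f0c3 */
      return 0;
    }

Spreading lookups over 256 subdirectories keeps each directory small, but it
only postpones the thrashing; a real database backend is still the better
choice for gigabytes of content.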
Christian