From: Christian Grothoff
Subject: Re: Re[2]: [GNUnet-developers] Some small patches
Date: Thu, 8 Jan 2004 15:27:13 -0500
User-agent: KMail/1.4.3

On Thursday 08 January 2004 03:12 am, Hendrik Pagenhardt wrote:
> I thought a bit about the topic, and I think a good way to increase
> insertion throughput might be the bundling of inserts within gnunet.
> This probably could even be more efficient than the delayed inserts. The
> abysmal performance of inserts is IMHO closely related to the sequential
> nature of the insertion process (correct me if I'm wrong). And it's not
> helping that we can't profit from the potentially parallel select and
> insert capabilities of the database, because every bucket is locked with
> a semaphore when a request is in progress.
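
(To make that serialization concrete: a rough sketch, with made-up names
rather than the actual GNUnet code, of the per-bucket locking being
described. Every lookup and every insert on a bucket takes the same lock,
so the database never sees concurrent statements even where it could run a
SELECT and an INSERT in parallel.)

/* Rough sketch (hypothetical names, not the actual gnunetd code) of
 * per-bucket locking: lookups and inserts on the same bucket all take
 * the same lock and thus run strictly one after the other. */
#include <pthread.h>
#include <stddef.h>

typedef struct {
  pthread_mutex_t lock;          /* one semaphore/mutex per bucket */
  /* ... handle for the underlying table / connection ... */
} Bucket;

static void bucket_insert(Bucket * b, const void * block, size_t len) {
  pthread_mutex_lock(&b->lock);  /* serializes with every other request */
  /* ... issue the INSERT (or REPLACE) for 'block' of 'len' bytes ... */
  pthread_mutex_unlock(&b->lock);
}

static void bucket_lookup(Bucket * b, const void * query) {
  pthread_mutex_lock(&b->lock);  /* lookups wait behind inserts, too */
  /* ... issue the SELECT for 'query' ... */
  pthread_mutex_unlock(&b->lock);
}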

Again, this type of optimization is likely to cause some form of trouble
(like the asynchronous errors that you noted) and somehow sounds even worse 
than 'DELAYED' to me (but I don't know enough about MySQL to truly
comprehend the extent of trouble DELAYED may or may not cause, so I'll leave
that decision to Igor). Furthermore, I am not sure that insertion speed would 
be so much of an issue once we have the insertion/download-manager (far far 
in the future) where all of these things would just go into the background.  
And even now, why is it a problem to run 'gnunet-insert' overnight (assuming 
your machine is on 24/7)?

> BTW which threads can run in
> parallel when gnunetd is running? I would hope that at least one thread
> for each connection (local or remote) is used?

We never used a thread per peer-connection (there have always been 2 threads
total for that), and since 0.6.1 we use only one thread for all local clients
(before that it was one thread per client).  The rule for all of these threads
is that they must not block (for more than a bounded amount of disk I/O), and
I don't see anything wrong with that.

Note that there are other threads for background jobs that take longer or
block indefinitely, and for reading from the sockets.  It is only the actual
processing of p2p messages that is done by 2 threads.
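
(If it helps to picture it, here is a rough sketch of that structure, with
made-up names rather than the actual gnunetd code: socket readers enqueue
incoming p2p messages, and a fixed pool of two workers does all of the
processing, which is why the handlers must not block for long.)

/* Rough sketch (hypothetical names, not the actual gnunetd code): reader
 * threads enqueue incoming p2p messages, and exactly two worker threads
 * do all of the message processing. */
#include <pthread.h>
#include <stdlib.h>

typedef struct Message {
  struct Message * next;
  /* ... sender and payload ... */
} Message;

static Message * queue_head;
static Message * queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_nonempty = PTHREAD_COND_INITIALIZER;

/* Called by the socket-reader threads whenever a message arrives. */
static void enqueue(Message * m) {
  m->next = NULL;
  pthread_mutex_lock(&queue_lock);
  if (queue_tail != NULL)
    queue_tail->next = m;
  else
    queue_head = m;
  queue_tail = m;
  pthread_cond_signal(&queue_nonempty);
  pthread_mutex_unlock(&queue_lock);
}

/* Each of the two p2p workers runs this loop; whatever it calls to
 * handle a message must not block for more than bounded disk I/O. */
static void * p2p_worker(void * unused) {
  Message * m;
  (void) unused;
  for (;;) {
    pthread_mutex_lock(&queue_lock);
    while (queue_head == NULL)
      pthread_cond_wait(&queue_nonempty, &queue_lock);
    m = queue_head;
    queue_head = m->next;
    if (queue_head == NULL)
      queue_tail = NULL;
    pthread_mutex_unlock(&queue_lock);
    /* handle_p2p_message(m); -- hypothetical handler, must return fast */
    free(m);
  }
  return NULL;
}

int main(void) {
  pthread_t workers[2];
  int i;
  for (i = 0; i < 2; i++)
    pthread_create(&workers[i], NULL, &p2p_worker, NULL);
  /* ... socket readers would call enqueue() as data comes in ... */
  pthread_join(workers[0], NULL);
  return 0;
}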

> For mysql this might be improved by collecting the inserts per bucket in
> a separate thread, and when a threshold number is reached or a timeout since
> the last insert expires, a REPLACE statement with multiple value tuples
> can be created and sent to the mysql server. Of course this would skew
> the table quotas and of course this is more difficult to handle if
> errors occur, but I think it might be worth the hassle because one of
> the more annoying "features" of GNUnet is the slow insertion/indexing
> process. Which of course might lead to less acceptance among users and
> hinder the willingness to publish content...
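
(For what it's worth, the batching itself would not be much code. A rough
sketch, with made-up table and column names, minimal error handling, and not
the actual GNUnet mysql module, might look like this; a timer would call
batch_flush() for the timeout case.)

/* Rough sketch (hypothetical table/column names, not the actual GNUnet
 * mysql code) of the proposed batching: collect value tuples per bucket
 * and flush them as a single multi-row REPLACE once a threshold is hit;
 * a timer would also call batch_flush() when the timeout expires. */
#include <mysql/mysql.h>
#include <stdio.h>

#define BATCH_THRESHOLD 64
#define TUPLE_LEN 256

typedef struct {
  char tuples[BATCH_THRESHOLD][TUPLE_LEN]; /* "(...)" groups, pre-escaped */
  int count;
} PendingBatch;

/* Send everything pending as one statement instead of one
 * round-trip per block. */
static int batch_flush(MYSQL * db, PendingBatch * b) {
  char stmt[BATCH_THRESHOLD * TUPLE_LEN + 64];
  int off, i;

  if (b->count == 0)
    return 0;
  off = snprintf(stmt, sizeof stmt,
                 "REPLACE INTO gn_content (hash, value) VALUES ");
  for (i = 0; i < b->count; i++)
    off += snprintf(stmt + off, sizeof stmt - off, "%s%s",
                    i ? "," : "", b->tuples[i]);
  b->count = 0;
  if (mysql_query(db, stmt)) {
    fprintf(stderr, "REPLACE failed: %s\n", mysql_error(db));
    return -1;
  }
  return 0;
}

/* Queue one tuple; flush automatically when the threshold is reached. */
static int batch_add(MYSQL * db, PendingBatch * b, const char * tuple) {
  snprintf(b->tuples[b->count], TUPLE_LEN, "%s", tuple);
  if (++b->count < BATCH_THRESHOLD)
    return 0;
  return batch_flush(db, b);
}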

So what would be wrong with just backgrounding the insert process? (Not that 
you could not do 'nohup gnunet-insert XXX &' already, but some people 
definitely would want a GUI for that and others would want to make sure that 
the process is resumed after a reboot, so some more code would definitely be 
nice there -- but not critical right now IMO).

C