Re: [GNUnet-developers] Playing with GNUnet
From: Christian Grothoff
Subject: Re: [GNUnet-developers] Playing with GNUnet
Date: Thu, 11 Jul 2002 12:04:39 -0500
User-agent: KMail/1.4.1
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On Sunday 07 July 2002 06:55 pm, address@hidden wrote:
> Let's see... a few issues that arose while setting up a GNUnet server.
> (In the usual style, this is all complaints and no praise.
> Sorry, I haven't used it enough to be really impressed yet.
> I'm still working on finding something more interesting
> than the GPL on it.)
Well, since this is still a beta and the network is still *very* small, I can
imagine that you don't find too much (especially since we are still fairly
frequently changing things that break compatibility in one way or another;
e.g., changing to AES would make ALL existing content disappear since it would
be totally incompatible...).
> I presume you've already heard the suggestion that you migrate from
> Blowfish to AES. 530 encryption operations to schedule 4K of S-boxes
> just to do 128 encryption operations per 1K payload seems... excessive.
Yes, but OTOH, it does not seem to matter. CPU load is not really a concern.
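For what it's worth, the numbers work out roughly like this (figures from the
Blowfish spec, not from our code):

   18 P-array words + 4*256 S-box words  = 1042 32-bit words to initialize
   1042 words / 2 words per 64-bit block = 521 block encryptions for key setup
   1024 bytes / 8 bytes per block        = 128 block encryptions for a 1K payload

So a fresh key costs roughly four times as much as encrypting the 1K block
itself, i.e. on the order of 650 block operations per block of content: ugly
as a ratio, but still far too little to show up in the CPU load here.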
> The promised FAQ entry on disk usage efficiency and ext2 doesn't
> seem to exist. I presume that it's telling me that my current
> file system parameters of 4K blocks and 16K bytes/inode will
> not work very well for GNUnet, which wants 1K for each.
Where is that one mentioned? We're now using gdbm, so this entry was removed.
Must be a dead reference, but where did you find it?
> One problem with the 1K block size is that, since you don't have any
> inter-block mixing, it might be vulnerable to code book attacks for
> low-entropy sources. Unfortunately, some file-wide mixing might
> remove the easy random-access you have right now.
You are mistaken about the purpose of the encoding. Deniability does not require
that it be *impossible* to break the cipher, just 'unreasonable to assume that an
intermediary/router would do it'. Code book attacks work on any hash-based
system.
> One possible solution would be to define the encryption key e_i for
> block i with parent block j, to be not equal to h_i, but e_i = f(h_i,
> e_j), for some suitable combining function f (perhaps XOR?). For this
> purpose, the root's e_j is taken to be 0.
>
> This makes the ciphertext for the whole file depend on the hash of the
> whole file, without requiring any more data than is already present in
> the indexes.
Again, it's fine if you can in some cases break the encryption. We want to
SHARE files, not hide them. As long as I can reasonably claim that I did not
know what was on my drive (because I could not trivially break the cipher),
deniability (denying knowledge of the content stored or routed) is achieved.
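To make the proposal above concrete, it amounts to something like this (just a
sketch of the poster's suggestion with made-up names -- KEY_LEN and
derive_block_key() are illustrative, not the actual GNUnet encoding code):

   #define KEY_LEN 20  /* e.g. the length of one hash value */

   /*
    * Sketch of the suggested chaining e_i = f(h_i, e_j), with f = XOR.
    * Derives the encryption key e_i for block i from its own hash h_i and
    * the already-derived key e_parent of its parent block j; for the root
    * block, e_parent is taken to be all zeros.
    */
   static void
   derive_block_key(const unsigned char h_i[KEY_LEN],
                    const unsigned char e_parent[KEY_LEN],
                    unsigned char e_i[KEY_LEN])
   {
           int k;
           for (k = 0; k < KEY_LEN; k++)
                   e_i[k] = h_i[k] ^ e_parent[k];
   }

But again, making the per-block cipher harder to break is not the design goal,
so the extra coupling buys little for GNUnet's purposes.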
> I'm not quite sure how to interpret the daemon's debugging output, but
> it occurs to me that TTL values could be made more deniable. An easy
> one would be to change to only decrementing them half the time (and
> halving the initial value to compensate). Thus, just because I sent
> a query with a given TTL doesn't mean that I (or one of my peers)
> originated it...
Since initial TTLs are chosen with a random factor in them (they are NOT
constant), you cannot say 'TTL 5 means it's 5 hosts away since they start at
10'. Decrementing only half of the time may make cycle detection much worse,
so this is not a good idea.
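Roughly, the initial TTL has the shape below (illustrative names and numbers
only, not the actual routing code):

   #include <stdlib.h>

   /* Illustration only: an initial TTL with a random component, so that an
    * observed TTL value does not directly reveal the distance to the
    * originator.  TTL_BASE and TTL_RAND are made-up constants. */
   #define TTL_BASE 8
   #define TTL_RAND 8

   static int
   initial_ttl(void)
   {
           /* anywhere in [TTL_BASE, TTL_BASE + TTL_RAND) */
           return TTL_BASE + (rand() % TTL_RAND);
   }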
> gnunetd really should fork into the background once it's completed
> initialization, like a good daemon. While I can just run it in the
> background in the first place, that makes it impossible to check for
> errors.
Sounds like a good idea, but since I never wrote a good daemon (mine are
always evil :-), I am not certain where exactly to do it and how to ensure
that everything is fine. Care to provide a patch?
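For what it's worth, the usual recipe is the classic double fork below (just a
sketch, not a tested patch against gnunetd, and detach_from_terminal() is a
made-up name; it should be called only after initialization has completed, so
that startup errors still go to the terminal as you suggest):

   #include <fcntl.h>
   #include <stdlib.h>
   #include <sys/stat.h>
   #include <sys/types.h>
   #include <unistd.h>

   /* Classic daemonization: call this only after initialization has
    * completed and all startup errors have been reported. */
   static void
   detach_from_terminal(void)
   {
           pid_t pid;
           int fd;

           pid = fork();
           if (pid < 0)
                   exit(1);        /* fork failed */
           if (pid > 0)
                   exit(0);        /* parent exits, shell gets its prompt back */
           setsid();               /* new session, no controlling terminal */
           pid = fork();
           if (pid < 0)
                   exit(1);
           if (pid > 0)
                   exit(0);        /* session leader exits, so the daemon can
                                      never reacquire a controlling terminal */
           umask(0);
           chdir("/");
           fd = open("/dev/null", O_RDWR);
           if (fd >= 0) {
                   dup2(fd, STDIN_FILENO);
                   dup2(fd, STDOUT_FILENO);
                   dup2(fd, STDERR_FILENO);
                   if (fd > STDERR_FILENO)
                           close(fd);
           }
   }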
> When setting gnunetd up in a chroot jail, I was a little confused as to why
> it needs all the libextractor_* libraries (which pull in libvorbis*).
> Shouldn't that only be used by the administrative commands?
Yes, it is only used there.
> In fact, removing -lextractor from the Makefiles in utils, common,
> and server produced a server without this dependency. Perhaps the
> build process could be tweaked a little?
As far as I can tell, Blake has already started on that.
> The need for /proc/loadavg, /proc/net/dev, /dev/null and /dev/urandom
> might be worth documenting. Fortunately, it's always possible to
> "mount -r --bind /proc/foo /jail/proc/foo", as long as you have touched
> /jail/proc/foo beforehand.
Yes, more documentation is always good. :-)
> gnunetd by default creates its directories with mode 0700. gnunet-search
> wants to look at data/hosts/*, which is impossible from another uid.
Sounds like a bug. Added to the pile...
http://www.ovmj.org/~mantis/view_bug_page.php?f_id=324
> I notice that gnunet forks children which spend a lot of time in
> loops like:
>
> getppid() = 17459
> poll([{fd=8, events=POLLIN}], 1, 2000) = 0
> getppid() = 17459
> poll([{fd=8, events=POLLIN}], 1, 2000) = 0
> getppid() = 17459
> poll([{fd=8, events=POLLIN}], 1, 2000) = 0
> getppid() = 17459
>
> There is an easy way to block on the parent exiting, using a pipe.
> Create a pipe which only the parent can write to (children close the fd
> after forking), and it will poll ready-to-read (EOF) when the parent exits
> (and closes the write end).
This must have something to do with our use of pthreads. What is odd is
that on my machine(s), gnunetd takes only a couple of minutes of total
CPU time over weeks. What system are you using? Is it really an issue?
Since we're not explicitly calling getppid(), the above is definitely a very
low-level remark, so I'm also not sure where to even start to fix it (if it
is really an issue).
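The pipe trick would look roughly like this (a sketch only -- lifeline[] and
the function names are made up, and whether the getppid()/poll() loop is in
our code or inside the pthreads library would have to be checked first):

   #include <poll.h>
   #include <unistd.h>

   /* One pipe, created by the parent before fork(); the parent keeps the
    * write end open but never writes to it, the child closes it. */
   static int lifeline[2];

   static void
   setup_lifeline(void)
   {
           pipe(lifeline);         /* in the parent, before fork() */
   }

   static void
   child_after_fork(void)
   {
           close(lifeline[1]);     /* only the parent holds the write end now */
   }

   /* Blocks until the parent exits: when the last copy of the write end is
    * closed, the read end polls readable (EOF), so no periodic getppid()
    * wakeups are needed. */
   static void
   wait_for_parent_exit(void)
   {
           struct pollfd pfd;

           pfd.fd = lifeline[0];
           pfd.events = POLLIN;
           poll(&pfd, 1, -1);      /* -1 = wait indefinitely */
   }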
> There's another process which spends its time doing:
>
> nanosleep({1, 0}, {1, 0}) = 0
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
> time([1026070510]) = 1026070510
> time([1026070510]) = 1026070510
> time([1026070510]) = 1026070510
> ... 190 calls to time() deleted ...
> time([1026070510]) = 1026070510
> time([1026070510]) = 1026070510
> time([1026070510]) = 1026070510
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
> rt_sigprocmask(SIG_BLOCK, [CHLD], [RTMIN], 8) = 0
> rt_sigaction(SIGCHLD, NULL, {SIG_DFL}, 8) = 0
> rt_sigprocmask(SIG_SETMASK, [RTMIN], NULL, 8) = 0
> nanosleep({1, 0}, {1, 0}) = 0
>
> It is unclear why
> - It has to check once per second that SIGCHLD is still set to SIG_DFL
> - It has to block SIGCHLD around this unnecessary check
> - It has to periodically make 196 calls to time(), all within
> the same second.
That must be the cron job. It checks every second whether there is "more work".
Why it should call time() that often is a mystery to me, though.
> If I might suggest a smaller bit of source code, modern processors
> with branch prediction don't need unrolled loops:
>
>
> /* Copyright abandoned; this code is in the public domain. */
> #include <stddef.h>     /* for size_t */
> #include <limits.h>
>
> /* Avoid wasting space on 8-byte longs. */
> #if UINT_MAX >= 0xffffffff
> typedef unsigned int atleast_32;
> #elif ULONG_MAX >= 0xffffffff
> typedef unsigned long atleast_32;
> #else
> #error This compiler is not ANSI-compliant!
> #endif
>
> #define POLYNOMIAL (atleast_32)0xedb88320
> static atleast_32 crc_table[256];
>
> /*
>  * This routine writes each crc_table entry exactly once,
>  * with the correct final value.  Thus, it is safe to call
>  * even on a table that someone else is using concurrently.
>  */
> static void
> make_crc_table(void)
> {
>         unsigned int i, j;
>         atleast_32 h = 1;
>
>         crc_table[0] = 0;
>         for (i = 128; i; i >>= 1) {
>                 h = (h >> 1) ^ ((h & 1) ? POLYNOMIAL : 0);
>                 /* h is now crc_table[i] */
>                 for (j = 0; j < 256; j += 2*i)
>                         crc_table[i+j] = crc_table[j] ^ h;
>         }
> }
>
> /*
>  * This computes the standard preset and inverted CRC, as used
>  * by most networking standards.  Start by passing in an initial
>  * chaining value of 0, and then pass in the return value from the
>  * previous crc32() call.  The final return value is the CRC.
>  * Note that this is a little-endian CRC, which is best used with
>  * data transmitted lsbit-first, and it should, itself, be appended
>  * to data in little-endian byte and bit order to preserve the
>  * property of detecting all burst errors of length 32 bits or less.
>  */
> atleast_32
> crc32(atleast_32 crc, char const *buf, size_t len)
> {
>         if (crc_table[255] == 0)
>                 make_crc_table();
>         crc ^= 0xffffffff;
>         while (len--)
>                 crc = (crc >> 8) ^ crc_table[(crc ^ *buf++) & 0xff];
>         return crc ^ 0xffffffff;
> }
Looks good to me; if nobody else sees a problem, I'll put this into
CVS instead of the zlib code that we're currently using.
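A quick way to sanity-check it before committing, compiled together with the
code above (the expected value is the standard CRC-32 check for the string
"123456789", nothing GNUnet-specific):

   #include <stdio.h>

   int
   main(void)
   {
           static const char check[] = "123456789";
           /* The standard CRC-32 check value for "123456789" is 0xcbf43926. */
           atleast_32 crc = crc32(0, check, sizeof check - 1);
           printf("crc32(\"123456789\") = %08lx\n", (unsigned long) crc);
           return 0;
   }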
Christian
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org
iD8DBQE9Lbqo9tNtMeXQLkIRAqcwAKCSmCWrz7xGrROK8RNvCEOpKn6eZgCeOXTA
ZYTW5JgM2WT4wt2T21RpMWI=
=CVwA
-----END PGP SIGNATURE-----