
[GNUnet-developers] Replacing lookup


From: Igor Wronsky
Subject: [GNUnet-developers] Replacing lookup
Date: Mon, 10 Mar 2003 19:14:06 +0200 (EET)

A rant follows.

I set out to replace the lookup mechanism, but couldn't.
I wanted to do that for the sake of efficiency (and because I
dislike the 300mb fixed file), but after looking at lookup.c and
running a couple of tests with gdbm I noticed that, to be honest,
the lookup functionality cannot be done efficiently with just gdbm
and bloomfilters. In fact, such a solution would be significantly
worse. The reason is that a gdbm traverse is *very* expensive (it
does *not* proceed in disk order; we are talking about hours of
constant disk thrashing for a 1G db), and thus it can't be used
for deleting low-priority content, which, on a loaded system,
should be a fairly frequent operation. Also, linearly scanning for
content to migrate is much more costly than with a fixed-size
lookup file.

What we'd need is to replace the lookup with some dynamic,
efficiently updateable data structure that handles queries like
delete_least_important() and stores pointers to the actual
content outside. Compared to the cost of the mentioned operations
on plain gdbm, maintaining such a structure would be cheap. But
we'd still need the efficient hash-key-based indexing (naturally).
Sadly I'm not interested in coding additional indexing schemes,
and even less in a tool for verifying and repairing their
consistency. Fragmentation problems would also follow with
separate and randomly sized data. And most likely it would
take months until a reinvented wheel matured into any
acceptable condition. :(

In addition, it'd be nice to have a routine for fetching
random blocks (for migration).

Now I'm going to sound like Jan Marco again. But the fact
is that high-end dbmgrs like MySQL allow the use of
multiple indexes for different styles of queries,
and they already have tools for verifying and repairing the
consistency of tables and their indexes, requiring only
something like "isamchk --repair" from the end user
(and that's it). And I've been running, for years now,
a MySQL db with several hundred thousand entries (with
the current unique id past a million), and it
has always verified and repaired its indexes (I have 5) many
times faster than a run of gnunet-check on a much smaller db,
and it has scaled nicely to this date. Competing software
like PostgreSQL is probably quite equivalent.

I don't know what we should do. The overhead of installing
a high-end dbmgr might be too much for many. What
I suggest is that if we refactor the code to use the
database *only* through the interfaces I specify below,
I can write an efficient MySQL module that should
be able to survive each of the required operations with much
less overhead than a gdbm traverse brings. The small print
says that I don't know what to do about those users who wish to
stick to gdbm or such - I see a very disk-thrash-intensive
future for them, unless someone codes additional on-disk
data structures, finds new libs to depend on, or comes up
with some very clever tricks; and the lookup in itself is
already problematic (at least for deletes and repair).

The interfaces would be loosely as follows (a rough C sketch of
such a header follows the list below). Internally, small-scale db
modules like gdbm could still use the old lookup.c or whatnot, but
it must *not* be called from outside these functions.

insert(key,data,prio,indexinfo)
  insert key into the db as entry {key, data, prio}. If data=NULL,
  just index with indexinfo (file#,offset).

fetch(key,delta)
  return {data}, increase the prio of this entry by delta

fetchrandom()
  return a random key + data (for active migration), don't
  increase prio

free(m)
  delete those m entries from the db that have the smallest prio
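
For concreteness, here is what such a header could roughly look
like in C. The names, types and the IndexInfo struct are just my
guesses for illustration (HashCode160 stands in for whatever key
type we actually use), not a final API:

  /* Sketch only: names and types are placeholders, not a final API.
     HashCode160 stands in for GNUnet's own 160-bit hash key type. */
  typedef struct {
    unsigned char bits[20];      /* 160-bit query/content key */
  } HashCode160;

  typedef struct {
    unsigned short fileIndex;    /* which on-disk file holds the data */
    unsigned int   fileOffset;   /* offset of the block in that file  */
  } IndexInfo;

  /* insert(key,data,prio,indexinfo): store {key, data, prio}.  If
     data is NULL, only store the indexing info (file#, offset). */
  int db_insert(const HashCode160 * key,
                const void * data,
                unsigned int len,
                unsigned int prio,
                const IndexInfo * indexinfo);

  /* fetch(key,delta): return the data stored under key (NULL if
     absent) and raise the priority of the entry by delta. */
  void * db_fetch(const HashCode160 * key,
                  unsigned int delta,
                  unsigned int * len);

  /* fetchrandom(): return a random {key, data} pair for active
     migration; do not touch the priority. */
  void * db_fetchRandom(HashCode160 * key,
                        unsigned int * len);

  /* free(m): drop the m entries with the smallest priorities. */
  int db_free(unsigned int m);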

It seems to me that basically these four suffice for gnunet's
current requirements. The main point is to get rid of the lookup
calls findEntry/addEntry/delEntry by including the necessary
stuff in the parameters of these calls. We might include something
like forAllDatabaseEntries, no problem, but the whole must be split
into a {bloomfilters, database} dichotomy. The database can be seen
as a black box; a module like gdbm or such can internally use
whatever lookup kludges it likes.
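
Just to make that black box a bit more concrete, below is a very
rough sketch of the SQL a MySQL module might map the calls to. The
table layout, column names and query strings are only my guesses
(real code would escape the blob properly, and I haven't checked
in which MySQL version ORDER BY/LIMIT on DELETE appeared), so take
it as an illustration rather than a design:

  /* Guessed table layout for a MySQL-backed module; the hash is
     stored hex-encoded to keep the example simple. */
  static const char * CREATE_TABLE =
    "CREATE TABLE gnunet_content ("
    " hash CHAR(40) NOT NULL PRIMARY KEY,"   /* hex-encoded query key */
    " prio INT UNSIGNED NOT NULL,"
    " data BLOB,"                            /* NULL if only indexed  */
    " fileIndex SMALLINT UNSIGNED,"          /* indexinfo: file#      */
    " fileOffset INT UNSIGNED,"              /* indexinfo: offset     */
    " INDEX(prio))";                         /* needed for free(m)    */

  /* insert(key,data,prio,indexinfo) */
  static const char * INSERT_ROW =
    "INSERT INTO gnunet_content VALUES ('%s',%u,'%s',%u,%u)";

  /* fetch(key,delta): one SELECT plus one UPDATE of the prio */
  static const char * SELECT_ROW =
    "SELECT data,fileIndex,fileOffset FROM gnunet_content"
    " WHERE hash='%s'";
  static const char * BUMP_PRIO =
    "UPDATE gnunet_content SET prio=prio+%u WHERE hash='%s'";

  /* free(m): the prio index hands us the victims directly, so the
     cost is linear in m, not in the size of the table. */
  static const char * FREE_M =
    "DELETE FROM gnunet_content ORDER BY prio ASC LIMIT %u";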

Comments?

I've run some tests on MySQL with 370000 entries along with their
1k data, randomizing some (colliding too!) priorities for them
(you might be interested that in this case the db takes 417 megs,
so there is slight overhead). I won't quote the worst-case
performance of MySQL for the different ops (typically no more than
log(n), though), but just state the intuitive feeling:
insert/retrieve is 'about instantaneous' and free(m) is linear in
the number of deleted keys m (not in the number of keys in the db),
so deleting e.g. 1 meg of the worst content goes in a jiffy.
fetchrandom() can't be implemented both straightforwardly and
efficiently (afaik?), but we could use the following kludge: for a
db of n uniformly distributed keys, we can get - by a simple
probability calculation on n and 256 - the largest prefix length k
that still gives us an expected number of matches of at least 1
(e.g. 'abc%' is a 3-char prefix, and can be queried for
efficiently). We just randomize a k-byte prefix, look it up, and
take a random entry of the result set. If the set is empty, we
decrement k and redo. If the db is non-empty, this procedure is
guaranteed to always return a key, and most of the time with just
a single query.
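
To illustrate the kludge, here is a small standalone C sketch of
the prefix arithmetic and query construction. It assumes the
hex-encoded hash column from the table sketch above, and the
function names are of course made up; real code would run the
query and pick a random row of the result set:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Largest byte-prefix length k such that a db of n uniformly
     distributed keys still has at least one expected match:
     n / 256^k >= 1. */
  static unsigned int prefixLength(unsigned long n) {
    unsigned int k = 0;
    while (n >= 256) {     /* each prefix byte divides the expected */
      n /= 256;            /* number of matches by 256              */
      k++;
    }
    return k;
  }

  /* Build "SELECT ... LIKE 'prefix%'" with a random k-byte prefix
     (2*k hex chars).  On an empty result set the caller retries
     with k-1, which is guaranteed to terminate for a non-empty db. */
  static void buildRandomQuery(char * buf, size_t len, unsigned int k) {
    char prefix[2 * 8 + 1];
    unsigned int i;
    for (i = 0; i < k && i < 8; i++)
      sprintf(&prefix[2 * i], "%02x", rand() & 0xff);
    prefix[2 * i] = '\0';
    snprintf(buf, len,
             "SELECT hash,data FROM gnunet_content WHERE hash LIKE '%s%%'",
             prefix);
  }

  int main(void) {
    char query[256];
    unsigned long n = 370000;   /* keys in the test db */
    unsigned int k;
    srand((unsigned int) time(NULL));
    k = prefixLength(n);        /* k=2: 370000/65536 ~ 5.6 expected matches */
    buildRandomQuery(query, sizeof(query), k);
    printf("k=%u: %s\n", k, query);
    return 0;
  }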

One thing that is certain is that free(m) and traverses will
be insanely cheaper than using lookup+gdbm or plain gdbm,
even if the actual fetch/insert were more costly or equal.

More comments? Ideas? This is mainly an interface design
question. With a good interface and little overhead outside it,
rather nice solutions with high-level dbmgrs should be obtainable,
while still allowing the hack3rs to develop low-level solutions.


Igor




