
[GNUnet-developers] Expiring search results?


From: Igor Wronsky
Subject: [GNUnet-developers] Expiring search results?
Date: Sun, 8 Sep 2002 19:40:29 +0300 (EEST)

On Sun, 8 Sep 2002, Tracy R Reed wrote:

> > > Wouldn't it be possible to automatically send out queries for random
> > > blocks of a file to ensure that the file is available. If such queries
> > Yes, this idea is feasible (except that in order to download 'random' 
> > (leaf) 
> > blocks, we'd first have to download a couple of inner blocks (those blocks 
> Alternatively, couldn't such metadata expire after, say, an hour? That
> would avoid having stale info floating around.

The routing mechanism has a hard time locating the content from 
the indexing node, so we can't rely on that forever. Once 
the content has spread off the original node, it becomes easier to find. 
Suppose content was inserted from node A. When can node B start 
distributing the "meta-data", as you call it? Currently B does so 
right away, as soon as the data has passed through it; B can't 
currently know whether the whole file has been fetched from A.

One possibility is to make the "metadata" expire, as you say,
and have only those hosts that have downloaded the file entirely
themselves give it out with a 'fresh' time-to-live. The client can 
tell the node (supposing gnunetd trusts its clients) that it 
may now re-stamp the respective rootblock, because the 
file has been received. Then, when the rootblock is requested and 
found locally, the node would give it a slightly randomized but 
fresh time-to-live (or "expire-by") value before sending it 
out. That wouldn't prevent malicious hosts from sending 
rootblocks that they have not downloaded, but the situation 
wouldn't be worse than it is now, I think.
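To make the re-stamping idea concrete, here is a minimal sketch. None of
these names come from GNUnet itself: `restamp_rootblock`, the `expire_by`
field, the one-hour base TTL, and the 10% jitter are all assumptions for
illustration; the jitter stands in for the "slightly randomized" value
mentioned above.

```python
import random
import time

# Assumed constants, not from GNUnet: a one-hour base TTL with
# +/- 10% random jitter, so the exact expiry value does not
# directly reveal when this node finished downloading.
BASE_TTL = 3600          # seconds (assumption)
JITTER_FRACTION = 0.10   # assumption

def restamp_rootblock(rootblock, fully_downloaded):
    """Hypothetical sketch: give a locally served rootblock a fresh,
    slightly randomized expire-by value, but only if this node has
    downloaded the whole file itself (as reported by the client to
    gnunetd)."""
    if not fully_downloaded:
        # Pass the block through unchanged; its old expiry stands.
        return rootblock
    jitter = random.uniform(-JITTER_FRACTION, JITTER_FRACTION)
    ttl = int(BASE_TTL * (1.0 + jitter))
    fresh = dict(rootblock)  # don't mutate the stored copy
    fresh["expire_by"] = int(time.time()) + ttl
    return fresh
```

A node that never completed the download simply forwards the block with
its old (possibly stale) expiry, which is what makes fully-downloaded
hosts the only source of "fresh" metadata under this scheme.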

Any comments on this? At least it'd cause a problem with
deniability if the node was physically compromised. Also,
since a time-to-live perhaps has no point for pure data
blocks, the node would have to be able to separate data
blocks from search results, and I don't think it currently
does that. If I remember correctly, that was explained
away as security by obscurity or something... :) And last
but not least, could the scheme be exploited externally to
find out who has downloaded the file and who has not?


I.
