Re: TTL [was: Re: [GNUnet-developers] Found a related project]


From: Krista Bennett
Subject: Re: TTL [was: Re: [GNUnet-developers] Found a related project]
Date: Wed, 4 Jun 2003 00:43:33 -0500
User-agent: Mutt/1.4.1i

Tom Barnes-Lawrence hath spoken thusly on Wed, Jun 04, 2003 at 03:21:49AM +0100:

(Note: Any responses I make here may be tainted with both sleepiness and 
the fact I haven't thought about this for a while, so bear with me; 
Christian can correct me later if I've gotten things turned around in the 
intervening months! :)

> > Gnunetd on 62.131.97.197 can try to provide some anonymity by not
> > sending it to me instantly, but sending it to another gnunet peer
> > instead. This peer then sends it to me. Better anonymity would be
> > sending the block randomly to x peers, not just me (217.120.174.15),
> > but network traces can give the information about where the file is
> > coming from!
> 
>   No. Network traces can say that some data is being sent from machine
> A to machine B. They can't interpret the data, because it's encrypted,
> so they don't know if that data is being forwarded on behalf of another
> machine. 

Right, and because it's link-encrypted, from the outside, it doesn't look
like the same information going from machine A to machine B as it does
going from machine B to machine C. Being able to see the real contents of
those packets to compare them from one hop to the next does indeed assume
an adversary with massive control over the network.
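To make the link-encryption point concrete, here is a toy sketch (this is NOT GNUnet's actual cipher; the construction and key names are mine): each link has its own session key, so an observer watching two hops sees two unrelated ciphertexts for the same underlying payload.

```python
# Toy per-link encryption: a keystream derived from the link's session key
# is XORed with the payload. Same payload + different link key = different
# bytes on the wire, so hops can't be correlated by packet contents.
import hashlib

def link_encrypt(key: bytes, payload: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(payload):
        # Expand the key into a keystream block by block.
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(payload, stream))

payload = b"1K content block"
ct_ab = link_encrypt(b"session-key-A-B", payload)  # observer's view on A->B
ct_bc = link_encrypt(b"session-key-B-C", payload)  # same payload on B->C
assert ct_ab != ct_bc                              # hops look unrelated
assert link_encrypt(b"session-key-A-B", ct_ab) == payload  # XOR is self-inverse
```

Only an adversary who can decrypt both links (i.e. one with the sort of massive network control mentioned above) could match the two transmissions up.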

> They can't even tell if the data is meaningful, as the server
> will send out a chunk of random noise sometimes to confuse things.
> They can't use traffic analysis to say "ah, this packet from B to C
> comes just after this one from A to B, so obviously B is forwarding a
> packet", because the timing has a certain randomness, and gnunetd
> can wait until it has several things to send before it sends anything,
> etc... Plus it doesn't send a *file*, the files that are shared
> get split up into lots of 1K chunks that get downloaded separately.
> 
>   The design of GNUnet always impresses me. (The *one* thing that
> I suspect may be a flaw is that AFAIK requests for things must have
> a TTL set, so presumably, even if it is made random, there will be
> times when the first node it reaches would be able to tell that it
> is a new request (not forwarded). Is this true, anyone?)

Ok, the person here to really answer this bit is Christian, but this issue
has indeed come up (at the PET conference, I know it was one of the
attacks discussed). Let me risk putting my foot into my mouth (it's
resident there most of the time anyway) and take a crack at some of this,
though, and Christian can give you the real argument when he gets out of
bed later.

The bit that gets us out of the trouble that you mentioned is that TTLs
are relative (synchronization of absolute TTLs would be difficult to
achieve); that is, I assign some particular TTL to a query I send out
(which could even be negative), and the real point at which the TTL
expires is Time(now) + TTL. Time(now) is local to each machine and
doesn't matter to anyone else. When you receive the query and put it
into your routing table, you compute the absolute TTL against your own
clock (for your own use only) and decrement the relative TTL of the
query before sending it on; the relative TTL is used strictly to
compute that local value.
If this local absolute time is in the past, you, as a recipient of my
query, may decide not to serve the request because it has expired. That
relative number, though, doesn't really tell you much about a packet; it
may have originated nearby, it may have been bounced around a lot before
it got to you, or the sender (and this could be any sender, not just the
original querier) may not have thought it was very important.
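The mechanism described above can be sketched as follows (the function name and the decrement-by-one step size are my own invention, not GNUnet's real API; the actual step size is an implementation detail):

```python
# Sketch: a peer receives a query carrying only a *relative* TTL and
# converts it to an absolute expiry that is meaningful only locally.
import time

def handle_query_ttl(relative_ttl, now=None):
    now = time.time() if now is None else now
    local_expiry = now + relative_ttl   # never leaves this machine
    expired = local_expiry < now        # i.e. the relative TTL was negative
    # Decrement before forwarding; the next hop repeats this same
    # computation against its own clock.
    forwarded_ttl = relative_ttl - 1
    return local_expiry, expired, forwarded_ttl

# A query with a positive relative TTL is live; a negative one has
# already "expired" from the recipient's local point of view.
print(handle_query_ttl(10, now=1000.0))   # (1010.0, False, 9)
print(handle_query_ttl(-5, now=1000.0))   # (995.0, True, -6)
```

Since no absolute timestamps cross the wire, no clock synchronization between peers is needed, and the recipient learns nothing reliable about how far the query has traveled.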

I think the real concern over what might be leaked through TTL was related
to disclosing what I might be storing on my own machine. If you assign a
query a relatively low TTL and still get an answer from me, there's a
greater possibility that I have that content (or someone very close to me
does).  There are a few answers to this, and this is where I'd really
rather have Christian discussing this because he's the one who hashed this
out with other smart people working on PETs this Spring, but I'll take a
quick crack at it. Any mistakes I make here are strictly my own :)

First of all, content migration means that the content could well have
come from someone else and was just pushed out into the network. 

Secondly, since peers grab content floating by with a certain probability
and store it, me answering a query "too quickly" may well just mean that
the content was requested before and was indirected through me. The more 
it's requested, the more it propagates.
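That probabilistic indirection can be sketched like this (the probability value and names are mine, purely for illustration; GNUnet's actual caching policy may differ):

```python
# Sketch: a peer forwarding a content block keeps its own copy with some
# probability, so frequently requested content gradually spreads across
# the network, decoupling "who answered" from "who originally had it".
import random

def indirect_block(cache, block, rng=random, p=0.2):
    if rng.random() < p:     # cache with probability p (invented value)
        cache.add(block)
    return block             # the block is forwarded either way

# With p=1.0 every indirection caches; with p=0.0 none do.
cache = set()
indirect_block(cache, "block-1", p=1.0)
indirect_block(cache, "block-2", p=0.0)
assert cache == {"block-1"}
```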

Also, if a query with a low TTL gets answered, it doesn't necessarily mean
the content was at an adjoining or nearby host. The locally determined
absolute TTL (again, TTL + Time(now)) is used to decide which query
entries get dropped from a full routing table, but nothing guarantees a
query will necessarily be dropped, especially when the routing table
*isn't* full. It simply determines which one should be dropped if we need
to drop entries. If my routing table isn't full, or most of the entries
already in it have lower TTLs (which can happen simply by virtue of
their being long expired), AFAIK I may still decide to serve "dead"
queries based upon "how dead" they are. Is that right, Christian? Now,
granted, if the network is super busy, these queries aren't going to
hang around for long, and I think this is where the concern was, but
that's as much as I remember about this particular argument atm.
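The eviction rule just described can be sketched as follows (function and variable names are mine, not GNUnet's; the point is only that TTL picks a victim when the table is full, rather than guaranteeing expired queries are dropped):

```python
# Sketch: the locally computed absolute TTL only decides *which* entry to
# drop when the routing table is actually full; with room to spare, even
# an expired query keeps its slot and may still be served.
def insert_query(table, max_size, query_id, absolute_ttl):
    if len(table) < max_size:
        table[query_id] = absolute_ttl
        return None                        # room left: nothing dropped
    victim = min(table, key=table.get)     # smallest (most expired) TTL
    if table[victim] < absolute_ttl:
        del table[victim]
        table[query_id] = absolute_ttl
        return victim                      # old entry evicted
    return query_id                        # newcomer is least worth keeping

table = {}
insert_query(table, 2, "q1", 50)           # fits
insert_query(table, 2, "q2", -10)          # expired, but table not full: kept
evicted = insert_query(table, 2, "q3", 100)
assert evicted == "q2"                     # full now, so lowest TTL goes
```

So a low-TTL query being answered tells the observer much less than it might seem to: the answering peer may simply have had slack in its routing table.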

Finally, if an adversary repeatedly queries for a particular piece of
content in an effort to find its source, he in many ways defeats
himself: each additional set of queries causes the content to propagate
along several paths toward his machine, so it is served to him more and
more quickly as it migrates closer, obscuring where it originally lived.

Don't get me wrong. I don't actually remember whether the TTL issue was
resolved or not (GNUnet has been simmering on my back burner for a few
months now), but that's a subset of the arguments against a successful
attack, I think. I may have also misunderstood your feelings about how
stuff may be leaked, so feel free to enlighten me. :) I hope I haven't 
confused things further with my verbosity.

FYI, if you're interested, you may want to take a look at Christian's
slides from the PET workshop (http://www.ovmj.org/GNUnet/download/pet/).  
They're easier to understand when he's standing there explaining things to
you and don't hit on TTL much, but they do capture some of the arguments
made at the workshop and our responses to them. They're not a bad
companion to the anonymity paper, since they were created after long
discussions with Roger Dingledine (our very helpful shepherd) about some
of the arguments that had been raised regarding GNUnet's anonymity.

> 
>   If I (or prolly anyone else) thought that Gnunet and/or its AFS
> system wasn't actually anonymous... then what would be the point?
> There are numerous other p2p systems without anonymity, that are
> faster and have gigantic user bases (and the higher amount of content
> that comes with them). They would be better to use.

Well said.

:)

- Krista

-- 
***********************************************************************
Krista Bennett                               address@hidden
Graduate Student
Interdepartmental Program in Linguistics
Purdue University
     
     "You're more important than a bowl of spaghetti!" - My mom
