Re: [GNUnet-developers] HTTP


From: N. Durner
Subject: Re: [GNUnet-developers] HTTP
Date: Fri, 20 Jun 2003 17:29:52 +0100

> As far as I can see, the patch is only for the http 1.0 download of the
> HOSTLISTURL and not the HTTP transport service, but that aspect is
> complete. It does not seem to break anything, so I've put it into CVS
> (didn't test it, though).
http.c was also changed, but you're right - it doesn't break things.

> Also, without the chunked encoding, we would have to re-establish
> connections again and again, which would be terrible.
HTTP/1.1 introduces persistent (keep-alive) connections
(http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10), which
means that multiple HTTP requests may be sent over a single HTTP
connection.
I've tried it with Squid and MS IIS and it works fine.
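To illustrate (just a rough sketch in C, not GNUnet code - the host name,
the path and the missing response parsing are all made up), reusing one TCP
connection for two requests could look like this:
---
/* Sketch: two HTTP requests over a single TCP connection,
 * relying on HTTP/1.1 persistent connections.
 * Error handling and proper response parsing are left out. */
#include <string.h>
#include <sys/socket.h>

static void two_requests_one_connection(int sock) {
  const char *req =
    "GET /hostlist HTTP/1.1\r\n"
    "Host: gnunet.example.org\r\n"      /* hypothetical host */
    "Connection: keep-alive\r\n"
    "\r\n";
  char buf[4096];

  /* first request */
  send(sock, req, strlen(req), 0);
  recv(sock, buf, sizeof(buf), 0);  /* would have to honor Content-Length */

  /* second request on the very same connection - no reconnect needed */
  send(sock, req, strlen(req), 0);
  recv(sock, buf, sizeof(buf), 0);
}
---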

> > Another problem is that peers behind a proxy can't accept connections
> > from the outside. So the proxied peer has to maintain "GET"-connections
> > to all the peers it is connected to in order to receive messages.
>
> Not really, look at the code, it is bi-directional, just like TCP behind a
> NAT box. We don't do GET, we do PUSH with a 200 OK response -- and then we
> have both sides transmitting chunks.
I guess that this isn't HTTP-compliant, because it doesn't follow the "HTTP
request -> HTTP reply, HTTP request -> HTTP reply, ..." scheme.

> I don't know if we really need proxy support at this stage (other than for
> the HOSTLISTURL, where it makes a lot of sense).
If the proxy support results in a new incompatible version of the HTTP
transport, it is IMO better to do it now.

> If we do, we may have to
> reconsider the way the HTTP blocks are currently encapsulated. But doing
> permanent TCP reconnects is out of the question. I've recently bought a
> book on HTTP, so if I ever get to read it, I may have a better idea :-).
What do you think about the following model:

A peer (first.gnunet.peer) connect()s to its proxy and sends
---
POST http://second.gnunet.peer/ HTTP/1.1
Connection: keep-alive
Content-Length: 24

<welcome message>

---
(the remote peer shouldn't care about the URI - "POST
http://second.gnunet.peer/form.pl" should be okay)

and second.gnunet.peer replies
---
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: <length of msg>

<optional GNUnet message here>

---

Then, the real communication takes place.
first.gnunet.peer sends messages encapsulated in a POST:
---
POST http://second.gnunet.peer/ HTTP/1.1
Connection: keep-alive
Content-Length: 1000

<GNUnet message>

---

second.gnunet.peer has to reply to each POST with a
---
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: <length of msg>

<optional other GNUnet message>

---

first.gnunet.peer doesn't have to wait for a "200 OK" before it sends
further requests ("pipelining",
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.2.2).

Of course we should include other common header fields like "Date: ",
"Server: " and "User-Agent: " (they shouldn't be the same on every GNUnet
peer) and all "don't cache this" directives.
Furthermore, I'd prefer "Content-Type: application/octet-stream" to
"text/html".

> Do you see a reason why the "utopic content-size field" idea would not
> work?
No, except that the maximum size of a request may be restricted by the
proxy. If the "Content-Length" is greater than the configured size, a "413
Request Entity Too Large" is returned.
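The transport could watch for that status code and fall back to smaller
chunks; a sketch (proxy_rejected_size() is just a made-up name):
---
/* Sketch: check whether the proxy's status line signals that the
 * request body exceeded its configured maximum size. */
#include <stdlib.h>
#include <string.h>

/* 'reply' points to the start of the HTTP status line,
 * e.g. "HTTP/1.1 413 Request Entity Too Large" */
static int proxy_rejected_size(const char *reply) {
  if (strncmp(reply, "HTTP/1.", 7) != 0)
    return 0;                    /* not an HTTP status line */
  return atoi(reply + 9) == 413; /* status code starts after "HTTP/1.x " */
}
---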


Nils




