
Re: [Loco-dev] File Sharing development


From: tok
Subject: Re: [Loco-dev] File Sharing development
Date: 24 May 2002 19:00:00 +0200

hi guys,

sorry about the log entry, no harm intended. to clarify: i fixed a
brain-dead memory leak that i had created myself (a few hours earlier).
about the user list function: i just saw the work-in-progress comment,
had a need for something (which i would have named like that anyway) and
did what seemed logical.
arne> it won't break anything else (g)... by the way, no need for
compliments (G)

the productive part:

about filesharing: as i understand it, the loco tree allows searching by
gpg key ids (name, comment, email), so a signed file would get some unique
id. [open questions: may one node hold an unlimited number of files? how
many keys may it have at once? may i hold foreign keys, i.e. files signed
by somebody else?]
when files are split, each segment should imho have its own id, with all
of them somehow linked to the total_file id (a rough sketch follows below).
any thoughts?
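to make the segment-id idea concrete, here is a minimal sketch (java; all
names like FileSegment and totalFileId and the sha-1-over-id-plus-index
scheme are my own assumptions, nothing of this exists in the tree yet):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// hypothetical sketch: every segment of a split file gets its own id,
// derived from the id of the total file plus the segment index, so a
// single segment stays searchable but remains linked to the whole file.
public class FileSegment {
    final String totalFileId; // id of the complete, signed file
    final int index;          // position of this segment in the file
    final byte[] data;        // raw segment bytes

    FileSegment(String totalFileId, int index, byte[] data) {
        this.totalFileId = totalFileId;
        this.index = index;
        this.data = data;
    }

    // segment id = sha-1 over (total_file id, segment index)
    String segmentId() throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(totalFileId.getBytes(StandardCharsets.UTF_8));
        md.update((byte) ':');
        md.update(Integer.toString(index).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}

with something like that, any node holding a segment can be found via the
segment id, and the totalFileId field is the link back to the complete file.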

about proxying:
i have made a few changes for proxy support (local only). here is what i
think:

- i seem to have to distinguish between directly reachable and firewalled
nodes, so i added a boolean firewalled to LocoPeer

- a loco node acting as proxy for another node will probably need an extra
task and two ports (one listener connected to the firewalled client and
another one as an outbound listener for that client).
imho the more nodes one is already proxying for, the less likely it should
be to accept another proxy request... but how to implement that? (a
possible heuristic is sketched after this list)

- should there be different request types (from firewalled clients to an
open node)? e.g.:
proxy - connect_loco_network (just messaging)
      - connect_and_proxy (open a listener so others can reach me)
      - get_file
      - send_msg
      - get_http
we would need some protocol for this ;-( (a rough sketch of possible
message types follows below)
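for the "how to implement" question from the proxy item above, one simple
option (just a sketch; the hard limit and the linear falloff are arbitrary
choices of mine, not anything agreed on):

import java.util.Random;

// sketch: decide whether to accept yet another proxy request.
// the more clients we already proxy for, the less likely we accept.
public class ProxyPolicy {
    private final int maxProxiedClients; // hard upper bound
    private final Random random = new Random();

    public ProxyPolicy(int maxProxiedClients) {
        this.maxProxiedClients = maxProxiedClients;
    }

    // acceptance probability falls linearly from 1 down to 0
    // as the number of currently proxied clients grows.
    public boolean acceptProxyRequest(int currentlyProxied) {
        if (currentlyProxied >= maxProxiedClients) {
            return false;
        }
        double acceptProbability =
            1.0 - (double) currentlyProxied / maxProxiedClients;
        return random.nextDouble() < acceptProbability;
    }
}

e.g. with a limit of 10 and 3 clients already proxied, a new request would
be accepted with probability 0.7; instead of a plain count one could also
weigh in bandwidth or load.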
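and for the request types, maybe just one type byte at the start of each
message from a firewalled client to an open node; purely a sketch, the
names simply mirror the list above:

// sketch of the proxy request types, sent as the first byte of a
// message from a firewalled client to an open node.
public final class ProxyRequest {
    public static final byte CONNECT_LOCO_NETWORK = 0x01; // just messaging
    public static final byte CONNECT_AND_PROXY    = 0x02; // open a listener for others to reach me
    public static final byte GET_FILE             = 0x03;
    public static final byte SEND_MSG             = 0x04;
    public static final byte GET_HTTP             = 0x05;

    private ProxyRequest() { }
}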

concluding: i need more docs. got anything(!) written down about the way
the tree should work?

best regards,
tok



