
[Gzz-commits] manuscripts/pointers article.rst


From: Benja Fallenstein
Subject: [Gzz-commits] manuscripts/pointers article.rst
Date: Mon, 03 Nov 2003 19:31:47 -0500

CVSROOT:        /cvsroot/gzz
Module name:    manuscripts
Branch:         
Changes by:     Benja Fallenstein <address@hidden>      03/11/03 19:31:47

Modified files:
        pointers       : article.rst 

Log message:
        write out concl

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/manuscripts/pointers/article.rst.diff?tr1=1.107&tr2=1.108&r1=text&r2=text

Patches:
Index: manuscripts/pointers/article.rst
diff -u manuscripts/pointers/article.rst:1.107 manuscripts/pointers/article.rst:1.108
--- manuscripts/pointers/article.rst:1.107      Mon Nov  3 18:59:14 2003
+++ manuscripts/pointers/article.rst    Mon Nov  3 19:31:47 2003
@@ -449,89 +449,80 @@
 Implementation
 ==============
 
-- our Java impl (storm) provides local storage and
-  P2P networking through the GISP DHT
-- currently only one P2P network implementation,
-  but the code is modular so that it would be easy
-  to plug in others (we're looking into Tapestry)
-- we have implemented block storage and indexing
-  and pointer records on top of this
-- P2P Web is browsable through an HTTP gateway (can be run
-  locally or on any machine)
-- History link
-- (Side-benefit of P2P: Backlinks link)
-- Documents you own can also be edited through WebDAV.
-- Storm is 2.5 years old and has gone through
-  a number of pointer models. The model described
-  in this paper has been in use for about six months.
-- Current usage: mainly, keeping notes
-- However, it is currently undergoing a re-write
-  because the current pointer record format was ad-hoc
-  and a quick hack.
-- We will release Storm when the re-write is complete,
-  which will be real soon, now.
+In this section we discuss Storm, our Free Software
+Java implementation of pointer records and of our
+basic block-based storage model.
+
+Storm provides implementations of the block/pool abstraction
+and of reverse indexing for both local storage
+and distribution through the GISP DHT [kato02gisp]_.
+Our API is sufficiently abstract that different overlays
+could easily be plugged in; we are currently looking
+into writing an implementation based on Tapestry [zhao01tapestry]_.
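+
+To make this concrete, the following is a minimal Java sketch
+of the kind of interfaces involved; the names are ours for
+illustration and do not reflect the actual Storm API::
+
+    // Illustrative only: hypothetical interfaces, not Storm's real API.
+    public interface Block {
+        String getId();      // identifier: a cryptographic hash of the block's bytes
+        byte[] getBytes();   // the immutable block contents
+    }
+
+    public interface BlockPool {
+        Block get(String id) throws java.io.IOException;    // look up a block by its hash
+        Block add(byte[] data) throws java.io.IOException;   // store data, returning the new block
+    }
+
+    // A pluggable overlay (such as the GISP DHT) used to publish
+    // and look up key/value mappings such as reverse-index entries.
+    public interface Overlay {
+        void put(String key, byte[] value) throws java.io.IOException;
+        java.util.Collection get(String key) throws java.io.IOException;
+    }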
+
+The P2P Web created by Storm is browsable through an
+HTTP gateway, which can be run locally or offered for
+public access. The HTTP gateway can be configured to
+insert on each page a "history" link, which allows the user
+to browse past versions of the page, using the history
+created by pointer records. When the gateway is run on the
+local machine, documents owned by the machine's owner can
+also be edited through standard WebDAV clients.
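+
+As a rough sketch of what the gateway does per request (again
+with hypothetical names: ``PointerIndex`` below stands in for
+whatever component resolves a pointer to its newest block, and
+``BlockPool`` is the hypothetical interface sketched above)::
+
+    // Illustrative only; not the actual gateway code.
+    public class GatewaySketch {
+        public interface PointerIndex {
+            // id of the newest block targeted by a valid pointer record
+            String newestBlockFor(String pointerId) throws java.io.IOException;
+        }
+
+        // Resolve the pointer, fetch the current version, append a history link.
+        public static String servePage(String pointerId, BlockPool pool,
+                                       PointerIndex index) throws java.io.IOException {
+            String blockId = index.newestBlockFor(pointerId);
+            String html = new String(pool.get(blockId).getBytes(), "UTF-8");
+            return html + "<p><a href=\"/history/" + pointerId + "\">history</a></p>";
+        }
+    }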
+
+**Note to reviewers:**
+While pointer records are implemented in Storm, the
+current implementation is a quick hack and is being rewritten.
+Storm is available from CVS; we will make a release available at
+``http://savannah.nongnu.org/files/?group=storm`` when the rewrite
+is complete, on or before December 1st, 2003.
 
 
 Conclusions
 ===========
 
-- We have presented a peer-to-peer infrastructure that...with location-independent
-  identifiers
-
-- Pointer records: a novel method for implementing revision control in peer-to-peer
-  environment
-
-- ADVANTAGES:
-
-    - simple
-
-    - robust; since the blocks are all the information,
-      system state can't be lost easily
-
-    - could be standardized as a way for different 
-      P2P networks to interoperate to create a P2P Web
-    
-       - network-agnostic,
-         interoperating between any existing and future
-         P2P nets
-
-       - doesn't require a particular block keeping model; 
-         various models from different sources can work,
-         such as storing only the latest versions &c
-
-
-    - side benefit: users can cache any back versions they like
-
-    -  ...
-
-- DISADVANTAGES:
-
-    -  ...
-
-- COUNTERARGUMENTS THAT NEED TO BE ADDRESSED:
-
-    - efficiency? Storing lots of versions could get inefficient,
-      especially on the part of indexing the pointers in the system.
-      If there are 1000 000 000 or more pointer blocks of a single pointer,
-      what part of the system fails, if any?
-
-       - will be there only if someone *wants* to store those versions
-         unwanted versions get purged
-
-       - spam with a stolen key?
-
-       - formulas for efficiency?
-
-    - spamming with new versions (only author due to DS, but still)
-
-    - Copyright issues
-
-       - on web, not such a great problem, as 
-         most of the web is about publishing things for all to see.
-
-- XXX
-- To make the Web a solid foundation for standing
-  on the shoulders of giants.
+We have presented *pointer records*, a novel method
+for implementing revision control in a peer-to-peer
+environment, which is simple, does not require
+the network to store any versioning-related information,
+can easily be used by several different types
+of P2P networks, and allows versions of documents
+to be resolved for as long as any host on the network
+keeps a copy.
+
+We have also presented the Storm data model, a simple
+model for representing data published on the P2P Web.
+The Storm model is easy to implement on current systems,
+and allows powerful abstractions to be built on top of it
+(such as pointer records).
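+
+As an indication of how little machinery the block layer needs,
+a content-derived block identifier can be computed with nothing
+but the standard library (a simplified illustration; the exact
+identifier format used by Storm may differ)::
+
+    import java.security.MessageDigest;
+
+    // Simplified illustration: derive a block id from the block's contents.
+    public class BlockIdSketch {
+        public static String idFor(byte[] blockContents) throws Exception {
+            byte[] hash = MessageDigest.getInstance("SHA-1").digest(blockContents);
+            StringBuffer hex = new StringBuffer();
+            for (int i = 0; i < hash.length; i++) {
+                String b = Integer.toHexString(hash[i] & 0xff);
+                if (b.length() == 1) hex.append('0');
+                hex.append(b);
+            }
+            return hex.toString();
+        }
+    }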
+
+An adversary may try to attack our system by publishing
+such a large amount of information that it overloads
+the P2P network used. This is a problem shared
+with other P2P applications.
+
+An adversary may also mount an information-hiding attack,
+withholding new versions of documents from clients
+and making them believe that old versions are still current.
+It could be argued that this is a greater problem here than
+in a filesharing system, where the user is simply told that
+a file cannot be found at all: silently serving an outdated
+version of a page is much harder to notice. Thus, if possible,
+the underlying P2P network should protect against
+information-hiding attacks.
+
+While our system helps to keep documents alive after
+the original publisher has lost interest in them, it
+does not protect against a publisher who wants to make
+past versions inaccessible. By simply signing and publishing
+a large number of "fake" versions (e.g., files containing
+random data), a publisher could make correct past versions
+hard to access. A time-stamping mechanism may help here.
+
+We have proposed to use pointer records as the versioning
+mechanism for a location-independent Web.
+We hope that, by making links more permanent, pointer
+records can help make the Web a more solid foundation
+for standing on the shoulders of giants.
 
 
 Acknowledgements



