[Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...


From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Thu, 13 Mar 2003 03:05:50 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      03/03/13 03:05:49

Modified files:
        Documentation/misc/hemppah-progradu: masterthesis.tex 

Log message:
        Updates

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.134&tr2=1.135&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.134 gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.135
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.134      Thu Mar 13 02:52:07 2003
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex    Thu Mar 13 03:05:49 2003
@@ -234,7 +234,7 @@
 peers can form the overlay network based on \emph{local} knowledge. Figure \ref{fig:gnutella_overlay}
 illustrates how peers form an overlay network. Initially, peer 1 creates the overlay, since
 it is the first participating peer. Then, new peers repeatedly join the network and connect to
-other nodes in a random manner. Thus, Gnutella can be considered as a variation of \emph{scale-free
+other peers in a random manner. Thus, Gnutella can be considered as a variation of \emph{scale-free
 graph}\footnote{In scale-free graphs (also known as power-law graphs) only a few peers have a high number of neighbor
 links and the majority of peers have a low number of neighbor links.}.
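The scale-free claim in the footnote can be made concrete with a toy simulation: when joining peers pick neighbors with probability proportional to the neighbors' current degree (preferential attachment, one standard model that yields a power-law degree distribution; this is only an illustration, not Gnutella's actual join rule), a few hubs and many low-degree peers emerge. A minimal Python sketch, with all names made up:

import random
from collections import defaultdict

def preferential_attachment(num_peers, links_per_join=2):
    """Toy scale-free overlay: each joining peer connects to existing
    peers with probability proportional to their current degree."""
    neighbors = defaultdict(set)
    neighbors[0].add(1)          # bootstrap: peers 0 and 1 know each other
    neighbors[1].add(0)
    endpoints = [0, 1]           # one entry per link endpoint, so
                                 # well-connected peers appear more often
    for new_peer in range(2, num_peers):
        chosen = set()
        while len(chosen) < min(links_per_join, new_peer):
            chosen.add(random.choice(endpoints))   # degree-biased pick
        for peer in chosen:
            neighbors[new_peer].add(peer)
            neighbors[peer].add(new_peer)
            endpoints.extend([new_peer, peer])
    return neighbors

overlay = preferential_attachment(1000)
degrees = sorted((len(v) for v in overlay.values()), reverse=True)
print(degrees[:5], degrees[-5:])   # a few hubs, many low-degree peers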
 
@@ -302,7 +302,7 @@
 \begin{figure}
 \centering
 \includegraphics[width=10cm, height=6cm]{gnutella_overlay_clusters.eps}
-\caption{Power-law network overlay with 2-redundant super node clusters.}
+\caption{Power-law network overlay with 2-redundant super peer clusters.}
 \label{fig:gnutella_overlay_cluster}
 \end{figure}  
 
@@ -334,7 +334,7 @@
 This list includes CAN \cite{ratnasamy01can}, Chord \cite{stoica01chord}, 
 Kademlia \cite{maymounkov02kademlia}, Kelips \cite{gupta03kelips}, 
 Koorde \cite{kaashoek03koorde}, ODHDHT \cite{naor03simpledht}, 
-Pastry \cite{rowston01pastry}, Peernet \cite{eriksson03peernet}, 
+Pastry \cite{rowston01pastry}, PeerNet \cite{eriksson03peernet}, 
 Skip Graphs \cite{AspnesS2003}, SkipNet \cite{harvey03skipnet2}, 
 Symphony \cite{gurmeet03symphony}, SWAN \cite{bonsma02swan}, Tapestry 
 \cite{zhao01tapestry}, Viceroy \cite{malkhi02viceroy} and others 
\cite{freedman02trie}. 
@@ -397,7 +397,7 @@
 assigned keys.} keys is removed at the cost of each peer maintaining one 
 ''resource peer'' in the overlay network for each resource item pair it 
publishes.
 
-PeerNet differs from other tightly structured overlays in that it operates
+PeerNet \cite{eriksson03peernet} differs from other tightly structured 
overlays in that it operates
 at the \emph{network} layer. PeerNet makes an explicit distinction 
 between peer identity and address, which is not supported by standard
 TCP/IP-protocols. Otherwise, PeerNet has the same performance properties
@@ -417,7 +417,7 @@
 Currently, all proposed tightly structured overlays provide at least polylogarithmic data lookup operations. However, there are some key
 differences in the data structure that they use as a routing table. For 
example, Chord 
-\cite{stoica01chord}, Skip graphs \cite{AspnesS2003} and Skipnet 
\cite{harvey03skipnet2} maintain a local 
+\cite{stoica01chord}, Skip graphs \cite{AspnesS2003} and SkipNet 
\cite{harvey03skipnet2} maintain a local 
 data structure which resembles Skip lists \cite{78977}.
 In figure \ref{fig:structured_query}, we present an overview of Chord's lookup 
process.
 On the right side of Chord's lookup process, the same data lookup process
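For readers without the figure at hand: Chord's lookup greedily follows the finger that most closely precedes the key, roughly halving the remaining identifier distance per hop. Below is a minimal, self-contained Python sketch of that rule over a static set of identifiers; the names Node, lookup and in_interval are illustrative, not Chord's or the thesis's code.

# Illustrative Chord-style greedy lookup over a 2^M identifier ring.
M = 6                              # identifier space of size 2^M = 64
RING = 2 ** M

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b         # the interval wraps around zero

class Node:
    def __init__(self, ident, all_idents):
        self.id = ident
        ring = sorted(all_idents)
        def successor(k):          # first live identifier >= k (mod 2^M)
            k %= RING
            return next((i for i in ring if i >= k), ring[0])
        # finger i points at successor(id + 2^i): logarithmically spaced shortcuts
        self.fingers = [successor(self.id + 2 ** i) for i in range(M)]

def lookup(nodes, start_id, key):
    node, hops = nodes[start_id], 0
    # hop until the key falls between this node and its immediate successor
    while not in_interval(key, node.id, node.fingers[0]):
        nxt = node.fingers[0]                      # default: plain successor
        for finger in reversed(node.fingers):
            if in_interval(finger, node.id, key):
                nxt = finger                       # closest preceding finger
                break
        node, hops = nodes[nxt], hops + 1
    return node.fingers[0], hops                   # the successor stores the key

idents = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
nodes = {i: Node(i, idents) for i in idents}
print(lookup(nodes, 8, 54))        # -> (56, 2): node 56 is responsible for key 54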
@@ -603,7 +603,7 @@
 \parbox{100pt}{Controlled and structured}
 \\ \hline
                  
-\parbox{90pt}{Max. number of nodes} &
+\parbox{90pt}{Max. number of peers} &
 \parbox{100pt}{Millions} &
 \parbox{100pt}{Billions} 
 \\ \hline
@@ -706,7 +706,7 @@
 \parbox{37pt}{$O$($d$)} &
 \parbox{37pt}{$O(dn^{\frac{1}{d}})$} &
 \parbox{85pt}{2$d$} &
-\parbox{85pt}{The performance of system may decrease if nodes are not 
homogeneous and nodes join and leave the system in a dynamic manner, where $d$ 
is the dimension of virtual key space}
+\parbox{85pt}{The performance of the system may decrease if peers are not homogeneous and peers join and leave the system in a dynamic manner, where $d$ is the dimension of the virtual key space}
 \\ \hline
 
 \parbox{37pt}{Chord \cite{stoica01chord}} &
@@ -714,7 +714,7 @@
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{85pt}{2$(\log{n})$} &
-\parbox{85pt}{The performance of system may decrease if nodes are not 
homogeneous and nodes join and leave the system in a dynamic manner}
+\parbox{85pt}{The performance of the system may decrease if peers are not homogeneous and peers join and leave the system in a dynamic manner}
 \\ \hline
 
 
@@ -749,8 +749,8 @@
 \parbox{37pt}{$O(2(\sqrt{n}\cdot(\log^2{n})) + (\sqrt{n} + (\log^3{n})))$} &
 \parbox{37pt}{$O$($\sqrt{n}$)} &
 \parbox{37pt}{$O(1)$} &
-\parbox{85pt}{$\frac{n}{\sqrt{n}} + c*(\sqrt{n}-1) + \frac{Totalnumber of 
files}{\sqrt{n}}$, where n is the number of nodes and c the number of 
contacts/foreign affinity group} &
-\parbox{85pt}{Insert/delete overhead is constant and performed in the 
background, the performance of system may decrease if nodes are not homogeneous 
and nodes join and leave the system in a dynamic manner}
+\parbox{85pt}{$\frac{n}{\sqrt{n}} + c \cdot (\sqrt{n}-1) + \frac{\mbox{total number of files}}{\sqrt{n}}$, where $n$ is the number of peers and $c$ the number of contacts/foreign affinity group} &
+\parbox{85pt}{Insert/delete overhead is constant and performed in the background; the performance of the system may decrease if peers are not homogeneous and peers join and leave the system in a dynamic manner}
 \\ \hline
 
 \parbox{37pt}{Koorde \cite{kaashoek03koorde}} &
@@ -775,7 +775,7 @@
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{85pt}{$(2^{b - 1})\frac{\log{n}}{b}$, where $b$ is a configurable 
parameter for tuning digit-fixing properties (routing table)} &
-\parbox{85pt}{The performance of system performance may decrease if nodes are 
not homogeneous and nodes join and leave the system in a dynamic manner, based 
on Plaxton's algorithm}
+\parbox{85pt}{The system performance may decrease if peers are not homogeneous and peers join and leave the system in a dynamic manner; based on Plaxton's algorithm}
 \\ \hline
 
 
@@ -800,7 +800,7 @@
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{85pt}{$4r(\log{n}) + (\log{n})$, where $r$ = number of resources provided} &
-\parbox{85pt}{In this approach, node is treated as ''named resource''}
+\parbox{85pt}{In this approach, a peer is treated as a ``named resource''}
 \\ \hline
 
 \parbox{37pt}{SkipNet \cite{harvey03skipnet2}} &
@@ -816,14 +816,14 @@
 \parbox{37pt}{$O(1)$} &
 \parbox{37pt}{$O(n)$} &
 \parbox{85pt}{Can be 1-10000 connections (aka social connections, connections 
are permanent)} &
-\parbox{85pt}{Number of connections number depends on node's memory/network 
capabilities}
+\parbox{85pt}{The number of connections depends on the peer's memory/network capabilities}
 \\ \hline
 
 \parbox{37pt}{Symphony \cite{gurmeet03symphony}} &
 \parbox{37pt}{$O(\log^2{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
-\parbox{85pt}{$2k+2+f$, where k = long range connections, 2 = node's 
neighbors, f = fault-tolerance connections)} &
+\parbox{85pt}{$2k+2+f$, where $k$ = long-range connections, 2 = peer's neighbors, $f$ = fault-tolerance connections} &
 \parbox{85pt}{Space can also be $O(1)$. Additional space can be used as a lookahead list for better performance; not necessarily fault-tolerant because of the constant degree of neighbors}
 \\ \hline
 
@@ -832,7 +832,7 @@
 \parbox{37pt}{$O(1)$} &
 \parbox{37pt}{$O(\log^2{n})$} &
 \parbox{85pt}{$r(2b+2s+2l)$ (where r=number of resources provided, b=boot 
connections, s=short range connections, l=long range connections), typical 
connection configuration: 2*(6+7+8)=36} &
-\parbox{85pt}{In this approach, node is treated as ''named resource''}
+\parbox{85pt}{In this approach, peer is treated as ''named resource''}
 \\ \hline
 
 
@@ -841,7 +841,7 @@
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{85pt}{$(2^{b - 1})\frac{\log{n}}{b}$, where $b$ is a configurable 
parameter for tuning digit-fixing properties (routing table)} &
-\parbox{85pt}{The system performance may decrease if nodes are not homogeneous 
and nodes join and leave the system in a dynamic manner, based on Plaxton's 
algorithm}
+\parbox{85pt}{The system performance may decrease if peers are not homogeneous and peers join and leave the system in a dynamic manner; based on Plaxton's algorithm}
 \\ \hline
 
 \parbox{37pt}{Viceroy \cite{malkhi02viceroy}} &
@@ -849,7 +849,7 @@
 \parbox{37pt}{$O(1)$} &
 \parbox{37pt}{$O(\log{n})$} &
 \parbox{85pt}{11} &
-\parbox{85pt}{The system performance may decrease if nodes are not homogeneous 
and nodes join and leave the system in a dynamic manner, not necessarily 
fault-tolerant because of constant degree of neighbors}
+\parbox{85pt}{The system performance may decrease if peers are not homogeneous 
and peers join and leave the system in a dynamic manner, not necessarily 
fault-tolerant because of constant degree of neighbors}
 \\ \hline
 
 
@@ -1077,7 +1077,7 @@
 Fiat et al. in \cite{fiat02censorship}, 
\cite{saia02dynamicfaultcontentnetwork} and Datar in \cite{datar02butterflies}  
 describe tightly structured overlay with analytical results in the presence of 
hostile entities. However,
 none of these proposals address an efficient, dynamic tightly structured 
overlay and multiple rounds
-of hostile attack. Also, above mentioned proposals are not very efficient. In 
\cite{fiat02censorship}, each node 
+of hostile attack. Also, the above-mentioned proposals are not very efficient. In \cite{fiat02censorship}, each peer
 must maintain information about $O(\log^3{n})$ other peers, and in \cite{datar02butterflies}, $O(\log^2{n})$ is required.
 
 Finally, Ratnasamy and Gavoille \cite{ratnasamy02routing}, 
\cite{gavoille01routing} list several open problems
@@ -1125,7 +1125,7 @@
 are Peer-to-Peer systems which use somewhat similar method when performing 
data lookups.
 
 Local indices \cite{yang02improvingsearch} is one variation of active caching. 
-In this scheme, each peer maintains an index over the data of all nodes within 
+In this scheme, each peer maintains an index over the data of all peers within 
 $h$ hops of itself, where $h$ is a system-wide variable, called radius of the
 index\footnote{In normal BFS case, the value of $h$ is 0, as peer only has 
index
 over its local content.}. Mutual index caching architecture, as proposed in 
@@ -1152,7 +1152,7 @@
 lookup \emph{latency}. CAN \cite{ratnasamy01can}, Kademlia 
\cite{maymounkov02kademlia}, 
 Pastry \cite{rowston01pastry} and Tapestry \cite{zhao01tapestry} have advanced 
heuristics for
 proximity based routing. Additionally, most recent version of Chord uses 
proximity based 
-routing inspired by Karger and Ruhl \cite{karger02findingnearest}. Skipnet 
\cite{harvey03skipnet1} 
+routing inspired by Karger and Ruhl \cite{karger02findingnearest}. SkipNet 
\cite{harvey03skipnet1} 
 uses a combination of proximity and application-level overlay routing when performing data
 lookups. The authors call this feature \emph{constrained load balancing}.
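Proximity-based routing, as used by the systems listed above, generally amounts to the following rule: among routing-table entries that still make progress toward the key, prefer the one with the lowest measured network latency. A hypothetical sketch of that selection rule (the function name, the distance oracle and the latency table are made up, not any particular system's API):

# Toy proximity-aware next-hop selection (no particular system's algorithm).
def pick_next_hop(current, candidates, overlay_distance, rtt_ms, key):
    # Keep only candidates strictly closer to the key than we are
    # (so the lookup still converges), then choose the nearest by latency.
    progressing = [c for c in candidates
                   if overlay_distance(c, key) < overlay_distance(current, key)]
    if not progressing:
        return None                # we are the closest peer we know of
    return min(progressing, key=rtt_ms)

# Example with made-up values: overlay distance is |id - key|, latency is a table.
latency = {21: 180.0, 32: 35.0, 48: 90.0}
hop = pick_next_hop(8, [21, 32, 48],
                    lambda peer, key: abs(peer - key), latency.get, key=40)
# -> 32: all three candidates make progress toward key 40; 32 has the lowest latency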
 
@@ -1219,9 +1219,9 @@
 joins and leaves in the system. Some research has been done already in this 
area. 
 
 The concept of ``half-life'' was introduced by Liben-Nowell \cite{libennowell01observations}. Half-life is defined
-as follows: let there be $N$ live nodes at time $t$. The doubling from time 
$t$ is the time that pass before
-$N$ new additional nodes arrive into the system. The halving time from time 
$t$ is the time
-required for half of the living nodes at time $t$ to leave the system. The 
half-life from 
+as follows: let there be $N$ live peers at time $t$. The doubling time from time $t$ is the time that passes before
+$N$ new additional peers arrive in the system. The halving time from time $t$ is the time
+required for half of the living peers at time $t$ to leave the system. The half-life from
 time $t$ is the smaller of the two quantities stated above. The half-life of the entire system is the
 minimum half-life over all times $t$. The concept of half-life can be used as a basis for developing
 more efficient analytical tools for modeling complex Peer-to-Peer systems.
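Stated compactly, with notation that is illustrative rather than taken verbatim from \cite{libennowell01observations} (and assuming amsmath):

% Illustrative restatement of the half-life definition above.
\begin{align*}
  D(t) &= \text{time after $t$ until $N$ new peers have joined (doubling time)}\\
  H(t) &= \text{time after $t$ until half of the $N$ peers live at $t$ have left (halving time)}\\
  \lambda(t) &= \min\{D(t),\, H(t)\}, \qquad
  \lambda_{\text{system}} = \min_{t}\, \lambda(t)
\end{align*}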
@@ -1364,7 +1364,7 @@
 
 \parbox{90pt}{Sybil attack \cite{douceur02sybil}, 
\cite{castro02securerouting}} &
 \parbox{110pt}{Single hostile entity presents multiple entities} &
-\parbox{110pt}{Identify all nodes simultaneously across the system, collect 
pool of nodes which are validated, distributed node ID creation} &
+\parbox{110pt}{Identify all peers simultaneously across the system, collect 
pool of peers which are validated, distributed peer ID creation} &
 \parbox{110pt}{Not practically realizable, research focused on persistence, 
not on identity distinction}
 \\ \hline 
 
@@ -1404,9 +1404,9 @@
 \\ \hline
 
 
-\parbox{90pt}{Malicious nodes \cite{sit02securitycons}, 
\cite{castro02securerouting}} &
-\parbox{110pt}{How to identify malicious nodes in the system} &
-\parbox{110pt}{Create invariants for node behavior, verify invariants, 
self-certifying data} &
+\parbox{90pt}{Malicious peers \cite{sit02securitycons}, 
\cite{castro02securerouting}} &
+\parbox{110pt}{How to identify malicious peers in the system} &
+\parbox{110pt}{Create invariants for peer behavior, verify invariants, 
self-certifying data} &
 \parbox{110pt}{Partial solutions, self-certifying data most reliable}
 \\ \hline
 
@@ -1419,15 +1419,15 @@
 
 
 \parbox{90pt}{Inconsistent behavior \cite{sit02securitycons}} &
-\parbox{110pt}{Hostile node could act correctly with its neighbors, but 
incorrectly with others} &
+\parbox{110pt}{Hostile peer could act correctly with its neighbors, but 
incorrectly with others} &
 \parbox{110pt}{Public keys, digital signatures} &
 \parbox{110pt}{Not practical approach/working proposal created yet}
 \\ \hline
 
 
 \parbox{90pt}{Hostile groups \cite{castro02securerouting}} &
-\parbox{110pt}{Joining node may join parallel network, formed a group of 
hostile nodes, hostile node(s) controls the construction of the network} &
-\parbox{110pt}{Use trusted nodes, based on history information, cryptography, 
key infrastructure} &
+\parbox{110pt}{A joining peer may join a parallel network formed by a group of hostile peers; hostile peer(s) control the construction of the network} &
+\parbox{110pt}{Use trusted peers, based on history information, cryptography, 
key infrastructure} &
 \parbox{110pt}{Not 100\% sure if Central Authority (CA) is missing, not 
practical approach/working proposal created yet}
 \\ \hline
 
@@ -1478,8 +1478,8 @@
 
 \parbox{90pt}{Efficient and scalable data discovery 
\cite{lv02searchreplication}, \cite{osokine02distnetworks}, 
\cite{yang02improvingsearch}, \cite{lv02gnutellascalable}, 
\cite{ganesan02yappers}, \cite{adamic02localsearch}, 
\cite{adamic01powerlawsearch}, \cite{ripeanu02mappinggnutella}, 
\cite{milgram67smallworld}, \cite{adamic99small}, \cite{ramanathan02goodpeers}, 
\cite{kleinberg99small}, \cite{nips02-Kleinberg}, \cite{zhang02using}, 
\cite{watts00dynamics}} &
 \parbox{110pt}{Find resources efficiently, if resource exists (loosely 
structured)} &
-\parbox{110pt}{Super nodes, node clusters, caching techniques} &
-\parbox{110pt}{More efficient, less network traffic, not comparable to DHT's 
efficiency}
+\parbox{110pt}{Super peers, peer clusters, caching techniques} &
+\parbox{110pt}{More efficient, less network traffic, not comparable to the 
efficiency of tightly structured systems}
 \\ \hline
 
 
@@ -1507,7 +1507,7 @@
 \parbox{90pt}{Data availability/persistence \cite{bhagwan03availability}} &
 \parbox{110pt}{Data might be temporarily unavailable, or lost permanently} &
 \parbox{110pt}{Data caching, data replication} &
-\parbox{110pt}{Working solutions, but creates more traffic and overhead per 
node}
+\parbox{110pt}{Working solutions, but creates more traffic and overhead per 
peer}
 \\ \hline
 
 
@@ -1519,14 +1519,14 @@
 
 
 \parbox{90pt}{Locality \cite{keleher-02-p2p}, 
\cite{hildrum02distributedobject}, \cite{freedman02trie}, 
\cite{sloppy:iptps03}, \cite{plaxton97accessingnearby}, 
\cite{karger02findingnearest}} &
-\parbox{110pt}{Could DHTs exploit locality properties better ?} &
+\parbox{110pt}{Could tightly structured systems exploit locality properties better?} &
 \parbox{110pt}{Constrained Load Balancing, using network properties for 
nearest neighbor selection, self-organizing clusters} &
 \parbox{110pt}{Working solutions}
 \\ \hline
 
 
 \parbox{90pt}{Hot spots \cite{258660}, \cite{sloppy:iptps03}, 
\cite{maymounkov03ratelesscodes}} &
-\parbox{110pt}{What will happen if some resource is extremely popular and only 
one node is hosting it ?} &
+\parbox{110pt}{What will happen if some resource is extremely popular and only one peer is hosting it?} &
 \parbox{110pt}{Caching, multi source downloads, replication, load balancing, 
sloppy hashing} &
 \parbox{110pt}{For query hot spots, caching and multi source downloads 
efficiently reduce hot spots, for routing hot spots, benefits are smaller}
 \\ \hline
@@ -1539,7 +1539,7 @@
 \\ \hline
 
 \parbox{90pt}{System in flux \cite{libennowell01observations}, \cite{571863}, 
\cite{ledlie02selfp2p}, \cite{albert-02-statistical}} &
-\parbox{110pt}{Nodes join and leave system constantly. What about load 
balancing and performance ?} &
+\parbox{110pt}{Peers join and leave the system constantly. What about load balancing and performance?} &
 \parbox{110pt}{Half-life phenomenon (for analysis), simple overlay maintenance 
and construction algorithm} &
 \parbox{110pt}{Initial theoretical analyses have been created, but no comprehensive model for analyzing different system states and their variations (e.g., complex usage patterns)}
 \\ \hline
@@ -1547,18 +1547,18 @@
 \parbox{90pt}{Sudden network partition \cite{harvey03skipnet1}, 
\cite{harvey03skipnet2}, \cite{rowston03controlloingreliability}} &
 \parbox{110pt}{A sub-network is isolated from the rest of the network because of a network disconnection} &
 \parbox{110pt}{Self-tuning, environment observation, localized network 
connection for minimum latency (backup connections)} &
-\parbox{110pt}{Creates more overhead/space requirements per node}
+\parbox{110pt}{Creates more overhead/space requirements per peer}
 \\ \hline
 
 \parbox{90pt}{Fail Stop} &
-\parbox{110pt}{A faulty node stops working} &
+\parbox{110pt}{A faulty peer stops working} &
 \parbox{110pt}{Failure detectors, informing algorithms} &
 \parbox{110pt}{Creates more network traffic, peer's information can be 
outdated, failure detectors not reliable}
 \\ \hline
 
 
 \parbox{90pt}{Byzantine faults \cite{296824}} &
-\parbox{110pt}{Faulty nodes may behave arbitrarily} &
+\parbox{110pt}{Faulty peers may behave arbitrarily} &
 \parbox{110pt}{Byzantine replication algorithms, get information from multiple 
entities, trust majority's opinion} &
 \parbox{110pt}{Much research has been done on this field, practical solutions, 
decreases system performance slightly}
 \\ \hline
@@ -1610,8 +1610,8 @@
 
 
 \parbox{90pt}{Heterogeneity \cite{saroiu02measurementstudyp2p}, 
\cite{brinkmann02compactplacement}, 
\cite{zhao02brocade},\cite{gurmeet03symphony}} &
-\parbox{110pt}{There are different kind of nodes in the system, in light of 
bandwidth and computing power} &
-\parbox{110pt}{Super peers (broadcasting), cluster (broadcasting) additional 
layer upon DHTs, structural simplicity (DHTs)} &
+\parbox{110pt}{There are different kinds of peers in the system, in terms of bandwidth and computing power} &
+\parbox{110pt}{Super peers (loosely structured), clusters (loosely structured), an additional layer upon tightly structured systems, the structure itself is simple (tightly structured)} &
 \parbox{110pt}{Working solutions, increases system complexity (additional 
layer)}
 \\ \hline
 
@@ -1619,7 +1619,7 @@
 \parbox{90pt}{Programming guidelines \cite{zhao03api}, 
\cite{frise02p2pframework}, \cite{babaoglu02anthill}, \cite{rhea03benchmarks}, 
\cite{garciamolina03sil}, \cite{balakrishnan03semanticfree}} &
 \parbox{110pt}{Set of programming guidelines/frameworks is needed for better 
interoperability between different systems} &
 \parbox{110pt}{Common frameworks and APIs} &
-\parbox{110pt}{Common framework/API is still missing, a few proposals have 
been made (DHTs)}
+\parbox{110pt}{Common framework/API is still missing, a few proposals have 
been made (tightly structured)}
 \\ \hline
 
 
@@ -1632,7 +1632,7 @@
 
 \parbox{90pt}{Overlay management and health monitoring \cite{zhang03somo}} &
 \parbox{110pt}{The system is capable of monitoring its own status and health for better performance} &
-\parbox{110pt}{Build a meta data overlay atop of structured overlay (such as 
SOMO for structured overlays), make local decisions about overlay 
(unstructured)} &
+\parbox{110pt}{Build a metadata overlay atop the structured overlay (such as SOMO for structured overlays), make local decisions about the overlay (loosely structured)} &
 \parbox{110pt}{For tightly structured overlays, efficient and simple to 
implement, fault tolerance unknown, for loosely structured not necessarily 
efficient because decisions are based on local knowledge}
 \\ \hline
 
@@ -1901,7 +1901,7 @@
 On top of Kademlia, we propose the usage of Sloppy hashing \cite{sloppy:iptps03}, which
 is optimized for the DOLR abstraction of tightly structured overlays. With Sloppy hashing,
 we are able to reduce the generation of query hot spots. Sloppy hashing makes it possible to
-locate nearby data without looking up data from distant nodes. Moreover, 
authors' 
+locate nearby data without looking up data from distant peers. Moreover, the authors'
 proposal for self-organizing clusters using network diameters may be useful, 
 especially within small groups of working people. Thus, with Sloppy hashing
 we can provide locality properties for Fenfire.
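A rough illustration of why sloppy hashing relieves query hot spots follows; this is a toy model only, not the algorithm of \cite{sloppy:iptps03} and not anything implemented in Fenfire. The idea sketched: references for a key may spill onto the last hops of the route toward the key's home peer once that peer is saturated, and lookups may stop at the first reference they meet.

import random

PEERS = sorted(random.sample(range(2 ** 16), 64))   # made-up peer identifiers
MAX_REFS = 4                       # per-peer cap on references for one key
store = {p: {} for p in PEERS}     # peer id -> {key: [references]}

def last_hops(key, k=3):
    """The final peers a lookup for `key` would pass through, ending at the
    key's home peer (a crude stand-in for a real overlay route)."""
    home = min(range(len(PEERS)), key=lambda i: abs(PEERS[i] - key))
    return [PEERS[(home - j) % len(PEERS)] for j in range(k, -1, -1)]

def sloppy_put(key, ref):
    # Try the home peer first; once it holds MAX_REFS references for this
    # key, spill onto the preceding hops of the route instead.
    for peer in reversed(last_hops(key)):          # home peer first
        refs = store[peer].setdefault(key, [])
        if len(refs) < MAX_REFS:
            refs.append(ref)
            return peer

def sloppy_get(key):
    # A lookup travelling toward the home peer may stop at the first peer
    # holding references, so a popular key is no longer answered by one peer.
    for peer in last_hops(key):                    # travel order, home last
        if store[peer].get(key):
            return store[peer][key]
    return []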
@@ -1948,7 +1948,7 @@
 \begin{enumerate}
 \item Submit data lookup using scroll block's identifier.
 \item Repeat until hosting peer is found: each peer forwards the data lookup 
to a closer peer which hosts the given scroll block identifier.
-\item Pointer peer returns most recent pointer block's value (e.g., hosting 
peer's IP-address) to query originator.
+\item Pointer peer returns most recent pointer block's value (e.g., hosting 
peer's IP address) to query originator.
 \item Query originator requests hosting peer to return the scroll block.
 \end{enumerate}
 \end{itemize}
@@ -1963,7 +1963,7 @@
 \begin{enumerate}
 \item Query originator locally computes a hash for given pointer random string.
 \item Repeat until hosting peer is found: each peer forwards the data lookup 
to a closer peer which hosts the given hash of pointer random string.
-\item Pointer peer returns most recent pointer block's key/value-pair (e.g., 
hosting peer's IP-address) to query originator, using pointer block's own 
indexing schemes. 
+\item Pointer peer returns most recent pointer block's key/value-pair (e.g., 
hosting peer's IP address) to query originator, using pointer block's own 
indexing schemes. 
 \item Query originator requests hosting peer to return the scroll block.
 \end{enumerate}
 \end{itemize}
@@ -1974,7 +1974,7 @@
 
 \item Query originator locally computes a hash for given pointer random string.
 \item Repeat until hosting peer is found: each peer forwards the data lookup 
to a closer peer which hosts the given hash of pointer random string.
-\item Pointer peer returns pointer block's key/value-pair(s) (e.g., hosting 
peer's IP-addresses) to query originator, using pointer block's own indexing 
schemes. 
+\item Pointer peer returns pointer block's key/value-pair(s) (e.g., hosting 
peer's IP addresses) to query originator, using pointer block's own indexing 
schemes. 
 \item Query originator requests hosting peer to return the scroll block.
 \end{enumerate}
 \end{itemize}
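All three lookup variants above share the same skeleton: resolve a key (the scroll block identifier, or the locally computed hash of the pointer random string) to a pointer peer, take the most recent pointer block entry, and fetch the scroll block from the hosting peer it names. A hypothetical Python sketch of that skeleton, with the overlay lookup and the block transfer abstracted away; none of these names are Fenfire or GZZ APIs.

# Hypothetical skeleton shared by the three lookup variants above.
from dataclasses import dataclass

@dataclass
class PointerEntry:
    timestamp: float
    hosting_address: str            # e.g. the hosting peer's IP address

def resolve_scroll_block(key, dht_lookup, fetch_from):
    """`key` is the scroll block identifier (variant 1) or the locally
    computed hash of the pointer random string (variants 2 and 3)."""
    # Steps 1-2: the overlay routes the lookup to the responsible pointer peer.
    entries = dht_lookup(key)       # expected: a list of PointerEntry objects
    if not entries:
        return None
    # Step 3: the pointer peer's freshest entry names the hosting peer.
    newest = max(entries, key=lambda e: e.timestamp)
    # Step 4: ask the hosting peer for the scroll block itself.
    return fetch_from(newest.hosting_address, key)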



