PDPTA 05 Poster: ROME: Optimising Lookup and Load-Balancing in DHT-Based P2P Network
ROME: Optimising Lookup and Load Balancing in DHT-based P2P Networks
James Salter, Nick Antonopoulos and Roger Peel
Department of Computing, University of Surrey, UK
29th June 2005
The financial support of the UK Engineering and Physical Sciences Research Council (EPSRC) (for JS) is gratefully acknowledged
Chord
Lookup cost: log2(n) hops worst case; ½ log2(n) hops average
Structured P2P Network Architecture
Based on Distributed Hash Tables
Each node stores a portion of the index
Simple lookup mechanism: given a key, it will return the associated value(s)
Small amount of routing state stored on each node
Logarithmic lookup and maintenance costs
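The DHT mechanics above can be sketched in a few lines. This is a minimal illustration, not Chord's actual implementation: the identifier width M = 16, the node names, and the linear successor scan are all assumptions for demonstration (real Chord uses finger tables to reach the successor in O(log n) hops).

```python
import hashlib

M = 16  # bits in the identifier circle (illustrative assumption)

def chord_id(name: str) -> int:
    """Hash a node name or key onto the 2^M identifier circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def successor(node_ids, key_id):
    """The node responsible for a key is the key's successor on the circle."""
    candidates = [n for n in node_ids if n >= key_id]
    return min(candidates) if candidates else min(node_ids)

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("some-file.txt")
print(successor(nodes, key) in nodes)  # prints True
```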
ROME Concept
Message cost of Chord is proportional to the number of nodes in the network (n)
If we reduce n, we reduce message cost
ROME goal: keep the ring “just big enough”
Workload should determine ring size, not the number of nodes in the network
Only add nodes to the ring when they are required
ROME adds functionality on top of Chord's previously defined operations: node addition, replacement and failure
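The "just big enough" idea can be expressed as a one-line sizing rule, using the capacity figures from the evaluation (100 units per node, 95% upper threshold). The function name and the exact rounding are illustrative assumptions, not ROME's published algorithm.

```python
import math

def required_ring_size(total_workload: float, node_capacity: float = 100.0,
                       upper_threshold: float = 0.95) -> int:
    """Smallest number of ring nodes whose usable capacity (capacity times
    the upper threshold) covers the network-wide workload."""
    usable = node_capacity * upper_threshold
    return max(1, math.ceil(total_workload / usable))

print(required_ring_size(1))          # prints 1: a 1-unit workload needs one node
print(required_ring_size(1_000_000))  # prints 10527
```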
ROME Architecture
[Figure: node protocol stack (ROME above Chord above the lower layers), a Chord ring with a bootstrap server, a pool of available nodes and a new node joining]
New nodes are held on the bootstrap server and are not immediately added to the ring
ROME monitors the workload of each ring node and takes action if a node is under- or overloaded
Slide Node
Nodes can “slide” around the ring (alter their Chord ID) to increase/decrease their portion of keyspace, hence their workload
Slide action does not require bootstrap server
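The slide operation above can be sketched as re-picking a node's ID within the arc between its predecessor and its old position. This is a hypothetical sketch: the function name, the 16-bit ID space and the proportional-resize rule are assumptions, and a real slide would also hand over the keys that change owner.

```python
def slide_node(predecessor_id: int, node_id: int,
               load_fraction: float, m: int = 16) -> int:
    """Pick a new Chord ID for a node so it keeps roughly `load_fraction`
    of its current keyspace segment (predecessor_id, node_id]."""
    space = 2 ** m
    segment = (node_id - predecessor_id) % space  # size of current segment
    new_segment = max(1, round(segment * load_fraction))
    return (predecessor_id + new_segment) % space

# An overloaded node halves its segment by sliding "backwards":
print(slide_node(0, 100, 0.5))   # prints 50
```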
Swap Nodes
Nodes post requests to blackboard, indicating volume of workload to add/remove
Agent on blackboard informs nodes of matches, enabling them to swap positions in the ring
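A minimal sketch of the blackboard matching step: overloaded nodes post the workload they want to shed, underloaded nodes post the workload they can absorb, and the agent pairs compatible requests. The function name, data shapes and the fixed tolerance are assumptions for illustration, not the poster's actual protocol.

```python
def match_requests(offers, demands, tolerance=5):
    """offers: {overloaded node: units to shed};
    demands: {underloaded node: units it can absorb}.
    Returns (overloaded, underloaded) pairs that may swap ring positions."""
    matches = []
    free = dict(demands)  # demands not yet matched
    for over_node, shed in offers.items():
        for under_node, room in list(free.items()):
            if abs(room - shed) <= tolerance:
                matches.append((over_node, under_node))
                del free[under_node]  # each demand is matched at most once
                break
    return matches

print(match_requests({"a": 50}, {"b": 48}))  # prints [('a', 'b')]
```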
Remove Operation
If a node is underloaded and its successor can handle its workload, the node can be removed from the ring
The removed node is placed back in the pool of available machines
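The removal condition can be written as a simple predicate. The 95% upper threshold comes from the evaluation; the 10% lower threshold and the function name are assumptions added for illustration.

```python
def can_remove(node_load: float, successor_load: float,
               capacity: float = 100.0,
               lower: float = 0.10, upper: float = 0.95) -> bool:
    """True if the node is underloaded and its successor can absorb its
    workload without crossing the upper threshold."""
    underloaded = node_load < lower * capacity
    successor_fits = successor_load + node_load <= upper * capacity
    return underloaded and successor_fits

print(can_remove(5, 50))  # prints True: 5 units fit easily on the successor
print(can_remove(5, 93))  # prints False: successor would exceed 95 units
```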
Evaluation: ROME vs Chord
Node pool: 1 million nodes
Node capacity: 100 units
Upper threshold: 95% (95 units)
Initial network-wide workload: 1 unit
Initial ROME ring size: 1 node; Chord ring size: 1 million nodes
ROME ring size adapts to network-wide workload requirements
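The lookup-cost gap follows directly from the average-case formula of ½ log2(n) hops and the parameters above: Chord keeps all 1 million nodes in the ring, while ROME's ring starts at a single node. The function name is an assumption for illustration.

```python
import math

def mean_hops(n: int) -> float:
    """Chord's average lookup cost: half of log2(n) hops."""
    return 0.5 * math.log2(n) if n > 1 else 0.0

chord_ring = 1_000_000  # every pool node joins the Chord ring
rome_ring = 1           # ring sized to the 1-unit initial workload

print(round(mean_hops(chord_ring), 1))  # prints 10.0
print(mean_hops(rome_ring))             # prints 0.0
```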
Evaluation: ROME vs Chord
[Figure: four plots against time (0–1800) comparing Chord and ROME: network-wide workload, nodes in ring, mean hops per lookup, and capacity utilisation]
Conclusions
Proposed ROME extensions to slide, swap and remove nodes from the Chord ring
Demonstrated smaller message costs than standard Chord through controlled ring construction using ROME
When the capacity of machines exceeds the ring workload, lookups take fewer hops than in standard Chord
When the capacity of machines and the ring workload are nearly equal, lookups take an equivalent number of hops to standard Chord