Device (Addressing &) Discovery

Prasun Dewan

Department of Computer Science University of North Carolina

[email protected]

2

Addressing Devices vs. Traditional Servers

Traditional servers:

Server and network are always around - static address

Centralized, heavyweight scheme acceptable to install and give unique addresses to servers

Client expected to know the traditional server address (imap.cs.unc.edu)

Discovery phase possible: a logical name (www.cnn.com) can be bound to a physical name dynamically

Devices:

Devices may be dynamically added on ad-hoc networks - dynamic address

With so many dynamic devices and ad-hoc networks, a lightweight, decentralized scheme is needed

Client may not know or care about the device server address ("print to the nearest printer", "turn off all light bulbs")

Implies later binding, and thus a discovery phase

3

Step 0: Addressing - control point and device get an address

Use a DHCP server; else use Auto IP

What is Auto IP? IETF draft "Automatically Choosing an IP Address in an Ad-Hoc IPv4 Network"

What steps does it take? (sketched below)
  Pick an address in the 169.254/16 range
  Check to see if it is used (ARP)
  Periodically check for a DHCP server

Could use DNS and include a DNS client
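The steps above translate almost directly into code. A minimal sketch, assuming a hypothetical arpProbe() helper that sends an ARP probe and reports whether any host already defends the candidate address (a real implementation needs raw-socket access for ARP):

    import java.util.Random;

    class AutoIp {
        static String pickAddress() {
            Random rnd = new Random();
            while (true) {
                // RFC 3927 range 169.254/16; the first and last /24 are reserved
                String candidate = "169.254." + (1 + rnd.nextInt(254)) + "." + rnd.nextInt(256);
                if (!arpProbe(candidate))   // nobody defends it: claim the address
                    return candidate;
            }
        }
        // Hypothetical: send an ARP probe for addr; true if someone replies
        static boolean arpProbe(String addr) { return false; }
    }

A real implementation would also keep probing for a DHCP server periodically and give the self-assigned address up when one appears.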

4

Discovery: Active and Passive

SSDP solution: hybrid approach
  Advertisement has a lifetime
  Can simulate a pure push model
  HTTP over UDP

What if a message gets lost? Must send each UDP message 3 times (see the sketch below)

Solution over TCP planned

[Diagram: one client, two servers]
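A sketch of the 3x retransmission, sending an SSDP-style NOTIFY on the standard SSDP multicast group 239.255.255.250:1900 (the header values here are illustrative):

    import java.net.*;

    class SsdpAnnounce {
        public static void main(String[] args) throws Exception {
            String notify = "NOTIFY * HTTP/1.1\r\n"
                          + "HOST: 239.255.255.250:1900\r\n"
                          + "NT: upnp:rootdevice\r\n"
                          + "NTS: ssdp:alive\r\n"
                          + "CACHE-CONTROL: max-age=1800\r\n\r\n";  // advertisement lifetime
            byte[] payload = notify.getBytes("US-ASCII");
            InetAddress group = InetAddress.getByName("239.255.255.250");
            try (DatagramSocket sock = new DatagramSocket()) {
                for (int i = 0; i < 3; i++) {   // UDP gives no delivery guarantee: send 3 times
                    sock.send(new DatagramPacket(payload, payload.length, group, 1900));
                    Thread.sleep(100);          // small spacing between the copies
                }
            }
        }
    }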

5

Issues raised by UPnP discovery

Scaling problem of multicast-based discovery
Auto shut-off problem
Simple-minded search (not attribute-based)
Lack of access control

6

Jini Discovery

[Diagram: the server join()s its service object and attributes with the name server (NS); the client lookup()s by Java type and attribute template and receives the service object and attributes; both sides discover() the NS, which announce()s itself]

7

Jini Discovery

Discovery is of a Java object reference
  Can be used directly to invoke methods or register for events

Language-based solution
  Can search by type
  But the type is a Java interface/class, e.g. edu.unc.cs.appliances.Printer
  Can use inheritance for matching, e.g. edu.unc.cs.appliances.Output (see the sketch below)

Versioning problems
  Client's type version may not be the same as the server's
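Hypothetical device interfaces using the slide's example names (imagined in package edu.unc.cs.appliances); because Printer extends Output, a Jini search for the supertype Output also matches services registered as Printer:

    interface Output { }

    interface Printer extends Output {
        void print(byte[] document);   // invoked directly on the discovered proxy
    }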

8

Jini Discovery

Service has typed attributes
  Color printing, local printer name, physical location, document format, paper size, resolution, access list
  Some of this info may come from a third party (local admin): physical location, local name, access list

Client specifies
  Type of service
  Template of attributes
    Non-null values must match; the remaining fields are filled in by discovery, e.g. the user interface (see the sketch below)
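A sketch of the client side using the standard Jini lookup classes; the Location entry values and the Printer interface stand in for whatever a real deployment registers:

    import net.jini.core.entry.Entry;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.lookup.entry.Location;

    class FindPrinter {
        static Object findPrinter(ServiceRegistrar registrar) throws Exception {
            Location loc = new Location("2", "115", "Sitterson");  // floor, room, building
            ServiceTemplate tmpl = new ServiceTemplate(
                    null,                           // any service ID
                    new Class[] { Printer.class },  // required Java type (matches subtypes)
                    new Entry[] { loc });           // non-null attribute values must match
            return registrar.lookup(tmpl);          // service proxy object, or null
        }
    }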

9

Jini Lookup

Special name server expected on the network

Servers join
  Can register an object by value (local) or by reference (remote)
  Service has a lease

Clients lookup
  Proxy of a by-reference object loaded
  Copy of by-value objects loaded
    May behave as a smart proxy
  Class of the object may also be dynamically loaded

Servers and clients discover it using a LookupDiscovery object, which multicasts

Discovery service multicasts to announce its existence

10

Jini Discovery

[Diagram repeated from the earlier Jini slide: join(), lookup(), discover(), announce() between client, NS, and server]

What if no name server?

11

Peer Discovery

[Diagram: client and server interacting directly, without an NS]

What if no name server?
  Client uses some special method in the server to find the server?
  Server uses some special method in the client to announce its service?
  Actually, can reuse the methods of lookup-based discovery.

12

Peer Lookup

What if no name server? Client pretends to be a lookup service
  multicasting announcement of the service
  replying to lookup service searches?

Servers send it join information
  Can filter it

[Diagram: servers join() directly with the client acting as an NS; the client announce()s itself; discover()?]

13

Dual?

What if no name server? Server pretends to be a name server
  sending announcement of the service
  replying to lookup service searches

Clients send it lookup information
  Can filter it

Every lookup request sent to every server?

[Diagram: client lookup()s by type and template directly against the server acting as an NS; discover()?, announce()]

14

Service Location Protocol

What if no name server? Client multicasts the lookup request rather than unicasting it
  More network traffic?

SLP address: IP address, port number, type-dependent path
  Not bound to Java

[Diagram: client mlookup()s by type and template; servers with services and attributes announce()]

15

SLP with Lookup and DHCP

[Diagram: the NS registers with DHCP; client and server obtain the NS address from DHCP (lookup(NS)); the server join()s its service and attributes with the NS; the client lookup()s by type and template]

16

No DHCP

[Diagram: as above, but without DHCP; client and server discover() the NS via multicast and the NS announce()s itself]

17

Replicated Lookups

[Diagram: the server join()s its service and attributes with both NS1 and NS2; the client discover()s and lookup()s one of them; discover(l1)]

Joins sent to all NSs discovered. Not considered desirable to discover all NSs.

Not a multicast!

18

Scoped Discovery

[Diagram: two name servers scoped "accounts" (NS1) and "legal" (NS2); Server 1 join()s its service with NS1 and announce(accounts)s; Server 2 join()s with NS2 and announce(legal)s; the client discover(legal)s and lookup()s by type and template against NS2 only]

19

Peer Scoped Discovery

[Diagram: the client mlookup(legal)s by type and template; only servers registered in the "legal" scope reply; servers in the "accounts" scope ignore the request]

20

SLP Scaling

NS discovered through DHCP
  Also through UDP-based multicast
  Repeated multicasts list who has been found so far

NS replicated
  Synchronized via multicasts of joins to detected lookups

Lookups partitioned
  Room 115 vs. 150, Legal vs. Accounts Payable, a la DNS
  Partitions can overlap

Wide-area scaling?
  Every service contacts every name server in the partition discovered.

21

Wide-Area Extension (WASRV)

Divide the world into SLP domains

Each domain has
  An advertising agent
    Multicasts services to other domains
    Configuring necessary
      Only selected services multicast
      Otherwise everything shared
  A brokering agent
    Listens to multicasts from remote advertising agents
    Advertises services to the local domain

22

WASRV Limitations (addressed by SDS)

Wide-area multicast "ill-advised"

Configuring necessary to determine what is multicast
  Otherwise everything shared

Linear scaling

23

Controlled Partitioning

Partitioning automatically selected
  Based on query criteria? Globe, OceanStore, Tapestry, Chord, Freenet
  location = UNC -> UNC server; location = UCB -> Berkeley server

Works as long as there is a single criterion
  location = UNC & type = printer
    All printers and all UNC devices in one domain
  type = printer & model = 2001
    All printers, all 2001 models, and all UNC devices in one domain

Popular criteria (2001 models) can lead to a bottleneck

24

Query Flooding

No controlled partitioning

Query sent to all partitions

Service announcement sent to a specific (arbitrary or local) partition

Queries sent frequently; no control over the query rate
  Scaling problem

25

Centralization

A single central name server
  Napster, Web search engines
  Multi-criteria search
  Bottleneck

26

DNS Hybrid

Hierarchical scheme: a single central name server at the root level
  All queries and service announcements contact it
  Forwards requests to partitioned lower-level servers based on a single-criterion query

Works because
  Positive and negative caching
  Low update rates

27

SDS Hybrid: Partitioning + Centralization + Query Flooding

28

SDS Query Filtering

Service announcement given to a specific NS node - the local domain node
Query given to a specific NS node - the local domain node

NS advertisement on a well-known multicast address
  Address can be used to find the NS using expanding ring search: increase TTL until found

From each NS, all other nodes are reachable
  Information reachable from a neighbour is summarized
  Summarization is lossy aggregation (hashing); can give false positives

Can direct a query not satisfied locally to matching neighbours
  Multiple neighbours because of false positives
  Can do this in parallel: intelligent query flooding
  Recursive algorithm

29

Mesh vs. Tree

Mesh
  Need to detect cycles: TTL, or give unique IDs to queries to avoid repetition

Tree
  Can summarize information in one direction: parent summarizes children
  Upper nodes are bottlenecks, a la DNS centralization
    But queries start at any node rather than always at the root, so upper nodes may not be contacted
  Service announcements propagated to all ancestors
    Bandwidth used for propagation is bounded: the less the bandwidth, the less responsive to changes
    Summaries are propagated, so lower load: the more concise the summary, the more query flooding

30

Summarization Filters

All-pass filter
  Low update load, high query load: must contact all neighbours

Send complete description to neighbour
  No loss in aggregation: high update load, no flooding

Need something in between
  False positives OK
  False negatives not OK (those nodes would not be searched)

31

Centroid (Indexed Terminals) Filter

WHOIS++/LDAP: for each attribute, send all values of that attribute

Service 1: location = "UNC", model = "2001"
Service 2: location = "Duke", model = "2000"

Summary: location: "UNC", "Duke"; model: "2001", "2000"

False positives: location = "UNC", model = "2000"; location = "Duke", model = "2001"

No false negatives

32

Cross Terminals Filter

For each service description, create hashes of the attribute cross products

Service: location = "UNC", model = "2001"
Possible matching queries:
  location = "UNC"
  model = "2001"
  location = "UNC" & model = "2001"

If the actual query hash equals a possible matching query hash, then positive

Bound the number of attributes considered, to avoid an exponential number of cross products

Must also find cross products of the query, because which service attributes were used in cross products is unknown; otherwise false negatives (see the sketch below)
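A sketch of cross-terminal generation: hash every non-empty subset of a service's attribute-value pairs (up to maxAttrs pairs), in a canonical order, so any conjunctive query over those pairs hashes to a stored terminal. The types and helper names are illustrative:

    import java.util.*;

    class CrossTerminals {
        static Set<Integer> terminals(Map<String, String> desc, int maxAttrs) {
            List<String> pairs = new ArrayList<>();
            desc.forEach((k, v) -> pairs.add(k + "=" + v));
            Collections.sort(pairs);                       // canonical order for hashing
            Set<Integer> out = new HashSet<>();
            int n = pairs.size();
            for (int mask = 1; mask < (1 << n); mask++) {  // every non-empty subset
                if (Integer.bitCount(mask) > maxAttrs) continue;  // bound the cross product
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++)
                    if ((mask & (1 << i)) != 0) sb.append(pairs.get(i)).append('&');
                out.add(sb.toString().hashCode());
            }
            return out;
        }

        // The query is decomposed the same way; if all of its terminals are
        // stored, the service possibly matches (false positives allowed).
        static boolean mayMatch(Set<Integer> stored, Map<String, String> query, int maxAttrs) {
            return stored.containsAll(terminals(query, maxAttrs));
        }
    }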

33

Bloom-Filtered Cross Terminals

Given a list of hashes d_1, d_2, ..., d_n, create a compressed word of size L, using hashing salts s_1, s_2, ..., s_k

Bit x is set if hash(d_i + s_j) mod L = x for some i, j

An item d is in the list if hash(d + s_j) mod L is set for all j

What happens when a service is deleted? Keep a reference count with each set bit (see the sketch below)
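A minimal sketch of this structure with reference counts (a counting Bloom filter), assuming String salts and the integer terminal hashes from the cross-terminal sketch above:

    class CountingBloom {
        final int L;                 // word size in bits
        final String[] salts;        // s_1 .. s_k
        final int[] refCount;        // reference count per bit, so deletes are safe

        CountingBloom(int sizeBits, String[] salts) {
            this.L = sizeBits; this.salts = salts; this.refCount = new int[sizeBits];
        }
        int bit(int hash, String salt) {
            return Math.floorMod((hash + "+" + salt).hashCode(), L);  // hash(d_i + s_j) mod L
        }
        void add(int hash)    { for (String s : salts) refCount[bit(hash, s)]++; }
        void remove(int hash) { for (String s : salts) refCount[bit(hash, s)]--; }
        boolean mayContain(int hash) {       // set for all j => possibly present
            for (String s : salts) if (refCount[bit(hash, s)] == 0) return false;
            return true;                     // false positives possible, no false negatives
        }
    }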

34

False Positives in BCT
  Cross products
  Limiting the number of attributes
  Bloom filters

35

Range Queries

Query:
  location = University of *
  model > 2000
  type = printer

The Bloom filter elides the query, ignoring range attributes: only type = printer is hashed

Keeps a list of false positives with queries (negative caching)

36

Tree Construction

Supports multiple hierarchies
  Can be specified by configuration file
    Admin domains: edu, com
  Computed automatically
    Network topology (hops)
    Geographic location (distance)

Query specifies which domain to consider in the search.

Special primary domain guarantees coverage of all nodes.

37

Tree Construction

A node can serve multiple levels in the hierarchy
Child nodes can be dynamically added/deleted
Services and clients continuously listen for domain and address announcements
Replication for fault tolerance
  Multicast address representing the replicas
  Replicas and parents listen for heartbeats

38

Other SDS Features

Security
Hierarchical rather than flat attributes

39

SDS Security

Kinds of security:

Access control
  An arbitrary client cannot discover an arbitrary service
  Arbitrary clients and services can invoke NS methods (lookup() and join())

Authentication
  Clients, services, and NS

Privacy
  Service descriptions
  Not queries or NS announcements

40

Access Control Mechanisms

Capability lists vs. access lists

Access lists for persistence
  The AC server keeps them
  The client (not the NS) contacts the AC server

Derived capability lists for performance
  A la an open call returning a file handle
  Given to the client by the AC server after authentication

41

Authentication Mechanisms

Trusted machine address, port number
  Cannot use when the address is variable

Public key
  Sender encrypts (signs) the message with its own private key
  Authenticator decrypts the message with the sender's public key
  Used for client and server

42

Privacy Mechanisms

Symmetric encryption
  Encryption and decryption key are the same
  E.g. XOR, Blowfish

Asymmetric encryption
  Sender encrypts with the receiver's public key
  Receiver decrypts with the receiver's private key
  RSA

Performance:
  Blowfish: encryption 2.0 ms, decryption 1.7 ms
  RSA: encryption 15.5 ms, decryption 142.5 ms (raising the message to the power of the key)
  DSA: signature 33.1 ms, verification 133.4 ms

NS can get overloaded with asymmetric encryption

How to establish a symmetric key?

43

SDS Privacy Mechanisms

Use asymmetric encryption to establish a symmetric key for some tunable time period

Use the symmetric key for sending info during that period.

44

SDS Hierarchical Attributes

Service associated with an XML DTD describing its attributes

join() describes hierarchical attributes
  Must follow the syntax specified by the DTD
  Can add tags

<?xml version="1.0"?>
<!DOCTYPE printcap SYSTEM "http://www/~ravenben/printer.dtd">
<printcap>
  <name>print466; lws466</name>
  <location>466</location>
  <color>yes</color>
  <postscript>yes</postscript>
  <rmiaddr>http://joker.cs/lws466</rmiaddr>
</printcap>

45

SDS Hierarchical Attributes

lookup() describes a hierarchical template

<?xml version="1.0"?>
<printcap>
  <color>yes</color>
  <postscript>yes</postscript>
</printcap>

DTD for it? Bound to a particular type?

46

An Architecture for a Secure Service Discovery Service

Steven Czerwinski, Todd Hodes, Ben Zhao, Anthony Joseph, Randy Katz

UC Berkeley

Internet Scale Research Group

47

Outline

Intro
Architecture
Security
Wide Area
Conclusion

48

Supporting Ubiquitous Computing

Ubiquitous computing envisions...
  Billions of computers and devices available to users
  Devices seamlessly interacting with all others
  Networks and computers as an unobtrusive utility

One problem: locating servers and devices
  How can you locate a light bulb among billions?
  Solution must be scalable, fault-tolerant, self-configuring, secure, and support the wide area

Existing solutions don't adequately address these needs

49

A Secure Service Discovery Service

Services are applications/devices running in the network

One piece of the puzzle
  Helps manage the explosive growth of services
  Aids in configuration by providing indirection
  Aids in protecting users and services by providing security

The idea: a secure directory tool which tracks services in the network and allows authenticated users to locate them through expressive queries

50

Berkeley Service Discovery Service

[Diagram: client czerwin@cs asks the SDS "Where is a color printer?" with an XML query; the SDS returns the "443 Phaser" service description]

XML query:

<query>
  <type> io.printer </type>
  <color> yes </color>
</query>

Service description:

<service>
  <name> 443 Phaser </name>
  <type> io.printer </type>
  <location> Soda/443 </location>
  <color> yes </color>
  <postscript> yes </postscript>
  <contact> <url> rmi://batman.cs </url> </contact>
</service>

51

Discovery Services

Discovery/directory services are not new
  Provide a mapping of attribute values to domain-specific addresses
  Examples: telephone book, card catalogs, etc.

Computer network discovery services
  DNS, NIS, SAP, Globe, LDAP, Jini Lookup service

52

Differentiating Discovery Services

Query routing
  Implicitly specified by the query (DNS, Globe)

Queries
  Query grammar complexity (LDAP vs. DNS)
  Push (advertisements) versus pull (queries)
    Pull only (DNS) vs. push only (SAP, modulo caching)

Update rate
  Short for mobility vs. long for efficient caching

53

Discovery Services, Cont.

Bootstrapping
  "Well-known" local name ("www.")
  List of unicast addresses (DNS)
  Well-known global/local multicast address (SAP, SLP)

Soft state vs. hard state
  Implicit recovery vs. guaranteed persistence

Service data
  Reference (Globe) vs. content (SAP+SDP)

Security
  Privacy and authentication

54

Features of the Berkeley SDS

Hierarchical network of servers
  Multiple hierarchies based on query types

Queries
  Use XML for service descriptions and queries

Bootstrapping via multicast announcements
  Listen on a well-known global channel for all parameters

Soft-state approach
  State rebuilt by listening to periodic announcements

Secure
  Use certificates/capabilities to authenticate

55

The Berkeley SDS Architecture

[Diagram: SDS servers arranged hierarchically (UC Berkeley -> Soda Hall, Cory Hall; Soda Hall -> Room 466, Room 464); services (printer, converter, jukebox, printer); client "czerwin@cs"; Certificate Authority; Capability Manager]

56

The Berkeley SDS Architecture

[Diagram as on the previous slide, highlighting the SDS servers]

SDS Servers
  Create the hierarchy for query routing
  Store service information and process requests
  Advertise their existence for bootstrapping

57

The Berkeley SDS Architecture

[Diagram as above, highlighting the services]

Services
  Responsible for creating and propagating XML service descriptions

58

The Berkeley SDS Architecture

[Diagram as above, highlighting the client]

Clients
  The users of the system
  Perform lookup requests via an SDS server

59

The Berkeley SDS Architecture

[Diagram as above, highlighting the Certificate Authority]

Certificate Authority
  Provides a tool for authentication
  Distributes certificates to other components

60

The Berkeley SDS Architecture

[Diagram as above, highlighting the Capability Manager]

Capability Manager
  Maintains access control rights for users
  Distributes capabilities to other components

61

How the Pieces Interact

[Diagram: SDS server and backup SDS server; client; printer and music server]

Server announcements:
  Global multicast address
  Periodic, for fault detection
  Provide all parameters

Service announcements:
  Multicast address from the server
  Periodic, for soft state
  Contain the description

Client queries:
  SDS address from the server
  Send a service specification
  Get the service description and URL

62

Security Goals Access control

Authentication of all components

Encrypted communication

63

Security Goals

Access control
  Services specify which users may "discover" them

Authentication of all components
  Protects against masquerading
  Holds components accountable for false information

Encrypted communication
  Authentication meaningless without encryption
  Hides sensitive information (service announcements)

No protection against denial-of-service attacks

64

Security Hazards

[Diagram: SDS server and backup SDS server (<soda-admin@cs>); client (<czerwin@cs>); printer (<ravenben@cs>) and music server (<ninja@cs>)]

Clients:
  Encryption for 2-way communication
  Have to prove rights
  Authenticated RMI

Server announcements:
  Have to sign information
  No privacy needed
  Signed broadcasts

Service announcements:
  Only the intended server can decrypt
  Signed descriptions to validate
  Secure one-way broadcasts

All components:
  Use certificates for authentication

65

Secure One-Way Broadcasts

[Diagram: the service description is signed (DSA) with the service's private key; the signed description is encrypted symmetrically (Blowfish) under a session key K_Session; K_Session is encrypted asymmetrically (RSA) under the server's public key EK_Public; the broadcast carries K_Session{signed description} and EK_Public{session key}]

Key idea: use an asymmetric algorithm to encrypt a symmetric key (see the sketch below)
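A minimal sketch of the sender side using standard JCE algorithm names; key distribution and message framing are omitted:

    import java.security.*;
    import javax.crypto.*;

    class OneWayBroadcast {
        // Returns { sealed description, signature, sealed session key }
        static byte[][] seal(byte[] description, PrivateKey serviceDsaKey,
                             PublicKey serverRsaKey) throws GeneralSecurityException {
            Signature dsa = Signature.getInstance("SHA1withDSA"); // sign so receivers can validate
            dsa.initSign(serviceDsaKey);
            dsa.update(description);
            byte[] signature = dsa.sign();

            SecretKey session = KeyGenerator.getInstance("Blowfish").generateKey();
            Cipher blowfish = Cipher.getInstance("Blowfish");
            blowfish.init(Cipher.ENCRYPT_MODE, session);
            byte[] sealedDesc = blowfish.doFinal(description);    // symmetric: cheap per message

            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, serverRsaKey);
            byte[] sealedKey = rsa.wrap(session);                 // asymmetric: only for the key

            return new byte[][] { sealedDesc, signature, sealedKey };
        }
    }

The receiver unwraps the session key with its RSA private key, caches it, and from then on pays only the symmetric cost, as the next slide describes.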

66

Secure One-Way Broadcasts

[Diagram: the server uses its private key EK_Private (RSA) to recover K_Session, then Blowfish to recover the signed service description; the session key is cached]

To decode, only the intended server can decrypt the session key
  Use the session key to retrieve the service description
  Cache the session key to skip later asymmetric operations

67

Wide Area

[Diagram: hierarchy with Root over UC Berkeley, Stanford U, Kinko's, and Mobile People; UC Berkeley over UCB CS and UCB Physics; UCB CS over ISRG, IRAM, and Room 443; Kinko's over Kinko's #123]

Hierarchy motivation:
  Divide responsibility among servers for scalability
The big question:
  How are queries routed between servers?

68

The Wide-Area Strategy

Build hierarchies based upon query criteria
  Administrative domain
  Network topology
  Physical location

Aggregate service descriptions (lossy)

Route queries based on aggregation tables
  Parent-Based Forwarding (PBF)

69

Service Description Aggregation

Hash values of tag subsets of the service description used as the description summary
Hash list compressed with a Bloom filter [Bloom70]
Fixed-size aggregation tables prevent explosion at the roots
Guarantees no false negatives
Can have false positives; probability affected by table size

Algorithm:
  To add a service, compute the description tag subsets, insert into the Bloom filter table
  To query, compute the query tag subsets, examine the corresponding entries in the Bloom filter table for possible matches

70

Multiple Hierarchies

[Diagram: administrative hierarchy, as on the Wide Area slide: Root over UC Berkeley, Stanford U, Kinko's, Mobile People; UC Berkeley over UCB CS, UCB Physics; UCB CS over ISRG, IRAM, Room 443; Kinko's over Kinko's #123]

Administrative Hierarchy

71

Multiple Hierarchies

[Diagram: physical location hierarchy: Root over Northern California and Stanford, US; Northern California over Berkeley, US; Berkeley, US over Hearst St; Hearst St over Soda Hall and Kinko's #123; Soda Hall over Room 443]

Physical Location Hierarchy

72

Query Routing in Action

[Diagram: physical-location hierarchy of SDS servers (Berkeley, US -> Hearst St -> Soda Hall -> Room 443; Hearst St -> Kinko's #123); services; client czerwin@cs]

Client query: color fax? <type>fax</type><color>yes</color>

73

Query Routing in Action

[Diagram as above]

The Room 443 server examines its data and tables, routes to its parent

74

Query Routing in Action

[Diagram as above]

Each server checks its aggregation tables; Hearst St sees a possible hit

75

Query Routing in Action

[Diagram as above]

Kinko's #123 finds a match, returns the service description

76

Conclusion

A tool for other applications
  Provides a listing of services in the network
  XML descriptions allow for flexibility
  Well-defined security model
  Fault-tolerant, scalable
  Releasing the local-area implementation as part of Ninja

Ongoing work
  Experimenting with the wide-area strategy and caching

For more information: [email protected]

77

INS Issues

System-supported search
  System parameters: hop count
  Application parameters: least loaded printer

Mobility
  Node mobility
    Node may move between discovery and operation
    End-to-end solution
  Service mobility
    Ideal node changes: least loaded printer, closest location

How to store hierarchical attributes?

Fault tolerance and availability

78

System-Supported Search

Allow a service to advertise an application-defined metric (load)

Single metric
  Either least loaded or closest printer

Name server will find the service with the least value of the metric

79

Mobility

Client never sees a physical address
  Query serves as an intentional name for source and destination

Discovery infrastructure also does message routing
  Different from query routing in SDS

Conventional model: get address from query, use address to send message

INS model: send the message with the query

What if multiple services match?
  Anycast: send to the service with the least value of the metric
  Multicast: send to all matching services
    Cannot use Internet multicast!

80

Multicast Approach

Internet multicast groups contain Internet addresses
  But Internet addresses may change!

Point-to-point multicast
  Inefficient because of duplication of messages along common paths

Overlay routing network of NSs
  Multiple replicated NSs; the number can vary dynamically
  Each service and client bound to a specific NS
  Spanning tree of NSs, based on round-trip latency
  NS forwards messages to the appropriate neighbours

81

Distributed Spanning Tree Construction

List of current name servers kept in a (possibly replicated) NS server

New name server addition (see the sketch below)
  Gets the list of NSs from the NS server
    The NS server serializes requests
  Pings all existing NSs
  Makes the closest NS its neighbour (parent)
  Puts itself in the NS server list

Do the connections make a spanning tree?
  The NSs are connected, and n-1 links are made
  Any connected graph with n-1 links is a tree
  Nodes are put in the NS server list in linear order; a later node cannot be the parent of an earlier node
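A sketch of the join step. Real INS resolvers measure round-trip latency with their own ping messages; isReachable() timing is used here only as a crude stand-in:

    import java.net.InetAddress;
    import java.util.List;

    class ResolverJoin {
        // Ping every known resolver and return the closest one as the parent.
        static InetAddress pickParent(List<InetAddress> resolvers) throws Exception {
            InetAddress best = null;
            long bestRtt = Long.MAX_VALUE;
            for (InetAddress r : resolvers) {
                long t0 = System.nanoTime();
                if (!r.isReachable(1000)) continue;   // skip unreachable resolvers
                long rtt = (System.nanoTime() - t0) / 1_000_000;
                if (rtt < bestRtt) { bestRtt = rtt; best = r; }
            }
            return best;  // the new resolver links to this parent, then registers itself
        }
    }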

82

Load Balancing

The NS server keeps a list of potential NS sites

As load gets high
  New NSs added at inactive potential sites dynamically attach themselves to the spanning tree

As load gets low at a site
  NS killed; the tree must adjust
  Incremental vs. batch
    Incremental: informs peers (children) about this; they rejoin

83

Storing/Searching Attributes

Service 1 announcement: [service = camera [color = true]] [location = 536]
  Name record: IP addr, next-hop NS
Service 2 announcement: [service = printer [postscript = true]] [location = 536]
  Name record: IP addr, next-hop NS
Query: [location = 536]

[Diagram: name tree rooted at root, with attribute nodes "service" and "location"; service -> camera -> color -> true holds the Service 1 name record; service -> printer -> postscript -> true holds the Service 2 name record; location -> 536 holds both]

84

Storing/Searching Attributes

Lookup(T, q): set of name records

  let S be the set of all name records
  for each av-pair (aq, vq) of the query
    if vq = *
      find the attribute node aq in T
      intersect S with the name records in subtree(aq)
    else
      find the av node (aq, vn) in T matching vq
      if vq or vn is a leaf
        intersect S with the name records in subtree(vn)
      else
        intersect S with the name records in Lookup(subtree(vn), subtree(vq))
  return S
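A compact Java rendering of this algorithm over a minimal name tree; the node types here are hypothetical stand-ins, not INS's actual classes:

    import java.util.*;

    class ValueNode {
        Set<String> nameRecords = new HashSet<>();                   // records stored here
        Map<String, Map<String, ValueNode>> attrs = new HashMap<>(); // attribute -> value -> child

        Set<String> subtreeRecords() {                               // all records below this node
            Set<String> out = new HashSet<>(nameRecords);
            for (Map<String, ValueNode> vals : attrs.values())
                for (ValueNode v : vals.values()) out.addAll(v.subtreeRecords());
            return out;
        }
    }

    class QueryNode {
        Map<String, Map<String, QueryNode>> avPairs = new HashMap<>(); // attribute -> value -> sub-query
    }

    class InsLookup {
        static Set<String> lookup(ValueNode t, QueryNode q, Set<String> all) {
            Set<String> s = new HashSet<>(all);                      // S = all name records
            for (Map.Entry<String, Map<String, QueryNode>> aE : q.avPairs.entrySet()) {
                Map<String, ValueNode> vals =
                        t.attrs.getOrDefault(aE.getKey(), Collections.emptyMap());
                for (Map.Entry<String, QueryNode> vE : aE.getValue().entrySet()) {
                    String vq = vE.getKey();
                    QueryNode subQ = vE.getValue();
                    if (vq.equals("*")) {                            // wildcard: whole attribute subtree
                        Set<String> m = new HashSet<>();
                        for (ValueNode vn : vals.values()) m.addAll(vn.subtreeRecords());
                        s.retainAll(m);
                    } else {
                        ValueNode vn = vals.get(vq);
                        if (vn == null) return Collections.emptySet();        // no match
                        if (subQ.avPairs.isEmpty()) s.retainAll(vn.subtreeRecords()); // leaf
                        else s.retainAll(lookup(vn, subQ, all));     // recurse on both subtrees
                    }
                }
            }
            return s;
        }
    }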

85

Lookup Overhead

T(d) = na * (ta + tv + T(d-1))
T(d) = na * (1 + T(d-1))   (hashing makes ta, tv constant)
T(0) = b                   (intersect all name records with the selected set)
=> T(d) = O(na^d * (1 + b))

With T(0) = O(n) (n = number of name records), "experimentally" from random name specifiers and trees:
T(d) = O(na^d * n)

[Diagram: a tree of depth 2d with branching factors ra (attributes) and rv (values); na attributes per node]

86

Fault Tolerance

All name servers are replicas
  Each announcement forwarded to neighbours
  Announcements sent periodically (heartbeat)

The heartbeat mechanism detects service and NS failure
  If a service fails, its announcement is removed
  If an NS fails, the overlay network is reconfigured

Client queries sent to a single NS
  Load balancing

87

Forwarding Announcements

[Diagram: the complete name tree (root -> service -> camera, printer; camera -> color -> true; printer -> postscript -> true; root -> location -> 536), with the Service 1 and Service 2 name records at the leaves]

88

Forwarding Announcements

[Diagram: the same name tree being rebuilt at a neighbour as announcement fragments arrive: [service = camera [color = true]] and [location = 536] for Service 1; [location = 536] for Service 2]

89

Get Name

GetName(r):
  name <- a new empty name-specifier
  root.PTR <- name
  for each parent value node of r
    Trace(parentValueNode, null)
  return name

Trace(valueNode, name):
  if valueNode.PTR != null
    if name != null
      graft name as a child of valueNode.PTR
  else
    valueNode.PTR <- (valueNode.parentAttribute(), valueNode.value())
    if name != null
      graft name as a child of valueNode.PTR
    Trace(valueNode.parentValue(), valueNode.PTR)

Synthesized attributes!

90

Update time: t = n * (Tlookup + Tgraft + Tupdate + network_delay)

Experiments show this is a costly operation

Solution is to divide the name space into disjoint virtual spaces, a la SLP
  camera-ne43, building-ne43
  Each service heartbeat is then not sent to each NS

Sending tree diffs as a solution?

91

Applications

Floorplan: a map-based navigation tool
  Example service: [service = camera [entity = transmitter] [id = a]] [room = 150]
  Retrieving the map: [service = locator [entity = server]] [region = first floor]
  Retrieving devices: [room = 150]

92

Applications

Load-balancing printer
  Unicast: [id = hplj156]
    Retrieve, delete, submit job
  Anycast: [service = printer [entity = spooler]] [room = 517]
  Printer advertises a metric based on error status, number of enqueued jobs, length of jobs

93

Applications

Camera: a mobile image/video service
  Request a camera image
    Destination address: [service = camera [entity = transmitter]] [room = 510]
    Source address: [service = camera [entity = receiver] [id = r]] [room = 510]
  Transmitters/receivers can move
  Multicast to receivers: multiple users viewing an image
  Multicast to senders: a user viewing multiple cameras

94

NS Caching

Case 1
  Camera multicasts an image
  Some other client subsequently requests the image

Case 2
  Client requests an image, then requests it again
  A la web caching

Caching implemented for any service using intentional names
  Assumes an RPC call interface?

Cache lifetime given

95

Design and Implementation of an Intentional Naming System

William Adjie-Winoto, Elliot Schwartz, Hari Balakrishnan, Jeremy Lilley

MIT Laboratory for Computer Science
http://wind.lcs.mit.edu/

SOSP 17, Kiawah Island Resort, December 14, 1999

96

Environment

Heterogeneous network with devices, sensors and computers

Dynamism
  Mobility
  Performance variability
  Services "come and go"
  Services may be composed of groups of nodes

Example applications
  Location-dependent mobile apps
  Network of mobile cameras

Problem: resource discovery

97

Design Goals and Principles

Expressiveness
  Names are intentional; apps know what, not where

Responsiveness
  Integrate name resolution and message routing (late binding)

Robustness
  Decentralized, cooperating resolvers with a soft-state protocol

Easy configuration
  Name resolvers self-configure into an overlay network

98

Naming and Service Discovery

Wide-area naming
  DNS, Global Name Service, Grapevine
Attribute-based systems
  X.500, Information Bus, Discover query routing
Service location
  IETF SLP, Berkeley Service Discovery Service
Device discovery
  Jini, Universal Plug-and-Play

Intentional Naming System (INS)
  Mobility & dynamism via late binding
  Decentralized, serverless operation
  Easy configuration

99

INS Architecture

[Diagram: overlay network of name resolvers between clients and services; intentional anycast and intentional multicast; late binding; messages routed using intentional names, with the name carried in the message]

100

Name-Specifiers

Query (with data):
  [vspace = mit.edu/thermometer] [building = ne43 [floor = 5 [room = *]]] [temperature < 60F]

Advertised name:
  [vspace = lcs.mit.edu/camera] [building = ne43 [room = 510]] [resolution = 800x600] [access = public] [status = ready]

Expressive name language (like XML)
Resolver architecture decoupled from the language
Providers announce descriptive names; clients make queries
  Attribute-value matches
  Wildcard matches
  Ranges

101

Name Lookups

Lookup: tree-matching algorithm
  AND operations among orthogonal attributes

Polynomial time in the number of attributes
  O(nd), where n is the number of attributes and d is the depth

102

Resolver Network

Resolvers exchange routing information about names
Multicast messages forwarded via resolvers
Decentralized construction and maintenance
Implemented as an "overlay" network over UDP tunnels
  Not every node needs to be a resolver
  Too many neighbors causes overload, but need a connected graph
  Overlay link metric should reflect performance
  Current implementation builds a spanning tree

104

Late Binding

Mapping from name to location can change rapidly

Overlay routing protocol uses triggered updates

Resolver performs lookup-and-forward
  lookup(name) is a route; forward along that route

Two styles of message delivery
  Anycast
  Multicast

105

Intentional Anycast

lookup(name) yields all matches
Resolver selects a location based on an advertised, service-controlled metric
  E.g., server load
Tunnels the message to the selected node
Application-level vs. IP-level anycast
  The service-advertised metric is meaningful to the application

106

Intentional Multicast

Use an intentional name as the group handle
Each resolver maintains a list of neighbors for a name
Data forwarded along a spanning tree of the overlay network
  Shared tree, rather than per-source trees
Enables more than just receiver-initiated group communication

107

Robustness

Decentralized name resolution and routing in "serverless" fashion

Names are weakly consistent, like network-layer routes
  Routing protocol with periodic & triggered updates to exchange names

Routing state is soft
  Expires if not updated
  Robust against service/client failure
  No need for explicit de-registration

109

Routing Protocol Scalability

vspace = set of names with common attributes

Virtual-space partitioning: each resolver now handles a subset of all vspaces

[Diagram: a resolver's name-tree handles vspace=camera and vspace=5th-floor, with routing updates for all names; one vspace is delegated to another INR]

111

Applications

Location-dependent mobile applications
  Floorplan: a map-based navigation tool
  Camera: a mobile image/video service
  Load-balancing printer
  TV & jukebox service

Sensor computing

Network-independent "instant messaging"

Clients encapsulate state in late-binding applications

112

Status

Java implementation of INS & applications
  Several thousand names on a single Pentium PC; discovery time linear in hops
  Integration with Jini, XML/RDF descriptions in progress

Scalability
  Wide-area implementation in progress

Deployment
  Hook the wide-area architecture into DNS
  Standardize virtual space names (like MIME for devices/services)

113

Conclusion

INS is a resource discovery system for dynamic, mobile networks

Expressiveness: names that convey intent

Responsiveness: late binding by integrating resolution and routing

Robustness: soft-state name dissemination with periodic refreshes

Configuration: resolvers self-configure into an overlay network

114

Active Name Issues

Example: printer selected
  randomly
  round-robin

INS metric of load not sufficient
Need some application-specific code

115

Active Approaches

Active networks
  Install code in Internet routers
  Can only look at low-level packets: not enough semantics

Active services
  Allow applications to install arbitrary code in the network
  Application-awareness

116

Active Names

Install code in name servers
  Enough semantics
  Application-transparency

vs. INS
  INS installs a metric and data; the extension is to install code
  Declarative vs. procedural
  Security problem: confine code, a la applets

vs. HTTP caching
  Special proxies translate names

117

Active Names

An active name specifies
  a name
  a name space: a program to interpret the name
  recursively, until the name space is a specifier of a predefined name space program such as DNS or HTTP

Example: printing round-robin
  Install a program that listens for printer announcements
  It picks printers in round-robin order

118

Active Name RPC

Example: camera data caching

Active name used in an RPC call
  The name space program has a notion of data (parameters) and reply
  It can cache data

119

Upward Compatibility

Example: service name without a transport protocol
  www.cs.utexas.edu/home/smith
  The transport protocol is the concern of the name space program

Root name space delegates to the WWW-root active name program
  WWW-root implements default web caching

Response to a URL indicates the name space program to interpret the name
  www.cs.utexas.edu/home/smith/active/*
  The original request must be a prefix of the active name program's scope
  www.cs.utexas.edu/* illegal

120

Active Name Delegation

Example
  Might wish to transcode a camera image from color to black and white
  The name space program in the client's NS can transcode
  But might want transcoding nearer the data producer if the network is slow

Name delegation
  Each name space program interprets part of the name and data input stream
  Chooses the next NS and name space program to interpret the rest
  DNS is a special case

121

After Methods

The return path for a result may not be the same as the path for the data
  Request forwarded to a more appropriate NS

Each delegation specifies the return path for the result
  The active name of the after-method called is pushed on a stack of current active names
  The stack is popped on the way back

Each node on the return path
  Pops an active name
  Sends the name, part of the result, and the popped stack to the next NS

The leaf destination chooses the closest NS and the top-level after-method

Can influence how the request is serviced on the network
  Transcoding, adding banner ads

The name space resolver and after-method at other return nodes choose the processing and the subsequent node

Direct call cost: 0.2 s; after-method cost: 3.2 s

122

Security

Answer could come back from anyone: how to trust?

Assume transitive trust
  A trusts B, B trusts C => A trusts C

Client sends a capability
  Some unforgeable object
  It is passed along
  Trust anyone who returns it.

123

Active Names: Flexible Location and Transport of Wide-Area Resources

Amin Vahdat

Systems Tea, April 23, 1999

124

Active Naming Vision

Today: a name is a static binding to a physical location and object (DNS, LDAP)

Want: dynamic, flexible binding to service/data
  Server selection among geographic replicas (CISCO, IBM, etc.)
  Client customization (e.g., distillation, custom CNN)
  Server customization (e.g., hit counting, ad rotation, etc.)

An Active Name is a mobile program that invokes a service or acquires data
  Flexibly supports various naming semantics
  Minimizes wide-area communication

125

Outline

Background
Active Names
  Opportunity
  Implementation
Examples
  Programmability
  Location Independence
  Composibility
Conclusions

126

Current Name Services

DNS translates machine names to IP addresses
  Updates propagated over a period of days
  Bindings cached at the client, TTL invalidation
  Assumes bindings change slowly and are updated centrally

RPC name service binds caller and callee
  Assumes every service provider is equivalent
  In the wide area: heterogeneous quality of service depending on the selection of provider

127

Wide-Area Naming Today: HTTP Redirect w/ Dynamic Content

[Diagram: (1) client sends name to proxy; (2) DNS server returns host binding; (3) first HTTP server returns a URL redirect; (4) proxy follows the URL to a second HTTP server; (5) name program; (6) data returned]

128

Current Attempts to Add Flexibility to Name Binding

HTTP redirect
DNS round robin
Cisco Local Director/Distributed Director
URNs with sed scripts to mangle names
Global object IDs (e.g., Globe, Legion)
Web caches/Active Caches
Mobile IP
...

129

The Active Naming Opportunity

Name translation often incorporates client-specific info
  Custom home pages (www.cnn.com => your news page)
  Distilling pictures to client requirements (small B&W for PDAs)

Naming is often a step in a larger process

Availability of remotely programmable resources
  Java, Active Networks

Importance of minimizing wide-area latency for requests

130

Active Naming Implementation

Clients generate Active Names: domain:name

Active Name Resolver determines a domain-specific program
  Location independent, can run anywhere
  Application specific, name resolved in a domain-specific manner
  Domain-specific code to check for a cache hit
    Active caching (hit counters, ad rotation)

After Methods associated with each Active Name
  List of programs guaranteed to be called after the initial eval
  Multi-way RPC, anonymization, distillation
  Client-specific transformation of data

131

Active Name Resolution

[Diagram: the client sends a name to the Active Name Resolver; a virtual machine runs the domain resolver program against a cache; after methods (e.g., distillation) transform the data before it is returned]

Program: agent of the service
  Hit counting, dynamic content generation
  Location independent: can hand off to other resolvers

After methods perform client-specific transforms
  Distillation, personalization

Virtual machine
  Resource allocation, safety

132

Multi-Way RPC

Goal: minimize latency

Traditionally, results have to be passed all the way back down a hierarchy
  Adds latency
  Store-and-forward delays

Multi-Way RPC leverages after methods
  Convention: the last after method transmits the result back to the client
  Minimizes latency
  Back fill for caches?

[Diagram: traditional RPC threads the request and response through client, proxies, and server and back; with Multi-Way RPC the response returns directly to the client]

133

Change the Socket API?

Network programming, traditional model:

    ipaddr = gethostbyname("www.cs.duke.edu");
    socket = connect(ipaddr, 80 /* port */);
    write(socket, "GET /index.html HTTP/1.0\n\n");
    read(socket, dataBuffer);

With active names:

    dataBuffer = ANResolver.Eval("www.cs.duke.edu/index.html");

Analogs in other areas
  Filesystems, virtual memory, Java URL objects
  The programmer does not see inodes or physical page addresses
  Allows for reorganization under the hood

134

Security Considerations

Multi-way RPC
  Send request to the local resolver
  Wait for the answer on a socket
  The answer could be transmitted by anyone

Solution: use capabilities
  Associate a capability with each request
  The capability must come back with the reply

Future work: integrate with CRISIS transfer certificates
  Specify the privileges available for name resolution (local state)
  Inspect the complete chain of transfers linked to replies

135

Outline

Background
Active Names
  Opportunity
  Implementation
Examples
  Programmability
  Location Independence
  Composibility
Conclusions

136

Example: Load Balancing

DNS Round-Robin
  Randomly choose a replica
  Avoid hotspots

Distributed Director
  Route to the nearest replica
  Geographic locality

Active Naming
  Previous performance, distance
  Adaptive

[Diagram: Berkeley clients choosing between a Berkeley replica and a Seattle replica]

137

Load Balancing Performance

[Plot: response time (s) vs. offered load (clients) for the Director, Round Robin, and Active policies]

Optimal load balancing varies with offered load:
  Low load: choose the closest server
  High load: distribute load evenly

138

Example: Mobile Distillation

Clients name a single object
The returned object is based on the client
  Network connection, screen

Current approach [Fox97]
  Proxy maintains a client profile
  Requests the object, distills it

Active naming
  Transmit name + program
  Flexible distillation point
  Tradeoff computation/bandwidth
  Support mobile clients

Client-specific naming; variables: network, screen

139

Determine Placement of Computation: First-Cut Placement Algorithm

    serverDist = (evalCost / specint_server * loadAverage_server)
               + (distillFileSize / smallFileBW);
    proxyDist  = (origFileSize / largeFileBW)
               + (evalCost / specint_proxy * loadAverage_proxy);

    Prob(serverEval) = proxyDist / (proxyDist + serverDist);

Distribute load based on the estimate of cost (see the sketch below)
  Feedback confidence information?

In the wide area
  Must use approximate information
  Avoid deterministic decisions
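The same estimate as a small function; parameter names mirror the slide, and treating file sizes as bytes and the *BW terms as bytes/sec (so each term is a time) is an assumption:

    class Placement {
        // Probability of evaluating (distilling) at the server rather than the proxy.
        static double probServerEval(double evalCost,
                                     double specintServer, double loadAvgServer,
                                     double specintProxy, double loadAvgProxy,
                                     double smallFileBW, double largeFileBW,
                                     double distillFileSize, double origFileSize) {
            double serverDist = evalCost / specintServer * loadAvgServer
                              + distillFileSize / smallFileBW;  // eval at server, ship small result
            double proxyDist  = origFileSize / largeFileBW      // ship original, eval at proxy
                              + evalCost / specintProxy * loadAvgProxy;
            return proxyDist / (proxyDist + serverDist);
        }
    }

Returning a probability rather than always picking the cheaper side is what "avoid deterministic decisions" means: load spreads across both placements in proportion to their estimated costs.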

140

Importance of Location Independence I

[Plot: distillation latency (s) vs. number of clients (2-20) for the Active, Server, and Proxy placements]

Distill a 59 KB image to 9 KB
Clients/proxy at UC Berkeley, server at Duke
The Active policy tracks, then beats, the best static policy

141

Importance of Location Independence II

[Plot: distillation latency (s) vs. number of clients (2-20) for the Active, Server, and Proxy placements]

Server loaded with 10 competing processes
No longer makes sense to perform all distills at the server
Dynamic placement of computation for optimal performance

142

Example: Active Caches

Low (50%) hit rates to proxy caches; causes for misses:

Reason               Requests  Possible Strategy
Dynamic              20.7%     Replace CGI w/ AN program
Consistency polling  9.9%      Server-driven consistency
"Uncachable"         9.2%      AN for hit counting, customization
Compulsory           44.8%     Prefetching, delta encoding
Redirect             3.7%      AN load balancing
Misc.                11.5%     Error handling, prefetching, etc.

143

Example: Active Caches

50% hit rate to caches

Active Name Resolvers promise to run domain-specific code to retrieve/enter cache entries

The cache program implements
  Ad rotation, server-side include (SSI) expansion, access checking, hit counting

No magic bullets; have to compose multiple extensions
  A combination of distillation and server customization outperforms distillation-only by 50% and customization-only by 100%

144

Related Work

Active Networks
  New routing protocols: multicast, anycast, RSVP
  Must modify routers
  Bottom of the protocol stack vs. end-to-end app performance

Active Services
  No composibility, extensibility
  Restricted to a single point in the network
  Unrestricted programming model

145

Hyper-Active Naming

Buying stocks
  Name stock, requested price, credit card
  Execute an Active Name applet at the server to purchase

Cooperating agents
  Request the best price on a commodity (book)
  Active Name applet returns with results and an option to buy

Print to the nearest printer (mobility)
  Active Name applet locates a printer matching client prefs
  Runs a method at the print server to print the document

146

Discussion

Query optimization problem
  Reordering of after methods

Placement of wide-area computation
  What's available for running my job?
  Estimating remote bandwidth, CPU load

Integration with CRISIS security
Resource allocation
Debugging? Performance overheads?

147

Conclusions

Active Name: a mobile program that invokes a service or acquires data

Prototype demonstrates
  Feasibility of the approach
  Interesting applications

Provides dynamic, flexible binding to service/data
  Server selection among geographic replicas (Rent-A-Server)
  Client customization (e.g., distillation, custom CNN)
  Server customization (e.g., hit counting, ad rotation, etc.)
  Active Caching

148

Active vs. Intentional Names

INS and Active Names
  Name servers deliver and route messages
  Name servers are general modules with application-specific features

INS is declarative
  Attributes and metric

Active Names are procedural
  Name space resolver programs

INS uses an overlay network; Active Names use IP for routing

149

Sensor Networks

Sensor networks
  Name servers directly do routing
  Not built on top of IP
    Nodes connected directly to each other with physical point-to-point links
    Special ad-hoc network, separate from the Internet
  No distinction between client, service, name server, and router
    Every node is a routing sensor and name server

Framework for building such networks

150

Distributed Sensors vs. Appliances

Sensors
  Produce and consume data
  Communication: data messages, a la event notifications

Appliances
  Respond to operations, send back events
  Communication: remote method calls, event notifications

151

Internet vs. Sensor Networks

Internet
  Plentiful power
  Plentiful bandwidth
  Low delay
  Router throughput an issue

Sensor network
  Scarce power
  Bandwidth dear
  High delay
  Sensor/router nodes powerful compared to bandwidth
    3000 instructions take as much power as sending a bit 100 m by radio
  Tradeoff: computation in the router for communication
    Rainfall aggregation near the source node
    Duplicate suppression
    Avoid flooding, multicast ...

152

No IP Address

Client does not use an address
  As in Active Names, INS

Service does not register an address
  Unlike Active Names, INS

No special name server

[Diagram: source and sink nodes in a sensor field]

153

Naming and Routing

How do nodes communicate?

Naming
  Attribute-based, as in INS

Routing
  Broadcast to all neighbours

[Diagram: source and sink; a flooded message also follows "away paths" leading away from the sink]

154

Avoiding Away Paths

The sink sends an interest towards the source; the interest is forwarded by intermediate nodes

The sink and intermediate nodes record a gradient: the neighbours from which each interest came
  And the update rate and active/inactive status of the interest

Data matching an interest is returned to these neighbours, if the interest is active, at the update rate

Avoiding away paths for interests: directed diffusion

[Diagram: away interest path]

155

Avoiding Interest Away-Paths

Away paths of interests: send interests about interests
  Extends Model-View-Controller

Many-to-many communication
  Must stop the recursion

[Diagram: away paths for interests about interests]

156

Avoiding Source Overload

The source does not generate/sense data until an interest about the interest arrives

Interests can be dynamically activated/deactivated at the source and intermediate nodes
  Changes the gradient

[Diagram: interest activated at the source]

157

Avoiding Sink Overload

An update rate is associated with each interest

Intermediate nodes forward events at the update rate recorded in the gradient

[Diagram: some events are not sent because of the update rate]

158

Multiple Paths

An event goes to the destination along all paths

159

Avoiding Multiple Paths

The source sends an exploratory message to all possible sinks

Each sink reinforces the path along which the message arrived earliest

Subsequently, the reinforced path is used

Exploratory messages are periodically sent to update routes
  Local repair on node failure

Negative reinforcements possible if alternative paths are better
  How determined?

160

Aggregation

The same real-world event may trigger multiple sensors
  Multiple motion detectors

The sink may be interested in an aggregation of events from multiple sensors
  Rainfall sensors

Download filters at intermediate nodes
  They match attributes and examine matching messages
  Can respond by doing application-specific aggregation
  A la active networks

[Diagram: a filter installed at an intermediate node]

161

Aggregation Kinds/Examples

Binary: there was a detection
Area: detection in a specific quadrant
Probability: 80% chance of detection

162

Two-Level Sensing

Wireless monitoring system

Low-power sensors
  Light and motion detectors
  Always on

High-power sensors
  Microphones and steerable cameras
  Triggered by low-power sensors
  Configured externally by some distant client (where the user is)
  Must go through some intermediate nodes

How configured?

[Diagram: sink, low-power sensors, and high-power sensors]

163

Conventional Solution

The client listens for low-power events and triggers the high-power sensors

Lots of communication

164

Nested Query Solution

A nested interest is sent to the secondary (high-power) sensor

It registers interest in the primary sensors

Less communication

165

Attribute Matching

Attributes are (key, op, val) tuples. "IS" marks a bound variable (actual parameter); the other operators mark free variables (formal parameters).

Interest:
  class IS interest
  task EQ "detectAnimal"
  confidence GT 5.0
  latitude GE 10.0
  latitude LE 100.0
  longitude GE 5.0
  longitude LE 95.0
  target IS "4-leg"

Published data:
  class IS data
  task IS "detectAnimal"
  confidence IS 90
  latitude IS 20.0
  longitude IS 80.0
  target IS "4-leg"

Matching = unification
EQ = subclass of?

166

Attribute Matching Algorithm

OneWayMatch(AttributeSet A, AttributeSet B):

  for each unbound a in A {
    matched <- false
    for each bound b in B where a.key = b.key
      matched <- compare(a.value, b.value, a.op)
      if matched, break
    if not matched, return false
  }
  return true
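A runnable rendering of the one-way match; the types are simplified stand-ins (string keys, comparable values), not the real diffusion API. A full match runs the check in both directions:

    import java.util.List;

    class Attr {
        String key, op;        // op in {"EQ","NE","GT","GE","LT","LE"}; bound attrs use "IS"
        Comparable<Object> value;
        boolean bound;         // true for actual parameters ("IS")
        Attr(String k, String o, Comparable<Object> v, boolean b) { key=k; op=o; value=v; bound=b; }
    }

    class Matcher {
        // Every unbound (formal) attribute in A must be satisfied by some bound attribute in B.
        static boolean oneWayMatch(List<Attr> A, List<Attr> B) {
            for (Attr a : A) {
                if (a.bound) continue;
                boolean matched = false;
                for (Attr b : B) {
                    if (!b.bound || !a.key.equals(b.key)) continue;
                    matched = satisfies(b.value, a.value, a.op);
                    if (matched) break;
                }
                if (!matched) return false;
            }
            return true;
        }

        // Does the published value satisfy (op, threshold)? E.g. confidence GT 5.0.
        static boolean satisfies(Comparable<Object> published, Object threshold, String op) {
            int c = published.compareTo(threshold);
            switch (op) {
                case "EQ": return c == 0;
                case "NE": return c != 0;
                case "GT": return c > 0;
                case "GE": return c >= 0;
                case "LT": return c < 0;
                case "LE": return c <= 0;
                default:   return false;
            }
        }
    }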

167

API

Subscription subscribe(AttributeSet attributeSet, SubscriptionCallback callback);
unsubscribe(Subscription subscription);

Publication publish(AttributeSet attributeSet);
unpublish(Publication publication);
send(Publication publication, AttributeSet sendAttrs);

Filter addFilter(AttributeSet attributeSet, int priority, FilterCallback callback);
removeFilter(Filter filter);
sendMessage(Message message, Handle handle, Agent agent);
sendMessageToNext(Message message, Handle handle);
  Filter callbacks access the gradient, the message, and the previous and next destination
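A hypothetical usage of this API, wiring the animal-detection interest from the earlier slide to a callback (AttributeSet.add and the callback shape are assumptions, not the published signatures):

    // Sink side: subscribe to matching detections.
    AttributeSet interest = new AttributeSet();
    interest.add("class", "IS", "interest");        // assumed helper
    interest.add("task", "EQ", "detectAnimal");
    interest.add("confidence", "GT", 5.0);
    Subscription sub = subscribe(interest, data -> handleDetection(data));
    // ... later
    unsubscribe(sub);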

168

Two Implementations

Full size
  55 KB code, 8 KB data; 20 KB library, 4 KB data
  Meant for secondary sensors

Micro size
  3 KB code, 100 bytes data
  Micro functionality
    Single attribute, 5 active gradients, 10-packet cache, 2 relevant bytes/packet
    No filters
  Meant for primary sensors

169

Database-Based Composition

Queries over multiple devices
  For each rainfall sensor, average rainfall
  For each sensor in Tompkins county, current rainfall
  For the next 5 hrs, every 30 minutes, rainfall in Tompkins county

170

Query Kinds (Device Queries)

Historical
  For each rainfall sensor, average rainfall

Snapshot
  For each sensor in Tompkins county, current rainfall

Long-running
  For the next 5 hrs, every 30 minutes, rainfall in Tompkins county
  A new kind of query

Defining device databases and queries?

171

Cougar Device Model

Embedded vs. attached
  Embedded: network appliance/sensor
  Attached: regular appliance attached to a computer; computer connected to a regular device

Stationary vs. mobile
Strongly vs. intermittently connected
Local area vs. wide area

Device database work focuses on stationary devices

Data gatherers (sensors) vs. operation servers (appliances)?

172

Cougar Device Operations

Operation model
  Acquire, store, and process data
  May trigger an action in the physical world
  Return a result

Synchronous operation: returns the result immediately

Asynchronous operation: result(s) later
  Abnormal rainfall as an event

Intermittently connected devices
  Only asynchronous operations possible
  Device not guaranteed to be connected when the operation is invoked
  Intermittently connected device as server

173

Defining the Device Database: Device DBMS vs. Traditional Relational DBMS

Differences
  Device vs. data collection: computed vs. stored values
  Distributed information sources
    Data needed is not available locally
    May not even be available remotely for intermittently connected devices
  Long-running queries
    Not modelled by a traditional DBMS

Solution
  Base device relations
    One record for each device
  Virtual relations
    Records partitioned over distributed nodes
    Include the results of device operations
  Extended query language over virtual relations

174

Base Relations

Collection of devices of a particular type
  One record per device
  Attributes: device ID, X coordinate, Y coordinate

[Table: ID | X | Y]

175

Virtual Relations

Per-device function f(a1, ..., am): T

An attribute for each function
  arguments, result, global timestamp of the result, device ID

A new record is added for each new result
  Append-only relation

Each device contributes to part of the relation

[Table: ID | a1 | ... | am | Val | TS]

176

Example: RFSensor

One function: getRainfallLevel(): int

Base relation: RFSensors
  [Table: Device ID | X | Y]

Virtual relation: VRFSensorsGetRainfallLevel
  [Table: Device ID | Value | TS]

For the next four hours, retrieve every 30 seconds the rainfall level of each sensor if it is greater than 50 mm.

Query Q:

    SELECT VR.value
    FROM RFSensors R, VRFSensorsGetRainfallLevel VR
    WHERE R.ID = VR.ID        -- join
      AND VR.value > 50       -- selection
      AND $every(30)          -- long-running query

Run Q for 4 hours; 200 devices; R cardinality = 200, VR cardinality = 480

Why a join? So the query can also restrict on device attributes, e.g. AND R.X = ... or R.X IN ...

178

Execution Strategies

[Diagram: four plans for Q over the base relation R and the distributed virtual relation VR fragments]

1. Materialized R and VR at a central site: no local knowledge
2. Local rate info, remote Q: each device ships its value every 30 min; R.ID = VR.ID and VR.value > 50 are evaluated remotely
3. Local join, remote selection: the device evaluates "if R.ID = VR.ID" and ships its value every 30 min; VR.value > 50 is evaluated remotely
4. Local selection, remote join: the device evaluates "if VR.value > 50" and ships its value every 30 min; R.ID = VR.ID is evaluated remotely

179

Performance Metrics

Traditional
  Throughput
  Response time
    Long-query running time is implementation-independent

Sensor-specific
  Resource usage: network, power
  Reaction time: production-to-consumption time

180

Power Usage Components

CPU, memory access, sending a message, sending N bytes

Cost in joules = Wcpu*CPU + Wram*RAM + Wmsg*Msg + Wbytes*NBytes
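The cost model as a function; the weights are platform-specific calibration constants, and the values below are placeholders, not measurements:

    class PowerModel {
        static final double W_CPU = 1e-9, W_RAM = 5e-9, W_MSG = 1e-4, W_BYTES = 1e-6; // assumed

        static double energyJoules(long cpuOps, long ramAccesses, long msgs, long nBytes) {
            return W_CPU * cpuOps + W_RAM * ramAccesses + W_MSG * msgs + W_BYTES * nBytes;
        }
    }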

181

Centralized Warehouse

No local knowledge

A single location monitors all sensors; queries are sent to that site

Works for historical queries

Wastes resources for long-running queries
  Irrelevant sites
  Higher rate than necessary

What to monitor for long-running queries? Camera direction?

Centralization: workload bottleneck

182

Distributed Device Database

All sensors together form a distributed device database system

Individual nodes
  Sense on demand
  Do part of the query processing

Better resource utilization

Know what to monitor
  Historical? Based on some long-running query

183


Remote Query Evaluation

Only relevant sensors send data

No relations are sent to the sensors

[Diagram: plan 2 - local rate info, remote Q: each device ships its value every 30 min; VR.value > 50 and R.ID = VR.ID are evaluated centrally]

185

Local Join, Remote Selection

Only relevant sensors send a value

The whole relation is sent to each device: communication overhead

[Diagram: plan 3 - the device evaluates "if R.ID = VR.ID" and ships its value every 30 min; VR.value > 50 is evaluated remotely]

186

Local Selection, Remote Join

Only relevant sensors send a value

The whole relation is sent to each device: communication overhead

[Diagram: plan 4 - the device evaluates "if VR.value > 50" and ships its value every 30 min; R.ID = VR.ID is evaluated remotely]