Network Algorithms, Lecture 4: Longest Matching Prefix Lookups George Varghese.

Transcript of Network Algorithms, Lecture 4: Longest Matching Prefix Lookups George Varghese.

  • Slide 1
  • Network Algorithms, Lecture 4: Longest Matching Prefix Lookups George Varghese
  • Slide 2
  • Slide 3
  • Plan for Rest of Lecture: Defining the problem and why it's important; Trie-based algorithms; Multibit trie algorithms; Compressed tries; Binary search; Binary search on hash tables
  • Slide 4
  • Longest Matching Prefix: Given N prefixes K_i of up to W bits, find the longest match with an input K of W bits. Three prefix notations: slash, mask, and wildcard (e.g., 192.255.255.255, /31, or 1*). N = 1M (ISPs) or as small as 5,000 (enterprise). W can be 32 (IPv4), 64 (multicast), or 128 (IPv6). For IPv4, CIDR makes all prefix lengths from 8 to 28 common, with density at 16 and 24.
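To pin down the problem statement, here is a minimal sketch (purely illustrative, not one of the lecture's fast schemes) that computes the longest matching prefix by linear scan; the function name, the (value, length) prefix encoding, and the tiny database are assumptions for the example.

```python
# Naive longest-matching-prefix: a linear scan that defines the semantics every
# faster scheme in this lecture must reproduce. Prefixes are (value, length)
# pairs of W-bit integers.

def longest_match(prefixes, addr, W=32):
    """Return the longest prefix in `prefixes` that matches address `addr`."""
    best, best_len = None, -1
    for value, length in prefixes:
        # A prefix of `length` bits matches if the top `length` bits agree.
        if length > best_len and (addr >> (W - length)) == (value >> (W - length)):
            best, best_len = (value, length), length
    return best

# Tiny example database: 10.0.0.0/8 and 10.1.0.0/16 written as 32-bit integers.
db = [(0x0A000000, 8), (0x0A010000, 16)]
print(longest_match(db, 0x0A01FF01))   # the /16 wins over the /8
print(longest_match(db, 0x0A020304))   # only the /8 matches
```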
  • Slide 5
  • Why Longest Match: Much harder than exact match, which is why it gets dumped on routers. A form of compression: instead of a billion routes, around 500K prefixes. Core routers need only a few routes for all Stanford stations. Really accelerated by the exhaustion of Class B addresses and CIDR.
  • Slide 6
  • Sample Database
  • Slide 7
  • Slide 8
  • Skip versus Path Compression: Removing 1-way branches ensures that the number of trie nodes is at most twice the number of prefixes. Skip counts (Berkeley code, Juniper patent) require an exact match at the end and backtracking: bad!
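As a hedged sketch of the path-compression point (the node layout and names are assumptions, not the Berkeley or Juniper code), the lookup below stores the bits of each collapsed one-way chain in the node and verifies them on the way down, so the search simply remembers the last prefix seen and never backtracks; a skip count, which stores only the number of bits skipped, would force an exact-match check and possible backtracking at the end.

```python
# Lookup in a path-compressed unibit trie. Each node may carry a "skip" string:
# the bits of a collapsed one-way chain. Because the skipped bits are stored in
# full, they are verified during the descent, so no backtracking is needed.

class PCNode:
    def __init__(self, skip="", prefix=None, zero=None, one=None):
        self.skip = skip        # bits of the collapsed one-way branch
        self.prefix = prefix    # prefix (next hop) stored at this node, if any
        self.zero, self.one = zero, one

def lookup(root, addr_bits):
    best, node, i = None, root, 0
    while node is not None:
        # Verify the skipped bits explicitly (path compression, not a skip count).
        if addr_bits[i:i + len(node.skip)] != node.skip:
            break
        i += len(node.skip)
        if node.prefix is not None:
            best = node.prefix              # remember the longest match so far
        if i >= len(addr_bits):
            break
        node = node.one if addr_bits[i] == "1" else node.zero
        i += 1
    return best

# Prefixes 1* and 10110*: the one-way chain below "10" collapses into skip="110".
trie = PCNode(one=PCNode(prefix="1*",
                         zero=PCNode(skip="110", prefix="10110*")))
print(lookup(trie, "10110111" + "0" * 24))   # -> 10110*
print(lookup(trie, "11000000" + "0" * 24))   # -> 1*
```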
  • Slide 9
  • Multibit Tries
  • Slide 10
  • Optimal Expanded Tries: Pick a stride s for the root and solve recursively (Srinivasan-Varghese).
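A minimal sketch of a fixed-stride multibit trie built by controlled prefix expansion (the stride, class names, and tie-breaking by prefix length are assumptions made for illustration; the optimal variable-stride construction from the slide is not shown):

```python
# Fixed-stride multibit trie: each prefix is expanded to the next multiple of
# the stride, so a lookup inspects STRIDE bits per memory reference.

STRIDE, W = 4, 32

class MNode:
    def __init__(self):
        self.prefix = [None] * (1 << STRIDE)   # (length, next hop) per expanded entry
        self.child = [None] * (1 << STRIDE)    # child pointer per entry

def insert(root, value, length, nexthop):
    node, consumed = root, 0
    while length - consumed > STRIDE:                      # walk whole strides
        idx = (value >> (W - consumed - STRIDE)) & ((1 << STRIDE) - 1)
        if node.child[idx] is None:
            node.child[idx] = MNode()
        node, consumed = node.child[idx], consumed + STRIDE
    # Expansion: the remaining bits cover a block of 2^(STRIDE - rem) entries.
    rem = length - consumed
    rembits = (value >> (W - consumed - rem)) & ((1 << rem) - 1) if rem else 0
    base = rembits << (STRIDE - rem)
    for e in range(base, base + (1 << (STRIDE - rem))):
        if node.prefix[e] is None or node.prefix[e][0] < length:
            node.prefix[e] = (length, nexthop)             # longer prefix wins

def lookup(root, addr):
    best, node, consumed = None, root, 0
    while node is not None and consumed < W:
        idx = (addr >> (W - consumed - STRIDE)) & ((1 << STRIDE) - 1)
        if node.prefix[idx] is not None:
            best = node.prefix[idx][1]                     # remember best so far
        node, consumed = node.child[idx], consumed + STRIDE
    return best

root = MNode()
insert(root, 0x0A000000, 8, "A")    # 10.0.0.0/8
insert(root, 0x0A010000, 16, "B")   # 10.1.0.0/16
print(lookup(root, 0x0A01FF01))     # -> B
print(lookup(root, 0x0A020304))     # -> A
```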
  • Slide 11
  • Leaf Pushing: entries that have both a pointer and a prefix have their prefixes pushed down to the leaves (Degermark et al.).
  • Slide 12
  • Slide 13
  • Why Compression is Effective: The number of breakpoints in the function (non-zero elements) is at most twice the number of prefixes.
  • Slide 14
  • Lulea uses large arrays; Tree Bitmap uses small arrays and counts bits in hardware. No leaf pushing, two bitmaps per node. Used in the CRS-1 (Eatherton-Dittia-Varghese).
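The shared trick behind these schemes can be shown with a small hedged sketch (names are assumptions; this is the Lulea-style breakpoint compression of a leaf-pushed array, with the bit counting that Tree Bitmap does in hardware done here by a software popcount):

```python
# Instead of a full 2^stride array that is mostly duplicates after leaf pushing,
# store a bitmap marking the breakpoints of the function plus a packed array of
# the distinct values. The index is recovered by counting the set bits
# (popcount) at or before the position, which hardware can do in one step.

def compress(full_array):
    """Turn a leaf-pushed array into (bitmap, packed values)."""
    bitmap, packed, prev = 0, [], object()
    for i, v in enumerate(full_array):
        if v != prev:                      # a breakpoint in the function
            bitmap |= 1 << i
            packed.append(v)
            prev = v
    return bitmap, packed

def lookup(bitmap, packed, index):
    rank = bin(bitmap & ((1 << (index + 1)) - 1)).count("1")   # popcount up to index
    return packed[rank - 1]                                    # value at the last breakpoint

full = ["P1", "P1", "P1", "P2", "P2", "P1", "P1", "P1"]        # 2^3 leaf-pushed entries
bm, packed = compress(full)
assert all(lookup(bm, packed, i) == full[i] for i in range(len(full)))
print(bin(bm), packed)    # 3 stored values instead of 8
```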
  • Slide 15
  • Binary Search: Natural idea: reduce prefix matching to exact match by padding prefixes with 0s. Problem: addresses that map to different prefixes can end up in the same range of the table.
  • Slide 16
  • Modified Binary Search: Solution: encode a prefix as a range by inserting two keys, A000 and AFFF. Now each range maps to a unique prefix that can be precomputed.
  • Slide 17
  • Why this works: Any range corresponds to the earliest L (low key) not followed by an H (high key). Precompute with a stack.
  • Slide 18
  • Modified Search Table: Need to handle equality (=) separately from the case where the key falls within a region (>). A sketch combining these pieces follows.
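Putting Slides 15-18 together, here is a hedged sketch of binary search on ranges (the names, the 8-bit word size, and the omission of duplicate-endpoint merging are assumptions made to keep the example short):

```python
# Each prefix becomes two keys: the prefix padded with 0s (its low end, "L") and
# padded with 1s (its high end, "H"). For every table entry we precompute the
# answer for an exact hit ("=") and for a key falling strictly inside the region
# above it (">"), using a stack of currently open prefixes.
import bisect

W = 8   # small word size to keep the example readable

def build(prefixes):
    """prefixes: list of (value, length, name) -> (sorted keys, '=' answers, '>' answers)."""
    points = []
    for value, length, name in prefixes:
        lo = value                                  # prefix padded with 0s
        hi = value | ((1 << (W - length)) - 1)      # prefix padded with 1s
        points.append((lo, 0, name))                # 0 sorts low keys before high keys
        points.append((hi, 1, name))
    points.sort()
    keys, eq, gt, stack = [], [], [], []
    for key, kind, name in points:
        keys.append(key)
        if kind == 0:                               # low key opens a range
            stack.append(name)
            eq.append(name); gt.append(name)
        else:                                       # high key closes the innermost range
            eq.append(stack.pop())
            gt.append(stack[-1] if stack else None)
    return keys, eq, gt

def lookup(keys, eq, gt, addr):
    i = bisect.bisect_right(keys, addr) - 1         # highest key <= addr
    if i < 0:
        return None                                 # below every range: no match
    return eq[i] if keys[i] == addr else gt[i]

# 1* and 101* as 8-bit prefixes.
keys, eq, gt = build([(0b10000000, 1, "1*"), (0b10100000, 3, "101*")])
print(lookup(keys, eq, gt, 0b10100101))   # -> 101*
print(lookup(keys, eq, gt, 0b10010000))   # -> 1*
print(lookup(keys, eq, gt, 0b01000000))   # -> None (no match)
```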
  • Slide 19
  • Transition to IPv6: So far, schemes with either log N or W/C memory references. What about IPv6? We describe a scheme that takes O(log W) references, i.e., log 128 = 7 references (Waldvogel-Varghese-Turner). It uses binary search on prefix lengths, not on keys.
  • Slide 20
  • Why Markers are Needed
  • Slide 21
  • Why backtracking can occur: Markers announce possibly better information to the right. This can lead to a wild goose chase.
  • Slide 22
  • Avoid backtracking by... precomputing the longest matching prefix of each marker (see the sketch below).
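A hedged sketch of the Waldvogel-Varghese-Turner idea (the dictionary layout, helper names, and tiny 8-bit examples are assumptions): one hash table per prefix length holds real prefixes plus markers, and each marker carries a precomputed best matching prefix (bmp), so a failed probe to the right never forces backtracking.

```python
# Binary search on prefix lengths: probe the hash table for the middle length;
# a hit (real prefix or marker) means "maybe something longer exists, go right",
# a miss means "go left". Storing each marker's precomputed bmp avoids the wild
# goose chase described on the previous slides.

W = 8   # small word size for readability

def best_match(prefixes, value, length):
    """Longest real prefix (of length <= `length`) matching the top bits of value."""
    best = None
    for v, L, name in prefixes:
        if L <= length and (value >> (W - L)) == (v >> (W - L)):
            if best is None or L > best[0]:
                best = (L, name)
    return best[1] if best else None

def build(prefixes):
    lengths = sorted({L for _, L, _ in prefixes})
    tables = {L: {} for L in lengths}
    for value, length, name in prefixes:
        tables[length][value >> (W - length)] = {"prefix": name, "bmp": name}
        # Add markers at the lengths the binary search probes before reaching `length`.
        lo, hi = 0, len(lengths) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            L = lengths[mid]
            if L < length:
                bits = value >> (W - L)
                entry = tables[L].setdefault(bits, {"prefix": None, "bmp": None})
                if entry["bmp"] is None:
                    entry["bmp"] = best_match(prefixes, value, L)   # precomputed
                lo = mid + 1
            elif L > length:
                hi = mid - 1
            else:
                break
    return tables, lengths

def lookup(tables, lengths, addr):
    best, lo, hi = None, 0, len(lengths) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        L = lengths[mid]
        entry = tables[L].get(addr >> (W - L))
        if entry is not None:
            best = entry["bmp"] or best     # marker's bmp: no backtracking needed
            lo = mid + 1                    # maybe a longer match to the right
        else:
            hi = mid - 1                    # nothing here: try shorter lengths
    return best

prefixes = [(0b10000000, 1, "1*"), (0b11100000, 3, "111*"), (0b10110110, 7, "1011011*")]
tables, lengths = build(prefixes)
print(lookup(tables, lengths, 0b10110111))   # -> 1011011*
print(lookup(tables, lengths, 0b10111111))   # -> 1* (marker's bmp prevents a wrong answer)
print(lookup(tables, lengths, 0b11100001))   # -> 111*
```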
  • Slide 23
  • 2011 Conclusions: Fast lookups require fast memory such as SRAM, and hence compression (Eatherton scheme). Can also cheat by using several DRAM banks in parallel with replication. EDRAM binary search with high radix, as in B-trees. IPv6 is still a headache: possibly binary search on hash tables. For enterprises and reasonable-size databases, ternary CAMs are the way to go. Simpler, too.
  • Slide 24
  • Principles Used: P1: Relax Specification (fast lookup, slow insert); P2: Utilize degrees of freedom (strides in tries); P3: Shift Computation in Time (expansion); P4: Avoid Waste seen (variable stride); P5: Add State for efficiency (add markers); P6: Hardware parallelism (Pipeline tries, CAM); P8: Finite universe methods (Lulea bitmaps); P9: Use algorithmic thinking (binary search)