randomized computation
Sometimes randomness helps in computation.
randomized computation
Augment our usual Turing machines with a read-only tape of random bits.
[Diagram: a Turing machine with states 𝑞start, 𝑞0, 𝑞1, 𝑞2, 𝑞accept, an input tape containing 𝑎𝑎##𝑏𝑎, and a read-only tape of random bits.]
randomized computation
The notion of acceptance changes. What does it mean for a TM 𝑀 with access to random bits to decide a language 𝐿?

Two-sided error:
𝑥 ∈ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] > 2/3
𝑥 ∉ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] < 1/3

One-sided error [two variants]:
𝑥 ∈ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] = 1
𝑥 ∉ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] < 9/10

𝑥 ∈ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] > 1/10
𝑥 ∉ 𝐿 ⇒ Pr[𝑀 accepts 𝑥] = 0
In both cases, we can make the error much smaller by repeating a few times.
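As a quick illustration (a toy sketch, not from the lecture; `base_decider` and `amplified` are hypothetical names): the snippet below simulates a two-sided-error procedure that answers correctly with probability 2/3, then shrinks the error by taking a majority vote over independent runs.

```python
import random

def base_decider(x_in_L, rng):
    """Toy two-sided-error procedure: returns the correct answer
    with probability 2/3 (stand-in for one run of a randomized TM)."""
    correct = rng.random() < 2 / 3
    return x_in_L if correct else (not x_in_L)

def amplified(x_in_L, rng, trials=101):
    """Run the base procedure independently many times and take a majority vote."""
    votes = sum(base_decider(x_in_L, rng) for _ in range(trials))
    return votes > trials // 2

rng = random.Random(0)
runs = 2000
errs_base = sum(base_decider(True, rng) != True for _ in range(runs))
errs_amp = sum(amplified(True, rng) != True for _ in range(runs))
print(errs_base / runs)  # around 1/3
print(errs_amp / runs)   # essentially 0
```

By a Chernoff bound, the majority of 𝑘 runs errs with probability exponentially small in 𝑘, which is why the particular constants (2/3, 1/3, 9/10, 1/10) in the definitions do not matter.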
randomized computation
For instance, the two-sided error variant of 𝐏 is called 𝐁𝐏𝐏. (𝐁𝐏𝐏 stands for “bounded-error probabilistic polynomial time.”)
Derandomization problem:
Can every efficient randomized algorithm be replaced by a deterministic one?
We don’t know the answer, though it is expected to be “yes.” This is the question of 𝐏 vs. 𝐁𝐏𝐏.
randomized computation
Recall the complexity class 𝐋 = SPACE(log 𝑛).
The randomized one-sided error variant of 𝐋 is called 𝐑𝐋.

A language 𝐴 is in 𝐑𝐋 if there is a randomized 𝑂(log 𝑛)-space Turing machine 𝑀 [with one-way access to its random tape] such that:
𝑥 ∈ 𝐴 ⇒ Pr[𝑀 accepts 𝑥] ≥ 1/2
𝑥 ∉ 𝐴 ⇒ Pr[𝑀 accepts 𝑥] = 0
The question of whether 𝐋 = 𝐑𝐋 is also open.
We know that 𝐑𝐋 ⊆ 𝐍𝐋 ⊆ SPACE((log 𝑛)²).
Nisan showed how to deterministically simulate such randomized space-bounded machines in SPACE((log 𝑛)²) using a pseudo-random number generator for space-bounded machines.
undirected connectivity
Recall the language DIRPATH = { (𝐺, 𝑠, 𝑡) : 𝐺 is a directed graph with a directed 𝑠 → 𝑡 path }.
We saw that DIRPATH is 𝐍𝐋-complete.
Thus 𝐋 = 𝐍𝐋 if and only if DIRPATH ∈ 𝐋.
What about the language PATH = { (𝐺, 𝑠, 𝑡) : 𝐺 is an 𝐮𝐧𝐝𝐢𝐫𝐞𝐜𝐭𝐞𝐝 graph with an 𝑠 ↔ 𝑡 path }?
Is PATH ∈ 𝐋?
random walks
Let’s show that PATH ∈ 𝐑𝐋.
One step of random walk: Move to a uniformly random neighbor.
random walks
What we will show: If 𝐺 = (𝑉, 𝐸) is a graph with 𝑛 vertices and 𝑚 edges, then a random walker started at 𝑠 ∈ 𝑉 who takes 4𝑚𝑛 steps will visit every vertex in the connected component of 𝑠 with probability at least ½. In particular, if there is an 𝑠 ↔ 𝑡 path in 𝐺, she will visit 𝑡 with probability at least ½.
𝐑𝐋 Algorithm: Start a random walker at 𝑠 and use the random bits to simulate a random walk for 4𝑚𝑛 steps (we can count this high in 𝑂(log 𝑛) space). If we visit 𝑡 at any point, accept. Otherwise, reject.
If there is an 𝑠 ↔ 𝑡 path in 𝐺: Pr[accept (𝐺, 𝑠, 𝑡)] ≥ 1/2.
If there is no 𝑠 ↔ 𝑡 path in 𝐺: Pr[accept (𝐺, 𝑠, 𝑡)] = 0.
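The algorithm above can be sketched in a few lines. This is illustrative only: a real 𝐑𝐋 machine stores just the current vertex and a step counter in 𝑂(log 𝑛) space, whereas this Python sketch freely uses a dictionary; `path_random_walk` is our own name.

```python
import random

def path_random_walk(adj, s, t, rng):
    """Random-walk algorithm for PATH: walk 4*m*n random steps from s
    and accept iff t is ever visited. adj maps each vertex to its neighbor list."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    cur = s
    for _ in range(4 * m * n):
        if cur == t:
            return True
        cur = rng.choice(adj[cur])
    return cur == t

# Path graph 0-1-2-3-4 plus an isolated vertex 5.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3], 5: []}
rng = random.Random(0)
print(path_random_walk(adj, 0, 4, rng))  # accepts with probability >= 1/2
print(path_random_walk(adj, 0, 5, rng))  # always False: no 0-5 path exists
```

Note that a “no” answer is always correct (one-sided error), since a walk from 𝑠 can never reach a vertex outside the connected component of 𝑠.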
cover time of a graph
Cover time of a graph = expected # of steps before all vertices are visited.
cover time of a graph
We will show that if 𝐺 has 𝑛 vertices and 𝑚 edges, then the expected time before the random walk covers 𝐺 (from any starting point) is at most 2𝑚𝑛.
Then it must be that with probability at least ½, we cover 𝐺 within 4𝑚𝑛 steps: by Markov’s inequality, Pr[cover time > 4𝑚𝑛] ≤ 2𝑚𝑛 / 4𝑚𝑛 = ½.
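A quick empirical sanity check of the 2𝑚𝑛 bound on a small example (a sketch with made-up names; the exact expected cover time of an 𝑛-cycle, 𝑛(𝑛 − 1)/2, is a standard fact used only in the comment):

```python
import random

def cover_time_steps(adj, s, rng):
    """Number of steps a random walk from s takes to visit every vertex."""
    seen = {s}
    cur, steps = s, 0
    while len(seen) < len(adj):
        cur = rng.choice(adj[cur])
        seen.add(cur)
        steps += 1
    return steps

# Cycle on 8 vertices: n = 8, m = 8, so the bound is 2*m*n = 128.
n = 8
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(0)
avg = sum(cover_time_steps(adj, 0, rng) for _ in range(500)) / 500
print(avg)  # near n*(n-1)/2 = 28, comfortably below the 2mn = 128 bound
```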
heat dispersion on a graph
spectral embedding
hitting times and commute times

For vertices 𝑢, 𝑣 ∈ 𝑉, the hitting time is
𝐻(𝑢, 𝑣) = expected time for a random walk started at 𝑢 to hit 𝑣.

The commute time between 𝑢 and 𝑣 is
𝐶(𝑢, 𝑣) = 𝐻(𝑢, 𝑣) + 𝐻(𝑣, 𝑢).

Claim: If {𝑢, 𝑣} is an edge of 𝐺, then 𝐶(𝑢, 𝑣) ≤ 2𝑚.

Let’s use the claim to show that the cover time of 𝐺 is at most 2𝑚(𝑛 − 1).

Choose an arbitrary spanning tree 𝑇 of 𝐺, and consider the “tour around the spanning tree”: a path 𝑠 = 𝑢0, 𝑢1, 𝑢2, … , 𝑢2𝑛−2 = 𝑠 where each {𝑢𝑖, 𝑢𝑖+1} is an edge of 𝑇, and each of the 𝑛 − 1 tree edges is traversed exactly once in each direction.

Claim: The cover time of 𝐺 is at most
𝐻(𝑢0, 𝑢1) + 𝐻(𝑢1, 𝑢2) + ⋯ + 𝐻(𝑢2𝑛−3, 𝑢2𝑛−2) = Σ_{{𝑢,𝑣} ∈ 𝑇} 𝐶(𝑢, 𝑣) ≤ 2𝑚(𝑛 − 1).

Indeed, following the walk until it has visited 𝑢1, then 𝑢2, and so on in order visits every vertex, and each tree edge is traversed once in each direction, contributing 𝐻(𝑢, 𝑣) + 𝐻(𝑣, 𝑢) = 𝐶(𝑢, 𝑣) ≤ 2𝑚 to the sum.
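The edge-commute-time claim can be sanity-checked by simulation. This is a sketch with our own names (`hitting_time`), estimating 𝐻(𝑢, 𝑣) empirically on a small path graph and checking 𝐶(𝑢, 𝑣) against 2𝑚 on each edge.

```python
import random

def hitting_time(adj, u, v, rng, trials=2000):
    """Empirical estimate of H(u, v): average number of steps a random
    walk started at u takes until it first reaches v."""
    total = 0
    for _ in range(trials):
        cur, steps = u, 0
        while cur != v:
            cur = rng.choice(adj[cur])
            steps += 1
        total += steps
    return total / trials

# Path graph 0-1-2-3: n = 4 vertices, m = 3 edges, so 2m = 6.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rng = random.Random(0)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    c = hitting_time(adj, u, v, rng) + hitting_time(adj, v, u, rng)
    print((u, v), round(c, 1))  # each estimate should be near 2m = 6
```

(For an edge of a tree the bound is tight, so each printed commute time hovers around 6 exactly.)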
high school physics
Claim: If {𝑢, 𝑣} is an edge of 𝐺, then 𝐶 𝑢, 𝑣 ≤ 2𝑚.
View the graph 𝐺 as an electrical network with unit resistances on the edges.
When a potential difference is applied between nodes by an external source, current flows in the network in accordance with Kirchhoff’s and Ohm’s laws.
K1: The total current flowing into a vertex equals the total current flowing out.
K2: The sum of potential differences around any cycle is zero.
Ohm: The current flowing along any edge {𝑢, 𝑣} equals (potential(𝑢) − potential(𝑣)) / resistance(𝑢, 𝑣).
The effective resistance between 𝑢 and 𝑣 is defined as the potential difference required to send one unit of current from 𝑢 to 𝑣.
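This definition can be computed directly: ground 𝑣, inject one unit of current at 𝑢, solve Kirchhoff’s current-law equations, and read off the potential at 𝑢. A minimal sketch, assuming unit resistances and a connected graph (`effective_resistance` is our own name):

```python
def effective_resistance(adj, u, v):
    """Effective resistance between u and v with unit resistances:
    ground v at potential 0, inject one unit of current at u, solve
    Kirchhoff's current law (reduced Laplacian system), return phi[u]."""
    nodes = [x for x in adj if x != v]        # grounding v removes its row/column
    idx = {x: i for i, x in enumerate(nodes)}
    k = len(nodes)
    # Reduced Laplacian: degree on the diagonal, -1 for each edge.
    A = [[0.0] * k for _ in range(k)]
    b = [0.0] * k
    for x in nodes:
        A[idx[x]][idx[x]] = float(len(adj[x]))
        for y in adj[x]:
            if y != v:
                A[idx[x]][idx[y]] -= 1.0
    b[idx[u]] = 1.0                            # one unit of current into u
    # Solve A * phi = b by Gaussian elimination (fine for small examples).
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    phi = [0.0] * k
    for i in reversed(range(k)):
        phi[i] = (b[i] - sum(A[i][c] * phi[c] for c in range(i + 1, k))) / A[i][i]
    return phi[idx[u]]                         # potential drop from u to v

# Triangle on {0, 1, 2}: a direct edge in parallel with a 2-edge path, 1*2/(1+2).
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(effective_resistance(tri, 0, 1))  # 0.6666666666666666 (= 2/3)
```

The triangle value matches the familiar rule for a 1-ohm resistor in parallel with two 1-ohm resistors in series.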
presentation agenda
Error-correcting codes:
1. Intro [Amr]
2. Reed–Solomon codes [Jie]
3. Expander codes [Ali]

Simple uses of randomness in computation:
1. Checking matrix multiplication [Mahmoud]
2. Karp–Rabin matching algorithm [Stanislav]
random
pseudo-random