Quantum Channels and their capacities


Page 1: Quantum Channels and their capacities

Quantum Channels and their capacities

Graeme Smith, IBM Research

11th Canadian Summer School on Quantum Information

Page 2: Quantum Channels and their capacities

Information Theory

• “A Mathematical Theory of Communication”, C.E. Shannon, 1948

• Lies at the intersection of Electrical Engineering, Mathematics, and Computer Science

• Concerns the reliable and efficient storage and transmission of information.

Page 3: Quantum Channels and their capacities

Information Theory: Some Hits

• Source Coding: Cell Phones

Lempel-Ziv compression (gzip, winzip, etc.)

Voyager (Reed-Solomon codes)

Low-density parity-check codes

Page 4: Quantum Channels and their capacities

Quantum Information Theory

When we include quantum mechanics (which was there all along!) things get much more interesting!

Secure communication, entanglement enhanced communication, sending quantum information,…

Capacity, error correction, compression, entropy..

Page 5: Quantum Channels and their capacities

Outline

• Lecture 1: Classical Information Theory

• Lecture 2: Quantum Channels and their many capacities

• Lecture 3: Advanced Topics --- Additivity, Partial Transpose, LOCC, Gaussian Noise, etc.

• Lecture 4: Advanced Topics --- Additivity, Partial Transpose, LOCC, Gaussian Noise, etc.

Page 6: Quantum Channels and their capacities

Example: Flipping a biased coin

Let’s say we flip n coins. They’re independent and identically distributed (i.i.d.):

Pr( Xi = 0 ) = 1-p Pr( Xi = 1 ) = p

Pr( Xi = xi, Xj = xj ) = Pr( Xi = xi ) Pr( Xj = xj )

x1x2 … xn

Q: How many 1’s am I likely to get?

Page 7: Quantum Channels and their capacities

Example: Flipping a biased coin

Let’s say we flip n coins. They’re independent and identically distributed (i.i.d.):

Pr( Xi = 0 ) = 1-p Pr( Xi = 1 ) = p

Pr( Xi = xi, Xj = xj ) = Pr( Xi = xi ) Pr( Xj = xj )

x1x2 … xn

Q: How many 1’s am I likely to get?

A: Around pn, and with very high probability between (p-δ)n and (p+δ)n.
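A quick numerical check of this concentration (a minimal Python sketch; the particular values of n, p, and the width δ are arbitrary illustrative choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, delta = 10_000, 0.3, 0.02   # illustrative values, not from the slides

# Flip n i.i.d. biased coins in each of many trials and count the 1's.
trials = 1_000
ones = rng.binomial(n, p, size=trials)          # number of 1's per trial
inside = np.mean((ones >= (p - delta) * n) & (ones <= (p + delta) * n))
print(f"mean #1's = {ones.mean():.1f}   (pn = {p*n:.0f})")
print(f"fraction of trials with #1's in [(p-d)n, (p+d)n] = {inside:.3f}")
```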

Page 8: Quantum Channels and their capacities

Shannon Entropy

Flip n i.i.d. coins, Pr( Xi = 0 ) = 1-p, Pr( Xi = 1 ) = p. Outcome: x1…xn.

w.h.p. get ≈ pn 1’s, but how many different configurations?

There are

C(n, pn) = n! / ( (pn)! ((1-p)n)! )

such strings. Using Stirling’s formula, log n! = n log n - n + O(log n), we get

log C(n, pn) ≈ n log n - n - pn log(pn) + pn - (1-p)n log((1-p)n) + (1-p)n = nH(p),

where H(p) = -p log p - (1-p) log(1-p).

So, now, if I want to transmit x1…xn, I can just check which typical sequence it is and report that! This maps n bits to nH(p) bits.

Similarly for a larger alphabet: H(X) = Σx -p(x) log p(x).
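To see how good the nH(p) estimate is, one can compare log2 of the exact binomial coefficient against nH(p) directly (a small sketch; n and p below are arbitrary illustrative values):

```python
import math

def H(p):
    """Binary Shannon entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 1000, 0.11                      # illustrative values
k = round(p * n)                       # ≈ pn ones
exact = math.log2(math.comb(n, k))     # log2 of the number of strings with k ones
print(exact, n * H(p))                 # the two agree up to O(log n) corrections
```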

Page 9: Quantum Channels and their capacities

Shannon Entropy

Flip n i.i.d. coins, Pr( Xi = xi ) = p(xi). The outcome x1…xn has probability P(x1…xn) = p(x1)…p(xn), so

log P(x1…xn) = Σ_{i=1}^n log p(xi).

Typically,

-(1/n) log P(x1…xn) ≈ -⟨log p(x)⟩ = H(X).

So, typical sequences have

P(x1…xn) ≈ 2^{-n(H(X) ± δ)}.

More or less uniform distribution over the typical sequences.
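The same statement can be checked empirically for a larger alphabet by sampling (a sketch; the four-letter distribution below is an arbitrary example, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.25, 0.125, 0.125])      # arbitrary example distribution
H = -np.sum(p * np.log2(p))                  # H(X) = sum_x -p(x) log p(x) = 1.75 bits

n = 100_000
x = rng.choice(len(p), size=n, p=p)          # x_1 ... x_n, i.i.d. from p
minus_log_P_over_n = -np.log2(p[x]).sum() / n
print(H, minus_log_P_over_n)                 # both ≈ 1.75: -(1/n) log P(x_1...x_n) ≈ H(X)
```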

Page 10: Quantum Channels and their capacities

Correlations and Mutual Information

H(X) – bits of information transmitted per letter from ensemble X

Noisy channel: X → N → Y, described by p(y|x).

(X,Y) correlated. How much does Y tell us about X?

After I get Y, how much more do you need to tell me so that I know X?

Given y, update expectations: p(x|y) = p(y|x) p(x) / p(y).

Only H(X|Y) = ⟨-log p(x|y)⟩ bits per letter needed.

Can calculate H(X|Y) = ⟨-log [p(x,y)/p(y)]⟩ = H(X,Y) - H(Y).

Savings: H(X) - H(X|Y) = H(X) + H(Y) - H(X,Y) =: I(X;Y)
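All of these quantities are straightforward to compute from a joint distribution p(x,y). A minimal sketch, using as an example a uniform input sent through a binary symmetric channel with flip probability 0.1 (an illustrative choice, not from the slides):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution p(x, y) = p(x) p(y|x) for a uniform input and an
# illustrative binary symmetric channel with flip probability 0.1.
pflip = 0.1
p_x = np.array([0.5, 0.5])
p_y_given_x = np.array([[1 - pflip, pflip],
                        [pflip, 1 - pflip]])
p_xy = p_x[:, None] * p_y_given_x

HX  = entropy(p_xy.sum(axis=1))
HY  = entropy(p_xy.sum(axis=0))
HXY = entropy(p_xy)
print("H(X|Y) =", HXY - HY)        # ≈ H(0.1) ≈ 0.469
print("I(X;Y) =", HX + HY - HXY)   # ≈ 1 - H(0.1) ≈ 0.531
```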

Page 11: Quantum Channels and their capacities

Channel Capacity

m → Encoder → N, N, …, N (n channel uses) → Decoder → m' ≈ m

Given n uses of a channel, encode a message m ∈ {1,…,M} to a codeword x^n = (x_1(m),…, x_n(m)).

At the output of the channel, use y^n = (y_1,…,y_n) to make a guess, m'.

The rate of the code is (1/n)log M.

The capacity of the channel, C(N), is defined as the maximum rate you can get with vanishing error probability as n → ∞.
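As a concrete (non-optimal) example of a code with a definite rate and error probability, here is the 3-bit repetition code over a binary symmetric channel (a sketch; the flip probability 0.1 is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(4)
p, n, trials = 0.1, 3, 100_000      # illustrative: 3-bit repetition code over a BSC(0.1)

M = 2                               # messages {0, 1}; rate = (1/n) log2 M = 1/3
msgs = rng.integers(0, 2, size=trials)
codewords = np.repeat(msgs[:, None], n, axis=1)               # encode: repeat each bit n times
flips = (rng.random((trials, n)) < p).astype(int)             # channel flips each bit w.p. p
received = codewords ^ flips
decoded = (received.sum(axis=1) > n // 2).astype(int)         # decode: majority vote

print("rate =", np.log2(M) / n)                               # 1/3
print("error probability ≈", np.mean(decoded != msgs))        # ≈ 3p^2(1-p) + p^3 ≈ 0.028
```

Low rate buys low error probability here; the capacity question is how large the rate can be while the error probability still goes to zero.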

Page 12: Quantum Channels and their capacities

Binary Symmetric Channel

X → N → Y

p(0|0) = 1-p,  p(1|0) = p

p(0|1) = p,  p(1|1) = 1-p

Page 13: Quantum Channels and their capacities

Capacity of Binary Symmetric Channel

Input string

x^n = (x_1,…, x_n)

2^n possible outputs

Page 14: Quantum Channels and their capacities

Capacity of Binary Symmetric Channel

Input string

x^n = (x_1,…, x_n)

2^n possible outputs; 2^{nH(p)} typical errors

Page 15: Quantum Channels and their capacities

Capacity of Binary Symmetric Channel

Input string

x_1^n = (x_{11},…, x_{1n}),  x_2^n = (x_{21},…, x_{2n})

2^n possible outputs; each codeword is mapped to ≈ 2^{nH(p)} typical error outputs

Page 16: Quantum Channels and their capacities

Capacity of Binary Symmetric Channel

Input string

x_1^n = (x_{11},…, x_{1n}),  x_2^n = (x_{21},…, x_{2n}),  …,  x_M^n = (x_{M1},…, x_{Mn})

2^n possible outputs; each codeword is mapped to ≈ 2^{nH(p)} typical error outputs

Page 17: Quantum Channels and their capacities

Capacity of Binary Symmetric Channel

Input string

x_1^n = (x_{11},…, x_{1n}),  x_2^n = (x_{21},…, x_{2n}),  …,  x_M^n = (x_{M1},…, x_{Mn})

2^n possible outputs; each codeword is mapped to ≈ 2^{nH(p)} typical error outputs

Each x_m^n gets mapped to 2^{nH(p)} different outputs. If these sets overlap for different inputs, the decoder will be confused.

So, we need M · 2^{nH(p)} ≤ 2^n, which implies

(1/n) log M ≤ 1 - H(p)

Upper bound on capacity

Page 18: Quantum Channels and their capacities

Direct Coding Theorem: Achievability of 1-H(p)

(1) Choose 2^{nR} codewords randomly according to X^n (i.i.d. 50/50 bits).

(2) x_m^n → y^n. To decode, look at the set of strings reachable from y^n by a typical error pattern, a decoding sphere of ≈ 2^{n(H(p)+δ)} strings. If this set contains exactly one codeword, decode to that. Otherwise, report error.

The decoding sphere is big enough that w.h.p. the correct codeword x_m^n is in there.

So, the only source of error is if two codewords are in there. What are the chances of that???
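Here is a toy simulation of the random-coding scheme (a sketch, not from the lecture: the block length, rate, and flip probability are arbitrary small values, and minimum-Hamming-distance decoding is used as a convenient stand-in for the typical-set decoder described above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 16, 0.1                      # block length and flip probability (illustrative)
R = 0.25                            # rate, comfortably below 1 - H(0.1) ≈ 0.53
M = 2 ** int(n * R)                 # number of codewords, 2^{nR}

# (1) random codebook: each codeword is n fair coin flips
code = rng.integers(0, 2, size=(M, n))

errors, trials = 0, 2000
for _ in range(trials):
    m = rng.integers(M)                                    # message
    y = code[m] ^ (rng.random(n) < p).astype(int)          # send through BSC(p)
    # decode to the closest codeword in Hamming distance
    # (a stand-in for the typical-set decoder on the slide)
    dists = np.sum(code != y, axis=1)
    m_hat = np.argmin(dists)
    errors += (m_hat != m)

print("block error rate ≈", errors / trials)
```

For rates below 1 - H(p) the block error rate should shrink as the block length grows; at this tiny n the result is only indicative.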

Page 19: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

A random codeword is mapped somewhere among the 2^n possible output strings.

Page 20: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

2^n possible output strings; the decoding sphere around the output has size 2^{n(H(p)+δ)}.

Page 21: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

2^n possible output strings; decoding sphere of size 2^{n(H(p)+δ)}.

If the code is chosen randomly, what’s the chance of another codeword in this ball?

Page 22: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

2^n possible output strings; decoding sphere of size 2^{n(H(p)+δ)}.

If the code is chosen randomly, what’s the chance of another codeword in this ball?

If I choose one more word, the chance is 2^{n(H(p)+δ)} / 2^n.

Page 23: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

2^n possible output strings; decoding sphere of size 2^{n(H(p)+δ)}.

If the code is chosen randomly, what’s the chance of another codeword in this ball?

If I choose one more word, the chance is 2^{n(H(p)+δ)} / 2^n.

Choose 2^{nR} more, the chance is 2^{n(H(p)+R+δ)} / 2^n.

If R < 1 - H(p) - δ, this → 0 as n → ∞.

Page 24: Quantum Channels and their capacities

Direct coding theorem: Achievability of 1-H(p)

So, the average probability of decoding error (averaged over codebook choice and codeword) is small.

As a result, there must be some codebook with low prob of error (averaged over codewords).


Page 25: Quantum Channels and their capacities

Low worst-case probability of error

Showed: there’s a code with rate R such that the average error probability satisfies

(1/2^{nR}) Σ_{i=1}^{2^{nR}} P_i < ε.

Let N_{2ε} = #{ i | P_i > 2ε }. Then

2ε N_{2ε} / 2^{nR} ≤ (1/2^{nR}) Σ_{i: P_i > 2ε} P_i ≤ (1/2^{nR}) Σ_{i=1}^{2^{nR}} P_i < ε,

so that N_{2ε} < 2^{nR - 1}. Throw these codewords away.

Gives a code with rate R - 1/n and P_i < 2ε for all i.

Page 26: Quantum Channels and their capacities

Coding theorem: general case

• 2^{nH(X)} typical x^n.

• For typical y^n, ≈ 2^{n(H(X|Y)+δ)} candidate x^n’s in the decoding sphere. If there is a unique candidate, report this.

• Each sphere of size 2^{n(H(X|Y)+δ)} contains a fraction 2^{n(H(X|Y)+δ)} / 2^{nH(X)} of the inputs.

• If we choose 2^{nR} codewords, the probability of accidentally falling into the wrong sphere is 2^{nR} / 2^{n(H(X) - H(X|Y) - δ)} = 2^{nR} / 2^{n(I(X;Y) - δ)}.

• Okay as long as R < I(X;Y).

Page 27: Quantum Channels and their capacities

Capacity for general channel

• We showed that for any input distribution p(x), given p(y|x), we can approach rate R = I(X;Y). By picking the best X, we can achieve C(N) = max_X I(X;Y). This is called the “direct” part of the capacity theorem.

• In fact, you can’t do any better. Proving there’s no way to beat max_X I(X;Y) is called the “converse”.
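Computing C(N) = max_X I(X;Y) for a given p(y|x) is a concave maximization over input distributions; the standard Blahut-Arimoto iteration (not covered in these slides) solves it numerically. A sketch, with a binary symmetric channel as an illustrative test case:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-12, max_iter=10_000):
    """Capacity (in bits) of a discrete channel W[x, y] = p(y|x),
    assuming every output column of W has some positive entry."""
    nx, _ = W.shape
    p = np.full(nx, 1.0 / nx)                      # start from the uniform input
    for _ in range(max_iter):
        q = W * p[:, None]                         # p(x) p(y|x)
        q /= q.sum(axis=0, keepdims=True)          # posterior q(x|y)
        with np.errstate(divide="ignore"):
            logq = np.where(q > 0, np.log(q), 0.0)
        r = np.exp((W * logq).sum(axis=1))         # r(x) ∝ exp( sum_y p(y|x) ln q(x|y) )
        r /= r.sum()
        done = np.max(np.abs(r - p)) < tol
        p = r
        if done:
            break
    # I(X;Y) in bits at the optimizing input distribution p
    py = p @ W
    ratio = np.where(W > 0, W / np.maximum(py, 1e-300), 1.0)
    return np.sum(p[:, None] * W * np.log2(ratio)), p

# Example: binary symmetric channel with flip probability 0.1 (illustrative)
pflip = 0.1
W = np.array([[1 - pflip, pflip],
              [pflip, 1 - pflip]])
C, p_opt = blahut_arimoto(W)
print(C, p_opt)     # ≈ 0.531 = 1 - H(0.1), achieved by the uniform input
```

For the binary symmetric channel the optimizer is the uniform input and the value matches the 1 - H(p) bound derived earlier.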

Page 28: Quantum Channels and their capacities

Converse

For homework, you’ll prove two great results about entropy:

1. H(X,Y) ≤ H(X) + H(Y)

2. H(Y1Y2|X1X2) ≤ H(Y1|X1) + H(Y2|X2)

We’re going to use the first one to prove that no good code can transmit at a rate better than max_X I(X;Y).
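For intuition, inequality 1 is easy to check numerically on random joint distributions (a small sketch, not part of the homework):

```python
import numpy as np

rng = np.random.default_rng(3)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Draw random joint distributions p(x, y) and check H(X,Y) <= H(X) + H(Y).
for _ in range(1000):
    pxy = rng.random((4, 5))
    pxy /= pxy.sum()
    HXY = entropy(pxy)
    HX = entropy(pxy.sum(axis=1))
    HY = entropy(pxy.sum(axis=0))
    assert HXY <= HX + HY + 1e-12
print("H(X,Y) <= H(X) + H(Y) held on all random examples")
```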

Page 29: Quantum Channels and their capacities

Converse

• Choose some code with 2^{nR} strings of n letters.

• Let X^n be the uniform distribution on the codewords (codeword i with prob 1/2^{nR}).

• Because the channels are independent, we get p(y1…yn|x1…xn) = p(y1|x1)…p(yn|xn).

• As a result, the conditional entropy satisfies H(Y^n|X^n) = ⟨-log p(y1…yn|x1…xn)⟩ = Σi ⟨-log p(yi|xi)⟩ = Σi H(Yi|Xi).

• Now, I(Y^n;X^n) = H(Y^n) - H(Y^n|X^n) ≤ Σi H(Yi) - Σi H(Yi|Xi) = Σi I(Yi;Xi) ≤ n max_X I(X;Y).

• Furthermore, I(Y^n;X^n) = H(X^n) - H(X^n|Y^n) = nR - H(X^n|Y^n), so nR - H(X^n|Y^n) ≤ n max_X I(X;Y).

• Finally, since Y^n must be sufficient to decode X^n, we must have (1/n) H(X^n|Y^n) → 0, which means R ≤ max_X I(X;Y).

Page 30: Quantum Channels and their capacities

Recap

• Information theory is concerned with efficient and reliable transmission and storage of data.

• Focused on the problem of communication in the presence of noise. Requires error correcting codes.

• Capacity is the maximum rate of communication for a noisy channel. Given by max_X I(X;Y).

• Two steps to capacity proof: (1) direct part: show by randomized argument that there exist codes with low probability of error. Hard part: error estimates. (2) Converse: there are no codes that do better---use entropy inequalities.

Page 31: Quantum Channels and their capacities

Coming up:

Quantum channels and their various capacities.

Page 32: Quantum Channels and their capacities

• Homework is due at the beginning of next class.

• Fact II is called Jensen’s inequality

• No partial credit