
Emergence of classical reality from within quantum theory

Oral Thesis presentation

David Poulin

Institute for Quantum Computing

Perimeter Institute for Theoretical Physics

Waterloo, November 2004 – p.1

I like getting into trouble...

H. Ollivier, DP, and W.H. Zurek, “Emergence of objective properties from subjective quantum states: Environment as a witness”, to appear in PRL.

DP, R. Blume-Kohout, R. Laflamme, and H. Ollivier, “Exponential speed-up with a single bit of quantum information: Measuring the average fidelity decay”, PRL 04.

J.-C. Boileau, D. Gottesman, R. Laflamme, DP, R.W. Spekkens, “Robust polarization-based quantum key distribution over collective-noise channel”, PRL 04.

DP, “Macroscopic observables”, to appear in PRA.

J.A. Holbrook, D.W. Kribs, R. Laflamme, and DP, “Noiseless Subsystems for Collective Rotation Channels in Quantum Information Theory”, Int. Eqs. & Oper. Theory.

J. Emerson, S. Lloyd, DP, and D. Cory, “Estimation of the Local Density of States on a Quantum Computer”, PRA 04.

D. Poulin, R. Laflamme, G.J. Milburn, and J.P. Paz, “Testing integrability with a single bit of quantum information”, PRA 03.

DP and R. Blume-Kohout, “Compatibility of quantum states”, PRA 03.

H. Ollivier, DP, and W.H. Zurek, “Environment as a witness: selective proliferation of information and emergence of objectivity”, submitted to PRA.

Conventional / Different

Waterloo, November 2004 – p.2

Outline

Motivation

Environment as a witness

Macroscopic observables

Conclusion

Waterloo, November 2004 – p.3

Epistemic view

Classical                                        Quantum
P(x,p),  Σ_{x,p} P(x,p) = 1                      ρ,  Tr{ρ} = 1
Ṗ = {H, P}                                       ρ̇ = −i[H, ρ]
P(x,p|D) = P(D|x,p) P(x,p) / P(D)                ρ|Q = Q ρ Q / P(Q)
P(x,p) = Σ_D P(x,p|D) P(D)                       ρ ≠ Σ_Q ρ|Q P(Q)
States of complete knowledge (δ),                No such states
    P(x,p|D) = P(x,p)

The absence of such states of complete knowledge is responsible for the discomfort with quantum theory.

Waterloo, November 2004 – p.4
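
The last row of the table is the crux: classically, averaging over an unread measurement record D returns the prior distribution, whereas quantum mechanically Σ_Q ρ|Q P(Q) ≠ ρ. A minimal numerical illustration (my own single-qubit example, not taken from the talk):

import numpy as np

# State of maximal knowledge about sigma_x: rho = |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

# Projective measurement in the computational basis, Q = {|0><0|, |1><1|}.
Q = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Average over the unread outcome: sum_Q (Q rho Q / P(Q)) P(Q) = sum_Q Q rho Q.
rho_avg = sum(Qi @ rho @ Qi for Qi in Q)

print(rho)      # off-diagonal elements 1/2
print(rho_avg)  # off-diagonal elements 0: rho_avg != rho

The coherences are destroyed even though the outcome was never read, which has no classical analogue.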

Epistemic view

Properties of classical systems can be discovered by initially ignorant observers.

How can the classical objective reality emerge from the underlying subjective quantum theory?

Waterloo, November 2004 – p.5

Decoherence

|e〉|Cat_alive〉 → (|e〉 + |g〉)|Cat_alive〉 → |e〉|Cat_alive〉 + |g〉|Cat_dead〉

Quantum theory allows superposition of macroscopic objects.

Such superpositions, however, are not observed.

Waterloo, November 2004 – p.6

Decoherence

Environment-induced superselection (einselection):

(|e〉|Cat_alive〉 + |g〉|Cat_dead〉)|Mouse_alive〉 → |e〉|Cat_alive〉|Mouse_dead〉 + |g〉|Cat_dead〉|Mouse_alive〉 = |Ψ〉

If the mouse is not a controllable degree of freedom,

ρ_{Atom+Cat} = Tr_Mouse |Ψ〉〈Ψ|
             = ½ |e〉〈e| ⊗ |Cat_alive〉〈Cat_alive| + ½ |g〉〈g| ⊗ |Cat_dead〉〈Cat_dead|

Waterloo, November 2004 – p.7
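
The partial trace above is easy to verify numerically. A minimal sketch, using my own three-qubit encoding of atom, cat, and mouse (not code from the talk):

import numpy as np

def ket(*bits):
    """Computational-basis ket for the given bit string."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

# Encoding: atom e->0, g->1; cat alive->0, dead->1; mouse alive->0, dead->1.
# |Psi> = (|e>|Cat_alive>|Mouse_dead> + |g>|Cat_dead>|Mouse_alive>)/sqrt(2)
psi = (ket(0, 0, 1) + ket(1, 1, 0)) / np.sqrt(2)
rho = np.outer(psi, psi)

# Trace out the mouse (the last qubit) to get rho_{Atom+Cat}.
rho_ac = rho.reshape(4, 2, 4, 2).trace(axis1=1, axis2=3)
print(np.round(rho_ac, 3))

The result is diagonal: ½|e, alive〉〈e, alive| + ½|g, dead〉〈g, dead|. The coherence between the two branches is gone because the two mouse states are orthogonal.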

Decoherence

The description of the quantum systems of interest (Atom + Cat) is a classical mixture of “e alive” and “g dead”.

Operationally, the interaction with an environment explains why we only experience statistical mixtures as opposed to coherent superpositions.

Basis ambiguity: mixture of what?

Does not restrict my freedom of measurements, so Σ_Q ρ|Q P(Q) ≠ ρ.

Indirect information acquisition (Part I).
A pre-agreed set of measurements (Part II).

Waterloo, November 2004 – p.8

Part I: Environment as a witness

David Poulin, Harold Ollivier, Wojciech Zurek

Waterloo, November 2004 – p.9

Environment as a witness

Switch the focus of attention from S to E.
    What states of S are maximally stable?
    What properties of S can be learned from interrogating E?

How is this information displayed in E?
    Delocalized in E (requiring a joint measurement)?
    Single or multiple copies? (Redundant?)

Role of E: reservoir destroying coherence ⇒ selective amplifier proliferating information about S.

Waterloo, November 2004 – p.10

Information theory

[Venn diagram: two overlapping circles H(X) and H(Y); the overlap is the mutual information I(X:Y), and the non-overlapping parts are the conditional entropies H(X|Y) and H(Y|X).]

H(X) = −Σ_x P(x) ln P(x),   H(X|Y) = −Σ_y P(y) Σ_x P(x|y) ln P(x|y)

Entropy = # of bits required to specify the value of X.

Waterloo, November 2004 – p.11

Information theory

Observable σ on S: σ = {σ_i}.   Observable τ on E: τ = {τ_k}.

Given the state of S + E, the joint probability is given by Born’s rule:
P(σ_i, τ_k) = Tr{ρ_SE (σ_i ⊗ τ_k)}

H(σ): unpredictability of the value of σ of S.

H(σ|τ): remaining unpredictability about σ after having “peeked at the environment” through τ.

I(σ:τ): information learned about the property σ of S given the value of τ of E.

Waterloo, November 2004 – p.12
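
These quantities are straightforward to evaluate. The sketch below uses a toy two-qubit ρ_SE of my own choosing (a fully decohered pair, with σ and τ both computational-basis measurements) to compute H(σ), H(σ|τ), and I(σ:τ) from Born's rule:

import numpy as np

def entropy(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))   # in bits

k0, k1 = np.eye(2)

# Toy joint state: rho_SE = 1/2 (|00><00| + |11><11|).
rho_SE = 0.5 * (np.outer(np.kron(k0, k0), np.kron(k0, k0)) +
                np.outer(np.kron(k1, k1), np.kron(k1, k1)))

sigma = [np.outer(k0, k0), np.outer(k1, k1)]   # sigma = {sigma_i} on S
tau = [np.outer(k0, k0), np.outer(k1, k1)]     # tau   = {tau_k} on E

# Born's rule: P(sigma_i, tau_k) = Tr{rho_SE (sigma_i x tau_k)}
P = np.array([[np.trace(rho_SE @ np.kron(s, t)).real for t in tau] for s in sigma])

H_sigma = entropy(P.sum(axis=1))
H_sigma_given_tau = entropy(P.ravel()) - entropy(P.sum(axis=0))
print(H_sigma, H_sigma_given_tau, H_sigma - H_sigma_given_tau)
# 1.0, 0.0, 1.0: peeking at E through tau removes all unpredictability about sigma.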

Information in the environment

I(σ) = max_τ I(σ:τ).

I(σ) ≈ H(σ) ⇒
    It is possible to learn about σ indirectly.
    There exists a measurement τ on E such that H(σ|τ) ≈ 0.

[Figure, panels a)–c): numerical results for the redundancy R_{0.1}(σ) and the informations I_N(σ) and I(σ:e), plotted against the model parameters a, µ, and m; panel c) is at µ = 0.23.]

Not a selective criterion.

Waterloo, November 2004 – p.13

Redundancy of information

This is a manifestation of “basis ambiguity”.

The solution is obvious when we look at real models.

Assume that the environment is composed of subsystems:  E = ⊗_k E_k.

Denote by R(σ) the number of disjoint fragments of E — i.e. F ⊂ {E_k} — which contain a copy of this information.

R(σ) ≫ 1 is a prerequisite for the objective existence of σ.

Waterloo, November 2004 – p.14
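
A toy model makes the selectivity of redundancy explicit. In the sketch below (my own GHZ-like branching state, not the model behind the plots shown on these slides), every single environment qubit is already a fragment carrying full information about σ_z of S, so R(σ_z) ≈ N, while no individual fragment reveals anything about σ_x:

import numpy as np

def entropy(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# Branching state |psi> = (|0>|00...0> + |1>|11...1>)/sqrt(2), N environment qubits.
N = 5
k = np.eye(2)
branch0, branch1 = np.array([1.0]), np.array([1.0])
for _ in range(N + 1):
    branch0, branch1 = np.kron(branch0, k[0]), np.kron(branch1, k[1])
psi = (branch0 + branch1) / np.sqrt(2)
rho = np.outer(psi, psi)

def mutual_info(sys_basis, frag):
    """I(sigma:tau) for sigma a measurement of sys_basis on S and
    tau a z-measurement on the single environment qubit `frag`."""
    P = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            ops = [np.eye(2)] * (N + 1)
            ops[0] = np.outer(sys_basis[i], sys_basis[i])
            ops[frag] = np.outer(k[j], k[j])
            M = np.array([[1.0]])
            for o in ops:
                M = np.kron(M, o)
            P[i, j] = np.trace(rho @ M).real
    return entropy(P.sum(1)) + entropy(P.sum(0)) - entropy(P.ravel())

z = [k[0], k[1]]
x = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print([round(mutual_info(z, q), 3) for q in range(1, N + 1)])  # [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(mutual_info(x, q), 3) for q in range(1, N + 1)])  # [0.0, 0.0, 0.0, 0.0, 0.0]

Both observables satisfy I(σ) = H(σ) = 1 bit when the whole environment can be measured jointly, but only σ_z is recorded redundantly in individual fragments, which is what singles it out as the objective observable.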

Redundancy of information

R(σ) ≫ 1 ⇒
    There are many copies of the information about σ in the environment.
    Many independent observers can learn about σ without invalidating each other’s information.

[Same figure as on p.13: panels a)–c) showing R_{0.1}(σ), I_N(σ), and I(σ:e).]

R(σ) ≫ 1 and I_N(σ) ≈ H(σ) imply a unique observable.

Waterloo, November 2004 – p.15

Consequences of redundancy

R(σ) ≫ 1 and I(σ) ≈ H(σ) ⇒

Many independent observers can learn about σ from monitoring disjoint fragments of E.

Reach a consensus about the properties of S.

Information available from fragments of E is only about the preferred system observable.

The information about S extractable from fragments of E follows classical update rules.

Waterloo, November 2004 – p.16

Summary of Part I

Decoherence and Einselection ↝ Consensus among independent observers.
    Arbitrary measurements.

Information through E ↝ Consensus among independent observers.
    Basis ambiguity.

Redundant broadcasting of information ⇒ Consensus among independent observers.
    Redundant spreading ⇒ selection of the preferred system observable.

Waterloo, November 2004 – p.17

Part II: Macroscopic observables

What if the observers only have access to macroscopic observables?

Single-system observable a:  a|0〉 = λ_0|0〉 and a|1〉 = λ_1|1〉.

Ensemble of N systems:  A_N = Σ_{k=1}^N a^{(k)}.

Basis |X〉 = |b_1〉 ⊗ |b_2〉 ⊗ … ⊗ |b_N〉:  A_N|X〉 = Λ_{h(X)}|X〉

Hamming weight:  h(X) = (# of 1’s in X)/N

Eigenvalues:  Λ_h = N[(1 − h)λ_0 + hλ_1]

Spectral decomposition:  A_N = Σ_h Λ_h Q_h,  with Q_h = Σ_{X: h(X)=h} |X〉〈X|.

Waterloo, November 2004 – p.18
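
A direct check of this block structure (a minimal sketch; λ_0, λ_1 and N are arbitrary values I picked). Here h indexes the integer number of 1's, so the eigenvalue reads (N − h)λ_0 + hλ_1, i.e. the slide's N[(1 − h)λ_0 + hλ_1] with h the fraction of 1's:

import numpy as np
from itertools import product

lam0, lam1, N = -1.0, 1.0, 4
a = np.diag([lam0, lam1])          # a|0> = lam0|0>, a|1> = lam1|1>

# A_N = sum_k a^(k), acting on the k-th factor of the N-fold tensor product.
A_N = np.zeros((2**N, 2**N))
for site in range(N):
    ops = [np.eye(2)] * N
    ops[site] = a
    term = np.array([[1.0]])
    for o in ops:
        term = np.kron(term, o)
    A_N += term

# Hamming-weight projectors Q_h = sum_{X: h(X)=h} |X><X|.
Q = {h: np.zeros((2**N, 2**N)) for h in range(N + 1)}
for bits in product([0, 1], repeat=N):
    idx = int("".join(map(str, bits)), 2)
    Q[sum(bits)][idx, idx] = 1.0

# Spectral decomposition A_N = sum_h Lambda_h Q_h.
A_rebuilt = sum(((N - h) * lam0 + h * lam1) * Q[h] for h in Q)
print(np.allclose(A_N, A_rebuilt))   # True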

von Neumann measurement theory

Measuring the average value of a over an ensemble of N systems
⇔ making a von Neumann measurement {Q_h} on the collection of systems.

From the state ρ_N = |ψ〉〈ψ|^⊗N with |ψ〉 = α|0〉 + β|1〉, we expect

〈a〉 = Tr{a|ψ〉〈ψ|} = |α|²λ_0 + |β|²λ_1.

State after measurement ≈ ρ_N (as N becomes large).

Finkelstein, Hartle, Graham, FGG, etc.:

lim_{N→∞} ‖ (A_N/N − 〈a〉) |ψ〉^⊗N ‖ = 0

Waterloo, November 2004 – p.19

von Neumann measurement theory

So for N finite but large...

Conditional post-measurement state:
    ρ_N —h→ ρ_N|h = Q_h ρ_N Q_h / P(Q_h).

Average post-measurement state:
    ρ_N → ρ′_N = Σ_h ρ_N|h P(Q_h) = Σ_h Q_h ρ_N Q_h.

F(ρ_N, ρ′_N) = 〈ψ|^⊗N ρ′_N |ψ〉^⊗N ≤ 1 / (2√(πN) |α|·|β|)

Fidelity between pre- and post-measurement states goes to zero.

Waterloo, November 2004 – p.20
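
Evaluating F = Σ_h P(h)² directly, with P(h) the binomial Hamming-weight distribution of |ψ〉^⊗N, confirms the 1/√N decay (a minimal numerical check; |β|² = 0.7 is my example value, and the second printed column is the asymptotic estimate 1/(2√(πN)|α||β|)):

import numpy as np
from math import lgamma

p = 0.7   # |beta|^2

def fidelity(N):
    """F(rho_N, rho'_N) = sum_h P(h)^2, computed in log space to avoid overflow."""
    h = np.arange(N + 1)
    logC = lgamma(N + 1) - np.array([lgamma(k + 1) + lgamma(N - k + 1) for k in h])
    logP = logC + h * np.log(p) + (N - h) * np.log(1 - p)
    return np.exp(2 * logP).sum()

for N in [10, 100, 1000, 10000]:
    print(N, round(fidelity(N), 4), round(1 / (2 * np.sqrt(np.pi * N * p * (1 - p))), 4))

Even an ideal projective measurement of the collective observable therefore ruins the state of a large ensemble, which is the problem the next slides address.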

Realistic measurement model

Coupling between S and A:  H_SA = A_N ⊗ P,  where [P, R] = i.

[Diagram: a pointer wave packet initially centred at R_0 is shifted by an amount determined by h, while the system state |h〉 is unchanged.]

Subsequent measurement of R ⇔ measurement of Q̄_h = Σ_{h′} q_{h′}(h) Q_{h′}.

The precision of the “measurement” depends on the coherence length of the measurement apparatus.

Waterloo, November 2004 – p.21

Smooth POVM’s

Q̄_h = [1 / ((2π)^{1/4} √σ)] Σ_{h′} exp{ −(h − h′)² / 4σ² } Q_{h′}

The measurement result will be h ≈ |β|² within the measurement accuracy σ:

    P(|h − |β|²| ≥ ε) ≈ exp{ −ε² / (2σ² + 1/N) }

The state disturbance is negligible:

    F(ρ_N, ρ′_N) ≥ 1 − [1 + ln(4Nσ²)] / (2Nσ²)

    F(ρ_N, ρ_N|h) ≥ (more complicated, but same scaling)

Desired behaviour (N = ∞): measurement coarseness σ ≫ 1/√N.

Waterloo, November 2004 – p.22
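
The sketch below repeats the fidelity calculation with the smoothed measurement (N, |β|² and σ are my example values; everything is diagonal in the Hamming weight, so only the weight distribution P(h) is needed, and the kernel is normalized numerically so that Σ_h Q̄_h² = 1). Weights are counted as integers here, so the slide's condition σ ≫ 1/√N becomes σ ≫ √N:

import numpy as np
from math import comb

N, p = 200, 0.7                      # ensemble size and |beta|^2
sigma = 5.0 * np.sqrt(N)             # coarse: sigma >> sqrt(N) in integer-weight units

h = np.arange(N + 1)
P = np.array([comb(N, k) * p**k * (1 - p)**(N - k) for k in h])   # weight distribution

# Gaussian kernel g[m, h'] ~ exp(-(m-h')^2 / 4 sigma^2), columns normalized so that
# sum_m Qbar_m^2 = identity (an exact POVM).
g = np.exp(-(h[:, None] - h[None, :])**2 / (4 * sigma**2))
g /= np.sqrt((g**2).sum(axis=0))[None, :]

# Average post-measurement fidelity F = sum_m ( sum_h g[m,h] P(h) )^2.
F = np.sum((g @ P)**2)
print("fidelity:", round(F, 6))      # close to 1: negligible disturbance

# Outcome distribution P(m) = sum_h g[m,h]^2 P(h), peaked near h = N|beta|^2.
Pm = (g**2) @ P
print("most likely outcome / N:", h[np.argmax(Pm)] / N, " vs |beta|^2 =", p)

With the coarse kernel the outcome still reveals |β|² to within σ, but, in contrast with the projective case, the fidelity stays near one, in line with the bound quoted above.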

Classicality

Exchangeable state:  ρ_N = ∫ P(ρ) ρ^⊗N dρ

ρ_N|h = Q̄_h ρ_N Q̄_h / P(Q̄_h)
      = ∫ [P(ρ) / P(Q̄_h)] Q̄_h ρ^⊗N Q̄_h dρ
      ≈ ∫ [P(ρ) P(Q̄_h|ρ) / P(Q̄_h)] ρ^⊗N dρ

This is like Bayesian updating:

P(ρ) —h→ P(ρ|h) = P(ρ) P(Q̄_h|ρ) / P(Q̄_h)

Waterloo, November 2004 – p.23
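
A classical caricature of this update rule (a sketch under strong simplifying assumptions of mine: the prior over ρ is parametrized by the single number p = |β|², and the likelihood P(Q̄_h|ρ) is the Gaussian suggested by the previous slide; none of this is code from the talk):

import numpy as np

p = np.linspace(0.0, 1.0, 501)            # candidate values of |beta|^2
prior = np.ones_like(p) / p.size          # flat prior P(rho)

N, sigma = 1000, 0.05                     # ensemble size and coarseness, sigma >> 1/sqrt(N)
h_obs = 0.72                              # observed coarse-grained value

def likelihood(h, q):
    """P(Qbar_h | rho): Gaussian around q, width set by sigma and the 1/N shot noise."""
    s2 = sigma**2 + q * (1 - q) / N
    return np.exp(-(h - q)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

posterior = prior * likelihood(h_obs, p)  # P(rho|h) = P(rho) P(Qbar_h|rho) / P(Qbar_h)
posterior /= posterior.sum()
print(p[np.argmax(posterior)])            # ~0.72: the posterior peaks at the observed value

Successive coarse measurements keep sharpening P(ρ) around the true state, just as repeated measurements sharpen a classical probability distribution on phase space.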

Classicality

For coarse macroscopic observables, the average states ρ of an ensemble behave as objective elements of reality.
    They can be “discovered without being disturbed”.
    Upon measurement, their update rule is Bayesian, just like for the phase-space coordinates of classical systems.

The argument can be extended to entangled states; the condition becomes σ ≫ √(ξ/N).
    Macroscopic measurements behave classically when coarse on the quantum correlation length scale ξ of the ensemble.

Waterloo, November 2004 – p.24

Summary of Part II

von Neumann measurement (textbook) ⇒ significant disturbance.

Coarse-grained POVMs ⇒ negligible disturbance for large N.

Macroscopic limit requires large N and coarse graining.

Under these conditions, the average state of an ensemble behaves as an objective element of reality.

Waterloo, November 2004 – p.25

Conclusion

Distinction between quantum and classical: lack of an objective reality.
    Distinction in the update rule.

An objective reality is operationally recovered when
    1. the information is acquired by probing a fragment of the environment, or
    2. the information is about macroscopic quantities.

Under realistic assumptions, classical reality emerges from the underlying quantum theory.

Waterloo, November 2004 – p.26

Acknowledgments

Special thanks to Raymond Laflamme, Harold Ollivier, and Wojciech Zurek.

Howard Barnum, Jonathan Baugh, Charlie Bennett, Alexandre Blais, Robin Blume-Kohout, Jean-Christian Boileau, Gilles Brassard, Carl Caves, David Cory, Fay Dowker, Artur Ekert, Joseph Emerson, Marie Ericsson, Chris Fuchs, Ernesto Galvão, Florian Girelli, Daniel Gottesman, Patrick Hayden, Lucien Hardy, Dominik Janzing, Alexei Kitaev, Manny Knill, David Kribs, Frédéric Leblond, Etera Livine, Fotini Markopoulou, Gerard Milburn, Mike Mosca, Casey Myers, Camille Negrevergne, Juan Pablo Paz, Ruediger Schack, Lee Smolin, Rolando Somma, Raphael Sorkin, Rob Spekkens, Alain Tapp, Hugo Touchette, Antony Valentini, Guifre Vidal, Lorenza Viola, Christof Zalka, Paolo Zanardi, and all other members and staff of IQC and PI!

NSERC, FCAR, CIAR, CRC, ORDC, and Big Mike.

Thanks to my family, my friends, and Isabelle.

Waterloo, November 2004 – p.27