Free Probability Theory
and
Random Matrices
Roland Speicher
Universität des Saarlandes
Saarbrücken
We are interested in the limiting eigenvalue distribution of
N ×N random matrices for N →∞.
Typical phenomena for basic random matrix ensembles:
• almost sure convergence to a deterministic limit eigenvalue
distribution
• this limit distribution can be effectively calculated
[Figure: eigenvalue histograms of one realization, for N = 10, N = 100, and N = 1000]
[Figure: the same eigenvalue histograms for a second and a third realization, for N = 10, N = 100, and N = 1000]
Consider a selfadjoint Gaussian N × N random matrix.
We have almost sure convergence (convergence of a "typical" realization) of its eigenvalue distribution to
Wigner's semicircle.
[Figure: eigenvalue histograms of one realization and of another realization, N = 4000, both matching the semicircle]
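This convergence is easy to check numerically. The following is a minimal NumPy sketch (not part of the slides); the GUE-style matrix model and the normalization placing the limit spectrum on [−2, 2] are my choices:

```python
import numpy as np

def gue_eigenvalues(N, rng):
    # Selfadjoint Gaussian matrix, scaled so that the limiting
    # eigenvalue distribution is the semicircle on [-2, 2].
    X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    A = (X + X.conj().T) / (2 * np.sqrt(N))
    return np.linalg.eigvalsh(A)

rng = np.random.default_rng(0)
lam = gue_eigenvalues(1000, rng)
second_moment = np.mean(lam**2)  # second moment of the semicircle is 1
```

A histogram of `lam` reproduces the semicircular shape of the figure for a single realization.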
Consider a Wishart random matrix A = XX∗, where X is an N × M random matrix with independent Gaussian entries. Its eigenvalue distribution converges almost surely to the
Marchenko–Pastur distribution.
[Figure: eigenvalue histograms of one realization and of another realization, N = 3000, M = 6000]
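Again a minimal numerical sketch (not from the slides); the 1/M normalization of A = XX∗ is my choice, made so that the Marchenko–Pastur limit with ratio N/M = 1/2 has mean 1 and support roughly [0.09, 2.91]:

```python
import numpy as np

def wishart_eigenvalues(N, M, rng):
    # A = X X^T / M with i.i.d. Gaussian entries; for N/M -> c the
    # spectrum converges to the Marchenko-Pastur law with ratio c.
    X = rng.standard_normal((N, M))
    return np.linalg.eigvalsh(X @ X.T / M)

rng = np.random.default_rng(1)
lam = wishart_eigenvalues(1000, 2000, rng)  # ratio c = 1/2, as on the slide
```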
We want to consider more complicated situations, built out of
simple cases (like Gaussian or Wishart) by doing operations like
• taking the sum of two matrices
• taking the product of two matrices
• taking corners of matrices
Note: If several N × N random matrices A and B are involved, then the eigenvalue distribution of non-trivial functions f(A, B) (like A + B or AB) will of course depend on the relation between the eigenspaces of A and of B.
However: we might expect that we have almost sure convergence to a deterministic result
• if N → ∞ and
• if the eigenspaces are almost surely in a "typical" or "generic" position.
This is the realm of free probability theory.
Consider N × N random matrices A and B such that
• A has an asymptotic eigenvalue distribution for N → ∞, and B has an asymptotic eigenvalue distribution for N → ∞
• A and B are independent (i.e., the entries of A are independent from the entries of B)
• B is a unitarily invariant ensemble (i.e., the joint distribution of its entries does not change under unitary conjugation)
Then, almost surely, the eigenspaces of A and of B are in generic position.
In such a generic case we expect that the asymptotic eigenvalue distribution of functions of A and B should almost surely depend in a deterministic way on the asymptotic eigenvalue distributions of A and of B.
Basic examples for such functions:
• the sum A + B
• the product AB
• corners of the unitarily invariant matrix B
Example: sum of independent Gaussian and Wishart (M = 2N)
random matrices, for N = 3000
[Figure: eigenvalue histograms of one realization and of another realization of the sum]
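The deterministic limit of the sum can be seen numerically. The sketch below (my construction, not the slides') draws two independent realizations of Gaussian + Wishart with M = 2N and checks that their spectral statistics agree; under these normalizations the limiting second moment is 1 + (1 + 1/2) = 2.5, using freeness of A and B:

```python
import numpy as np

N = 1000

def realization(rng):
    # independent selfadjoint Gaussian A and Wishart B with M = 2N
    G = rng.standard_normal((N, N))
    A = (G + G.T) / np.sqrt(2 * N)
    X = rng.standard_normal((N, 2 * N))
    B = X @ X.T / (2 * N)
    return np.linalg.eigvalsh(A + B)

rng = np.random.default_rng(2)
lam1 = realization(rng)
lam2 = realization(rng)
# two independent realizations give nearly identical eigenvalue statistics
```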
Example: product of two independent Wishart (M = 5N) random matrices, N = 2000
[Figure: eigenvalue histograms of one realization and of another realization of the product]
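A numerical sketch of the product case (again my construction): since A and B are positive semidefinite, AB has real nonnegative eigenvalues, and with the 1/M normalization and ratio c = 1/5 the free-probability prediction gives first moment ϕ(AB) = ϕ(A)ϕ(B) = 1 and second moment ϕ(ABAB) = ϕ(A²)ϕ(B)² + ϕ(A)²ϕ(B²) − ϕ(A)²ϕ(B)² = 1.2 + 1.2 − 1 = 1.4:

```python
import numpy as np

def wishart(N, M, rng):
    X = rng.standard_normal((N, M))
    return X @ X.T / M

rng = np.random.default_rng(3)
N = 500
A = wishart(N, 5 * N, rng)
B = wishart(N, 5 * N, rng)
# AB is similar to the symmetric matrix A^(1/2) B A^(1/2),
# so its eigenvalues are real and nonnegative
lam = np.sort(np.linalg.eigvals(A @ B).real)
```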
Example: upper left corner of size N/2 × N/2 of a randomly rotated N × N projection matrix, with half of the eigenvalues 0 and half of the eigenvalues 1
[Figure: eigenvalue histograms of one realization and of another realization of the corner, N = 2048]
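A sketch of the corner example (my construction, using a Haar orthogonal rotation rather than a unitary one, which does not change the limit): the corner's limiting eigenvalue distribution is the arcsine law on [0, 1], with mean 1/2 and second moment 3/8:

```python
import numpy as np

def corner_spectrum(N, rng):
    # Haar-distributed orthogonal matrix via QR with sign correction
    Z = rng.standard_normal((N, N))
    Q, R = np.linalg.qr(Z)
    Q = Q * np.sign(np.diag(R))
    # projection with half the eigenvalues 1 and half 0, randomly rotated
    P = np.diag(np.repeat([1.0, 0.0], N // 2))
    B = Q @ P @ Q.T
    return np.linalg.eigvalsh(B[: N // 2, : N // 2])  # upper-left corner

rng = np.random.default_rng(4)
lam = corner_spectrum(1000, rng)
```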
Problems:
• Do we have a conceptual way of understanding the asymptotic eigenvalue distributions in such cases?
• Is there an algorithm for actually calculating the corresponding asymptotic eigenvalue distributions?
Instead of the eigenvalue distribution of a typical realization we will now look at the eigenvalue distribution averaged over the ensemble.
This has the advantages:
• convergence to the asymptotic eigenvalue distribution happens much faster; very good agreement with the asymptotic limit already for moderate N
• it is theoretically easier to deal with the averaged situation than with the almost sure one (note, however, that this is just for convenience; the following can also be justified for typical realizations)
Example: Convergence of averaged eigenvalue distribution of
N ×N Gaussian random matrix to semicircle
[Figure: averaged eigenvalue histograms for N = 5, N = 20, and N = 50; 10000 trials]
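The fast convergence of averaged quantities can be illustrated numerically. In this sketch (my construction) the averaged moments E[tr(A^k)] at the moderate size N = 50 are already close to the semicircle moments, which for even k are the Catalan numbers 1, 2, 5, ...:

```python
import numpy as np

def averaged_moment(N, k, trials, rng):
    # estimate E[tr(A^k)] (tr = normalized trace) over the Gaussian ensemble
    acc = 0.0
    for _ in range(trials):
        G = rng.standard_normal((N, N))
        A = (G + G.T) / np.sqrt(2 * N)
        acc += np.trace(np.linalg.matrix_power(A, k)) / N
    return acc / trials

rng = np.random.default_rng(5)
m2 = averaged_moment(50, 2, 500, rng)  # semicircle value: 1
m4 = averaged_moment(50, 4, 500, rng)  # semicircle value: 2
```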
Examples: averaged sums, products, corners for moderate N
[Figure: averaged Wigner + Wishart, N = 1000; averaged Wishart × Wishart, N = 1000; averaged upper corner, N = 64]
What is the asymptotic eigenvalue distribution in these cases?
How does one analyze asymptotic eigenvalue distributions?
• analytical: resolvent method
Try to derive an equation for the resolvent of the limit distribution.
Advantage: powerful complex analysis machinery; allows one to deal with probability measures without moments.
Disadvantage: cannot deal directly with several matrices A, B; has to treat each function f(A, B) separately.
• combinatorial: moment method
Try to calculate the moments of the limit distribution.
Advantage: can, in principle, deal directly with several matrices A, B, by looking at mixed moments.
Moment Method
eigenvalue distribution of matrix A = knowledge of the traces of powers tr(A^k), where tr denotes the normalized trace:
tr(A^k) = (1/N) (λ_1^k + · · · + λ_N^k)
averaged eigenvalue distribution of random matrix A = knowledge of the expectations of traces of powers, E[tr(A^k)]
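The identity between spectral moments and traces of powers is exact for any selfadjoint matrix, as this short NumPy check (not part of the slides) confirms:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200
G = rng.standard_normal((N, N))
A = (G + G.T) / np.sqrt(2 * N)  # any selfadjoint matrix works here

lam = np.linalg.eigvalsh(A)
k = 3
moment_from_spectrum = np.mean(lam**k)  # (1/N) (lambda_1^k + ... + lambda_N^k)
moment_from_trace = np.trace(np.linalg.matrix_power(A, k)) / N  # tr(A^k)
# the two quantities agree up to floating-point error
```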
Moment Method
Consider random matrices A and B in generic position. We want to understand A + B, AB, AB − BA, etc., in a uniform way for many functions f(A, B).
We have to understand for all k ∈ N the moments
E[tr((A + B)^k)], E[tr((AB)^k)], E[tr((AB − BA)^k)], etc.
Thus we need to understand as basic objects the mixed moments
E[tr(A^n1 B^m1 A^n2 B^m2 · · · )]
Use the following notation:
ϕ(A) := lim_{N→∞} E[tr(A)].
Question: If A and B are in generic position, can we understand
ϕ(A^n1 B^m1 A^n2 B^m2 · · · )
in terms of (ϕ(A^k))_{k∈N} and (ϕ(B^k))_{k∈N}?
Example: independent Gaussian random matrices
Consider two independent Gaussian random matrices A and B. Then, in the limit N → ∞, the moments
ϕ(A^n1 B^m1 A^n2 B^m2 · · · )
are given by
# { non-crossing/planar pairings of the pattern
A·A···A (n1 times) · B·B···B (m1 times) · A·A···A (n2 times) · B·B···B (m2 times) · · ·
which do not pair A with B }
Example: ϕ(AABBABBA) = 2,
because there are exactly two non-crossing pairings of AABBABBA which do not pair A with B.
[Figure: the two non-crossing pairings of the word AABBABBA]
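The pairing count can be verified by brute force. The following sketch (my code, not the slides') counts non-crossing pair partitions of a word that pair only equal letters, using the standard recursion: the first position pairs with some later same-letter position, splitting the rest into an inside and an outside block that must be paired separately:

```python
def noncrossing_pairings(word):
    """Count non-crossing pair partitions of `word` pairing only equal letters."""
    def count(positions):
        if not positions:
            return 1
        first, rest = positions[0], positions[1:]
        total = 0
        for j, p in enumerate(rest):
            if word[p] != word[first]:
                continue
            # the pair (first, p) separates inside from outside;
            # non-crossing forbids pairs that straddle this split
            total += count(rest[:j]) * count(rest[j + 1:])
        return total
    return count(list(range(len(word))))

result = noncrossing_pairings("AABBABBA")  # the slide's example
```

For a single letter the same function reproduces the Catalan numbers, the moments of the semicircle.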
Example: ϕ(AABBABBA) = 2
[Figure: tr(AABBABBA) as a function of N for one realization, and E[tr(AABBABBA)] averaged over 1000 realizations; both approach 2]
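The convergence shown in the figure can be reproduced by a small Monte Carlo experiment (my sketch, with my choice of normalization for the Gaussian matrices):

```python
import numpy as np

def estimate_mixed_moment(N, trials, rng):
    # Monte Carlo estimate of E[tr(A A B B A B B A)] for two
    # independent selfadjoint Gaussian N x N matrices A and B
    vals = []
    for _ in range(trials):
        G = rng.standard_normal((N, N))
        A = (G + G.T) / np.sqrt(2 * N)
        H = rng.standard_normal((N, N))
        B = (H + H.T) / np.sqrt(2 * N)
        vals.append(np.trace(A @ A @ B @ B @ A @ B @ B @ A) / N)
    return float(np.mean(vals))

estimate = estimate_mixed_moment(300, 10, np.random.default_rng(7))
# should be close to 2, the number of admissible non-crossing pairings
```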
ϕ(A^n1 B^m1 A^n2 B^m2 · · · )
= # { non-crossing pairings which do not pair A with B }
implies
ϕ((A^n1 − ϕ(A^n1)·1) · (B^m1 − ϕ(B^m1)·1) · (A^n2 − ϕ(A^n2)·1) · · · )
= # { non-crossing pairings which do not pair A with B, and for which each group of A's and each group of B's is connected with some other group }
ϕ(A^{n_1}B^{m_1}A^{n_2}B^{m_2} ···)
= #{non-crossing pairings which do not pair A with B}

implies

ϕ((A^{n_1} − ϕ(A^{n_1})·1)·(B^{m_1} − ϕ(B^{m_1})·1)·(A^{n_2} − ϕ(A^{n_2})·1) ···) = 0
![Page 52: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/52.jpg)
The actual equation for the calculation of the mixed moments

ϕ(A^{n_1}B^{m_1}A^{n_2}B^{m_2} ···)

is different for different random matrix ensembles.

However, the relation between the mixed moments,

ϕ((A^{n_1} − ϕ(A^{n_1})·1)·(B^{m_1} − ϕ(B^{m_1})·1) ···) = 0,

remains the same for matrix ensembles in generic position and constitutes the definition of freeness.
![Page 54: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/54.jpg)
Definition [Voiculescu 1985]: A and B are free (with respect to ϕ) if we have for all n_1, m_1, n_2, ... ≥ 1 that

ϕ((A^{n_1} − ϕ(A^{n_1})·1)·(B^{m_1} − ϕ(B^{m_1})·1)·(A^{n_2} − ϕ(A^{n_2})·1) ···) = 0

ϕ((B^{n_1} − ϕ(B^{n_1})·1)·(A^{m_1} − ϕ(A^{m_1})·1)·(B^{n_2} − ϕ(B^{n_2})·1) ···) = 0

i.e., ϕ(alternating product of centered words in A and in B) = 0
![Page 57: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/57.jpg)
Theorem [Voiculescu 1991]: Consider N × N random matrices A and B such that

• A has an asymptotic eigenvalue distribution for N → ∞, and B has an asymptotic eigenvalue distribution for N → ∞
• A and B are independent (i.e., the entries of A are independent from the entries of B)
• B is a unitarily invariant ensemble (i.e., the joint distribution of its entries does not change under unitary conjugation)

Then, for N → ∞, A and B are free.
![Page 58: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/58.jpg)
Definition of Freeness

Let (A, ϕ) be a non-commutative probability space, i.e., A is a unital algebra and ϕ : A → C is a unital linear functional (i.e., ϕ(1) = 1).

Unital subalgebras A_i (i ∈ I) are free or freely independent, if ϕ(a_1 ··· a_n) = 0 whenever

• a_i ∈ A_{j(i)}, j(i) ∈ I for all i, and j(1) ≠ j(2) ≠ ··· ≠ j(n)
• ϕ(a_i) = 0 for all i

Random variables x_1, ..., x_n ∈ A are free if their generated unital subalgebras A_i := algebra(1, x_i) are so.
![Page 59: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/59.jpg)
What is Freeness?

Freeness between A and B is an infinite set of equations relating various moments in A and B:

ϕ(p_1(A)q_1(B)p_2(A)q_2(B) ···) = 0

(for polynomials p_i, q_i with ϕ(p_i(A)) = 0 = ϕ(q_i(B)))

Basic observation: freeness between A and B is actually a rule for calculating mixed moments in A and B from the moments of A and the moments of B:

ϕ(A^{n_1}B^{m_1}A^{n_2}B^{m_2} ···) = polynomial(ϕ(A^i), ϕ(B^j))
![Page 60: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/60.jpg)
Example:

ϕ((A^n − ϕ(A^n)1)(B^m − ϕ(B^m)1)) = 0,

thus

ϕ(A^nB^m) − ϕ(A^n·1)ϕ(B^m) − ϕ(A^n)ϕ(1·B^m) + ϕ(A^n)ϕ(B^m)ϕ(1·1) = 0,

and hence

ϕ(A^nB^m) = ϕ(A^n)·ϕ(B^m)
![Page 63: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/63.jpg)
Freeness is a rule for calculating mixed moments, analogous to the concept of independence for random variables. Thus freeness is also called free independence.
![Page 64: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/64.jpg)
Freeness is a rule for calculating mixed moments, analogous to the concept of independence for random variables.

Note: free independence is a different rule from classical independence; free independence occurs typically for non-commuting random variables, like operators on Hilbert spaces or (random) matrices.

Example:

ϕ((A − ϕ(A)1)·(B − ϕ(B)1)·(A − ϕ(A)1)·(B − ϕ(B)1)) = 0,

which results in

ϕ(ABAB) = ϕ(AA)·ϕ(B)·ϕ(B) + ϕ(A)·ϕ(A)·ϕ(BB) − ϕ(A)·ϕ(B)·ϕ(A)·ϕ(B)
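This formula can be checked numerically (a sketch, not from the slides; the matrix size, the seed, and the identity shifts are my own choices): by the asymptotic-freeness theorem above, two independent GUE matrices are asymptotically free, so for large N the normalized trace should approximately satisfy the ABAB relation.

```python
import numpy as np

def gue(n, rng):
    # GUE matrix, normalized so that its trace-moments stay O(1) as n grows
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (g + g.conj().T) / (2 * np.sqrt(n))

rng = np.random.default_rng(0)
n = 1500
# shift by multiples of the identity so that phi(A), phi(B) are nonzero
A = gue(n, rng) + 1.0 * np.eye(n)
B = gue(n, rng) + 0.5 * np.eye(n)

tr = lambda M: (np.trace(M) / n).real  # normalized trace, playing the role of phi

lhs = tr(A @ B @ A @ B)
rhs = (tr(A @ A) * tr(B) ** 2 + tr(A) ** 2 * tr(B @ B)
       - tr(A) * tr(B) * tr(A) * tr(B))
print(abs(lhs - rhs))  # small, and shrinking as n grows
```

The discrepancy is of order 1/N, in line with the almost sure convergence described at the start of the talk.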
![Page 66: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/66.jpg)
Motivation for the combinatorics of freeness: the free (and classical) CLT

Consider a_1, a_2, ... ∈ (A, ϕ) which are

• identically distributed
• centered and normalized: ϕ(a_i) = 0 and ϕ(a_i^2) = 1
• either classically independent or freely independent

What can we say about

S_n := (a_1 + ··· + a_n)/√n   as n → ∞ ?
![Page 67: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/67.jpg)
We say that S_n converges (in distribution) to s if

lim_{n→∞} ϕ(S_n^m) = ϕ(s^m)   for all m ∈ N

We have

ϕ(S_n^m) = (1/n^{m/2}) · ϕ[(a_1 + ··· + a_n)^m]
         = (1/n^{m/2}) · ∑_{i(1),...,i(m)=1}^{n} ϕ[a_{i(1)} ··· a_{i(m)}]

Note:

ϕ[a_{i(1)} ··· a_{i(m)}] = ϕ[a_{j(1)} ··· a_{j(m)}]   whenever   ker i = ker j
![Page 68: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/68.jpg)
For example, i = (1,3,1,5,3) and j = (3,4,3,6,4):

ϕ[a_1 a_3 a_1 a_5 a_3] = ϕ[a_3 a_4 a_3 a_6 a_4]

because independence/freeness allows us to express

ϕ[a_1 a_3 a_1 a_5 a_3] = polynomial(ϕ(a_1), ϕ(a_1^2), ϕ(a_3), ϕ(a_3^2), ϕ(a_5))
ϕ[a_3 a_4 a_3 a_6 a_4] = polynomial(ϕ(a_3), ϕ(a_3^2), ϕ(a_4), ϕ(a_4^2), ϕ(a_6))

and

ϕ(a_1) = ϕ(a_3), ϕ(a_1^2) = ϕ(a_3^2)
ϕ(a_3) = ϕ(a_4), ϕ(a_3^2) = ϕ(a_4^2), ϕ(a_5) = ϕ(a_6)

We put

κ_π := ϕ[a_1 a_3 a_1 a_5 a_3]   where   π := ker i = ker j = {{1,3}, {2,5}, {4}}

π ∈ P(5) is a partition of {1,2,3,4,5}.
![Page 69: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/69.jpg)
Thus

ϕ(S_n^m) = (1/n^{m/2}) · ∑_{i(1),...,i(m)=1}^{n} ϕ[a_{i(1)} ··· a_{i(m)}]
         = (1/n^{m/2}) · ∑_{π∈P(m)} κ_π · #{i : ker i = π}

Note:

#{i : ker i = π} = n(n − 1) ··· (n − #π + 1) ∼ n^{#π}

So

ϕ(S_n^m) ∼ ∑_{π∈P(m)} κ_π · n^{#π − m/2}
![Page 70: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/70.jpg)
No singletons in the limit

Consider π ∈ P(m) with a singleton:

π = {..., {k}, ...},

thus

κ_π = ϕ(a_{i(1)} ··· a_{i(k)} ··· a_{i(m)})
    = ϕ(a_{i(1)} ··· a_{i(k−1)} a_{i(k+1)} ··· a_{i(m)}) · ϕ(a_{i(k)}) = 0,

since ϕ(a_{i(k)}) = 0.

Thus: κ_π = 0 if π has a singleton; i.e.,

κ_π ≠ 0   ⟹   π = {V_1, ..., V_r} with #V_j ≥ 2 for all j   ⟹   r = #π ≤ m/2
![Page 71: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/71.jpg)
So in

ϕ(S_n^m) ∼ ∑_{π∈P(m)} κ_π · n^{#π − m/2}

only those π survive for n → ∞ for which

• π has no singleton, i.e., no block of size 1
• π has exactly m/2 blocks

Such π are exactly those where each block has size 2, i.e.,

π ∈ P_2(m) := {π ∈ P(m) | π is a pairing}
![Page 72: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/72.jpg)
Thus we have:

lim_{n→∞} ϕ(S_n^m) = ∑_{π∈P_2(m)} κ_π

In particular: odd moments are zero (because there are no pairings of an odd number of elements), thus the limit distribution is symmetric.

Question: What are the even moments?

This depends on the κ_π's. The actual value of those is now different for the classical and the free case!
![Page 73: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/73.jpg)
Classical CLT: assume the a_i are independent

If the a_i commute and are independent, then

κ_π = ϕ(a_{i(1)} ··· a_{i(2k)}) = 1   for all π ∈ P_2(2k)

Thus

lim_{n→∞} ϕ(S_n^m) = #P_2(m) =
  0,                            m odd
  (m − 1)(m − 3) ··· 5·3·1,     m even

Those limit moments are the moments of a Gaussian distribution of variance 1.
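The double-factorial count of pairings can be verified by brute force (a small sketch; the recursive enumeration is my own, not from the slides — it pairs the first element with each possible partner and recurses):

```python
def pairings(elems):
    # enumerate all pairings of the tuple elems: pair the first element
    # with each possible partner, then recurse on what is left
    if not elems:
        yield []
        return
    first = elems[0]
    for j in range(1, len(elems)):
        rest = elems[1:j] + elems[j + 1:]
        for p in pairings(rest):
            yield [(first, elems[j])] + p

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

for k in range(1, 5):
    m = 2 * k
    count = len(list(pairings(tuple(range(m)))))
    print(m, count, double_factorial(m - 1))  # counts agree: 1, 3, 15, 105
```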
![Page 74: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/74.jpg)
Free CLT: assume the a_i are free

If the a_i are free, then, for π ∈ P_2(2k),

κ_π =
  0,   π is crossing
  1,   π is non-crossing

E.g.,

κ_{{1,6},{2,5},{3,4}} = ϕ(a_1 a_2 a_3 a_3 a_2 a_1)
  = ϕ(a_3 a_3) · ϕ(a_1 a_2 a_2 a_1)
  = ϕ(a_3 a_3) · ϕ(a_2 a_2) · ϕ(a_1 a_1)
  = 1

but

κ_{{1,5},{2,3},{4,6}} = ϕ(a_1 a_2 a_2 a_3 a_1 a_3)
  = ϕ(a_2 a_2) · ϕ(a_1 a_3 a_1 a_3)
  = 0   (since ϕ(a_1 a_3 a_1 a_3) = 0)
![Page 75: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/75.jpg)
Free CLT: assume the a_i are free

Put

NC_2(m) := {π ∈ P_2(m) | π is non-crossing}

Thus

lim_{n→∞} ϕ(S_n^m) = #NC_2(m) =
  0,                                   m odd
  c_k = (1/(k+1))·(2k choose k),       m = 2k even

Those limit moments are the moments of a semicircular distribution of variance 1,

lim_{n→∞} ϕ(S_n^m) = (1/(2π)) ∫_{−2}^{2} t^m √(4 − t^2) dt
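One can check numerically that these integrals really produce the Catalan numbers (a quick sketch using a midpoint rule; the step count is an arbitrary choice of mine):

```python
import math

def catalan(k):
    # c_k = (1/(k+1)) * binom(2k, k)
    return math.comb(2 * k, k) // (k + 1)

def semicircle_moment(m, steps=20000):
    # midpoint rule for (1/(2*pi)) * integral_{-2}^{2} t^m * sqrt(4 - t^2) dt
    h = 4.0 / steps
    total = 0.0
    for i in range(steps):
        t = -2.0 + (i + 0.5) * h
        total += t ** m * math.sqrt(4.0 - t * t)
    return total * h / (2 * math.pi)

for k in range(5):
    print(2 * k, catalan(k), round(semicircle_moment(2 * k), 4))
```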
![Page 76: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/76.jpg)
How to recognize the Catalan numbers c_k

Put

c_k := #NC_2(2k).

We have

c_k = ∑_{π∈NC_2(2k)} 1 = ∑_{i=1}^{k} ∑_{π = {1,2i} ∪ π_1 ∪ π_2} 1 = ∑_{i=1}^{k} c_{i−1} c_{k−i}

This recursion, together with c_0 = 1, c_1 = 1, determines the sequence of Catalan numbers:

(c_k)_{k≥0} = 1, 1, 2, 5, 14, 42, 132, 429, ...
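The recursion translates directly into a few lines of code (a sketch):

```python
def catalan_numbers(n):
    # c_0 = 1 and c_k = sum_{i=1}^{k} c_{i-1} * c_{k-i}: split a non-crossing
    # pairing of {1, ..., 2k} at the block {1, 2i} containing the element 1
    c = [1]
    for k in range(1, n + 1):
        c.append(sum(c[i - 1] * c[k - i] for i in range(1, k + 1)))
    return c

print(catalan_numbers(7))  # [1, 1, 2, 5, 14, 42, 132, 429]
```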
![Page 77: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/77.jpg)
Understanding the freeness rule: the idea of cumulants

• write moments in terms of other quantities, which we call free cumulants
• freeness is much easier to describe on the level of free cumulants: vanishing of mixed cumulants
• the relation between moments and cumulants is given by summing over non-crossing or planar partitions
![Page 78: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/78.jpg)
Non-crossing partitions

A partition of {1, ..., n} is a decomposition π = {V_1, ..., V_r} with

V_i ≠ ∅,   V_i ∩ V_j = ∅ (i ≠ j),   ⋃_i V_i = {1, ..., n}

The V_i are the blocks of π ∈ P(n).

π is non-crossing if we do not have

p_1 < q_1 < p_2 < q_2

such that p_1, p_2 are in the same block, q_1, q_2 are in the same block, but those two blocks are different.

NC(n) := {non-crossing partitions of {1, ..., n}}

NC(n) is actually a lattice with refinement order.
![Page 79: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/79.jpg)
Moments and cumulants

For a unital linear functional

ϕ : A → C

we define the cumulant functionals κ_n (for all n ≥ 1)

κ_n : A^n → C

as multilinear functionals by the moment-cumulant relation

ϕ(A_1 ··· A_n) = ∑_{π∈NC(n)} κ_π[A_1, ..., A_n]

Note: classical cumulants are defined by a similar formula, where NC(n) is replaced by P(n).
![Page 80: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/80.jpg)
ϕ(A_1) = κ_1(A_1)

ϕ(A_1A_2) = κ_2(A_1, A_2) + κ_1(A_1)κ_1(A_2)

thus

κ_2(A_1, A_2) = ϕ(A_1A_2) − ϕ(A_1)ϕ(A_2)
![Page 81: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/81.jpg)
ϕ(A_1A_2A_3) = κ_3(A_1, A_2, A_3)
  + κ_1(A_1)κ_2(A_2, A_3)
  + κ_2(A_1, A_2)κ_1(A_3)
  + κ_2(A_1, A_3)κ_1(A_2)
  + κ_1(A_1)κ_1(A_2)κ_1(A_3)
![Page 82: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/82.jpg)
ϕ(A_1A_2A_3A_4) = κ_4(A_1, A_2, A_3, A_4) + κ_1(A_1)κ_3(A_2, A_3, A_4)
  + κ_1(A_2)κ_3(A_1, A_3, A_4) + κ_1(A_3)κ_3(A_1, A_2, A_4)
  + κ_3(A_1, A_2, A_3)κ_1(A_4) + κ_2(A_1, A_2)κ_2(A_3, A_4)
  + κ_2(A_1, A_4)κ_2(A_2, A_3) + κ_1(A_1)κ_1(A_2)κ_2(A_3, A_4)
  + κ_1(A_1)κ_2(A_2, A_3)κ_1(A_4) + κ_2(A_1, A_2)κ_1(A_3)κ_1(A_4)
  + κ_1(A_1)κ_2(A_2, A_4)κ_1(A_3) + κ_2(A_1, A_4)κ_1(A_2)κ_1(A_3)
  + κ_2(A_1, A_3)κ_1(A_2)κ_1(A_4) + κ_1(A_1)κ_1(A_2)κ_1(A_3)κ_1(A_4)
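The 14 summands correspond exactly to the 14 non-crossing partitions of {1, 2, 3, 4}; of the 15 set partitions, only the crossing pairing {{1,3},{2,4}} is absent. A brute-force check (my own enumeration, not from the slides):

```python
from itertools import combinations

def set_partitions(s):
    # all set partitions of the tuple s, built by choosing the block of s[0]
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            remaining = tuple(x for x in rest if x not in others)
            for p in set_partitions(remaining):
                yield [(first,) + others] + p

def has_crossing(p):
    # a crossing is p1 < q1 < p2 < q2 with p1, p2 in one block, q1, q2 in another
    for b1 in p:
        for b2 in p:
            if b1 is b2:
                continue
            for p1 in b1:
                for p2 in b1:
                    for q1 in b2:
                        for q2 in b2:
                            if p1 < q1 < p2 < q2:
                                return True
    return False

all4 = list(set_partitions((1, 2, 3, 4)))
nc4 = [p for p in all4 if not has_crossing(p)]
print(len(all4), len(nc4))  # 15 set partitions, 14 of them non-crossing
```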
![Page 83: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/83.jpg)
Freeness = vanishing of mixed cumulants

Theorem [Speicher 1994]: The fact that A and B are free is equivalent to the fact that

κ_n(C_1, ..., C_n) = 0

whenever

• n ≥ 2
• C_i ∈ {A, B} for all i
• there are i, j such that C_i = A, C_j = B
![Page 84: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/84.jpg)
Freeness = vanishing of mixed cumulants
free product = direct sum of cumulants

ϕ(A^n) is given by a sum over blue planar diagrams;
ϕ(B^m) is given by a sum over red planar diagrams;

then: for A and B free, ϕ(A^{n_1}B^{m_1}A^{n_2} ···) is given by a sum over planar diagrams with monochromatic (blue or red) blocks
![Page 85: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/85.jpg)
Vanishing of Mixed Cumulants

ϕ(ABAB) = κ_1(A)κ_1(A)κ_2(B,B) + κ_2(A,A)κ_1(B)κ_1(B) + κ_1(A)κ_1(B)κ_1(A)κ_1(B)
![Page 86: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/86.jpg)
Sum of Free Variables

Consider A, B free.

Then, by freeness, the moments of A + B are uniquely determined by the moments of A and the moments of B.

Notation: We say the distribution of A + B is the free convolution of the distribution of A and the distribution of B,

µ_{A+B} = µ_A ⊞ µ_B.
![Page 87: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/87.jpg)
Sum of Free Variables

In principle, freeness determines this, but the concrete nature of this rule on the level of moments is not a priori clear.

Example:

ϕ((A+B)^1) = ϕ(A) + ϕ(B)
ϕ((A+B)^2) = ϕ(A^2) + 2ϕ(A)ϕ(B) + ϕ(B^2)
ϕ((A+B)^3) = ϕ(A^3) + 3ϕ(A^2)ϕ(B) + 3ϕ(A)ϕ(B^2) + ϕ(B^3)
ϕ((A+B)^4) = ϕ(A^4) + 4ϕ(A^3)ϕ(B) + 4ϕ(A^2)ϕ(B^2)
  + 2(ϕ(A^2)ϕ(B)ϕ(B) + ϕ(A)ϕ(A)ϕ(B^2) − ϕ(A)ϕ(B)ϕ(A)ϕ(B))
  + 4ϕ(A)ϕ(B^3) + ϕ(B^4)
![Page 88: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/88.jpg)
Sum of Free Variables

The corresponding rule on the level of free cumulants is easy: If A and B are free, then, expanding by multilinearity,

κ_n(A+B, A+B, ..., A+B) = κ_n(A, A, ..., A) + κ_n(B, B, ..., B) + κ_n(..., A, B, ...) + ···

where the mixed terms vanish by the theorem above.
![Page 90: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/90.jpg)
Sum of Free Variables

The corresponding rule on the level of free cumulants is easy: If A and B are free, then

κ_n(A+B, A+B, ..., A+B) = κ_n(A, A, ..., A) + κ_n(B, B, ..., B)

i.e., we have additivity of cumulants for free variables:

κ_n^{A+B} = κ_n^A + κ_n^B

Also: the combinatorial relation between moments and cumulants can be rewritten easily as a relation between corresponding formal power series.
![Page 92: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/92.jpg)
Sum of Free Variables

Consider one random variable A ∈ A and define its Cauchy transform G and its R-transform R by

G(z) = 1/z + ∑_{n=1}^{∞} ϕ(A^n)/z^{n+1},   R(z) = ∑_{n=1}^{∞} κ_n(A, ..., A) z^{n−1}

Theorem [Voiculescu 1986, Speicher 1994]: Then we have

• 1/G(z) + R(G(z)) = z
• R_{A+B}(z) = R_A(z) + R_B(z) if A and B are free
![Page 93: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/93.jpg)
This, together with the relation between Cauchy transform and R-transform and with the Stieltjes inversion formula, gives an effective algorithm for calculating free convolutions, i.e., the asymptotic eigenvalue distribution of sums of random matrices in generic position:

A → G_A → R_A ↘
                R_A + R_B = R_{A+B} → G_{A+B} → A + B
B → G_B → R_B ↗
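As a toy run of this algorithm (a sketch with my own naming; the branch of the square root is the only subtlety): take A and B two free standard semicircular elements. Then R_A(z) = R_B(z) = z, so R_{A+B}(z) = 2z, and solving z = 2G + 1/G gives G_{A+B} in closed form; Stieltjes inversion then recovers the semicircle of variance 2.

```python
import cmath
import math

EDGE = 2 * math.sqrt(2)  # support edge of the variance-2 semicircle

def G_sum(z):
    # R_{A+B}(z) = 2z, so z = 2G + 1/G, i.e. 2G^2 - zG + 1 = 0 and
    # G(z) = (z - sqrt(z^2 - 8)) / 4; writing the root as a product of two
    # square roots keeps the branch with G(z) ~ 1/z on the upper half-plane
    return (z - cmath.sqrt(z - EDGE) * cmath.sqrt(z + EDGE)) / 4

def density(t, eps=1e-8):
    # Stieltjes inversion: -(1/pi) * Im G(t + i*eps) for small eps
    return -G_sum(complex(t, eps)).imag / math.pi

# compare with the expected density sqrt(8 - t^2) / (4*pi) on [-2*sqrt(2), 2*sqrt(2)]
for t in [-2.0, 0.0, 1.0, 2.0]:
    print(t, round(density(t), 6), round(math.sqrt(8 - t * t) / (4 * math.pi), 6))
```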
![Page 94: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/94.jpg)
Example: Wigner + Wishart (M = 2N)
[Figure: eigenvalue histograms of Wigner + Wishart; one panel averaged (N = 100), one panel a single realization; trials = 4000, N = 3000.]
![Page 95: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/95.jpg)
Sum of Free Variables

If A and B are free, then the free (additive) convolution of their distributions is given by

µ_{A+B} = µ_A ⊞ µ_B.

How can we describe this?
![Page 96: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/96.jpg)
On the level of moments this is getting complicated ...

ϕ((A+B)^1) = ϕ(A) + ϕ(B)
ϕ((A+B)^2) = ϕ(A^2) + 2ϕ(A)ϕ(B) + ϕ(B^2)
ϕ((A+B)^3) = ϕ(A^3) + 3ϕ(A^2)ϕ(B) + 3ϕ(A)ϕ(B^2) + ϕ(B^3)
ϕ((A+B)^4) = ϕ(A^4) + 4ϕ(A^3)ϕ(B) + 4ϕ(A^2)ϕ(B^2)
  + 2(ϕ(A^2)ϕ(B)ϕ(B) + ϕ(A)ϕ(A)ϕ(B^2) − ϕ(A)ϕ(B)ϕ(A)ϕ(B))
  + 4ϕ(A)ϕ(B^3) + ϕ(B^4)

... and it's better to consider cumulants.
![Page 97: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/97.jpg)
Sum of Free Variables

Consider the free cumulants

κ_n^A := κ_n(A, A, ..., A)

Then we have, for A and B free, expanding by multilinearity:

κ_n(A+B, A+B, ..., A+B) = κ_n(A, A, ..., A) + κ_n(B, B, ..., B) + κ_n(..., A, B, ...) + ···

where the mixed terms vanish by freeness.
![Page 99: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/99.jpg)
Sum of Free Variables

Consider the free cumulants

κ_n^A := κ_n(A, A, ..., A)

Then we have, for A and B free:

κ_n(A+B, A+B, ..., A+B) = κ_n(A, A, ..., A) + κ_n(B, B, ..., B)

i.e., we have additivity of cumulants for free variables:

κ_n^{A+B} = κ_n^A + κ_n^B

But: how well do we understand the relation between moments and free cumulants?
![Page 101: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/101.jpg)
Relation between moments and free cumulants

We have

m_n := ϕ(A^n)   (moments)

and

κ_n := κ_n(A, A, ..., A)   (free cumulants)

Combinatorially, the relation between them is given by

m_n = ϕ(A^n) = ∑_{π∈NC(n)} κ_π

Example:

m_1 = κ_1,   m_2 = κ_2 + κ_1^2,   m_3 = κ_3 + 3κ_2κ_1 + κ_1^3
![Page 102: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/102.jpg)
m_3 = κ_3 + 3κ_2κ_1 + κ_1^3   (one term for each of the 5 partitions in NC(3))

Theorem [Speicher 1994]: Consider the formal power series

M(z) = 1 + ∑_{n=1}^{∞} m_n z^n,   C(z) = 1 + ∑_{n=1}^{∞} κ_n z^n

Then the relation

m_n = ∑_{π∈NC(n)} κ_π

between the coefficients is equivalent to the relation

M(z) = C[zM(z)]
![Page 103: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/103.jpg)
Proof

First we get the following recursive relation between cumulants and moments (decompose π ∈ NC(n) according to the block containing the element 1: it has some size s, and the remaining elements fall into s intervals of sizes i_1, ..., i_s):

m_n = ∑_{π∈NC(n)} κ_π
    = ∑_{s=1}^{n} ∑_{i_1,...,i_s≥0, i_1+···+i_s+s=n} ∑_{π_1∈NC(i_1)} ··· ∑_{π_s∈NC(i_s)} κ_s κ_{π_1} ··· κ_{π_s}
    = ∑_{s=1}^{n} ∑_{i_1,...,i_s≥0, i_1+···+i_s+s=n} κ_s m_{i_1} ··· m_{i_s}
![Page 104: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/104.jpg)
m_n = ∑_{s=1}^{n} ∑_{i_1,...,i_s≥0, i_1+···+i_s+s=n} κ_s m_{i_1} ··· m_{i_s}

Plugging this into the formal power series M(z) gives

M(z) = 1 + ∑_n m_n z^n
     = 1 + ∑_n ∑_{s=1}^{n} ∑_{i_1,...,i_s≥0, i_1+···+i_s+s=n} κ_s z^s · m_{i_1} z^{i_1} ··· m_{i_s} z^{i_s}
     = 1 + ∑_{s=1}^{∞} κ_s z^s (M(z))^s
     = C[zM(z)]
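The recursive relation can be run directly; with κ_2 = 1 and all other cumulants zero it reproduces the Catalan numbers as the even moments of the semicircle (a sketch; the dictionary interface is my own choice):

```python
from itertools import product

def prod(ms, split):
    out = 1
    for i in split:
        out *= ms[i]
    return out

def moments_from_free_cumulants(kappa, n_max):
    # m_0 = 1 and m_n = sum_{s=1}^{n} sum_{i_1+...+i_s = n-s, i_j >= 0}
    #                   kappa_s * m_{i_1} * ... * m_{i_s}
    # kappa is a dict {s: kappa_s}; missing entries count as 0
    m = [1]
    for n in range(1, n_max + 1):
        total = 0
        for s in range(1, n + 1):
            ks = kappa.get(s, 0)
            if ks == 0:
                continue
            for split in product(range(n - s + 1), repeat=s):
                if sum(split) == n - s:
                    total += ks * prod(m, split)
        m.append(total)
    return m

# semicircular element: kappa_2 = 1, all other free cumulants vanish
print(moments_from_free_cumulants({2: 1}, 8))  # [1, 0, 1, 0, 2, 0, 5, 0, 14]
```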
![Page 105: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/105.jpg)
Remark on classical cumulants

Classical cumulants c_k are combinatorially defined by

m_n = ∑_{π∈P(n)} c_π

In terms of the generating power series

M(z) = 1 + ∑_{n=1}^{∞} (m_n/n!) z^n,   C(z) = ∑_{n=1}^{∞} (c_n/n!) z^n

this is equivalent to

C(z) = log M(z)
![Page 106: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/106.jpg)
From moment series to Cauchy transform

Instead of M(z) we consider the Cauchy transform

G(z) := ϕ(1/(z − A)) = ∫ 1/(z − t) dµ_A(t) = ∑_n ϕ(A^n)/z^{n+1} = (1/z) M(1/z)

and instead of C(z) we consider the R-transform

R(z) := ∑_{n≥0} κ_{n+1} z^n = (C(z) − 1)/z

Then M(z) = C[zM(z)] becomes

R[G(z)] + 1/G(z) = z   or   G[R(z) + 1/z] = z
![Page 107: Free Probability Theory and Random Matrices...This is the realm of free probability theory. Note: If several N N random matrices Aand B are involved then the eigenvalue distribution](https://reader035.fdocuments.in/reader035/viewer/2022071504/61244e80713d341daa00f6e3/html5/thumbnails/107.jpg)
What is the advantage of G(z) over M(z)?

For any probability measure µ, its Cauchy transform

$$G(z):=\int\frac{1}{z-t}\,d\mu(t)$$

is an analytic function G : C⁺ → C⁻, and we can recover µ from G by the Stieltjes inversion formula

$$d\mu(t)=-\frac{1}{\pi}\lim_{\varepsilon\to 0}\Im\, G(t+i\varepsilon)\,dt$$
Example: semicircular distribution µ_s

µ_s has moments given by the Catalan numbers or, equivalently, has cumulants

$$\kappa_n=\begin{cases}0,& n\neq 2\\ 1,& n=2\end{cases}$$

(because $m_n=\sum_{\pi\in NC_2(n)}\kappa_\pi$, i.e., $\kappa_\pi=0$ for any $\pi\in NC(n)$ which is not a pairing), thus

$$R(z)=\sum_{n\ge 0}\kappa_{n+1}z^n=\kappa_2\cdot z=z$$

and hence

$$z=R[G(z)]+\frac{1}{G(z)}=G(z)+\frac{1}{G(z)},\qquad\text{or}\qquad G(z)^2+1=zG(z)$$
$$G(z)^2+1=zG(z)\qquad\text{thus}\qquad G(z)=\frac{z\pm\sqrt{z^2-4}}{2}$$

We take the "−" sign, because G(z) ∼ 1/z for z → ∞; then

$$d\mu_s(t)=-\frac{1}{\pi}\,\Im\Bigl(\frac{t-\sqrt{t^2-4}}{2}\Bigr)dt
=\begin{cases}\dfrac{1}{2\pi}\sqrt{4-t^2}\,dt,& t\in[-2,2]\\[1ex] 0,& \text{otherwise}\end{cases}$$

[Plot: the semicircle density on [−2, 2]]
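The semicircle computation above can be checked numerically. The sketch below (my own code; the branch choice via two square-root factors is a standard trick to get G(z) ∼ 1/z at infinity) verifies the subordination relation R[G(z)] + 1/G(z) = z with R(w) = w, and recovers the density at t = 0 by Stieltjes inversion.

```python
import cmath, math

def G_semicircle(z):
    """Cauchy transform G(z) = (z - sqrt(z-2)sqrt(z+2))/2 of the semicircle;
    the two-factor square root selects the branch with G(z) ~ 1/z at infinity."""
    return (z - cmath.sqrt(z - 2) * cmath.sqrt(z + 2)) / 2

# R-transform relation for the semicircle: R(w) = w, so G(z) + 1/G(z) = z
z = 3.0 + 0.0j
w = G_semicircle(z)
print(abs(w + 1 / w - z))          # ~ 0

# Stieltjes inversion: -Im G(t + i*eps)/pi -> sqrt(4 - t^2)/(2*pi)
eps = 1e-8
density_at_0 = -G_semicircle(0.0 + 1j * eps).imag / math.pi
print(density_at_0, 1 / math.pi)   # both ~ 0.3183
```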
What is the free binomial $\bigl(\tfrac12\delta_{-1}+\tfrac12\delta_{+1}\bigr)^{\boxplus 2}$?

$$\mu:=\tfrac12\delta_{-1}+\tfrac12\delta_{+1},\qquad \nu:=\mu\boxplus\mu$$

Then

$$G_\mu(z)=\int\frac{1}{z-t}\,d\mu(t)=\frac12\Bigl(\frac{1}{z+1}+\frac{1}{z-1}\Bigr)=\frac{z}{z^2-1}$$

and so

$$z=G_\mu[R_\mu(z)+1/z]=\frac{R_\mu(z)+1/z}{(R_\mu(z)+1/z)^2-1}\qquad\text{thus}\qquad R_\mu(z)=\frac{\sqrt{1+4z^2}-1}{2z}$$

and so

$$R_\nu(z)=2R_\mu(z)=\frac{\sqrt{1+4z^2}-1}{z}$$
$$R_\nu(z)=\frac{\sqrt{1+4z^2}-1}{z}\qquad\text{gives}\qquad G_\nu(z)=\frac{1}{\sqrt{z^2-4}}$$

and thus

$$d\nu(t)=-\frac{1}{\pi}\,\Im\frac{1}{\sqrt{t^2-4}}\,dt
=\begin{cases}\dfrac{1}{\pi\sqrt{4-t^2}},& |t|\le 2\\[1ex] 0,& \text{otherwise}\end{cases}$$

So

$$\Bigl(\tfrac12\delta_{-1}+\tfrac12\delta_{+1}\Bigr)^{\boxplus 2}=\nu=\text{arcsine distribution}$$
[Plot: histogram of the eigenvalues vs. the arcsine density 1/(π√(4 − x²))]

2800 eigenvalues of A + UBU*, where A and B are diagonal matrices with 1400 eigenvalues +1 and 1400 eigenvalues −1, and U is a randomly chosen unitary matrix
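The experiment on this slide is easy to reproduce at a smaller size. The sketch below (my own code; the QR-with-phase-fix construction of a Haar unitary is a standard recipe, and N = 500 is an arbitrary choice) checks the arcsine prediction through its moments: the arcsine law on [−2, 2] has mean 0 and second moment 2, and the sum of two norm-1 operators has norm at most 2.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # smaller than the slide's 1400 + 1400
A = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))
B = A.copy()

# Haar unitary from the QR decomposition of a complex Gaussian matrix,
# with the phases of R's diagonal absorbed into Q
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
Q = Q @ np.diag(R.diagonal() / np.abs(R.diagonal()))

evals = np.linalg.eigvalsh(A + Q @ B @ Q.conj().T)

# arcsine distribution on [-2, 2]: mean 0, second moment 2
print(evals.mean(), (evals**2).mean())
```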
Lessons to learn

Free convolution of discrete distributions is in general not discrete.

Since it is true that

$$\delta_x\boxplus\delta_y=\delta_{x+y}$$

we see, in particular, that ⊞ is not linear, i.e., for α + β = 1

$$(\alpha\mu_1+\beta\mu_2)\boxplus\nu\neq\alpha(\mu_1\boxplus\nu)+\beta(\mu_2\boxplus\nu)$$

Non-commutativity matters: the sum of two commuting projections is a quite easy object; the sum of two non-commuting projections is much harder to grasp.
Product of Free Variables

Consider A, B free. Then, by freeness, the moments of AB are uniquely determined by the moments of A and the moments of B.

Notation: We say the distribution of AB is the free multiplicative convolution of the distribution of A and the distribution of B,

$$\mu_{AB}=\mu_A\boxtimes\mu_B.$$
Caveat: AB is not selfadjoint

Note: ⊠ is in general not an operation on probability measures on R. Even if both A and B are selfadjoint, AB is only selfadjoint if A and B commute (which they don't, if they are free).

But: if B is positive, then we can consider B^{1/2}AB^{1/2} instead of AB. Since A and B are free, it follows that both have the same moments; e.g.,

$$\varphi(B^{1/2}AB^{1/2})=\varphi(B^{1/2}B^{1/2})\varphi(A)=\varphi(B)\varphi(A)=\varphi(AB)$$

So the "right" definition is

$$\mu_A\boxtimes\mu_B=\mu_{B^{1/2}AB^{1/2}}.$$

If we also restrict A to be positive, then this gives a binary operation on probability measures on R₊.
Meaning of ⊠ for random matrices

If A and B are symmetric matrices with positive eigenvalues, then the eigenvalues of AB = (AB^{1/2})B^{1/2} are the same as the eigenvalues of B^{1/2}(AB^{1/2}) and thus are necessarily also real and positive. If A and B are asymptotically free, the eigenvalues of AB are given by µ_A ⊠ µ_B.

However: this is not true any more for three or more matrices! If A, B, C are symmetric matrices with positive eigenvalues, then there is no reason that the eigenvalues of ABC are real.

If A, B, C are asymptotically free, then still the moments of ABC are the same as the moments of C^{1/2}B^{1/2}AB^{1/2}C^{1/2}, i.e., the same as the moments of µ_A ⊠ µ_B ⊠ µ_C. But since the eigenvalues of ABC are not real, the knowledge of the moments is not enough to determine them.
[Plot: scatter of complex eigenvalues]

3000 complex eigenvalues of the product of three independent 3000 × 3000 Wishart matrices
Back to the general theory of ⊠

In principle, the moments of AB are, for A and B free, determined by the moments of A and the moments of B; but again the concrete nature of this rule on the level of moments is not clear...

$$\varphi\bigl((AB)^1\bigr)=\varphi(A)\varphi(B)$$
$$\varphi\bigl((AB)^2\bigr)=\varphi(A^2)\varphi(B)^2+\varphi(A)^2\varphi(B^2)-\varphi(A)^2\varphi(B)^2$$
$$\varphi\bigl((AB)^3\bigr)=\varphi(A^3)\varphi(B)^3+\varphi(A)^3\varphi(B^3)+3\varphi(A)\varphi(A^2)\varphi(B)\varphi(B^2)-3\varphi(A)\varphi(A^2)\varphi(B)^3-3\varphi(A)^3\varphi(B)\varphi(B^2)+2\varphi(A)^3\varphi(B)^3$$

... so let's again look at cumulants.
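The first two moment formulas can be checked against random matrices. The sketch below (my own code and normalizations, not from the slides) uses two independent Wishart matrices, which are asymptotically free, and compares the empirical traces of AB and (AB)² with the free-probability predictions built from the marginal moments.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 400, 2000                       # Wishart ratio c = N/M = 0.2

def wishart():
    X = rng.standard_normal((N, M))
    return X @ X.T / M                 # normalized so that tr_N(W) ~ 1

A, B = wishart(), wishart()
tr = lambda X: np.trace(X) / N         # normalized trace, the matrix analogue of phi

# predictions from the moment formulas on the slide
p1 = tr(A) * tr(B)
p2 = tr(A @ A) * tr(B)**2 + tr(A)**2 * tr(B @ B) - tr(A)**2 * tr(B)**2

AB = A @ B
print(tr(AB), p1)                      # first moments agree
print(tr(AB @ AB), p2)                 # second moments agree (~ 1.4 for c = 0.2)
```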
Product of Free Variables

The corresponding rule on the level of free cumulants is relatively easy (at least conceptually): If A and B are free then

$$\kappa_n(AB,AB,\dots,AB)=\sum_{\pi\in NC(n)}\kappa_\pi[A,A,\dots,A]\cdot\kappa_{K(\pi)}[B,B,\dots,B],$$

where K(π) is the Kreweras complement of π: K(π) is the maximal σ with

π ∈ NC(blue), σ ∈ NC(red), π ∪ σ ∈ NC(blue ∪ red)

[Diagram: a non-crossing partition π of the A-positions in ABABAB and its Kreweras complement K(π) on the B-positions]
[Diagrams: the five non-crossing partitions of the A-positions in ABABAB, each drawn together with its Kreweras complement on the B-positions]

For n = 3 the five partitions in NC(3) contribute the terms

κ₃(A,A,A) · κ₁(B)κ₁(B)κ₁(B)
κ₂(A,A)κ₁(A) · κ₂(B,B)κ₁(B)
κ₂(A,A)κ₁(A) · κ₂(B,B)κ₁(B)
κ₂(A,A)κ₁(A) · κ₂(B,B)κ₁(B)
κ₁(A)κ₁(A)κ₁(A) · κ₃(B,B,B)
Product of Free Variables

Theorem [Voiculescu 1987; Haagerup 1997; Nica, Speicher 1997]: Put

$$M_A(z):=\sum_{m=1}^{\infty}\varphi(A^m)z^m$$

and define

$$S_A(z):=\frac{1+z}{z}\,M_A^{\langle -1\rangle}(z)\qquad\text{(S-transform of }A\text{)}$$

Then: If A and B are free, we have

$$S_{AB}(z)=S_A(z)\cdot S_B(z).$$
Example: Wishart x Wishart (M = 5N)

[Plots: "averaged Wishart x Wishart; N=1000" and "... one realization ...; trials=10000 N=2000"]
Corners of Random Matrices

Theorem [Nica, Speicher 1996]: The asymptotic eigenvalue distribution of a corner B of ratio α of a unitarily invariant random matrix A is given by

$$\mu_B=\mu_{\alpha A}^{\boxplus 1/\alpha}$$

In particular, a corner of size α = 1/2 has, up to rescaling, the same distribution as µ_A ⊞ µ_A.
The upper left corner of size N/2 × N/2 of a projection matrix with N/2 eigenvalues 0 and N/2 eigenvalues 1 is, up to rescaling, the same as a free Bernoulli, i.e., the arcsine distribution.

[Plots: "averaged upper corner; N=640" and "... one realization ...; trials=5000 N=2048"]
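This corner experiment is also easy to reproduce. The sketch below (my own code; N = 1000 and the seed are arbitrary) rotates a rank-N/2 projection by a Haar unitary, takes the upper-left N/2 × N/2 corner, and checks the arcsine prediction on [0, 1] through its first two moments (mean 1/2, second moment 3/8).

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000
P = np.diag([1.0] * (N // 2) + [0.0] * (N // 2))   # projection of rank N/2

# Haar unitary via QR of a complex Gaussian matrix, phases fixed
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
Q = Q @ np.diag(R.diagonal() / np.abs(R.diagonal()))

B = (Q @ P @ Q.conj().T)[: N // 2, : N // 2]       # upper-left corner
evals = np.linalg.eigvalsh(B)

# arcsine distribution on [0, 1]: mean 1/2, second moment 3/8
print(evals.mean(), (evals**2).mean())
```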
This actually shows

Theorem [Nica, Speicher 1996]: For any probability measure µ on R, there exists a semigroup (µ_t)_{t≥1} of probability measures such that µ₁ = µ and

$$\mu_s\boxplus\mu_t=\mu_{s+t}\qquad (s,t\ge 1)$$

(Note: if this µ_t exists for all t ≥ 0, then µ is freely infinitely divisible.)

In the classical case, such a statement is not true!
Polynomials of independent Gaussian random matrices

Consider m independent Gaussian random matrices X_N^{(1)}, ..., X_N^{(m)}. They converge, for N → ∞, to m free semicircular elements s₁, ..., s_m.

In particular, for any selfadjoint polynomial p in m non-commuting variables,

$$p(X_N^{(1)},\dots,X_N^{(m)})\to p(s_1,\dots,s_m)$$

Question: What can we say about the distribution of p(s₁, ..., s_m)?
Eigenvalue distributions of

$$p(x_1)=x_1^2,\qquad p(x_1,x_2)=x_1x_2+x_2x_1,\qquad p(x_1,x_2,x_3)=x_1x_2^2x_1+x_1x_3^2x_1+2x_2$$

where x₁, x₂, and x₃ are independent 4000 × 4000 Gaussian random matrices; in the first two cases the asymptotic eigenvalue distribution can be calculated by free probability tools as the solid curve, for the third case no explicit solution exists.
What are qualitative features of such distributions?

What can we say about the distribution of

$$\lim_{N\to\infty}p(X_N^{(1)},\dots,X_N^{(m)})=p(s_1,\dots,s_m)$$

for arbitrary polynomials p?

Conjectures:

• It has no atoms.
• It has a density with respect to Lebesgue measure.
• This density has some (to be specified) regularity properties.
How to attack such questions?
... maybe with ...
• free stochastic analysis
• free Malliavin calculus
A free Brownian motion is given by a family (S(t))_{t≥0} ⊂ (A, φ) of random variables (A a von Neumann algebra, φ a faithful trace), such that

• S(0) = 0
• each increment S(t) − S(s) (s < t) is semicircular with mean 0 and variance t − s, i.e.,

$$d\mu_{S(t)-S(s)}(x)=\frac{1}{2\pi(t-s)}\sqrt{4(t-s)-x^2}\,dx$$

• disjoint increments are free: for 0 < t₁ < t₂ < ··· < t_n,

S(t₁), S(t₂) − S(t₁), ..., S(t_n) − S(t_{n−1}) are free
A free Brownian motion is given

• abstractly, by a family (S(t))_{t≥0} of random variables with
  – S(0) = 0
  – each S(t) − S(s) (s < t) is (0, t − s)-semicircular
  – disjoint increments are free
• concretely, by the sum of creation and annihilation operators on the full Fock space
• asymptotically, as the limit of matrix-valued (Dyson) Brownian motions
Free Brownian motions as matrix limits

Let (X_N(t))_{t≥0} be a symmetric N × N-matrix-valued Brownian motion, i.e.,

$$X_N(t)=\begin{pmatrix}B_{11}(t)&\dots&B_{1N}(t)\\ \vdots&\ddots&\vdots\\ B_{N1}(t)&\dots&B_{NN}(t)\end{pmatrix},\qquad\text{where}$$

• the B_{ij} are, for i ≥ j, independent classical Brownian motions
• B_{ij}(t) = B_{ji}(t).

Then (X_N(t))_{t≥0} converges in distribution to (S(t))_{t≥0}, in the sense that almost surely

$$\lim_{N\to\infty}\operatorname{tr}\bigl(X_N(t_1)\cdots X_N(t_n)\bigr)=\varphi\bigl(S(t_1)\cdots S(t_n)\bigr)\qquad\forall\, 0\le t_1,t_2,\dots,t_n$$
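A one-time-slice version of this convergence is easy to simulate. The sketch below (my own code; it makes explicit the 1/√N normalization of the matrix entries that is implicit in the convergence statement) checks that the moments of X_N(1) approach the semicircular values φ(S(1)²) = 1 and φ(S(1)⁴) = 2.

```python
import numpy as np

rng = np.random.default_rng(3)
N, t = 1000, 1.0

# symmetric Gaussian matrix = matrix Brownian motion at time t,
# entries scaled by sqrt(t/N) so the moments stay finite as N grows
G = rng.standard_normal((N, N)) * np.sqrt(t / N)
X = (G + G.T) / np.sqrt(2)

tr = lambda A: np.trace(A) / N
print(tr(X @ X), tr(X @ X @ X @ X))    # ~ t and ~ 2 t^2 (Catalan numbers)
```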
Intermezzo on realisations on Fock spaces
Classical Brownian motion can be realized quite canonically by
operators on the symmetric Fock space.
Similarly, free Brownian motion can be realized quite canonically
by operators on the full Fock space.
First: symmetric Fock space ...

For a Hilbert space H put

$$\mathcal{F}_s(H):=\bigoplus_{n\ge 0}H^{\otimes_{\mathrm{sym}}n},\qquad\text{where } H^{\otimes 0}=\mathbb{C}\Omega$$

with inner product

$$\langle f_1\otimes\cdots\otimes f_n,\ g_1\otimes\cdots\otimes g_m\rangle_{\mathrm{sym}}=\delta_{nm}\sum_{\pi\in S_n}\prod_{i=1}^{n}\langle f_i,g_{\pi(i)}\rangle.$$

Define creation and annihilation operators

$$a^*(f)\,f_1\otimes\cdots\otimes f_n=f\otimes f_1\otimes\cdots\otimes f_n$$
$$a(f)\,f_1\otimes\cdots\otimes f_n=\sum_{i=1}^{n}\langle f,f_i\rangle\,f_1\otimes\cdots\otimes\check f_i\otimes\cdots\otimes f_n\qquad(\check f_i\text{ omitted})$$
$$a(f)\,\Omega=0$$
... and classical Brownian motion

Put φ(·) := ⟨Ω, ·Ω⟩ and x(f) := a(f) + a*(f); then (x(f))_{f∈H, f real} is a Gaussian family with covariance

$$\varphi\bigl(x(f)x(g)\bigr)=\langle f,g\rangle.$$

In particular, choose H := L²(R₊), f_t := 1_{[0,t)}; then

$$B_t:=a(1_{[0,t)})+a^*(1_{[0,t)})\quad\text{is a classical Brownian motion,}$$

meaning

$$\varphi(B_{t_1}\cdots B_{t_n})=E[W_{t_1}\cdots W_{t_n}]\qquad\forall\, 0\le t_1,\dots,t_n$$
Now: full Fock space ...

For a Hilbert space H put

$$\mathcal{F}(H):=\bigoplus_{n\ge 0}H^{\otimes n},\qquad\text{where } H^{\otimes 0}=\mathbb{C}\Omega$$

with the usual inner product

$$\langle f_1\otimes\cdots\otimes f_n,\ g_1\otimes\cdots\otimes g_m\rangle=\delta_{nm}\langle f_1,g_1\rangle\cdots\langle f_n,g_n\rangle.$$

Define creation and annihilation operators

$$b^*(f)\,f_1\otimes\cdots\otimes f_n=f\otimes f_1\otimes\cdots\otimes f_n$$
$$b(f)\,f_1\otimes\cdots\otimes f_n=\langle f,f_1\rangle\,f_2\otimes\cdots\otimes f_n$$
$$b(f)\,\Omega=0$$
... and free Brownian motion

Put φ(·) := ⟨Ω, ·Ω⟩ and x(f) := b(f) + b*(f); then (x(f))_{f∈H, f real} is a semicircular family with covariance

$$\varphi\bigl(x(f)x(g)\bigr)=\langle f,g\rangle.$$

In particular, choose H := L²(R₊), f_t := 1_{[0,t)}; then

$$S_t:=b(1_{[0,t)})+b^*(1_{[0,t)})\quad\text{is a free Brownian motion.}$$
Semicircle as real part of the one-sided shift

Consider the case of one-dimensional H. Then this reduces to

basis: Ω = e₀, e₁, e₂, e₃, ...

and the one-sided shift l:

$$l\,e_n=e_{n+1},\qquad l^*e_n=\begin{cases}e_{n-1},& n\ge 1\\ 0,& n=0\end{cases}$$

The one-sided shift is the canonical non-unitary isometry:

$$l^*l=1,\qquad ll^*\neq 1\quad(=1-\text{projection onto }\mathbb{C}\Omega)$$

With φ(a) := ⟨Ω, aΩ⟩ we claim: the distribution of l + l* is the semicircle.
Moments of l + l*

In the calculation of ⟨Ω, (l + l*)ⁿΩ⟩ only those products in creation and annihilation operators contribute where we never annihilate the vacuum and where we start and end at the vacuum. So odd moments are zero.

Examples

φ((l + l*)²): l*l
φ((l + l*)⁴): l*l*ll, l*ll*l
φ((l + l*)⁶): l*l*l*lll, l*ll*l*ll, l*ll*ll*l, l*l*ll*ll, l*l*lll*l

The contributing terms are in clear bijection with non-crossing pairings (or with Dyck paths).
[Diagrams: the five contributing words for n = 6, drawn as Dyck paths]

(l*, l*, l*, l, l, l), (l*, l*, l, l*, l, l), (l*, l, l*, l*, l, l), (l*, l*, l, l, l*, l), (l*, l, l*, l, l*, l)
Stochastic Analysis on "Wigner" space

Starting from a free Brownian motion (S(t))_{t≥0} we define multiple "Wigner" integrals

$$I(f)=\int\cdots\int f(t_1,\dots,t_n)\,dS(t_1)\dots dS(t_n)$$

for scalar-valued functions f ∈ L²(Rⁿ₊), by avoiding the diagonals, i.e., we understand this as

$$I(f)=\underset{\text{all }t_i\text{ distinct}}{\int\cdots\int} f(t_1,\dots,t_n)\,dS(t_1)\dots dS(t_n)$$
Definition of Wigner integrals

More precisely: for f of the form

$$f=1_{[s_1,t_1]\times\cdots\times[s_n,t_n]}$$

for pairwise disjoint intervals [s₁, t₁], ..., [s_n, t_n] we put

$$I(f):=(S_{t_1}-S_{s_1})\cdots(S_{t_n}-S_{s_n})$$

Extend I(·) linearly over the set of all off-diagonal step functions (which is dense in L²(Rⁿ₊)). Then observe the Ito isometry

$$\varphi[I(g)^*I(f)]=\langle f,g\rangle_{L^2(\mathbb{R}^n_+)}$$

and extend I to the closure of the off-diagonal step functions, i.e., to L²(Rⁿ₊).
Note: free stochastic integrals are usually bounded operators

Free Haagerup Inequality [Bozejko 1991; Biane, Speicher 1998]:

$$\Bigl\|\int\cdots\int f(t_1,\dots,t_n)\,dS(t_1)\dots dS(t_n)\Bigr\|\le (n+1)\,\|f\|_{L^2(\mathbb{R}^n_+)}$$
Intermezzo: combinatorics and norms

Consider free semicirculars s₁, s₂, ... of variance 1. Since (s₁ + ··· + s_n)/√n is again a semicircular element of variance 1 (and thus of norm 2), we have

$$\Bigl\|\Bigl(\frac{s_1+\cdots+s_n}{\sqrt n}\Bigr)^k\Bigr\|=2^k$$

The free Haagerup inequality says that this is drastically reduced if we subtract the diagonals, i.e.,

$$\lim_{n\to\infty}\Bigl\|\frac{1}{n^{k/2}}\sum_{\substack{i(1),\dots,i(k)=1\\ \text{all }i(\cdot)\text{ different}}}^{n} s_{i(1)}\cdots s_{i(k)}\Bigr\|=k+1$$
Intermezzo: combinatorics and norms

Note: one can calculate norms from asymptotic knowledge of moments!

If x is selfadjoint and φ faithful (as our φ for the free Brownian motion is), then one has

$$\|x\|=\lim_{p\to\infty}\|x\|_p=\lim_{p\to\infty}\sqrt[p]{\varphi(|x|^p)}=\lim_{m\to\infty}\sqrt[2m]{\varphi(x^{2m})}$$

So, if s is a semicircular element, then

$$\varphi(s^{2m})=c_m=\frac{1}{m+1}\binom{2m}{m}\sim 4^m,$$

thus

$$\|s\|=\lim_{m\to\infty}\sqrt[2m]{c_m}\sim\sqrt[2m]{4^m}=2$$
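The norm-from-moments recipe can be watched numerically. The sketch below (my own code) computes the Catalan numbers exactly and shows that $c_m^{1/(2m)}$ increases toward 2; the convergence is slow because of the polynomial correction to the $4^m$ growth.

```python
import math

def catalan(m):
    # c_m = binomial(2m, m) / (m + 1), an exact integer
    return math.comb(2 * m, m) // (m + 1)

# ||s|| = lim_m c_m^(1/(2m)); monotone but slow convergence to 2
vals = [catalan(m) ** (1 / (2 * m)) for m in (5, 20, 80, 320)]
print(vals)       # increasing toward 2
```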
Exercise: combinatorics and norms

Consider free semicirculars s₁, s₂, ... of variance 1. Prove by considering moments that

$$\Bigl\|\frac{1}{n}\sum_{i,j=1}^{n}s_is_j\Bigr\|=4,\qquad \lim_{n\to\infty}\Bigl\|\frac{1}{n}\sum_{\substack{i,j=1\\ i\neq j}}^{n}s_is_j\Bigr\|=3$$

Realize that the involved NC pairings are in bijection with NC partitions and NC partitions without singletons, respectively.
2000 eigenvalues of the matrix

$$\frac{1}{12}\sum_{\substack{i,j=1\\ i\neq j}}^{12}X_iX_j,$$

where the X_i are independent Gaussian random matrices

[Plot: histogram of the eigenvalues]
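A small version of this experiment is sketched below (my own code; N = 400 and symmetric real Gaussians instead of the slide's size are arbitrary choices). By the exercise, the limiting operator has norm 3, and its spectrum sits roughly in [−1, 3], which the simulated eigenvalues should reflect.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 400, 12

def gauss():
    G = rng.standard_normal((N, N))
    return (G + G.T) / np.sqrt(2 * N)   # normalized so tr_N(X^2) ~ 1

X = [gauss() for _ in range(n)]
# (1/n) * sum_{i != j} X_i X_j is symmetric, since both (i, j) and (j, i) occur
S = sum(X[i] @ X[j] for i in range(n) for j in range(n) if i != j) / n
evals = np.linalg.eigvalsh(S)
print(evals.min(), evals.max())          # support roughly [-1, 3]
```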
Multiplication of Multiple Wigner Integrals

The Ito formula shows up in the multiplication of two multiple Wigner integrals:

$$\int f(t_1)\,dS(t_1)\cdot\int g(t_2)\,dS(t_2)
=\iint f(t_1)g(t_2)\,dS(t_1)dS(t_2)+\int f(t)g(t)\,\underbrace{dS(t)dS(t)}_{dt}
=\iint f(t_1)g(t_2)\,dS(t_1)dS(t_2)+\int f(t)g(t)\,dt$$
Multiplication of Multiple Wigner Integrals

$$\iint f(t_1,t_2)\,dS(t_1)dS(t_2)\cdot\int g(t_3)\,dS(t_3)
=\iiint f(t_1,t_2)g(t_3)\,dS(t_1)dS(t_2)dS(t_3)
+\iint f(t_1,t)g(t)\,dS(t_1)\,\underbrace{dS(t)dS(t)}_{dt}
+\iint f(t,t_2)g(t)\,\underbrace{dS(t)dS(t_2)dS(t)}_{dt\,\varphi[dS(t_2)]=0}$$
Multiplication of Multiple Wigner Integrals

Free Ito Formula [Biane, Speicher 1998]:

$$dS(t)\,A\,dS(t)=\varphi(A)\,dt\qquad\text{for }A\text{ adapted}$$
Multiplication of Multiple Wigner Integrals

Consider f ∈ L²(Rⁿ₊), g ∈ L²(Rᵐ₊). For 0 ≤ p ≤ min(n, m), define the contraction

$$f\overset{p}{\frown}g\in L^2(\mathbb{R}^{n+m-2p}_+)$$

by

$$f\overset{p}{\frown}g\,(t_1,\dots,t_{n+m-2p})
=\int f(t_1,\dots,t_{n-p},s_1,\dots,s_p)\,g(s_p,\dots,s_1,t_{n-p+1},\dots,t_{n+m-2p})\,ds_1\cdots ds_p$$

Then we have

$$I(f)\cdot I(g)=\sum_{p=0}^{\min(n,m)}I(f\overset{p}{\frown}g)$$
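On a discretized grid the contraction is a plain tensor operation. The sketch below (my own code and naming; the step functions and grid size are arbitrary) implements the definition with numpy.tensordot: the last p arguments of f are paired with the first p arguments of g in reversed order and integrated out.

```python
import numpy as np

def contract(f, g, p, dt):
    """Discretized contraction of f (n arguments) with g (m arguments):
    pair the last p axes of f with the first p axes of g in reversed
    order and integrate them out (Riemann sum with mesh dt)."""
    n, m = f.ndim, g.ndim
    assert 0 <= p <= min(n, m)
    if p == 0:
        return np.multiply.outer(f, g)          # no integration: a kernel with n+m arguments
    axes_f = list(range(n - p, n))              # last p axes of f
    axes_g = list(range(p - 1, -1, -1))         # first p axes of g, reversed
    return np.tensordot(f, g, axes=(axes_f, axes_g)) * dt**p

# toy check: for n = m = p = 1 the full contraction is the L^2 pairing
K, dt = 100, 0.01
t = np.arange(K) * dt
f, g = np.sin(t), np.cos(t)
print(contract(f, g, 1, dt))        # ~ integral of sin * cos over [0, 1]
print(contract(f, g, 0, dt).shape)  # (100, 100): a two-argument kernel
```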
Free Chaos Decomposition
One has the canonical isomorphism

L^2(S(t) \mid t \geq 0) \;=\; \bigoplus_{n=0}^{\infty} L^2(\mathbb{R}^n_+), \qquad f = \bigoplus_{n=0}^{\infty} f_n,

via

f \;=\; \sum_{n=0}^{\infty} I(f_n) \;=\; \sum_{n=0}^{\infty} \int \cdots \int f_n(t_1, \ldots, t_n)\, dS(t_1) \cdots dS(t_n).

The set

\{\, I(f_n) \mid f_n \in L^2(\mathbb{R}^n_+) \,\}

is called the n-th chaos.
Note: Polynomials in free semicircular elements

p(s_1, \ldots, s_m)

can be realized as elements from finite chaos

\{\, I(f) \mid f \in \bigoplus_{\text{finite}} L^2(\mathbb{R}^n_+) \,\}
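As a simple instance (the function e is my notation for illustration): take e = 1_{[0,1]}, so that s = I(e) is a standard semicircular element. The multiplication formula for Wigner integrals, applied with n = m = 1, gives

```latex
s^2 \;=\; I(e)\cdot I(e)
     \;=\; I(e \otimes e) + \langle e, e\rangle\, 1
     \;=\; \iint_{[0,1]^2} dS(t_1)\, dS(t_2) \;+\; 1 ,
```

so s^2 - 1 lies in the second chaos; iterating, every polynomial in s decomposes into finitely many chaoses.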
Problems

We are interested in properties of selfadjoint elements from fixed (or finite) chaos, like:

• regularity properties of the distribution of I(f) from fixed (or finite) chaos
– atoms or density?
• distinguishing the distributions of I(f) and I(g) from different chaos

Looking at elements in a fixed chaos gives some quite unexpected constraints; we have an analogue of the classical result of Nualart and Peccati (2005).
Theorem (Kemp, Nourdin, Peccati, Speicher 2012): Consider, for fixed n, a sequence f_1, f_2, \ldots \in L^2(\mathbb{R}^n_+) with f_k^* = f_k and \|f_k\|_2 = 1 for all k \in \mathbb{N}. Then the following statements are equivalent.

(i) We have \lim_{k\to\infty} \varphi[I(f_k)^4] = 2.

(ii) We have, for all p = 1, 2, \ldots, n-1,

\lim_{k\to\infty} f_k \frown_p f_k = 0 \quad \text{in } L^2(\mathbb{R}^{2n-2p}_+).

(iii) The selfadjoint variables I(f_k) converge in distribution to a semicircular variable of variance 1.
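The threshold 2 in statement (i) is the fourth moment of a standard semicircular variable: the even moments of the semicircle law are the Catalan numbers, \varphi(S^{2k}) = C_k, so \varphi(S^4) = C_2 = 2. A quick quadrature cross-check against the density \sqrt{4-x^2}/(2\pi) (the helper names below are mine, for illustration only):

```python
import math
import numpy as np

# Even moments of the standard semicircle law on [-2, 2] are Catalan numbers.
def semicircle_moment(k, n=200_000):
    """Midpoint-rule approximation of the k-th moment of the semicircle law."""
    dx = 4.0 / n
    x = -2.0 + dx * (np.arange(n) + 0.5)  # midpoints of [-2, 2]
    return float(np.sum(x**k * np.sqrt(4.0 - x**2) / (2.0 * np.pi) * dx))

def catalan(k):
    """k-th Catalan number C_k = binom(2k, k) / (k + 1)."""
    return math.comb(2 * k, k) // (k + 1)

print(semicircle_moment(4), catalan(2))
```

The fourth moment thus plays the same distinguished role as in the classical Nualart–Peccati criterion, where the Gaussian fourth moment 3 is the threshold.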
Corollary: For n \geq 2 and f \in L^2(\mathbb{R}^n_+), the law of I(f) is not semicircular.
Quantitative Estimates ...
Given two self-adjoint random variables X, Y, define the distance

d_{C^2}(X,Y) := \sup\bigl\{\, |\varphi[h(X)] - \varphi[h(Y)]| : I_2(h) \leq 1 \,\bigr\},

where

I_2(h) = \|\partial h'\| \qquad \text{and} \qquad \partial X^n = \sum_{k=0}^{n-1} X^k \otimes X^{n-1-k}.

Rigorously: if h is the Fourier transform of a complex measure \nu on \mathbb{R},

h(x) = \hat{\nu}(x) = \int_{\mathbb{R}} e^{ix\xi}\, \nu(d\xi),

then we define

I_2(h) = \int_{\mathbb{R}} \xi^2\, |\nu|(d\xi).
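Two small sanity checks on these definitions (my own illustrations, with hypothetical helper names): the noncommutative derivative \partial X^n collapses to the classical derivative n x^{n-1} when both tensor legs are evaluated at the same scalar, and for h(x) = \cos(ax), which is the Fourier transform of \nu = (\delta_a + \delta_{-a})/2, the rigorous definition gives I_2(h) = a^2.

```python
# dX^n = sum_{k=0}^{n-1} X^k (x) X^{n-1-k}; with both tensor factors set to
# the same scalar x this is sum_k x^k * x^(n-1-k) = n * x^(n-1).
def nc_derivative_at_scalar(n, x):
    """Evaluate the free difference quotient of X^n at (x, x)."""
    return sum(x**k * x**(n - 1 - k) for k in range(n))

# h(x) = cos(a x) has nu = (delta_a + delta_{-a})/2, so
# I_2(h) = integral of xi^2 |nu|(d xi) = a^2 / 2 + a^2 / 2 = a^2.
def I2_of_cos(a):
    return 0.5 * a**2 + 0.5 * (-a) ** 2

print(nc_derivative_at_scalar(5, 2.0))
print(I2_of_cos(3.0))
```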
... in Terms of the Free Gradient Operator
Define the free Malliavin gradient operator by
\nabla_t \Bigl( \int f(t_1, \ldots, t_n)\, dS_{t_1} \cdots dS_{t_n} \Bigr)
:= \sum_{k=1}^{n} \int f(t_1, \ldots, t_{k-1}, t, t_{k+1}, \ldots, t_n)\, dS_{t_1} \cdots dS_{t_{k-1}} \otimes dS_{t_{k+1}} \cdots dS_{t_n}

Theorem (Kemp, Nourdin, Peccati, Speicher):

d_{C^2}(F, S) \;\leq\; \frac{1}{2}\, \varphi \otimes \varphi \left( \Bigl| \int \nabla_s(N^{-1}F)\, \sharp\, (\nabla_s F)^*\, ds - 1 \otimes 1 \Bigr| \right)

(here N is the number operator, N I(f_n) = n\, I(f_n))

But there is no estimate against the fourth moment in this case!
In the classical case one can estimate the corresponding expression of the above gradient, for F living in some n-th chaos, in terms of the fourth moment of the considered variable. This gives a quantitative estimate for the distance between a variable from a fixed chaos and a normal variable in terms of the difference between their fourth moments. In the free case such a general estimate does not seem to exist; at the moment we can only do this for elements F from the second chaos.
Corollary: Let F = I(f) = I(f)^* (f \in L^2(\mathbb{R}^2_+)) be an element from the second chaos with variance 1, i.e., \|f\|_2 = 1, and let S be a semicircular variable with mean 0 and variance 1. Then we have

d_{C^2}(F, S) \;\leq\; \frac{1}{2} \sqrt{\frac{3}{2}} \sqrt{\varphi(F^4) - 2}.
Some general literature
• D. Voiculescu, K. Dykema, A. Nica: Free Random Variables. CRM Monograph Series, Vol. 1, AMS 1992

• F. Hiai, D. Petz: The Semicircle Law, Free Random Variables and Entropy. Math. Surveys and Monogr. 77, AMS 2000

• A. Nica, R. Speicher: Lectures on the Combinatorics of Free Probability. London Mathematical Society Lecture Note Series, vol. 335, Cambridge University Press, 2006

• J. Mingo, R. Speicher: Free Probability and Random Matrices. Coming soon
Some literature on free stochastic analysis
Biane, Speicher: Stochastic Calculus with Respect to Free Brownian Motion and Analysis on Wigner Space. Prob. Theory Rel. Fields, 1998

Kemp, Nourdin, Peccati, Speicher: Wigner Chaos and the Fourth Moment. Ann. Prob. (to appear)

Nourdin, Peccati, Speicher: Multidimensional Semicircular Limits on the Free Wigner Chaos. Preprint, 2011