Lecture 13: Fluctuations. Fluctuations of macroscopic variables. Correlation functions. Response and Fluctuation.
Lecture 13
• Fluctuations.
• Fluctuations of macroscopic variables.
• Correlation functions.
• Response and Fluctuation.
• Density correlation function.
• Theory of random processes.
• Spectral analysis of fluctuations: the Wiener-Khintchine theorem.
• The Nyquist theorem.
• Applications of the Nyquist theorem.
2
We considered the system in equilibriumin equilibrium, where we did different statistical averages of the various physical quantities. Nevertheless, there do occur deviations from, or fluctuationsfluctuations about these mean values. Though they are generally small, a study of these fluctuations is of great physical interest for several reasons.1. It enables us to develop a mathematical scheme with the
help of which the magnitude of the relevant fluctuationsfluctuations,
under a variety of physical situations, can be estimated.
We find that while in a single-phase system the
fluctuations are thermodynamically negligible they can
assume considerable importance in multi-phase systemsmulti-phase systems,
especially in the neighborhood of the critical pointsneighborhood of the critical points. In the
latter case we obtain a rather high degree of spatial spatial
correlationcorrelation among the molecules of the system which in
turn gives rise to phenomena such as critical opalescencecritical opalescence.
2. It provides a natural framework for understanding a class of physical phenomena which come under the common heading of "Brownian motion"; these phenomena relate properties such as the mobility of a fluid system, its coefficient of diffusion, etc., with temperature through the so-called Einstein relations. The mechanism of Brownian motion is vital in formulating, and in a certain sense solving, problems of how "a given physical system, which is not in a state of equilibrium, finally approaches a state of equilibrium", while "a physical system, which is already in a state of equilibrium, persists in that state".

3. The study of fluctuations as a function of time leads to the concept of correlation functions, which play an important role in relating the dissipation properties of a system, such as the viscous resistance of a fluid or the electrical resistance of a conductor, with the microscopic properties of the system in a state of equilibrium. This relationship (between irreversible processes on the one hand and equilibrium properties on the other) manifests itself in the so-called fluctuation-dissipation theorem.
At the same time, a study of the "frequency spectrum" of fluctuations, which is related to the time-dependent correlation function through the fundamental theorem of Wiener and Khintchine, is of considerable value in assessing the "noise" met with in electrical circuits as well as in the transmission of electromagnetic signals.

Fluctuations

The deviation Δx of a quantity x from its average value ⟨x⟩ is defined as

Δx = x − ⟨x⟩   (13.1)

We note that

⟨Δx⟩ = ⟨x⟩ − ⟨x⟩ = 0   (13.2)

We look to the mean square deviation for the first rough measure of the fluctuation:

⟨(Δx)²⟩ = ⟨(x − ⟨x⟩)²⟩ = ⟨x²⟩ − 2⟨x⟩⟨x⟩ + ⟨x⟩² = ⟨x²⟩ − ⟨x⟩²   (13.3)
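The identity (13.3) is easy to confirm numerically. A minimal sketch (the sample data and sample size are arbitrary illustrative choices):

```python
import numpy as np

# Check the identity (13.3): <(Δx)²> = <x²> - <x>², on arbitrary sample data
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)

mean_sq_dev = np.mean((x - x.mean()) ** 2)   # <(Δx)²> computed directly
identity = np.mean(x**2) - x.mean() ** 2     # <x²> - <x>²

print(mean_sq_dev, identity)
```

The two numbers agree to rounding error, since (13.3) is an algebraic identity for any data set.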
Consider the distribution g(x)dx which gives the number of systems in dx at x. One refers to ⟨xⁿ⟩ as the n-th moment of the distribution. In principle the distribution g(x) can be determined from a knowledge of all its moments, but in practice this connection is not always of help. The connection is established by taking the Fourier transform of the distribution:

u(t) = (1/2π) ∫ g(x) e^{ixt} dx   (13.4)

Now it is obvious on differentiating u(t) that

⟨xⁿ⟩ = (−i)ⁿ [dⁿu(t)/dtⁿ]_{t=0}   (13.5)

We usually work with the mean square deviation, although it is sometimes necessary to consider also the mean fourth deviation. This occurs, for example, in considering nuclear resonance line shape in liquids.
Thus if u(t) is an analytic function, the moments give us all the information needed to obtain the Taylor series expansion of u(t); the inverse Fourier transform of u(t) then gives g(x) as required. However, the higher moments are really needed to use this theorem, and they are sometimes hard to calculate. The function u(t) is sometimes called the characteristic function of the distribution.

Energy Fluctuations in a Canonical Ensemble

When a system is in thermal equilibrium with a reservoir, the temperature T of the system is defined to be equal to the temperature of the reservoir, and it has strictly no meaning to ask questions about the temperature fluctuation. The energy of the system will, however, fluctuate as energy is exchanged with the reservoir. For a canonical ensemble we have

⟨E²⟩ = Σₙ Eₙ² e^{βEₙ} / Σₙ e^{βEₙ};   ⟨E⟩ = Σₙ Eₙ e^{βEₙ} / Σₙ e^{βEₙ}   (13.6)

where β = −1/kT. Now
Z = Σₙ e^{βEₙ}   (13.7)

so that

⟨E²⟩ = (1/Z) ∂²Z/∂β²   (13.8)

and

⟨E⟩ = (1/Z) ∂Z/∂β   (13.9)

Further

∂⟨E⟩/∂β = (1/Z) ∂²Z/∂β² − (1/Z²)(∂Z/∂β)²   (13.10)

thus

∂⟨E⟩/∂β = ⟨E²⟩ − ⟨E⟩² = ⟨(ΔE)²⟩   (13.11)

Now the heat capacity at constant values of the external parameters is given by
C_V = ∂⟨E⟩/∂T = (∂⟨E⟩/∂β)(dβ/dT) = (1/kT²) ∂⟨E⟩/∂β   (13.12)

thus

⟨(ΔE)²⟩ = kT² C_V   (13.13)
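Relation (13.13) can be checked directly on any small system. The sketch below assumes a hypothetical two-level system with levels 0 and ε, computes ⟨E²⟩ − ⟨E⟩² from the canonical averages (13.6), and compares it with kT²C_V obtained by numerically differentiating ⟨E⟩; all parameter values are illustrative, in units where k = 1:

```python
import numpy as np

# Hypothetical two-level system with levels 0 and eps; units with k = 1
eps = 1.0
E_levels = np.array([0.0, eps])

def averages(T):
    # Canonical averages <E> and <E²> built as in (13.6)
    w = np.exp(-E_levels / T)
    Z = w.sum()                                  # partition function (13.7)
    return (E_levels * w).sum() / Z, (E_levels**2 * w).sum() / Z

T = 0.7
E_mean, E_sq = averages(T)
var_E = E_sq - E_mean**2                         # <(ΔE)²> by direct computation

dT = 1e-5                                        # C_V = d<E>/dT, numerically
C_V = (averages(T + dT)[0] - averages(T - dT)[0]) / (2 * dT)

print(var_E, T**2 * C_V)                         # the two sides of (13.13)
```

The two sides of (13.13) agree to the accuracy of the numerical derivative.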
Here C_V refers to the heat capacity at the actual volume of the system. The fractional fluctuation in energy is defined by

F ≡ [⟨(ΔE)²⟩]^{1/2} / ⟨E⟩ = (kT² C_V)^{1/2} / ⟨E⟩   (13.14)

We note then that the act of defining the temperature of a system by bringing it into contact with a heat reservoir leads to an uncertainty in the value of the energy. A system in thermal equilibrium with a heat reservoir does not have an energy which is precisely constant. Ordinary thermodynamics is useful only so long as the fractional fluctuation in energy is small.
For a perfect gas, for example, we have

⟨E⟩ ≈ NkT,   C_V ≈ Nk,   thus   F ≈ 1/N^{1/2}   (13.15)

For N = 10²², F ≈ 10⁻¹¹, which is negligibly small.
Consider now a solid at low temperatures. According to the Debye law the heat capacity of a dielectric solid for T ≪ Θ_D is

C_V ≈ Nk (T/Θ_D)³   (13.16)

and also

⟨E⟩ ≈ NkT (T/Θ_D)³   (13.17)

so that

F ≈ (1/N^{1/2}) (Θ_D/T)^{3/2}   (13.18)
Suppose that T = 10⁻² K, Θ_D = 200 K, and N ≈ 10¹⁶ for a particle 0.01 cm on a side. Then

F ≈ 0.03   (13.19)

which is not inappreciable. At very low temperatures thermodynamics fails for a fine particle, in the sense that we cannot know E and T simultaneously to reasonable accuracy. At 10⁻⁵ K the fractional fluctuation in energy is of the order of unity for a dielectric particle of volume 1 cm³.
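The estimate (13.19) follows directly from (13.18); a quick check with the numbers quoted above:

```python
# Fractional energy fluctuation (13.18) with the numbers quoted in the text
N = 1e16            # atoms in a dielectric particle 0.01 cm on a side
Theta_D = 200.0     # Debye temperature, K
T = 1e-2            # temperature, K

F = N**-0.5 * (Theta_D / T)**1.5
print(F)            # ≈ 0.028, i.e. the F ≈ 0.03 of (13.19)
```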
Concentration Fluctuations in a Grand Canonical Ensemble

We have the grand partition function

Z = Σ_{N,i} e^{(Nμ − E_{N,i})/kT}   (13.20)

from which we may calculate

⟨N⟩ = kT ∂(ln Z)/∂μ = (kT/Z) ∂Z/∂μ   (13.21)
and

⟨N²⟩ = Σ_{N,i} N² e^{(Nμ − E_{N,i})/kT} / Σ_{N,i} e^{(Nμ − E_{N,i})/kT} = ((kT)²/Z) ∂²Z/∂μ²   (13.22)

Thus

⟨(ΔN)²⟩ = ⟨N²⟩ − ⟨N⟩² = (kT)² [(1/Z) ∂²Z/∂μ² − (1/Z²)(∂Z/∂μ)²] = kT ∂⟨N⟩/∂μ   (13.23)

Perfect Classical Gas

From an earlier result

⟨N⟩ = e^{μ/kT} V/λ³   (13.24)

thus

∂⟨N⟩/∂μ = ⟨N⟩/kT   (13.25)

and using (13.23)

⟨(ΔN)²⟩ = ⟨N⟩   (13.26)

The fractional fluctuation is given by

F_N = [⟨(ΔN)²⟩]^{1/2} / ⟨N⟩ = ⟨N⟩^{−1/2}   (13.27)
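The result ⟨(ΔN)²⟩ = ⟨N⟩ of (13.26) is the signature of a Poisson distribution, and (13.27) can be illustrated by sampling one; the sample size and mean below are arbitrary illustrative choices:

```python
import numpy as np

# N in a grand canonical ideal gas is Poisson-distributed; sample it and
# check <(ΔN)²> = <N> (13.26) and F_N = <N>^(-1/2) (13.27)
rng = np.random.default_rng(1)
mean_N = 1000.0
N = rng.poisson(mean_N, size=200_000)

print(N.mean(), N.var())     # variance ≈ mean, as in (13.26)
F_N = N.std() / N.mean()
print(F_N, mean_N**-0.5)     # fractional fluctuation ≈ <N>^(-1/2), as in (13.27)
```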
Random Process

A stochastic or random variable is a quantity with a definite range of values, each one of which, depending on chance, can be attained with a definite probability. A stochastic variable is defined
1. if the set of possible values is given, and
2. if the probability of attaining each value is also given.

Thus the number of points on a die that is tossed is a stochastic variable with six values, each having the probability 1/6.
The sum of a large number of independent stochastic variables is itself a stochastic variable. There exists a very important theorem known as the central limit theorem, which says that under very general conditions the distribution of the sum tends toward a normal (Gaussian) distribution law as the number of terms is increased. The theorem may be stated rigorously as follows.

Let x₁, x₂, …, xₙ be independent stochastic variables with their means equal to 0, possessing absolute moments μ_{2+δ}(i) of order 2+δ, where δ is some number > 0. Denote by Bₙ the mean square fluctuation of the sum x₁ + x₂ + … + xₙ. If the quotient

wₙ = [Σᵢ₌₁ⁿ μ_{2+δ}(i)] / Bₙ^{(2+δ)/2}   (13.28)

tends to zero as n → ∞, the probability of the inequality
(x₁ + x₂ + … + xₙ)/Bₙ^{1/2} ≤ t

tends uniformly to the limit

(1/√(2π)) ∫_{−∞}^{t} e^{−u²/2} du   (13.29)

For a distribution f(xᵢ), the absolute moment of order β is defined as

μ_β(i) = ∫ |xᵢ|^β f(xᵢ) dxᵢ   (13.30)

Almost all the probability distributions f(x) of stochastic variables x of interest to us in physical problems will satisfy the requirements of the central limit theorem. Let us consider several examples.
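The tendency toward a Gaussian can be demonstrated numerically. The sketch below sums independent variables distributed uniformly on (−1, 1), as in Example 13a below (where Bₙ = n/3), and checks that the normalized sum behaves like a standard Gaussian; the values of n and the number of trials are arbitrary:

```python
import numpy as np

# Sum n uniform variables on (-1, 1); normalized by B_n^(1/2), the sum
# should behave like a standard Gaussian when n is large
rng = np.random.default_rng(2)
n, trials = 50, 100_000
sums = rng.uniform(-1.0, 1.0, size=(trials, n)).sum(axis=1)
B_n = n / 3.0                       # mean square fluctuation of the sum
s = sums / np.sqrt(B_n)

# Gaussian checks: zero mean, unit variance, ~68.3% of samples within ±1
frac = np.mean(np.abs(s) < 1.0)
print(s.mean(), s.var(), frac)
```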
Example 13a

The variable x is distributed uniformly between ±1. Then f(x) = 1/2 for −1 ≤ x ≤ 1, and f(x) = 0 otherwise. The absolute moment of order 3 exists:

μ₃ = (1/2) ∫₋₁¹ |x|³ dx = 1/4   (13.32)

The mean square fluctuation is

⟨(Δx)²⟩ = ⟨x²⟩ − ⟨x⟩²   (13.33)

but ⟨x⟩ = 0. We have

⟨x²⟩ = 2 · (1/2) ∫₀¹ x² dx = 1/3   (13.34)

If there are n independent variables xᵢ, it is easy to see that the mean square fluctuation Bₙ of their sum (under the same distribution) is

Bₙ = n/3   (13.35)

Thus (for δ = 1) we have for (13.28) the result

wₙ = (n/4)/(n/3)^{3/2} ∝ n^{−1/2}   (13.36)

which does tend to zero as n → ∞. Therefore the central limit theorem holds for this example.
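A quick numerical check that the quotient (13.36) indeed decays like n^{−1/2}:

```python
# w_n = (n/4) / (n/3)**1.5 from (13.36); it decays like n**-0.5
def w(n):
    return (n / 4) / (n / 3) ** 1.5

print(w(10), w(100), w(10_000))
```

Increasing n by a factor of 100 reduces wₙ by a factor of 10, the n^{−1/2} law.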
Example 13b

The variable x is a normal variable with standard deviation σ; that is, it is distributed according to the Gaussian distribution

f(x) = (1/(σ√(2π))) e^{−x²/2σ²}   (13.37)

where σ² is the mean square deviation and σ is called the standard deviation. The absolute moment of order 3 exists:

μ₃ = (2/(σ√(2π))) ∫₀^∞ x³ e^{−x²/2σ²} dx = 4σ³/√(2π)   (13.38)

The mean square fluctuation is

⟨(Δx)²⟩ = ⟨x²⟩ = (2/(σ√(2π))) ∫₀^∞ x² e^{−x²/2σ²} dx = σ²   (13.39)

If there are n independent variables xᵢ, then

Bₙ = nσ²   (13.40)

For δ = 1,

wₙ = 4nσ³ / [√(2π) (nσ²)^{3/2}] ∝ n^{−1/2}   (13.41)

which approaches 0 as n approaches ∞. Therefore the central limit theorem applies to this example. A Gaussian random process is one for which all the basic distribution functions f(xᵢ) are Gaussian distributions.
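The moment (13.38) can be verified by direct sampling; the sample size and σ below are arbitrary:

```python
import numpy as np

# Third absolute moment of a Gaussian: mu_3 = 4 sigma³ / sqrt(2 pi)  (13.38)
rng = np.random.default_rng(3)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)

mu3_sampled = np.mean(np.abs(x) ** 3)
mu3_exact = 4 * sigma**3 / np.sqrt(2 * np.pi)
print(mu3_sampled, mu3_exact)
```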
Example 13c

The variable x has a Lorentzian distribution:

f(x) = (1/π) · 1/(1 + x²)   (13.42)

The absolute moment of order β is proportional to

∫₀^∞ x^β/(1 + x²) dx   (13.43)

But this integral does not converge for β ≥ 1, and thus not for β = 2 + δ, δ > 0. We see that the central limit theorem does not apply to a Lorentzian distribution.
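The failure can be seen by simulation: the mean of n Lorentzian (Cauchy) variables has the same Lorentzian distribution for every n, so averaging never narrows the distribution the way the central limit theorem would require. A sketch (sample sizes are arbitrary):

```python
import numpy as np

# The mean of n Cauchy (Lorentzian) variables is again standard Cauchy:
# P(|mean| < 1) stays at 0.5 no matter how large n is
rng = np.random.default_rng(4)
trials, n = 20_000, 200
means = rng.standard_cauchy(size=(trials, n)).mean(axis=1)

frac = np.mean(np.abs(means) < 1.0)
print(frac)          # stays near 2·arctan(1)/π = 0.5, instead of tending to 1
```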
Random Process or Stochastic Process

By a random process or stochastic process x(t) we mean a process in which the variable x does not depend in a completely definite way on the independent variable t, which may denote the time. In observations on the different systems of a representative ensemble we find different functions x(t). All we can do is to study certain probability distributions; we cannot obtain the functions x(t) themselves for the members of the ensemble. In Figure 13.1 one can see a sketch of a possible x(t) for one system.

Figure 13.1 Sketch of a random process x(t).
The plot might, for example, be an oscillogram of the thermal noise current x(t) ≡ I(t) obtained from the output of a filter when a thermal noise voltage is applied to the input.

We can determine, for example,

p₁(x,t)dx = probability of finding x in the range (x, x+dx) at time t   (13.44)

p₂(x₁,t₁; x₂,t₂)dx₁dx₂ = probability of finding x in (x₁, x₁+dx₁) at time t₁, and in the range (x₂, x₂+dx₂) at time t₂   (13.45)

If we had an actual oscillogram record covering a long period of time, we might construct an ensemble by cutting the record up into strips of equal length T and mounting them one over the other, as in Figure 13.2.
Figure 13.2 Recordings of x(t) versus t for three systems of an ensemble, as simulated by taking three intervals of duration T from a single long recording. Time averages are taken in a horizontal direction in such a display; ensemble averages are taken in a vertical direction.

The probabilities p₁ and p₂ will be found from the ensemble. Proceeding similarly we can form p₃, p₄, …. The whole set of probability distributions pₙ (n = 1, 2, …, ∞) may be necessary to describe the random process completely.
In many important cases p₂ contains all the information we need. When this is true the random process is called a Markoff process. A stationary random process is one for which the joint probability distributions pₙ are invariant under a displacement of the origin of time. We assume in all our further discussion that we are dealing with stationary Markoff processes.

It is useful to introduce the conditional probability P₂(x₁,0 | x₂,t)dx₂ for the probability that, given x₁, one finds x in dx₂ at x₂ a time t later. Then it is obvious that

p₂(x₁,0; x₂,t) = p₁(x₁,0) P₂(x₁,0 | x₂,t)   (13.46)
Wiener-Khintchine Theorem

The Wiener-Khintchine theorem states a relationship between two important characteristics of a random process: the power spectrum of the process and the correlation function of the process. Suppose we develop one of the records in Fig. 13.2 of x(t) for 0 < t < T in a Fourier series:

x(t) = Σₙ₌₁^∞ [aₙ cos 2πfₙt + bₙ sin 2πfₙt]   (13.47)

where fₙ = n/T. We assume that ⟨x(t)⟩ = 0, where the angular brackets ⟨⟩ denote time average; because the average is assumed zero there is no constant term in the Fourier series. The Fourier coefficients are highly variable from one record of duration T to another. For many types of noise the aₙ, bₙ have Gaussian distributions. When this is true the process (13.47) is said to be a Gaussian random process.
Let us now imagine that x(t) is an electric current flowing through unit resistance. The instantaneous power dissipation is x²(t). Each Fourier component will contribute to the total power dissipation. The power in the n-th component is

Pₙ = (aₙ cos 2πfₙt + bₙ sin 2πfₙt)²   (13.48)

We do not consider cross-product terms in the power of the form

(aₙ cos 2πfₙt + bₙ sin 2πfₙt)(aₘ cos 2πfₘt + bₘ sin 2πfₘt)   (13.49)

because for n ≠ m the time average of such terms will be zero. The time average of Pₙ is

⟨Pₙ⟩ = (aₙ² + bₙ²)/2   (13.50)

because

⟨cos² 2πfₙt⟩ = ⟨sin² 2πfₙt⟩ = 1/2;   ⟨cos 2πfₙt sin 2πfₙt⟩ = 0   (13.51)
We now turn to ensemble averages, denoted here by a bar over the quantity. As we mentioned above, every record in Fig. 13.2 runs in time from 0 to T. We will consider that an ensemble average is an average over a large set of independent records. For a random process we will have

(aₙ)‾ = (bₙ)‾ = (aₙbₙ)‾ = 0   (13.52)

(aₙaₘ)‾ = (bₙbₘ)‾ = σₙ² δₙₘ   (13.53)

where for a Gaussian random process σₙ is just the standard deviation, as in Example 13b:

f(x) = (1/(σₙ√(2π))) e^{−x²/2σₙ²}

Thus

[(aₙ cos 2πfₙt + bₙ sin 2πfₙt)²]‾ = σₙ² (cos² 2πfₙt + sin² 2πfₙt) = σₙ²   (13.54)
Thus, from (13.50), the ensemble average of the time-average power dissipation associated with the n-th component of x(t) is

(⟨Pₙ⟩)‾ = σₙ²   (13.55)

Power Spectrum

We define the power spectrum or spectral density G(f) of the random process as the ensemble average of the time average of the power dissipation in unit resistance per unit frequency bandwidth. If Δfₙ is equal to the separation between two adjacent frequencies,

Δfₙ = fₙ₊₁ − fₙ = (n+1)/T − n/T = 1/T   (13.56)

we have

G(fₙ) Δfₙ = (⟨Pₙ⟩)‾ = σₙ²   (13.57)
Now by (13.51), (13.52) and (13.53),

(⟨x²(t)⟩)‾ = Σₙ σₙ²   (13.58)

Using (13.56) and (13.57),

(⟨x²(t)⟩)‾ = Σₙ G(fₙ) Δfₙ → ∫₀^∞ G(f) df   (13.59)

The integral of the power spectrum over all frequencies gives the ensemble average total power.

Correlation Function

Let us consider now the correlation function

C(τ) = ⟨x(t) x(t+τ)⟩   (13.60)

where the average is over the time t. This is the autocorrelation function. Without changing the result we may take an ensemble average of the time average ⟨x(t)x(t+τ)⟩, so that
C(τ) = (⟨x(t) x(t+τ)⟩)‾ = Σₙ (1/2)[(aₙ²)‾ + (bₙ²)‾] cos 2πfₙτ = Σₙ σₙ² cos 2πfₙτ   (13.61)

(the cross terms with n ≠ m vanish on averaging). Using (13.57),

C(τ) = ∫₀^∞ G(f) cos 2πfτ df   (13.62)

Thus the correlation function is the Fourier cosine transform of the power spectrum. Using the inverse Fourier transform we can write

G(f) = 4 ∫₀^∞ C(τ) cos 2πfτ dτ   (13.63)

This, together with (13.62), is the Wiener-Khintchine theorem. It has an obvious physical content. The correlation function tells us essentially how rapidly the random process is changing.
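The transform pair (13.62)-(13.63) can be verified numerically for the exponential correlation function treated in Example 13d below; the grid sizes and the value of τ_c are illustrative choices:

```python
import numpy as np

# Verify (13.63) numerically for C(tau) = exp(-tau/tau_c): the cosine
# transform should equal 4*tau_c / (1 + (2 pi f tau_c)**2)
tau_c = 1e-4
tau = np.linspace(0.0, 30 * tau_c, 200_001)    # long enough for C to decay
dtau = tau[1] - tau[0]
C = np.exp(-tau / tau_c)
w_trap = np.ones_like(tau)                     # trapezoid-rule weights
w_trap[0] = w_trap[-1] = 0.5

def G_numeric(f):
    return 4.0 * np.sum(w_trap * C * np.cos(2 * np.pi * f * tau)) * dtau

def G_exact(f):
    return 4.0 * tau_c / (1.0 + (2 * np.pi * f * tau_c) ** 2)

for f in (0.0, 1e3, 1e4, 1e5):
    print(f, G_numeric(f), G_exact(f))
```

The numerical transform reproduces the closed form across the flat region and the 1/f² tail.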
Example 13d

If

C(τ) = C(0) e^{−τ/τ_c}   (13.64)

we may say that τ_c is a measure of the time over which the system persists without changing its state, as measured by x(t), by more than e⁻¹; τ_c in this case has the meaning of a correlation time. We then expect physically that frequencies much higher than 1/τ_c will not be represented in an important way in the power spectrum. Now if C(τ) is given by (13.64), the Wiener-Khintchine theorem tells us that

G(f) = 4C(0) ∫₀^∞ e^{−τ/τ_c} cos 2πfτ dτ = 4C(0) τ_c / [1 + (2πfτ_c)²]   (13.65)

Thus, as shown in Fig. 13.3, the power spectrum is flat (on a log frequency scale) out to 2πf ≈ 1/τ_c, and then decreases as 1/f² at high frequencies. Note that the noise spectrum for this correlation function is "white" out to a cutoff f_c ≈ 1/2πτ_c.
Figure 13.3 Plot of spectral density versus log₁₀ 2πf for an exponential correlation function with τ_c = 10⁻⁴ s.

The Nyquist Theorem
The Nyquist theorem is of great importance in experimental physics and in electronics. The theorem gives a quantitative expression for the thermal noise generated by a system in thermal equilibrium and is therefore needed in any estimate of the limiting signal-to-noise ratio of an experimental set-up. In its original form the Nyquist theorem states that the mean square voltage across a resistor of resistance R in thermal equilibrium at temperature T is given by

⟨V²⟩ = 4RkT Δf   (13.66)

where Δf is the frequency bandwidth within which the voltage fluctuations are measured; all Fourier components outside the given range are ignored. Recalling the definition of the spectral density G(f), we may write the Nyquist result as

G(f) = 4RkT   (13.67)

This is not strictly the power density, which would be G(f)/R.

Figure 13.4 The noise generator produces a power spectrum G(f) = 4RkT. If the filter passes unit frequency range, the resistance R′ will absorb power 2RkT. R′ is matched to R.

The maximum thermal noise power per unit frequency range delivered by a resistor to a matched load will be G(f)/4R = kT; the factor of 4 enters where it does because the power delivered to the load R′ is

I²R′ = V²R′/(R + R′)²   (13.68)
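A quick numerical illustration of (13.66), for an illustrative 1 MΩ resistor at room temperature measured over a 10 kHz bandwidth:

```python
import math

# Nyquist formula (13.66): V_rms = sqrt(4 R k T Δf)
k = 1.380649e-23    # Boltzmann constant, J/K
R = 1.0e6           # resistance, ohms
T = 300.0           # temperature, K
df = 1.0e4          # bandwidth, Hz

V_rms = math.sqrt(4 * R * k * T * df)
print(V_rms)        # ≈ 1.3e-5 V: about 13 microvolts of thermal noise
```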
which at match (R′ = R) is ⟨V²⟩/4R (Figure 13.4).

We will derive the Nyquist theorem in two ways: first, following the original transmission line derivation, and, second, using a microscopic argument.

Transmission line derivation

Figure 13.5 Transmission line of length l with matched terminations, Z_c = R.

Consider as in Figure 13.5 a lossless transmission line of length l and characteristic impedance Z_c = R, terminated at each end by a resistance R. The line is therefore matched at each end, in the sense that all energy traveling down the line will be absorbed without reflection in the appropriate resistance.
The entire circuit is maintained at temperature T. In analogy to the argument on black-body radiation (Lecture 8), the transmission line has two electromagnetic modes (one propagating in each direction) in the frequency range

δf = c′/l   (13.69)

where c′ is the propagation velocity on the line. Each mode has energy

ħω/(e^{ħω/kT} − 1)   (13.70)

in equilibrium. We are usually concerned here with the classical limit ħω ≪ kT, so that the thermal energy on the line per direction of propagation in the frequency range Δf is

(l/c′) kT Δf   (13.71)

The rate at which energy comes off the line in one direction is
kT Δf   (13.72)

Because the terminal impedance is matched to the line, the power coming off the line at one end is absorbed in the terminal impedance R at that end. The load emits energy at the same rate. The power input to the load is

⟨I²⟩R = kT Δf   (13.73)

But V = I(2R), so that

⟨V²⟩ = 4RkT Δf   (13.74)

which is the Nyquist theorem.
Microscopic Derivation

We consider a resistance R with N electrons per unit volume, length l, cross-sectional area A, and carrier relaxation time τ_c. We treat the electrons as Maxwellian, but it has been shown that the noise voltage is independent of such details, involving only the value of the resistance regardless of the details of the mechanisms contributing to the resistance.

First note that

V = IR = jAR = NeūAR   (13.75)

Here V is the voltage, I the current, j the current density, and ū the average (or drift) velocity component of the electrons down the resistor. Observing that NAl is the total number of electrons in the specimen,

NAl ū = Σᵢ uᵢ   (13.76)

summed over all electrons. Thus

V = (Re/l) Σᵢ uᵢ = Σᵢ Vᵢ   (13.77)

where the uᵢ and Vᵢ = (Re/l)uᵢ are random variables. The spectral density G₁(f) of the voltage Vᵢ due to a single electron has the property that in the range Δf

⟨Vᵢ²⟩ = G₁(f) Δf   (13.78)
We suppose that the correlation function of the single-electron voltage may be written as

C(τ) = ⟨Vᵢ(t) Vᵢ(t+τ)⟩ = ⟨Vᵢ²⟩ e^{−τ/τ_c}   (13.79)

Then, from the Wiener-Khintchine theorem, we have

G₁(f) = 4(Re/l)² ⟨u²⟩ ∫₀^∞ e^{−τ/τ_c} cos 2πfτ dτ = 4(Re/l)² ⟨u²⟩ τ_c / [1 + (2πfτ_c)²]   (13.80)

Usually in metals at room temperature τ_c < 10⁻¹³ s, so from dc through the microwave range 2πfτ_c ≪ 1 and may be neglected. We recall that

(1/2) m⟨u²⟩ = (1/2) kT   (13.81)

(m is the mass of the electron, ⟨u²⟩ the mean square velocity component), so that

⟨u²⟩ = kT/m   (13.82)
Thus in the frequency range Δf

⟨V²⟩ = NAl ⟨Vᵢ²⟩ = NAl G₁(f) Δf = 4NAl (Re/l)² (kT/m) τ_c Δf   (13.83)

or

⟨V²⟩ = 4RkT Δf   (13.84)

Here we have used the relation

σ = Ne²τ_c/m   (13.85)

from the theory of conductivity, and also the elementary relation

R = l/σA   (13.86)

where σ is the electrical conductivity.
The simplest way to establish (13.85) in a plausible way is to solve the drift velocity equation

m(du/dt + u/τ_c) = eE   (13.87)

so that in the steady state (or for 2πfτ_c ≪ 1) we have

ū = eEτ_c/m   (13.88)

giving for the mobility (drift velocity per unit electric field)

μ = ū/E = eτ_c/m   (13.89)

Then we have for the electrical conductivity

σ = j/E = Neū/E = Ne²τ_c/m   (13.90)
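The chain (13.83) → (13.84) via (13.85) and (13.86) can be checked numerically with arbitrary (roughly copper-like) illustrative parameters:

```python
# Check that 4*N*A*l*(R*e/l)**2*(kT/m)*tau_c (13.83) reduces to 4*R*kT (13.84)
# once sigma = N e² tau_c / m (13.85) and R = l/(sigma A) (13.86) are used
N = 8.5e28          # electron density, m^-3 (roughly copper-like)
e = 1.602e-19       # electron charge, C
m = 9.109e-31       # electron mass, kg
tau_c = 2.5e-14     # relaxation time, s
kT = 1.380649e-23 * 300.0
l, A = 1e-2, 1e-6   # length (m) and cross-section (m²) of the resistor

sigma = N * e**2 * tau_c / m          # (13.85)
R = l / (sigma * A)                   # (13.86)

micro = 4 * N * A * l * (R * e / l)**2 * (kT / m) * tau_c   # (13.83), per unit bandwidth
nyquist = 4 * R * kT                                        # (13.84)
print(micro, nyquist)
```

The two expressions agree identically, as the algebra requires: the microscopic parameters cancel out, leaving only R and kT.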