Fractional parts of random variables: limit theorems and infinite divisibility

Wilms, R.J.G.
DOI: 10.6100/IR423641
Published: 01/01/1994
Document version: Publisher's PDF, also known as Version of Record
Citation for published version (APA): Wilms, R. J. G. (1994). Fractional parts of random variables: limit theorems and infinite divisibility. Eindhoven: Technische Universiteit Eindhoven. DOI: 10.6100/IR423641
FRACTIONAL PARTS OF RANDOM VARIABLES

Limit theorems and infinite divisibility
PROEFSCHRIFT
ter verkrijging van de graad van doctor aan de Technische Universiteit Eindhoven, op ger-ag van de Redor Magnificus, prof. dr. J.H. van Lint, vOOr ceo commissie aangewezen door het College van Dekanen in het openbaar te verdedigen
op woensdag 26 oktober 1994 om 16.00 uur
door
Roeland Joannes Gerardus Wilms geborcn tc Maasbracht
Acknowledgment

This dissertation came about during a four-year appointment as a research assistant (assistent in opleiding, a.i.o.) in the Department of Operations Research and Statistics at the Technische Universiteit Eindhoven. The subject of this research was proposed by my supervisor Fred Steutel, whom I wish to thank for his driven and inspiring guidance. Fred was at all times willing to answer my questions, and sometimes the progress of my research was even discussed during our recreational runs along the Karpendonkse plas. I also wish to express my gratitude to my co-supervisor Jan Thiemann for his dedication and perseverance in scrutinizing every detail of this dissertation; this has been of great importance for the completion of this work.

Furthermore, I owe many thanks to Piet Holewijn and Laurens de Haan for their stimulating interest in my research, and for their useful suggestions and comments. I am grateful to Peter Schatte for the time and effort spent on this dissertation. I also enjoyed a pleasant collaboration with Jos Brands, who among other things taught me the fundamentals of asymptotics.

My colleagues provided a friendly working atmosphere and the necessary distraction; in particular my office mate Frank Coolen, who conjured up just the right amount of humor almost every day. During my time as a research assistant, Marjan gave me loving support and good advice; whenever I wanted to talk about the usual a.i.o. frustrations, I always found a willing ear with her.

Finally, I am grateful to my parents for the opportunity and the support to go to university, and for their interest in my work.
Contents

1 Introduction and Preliminaries  1
  1.1 Notations, conventions and preliminaries  4
  1.2 Properties of Fourier-Stieltjes sequences  7
  1.3 Asymptotic uniformity (mod 1)  14
    1.3.1 Uniform distribution (mod 1)  14
    1.3.2 Uniform distribution (mod 1) almost surely  15
    1.3.3 Asymptotic uniformity in distribution (mod 1)  16
  1.4 Euler-MacLaurin sum formula  17
  1.5 Summary  18

2 Sums and Products  19
  2.1 Sums  20
    2.1.1 Sums of iid rv's  20
    2.1.2 Sums of independent rv's  23
    2.1.3 Asymptotic uniformity in distribution (mod 1)  26
  2.2 Products  30

3 Maxima  37
  3.1 Convergence to some limit  38
    3.1.1 Necessary and sufficient conditions  38
    3.1.2 Convergence to possibly degenerate limits  42
    3.1.3 Number of maxima in a discrete sample  47
  3.2 Asymptotic uniformity in distribution (mod 1)  52
  3.3 Maxima do not converge  59
  3.4 Relation with extreme value theory  61
    3.4.1 Preliminaries  61
    3.4.2 Convergence to some limit  65
    3.4.3 Asymptotic uniformity in distribution (mod 1)  69
    3.4.4 Maxima do not converge  73
    3.4.5 Conclusions  77

4 Covariances  79
  4.1 Preliminaries  80
  4.2 Multiples of rv's  89
  4.3 Sums of iid rv's  92
  4.4 Maxima of iid rv's  96

5 Infinite divisibility  115
  5.1 Replication number of a distribution  115
  5.2 Generalization of Goldman's equation  120
  5.3 Infinite divisibility (mod 1)  124
    5.3.1 Preliminaries  125
    5.3.2 Representation theorems  128
    5.3.3 Characterization of infinitely divisible (mod 1) distributions  131
    5.3.4 Attempted construction of infinitely divisible (mod 1) distributions  137
  5.4 Stability (mod 1)  139

References  141
Notation - Index  147
  Abbreviations  147
  Notations  147
  Conventions  148
Author Index  149
Subject Index  151
Samenvatting (Summary in Dutch)  153
Curriculum Vitae  155
Chapter 1
Introduction and Preliminaries
This monograph deals with the behavior of fractional parts of real random variables, and particularly with the convergence of sequences of fractional parts and their behavior with respect to infinite divisibility. The integer part of a real number is the largest integer not exceeding this number; the fractional part of a real number is the difference between this number and its integer part. We give necessary and sufficient conditions for a sequence of fractional parts of random variables to converge in distribution, and in particular to converge to a uniformly distributed limit. In addition, in some cases we derive sufficient conditions for such a sequence to diverge. Fractional parts of random variables have been studied in the context of Operations Research, and play a role in generating pseudorandom numbers and in the analysis of rounding errors.
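For readers who want to experiment, the two operations can be sketched in a few lines of Python (our own illustration, not part of the dissertation); note that the integer part [x] is the floor, so the fractional part {x} always lies in [0, 1), also for negative x.

```python
import math

def integer_part(x):
    # [x]: the largest integer not exceeding x, i.e. the floor of x
    return math.floor(x)

def fractional_part(x):
    # {x} := x - [x], always a value in [0, 1)
    return x - math.floor(x)

# e.g. {2.7} = 0.7, while {-2.7} = 0.3 because [-2.7] = -3
```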
We first give some historical background. The development of the asymptotic uniform distribution modulo 1 (mod 1) started in number theory. Weyl's celebrated paper ([69]) had an essential impact on this field, and became the source of many results in the large domain where probability theory, measure theory and ergodic theory meet with number theory (see Koksma [37]). Kemperman ([36]) and Holewijn ([29]) showed that many of the methods and results of probability theory play an important role in the theory of distributions (mod 1). Loynes ([42]) explicitly formulated two rather different reasons for applying probabilistic ideas to the concept of asymptotic uniform distribution (mod 1), namely:
1. The basic idea of asymptotic distribution is a particular case of a standard probabilistic notion, that of weak convergence of distributions.
2. The property of asymptotic distribution is essentially an expression of some long-term regularity, of just the kind that is of interest in probability theory.
Several concepts of asymptotic uniform distribution (mod 1) have been extensively studied; we mention here the concept of almost surely uniformly distributed (mod 1) sequences of random variables introduced by Holewijn ([30],[31]). He generalizes Weyl's criterion for random sequences.

Another concept, which has been considered in the more recent literature, is asymptotic uniformity in distribution (mod 1). A sequence of random variables is said to be asymptotically uniform in distribution (mod 1) if the fractional parts of these random variables converge in distribution to a uniformly distributed random variable on [0, 1). It turns out that the latter concept is more restrictive than the concept of almost surely uniformly distributed (mod 1) sequences of random variables. It is well known that sums of iid random variables are asymptotically uniform in distribution (mod 1) when the underlying distribution is non-lattice. Schatte has done a good deal of interesting work on distribution (mod 1) and on asymptotic uniformity in distribution (mod 1) (see e.g. [55], [56], [57], [58]). In [57] he deals with fractional parts of sums of iid lattice random variables, showing that under additional conditions these fractional parts converge to a discrete uniform limit. Jagers, solving a problem by Steutel, shows that the fractional parts of maxima of independent and exponentially distributed random variables do not converge. Therefore the question arises whether fractional parts of maxima of iid random variables ever converge in distribution, and, if so, what limit distributions are possible?
The main purpose of the present monograph is to contribute to the development of the theory of distribution (mod 1), and in particular to the theory of asymptotic uniformity in distribution (mod 1). The starting point of our research was to derive conditions under which maxima of iid random variables are asymptotically uniform in distribution (mod 1). We will see that the behavior of the hazard rate of the underlying distribution determines in many cases whether these fractional parts converge to a uniform limit or do not converge. While working on this problem, we became more or less convinced that fractional parts of maxima of iid random variables either converge to a uniformly distributed limit or do not converge. Lossers ([41]) tackles a problem on the number of maxima in a discrete sample, formulated by Råde. Surprisingly, this led to the idea to use the concept of maxima in a discrete sample for proving that fractional parts of maxima of iid random variables may converge to any given limit. Recently, some attention has been paid to this concept, e.g., to study the number of winners in a golf tournament. Naturally, one would think that there is an obvious relationship between the convergence of these fractional parts and classical extreme value theory. However, though this theory is sometimes useful in our context, we will show that the limit behavior of these fractional parts is not directly linked to this theory. Assuming that the underlying distribution of the maxima belongs to one of the three domains of attraction gives only in some particular cases additional information about this limit behavior.
In [55] Schatte is mainly concerned with infinite divisibility in the modulo 1 sense; he introduces the notion of replication numbers to prove a representation theorem for infinitely divisible (mod 1) distributions. Furthermore, he considers triangular arrays converging to infinitely divisible (mod 1) distributions. However, while reading this interesting paper, one would like to know why distributions with finite replication number play such an important role in the study of infinite divisibility (mod 1). Our research leads to a new characterization of infinitely divisible (mod 1) distributions, and therefore sheds some extra light on the work of Schatte ([55]). We also characterize distributions with finite replication number. These results are indirectly connected to generating uniform pseudorandom numbers (see Deng and George [12]). For generating such numbers the goal is to produce random variables that are a good approximation to a sequence of independent random variables from the uniform distribution on [0, 1). It appears, moreover, that distributions with finite replication number are of interest for the solution of a generalization of a (stochastic) equation studied by Goldman ([23]) and Arnold and Meeden ([3]). The equation introduced by Goldman originates from an application of fractional container loads: if the distribution of the fractional backlog after two days is the same as the distribution after one day, what can then be said about the distribution of the fractional backlog?
Recently, there has been interest in the mantissa distribution of sums (see e.g. the survey paper by Schatte [58]), and in the mantissa distribution of products (see Schatte [54]). For a positive real number x, the mantissa to the base 10 is given by m(x) = 10^{{log10 x}}, where {y} denotes the fractional part of y. It has been observed in computing that mantissas are logarithmically distributed. A consequence of this phenomenon is the logarithmic behavior of the first significant digit, which is known as Benford's law. In the study of Benford's law one finds the following somewhat remarkable expression: for a positive real number x, the first significant digit d(x) can be written as d(x) = [m(x)], where [y] denotes the integer part of y. As an extension of the results on sums of iid random variables mentioned in the third paragraph, we briefly consider the limit behavior of fractional parts of sums of independent, not necessarily iid, random variables. In addition, we give rather weak sufficient conditions for products of iid random variables to be asymptotically uniform in distribution (mod 1).
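The mantissa and the first significant digit can be computed directly from these definitions; the following sketch (ours, not the author's) makes the relation d(x) = [m(x)] concrete.

```python
import math

def mantissa(x):
    # m(x) = 10^{{log10 x}} for x > 0; always a value in [1, 10)
    return 10.0 ** (math.log10(x) - math.floor(math.log10(x)))

def first_digit(x):
    # d(x) = [m(x)]: the first significant digit of x > 0
    return math.floor(mantissa(x))

# Benford's law predicts P(d = k) = log10(1 + 1/k), so the digit 1
# occurs with frequency log10(2), roughly 0.301, in many data sets.
```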
Rounding errors play a role in the decomposition of multiples of random variables into a discrete part and a rounding error (see Kolassa and McCullagh [38]). They consider covariances of the discrete part and the rounding error of multiples of random variables. In this monograph we will also study the limit behavior of covariances of the fractional part and the integer part of a random variable, and covariances of a random variable and its fractional part. We consider three types of random variables: multiples of a fixed random variable, sums of iid random variables, and maxima of iid random variables.
For completeness, we mention some references whose subjects are closely related to the problems considered in this monograph: Dvoretzky and Wolfowitz ([14]), Robbins ([53]), Gyires ([25]), Lyons ([44], [45]), and finally, van der Genugten ([20]), who studies a pre-packing problem in detail. Mardia ([46], sect. 4.3.3) treats Poincaré's theorem about a needle rotating at the center of a disc.
Finally, we give a brief outline of the contents of this chapter. In Section 1.1 we give some notations, conventions and preliminaries. In Section 1.2 we review properties of Fourier-Stieltjes sequences; these sequences play an important role in the study of fractional parts of random variables. In Section 1.3 we survey some concepts of asymptotic uniformity (mod 1). In Section 1.4 we state the Euler-MacLaurin sum formula, and in Section 1.5 we summarize the contents of the subsequent chapters.
1.1 Notations, conventions and preliminaries
Let (Ω, F, P) be a probability space. A random variable (rv) is a measurable function X : Ω → ℝ. The distribution of X is the probability measure P_X on the Borel sets B(ℝ) defined by P_X(B) = P(X ∈ B). The (left-continuous) distribution function (df) F_X of a rv X on ℝ is defined by F_X(x) := P(X < x) (x ∈ ℝ). If F_X can be represented as F_X(x) = ∫_{−∞}^{x} f_X(y) dy for some function f_X, then F_X is called absolutely continuous. The function f_X is called a density of X or of F_X. If a rv X has a density f, then f is said to be the density of X. For more information we refer to Billingsley ([8]).
Let F, F_1, F_2, ... be bounded, left-continuous, nondecreasing functions. We say that F_n converges weakly to F (notation F_n →w F) if for every continuity point x of F

  F_n(x) → F(x)   (n → ∞).   (1.1)

In addition, =d denotes equality in distribution, i.e., for any two rv's X and Y, X =d Y iff F_X = F_Y (see Billingsley [8], sect. 25). For a subset A ⊂ ℝ, let F(A) denote the set of df's F_X with P(X ∈ A) = 1. Furthermore, X[0, 1) denotes the set of rv's X with F_X ∈ F([0, 1)). Clearly, for all F ∈ F([0, 1)) we have F(0) = 0 and F(1) = 1. For a sequence (F_n), F_n ∈ F([0, 1)), it is possible that F_n converges weakly as n → ∞ to some F ∈ F([0, 1]) with positive mass at 1. In the following definition, the possible probability mass at 1 of a df F ∈ F([0, 1]) is transferred to 0.
Definition 1.1. Let F ∈ F([0, 1]). The df F̃ ∈ F([0, 1)) is said to be the reduced df of F if

  F̃(x) = 0                  if x ≤ 0,
  F̃(x) = F(x) + 1 − F(1)    if x ∈ (0, 1],
  F̃(x) = 1                  if x > 1.
Clearly, if F ∈ F([0, 1)), then F̃ = F. We now introduce the following concept.
Definition 1.2. Let F, F_1, F_2, ... be df's in F([0, 1)). The df F_n is said to converge reduced weakly to F as n → ∞ (notation F_n →rw F) if there exists a df G ∈ F([0, 1]) with F_n →w G (n → ∞) and G̃ = F.
We note that in Definition 1.2 we interpret convergence of F_n to F as convergence on a circle, i.e., possible probability mass at 1 is transferred to 0. For example, if F_n(x) = 1_{(1−1/n, ∞)}(x), then F_n →rw F with F(x) = 1_{(0, ∞)}(x).
In addition, →rd denotes reduced convergence in distribution, i.e., for any sequence X, X_1, X_2, ... of rv's in X[0, 1), X_n →rd X iff F_{X_n} →rw F_X (n → ∞).
Remark 1.3. Schatte ([55]) uses a slightly different definition of 'weak convergence'; namely, a sequence (F_n) of left-continuous df's converges weakly to F if for any two continuity points x, y of F

  F_n(x) − F_n(y) → F(x) − F(y)   (n → ∞).   (1.2)

If we consider df's F, F_n in F([0, 1)) (n ∈ ℕ), then this definition and Definition 1.2 are equivalent: Let (1.2) hold. Take y = 1; then F_n →w F, and hence F_n →rw F as n → ∞. Conversely, let F_n →rw F (n → ∞). Then there is G ∈ F([0, 1]) with F_n →w G and G̃ = F. For continuity points x, y of G,

  F_n(x) − F_n(y) → G(x) − G(y) = G̃(x) − G̃(y) = F(x) − F(y).

So (1.2) holds.
The characteristic function (chf) φ_X : ℝ → ℂ of a rv X is defined by

  φ_X(t) := E e^{itX} = ∫_{−∞}^{∞} e^{itx} dF_X(x)   (t ∈ ℝ),

and the Laplace-Stieltjes transform ψ_X of a nonnegative rv X by

  ψ_X(τ) := E e^{−τX} = ∫_{[0,∞)} e^{−τx} dF_X(x)   (τ ≥ 0).
For x ∈ ℝ, the fractional part of x is denoted by {x} := x − [x]; [x] denotes the integer part of x, the largest integer not exceeding x. The Fourier-Stieltjes sequence (FSS) c_X : ℤ → ℂ of a rv X is defined by

  c_X(k) := E e^{2πikX} = ∫_{−∞}^{∞} e^{2πikx} dF_X(x)   (k ∈ ℤ).   (1.3)

Sometimes c_X is written as c_{F_X}. For any rv X we have the useful property

  c_{{X}}(k) = c_X(k) = φ_X(2πk)   (k ∈ ℤ).   (1.4)
For a rv X, the df F_{{X}} can be given as follows: for x ∈ [0, 1)

  F_{{X}}(x) = Σ_{m=−∞}^{∞} P(m ≤ X < m + x) = Σ_{m=−∞}^{∞} ( F_X(m + x) − F_X(m) ).

For example, if X is exponentially distributed with E X = 1/λ and λ > 0 (notation X ~ Exp(λ)), then F_{{X}}(x) = (1 − e^{−λx}) / (1 − e^{−λ}) (x ∈ [0, 1)). Furthermore, if the rv X has a density f, then for x ∈ [0, 1)

  F_{{X}}(x) = Σ_{j=−∞}^{∞} ∫_j^{j+x} f(y) dy = Σ_{j=−∞}^{∞} ∫_0^x f(j + y) dy = ∫_0^x Σ_{j=−∞}^{∞} f(j + y) dy,

and hence F_{{X}} has the density h given by

  h(x) := Σ_{j=−∞}^{∞} f(j + x)   (x ∈ [0, 1)).   (1.5)
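The exponential example can be checked numerically (our own sketch, with hypothetical helper names): truncating the series Σ_m (F_X(m + x) − F_X(m)) reproduces the closed form (1 − e^{−λx}) / (1 − e^{−λ}) to machine precision, since the tail of the series is geometrically small.

```python
import math

def frac_df_exponential(x, lam, terms=200):
    # F_{X}(x) via the series sum_m ( F_X(m + x) - F_X(m) ) for X ~ Exp(lam);
    # only m >= 0 contributes, because X >= 0
    F = lambda t: 1.0 - math.exp(-lam * t) if t > 0 else 0.0
    return sum(F(m + x) - F(m) for m in range(terms))

def frac_df_closed_form(x, lam):
    # the closed form (1 - e^{-lam x}) / (1 - e^{-lam}) for x in [0, 1)
    return (1.0 - math.exp(-lam * x)) / (1.0 - math.exp(-lam))
```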
We shall need the following propositions.
Proposition 1.4. [Lukacs [43], thm 3.2.2] Let φ be a chf such that ∫_{−∞}^{∞} |φ(t)| dt < ∞. Then the corresponding df F is absolutely continuous, and its density f is given by

  f(x) = (1/2π) ∫_{−∞}^{∞} e^{−itx} φ(t) dt   (x ∈ ℝ).

The function f is uniformly continuous on ℝ.
Proposition 1.5. [Riemann-Lebesgue lemma] Let f : ℝ → ℝ be measurable with ∫_{−∞}^{∞} |f(x)| dx < ∞. Then

  lim_{|t|→∞} ∫_{−∞}^{∞} e^{itx} f(x) dx = 0.
1.2 Properties of Fourier-Stieltjes sequences
FSS's are a useful tool for studying fractional parts of rv's. In this section we give a survey of properties of FSS's. We have collected the available information from several books such as Elliott [15], Feller [18], Kawata [35], and Mardia [46]. Most proofs of these results are similar to the proofs of the analogous results for chf's (see e.g. Lukacs [43]). Clearly, for each X, c_X(0) = 1, |c_X(k)| ≤ 1, and c_X(−k) is the complex conjugate of c_X(k) (k ∈ ℤ). In addition, we give a characterization of lattice distributions in terms of FSS's.
We now prove the uniqueness theorem for FSS's (see also Kawata [35], thm 3.4.1).
Theorem 1.6. [uniqueness theorem]
Let X, Y ∈ X[0, 1). Then X =d Y iff c_X = c_Y.

Proof: Clearly, if X =d Y, i.e., F_X = F_Y, then c_X = c_Y. To prove the other implication, we give an inversion formula for a df F ∈ F([0, 1)) with FSS c; this gives an explicit formula for F in terms of FSS's. Let x, y ∈ [0, 1) be continuity points of F, and define

  S_K(x, y) := Σ_{k=−K}^{K} (e^{−2πiky} − e^{−2πikx}) / (2πik) · c(k),

where the term for k = 0 is to be interpreted as x − y. Then it follows that, writing σ_k(z) := sin 2πkz (k ∈ ℕ, z ∈ ℝ),

  S_K(x, y) = x − y + Σ_{k=1}^{K} [ (e^{−2πiky} − e^{−2πikx}) / (2πik) ∫_{[0,1)} e^{2πiku} dF(u)
                                  + (e^{2πiky} − e^{2πikx}) / (−2πik) ∫_{[0,1)} e^{−2πiku} dF(u) ]

            = x − y + ∫_{[0,1)} Σ_{k=1}^{K} ( σ_k(u − y) − σ_k(u − x) ) / (πk) dF(u).

The integrand of this integral is bounded uniformly in K, and moreover (cf. Kawata [35], p.120)

  Σ_{k=1}^{∞} sin(2πku) / (πk) = 1/2 − u    if 0 < u < 1,
                               = 0          if u = 0,          (1.6)
                               = −1/2 − u   if −1 < u < 0.

Applying the bounded convergence theorem we get from (1.6), for continuity points y < x (say),

  lim_{K→∞} S_K(x, y) = ∫_{[0,1)} [ (x − y) + g(u − y) − g(u − x) ] dF(u) = ∫_{(y,x)} dF(u) = F(x) − F(y),   (1.7)

where g denotes the function on the right-hand side of (1.6); indeed, the integrand in (1.7) equals 1 on (y, x) and 0 outside [y, x]. Now the desired implication follows from the definition of S_K, which involves F only through c. □
In addition, we give the continuity theorem for FSS's (see Mardia [46], sect. 4.3.1).
Theorem 1.7. [continuity theorem] Let (F_n) be a sequence of df's in F([0, 1)), and let (c_n) be the corresponding sequence of FSS's.

(i) Let F ∈ F([0, 1)) with FSS c. If F_n →rw F (n → ∞), then lim_{n→∞} c_n(k) = c(k) for k ∈ ℤ.

(ii) If lim_{n→∞} c_n(k) =: c(k) exists for k ∈ ℤ, then there is a df F ∈ F([0, 1)) such that F_n →rw F (n → ∞). The sequence c is then the FSS of F.

Proof: (i) By Definition 1.2 there is a df G ∈ F([0, 1]) with F_n →w G and G̃ = F. Then Helly's Second Theorem (see e.g. Lukacs [43], thm 3.5.2) implies that lim_{n→∞} c_n(k) = c_G(k) for all k ∈ ℤ. Since G̃ = F, it follows that c_G = c, and hence lim_{n→∞} c_n(k) = c(k) for all k ∈ ℤ.

(ii) By Helly's First Theorem (see Lukacs [43], thm 3.5.1) there is a subsequence (F_{n_i})_{i=1}^{∞} of (F_n) that converges weakly to some df G ∈ F([0, 1]). Suppose there is another subsequence (F_{m_i})_{i=1}^{∞} of (F_n) that converges weakly to some df H ∈ F([0, 1]). From part (i) we have lim_{i→∞} c_{m_i}(k) = c_H(k) and lim_{i→∞} c_{n_i}(k) = c_G(k) for all k ∈ ℤ. By assumption we know that lim_{n→∞} c_n(k) = c(k) for all k ∈ ℤ, and hence c_G = c_H. By the uniqueness theorem it follows that G̃ = H̃. Take F = G̃; then F_n →rw F (n → ∞). By part (i) it follows that c_n converges to the FSS of F, and Theorem 1.6 implies that the sequence c is the FSS of F. □
Next, we define the convolution of df's G, H ∈ F([0, 1)) in such a way that the result is again a df in F([0, 1)). Let G * H denote the convolution in the customary sense (see e.g. Feller [18], pp. 143-148), i.e.

  (G * H)(x) = ∫_{[0,x)} G(x − y) dH(y) = ∫_{[0,x)} H(x − y) dG(y)   (x ∈ [0, 2)).

The definition of convolution (mod 1) is given by

Definition 1.8. Let G, H ∈ F([0, 1)). The function

  (G ⊗ H)(x) := (G * H)(x) + (G * H)(x + 1) − (G * H)(1)   (x ∈ [0, 1))   (1.8)

is said to be the convolution (mod 1) of G and H.

Obviously, G ⊗ H ∈ F([0, 1)). If G and H have densities g and h, respectively, then G ⊗ H has a density f, and

  f(x) = ∫_0^x g(x − y) h(y) dy + ∫_x^1 g(1 + x − y) h(y) dy   (x ∈ [0, 1))   (1.9)

almost everywhere.
Theorem 1.9. [convolution theorem] Let F, F_1, F_2 ∈ F([0, 1)) with corresponding FSS's c, c_1, c_2. Then F = F_1 ⊗ F_2 iff c(k) = c_1(k) c_2(k) for all k ∈ ℤ.
Proof: Since for any two FSS's c_1 and c_2 (or df's F_1 and F_2) there are independent rv's X_1 and X_2 with FSS's c_1 and c_2 (or df's F_1 and F_2), it suffices to give the following relation:

  c_{X_1+X_2}(k) = E e^{2πik(X_1+X_2)} = c_{X_1}(k) c_{X_2}(k)   (k ∈ ℤ).   (1.10)

□

Corollary 1.10. Let X and Y be independent rv's in X[0, 1). Then

  c_{{X+Y}}(k) = c_X(k) c_Y(k)   (k ∈ ℤ).
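Corollary 1.10 can be verified numerically for discrete rv's on [0, 1) (a sketch of ours; the helper names `fss` and `mod1_sum` are hypothetical, not the author's): the FSS of the mod-1 sum equals the product of the individual FSS's.

```python
import cmath
from itertools import product

def fss(dist, k):
    # FSS c_X(k) = E exp(2*pi*i*k*X) for a discrete distribution {value: prob}
    return sum(p * cmath.exp(2j * cmath.pi * k * x) for x, p in dist.items())

def mod1_sum(dist1, dist2):
    # distribution of the fractional part {X + Y} for independent X and Y
    out = {}
    for (x, p), (y, q) in product(dist1.items(), dist2.items()):
        z = (x + y) % 1.0
        out[z] = out.get(z, 0.0) + p * q
    return out

X = {0.0: 0.5, 0.25: 0.5}
Y = {0.1: 0.3, 0.8: 0.7}
S = mod1_sum(X, Y)
# c_{X+Y}(k) = c_X(k) * c_Y(k), as in Corollary 1.10
```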
We remark that Grenander proves more generally the uniqueness, continuity and convolution properties for Fourier transforms on a commutative locally compact topological group (cf. [21], thm 3.1.1(a),(b),(d), and sect. 3.4); see also Heyer ([28]).
In the following proposition we state some further properties of FSS's. In part (i) we formulate an inversion formula; its proof is the proof of Theorem 1.6 (cf. (1.7); see also Kawata ([35], p.122)). We note that Mardia ([46], sect. 4.2.2) gives an alternative inversion formula. Elliott ([15], lemma 1.16) gives a proof of part (ii). For the proof of part (iii) we refer to Mardia ([46], p.82), and for part (iv) to Kawata ([35], thm 3.3.1); see also Wilms ([71]). For a df F_X ∈ F([0, 1)), if we define the conjugate df F̄ ∈ F([0, 1)) of F_X by F̄ := F_{{−X}}, then the proof of part (v) is very similar to the proof of analogous results for chf's (see Wilms [71]).
Proposition 1.11. Let F ∈ F([0, 1)) with FSS c.

(i) For all points x, y ∈ [0, 1) of continuity of F we have

  F(x) − F(y) = lim_{K→∞} Σ_{k=−K}^{K} (e^{−2πiky} − e^{−2πikx}) / (2πik) · c(k);

here the term (e^{−2πiky} − e^{−2πikx}) / (2πik) is to be interpreted as x − y when k = 0.

(ii) F is continuous iff

  lim inf_{K→∞} (1/(K + 1)) Σ_{k=−K}^{K} |c(k)|² = 0.

(iii) If Σ_{k=−∞}^{∞} |c(k)|² is convergent, then F is absolutely continuous with density

  f(x) = Σ_{k=−∞}^{∞} e^{−2πikx} c(k).

In addition, if Σ_{k=−∞}^{∞} |c(k)| is convergent, then f is uniformly continuous on ℝ.

(iv) The limit p_x = lim_{K→∞} (1/(2K + 1)) Σ_{k=−K}^{K} e^{−2πikx} c(k) exists for all x ∈ [0, 1), and is equal to the jump F(x+) − F(x).

(v) Let (p_j)_{j=1}^{∞} be an enumeration of the jumps of F. Then

  lim_{K→∞} (1/(2K + 1)) Σ_{k=−K}^{K} |c(k)|² = Σ_{j=1}^{∞} p_j².
Proposition 1.12. Let F ∈ F([0, 1)). Then

  c_F(k) = 1 − 2πik ∫_0^1 F(x) e^{2πikx} dx   (k ∈ ℤ).

Proof: We define β_k(x) := exp(2πikx) (k ∈ ℤ, x ∈ ℝ). For k ∈ ℤ

  c_F(k) = ∫_{[0,1)} β_k(x) dF(x) = ∫_{[0,1)} dF(x) − ∫_{[0,1)} (1 − β_k(x)) dF(x)

         = 1 + 2πik ∫_{[0,1)} ∫_0^x β_k(y) dy dF(x) = 1 + 2πik ∫_{[0,1)} β_k(y) ∫_{[y,1)} dF(x) dy

         = 1 + 2πik ∫_{[0,1)} β_k(y) (1 − F(y)) dy = 1 − 2πik ∫_0^1 F(x) β_k(x) dx,

where the last step uses that ∫_0^1 β_k(y) dy = 0 for k ≠ 0, the formula being trivial for k = 0. □
Schatte ([56]) proves the following.

Proposition 1.13. [Schatte [56]] Let F ∈ F([0, 1)) with FSS c.

(i) Let F have a bounded density. If c(k) ≥ 0 for k ∈ ℤ, then Σ_{k=−∞}^{∞} c(k) < ∞.

(ii) If F has a density, then sup_{k ∈ ℤ_0} |c(k)| < 1.

(iii) If F is discrete, then sup_{k ∈ ℤ_0} |c(k)| = 1.
We now characterize lattice distributions in terms of FSS's. To this end we use some properties of chf's.
Definition 1.14. A rv X (or its df F) is said to be lattice if its distribution is concentrated on ξ + λℤ for some ξ ∈ ℝ and λ > 0, where ηA := {ηa : a ∈ A} (A ⊂ ℝ, η ∈ ℝ). In that case we say that X is lattice on ξ + λℤ. If X is nondegenerate and lattice, then the largest λ with the property that there exists ξ ∈ ℝ such that X is concentrated on ξ + λℤ is called the span of X (or of F).
We note that if X is lattice on ξ + λℤ for some ξ ∈ ℝ and λ > 0, then the span of X is in λℕ. The following proposition is adapted from a result by Feller ([18]). Our formulation is slightly more general, since Feller uses arithmetic distributions, which are concentrated on λℤ for some λ > 0.
Proposition 1.15. [Feller [18], lemma 3, p.500] Let X be a rv with chf φ, and let λ > 0 and β ∈ ℝ. Then the following statements are equivalent.

(a) φ(λ) = exp(iβ).
(b) φ(t + λ) = exp(iβ) φ(t)   (t ∈ ℝ).
(c) X is lattice on β/λ + (2π/λ)ℤ.
Lemma 1.16. Let X ∈ X[0, 1) with FSS c, and let r ∈ ℕ and β ∈ ℝ. Then the following statements are equivalent.

(a) c(r) = exp(2πiβ).
(b) c(k + r) = exp(2πiβ) c(k)   (k ∈ ℤ).
(c) X is lattice on β/r + (1/r)ℤ.
Proof: By (1.3) and Proposition 1.15 we find the following equivalent statements:

  c(r) = exp(2πiβ);  E e^{2πir(X − β/r)} = 1;  φ_{X − β/r}(2πr) = 1;  X is lattice on β/r + (1/r)ℤ.

Thus (a) ⇔ (c). Furthermore, from φ_{X − β/r}(2πr) = 1 it follows by Proposition 1.15 that

  φ_{X − β/r}(2πr + 2πk) = φ_{X − β/r}(2πk)   (k ∈ ℤ),

whence c(k + r) = exp(2πiβ) c(k) (k ∈ ℤ). Thus (a) ⇒ (b). Taking k = 0 in (b) we get (a); hence (b) ⇒ (a). □
Corollary 1.17. Let X ∈ X[0, 1) be a rv with FSS c. Let A := {k ∈ ℤ : |c(k)| = 1}. If A ≠ {0} and r := min(A ∩ ℕ), then A = rℤ, and there exists a constant β ∈ [0, 1/r) such that X is lattice on β + (1/r)ℤ.

Proof: Let p ∈ A ∩ ℕ; then c(p) = exp(2πiξ) for some ξ ∈ [0, 1). The equivalence (a) ⇔ (b) of Lemma 1.16 implies that

  c(k + p) = exp(2πiξ) c(k) = c(p) c(k)   (k ∈ ℤ).

Let now p, q ∈ A. Then, for a, b ∈ ℤ, we get |c(ap + bq)| = |c(p)|^a |c(q)|^b = 1; hence ap + bq ∈ A. Since a and b are arbitrary, we also have gcd(p, q) ∈ A. Hence A = rℤ.

Next, since |c(r)| = 1, there is α ∈ [0, 1) such that c(r) = exp(2πiα); then Lemma 1.16 says that X is lattice on β + (1/r)ℤ with β = α/r. □
Remark 1.18. (i) If r = 1 in Corollary 1.17, then X is degenerate, i.e., P(X = β) = 1.

(ii) Suppose that X is nondegenerate. Then in Corollary 1.17 we have r ≥ 2, and the span of X is in (1/r)ℕ. One might expect that the span of X is equal to 1/r. However, this is not true, since we study rv's in the modulo 1 sense. As an example we consider: let X be a rv with p_1 := P(X = 0) > 0, p_2 := P(X = 3/4) > 0 and p_1 + p_2 = 1. Obviously, the span of X is equal to 3/4, whereas c_X(k) = 1 if k ∈ 4ℤ and |c_X(k)| < 1 if k ∉ 4ℤ.
A rv with (continuous) uniform distribution on [0, 1) will be denoted by U; for r ∈ ℕ, U_r denotes a rv with discrete uniform distribution on [0, 1), i.e. P(U_r = j/r) = 1/r for j = 0, ..., r − 1. Finally, we need the following two propositions.
Proposition 1.19. (i) c_U(k) = 0 for all k ∈ ℤ_0.

(ii) c_{U_r}(k) = 1 if k ∈ rℤ, and c_{U_r}(k) = 0 otherwise.
Proposition 1.20. Let p, q, r., bEN, a ;= lcm(p, q) and gcd(p, b) = 1. Then
(i) {Up + Uq} -1::. Ua if Up and Uq are independent.
(ii) {prUqt,} ~ {pUq }.
(iii) {bUv } 4 Up'
14 Clulpt;er 1. [ntroilw:tioll and Frciilninarics
Proof: (i) If k ∈ aZ, then k ∈ pZ and k ∈ qZ; hence c_{{U_p + U_q}}(k) = c_{U_p}(k)c_{U_q}(k) = 1 (cf. (1.10)). If k ∉ aZ, then c_{U_p}(k) = 0 or c_{U_q}(k) = 0; hence we obtain c_{{U_p + U_q}}(k) = c_{U_p}(k)c_{U_q}(k) = 0. So, c_{{U_p + U_q}}(k) = c_{U_a}(k) (k ∈ Z).
(ii) It suffices to show that c_{U_{qr}}(pkr) = c_{U_q}(pk) (k ∈ Z). If pk ∈ qZ, then c_{U_{qr}}(pkr) = 1. If pk ∉ qZ, then pkr ∉ qrZ, i.e. c_{U_{qr}}(pkr) = 0. Hence, we get c_{U_{qr}}(pkr) = c_{U_q}(pk) (k ∈ Z).
(iii) Clearly, we have c_{{bU_p}}(k) = c_{U_p}(bk) (k ∈ Z). If k ∈ pZ, then c_{U_p}(bk) = 1. If k ∉ pZ, then bk ∉ pZ since gcd(p, b) = 1; so c_{U_p}(bk) = 0. Hence, it follows that c_{U_p}(bk) = c_{U_p}(k) (k ∈ Z). □
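Part (i) of Proposition 1.20 lends itself to an exact check: enumerate the joint distribution of the two discrete uniforms with rational arithmetic and verify that {U_p + U_q} is discretely uniform on the lcm(p, q)-lattice. A sketch with the illustrative choice p = 4, q = 6:

```python
from math import lcm
from fractions import Fraction
from collections import Counter

def law_frac_sum(p, q):
    # exact pmf of {U_p + U_q} for independent discrete uniforms on {j/p} and {j/q}
    pmf = Counter()
    for i in range(p):
        for j in range(q):
            pmf[(Fraction(i, p) + Fraction(j, q)) % 1] += Fraction(1, p * q)
    return pmf

p, q = 4, 6
a = lcm(p, q)  # 12
pmf = law_frac_sum(p, q)
# atoms are exactly {j/a : j = 0, ..., a-1}, each with mass 1/a
print(sorted(pmf) == [Fraction(j, a) for j in range(a)],
      all(m == Fraction(1, a) for m in pmf.values()))  # True True
```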
1.3 Asymptotic uniformity (mod 1)
Here we give a brief overview of some concepts concerning asymptotic uniformity (mod 1). The contents of this section consist of the following. In Section 1.3.1, we introduce the notion of uniformly distributed (mod 1) sequences of real numbers, and in Section 1.3.2 that of almost surely uniformly distributed (mod 1) sequences of rv's. Finally, in Section 1.3.3, we define the concept of asymptotic uniformity in distribution (mod 1). We further show that this last concept is more restrictive than the concept of almost surely uniformly distributed (mod 1) sequences.
1.3.1 Uniform distribution (mod 1)
The notion of asymptotically uniformly distributed (mod 1) sequences originated in number theory. The basic definition of uniformly distributed (mod 1) sequences is as follows: A sequence (x_n) of real numbers is said to be uniformly distributed (mod 1) if for every pair (a, b) ∈ ℝ² with 0 ≤ a < b ≤ 1

    lim_{N→∞} (1/N) #{1 ≤ n ≤ N : {x_n} ∈ [a, b)} = b − a.
One of the most interesting criteria for a sequence to be uniformly distributed (mod 1) is due to Weyl. He shows that the sequence (x_n) is uniformly distributed (mod 1) iff

    lim_{N→∞} (1/N) Σ_{n=1}^N exp(2πikx_n) = 0 for all k ∈ Z₀   (1.11)

(cf. Kuipers and Niederreiter [39], thm 2.1).
Examples 1.21. (a) The sequence (nθ) is uniformly distributed (mod 1) if θ ∈ ℝ \ ℚ (cf. [39], example 2.1). (b) The sequence (log n) is not uniformly distributed (mod 1) (cf. [39], example 2.4).
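Weyl's criterion (1.11) can be observed numerically. The sketch below (with the illustrative choices θ = (√5 − 1)/2 and k = 1) computes the averaged exponential sums for (nθ) and for (log n); the first tends to 0, the second stays bounded away from 0:

```python
import cmath, math

def weyl_sum(xs, k):
    # (1/N) * sum_n exp(2*pi*i*k*x_n), the averaged exponential sum in (1.11)
    return sum(cmath.exp(2j * math.pi * k * x) for x in xs) / len(xs)

golden = (math.sqrt(5) - 1) / 2          # an irrational theta for Example 1.21(a)
N = 10000
s_rot = abs(weyl_sum([n * golden for n in range(1, N + 1)], 1))
s_log = abs(weyl_sum([math.log(n) for n in range(1, N + 1)], 1))
print(s_rot, s_log)  # s_rot is tiny, s_log is not
```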
1.3.2 Uniform distribution (mod 1) almost surely
Holewijn ([30],[31]) is concerned with almost surely uniformly distributed (mod 1) sequences of rv's. In [31], he assumes that the sequence (X_n) has stationary increments, i.e., the law of the increments X_{n+m} − X_n depends only on m, but not on n (m ∈ N). He gives sufficient conditions for (X_n) to be uniformly distributed (mod 1) almost surely. In [30], he considers a sequence of real numbers as a sequence of degenerate rv's, and introduces the following concept.
Definition 1.22. A sequence (X_n) of rv's is said to be uniformly distributed (mod 1) a.s. if P({ω ∈ Ω : (X_n(ω)) is uniformly distributed (mod 1)}) = 1.
Holewijn ([30]) generalizes Weyl's criterion (cf. (1.11)) to the case of a sequence (X_n) of independent rv's to be uniformly distributed (mod 1) almost surely.
Proposition 1.23. [Holewijn [30]] Let (X_n) be a sequence of independent rv's with corresponding sequence (φ_n) of chf's. Then (X_n) is uniformly distributed (mod 1) a.s. iff

    lim_{N→∞} (1/N) Σ_{n=1}^N φ_n(2πk) = 0 for all k ∈ Z₀.   (1.12)

Note that (1.12) is equivalent to lim_{N→∞} E((1/N) Σ_{n=1}^N e^{2πikX_n}) = 0 for all k ∈ Z₀.
Examples 1.24. (a) Let (X_n) be a sequence of iid rv's with chf φ, and let S_n := X_1 + … + X_n (n ∈ N). Then (S_n) is uniformly distributed (mod 1) a.s. iff φ(2πk) ≠ 1 (k ∈ Z₀), since (1/N) Σ_{n=1}^N φ_{S_n}(2πk) = (1/N) Σ_{n=1}^N (φ(2πk))^n → 0 (N → ∞) iff φ(2πk) ≠ 1. This condition is equivalent with the fact that the rv X_1 is not lattice on a set of the form {j/r : j = 0, …, r − 1} for any r ∈ N (cf. Corollary 1.17(ii)).
(b) Let m ∈ N, θ_j ∈ ℝ \ ℚ for j = 1, …, m. Let (X_n) be a sequence of rv's such that P(X_n = nθ_j) = p_j with Σ_{j=1}^m p_j = 1, 0 < p_j < 1. Then φ_{X_n}(2πk) = Σ_{j=1}^m p_j e^{2πiknθ_j}. By Example 1.21(a), we get

    (1/N) Σ_{n=1}^N φ_{X_n}(2πk) = Σ_{j=1}^m p_j (1/N) Σ_{n=1}^N e^{2πiknθ_j} → 0   (N → ∞).

By Proposition 1.23, (X_n) is uniformly distributed (mod 1) a.s.
We remark that Schatte gives necessary and sufficient conditions for a product of probability measures on [0, 1) to be in a sense uniformly distributed (mod 1). He also shows, as a corollary, the following result: for F_1, F_2, … in F([0, 1)), lim_{N→∞} (1/N) Σ_{n=1}^N F_n(x) = x iff (1.12) holds. This is also a consequence of the continuity theorem.
1.3.3 Asymptotic uniformity in distribution (mod 1)
We define asymptotic uniformity in distribution (mod 1) as follows.

Definition 1.25. A sequence (X_n) of rv's is said to be asymptotically uniform in distribution (mod 1) if

    {X_n} →d U   (n → ∞).   (1.13)
By Propositions 1.19(i) and 1.7(ii) we know that condition (1.13) is equivalent to

    lim_{n→∞} c_{{X_n}}(k) = 0   (k ∈ Z₀).   (1.14)

Clearly, c_{{X_n}}(k) → 0 (n → ∞) implies (1/N) Σ_{n=1}^N c_{{X_n}}(k) → 0 for all k ∈ Z₀. So, condition (1.14) is more restrictive than (1.12). If we take (X_n) as in Example 1.24(b), then φ_{X_n}(2πk) = Σ_{j=1}^m p_j e^{2πiknθ_j}; then c_{{X_n}}(k) does not converge to 0 as n → ∞, whereas, for all k ∈ Z₀, (1/N) Σ_{n=1}^N φ_{X_n}(2πk) → 0 as N → ∞. Hence, (X_n) is uniformly distributed (mod 1) a.s., while (X_n) is not asymptotically uniform in distribution (mod 1).
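The contrast between (1.12) and (1.14) is easy to see numerically in the simplest instance of Example 1.24(b), m = 1 and p_1 = 1, so that X_n = nθ is degenerate: each |c_{{X_n}}(k)| equals 1, yet the Cesàro averages vanish. A sketch, assuming θ = √2 and k = 1:

```python
import cmath, math

theta, k = math.sqrt(2), 1  # X_n = n*theta, a degenerate rv for each n

def c_n(n):
    # FSS of {X_n}: c_{X_n}(k) = exp(2*pi*i*k*n*theta), so |c_{X_n}(k)| = 1
    return cmath.exp(2j * math.pi * k * n * theta)

single = abs(c_n(10**6))  # modulus 1: condition (1.14) fails
N = 20000
avg = abs(sum(c_n(n) for n in range(1, N + 1)) / N)  # but the average in (1.12) is tiny
print(single, avg)
```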
1.4 Euler-MacLaurin sum formula
Let m ∈ N; for a subset A ⊆ ℝ we define

    C^m(A) := {f : A → ℝ | the first m derivatives of f exist and are continuous}.
In the following proposition we give the Euler-MacLaurin sum formula; for more details we refer to Whittaker and Watson ([70], sect. 7.21).
Proposition 1.26. (the Euler-MacLaurin sum formula) Let m, a, b ∈ N, a < b, and let g ∈ C^{2m}([a, b]). Then

    Σ_{k=a}^b g(k) = ∫_a^b g(x)dx + ½g(a) + ½g(b) + Σ_{ν=1}^m (B_{2ν}/(2ν)!)(g^{(2ν−1)}(b) − g^{(2ν−1)}(a))
                     − (1/(2m)!) ∫_a^b B_{2m}({x}) g^{(2m)}(x)dx,

where B_n(t) (t ∈ ℝ) are the Bernoulli polynomials, and B_n := B_n(0) for n ∈ N ∪ {0}.
By the fact that |B_n(x)| ≤ n! for all n ∈ N, x ∈ ℝ (see Abramowitz and Stegun [2], p.805), we have
Corollary 1.27. Let m ∈ N.
(i) Let a ∈ N, and g ∈ C^{2m}([a, ∞)). If Σ_{k=a}^∞ g(k) and ∫_a^∞ g(x)dx both converge, and if g(x) → 0 and g^{(2ν−1)}(x) → 0 for ν = 1, …, m as x → ∞, then

    |Σ_{k=a}^∞ g(k) − ∫_a^∞ g(x)dx| ≤ ½|g(a)| + Σ_{ν=1}^m |g^{(2ν−1)}(a)| + ∫_a^∞ |g^{(2m)}(x)|dx.
(ii) Let g ∈ C^{2m}(ℝ). If Σ_{k=−∞}^∞ g(k) and ∫_{−∞}^∞ g(x)dx both converge, and if g(x) → 0 and g^{(2ν−1)}(x) → 0 for ν = 1, …, m as |x| → ∞, then

    |Σ_{k=−∞}^∞ g(k) − ∫_{−∞}^∞ g(x)dx| ≤ ∫_{−∞}^∞ |g^{(2m)}(x)|dx.
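Part (ii) with m = 1 gives the bound |Σ g(k) − ∫ g| ≤ ∫|g''|, which can be tested on g(x) = exp(−x²): the integral is √π, and the true gap is of order e^{−π²}, far below this crude bound. A sketch with a simple Riemann sum for the right-hand side:

```python
import math

g  = lambda x: math.exp(-x * x)
g2 = lambda x: (4 * x * x - 2) * math.exp(-x * x)  # second derivative of g

total    = sum(g(k) for k in range(-40, 41))       # lattice sum (tails negligible)
integral = math.sqrt(math.pi)                      # exact integral of g over R
h = 1e-3                                           # crude Riemann sum for int |g''|
bound = h * sum(abs(g2(-10 + i * h)) for i in range(20000))
print(abs(total - integral), bound)                # gap is orders below the bound
```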
1.5 Summary
The greater part of this monograph, namely Chapters 2 through 4, is devoted to the limit behavior of sequences of fractional parts of random variables. Chapter 2 deals mainly with the convergence in distribution of fractional parts of sums of independent, not necessarily iid, random variables. After a survey of some results by Schatte ([56], [57]), we extend his results on sums of iid lattice random variables. In this chapter we also derive conditions under which fractional parts of products of iid random variables are asymptotically uniform in distribution (mod 1).
Chapter 3 treats the convergence of fractional parts of maxima of iid random variables. As far as we know this topic has not been treated before. We first formulate rather weak sufficient conditions on the underlying hazard rate for these fractional parts to be asymptotically uniform in distribution (mod 1). To give sufficient conditions for these fractional parts to converge to any given limit, we prove asymptotic properties of the number of maxima in a discrete sample. Though this goes somewhat against intuition, we will show that the behavior of fractional parts of maxima of iid random variables is not directly connected with classical extreme value theory.
Chapter 4 is concerned with the limit behavior of covariances of the fractional part and the integer part of a random variable, and covariances of a random variable and its fractional part. We consider the following three cases: sums of iid random variables, maxima of iid random variables, and multiples of a fixed random variable. Under certain conditions on the underlying distribution we show that the fractional part and the integer part of a random variable are asymptotically negatively correlated, and that the random variable itself and its fractional part are asymptotically uncorrelated.
Finally, Chapter 5 is devoted to the behavior with respect to infinite divisibility (mod 1). We characterize distributions with finite replication number to prove a new characterization for infinitely divisible distributions (mod 1) and some new properties of these distributions. This gives further insight in the paper by Schatte ([55]). Finally, we briefly consider a generalization of a (stochastic) equation and stable distributions (mod 1).
Chapter 2
Sums and Products
Let S_n = Σ_{m=1}^n X_m (n ∈ N), where (X_m) is a sequence of independent, not necessarily iid, rv's. A major theorem in probability theory says that if S_n converges in distribution as n → ∞, then S_n converges almost surely (see e.g. Tucker [67], sect. 5.2). An analogous result for {S_n} is in general not true: for example, if S_n = 1 − 1/n with probability 1 for all n ∈ N, then {S_n} →d 0 (n → ∞), where "→d" denotes convergence in distribution (mod 1), while {S_n} converges almost surely to 1 as n → ∞.
In this chapter we consider fractional parts of sums of independent, not necessarily iid, rv's. We give necessary and sufficient conditions for these fractional parts to converge in distribution, and in particular, to a uniformly distributed rv. We also consider fractional parts of sums of iid rv's. A lot is known about this (see e.g. Schatte [56],[57]), and for completeness we review some results concerning convergence in distribution and rates of convergence. When considering sums of iid rv's it is logical to study fractional parts of products of iid rv's as well. It seems that these fractional parts have not yet been studied in the literature. We will derive rather weak sufficient conditions for these fractional parts to converge in distribution to a uniform rv.
The contents of this chapter is as follows. In Section 2.1.1, we briefly consider fractional parts of sums of iid rv's. In Sections 2.1.2 and 2.1.3 we study {Σ_{m=1}^n X_m}, where (X_m) is a sequence of independent rv's. In Section 2.1.2, under the assumption that Σ Var X_m is convergent, we show that the existence of lim_{n→∞} {Σ_{m=1}^n E X_m} is necessary and sufficient for the convergence in distribution of {Σ_{m=1}^n X_m} as n → ∞. In Section 2.1.3, we formulate necessary and sufficient conditions, in terms of the sequence of FSS's of (X_m), for {Σ_{m=1}^n X_m} →d U as n → ∞. Finally, in Section 2.2, for a sequence (X_m) of iid rv's with density f on (1, ∞) we show that, under the assumption that ∫_1^∞ x|f'(x)|dx is finite, {Π_{m=1}^n X_m} converges in distribution to U as n → ∞.
2.1 SUlllS
2.1.1 Sums of iid rv's
In this section we give sufficient conditions for the fractional parts of sums of iid rv's to converge in distribution either to U or to U_r. We shall distinguish two cases: the underlying distribution is lattice or non-lattice. Both cases have been studied before (see e.g. [46] and Schatte [56],[57]); for a generalization to probability measures on compact groups see e.g. Kawada and Itô ([34]) and Bhattacharya ([7]). The following result is well known.
Theorem 2.1. Let (X_m) be a sequence of iid nondegenerate rv's, and let S_n := Σ_{m=1}^n X_m (n ∈ N). Suppose that there are no constants r ∈ N and ξ ∈ [0, 1/r) such that {X_1} has its distribution concentrated on the set {ξ + j/r : j = 0, …, r − 1}. Then {S_n} →d U (n → ∞).
Proof: Let c be the FSS of {X_1}. By assumption |c(k)| < 1 for k ∈ Z₀ (cf. Corollary 1.17(ii)). Hence c_{{S_n}}(k) = (c(k))^n → 0 (n → ∞) for all k ∈ Z₀. So, by the continuity theorem for FSS's (cf. Proposition 1.7(ii)), this is equivalent to {S_n} →d U (n → ∞). □
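For a concrete instance of Theorem 2.1, take X_1 ~ Exp(1): then c(k) = φ(2πk) with φ(t) = 1/(1 − it), so |c_{{S_n}}(k)| = (1 + 4π²k²)^{−n/2} decays geometrically, forcing {S_n} →d U. A small numerical sketch:

```python
import math

def fss_mod1(k, n):
    # |c_{S_n}(k)| = |phi(2*pi*k)|**n with phi(t) = 1/(1 - i*t) for Exp(1),
    # hence |phi(2*pi*k)| = (1 + 4*pi**2*k**2)**(-1/2)
    return (1.0 + (2.0 * math.pi * k) ** 2) ** (-n / 2.0)

for n in (1, 2, 5):
    # the largest nonzero Fourier coefficient already shrinks geometrically in n
    print(n, max(fss_mod1(k, n) for k in range(1, 100)))
```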
Schatte ([56]) gives rates for the convergence of F_{{S_n}} and its density f_{{S_n}} to F_U and f_U, respectively. These results are given in the two following propositions.
Proposition 2.2. Let (X_m) be a sequence of iid rv's, and let S_n := Σ_{m=1}^n X_m (n ∈ N). Let c denote the FSS of {X_1}. Then the following statements are equivalent.
(a) sup_{k∈Z₀} |c(k)| < 1.
(b) sup_{0≤x<1} |F_{{S_n}}(x) − x| = O(w^n) (n → ∞) for some 0 < w < 1.
Proof: (a) ⇒ (b): We use the inequality of Fainleib ([16]) (see e.g. Elliott [15]): Let F, G ∈ F([0, 1)) with FSS c and d, respectively. Then there is a positive constant M, independent of F, such that for all q ∈ N

    sup_{0≤x<1} |F(x) − G(x)| ≤ M ( Σ_{k=1}^q |c(k) − d(k)|/k + Q_G(1/q) ),

where Q_G(y) := sup_{0≤x≤1−y} (G(x + y) − G(x)) (0 ≤ y ≤ 1). Define ŵ := sup_{k∈Z₀} |c(k)| < 1, and let δ > 1 be such that w := δŵ < 1. Taking G(x) = x (and hence d(k) = 0 (k ∈ Z₀)) and q := [ŵ^{−n}] + 1, we get for sufficiently large n

    sup_{0≤x<1} |F_{{S_n}}(x) − x| ≤ M ( 1/q + ŵ^n (1 + log q) ) = O(w^n).

So (b) is proved.
(b) ⇒ (a): Let k ∈ Z₀ be fixed. From Proposition 1.12 it follows that

    (c(k))^n = 2πik ∫_0^1 exp(2πikx)(x − F_{{S_n}}(x))dx.

Hence, |c(k)|^n ≤ 2π|k| sup_{0≤x<1} |F_{{S_n}}(x) − x| = O(w^n) (n → ∞). This implies that |c(k)| ≤ w < 1. So (a) is proved. □
Proposition 2.3. Let (X_m) be a sequence of iid rv's with density f. Let S_n := Σ_{m=1}^n X_m, and denote the density of {S_n} by f_n (n ∈ N). Then the following statements are equivalent.
(a) For some n, f_n is bounded.
(b) sup_{0≤x<1} |f_n(x) − 1| = O(w^n) (n → ∞) for some 0 < w < 1.
Proof: Clearly, (b) ⇒ (a). Let (a) hold. Then f_n(x) ≤ M for some M > 0. Let Z = Y − Y', where Y and Y' are independent and distributed as {S_n}. Then by (1.9) it follows that {Z} has a bounded density. Let c be the FSS of {X_1}; we have

    c_{{Z}}(k) = |c(k)|^{2n}   (k ∈ Z).

By Proposition 1.13(i), Σ_{k=−∞}^∞ |c(k)|^{2n} < ∞. Since X_1 has a density, we know that w := sup_{k∈Z₀} |c(k)| < 1 (cf. Proposition 1.13(ii)). By Proposition 1.11(iii) we get for p ≥ 2n

    |f_p(x) − 1| = |Σ_{k∈Z₀} e^{−2πikx} c^p(k)| ≤ sup_{k∈Z₀} |c(k)|^{p−2n} Σ_{k∈Z₀} |c(k)|^{2n} = O(w^p)   (p → ∞).

So (a) ⇒ (b). □
We now turn to lattice distributions. Let S_n := Σ_{m=1}^n X_m (n ∈ N), where X_1, X_2, … are independent copies of X. In Theorem 2.1 we showed that under the assumption that {X} is not concentrated on {ξ + j/r : j = 0, …, r − 1} for any r ∈ N and ξ ∈ [0, 1/r), {S_n} converges in distribution to U as n → ∞.
We now consider sums of iid lattice rv's, and without loss of generality we assume that X ∈ X[0, 1) has its distribution concentrated on

    S(ξ, λ) := { {ξ + jλ} : j ∈ N ∪ {0} },

where 0 ≤ ξ < 1 and 0 < λ < 1 are constants. We consider four cases:
1. X = 0 with probability 1;
2. X = ξ with probability 1, and ξ > 0;
3. ξ or λ is irrational;
4. ξ, λ ∈ ℚ, and X is nondegenerate.
In case 1 we see that {S_n} = 0 (n ∈ N), i.e. {S_n} → 0 (n → ∞) almost surely. In the second and third case it is easy to verify that {S_n} does not converge in distribution as n → ∞. With respect to case 4 it is easy to see that the set S(ξ, λ) can be written as {j/r : j ∈ N ∪ {0}} for some r ∈ N. The following theorem is a slight improvement of theorem 1.2 in Schatte ([57]). Considering in parts (ii) and (iii) of the following theorem the smallest j for which P(X = j/r) is positive leads to a more intuitive result.
Theorem 2.4. Let (X_m) be a sequence of independent rv's distributed as X ∈ X[0, 1), and suppose that X is lattice on (1/r)Z for some r ∈ N. Let S_n := Σ_{m=1}^n X_m (n ∈ N).
(i) Suppose that X has span b/r such that gcd(b, r) = 1. Then

    {S_n} →d U_r   (n → ∞),   (2.1)

and for some 0 < w < 1

    sup_{0≤x<1} |F_{{S_n}}(x) − F_{U_r}(x)| = O(w^n)   (n → ∞).   (2.2)

(ii) Suppose X has span b/r such that gcd(b, r) =: q > 1, and let p := r/q. Let t := min{j ∈ N ∪ {0} : P(X = j/r) > 0}. If t/q ∉ N ∪ {0}, then {S_n} does not converge in distribution as n → ∞.
(iii) Under the assumptions of part (ii): If t/q ∈ N ∪ {0}, then X is lattice on (1/p)Z with span (b/q)/p, and {S_n} →d U_p (n → ∞).
Proof: (i) Let c denote the FSS of {X}. Lemma 1.16 implies that |c(r)| = 1. Since gcd(b, r) = 1, we have r = min{k ∈ N : |c(k)| = 1}. From Corollary 1.17(ii) it follows that |c(k)| < 1 if k ∉ rZ, and c(k) = 1 if k ∈ rZ. By the fact that c_{U_r}(k) = 0 if k ∉ rZ, and c_{U_r}(k) = 1 if k ∈ rZ (cf. Proposition 1.19(ii)), we then obtain for k ∈ Z

    c_{{S_n}}(k) → c_{U_r}(k)   (n → ∞),

which proves relation (2.1). For the proof of (2.2) we refer to Schatte ([57], thm 1.2).
(ii) Let Y := X − t/r; then Y is lattice on (1/r)Z with span b/r. We have

    c_{{S_n}}(k) = (c_Y(k) exp(2πikt/r))^n = (c_Y(k))^n exp(2πiknt/r).   (2.3)

As in the proof of part (i) we have c_Y(k) = 1 if k ∈ pZ. Then, from expression (2.3) it follows that c_{{S_n}}(p) = exp(2πint/q), which does not converge as n → ∞ since t/q ∉ N ∪ {0}.
(iii) Clearly, X is concentrated on {(t/q + j·b/q)/p : j ∈ N ∪ {0}}. Hence X is lattice on (1/p)Z with span (b/q)/p, and gcd(b/q, p) = 1. Evidently, part (i) is applicable. So, {S_n} →d U_p (n → ∞). □
Examples 2.5.
(a) Let (X_m) (X_m ∈ X[0, 1)) be a sequence of iid rv's, and let S_n := Σ_{m=1}^n X_m (n ∈ N). Suppose that X_1 is such that p_j := P(X_1 = j/4) > 0 (j = 1, 2, 3), p_1 + p_2 + p_3 = 1; clearly, X_1 has span 1/4. Theorem 2.4(i) yields {S_n} →d U_4 as n → ∞.
(b) Let (X_m) and S_n be defined as in part (a), and suppose that p_2 = 0. Then {S_n} has its distribution concentrated on 0 and 1/2 if n is even, and on 1/4 and 3/4 if n is odd. So {S_n} does not converge in distribution as n → ∞. This also follows by applying part (ii) of Theorem 2.4.
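Both examples can be verified by exact convolution of pmfs on the lattice {j/4} reduced mod 1. A sketch with the hypothetical masses (0.2, 0.3, 0.5) for (a) and (0.4, 0, 0.6) for (b):

```python
def convolve_mod(u, v, r):
    # convolution of two pmfs on {0, 1/r, ..., (r-1)/r}, reduced mod 1
    return [sum(u[i] * v[(j - i) % r] for i in range(r)) for j in range(r)]

r = 4
x = [0.0, 0.2, 0.3, 0.5]   # Example (a): mass on 1/4, 2/4, 3/4
s = x[:]
for _ in range(40):        # law of S_41
    s = convolve_mod(s, x, r)
print([round(v, 6) for v in s])  # all four atoms close to 1/4

y = [0.0, 0.4, 0.0, 0.6]   # Example (b): p2 = 0
t = y[:]
for _ in range(3):         # law of S_4 (n even)
    t = convolve_mod(t, y, r)
print(t)                   # mass only on the atoms 0 and 1/2
```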
2.1.2 Sums of independent rv's
In this section we give necessary and sufficient conditions for the fractional parts of sums of independent, not necessarily iid, rv's to converge in distribution. Particularly, the uniform limit will be considered in Section 2.1.3.
Since sums of independent rv's give rise to products of FSS's, we first recall that, for z_m ∈ ℂ, Π_{m=1}^∞ z_m is called convergent if the sequence Π_{m=1}^n z_m converges to a finite, non-zero limit as n → ∞; otherwise, Π_{m=1}^∞ z_m is called divergent.
Proposition 2.6. [Titchmarsh, sect. 1.4] Let (a_n) be a sequence of real numbers with a_n > 0 for all n ∈ N. Then Π a_n is convergent iff Σ(1 − a_n) is convergent. In addition, if a_n ≤ 1, then Π a_n is divergent iff Π_{m=1}^n a_m → 0 (n → ∞).
Wermuth ([68]) proposes to name the following convergence test for complex products after Coriolis (see also Titchmarsh [66], p.17). Coriolis' conditions are not necessary for convergence; in fact, as pointed out by Wermuth, it seems impossible to find simple necessary and sufficient conditions for the convergence of the product Π z_n in terms of series like Σ(z_n − 1).
Proposition 2.7. Let (z_n) (z_n ≠ 0) be a sequence of complex numbers such that Σ(z_n − 1) and Σ|z_n − 1|² are convergent. Then Π z_n is convergent.
We now return to the fractional parts of sums of independent rv's.
Theorem 2.8. Let (X_m) be a sequence of independent rv's, and let S_n := Σ_{m=1}^n X_m (n ∈ N). Let S ∈ X[0, 1). Then {S_n} →d S as n → ∞ iff Π_{m=1}^n c_{{X_m}}(k) → c_S(k) as n → ∞ for all k ∈ Z.
Proof: Clearly, we have by (1.10)

    c_{{S_n}}(k) = Π_{m=1}^n c_{{X_m}}(k)   (k ∈ Z).

Then the proof follows immediately from the continuity theorem for FSS's (cf. Proposition 1.7). □
Lemma 2.9. Let (X_m) be a sequence of independent rv's with corresponding sequence (c_m) of FSS's, and let S_n := Σ_{m=1}^n X_m (n ∈ N). Define

    K := {k ∈ Z₀ : c_m(k) ≠ 0 for all m ∈ N},

and suppose that both Σ_m (c_m(k) − 1) and Σ_m |c_m(k) − 1|² are convergent for all k ∈ K.
(i) If K ≠ ∅, then {S_n} converges in distribution to some S ∈ X[0, 1), S ≠ U (n → ∞), and Z₀ \ K = {k ∈ Z : c_S(k) = 0}.
(ii) If K = ∅, then {S_n} →d U (n → ∞).
Proof: (i) By Proposition 2.7, Π_m c_m(k) is convergent for all k ∈ K. For all k ∈ Z₀ \ K we have lim_{n→∞} Π_{m=1}^n c_m(k) = 0. Proposition 1.7(ii) yields that {S_n} converges in distribution to some S ∈ X[0, 1) as n → ∞. So, Z₀ \ K ⊆ {k ∈ Z : c_S(k) = 0}. Since K ≠ ∅, there is an integer k ∈ Z₀ such that c_S(k) ≠ 0; hence S ≠ U.
(ii) For all k ∈ Z₀, lim_{n→∞} Π_{m=1}^n c_m(k) = 0. Theorem 2.8 yields {S_n} →d U as n → ∞. □
Kolmogorov's three series theorem (see e.g. Tucker [67], p.113) gives necessary and sufficient conditions for Σ X_m to converge almost surely; in this theorem there are no assumptions made on the existence of moments of X_m. In the following proposition we formulate sufficient conditions for Σ X_m to converge almost surely.
Proposition 2.10. [Tucker [67], pp. 110-113] Let (X_m) be a sequence of independent rv's, and suppose that Σ Var X_m is finite.
(i) Then Σ_{m=1}^n (X_m − E X_m) converges almost surely as n → ∞.
(ii) If Σ E X_m is convergent, then Σ_{m=1}^n X_m converges almost surely as n → ∞.
The following theorem says when {Σ_{m=1}^n X_m} converges in distribution or does not converge.

Theorem 2.11. Let (X_m) be a sequence of independent rv's such that Σ Var X_m is finite. Then {Σ_{m=1}^n X_m} converges in distribution (n → ∞) iff lim_{n→∞} {Σ_{m=1}^n E X_m} exists.
Proof: Set μ_m = E X_m for m ∈ N. Then we have

    {Σ_{m=1}^n X_m} = { Σ_{m=1}^n (X_m − μ_m) + { Σ_{m=1}^n μ_m } }.   (2.4)

By Proposition 2.10(i) we know that Σ_{m=1}^n (X_m − μ_m) → T almost surely (n → ∞) for some rv T.
Let lim_{n→∞} {Σ_{m=1}^n μ_m} = b for some b ∈ [0, 1]. Then, from (2.4), it follows that {Σ_{m=1}^n X_m} →d {T + b}.
Let {Σ_{m=1}^n X_m} converge as n → ∞. Similarly, we have

    {Σ_{m=1}^n μ_m} = { Σ_{m=1}^n (μ_m − X_m) + { Σ_{m=1}^n X_m } }.

Then, by Proposition 2.10(i), Σ_{m=1}^n (μ_m − X_m) converges almost surely as n → ∞. So lim_{n→∞} {Σ_{m=1}^n μ_m} exists. □
Remarks 2.12. (i) In the foregoing theorem it is possible that {Σ_{m=1}^n X_m} →d U (n → ∞): Let (X_m) be independent rv's with P(X_m = 0) = P(X_m = 2^{−m}) = ½ for all m ∈ N. Then |X_m| ≤ ½, and Σ E X_m is convergent and Σ Var X_m < ∞; so Proposition 2.10(ii) implies that Σ_{m=1}^n X_m converges almost surely. It is well known that (see Lewis [10])

    φ_U(t) = exp(it/2) (2 sin(t/2))/t = lim_{n→∞} exp(it/2) (2 sin(t/2))/(2^n sin(t 2^{−n}))
           = lim_{n→∞} exp(it/2) Π_{j=1}^n cos(t/2^{j+1}) = lim_{n→∞} Π_{j=1}^n exp(it/2^{j+1}) cos(t/2^{j+1})
           = lim_{n→∞} Π_{j=1}^n φ_{X_j}(t),

so that Σ_{m=1}^∞ X_m is uniformly distributed on [0, 1].
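The product identity underlying (i) can be checked numerically: with φ_{X_j}(t) = exp(it·2^{−(j+1)}) cos(t·2^{−(j+1)}), the partial products converge to the chf (e^{it} − 1)/(it) of the uniform distribution on [0, 1]. A sketch:

```python
import cmath

def chf_partial(t, n):
    # prod_{j=1}^{n} phi_{X_j}(t), phi_{X_j}(t) = exp(i*t/2**(j+1)) * cos(t/2**(j+1))
    z = 1 + 0j
    for j in range(1, n + 1):
        h = t / 2.0 ** (j + 1)
        z *= cmath.exp(1j * h) * cmath.cos(h)
    return z

def chf_uniform(t):
    # chf of the uniform distribution on [0, 1]
    return (cmath.exp(1j * t) - 1) / (1j * t)

diff = abs(chf_partial(5.0, 50) - chf_uniform(5.0))
print(diff)  # essentially zero
```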
(ii) In Example 2.17(c) we will give a sequence (X_m) for which Σ Var X_m is not convergent, while {Σ_{m=1}^n X_m} converges in distribution as n → ∞ to some T ≠ U.
(iii) In Section 2.1.3 we consider the case that Σ Var X_m is divergent. In this case we give additional sufficient conditions for {Σ_{m=1}^n X_m} to converge in distribution to U as n → ∞. In fact, in part (i) we have an example where Σ Var X_m is finite, whereas {Σ_{m=1}^n X_m} →d U (n → ∞).
2.1.3 Asymptotic uniformity in distribution (mod 1)
In this section we give necessary and sufficient conditions for the fractional parts of sums of independent rv's to converge in distribution to U. As a special case we consider weighted sums of iid rv's.
Theorem 2.13. Let (X_m) be a sequence of independent rv's with corresponding sequence (c_m) of FSS's, and let S_n := Σ_{m=1}^n X_m (n ∈ N).
(i) {S_n} →d U (n → ∞) iff Π_{m=1}^n c_m(k) → 0 (n → ∞) for all k ∈ Z₀.
(ii) Suppose c_m(k) ≠ 0 for all k ∈ Z₀, m ∈ N. Then {S_n} →d U (n → ∞) iff Σ_m (1 − |c_m(k)|) is divergent for all k ∈ Z₀.
Proof: (i) This is a consequence of Theorem 2.8 since c_U(k) = 0 (k ∈ Z₀).
(ii) This is a consequence of part (i) and Proposition 2.6. □
We now consider conditions under which the distributions of normed sums of independent rv's converge to some limit distribution. We first prove a general result.
Lemma 2.14. Let (Y_n) be a sequence of rv's with corresponding sequence (F_n) of df's, and having a density f_n for all n ∈ N. Let (a_n) (a_n > 0) and (b_n) be sequences of real numbers such that a_n → ∞ (n → ∞). Suppose there is a density f such that

    ∫_{−∞}^∞ |f_n(y) − f(y)| dy → 0   (n → ∞).   (2.5)

Then {a_n Y_n + b_n} →d U as n → ∞.
Proof: By Proposition 1.7(ii) it suffices to prove that, for all k ∈ Z₀, c_{{a_n Y_n + b_n}}(k) → 0 (n → ∞), or equivalently, for all k ∈ Z₀, c_{{a_n Y_n}}(k) → 0. Let A > 0. Then

    c_{{a_n Y_n}}(k) = ∫_{|y|≥A} β_k(a_n y) dF_n(y) + ∫_{|y|<A} β_k(a_n y) dF_n(y) =: I_1 + I_2.
Let ε > 0. By Scheffé's theorem we know that (2.5) implies F_n →w F, where F is the df corresponding to f (see e.g. Billingsley [8], thm 16.11). We take continuity points A and −A of F such that 1 − F(A) < ε and F(−A) < ε. Then for sufficiently large n we find

    |I_1| ≤ 1 − F_n(A) + F_n(−A)
         ≤ 1 − F(A) + |F(A) − F_n(A)| + |F_n(−A) − F(−A)| + F(−A)
         < 4ε.
Further we have

    |I_2| ≤ |∫_{−A}^A β_k(a_n y)(f_n(y) − f(y)) dy| + |∫_{−A}^A β_k(a_n y) f(y) dy|.   (2.6)
By (2.5) it follows that there exists N_1 ∈ N such that ∫_{−∞}^∞ |f_n(y) − f(y)| dy < ε for n ≥ N_1. Hence for the first term of (2.6) we obtain for n ≥ N_1

    |∫_{−A}^A β_k(a_n y)(f_n(y) − f(y)) dy| ≤ ∫_{−A}^A |f_n(y) − f(y)| dy < ε.

Applying the Riemann-Lebesgue lemma (cf. Proposition 1.5) to the second term of (2.6), we find |∫_{−A}^A β_k(a_n y) f(y) dy| < ε for n ≥ N_2 and some N_2 ∈ N. So, for n ≥ max(N_1, N_2), we have |I_2| < 2ε, which completes the proof. □
Theorem 2.15. Under the assumptions of Lemma 2.14 with Y_n := (S_n − b_n)/a_n and S_n := Σ_{m=1}^n X_m, where (X_m) is a sequence of independent rv's: {S_n} →d U (n → ∞).
We now prove a result concerning convergence of fractional parts of weighted sums of iid rv's to U.
Theorem 2.16. Let (X_m) be a sequence of iid rv's with chf φ and 0 < Var X_1 < ∞. Let (a_m) be a sequence of real numbers such that lim_{m→∞} a_m = 0, and define W_n := Σ_{m=1}^n a_m X_m (n ∈ N).
(i) If Σ a_m² is divergent, then {W_n} →d U (n → ∞).
(ii) Suppose there is an integer k ∈ Z₀ such that φ(2πk a_m) ≠ 0 for all m ∈ N. If Σ a_m² is convergent, then {W_n} does not converge in distribution to U as n → ∞. In addition, if Σ a_m is convergent, then {W_n} →d W (n → ∞) for some W ≠ U.
Proof: (i) Let k ∈ Z₀ be fixed. By Theorem 2.13(ii) it suffices to show that the sum Σ_{m=1}^∞ (1 − |φ(2πk a_m)|) is divergent. It is well known that 1 − |φ(t)| = ½(σt)² + o(t²) as t → 0 with σ² := Var X_1. Hence, for sufficiently small t,

    1 − |φ(t)| ≥ ¼(σt)².   (2.7)
Since a_m → 0 (m → ∞), for sufficiently large M, it follows from (2.7) that

    Σ_{m=M}^n (1 − |φ(2πk a_m)|) ≥ (σπk)² Σ_{m=M}^n a_m².

So Σ_{m=1}^∞ (1 − |φ(2πk a_m)|) is divergent.
(ii) Similarly, it now follows from the expansion preceding (2.7) that Σ_m (1 − |φ(2πk a_m)|) is convergent, and hence, by Proposition 2.6, Π_m |φ(2πk a_m)| is convergent. As a result, from Theorem 2.13 it follows that {W_n} does not converge in distribution to U. In addition, if Σ a_m is convergent, then Theorem 2.11 implies that {W_n} converges in distribution to some W. □
Examples 2.17. Let X_1, X_2, … be a sequence of independent rv's distributed as X. Let (a_m) be a sequence of real numbers, and let W_n := Σ_{m=1}^n a_m X_m (n ∈ N).
(a) Let X ~ N(μ, σ²), and a_m = m^{−b} with b ∈ ℝ.
If b > 1, then Σ a_m and Σ a_m² are convergent. Theorem 2.11 yields that {W_n} converges in distribution as n → ∞.
If ½ < b ≤ 1, then Σ a_m is divergent and Σ a_m² is convergent. We consider μ = 0 and μ ≠ 0: if μ = 0, then Theorem 2.11 implies that {W_n} converges in distribution as n → ∞; if μ ≠ 0, then by the same theorem, {W_n} does not converge in distribution.
If b ≤ ½, then Σ a_m² is divergent; hence, for k ∈ Z₀, lim_{n→∞} c_{{W_n}}(k) = 0; so, {W_n} →d U. Theorem 2.16(i) is also applicable.
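The dichotomy in (a) is visible in the exact FSS: for centered normal X ~ N(0, σ²) one has |c_{{W_n}}(k)| = exp(−2π²k²σ² Σ_{m≤n} a_m²). A sketch, with the hypothetical choice σ = 0.1 so that the numbers stay visibly away from 0 and 1:

```python
import math

def fss_wn(k, b, n, sigma=0.1):
    # |c_{W_n}(k)| = exp(-2*pi**2*(k*sigma)**2 * sum_{m<=n} m**(-2b)) for X ~ N(0, sigma**2)
    s = sum(m ** (-2.0 * b) for m in range(1, n + 1))
    return math.exp(-2.0 * math.pi ** 2 * (k * sigma) ** 2 * s)

print(fss_wn(1, 1.0, 10**4))                        # b = 1: sum a_m^2 converges, FSS stays positive
print(fss_wn(1, 0.5, 10**2), fss_wn(1, 0.5, 10**6)) # b = 1/2: FSS keeps drifting toward 0
```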
(b) Let X ~ Exp(λ), and a_m = m^{−b} with b > 0. It is clear that

    |φ_X(t)| = 1/√(1 + (t/λ)²)   (t ∈ ℝ).

Let b ≤ ½. Then Theorem 2.16(i) implies that {W_n} →d U (n → ∞).
If ½ < b ≤ 1, then Theorem 2.11 applies, and hence {W_n} does not converge in distribution as n → ∞. We remark that, if b = 1, then W_n =d max(X_1, …, X_n) (n ∈ N). In Example 3.69(ii) we will also see that {max(X_1, …, X_n)} does not converge in distribution.
If b > 1, then Theorem 2.11 yields that {W_n} converges in distribution as n → ∞, and by Theorem 2.16(ii) it follows that {W_n} converges to some W ≠ U.
(c) Let X have Cauchy's df with φ_X(t) = exp(−|t|) (t ∈ ℝ). We note that the moments of X do not exist. Then c_{{W_n}}(k) = exp(−2π|k| Σ_{m=1}^n |a_m|) (k ∈ Z). If Σ |a_m| is divergent, then {W_n} →d U (n → ∞) (cf. Theorem 2.13(i)). If β := lim_{n→∞} Σ_{m=1}^n |a_m| < ∞, then c_{{W_n}}(k) → exp(−2π|k|β) (n → ∞); see also Remark 2.12(ii).
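The closed form in (c) can be confirmed by simulation: sampling standard Cauchy variates via tan(π(U − ½)) and averaging exp(2πiW_n) should reproduce exp(−2πβ) for k = 1. A Monte Carlo sketch with the hypothetical weights a_m = 2^{−m}, so that β is close to 1:

```python
import cmath, math, random

random.seed(7)
a = [2.0 ** -m for m in range(1, 21)]  # hypothetical weights; beta = sum |a_m| ~ 1
beta = sum(a)
exact = math.exp(-2 * math.pi * beta)  # exp(-2*pi*|k|*beta) with k = 1

N = 100_000                            # Monte Carlo estimate of E exp(2*pi*i*W_n)
est = sum(cmath.exp(2j * math.pi * sum(am * math.tan(math.pi * (random.random() - 0.5))
                                       for am in a))
          for _ in range(N)) / N
print(abs(est - exact))
```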
2.2 Products
In this section we consider the fractional parts of products of iid rv's. To this end, for a sequence (X_j) of iid rv's with density f, let P_n = Π_{j=1}^n X_j, and let S_n = log P_n (n ∈ N). We first derive sufficient conditions on the derivatives of the density of S_n for {P_n} →d U as n → ∞ (cf. Theorem 2.20). This leads to rather weak sufficient conditions in terms of f' for {P_n} →d U as n → ∞ (cf. Theorem 2.23). As a preparation, we prove some auxiliary results.
Proposition 2.18. Let ν ∈ N ∪ {0}. Let f : ℝ₊ → ℝ be a function that is ν times differentiable. Then for some positive integers β(ν, j)

    (d/dy)^ν ((1/y) f(log y)) = y^{−(ν+1)} Σ_{j=0}^ν (−1)^{ν−j} β(ν, j) f^{(j)}(log y)   (y ≥ 1).

Proof: By induction; see also Faà di Bruno's formula (cf. Abramowitz and Stegun [2], p.823). □
To avoid trivialities, we assume here that F is nondegenerate and that α(F) ≥ 1.
Lemma 2.19. Let (X_j) be a sequence of iid rv's having an absolutely continuous df F with α(F) ≥ 1. Denote the density of S_n := Σ_{j=1}^n log X_j by f_n (n ∈ N). Let m be a nonnegative integer, and suppose that the first m derivatives of f_n exist. Suppose further that

    lim_{n→∞} lim_{x→0} f_n^{(ν)}(x) = 0   (ν = 0, …, m − 1),   (2.8)

    lim_{n→∞} lim_{x→∞} e^{−x(ν+1)} f_n^{(ν)}(x) = 0   (ν = 0, …, m − 1),   (2.9)

and that there exists a real number α > 0 such that

    sup_{ν=0,…,m} sup_{x∈ℝ₊} |e^{x(α−m)} f_n^{(ν)}(x)| → 0   (n → ∞).   (2.10)

Then {Π_{j=1}^n X_j} →d U (n → ∞).
Proof: Let c_n denote the FSS of {Π_{j=1}^n X_j}. By Proposition 1.7(ii) it suffices to show that lim_{n→∞} c_n(k) = 0 for all k ∈ Z₀. Let k ∈ Z₀ be fixed. Clearly, for n ∈ N

    c_n(k) = ∫_1^∞ e^{2πiky} dF_{S_n}(log y) = ∫_1^∞ e^{2πiky} (1/y) f_n(log y) dy

           = [ (e^{2πiky}/(2πik)) (1/y) f_n(log y) ]_{y=1}^{y=∞} − ∫_1^∞ (e^{2πiky}/(2πik)) d((1/y) f_n(log y)).   (2.11)

Now, it is easy to verify by induction that for n ∈ N

    c_n(k) = Σ_{ν=0}^{m−1} (−1)^ν D_ν(n, k) + ((−1)^m/(2πik)^m) ∫_1^∞ e^{2πiky} (d/dy)^m ((1/y) f_n(log y)) dy,

where for ν = 0, …, m − 1

    D_ν(n, k) := [ (e^{2πiky}/(2πik)^{ν+1}) (d/dy)^ν ((1/y) f_n(log y)) ]_{y=1}^{y=∞}.   (2.12)

Hence

    |c_n(k)| ≤ Σ_{ν=0}^{m−1} |D_ν(n, k)| + ∫_1^∞ |(d/dy)^m ((1/y) f_n(log y))| dy   (n ∈ N).   (2.13)
We show that D_ν(n,k) → 0 as n → ∞; from Proposition 2.18 we get for some positive integers β(ν,j)

|(d/dy)^ν ((1/y) f_n(log y))| ≤ max_{0≤j≤ν} β(ν,j) · y^{−(ν+1)} Σ_{j=0}^{ν} |f_n^{(j)}(log y)|   (y ≥ 1; ν = 0, ..., m).   (2.14)
By (2.8) and (2.9), for large n the right-hand side of (2.14) is arbitrarily small as y → 1 or y → ∞; hence D_ν(n,k) → 0 (n → ∞) for ν = 0, ..., m−1. It also follows from (2.14) that for any α > 0

∫_1^∞ |(d/dy)^m ((1/y) f_n(log y))| dy ≤ C Σ_{j=0}^{m} ∫_1^∞ y^{−1−α} |y^{α−m} f_n^{(j)}(log y)| dy,   (2.15)
32 Chapter 2. Sums and Products
where C = max_{0≤j≤m} β(m,j). By dominated convergence and (2.10) it follows that the right-hand side of (2.15) tends to zero as n → ∞. By (2.13) we have c_n(k) → 0 (n → ∞).
□
In the following theorem we formulate sufficient conditions for the fractional parts of products of iid rv's to converge in distribution to a uniform limit. We note that this theorem is not a special case of Lemma 2.19.
Theorem 2.20. Let (X_j) be a sequence of iid rv's having an absolutely continuous df F with α(F) ≥ 1. Denote the density of S_n := Σ_{j=1}^n log X_j by f_n (n ∈ N), and suppose that its derivative f_n' exists. Suppose further that

lim_{n→∞} lim_{x→0} f_n(x) = 0,   lim_{n→∞} lim_{x→∞} e^{−x} f_n(x) = 0,   (2.16)

and that

sup_{x∈R+} |f_n'(x) − f_n(x)| → 0   (n → ∞).   (2.17)

Then {Π_{j=1}^n X_j} →d U (n → ∞).
Proof: Let c_n denote the FSS of {Π_{j=1}^n X_j}. We have to show that lim_{n→∞} c_n(k) = 0 for all k ∈ Z_0. Let k ∈ Z_0 be fixed. From (2.11) we have

|c_n(k)| ≤ |D_0(n,k)| + ∫_1^∞ |d/dy ((1/y) f_n(log y))| dy   (n ∈ N)

with D_0(n,k) as in (2.12), and by (2.16), D_0(n,k) → 0 as n → ∞. Furthermore, we have

∫_1^∞ |d/dy ((1/y) f_n(log y))| dy = ∫_0^∞ e^{−x} |f_n'(x) − f_n(x)| dx   (n ∈ N).

So, by dominated convergence and (2.17), we have c_n(k) → 0 (n → ∞). □
Theorem 2.21. Let (X_j) be a sequence of iid rv's having an absolutely continuous df F with α(F) ≥ 1. Denote the density of S_n := Σ_{j=1}^n log X_j by f_n (n ∈ N), and suppose that its derivative f_n' exists. Let φ be the chf of log X_1. If

lim_{n→∞} ∫_{−∞}^∞ |t| |φ(t)|^n dt = 0,   (2.18)

then {Π_{j=1}^n X_j} →d U (n → ∞).
Proof: It suffices to show that the conditions (2.16) and (2.17) are satisfied. Since ∫_{−∞}^∞ |φ(t)|^n dt exists for sufficiently large n, we have (cf. Proposition 1.4)

f_n(x) = (1/2π) ∫_{−∞}^∞ e^{−ixt} φ^n(t) dt,   (2.19)

whence, for all x ≥ 0, |e^{−x} f_n(x)| ≤ |f_n(x)| ≤ ∫_{−∞}^∞ |φ(t)|^n dt. Therefore, f_n(x) and e^{−x} f_n(x) tend to 0 as n → ∞, uniformly in x ∈ R+ (cf. (2.18)). So (2.16) holds. Once again using (2.19),

|f_n'(x) − f_n(x)| = |(1/2π) ∫_{−∞}^∞ −(it + 1) e^{−itx} φ^n(t) dt| ≤ ∫_{−∞}^∞ (|t| + 1) |φ(t)|^n dt.   (2.20)

By (2.18), the right-hand side of (2.20) tends to zero as n → ∞. So (2.17) holds.
□
Before we prove the main theorem of this section, we first state the following result.
Proposition 2.22. [see Kawata [35], thm. 2.7.3] Let X be a rv with density f. Suppose that f(x) = ∫_{−∞}^x f'(u) du with f' measurable and ∫_{−∞}^∞ |f'(x)| dx < ∞. Then φ_X(t) = o(|t|^{−1}) as |t| → ∞.
Theorem 2.23. Let (X_j) be a sequence of iid rv's with df F and α(F) ≥ 1, and having a density f with derivative f' such that

∫_1^∞ x |f'(x)| dx < ∞.   (2.21)

Then {Π_{j=1}^n X_j} →d U (n → ∞).
Proof: Let f_1 and φ denote the density and chf of log X_1, respectively. Clearly, f_1'(x) = f'(e^x) e^{2x} + f(e^x) e^x (x ∈ R+). Furthermore, from (2.21) we get

∫_0^∞ |f_1'(x)| dx ≤ ∫_0^∞ |f'(e^x)| e^{2x} dx + ∫_0^∞ f(e^x) e^x dx = ∫_1^∞ y |f'(y)| dy + ∫_1^∞ f(y) dy ≤ ∫_1^∞ y |f'(y)| dy + 1 < ∞.

By Proposition 2.22 we have φ(t) = o(|t|^{−1}) (|t| → ∞); hence there is a positive real number T such that |φ(t)| ≤ |t|^{−1} if |t| ≥ T. Since |φ(t)| < 1 for all t ≠ 0 (cf. Proposition 1.15), we have: for any ε > 0 there exists a constant b < 1 such that |φ(t)| ≤ b if ε ≤ |t| ≤ T. Consequently, we get for any ε > 0 and for sufficiently large n

∫_{−∞}^∞ |t| |φ(t)|^n dt = ∫_{|t|<ε} |t| |φ(t)|^n dt + ∫_{ε≤|t|≤T} |t| |φ(t)|^n dt + ∫_{|t|>T} |t| |φ(t)|^n dt
≤ ∫_{|t|<ε} |t| dt + T² b^n + ∫_{|t|>T} |t|^{−n+1} dt ≤ 2ε².

So Theorem 2.21 yields that {Π_{j=1}^n X_j} →d U (n → ∞). □
Corollary 2.24. Let (X_j) be a sequence of iid rv's with df F and α(F) ≥ 1, and having a bounded nonincreasing density f with derivative f'. Then {Π_{j=1}^n X_j} →d U (n → ∞).

Proof: On account of Theorem 2.23 it suffices to show that (2.21) holds. We obtain

∫_1^∞ x |f'(x)| dx = −∫_1^∞ x f'(x) dx = [−x f(x)]_1^∞ + ∫_1^∞ f(x) dx ≤ f(1) + 1 < ∞.
□
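Corollary 2.24 is easy to check numerically. The following sketch is an illustration of ours (the uniform density on [1,2] is an arbitrary bounded nonincreasing choice, as are the sample sizes); it compares the empirical distribution of {X_1 ⋯ X_16} with the uniform df by means of the Kolmogorov distance:

```python
import numpy as np

def ks_to_uniform(sample):
    """Kolmogorov distance between the empirical df of `sample` and the uniform df on [0,1)."""
    s = np.sort(sample)
    n = len(s)
    grid = np.arange(1, n + 1) / n
    return float(max(np.max(grid - s), np.max(s - grid + 1.0 / n)))

rng = np.random.default_rng(0)
n_factors, reps = 16, 5000
X = rng.uniform(1.0, 2.0, size=(reps, n_factors))  # density f = 1 on [1,2]: bounded, nonincreasing
frac = np.prod(X, axis=1) % 1.0                    # fractional parts {X_1 * ... * X_16}
print(ks_to_uniform(frac))
```

Already with 16 factors the distance is of the order of the pure sampling noise for 5000 replications.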
In the following example we see that (2.21) does not hold for all densities.
Example 2.25. Let f(x) = (1/Γ(r)) e^{−(x−1)} (x−1)^{r−1} (x ≥ 1; r ∈ (0,1)) be the density of a shifted Gamma distribution with parameters r and 1. Then

f'(x) = −f(x)(1 + (1−r)(x−1)^{−1}).

It is easy to see that (2.21) does not hold. Note that f(x) is unbounded at x = 1.
Chapter 3
Maxima
In this chapter we study the convergence in distribution of the fractional parts of maxima of iid rv's to, possibly degenerate, limits, and in particular, to a uniformly distributed limit. We will see that these fractional parts may converge to any given rv on [0,1), or not converge at all. We first formulate necessary and sufficient conditions in terms of the cumulative hazard function of the underlying distribution for these fractional parts to converge. Using two different methods, one of which uses results on the number of maxima in a discrete sample, we show that any given rv on [0,1) can occur as a limit for these fractional parts. In the recent literature, some attention has been paid to the number of maxima in a discrete sample. Furthermore, we derive sufficient conditions for these fractional parts to converge in distribution to a uniformly distributed rv. This leads to an easily verifiable sufficient condition: the hazard rate of the underlying distribution decreases monotonically to zero. Using results from the theory of regular variation we derive sufficient conditions under which these fractional parts do not converge in distribution. Naturally, the following question arises: how are the results about convergence in distribution of the fractional parts of maxima related to classical extreme value theory? We will give an answer to this question, and we find some surprising results.
This chapter is organized as follows. Let Z_n := max(X_1, ..., X_n), where X_1, X_2, ... are iid rv's with df F, and assume that the right endpoint of F is infinite. In Section 3.1, we show that any given distribution on [0,1) can occur as a limit for {Z_n}. We also prove that the convergence in distribution of {Z_n} to some rv Z as n → ∞ is necessary and sufficient for the FSS of {H^−(Y + log n)} to converge to the FSS of Z as n → ∞, where Y ~ Λ with Λ(x) := exp(−e^{−x}) (x ∈ R) and H^− is the generalized inverse of the cumulative hazard function H of F. In Section 3.2, we give some sufficient conditions on this inverse for its FSS to converge to the FSS of U. Furthermore, we derive rather weak sufficient conditions on the hazard rate
38 Chapter 3. Maxima
of F for {Z_n} →d U. In Section 3.3, under certain conditions on F we show that {Z_n} does not converge in distribution. Finally, in Section 3.4, we consider the connection with extreme value theory, and in Section 3.5, we give conclusions, describing the relation between the most important results of Section 3.4 and the foregoing sections.
3.1 Convergence to some limit
In this section we give sufficient conditions on the underlying distribution for the fractional parts of maxima to converge in distribution to a nondegenerate limit; we briefly consider convergence to degenerate limits.
Let Z_n := max(X_1, ..., X_n) (n ∈ N), where (X_j) is a sequence of iid rv's with df F. In Section 3.1.1, we show that {Z_n} converges almost surely if the right endpoint of F, defined as ω(F), is finite. In addition, we give necessary and sufficient conditions in terms of the generalized inverse of the cumulative hazard function of F with ω(F) = ∞ for the distribution of {Z_n} to converge to a, possibly degenerate, limit as n → ∞. In Section 3.1.2, we give sufficient conditions on F for {Z_n} to converge in distribution to Z for some Z ∈ X[0,1). In addition, for any given rv W ∈ X[0,1) we construct a sequence (X_j) of iid rv's such that {Z_n} →d W; in Section 3.1.3, we prove this by using results on the number of maxima in a discrete sample.
3.1.1 Necessary and sufficient conditions
Throughout this chapter, when studying the limit distribution of fractional parts of maxima, we consider

Z_n := max(X_1, ..., X_n)

with (X_j) ∈ A(F,X), and

A(F,X) := {(X_j)_1^∞ : (X_j) are independent copies of X with df F and ω(F) = ∞}.
We give necessary and sufficient conditions on F for {Z_n} to converge in distribution as n → ∞. In the proof we have in mind the fact that the (unit) exponential distribution belongs to the domain of attraction of the Gumbel-type extreme value distribution (see Section 3.4.1). In a similar manner, we derive necessary and sufficient conditions for the fractional parts of minima to converge in distribution.
3.1. Convergence to some limit 39
The assumption w(F) =: 00 is justified by the following lemma, which is obvious once stated. Its proof follows easily by using the fact that the maxima of iid rv's converge almost surely to the right endpoint of the underlying distribution (cf. Resnick [51], p.9).
Lemma 3.1. Let (X_j) be a sequence of iid rv's with df F and ω(F) < ∞, and define Z_n := max(X_1, ..., X_n). Then with probability 1

lim_{n→∞} {Z_n} = 1

if ω(F) ∈ Z and P(X = ω(F)) = 0, and lim_{n→∞} {Z_n} = {ω(F)} otherwise.
To formulate necessary and sufficient conditions for the distribution of {Z_n} to converge, we use hazard functions; as a preparation we first prove an auxiliary result.

Lemma 3.2. Let V ~ Exp(1), and let γ : R → C be bounded and measurable. Then the existence of either of the following limits:

lim_{n→∞} ∫_0^∞ γ(x) dF_V^n(x),   lim_{n→∞} ∫_{−∞}^∞ γ(x + log n) dΛ(x),

implies the existence of the other, and the limits are equal.
Proof: Since 1 − y ≤ e^{−y} (y ∈ R), for n ≥ 2 and x ≥ −log n

(d/dx) F_V^n(x + log n) = (1 − (1/n) e^{−x})^{n−1} e^{−x} ≤ exp(−((n−1)/n) e^{−x}) e^{−x} ≤ exp(−x − ½ e^{−x}).

Hence lim_{n→∞} (d/dx) F_V^n(x + log n) = exp(−x − e^{−x}) = Λ'(x), with domination by exp(−x − ½ e^{−x}). Then

| ∫_0^∞ γ(x) dF_V^n(x) − ∫_{−∞}^∞ γ(x + log n) dΛ(x) |
= | ∫_{−log n}^∞ γ(x + log n) ((d/dx) F_V^n(x + log n) − Λ'(x)) dx | → 0   (n → ∞)

because of dominated convergence. □
To represent F in terms of hazard functions we need the following concept. Let G : R → R be a nondecreasing function. With the convention that the infimum of an empty set is equal to +∞, we define the (left-continuous generalized) inverse G^− : R → R of G by

G^−(y) := inf{x ∈ R : G(x) ≥ y};

the function G^− is nondecreasing, and (G^−)^− = G if G is left continuous (cf. Resnick [51], sect. 0.2).

For any rv X with df F we can write

X =d H^−(V),   (3.1)

where H^− is the inverse of the cumulative hazard function H of F, defined by H(x) := −log(1 − F(x)), and V ~ Exp(1). If F has a density f, then H has a density h(x) := f(x)/(1 − F(x)); h is called the hazard rate of F.
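Representation (3.1) is also a convenient sampling device. A minimal sketch (ours; the Weibull df F(x) = 1 − exp(−x^c) is an arbitrary choice for which H and H^− are explicit): here H(x) = x^c, so H^−(y) = y^{1/c}, and H^−(V) with V ~ Exp(1) should reproduce F.

```python
import numpy as np

rng = np.random.default_rng(1)
c = 2.0                            # Weibull shape: H(x) = x**c, hence H^-(y) = y**(1/c)
V = rng.exponential(1.0, 200_000)  # V ~ Exp(1)
X = V ** (1.0 / c)                 # X = H^-(V): should have df F(x) = 1 - exp(-x**c)

for x in (0.5, 1.0, 1.5):
    print(float(np.mean(X <= x)), 1.0 - np.exp(-x ** c))  # empirical df vs. F
```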
We now come to the main result of this section.
Theorem 3.3. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X). Let H be the cumulative hazard function of F, and let Z ∈ X[0,1). Then {Z_n} →d Z (n → ∞) iff

lim_{n→∞} ∫_{−log n}^∞ exp(2πik H^−(x + log n)) dΛ(x) = c_Z(k)   (k ∈ Z).
Proof: From (3.1) we know that X =d H^−(V) with V ~ Exp(1) and that H^− is nondecreasing. Hence

{Z_n} = {max(X_1, ..., X_n)} =d {max(H^−(V_1), ..., H^−(V_n))} = {H^−(max(V_1, ..., V_n))},

where (V_j) is a sequence of iid rv's with df F_V. It follows that for all k ∈ Z

c_{{Z_n}}(k) = E e^{2πik Z_n} = E e^{2πik H^−(max(V_1,...,V_n))} = ∫_0^∞ β_k(H^−(v)) dF_V^n(v).

By Lemma 3.2 and the continuity theorem (cf. Proposition 1.7) the proof is complete.
□
Remarks 3.4. (i) In the foregoing theorem we prove, in fact, that {H^−(Y + log n)} →d Z with Y ~ Λ is necessary and sufficient for {Z_n} →d Z as n → ∞. We will use this result in Section 3.2 to formulate sufficient conditions for {Z_n} →d U as n → ∞.

(ii) Let W_1, ..., W_n be independent rv's with extreme value df Λ; then it follows that max(W_1, ..., W_n) =d W_1 + log n. Similarly to the proof of the foregoing theorem, writing X =d K^−(W) with K(x) := −log(−log F(x)) (x ∈ R), K^− nondecreasing, and W ~ Λ, we find

{Z_n} = {max(X_1, ..., X_n)} =d {K^−(max(W_1, ..., W_n))} =d {K^−(W + log n)}

and c_{{Z_n}}(k) = ∫_{−∞}^∞ β_k(K^−(x + log n)) dΛ(x) for all k ∈ Z_0.
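Remark 3.4(i) can be observed in a small experiment (ours; the unit exponential is chosen because then H^− is the identity): already for moderate n the fractional parts of Z_n and of Y + log n, Y ~ Λ, are nearly indistinguishable.

```python
import numpy as np

def two_sample_ks(a, b):
    """Kolmogorov distance between the empirical dfs of two samples."""
    pts = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), pts, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), pts, side="right") / len(b)
    return float(np.max(np.abs(Fa - Fb)))

rng = np.random.default_rng(2)
n, reps = 200, 4000
Zn = rng.exponential(1.0, size=(reps, n)).max(axis=1)  # for Exp(1), H^- is the identity
Y = rng.gumbel(0.0, 1.0, size=reps)                    # Y ~ Lambda
print(two_sample_ks(Zn % 1.0, (Y + np.log(n)) % 1.0))  # small
```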
Maxima can be transformed into minima by min(x,y) = −max(−x,−y). When studying minima we use a slightly different approach to arrive at necessary and sufficient conditions for the fractional parts of minima to converge in distribution. For this purpose, we suppose that α(F) = −∞, and we write

X =d G^−(−V),   (3.2)

where G^− : (−∞, 0] → R is the inverse of log F(x), and V ~ Exp(1).
Theorem 3.5. Let (Y_j) be independent copies of Y with α(F_Y) = −∞. Set Y =d G^−(−V) as in (3.2), and define W_n := min(Y_1, ..., Y_n) (n ∈ N). Let Z ∈ X[0,1). Then {W_n} →d Z (n → ∞) iff

lim_{n→∞} ∫_{−log n}^∞ exp(2πik G^−(−x − log n)) dΛ(x) = c_Z(k)   (k ∈ Z).

Proof: We have {W_n} =d {G^−(min(−V_1, ..., −V_n))} = {G^−(−max(V_1, ..., V_n))} with V_j ~ Exp(1) (j = 1, ..., n). So, for k ∈ Z

c_{{W_n}}(k) = E e^{2πik G^−(−max(V_1,...,V_n))} = ∫_0^∞ β_k(G^−(−x)) dF_V^n(x),

and hence, by Lemma 3.2, the proof is complete. □
By the fact that max(X_1, ..., X_n) = −min(−X_1, ..., −X_n) we have

Corollary 3.6. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X), and suppose that {Z_n} →d Z (n → ∞) for some Z ∈ X[0,1). Let W_n := min(−X_1, ..., −X_n) (n ∈ N). Then {W_n} →d {−Z} (n → ∞).
3.1.2 Convergence to possibly degenerate limits
Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X). In this section we give sufficient conditions in terms of the underlying df F for {Z_n} to converge in distribution. These conditions lead to the following construction: for any given rv W in X[0,1) we construct a sequence (X_j) of iid rv's such that {Z_n} converges in distribution to W as n → ∞. We first give some notation:

A_k(x) := (F(k + 1 + x) − F(k + 1)) / (F(k + 1) − F(k)),

C_k(x) := (F(k − 1 + x) − F(k − 1)) / (F(k + 1) − F(k)),

B_k(x) := (F(k + x) − F(k)) / (F(k + 1) − F(k))   (x ∈ (0,1], k ∈ Z).   (3.3)
Since a^n − b^n = (a − b) Σ_{j=0}^{n−1} a^j b^{n−1−j} (0 ≤ b ≤ a ≤ 1), we obtain for n ∈ N and x ∈ (0,1] the following inequalities for k ∈ Z:

n(F(k + x) − F(k)) F^{n−1}(k) ≤ F^n(k + x) − F^n(k) ≤ n(F(k + x) − F(k)) F^{n−1}(k + x).   (3.4)
Next, we give sufficient conditions on F for the distribution of {Z_n} to converge.
Theorem 3.7. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X), and suppose that F(k+1) − F(k) (k ∈ Z) is positive for sufficiently large k. Let A_k and C_k be given as in (3.3), and let Z ∈ X[0,1) with df G. If for all continuity points x ∈ (0,1) of G

liminf_{k→∞} A_k(x) ≥ G(x)   (3.5)

and

limsup_{k→∞} C_k(x) ≤ G(x),   (3.6)

then {Z_n} →d Z (n → ∞).
Proof: Let ε > 0, and let x ∈ (0,1) be a continuity point of G. Since (3.5) and (3.6) hold, there is an integer m ∈ N such that for all k ≥ m

A_k(x) ≥ G(x) − ε,   C_k(x) ≤ G(x) + ε.   (3.7)

Since ω(F) = ∞, F(m) < 1, whence F^n(m) ≤ ε for sufficiently large n. Using (3.4) twice, and (3.7), we find for sufficiently large n

P({Z_n} < x) = Σ_{k=−∞}^∞ (F^n(k + x) − F^n(k))
≤ Σ_{k=−∞}^{m−1} (F^n(k + 1) − F^n(k)) + Σ_{k=m}^∞ (F^n(k + x) − F^n(k))
≤ ε + Σ_{k=m+1}^∞ (F(k − 1 + x) − F(k − 1)) n F^{n−1}(k)
= ε + Σ_{k=m+1}^∞ C_k(x) (F(k + 1) − F(k)) n F^{n−1}(k)
≤ ε + (G(x) + ε) Σ_{k=m+1}^∞ (F^n(k + 1) − F^n(k))
= ε + (G(x) + ε)(1 − F^n(m + 1)) ≤ G(x) + 2ε.   (3.8)
Similarly, for sufficiently large n

P({Z_n} < x) ≥ Σ_{k=−∞}^∞ (F(k + x) − F(k)) n F^{n−1}(k)
≥ Σ_{k=m+1}^∞ (F(k + x) − F(k)) n F^{n−1}(k)
= Σ_{k=m}^∞ A_k(x) (F(k + 1) − F(k)) n F^{n−1}(k + 1)
≥ (G(x) − ε)(1 − F^n(m)) ≥ (G(x) − ε)(1 − ε).   (3.9)

Combining (3.8) and (3.9) yields P({Z_n} < x) → G(x) as n → ∞.
□
In the following lemma we formulate sufficient conditions in terms of B_k(x) and A_k(1) for (3.5) and (3.6) to hold.

Lemma 3.8. Let F be a df such that F(k+1) − F(k) (k ∈ Z) is positive for sufficiently large k, and let A_k, B_k and C_k be given as in (3.3). Let G ∈ F([0,1)).
(i) Suppose that liminf_{k→∞} A_k(1) = 1. If for all continuity points x ∈ (0,1) of G

lim_{k→∞} B_k(x) = G(x),   (3.10)

then (3.5) and (3.6) hold.
(ii) Suppose that limsup_{k→∞} A_k(1) = 1. If for all continuity points x ∈ (0,1) of G (3.5) and (3.6) hold, then (3.10) holds.
Proof: (i) Clearly, for all continuity points x ∈ (0,1) of G

liminf_{k→∞} A_k(x) = liminf_{k→∞} B_{k+1}(x) A_k(1) ≥ G(x)

and

limsup_{k→∞} C_k(x) = limsup_{k→∞} B_{k−1}(x) (A_{k−1}(1))^{−1} ≤ G(x).

(ii) By the fact that limsup_{k→∞} A_k(1) = 1 implies liminf_{k→∞} (A_k(1))^{−1} = 1, we find

liminf_{k→∞} B_k(x) = liminf_{k→∞} A_{k−1}(x) (A_{k−1}(1))^{−1} ≥ G(x)

and

limsup_{k→∞} B_k(x) = limsup_{k→∞} C_{k+1}(x) A_k(1) ≤ G(x),

whence (3.10) holds.
□
Theorem 3.9. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X), and suppose that F(k+1) − F(k) (k ∈ Z) is positive for sufficiently large k. Let A_k and B_k be given as in (3.3), and let Z ∈ X[0,1) with df G. If (3.10) holds and liminf_{k→∞} A_k(1) = 1, then {Z_n} →d Z (n → ∞).

Remarks 3.10. (i) Clearly, B_k(x) = P(X − k < x | X ∈ [k, k+1)). In other words, (3.10) says that for large k the conditional df of X − k, given 0 ≤ X − k < 1, is approximately equal to G.
(ii) We suspect that if for some x ∈ (0,1)

liminf_{k→∞} A_k(x) < limsup_{k→∞} C_k(x),

then {Z_n} does not converge in distribution (see also Example 3.14(b)).
Computing B_k(x) and A_k(1) can be hard, and therefore we discuss classes of df's which are locally tail-equivalent in the sense specified below. The following result shows that one can switch to a distribution with an easy tail and calculate B_k(x) and A_k(1) for that.
Lemma 3.11. Let G ∈ F([0,1)). Let F* be a df such that F*(k+1) − F*(k) (k ∈ Z) is positive for sufficiently large k, and such that for all continuity points x ∈ (0,1) of G

lim_{k→∞} (F*(k + x) − F*(k)) / (F*(k + 1) − F*(k)) = G(x),   (3.11)

liminf_{k→∞} (F*(k + 2) − F*(k + 1)) / (F*(k + 1) − F*(k)) = 1.   (3.12)
Let F be a df that satisfies the following properties:

P1. For some D > 0

lim_{k→∞} (F(k + x) − F(k)) / (F*(k + x) − F*(k)) = D   (3.13)

for all continuity points x ∈ (0,1) of G for which F*(k + x) − F*(k) is positive for sufficiently large k.

P2. Suppose that F*(k + x) = F*(k) for sufficiently large k for all continuity points x ∈ (0,1] of G for which (3.13) does not hold. Suppose further that for such x: F(k + x) = F(k) whenever F*(k + x) = F*(k) for k ∈ N.

Let A_k and B_k be given as in (3.3); then for all continuity points x ∈ (0,1) of G

lim_{k→∞} B_k(x) = G(x),   liminf_{k→∞} A_k(1) = 1.
Proof: Let 0 < ε < D, and let x ∈ (0,1) be a continuity point of G such that (3.13) holds. Then we have for some k_0 ∈ N

(F(k + x) − F(k)) / (F*(k + x) − F*(k)) ≥ D − ε   (k ≥ k_0).

This implies that F(k + 1) − F(k) ≥ F(k + x) − F(k) > 0 for all k ≥ k_0. So each of the three denominators in the quotient below is larger than zero for k ≥ k_0; from (3.11) and (3.13) it follows that

B_k(x) = [(F(k + x) − F(k)) / (F*(k + x) − F*(k))] · [(F*(k + x) − F*(k)) / (F*(k + 1) − F*(k))] · [(F*(k + 1) − F*(k)) / (F(k + 1) − F(k))],

which tends to G(x) (k → ∞).

Let now x ∈ (0,1) be a continuity point of G such that (3.13) does not hold. By P2, F*(k + x) = F*(k) for all sufficiently large k, whence by (3.11) G(x) = 0. By the assumption on F it also follows that lim_{k→∞} B_k(x) = 0 = G(x).

Similarly, using (3.12) leads to liminf_{k→∞} A_k(1) = 1. □
Corollary 3.12. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X), and let Z ∈ X[0,1) with df G. Let F* and F be df's that satisfy the conditions of Lemma 3.11. Then {Z_n} →d Z (n → ∞).
We now prove that any given distribution on [0,1) can occur as a limit distribution for {Z_n}.
Theorem 3.13. Let N be a positive integer-valued rv with p_k := P(N = k) (k ∈ N), where p_k is positive for sufficiently large k. Let W ∈ X[0,1) be independent of N, and let (X_j) be independent copies of X such that X =d N + W. Define Z_n := max(X_1, ..., X_n) (n ∈ N). If

liminf_{k→∞} p_{k+1}/p_k = 1,   (3.14)

then {Z_n} →d W (n → ∞).
Proof: Let K ∈ N be such that p_k > 0 for all k ≥ K. Let F denote the df of X, and let z > 1. Since

F(z) = P(N + W < z) = Σ_{m=0}^{[z]−1} p_m + p_{[z]} F_W({z}),

we have B_k(x) = F_W(x) and A_k(1) = p_{k+1}/p_k for all x ∈ (0,1], k ≥ K. In view of (3.14) we see that the conditions of Theorem 3.9 are satisfied, which completes the proof.
□
In Section 3.1.3 we shall give another proof of the foregoing theorem using results on the number of maxima in a discrete sample (cf. Theorem 3.20). In fact, the formulation of the foregoing theorem has been inspired by Theorem 3.20. In Section 3.4.2, we will apply Theorem 3.13 to a couple of examples (see Examples 3.51). We now give some examples illustrating the scope of the previous results.
Examples 3.14. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X), and let A_k, B_k and C_k be given as in (3.3).
(a) Let N be a rv with P(N = n) = 1/(n(n+1)) (n ∈ N), and r ∈ N. Let X =d N/r. Then P(X = n/r) = 1/(n(n+1)) (n ∈ N), and P(X < x) = 1 − 1/⌈xr⌉ (x > 1/r). It follows that

B_k(x) = [(⌈(k+x)r⌉ − ⌈kr⌉) / (⌈(k+1)r⌉ − ⌈kr⌉)] · [⌈(k+1)r⌉ / ⌈(k+x)r⌉]   (x ∈ (0,1)).

Let m ∈ {0, ..., r−1}, and x ∈ (m/r, (m+1)/r]. Then ⌈(k+x)r⌉ = kr + m + 1, and hence B_k(x) → (m+1)/r (k → ∞). A similar calculation leads to A_k(1) → 1 as k → ∞. As a result, Theorem 3.9 yields {Z_n} →d U_r (n → ∞).

(b) Let X ~ Exp(λ), and ν := (1 − e^{−λ})^{−1}. Then A_k(x) = e^{−λ} ν F(x), B_k(x) = ν F(x), C_k(x) = e^{λ} ν F(x) for x ∈ (0,1). Consequently, both the conditions of Theorem 3.7 and those of Theorem 3.9 are violated. In fact, Jagers [32] shows that {Z_n} does not converge in distribution. So, it seems that conditions (3.5) and (3.6), or (3.10) and liminf_{k→∞} A_k(1) = 1, are not very far from being necessary for {Z_n} to converge in distribution.
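Example (a) is easy to check by simulation (our sketch; note that P(N ≥ m) = 1/m means N can be sampled as ⌊1/(1−U)⌋ with U uniform on [0,1)). With r = 2 the limit U_2 puts mass 1/2 on each of the points 0 and 1/2:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, r = 500, 4000, 2
U = rng.random(size=(reps, n))
N = np.floor(1.0 / (1.0 - U))      # P(N >= m) = 1/m, i.e. P(N = m) = 1/(m(m+1))
frac = (N / r).max(axis=1) % 1.0   # {Z_n} for X = N/r; values lie in {0, 1/2} when r = 2
print(float(np.mean(frac == 0.0)), float(np.mean(frac == 0.5)))
```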
3.1.3 Number of maxima in a discrete sample
In Section 3.1.2 (cf. Theorem 3.13), for any given rv W ∈ X[0,1) we constructed a sequence (X_j)_1^∞ of iid rv's such that {Z_n} →d W (n → ∞). In this section we use results on the number of maxima in a discrete sample to construct such a sequence in a similar way. The number of maxima in a discrete sample is considered in Brands, Steutel and Wilms [11], Eisenberg, Stengle and Strang [14], and Baryshnikov, Eisenberg and Stengle [6].
Rade ([41]) proposes the following problem. Toss n coins, probability p for heads, as follows. First toss all coins, then toss the ones that did not fall heads again, and so on, until all coins show heads. He asks what can be said about the behavior of the number K_n of coins involved in the final toss. The coins involved in the final toss have shown tails in all foregoing tosses; each coin needs a geometrically distributed number of tosses to produce heads. So K_n is equal to the number of coins that need the maximum number of tosses to produce heads.
Brands, Steutel and Wilms ([11]) consider the following generalization of this problem. Let N_1, N_2, ... be independent, positive integer-valued rv's distributed as N. The number of sample elements equal to the sample maximum is defined by

K_n := #{j ∈ {1, ..., n} : N_j = max(N_1, ..., N_n)}.   (3.15)
They study the behavior of K_n for large n, and give sufficient conditions for K_n to converge to 1 or to infinity, in distribution or with probability one. We note that K_n can be interpreted as the number of winners in a contest, e.g. a golf tournament (cf. Eisenberg, Stengle and Strang [14]). We first need some notation:

p_j := P(N = j), p_0 := P_0 := 0, P_j := Σ_{k=1}^{j} p_k   (j ∈ N).   (3.16)
Lemma 3.15. Let (N_j) be independent, positive integer-valued rv's distributed as N. Let K_n, p_j and P_j be given as in (3.15) and (3.16). Then

(i) P(K_n = k) = (n choose k) Σ_{j=1}^∞ p_j^k P_{j−1}^{n−k}   (k = 1, 2, ..., n; n ∈ N);

(ii) E K_n = n Σ_{j=1}^∞ p_j P_j^{n−1}   (n ∈ N).
Proof: (i) By symmetry and independence we have for n ∈ N, k = 1, ..., n

P(K_n = k) = (n choose k) Σ_{j=1}^∞ P(N_1 = ... = N_k = j, N_{k+1} < j, ..., N_n < j) = (n choose k) Σ_{j=1}^∞ p_j^k P_{j−1}^{n−k}.

(ii) For n ∈ N we get

E K_n = Σ_{k=1}^{n} k (n choose k) Σ_{j=1}^∞ p_j^k P_{j−1}^{n−k} = n Σ_{j=1}^∞ p_j Σ_{k=0}^{n−1} (n−1 choose k) p_j^k P_{j−1}^{n−1−k} = n Σ_{j=1}^∞ p_j P_j^{n−1}.
□
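Lemma 3.15(i) can be checked directly against simulation; the following sketch (ours) uses the geometric distribution p_j = 2^{−j}, which corresponds to the fair-coin problem discussed below, and truncates the series over j at an arbitrary point:

```python
import numpy as np
from math import comb

def p_Kn(n, k, pj, Pjm1):
    """P(K_n = k) via Lemma 3.15(i): (n choose k) * sum_j p_j^k P_{j-1}^{n-k}."""
    return comb(n, k) * sum(p ** k * q ** (n - k) for p, q in zip(pj, Pjm1))

J = 80                                                   # truncation point of the series over j
pj = [2.0 ** -j for j in range(1, J + 1)]                # geometric: p_j = 2^{-j} (fair coin)
Pjm1 = [0.0] + [1.0 - 2.0 ** -j for j in range(1, J)]   # P_{j-1} = 1 - 2^{-(j-1)}

rng = np.random.default_rng(4)
n, reps = 50, 20_000
N = rng.geometric(0.5, size=(reps, n))
Kn = np.sum(N == N.max(axis=1, keepdims=True), axis=1)  # number of maxima in each sample
for k in (1, 2, 3):
    print(float(np.mean(Kn == k)), p_Kn(n, k, pj, Pjm1))
```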
It is easy to verify that if ω(F_N) is finite, then P(K_n → ∞) = 1. Therefore, we suppose that ω(F_N) = ∞, i.e. P_j < 1 for all j ∈ N. In this case, clearly, there will be new records no matter how large the present record is; this means that in this case K_n will take the value 1 infinitely often (i.o.). This is not necessarily true for values larger than 1.
Lemma 3.16. Let (N_j) be independent, positive integer-valued rv's distributed as N. Let K_n, p_j and P_j be given as in (3.15) and (3.16), and let m ≥ 2. If Σ_{j=1}^∞ p_j^m (1 − P_{j−1})^{−m} < ∞, then lim_{n→∞} P(K_n = k) = 0 for all k ≥ m.
Proof: Since P(K_n = n) = Σ_{j=1}^∞ p_j^n, it follows that P(K_n = n) tends to zero as n → ∞. Next, for n > k ≥ m

P(K_n = k) = Σ_{j=1}^∞ p_j^k (1 − P_{j−1})^{−k} D_n(j, k),

where

D_n(j, k) := (n choose k) (1 − P_{j−1})^k P_{j−1}^{n−k} ∈ [0, 1].

By dominated convergence

lim_{n→∞} P(K_n = k) = 0,

since (n choose k) ~ n^k/k! (n → ∞) and P_{j−1}^{n−k} tends to 0 as n → ∞ exponentially fast. □
Lemma 3.17. Let (N_j) be independent, positive integer-valued rv's distributed as N. Let K_n, p_j and P_j be given as in (3.15) and (3.16), and let m ≥ 2. If Σ_{j=1}^∞ p_j^m (1 − P_{j−1})^{−m} < ∞, then P(K_n = k i.o.) = 0 for k ≥ m.
Proof: We use a variant of the Borel-Cantelli lemma due to Barndorff-Nielsen [5]: if (A_n) is a sequence of events satisfying lim_{n→∞} P(A_n) = 0 and Σ_{n=2}^∞ P(A_n ∩ A_{n−1}^c) < ∞, then P(A_n i.o.) = 0. Let M_n := max(N_1, ..., N_n) with an n-tuple (N_j)_1^n of independent copies of N, and let k ≥ m. Take A_n = {K_n = k}; Lemma 3.16 implies that lim_{n→∞} P(K_n = k) = 0. Furthermore,

Σ_{n=k}^∞ P(K_n = k, K_{n−1} ≠ k) = Σ_{n=k}^∞ P(K_n = k, K_{n−1} = k − 1)
= Σ_{n=k}^∞ Σ_{j=1}^∞ (n−1 choose k−1) p_j^k P_{j−1}^{n−k} = Σ_{j=1}^∞ p_j^k Σ_{n=k}^∞ (n−1 choose k−1) P_{j−1}^{n−k}
= Σ_{j=1}^∞ p_j^k (1 − P_{j−1})^{−k} ≤ Σ_{j=1}^∞ p_j^m (1 − P_{j−1})^{−m} < ∞.
□
It is not hard to see that we have the following corollary.
Corollary 3.18. Let (N_j) be independent, positive integer-valued rv's distributed as N. Let K_n, p_j and P_j be given as in (3.15) and (3.16). If Σ_{j=1}^∞ p_j^2 (1 − P_{j−1})^{−2} < ∞, then P(K_n = 2 i.o.) = 0, and P(lim_{n→∞} K_n = 1) = 1.
In the following theorem we give another sufficient condition for K_n to converge in distribution to 1.
Theorem 3.19. Let (N_j) be independent, positive integer-valued rv's distributed as N. Let K_n, p_j and P_j be given as in (3.15) and (3.16), and suppose that p_j > 0 for sufficiently large j. If

lim_{j→∞} p_{j+1}/p_j = 1,   (3.17)

then lim_{n→∞} P(K_n = 1) = 1.

Proof: Analogously to (3.4) we have

n p_j P_{j−1}^{n−1} ≤ P_j^n − P_{j−1}^n ≤ n p_j P_j^{n−1}   (j, n ∈ N).   (3.18)

Since K_n ≥ 1, we have E K_n ≥ 1. Further, from (3.18) we obtain for sufficiently large m

0 ≤ E K_n − n Σ_{j=1}^{m−1} p_j P_j^{n−1} = n Σ_{j=m}^∞ (p_j/p_{j+1}) p_{j+1} P_j^{n−1}
≤ n (1 + ε(m)) Σ_{j=m}^∞ p_{j+1} P_j^{n−1} ≤ (1 + ε(m)) Σ_{j=m}^∞ (P_{j+1}^n − P_j^n)
= (1 + ε(m))(1 − P_m^n),

where 1 + ε(m) := sup_{j≥m} (p_j/p_{j+1}). By (3.17) we know that ε(m) → 0 as m → ∞, whereas 1 − P_m^n → 1 as n → ∞ for each fixed m. Furthermore, n Σ_{j=1}^{m−1} p_j P_j^{n−1} tends to zero for each fixed m since P_j^n → 0 as n → ∞ exponentially fast. Hence limsup_{n→∞} E K_n ≤ 1, and since E K_n ≥ 1, we have lim_{n→∞} E K_n = 1.
In addition, clearly,

E K_n + P(K_n = 1) = Σ_{k=1}^{n} k P(K_n = k) + P(K_n = 1) ≥ 2 Σ_{k=1}^{n} P(K_n = k) = 2.

Hence, 2 − E K_n ≤ P(K_n = 1) ≤ 1. So, P(K_n = 1) → 1 as n → ∞.
□
We remark that Eisenberg, Stengle and Strang ([14]) prove the following: if p_{j+1}/p_j → 0 as j → ∞, then K_n does not converge in distribution as n → ∞. In the coin-tossing case proposed by Rade ([41]), which is equivalent to the geometric case, we have p_j = p(1 − p)^{j−1}, and p_{j+1}/p_j = 1 − p (j ∈ N). Brands, Steutel and Wilms ([11], sect. 3) show that in this case the rv K_n does not converge in distribution as n → ∞. They also prove that this rv K_n is very close to the logarithmic series distribution, i.e.

P(K_n = k) ≈ −p^k / (k log(1 − p))   (n → ∞).
Finally, we remark that in an unpublished paper Baryshnikov, Eisenberg and Stengle ([6]) show that, for a positive integer-valued rv N with p_j and P_j as in (3.16), lim_{n→∞} P(K_n = 1) exists iff lim_{j→∞} p_j/(1 − P_j) = 0; moreover, if lim_{n→∞} P(K_n = 1) exists, then lim_{n→∞} P(K_n = 1) = 1.
We now come to the main result of this section, which, in fact, interprets the construction of Theorem 3.13.
Theorem 3.20. Let N be a positive integer-valued rv with p_j and P_j as in (3.16), and suppose that p_j > 0 for sufficiently large j. In addition, let (3.17) hold. Let W ∈ X[0,1) be independent of N, and further let (X_j) be independent copies of X such that X =d N + W. Define Z_n := max(X_1, ..., X_n) (n ∈ N). Then {Z_n} →d W (n → ∞).
Proof: We have

{Z_n} = {max(N_1 + W_1, ..., N_n + W_n)} =d max(W_1, ..., W_{K_n}),

where N_j = [X_j], K_n is given as in (3.15), and K_n is independent of W_1, W_2, ..., which are independent copies of W. It follows that

F_{{Z_n}}(x) = Σ_{k=1}^{n} P(K_n = k) F_W^k(x) = P_{K_n}(F_W(x)),   (3.19)

where P_{K_n} denotes the probability generating function of K_n.

If we would have F_{K_n} →d F_K, then we would get F_{{Z_n}}(x) → P_K(F_W(x)); in particular, since (3.17) implies F_{K_n} →d 1_{[1,∞)} (cf. Theorem 3.19), we have

F_{{Z_n}}(x) → F_W(x)   (n → ∞),

i.e., {Z_n} →d W (n → ∞). □
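The construction of Theorems 3.13 and 3.20 can be watched at work numerically. In the sketch below (ours), N with P(N ≥ m) = 1/m satisfies the ratio condition on p_{k+1}/p_k, and the target W = U² (df G(x) = √x on [0,1)) is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 500, 4000
N = np.floor(1.0 / (1.0 - rng.random(size=(reps, n))))  # P(N >= m) = 1/m: p_{k+1}/p_k -> 1
W = rng.random(size=(reps, n)) ** 2                     # target W = U^2, df G(x) = sqrt(x)
frac = (N + W).max(axis=1) % 1.0                        # {Z_n} for X = N + W

# G({Z_n}) should be nearly uniform on [0,1): Kolmogorov distance of sqrt(frac) from uniform
g = np.sort(np.sqrt(frac))
grid = np.arange(1, reps + 1) / reps
D = float(max(np.max(grid - g), np.max(g - grid + 1.0 / reps)))
print(D)
```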
52 Chapter 3, Maxima
This theorem leads to the following (known) result: let V ~ Exp(λ) for some λ > 0, and let X = V + 1. We know that X = N + {X} with N := [X] and {X} independent, and P(N = j) = (1 − e^{−λ}) e^{−λ(j−1)} (j ∈ N). Using (3.19) and the fact that, for this rv N, K_n does not converge in distribution, it follows that {Z_n} does not converge in distribution (see also Example 3.69(b)). Brands, Steutel and Wilms [11] show that, for moderate values of λ, the distribution of {Z_n} is not very far from being uniform if n is large.
Examples 3.21. Let N be a rv such that p_j := P(N = j) = (1 + log j)^{−1} − (1 + log(j+1))^{−1}, or such that p_j = c j^{−α} (α > 1, c > 0, j ∈ N). Then it is easy to verify that Σ_{j=1}^∞ p_j^2 (1 − P_{j−1})^{−2} < ∞. By Corollary 3.18 we have, in both cases, P(lim_{n→∞} K_n = 1) = 1 with K_n as in (3.15).
3.2 Asymptotic uniformity in distribution (mod 1)
Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X). In this section we give necessary and sufficient conditions on the df F for {Z_n} →d U (n → ∞), i.e. for ({Z_n}) to be asymptotically uniform in distribution (mod 1). This leads to the following sufficient condition: if the hazard rate of F tends monotonically to zero, then {Z_n} →d U (n → ∞). These results are based on a joint paper with Brands ([74]). Here, in fact, we use a more general approach to come to the same rather weak sufficient conditions as formulated in Wilms and Brands ([74]).

Specializing Z =d U in Theorem 3.3 we obtain the following necessary and sufficient conditions on F for {Z_n} →d U (n → ∞).
Theorem 3.22. Let Z_n := max(X_1, ..., X_n) (n ∈ N) with (X_j) ∈ A(F,X). Let H be the cumulative hazard function of F. Then {Z_n} →d U (n → ∞) iff

lim_{n→∞} ∫_{−log n}^∞ exp(2πik H^−(x + log n)) dΛ(x) = 0   (k ∈ Z_0).   (3.20)
In Section 3.1.2 we formulated sufficient conditions in terms of the generalized inverse of the cumulative hazard function of $F$ for $\{Z_n\} \xrightarrow{d} Z$ for an arbitrary rv $Z$; by taking $Z = U$ we have such conditions for $\{Z_n\} \xrightarrow{d} U$
(see also Remark 3.10(i)). The following theorem is obtained by using Theorem 3.9 stated in Section 3.1.2.
Theorem 3.23.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Let $F$ have a derivative $f$ that is positive in some neighborhood of $\infty$.
(i) If $f$ is nonincreasing in some neighborhood of $\infty$, and $\lim_{t\to\infty} f(t+1)/f(t) = 1$, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
(ii) If $f'$ exists in some neighborhood of $\infty$, and $\lim_{t\to\infty} f'(t)/f(t) = 0$, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
Proof: Let $B_k(x) := \frac{F(k+x) - F(k)}{F(k+1) - F(k)}$ $(x \in (0,1))$, and $A_k := \frac{F(k+1) - F(k)}{F(k) - F(k-1)}$ $(k \in \mathbb{Z})$. On account of Theorem 3.9 it suffices to show that

$$\lim_{k\to\infty} B_k(x) = x \quad (x \in (0,1)), \qquad \liminf_{k\to\infty} A_k = 1.$$

(i) By the mean value theorem, $B_k(x) = x\,\frac{f(k+\theta_1 x)}{f(k+\theta_2)}$ for some $\theta_1, \theta_2 \in (0,1)$, whence for sufficiently large $k$ (since $f$ is ultimately nonincreasing)

$$\frac{f(k+1)}{f(k)} \le \frac{f(k+\theta_1 x)}{f(k+\theta_2)} \le \frac{f(k)}{f(k+1)}.$$

From the assumption it follows that $B_k(x) \to x$ $(k \to \infty)$. Similarly, we get $\lim_{k\to\infty} A_k = 1$.
(ii) Again by the mean value theorem we have

$$B_k(x) = x\exp\bigl(\log f(k+\theta_1 x) - \log f(k+\theta_2)\bigr) = x\exp\bigl((\theta_1 x - \theta_2)(\log \circ f)'(k+\theta_3)\bigr)$$

for some $\theta_1, \theta_2, \theta_3 \in (0,1)$. From the assumption it follows that $B_k(x) \to x$ as $k \to \infty$. Analogously, we find $\lim_{k\to\infty} A_k = 1$. $\Box$
To get further insight into the tail behavior of distributions for which $\{Z_n\} \xrightarrow{d} U$, we try to give more easily verifiable sufficient conditions on the function $H$ for (3.20) to hold. To this end we first prove some auxiliary results.
Lemma 3.24.
Let $\gamma : \mathbb{R} \to \mathbb{C}$ be bounded and measurable. Then the following statements are equivalent.
(a) $\lim_{t\to\infty} \int_0^y \gamma(x+t)\,dx = 0$ for all $y > 0$.
(b) $\lim_{t\to\infty} \int_0^{\infty} \gamma(x+t)e^{-x}\,dx = 0$.
(c) $\lim_{t\to\infty} \int_{-\infty}^{\infty} \gamma(x+t)f(x)\,dx = 0$ for all measurable $f : \mathbb{R} \to \mathbb{R}$ with $\int_{-\infty}^{\infty} |f(x)|\,dx < \infty$.
Proof: (c) $\Rightarrow$ (a): Take $f(x) := 1_{(0,y)}(x)$.
(b) $\Rightarrow$ (c): We obtain for every $p, q \in \mathbb{R}$, $p < q$,

$$\int_p^q \gamma(x+t)e^{-x}\,dx = \int_p^{\infty} \gamma(x+t)e^{-x}\,dx - \int_q^{\infty} \gamma(x+t)e^{-x}\,dx$$
$$= e^{-p}\int_0^{\infty} \gamma(x+t+p)e^{-x}\,dx - e^{-q}\int_0^{\infty} \gamma(x+t+q)e^{-x}\,dx,$$

whence $\lim_{t\to\infty} \int_p^q \gamma(x+t)e^{-x}\,dx = 0$. Hence for every step function $g$ (constant on intervals) and for every $p, q \in \mathbb{R}$ with $p < q$ it follows that

$$\lim_{t\to\infty} \int_p^q \gamma(x+t)g(x)e^{-x}\,dx = 0. \qquad (3.21)$$

Next, let $\delta > 0$; take $a, b \in \mathbb{R}$, $a < b$, such that

$$\int_{-\infty}^{a} |f(x)|\,dx + \int_b^{\infty} |f(x)|\,dx < \delta\bigl(\sup_{x\in\mathbb{R}} |\gamma(x)|\bigr)^{-1}. \qquad (3.22)$$

For every measurable function $f$ with $\int_{-\infty}^{\infty} |f(x)|\,dx < \infty$ there is a step function $g$ such that

$$\int_a^b |f(x)e^x - g(x)|\,dx < \delta e^a \bigl(\sup_{x\in\mathbb{R}} |\gamma(x)|\bigr)^{-1}.$$
Hence, by (3.21) and (3.22) we get for sufficiently large $t$

$$\Bigl|\int_{-\infty}^{\infty} \gamma(x+t)f(x)\,dx\Bigr| \le \sup_{x\in\mathbb{R}} |\gamma(x)|\Bigl(\int_{-\infty}^{a} |f(x)|\,dx + \int_b^{\infty} |f(x)|\,dx\Bigr)$$
$$+ \Bigl|\int_a^b \gamma(x+t)e^{-x}\bigl(f(x)e^x - g(x)\bigr)\,dx\Bigr| + \Bigl|\int_a^b \gamma(x+t)g(x)e^{-x}\,dx\Bigr| \le 3\delta.$$

So, $\lim_{t\to\infty} \int_{-\infty}^{\infty} \gamma(x+t)f(x)\,dx = 0$ for every measurable function $f$ with $\int_{-\infty}^{\infty} |f(x)|\,dx < \infty$.
(a) $\Rightarrow$ (b): We have

$$\int_0^{\infty} \gamma(x+t)e^{-x}\,dx = \int_0^{\infty} \Bigl(\int_x^{\infty} e^{-y}\,dy\Bigr)\gamma(x+t)\,dx = \int_0^{\infty} \Bigl(\int_0^y \gamma(x+t)\,dx\Bigr)e^{-y}\,dy,$$

and the proof is completed by dominated convergence. Finally, it is clear that the existence and finiteness of $\lim_{t\to\infty} \int_0^t \gamma(x)\,dx$ implies that

$$\lim_{t\to\infty} \int_t^{t+y} \gamma(x)\,dx = 0 \qquad (y > 0),$$

whence $\lim_{t\to\infty} \int_0^y \gamma(x+t)\,dx = 0$ for all $y > 0$. $\Box$
Corollary 3.25.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Let $H$ be the cumulative hazard function of $F$.
(i) Suppose that, for all $k \in \mathbb{Z}_0$, assertion (a) (or, equivalently, assertion (b)) of Lemma 3.24 holds with $\gamma := \beta_k \circ H^-$. Then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
(ii) If

$$\lim_{t\to\infty} \int_0^t \beta_k(H^-(s))\,ds \qquad (3.23)$$

exists and is finite for all $k \in \mathbb{Z}_0$, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
Proof: (i) Assertion (a) (or (b)) is equivalent to (c) of Lemma 3.24 with $f = \Lambda'$. Then Theorem 3.22 completes the proof.
(ii) This is also a consequence of Lemma 3.24 and Theorem 3.22. $\Box$
Lemma 3.24(a) can be interpreted as follows: if $\{H^-(yU + t)\} \xrightarrow{d} U$ as $t \to \infty$ for all $y > 0$, or if $\{H^-(V + t)\} \xrightarrow{d} U$ with $V \sim \mathrm{Exp}(1)$, then it turns out that $\{H^-(Y + t)\} \xrightarrow{d} U$ for all rv's $Y$ that have a density. We note that for $H^-(x) = \frac12 x^2 - \frac12\cos x^2$ $(x \ge 0)$ condition (3.23) fails, whereas the condition $\lim_{t\to\infty} \int_0^{\infty} \beta_k(H^-(x+t))e^{-x}\,dx = 0$ holds (cf. Brands [10]).
The next lemma will be used in the proof of the main theorem of this section.
Lemma 3.26.
Let $F$ be a df having a hazard rate $h$ whose derivative $h'$ exists on $[x_0, \infty)$ for some $x_0 \in \mathbb{R}$. If

$$\lim_{x\to\infty} h(x) = 0, \qquad \int_{x_0}^{\infty} |h'(x)|\,dx < \infty, \qquad (3.24)$$

then (3.23) exists and is finite for all $k \in \mathbb{Z}_0$.
Proof: Let $H$ be the cumulative hazard function of $F$. By substituting $s = H(x)$ we have for $k \in \mathbb{Z}_0$

$$\int_0^t \beta_k(H^-(s))\,ds = \int_{H^-(0)}^{H^-(t)} h(x)\beta_k(x)\,dx.$$

Integrating by parts we find for $B > A \ge x_0$

$$\Bigl|\int_A^B h(x)\beta_k(x)\,dx\Bigr| \le \Bigl|h(x)\frac{\beta_k(x)}{2\pi i k}\Bigr|_A^B + \Bigl|\int_A^B h'(x)\frac{\beta_k(x)}{2\pi i k}\,dx\Bigr|$$
$$\le \frac{1}{2\pi |k|}\Bigl(|h(A)| + |h(B)| + \int_A^B |h'(x)|\,dx\Bigr). \qquad (3.25)$$

By (3.24) the right-hand side of (3.25) tends to 0 as $A, B \to \infty$. So $\lim_{t\to\infty}\int_0^t \beta_k(H^-(s))\,ds$ exists and is finite for all $k \in \mathbb{Z}_0$. $\Box$
Next, we give the main theorem.
Theorem 3.27.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Let $F$ have a hazard rate $h(x)$ that tends monotonically to 0 as $x \to \infty$, and suppose that its derivative $h'(x)$ exists for sufficiently large $x$. Then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
Proof: The proof follows from Corollary 3.25(ii) and Lemma 3.26. $\Box$
We note that in Chapter 4 we give rates for the convergence of $F_{\{Z_n\}}(x) \to x$ as $n \to \infty$ (cf. Corollary 4.22).
Corollary 3.28.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$ be such that $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$. Let $Z'_n := \max(X'_1,\ldots,X'_n)$ with $(X'_j) \in \mathcal{A}(G, X')$.
(i) If $X' \stackrel{d}{=} X + \alpha$ for some $\alpha \in \mathbb{R}$, then $\{Z'_n\} \xrightarrow{d} U$ $(n \to \infty)$.
(ii) Suppose $F$ satisfies the conditions of Theorem 3.27. If $X' \stackrel{d}{=} \alpha X$ for some $\alpha > 0$, then $\{Z'_n\} \xrightarrow{d} U$ $(n \to \infty)$.

Proof: (i) Since $\{Z'_n\} \stackrel{d}{=} \{Z_n + \alpha\}$, we have $c_{\{Z'_n\}}(k) = e^{2\pi i k\alpha} c_{\{Z_n\}}(k) \to 0$ $(n \to \infty)$ for all $k \in \mathbb{Z}_0$. So $\{Z'_n\} \xrightarrow{d} U$ $(n \to \infty)$.
(ii) Let $h$ and $\tilde{h}$ denote the hazard rates of $F$ and $G$, respectively. Clearly, $\tilde{h}(x) = \alpha^{-1}h(\alpha^{-1}x)$ $(x \in \mathbb{R})$; hence the assumptions of Theorem 3.27 hold for $\tilde{h}$. So $\{Z'_n\} \xrightarrow{d} U$ $(n \to \infty)$. $\Box$
It may happen that $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$, but $\{Z'_n\}$ does not converge to $U$ as $n \to \infty$: Let $X \stackrel{d}{=} a^{-1}X'_1$ and $Z'_n := \max(X'_1,\ldots,X'_n)$ for some $a > 0$ and some sequence $(X'_j)$ of iid positive integer-valued rv's; in Example 3.29(e) we show $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$, while $\{Z'_n\} = \{aZ_n\} = 0$ for all $n \in \mathbb{N}$. We further give some explicit examples to illustrate the scope of the results stated above.
Examples 3.29. Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$.
(a) For the Pareto df $F(x) = 1 - x^{-\beta}$ $(x > 1,\ \beta > 0)$ the hazard rate is given by $h(x) = \beta/x$. Clearly, $h$ satisfies the assumptions of Theorem 3.27, and thus $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
For an alternative proof, it is easy to verify that the conditions of part (i) and part (ii) of Theorem 3.23 are satisfied.
(b) For $F(x) = 1 - \exp(-x^v)$ $(x > 0,\ v > 0)$ with hazard rate $h(x) = v x^{v-1}$, we find that, for $0 < v < 1$, $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$ (cf. Theorem 3.27). The case $v \ge 1$ will be considered in Example 3.34(a).
(c) The df's $F(x) = 1 - \frac{1}{\log x}$ $(x > e)$, $F(x) = 1 - \frac{1}{\log\log x}$ $(x > e^e)$, etc. satisfy the conditions of Theorem 3.27; so $\{Z_n\} \xrightarrow{d} U$ as $n \to \infty$ (compare Theorem 3.58(i)).
(d) Let $F(x) = \exp(-x^{-\alpha})$ $(x, \alpha > 0)$. Theorem 3.23 yields $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
(e) Let $X \stackrel{d}{=} \alpha^{-1}N$, where $N$ is a rv with $P(N = n) = \bigl(\frac{1}{1+\log n} - \frac{1}{1+\log(n+1)}\bigr)$ $(n \in \mathbb{N})$, and $\alpha > 0$. If $\alpha \in \mathbb{Q}_+$, i.e. $\alpha = p/q$ for some $p, q \in \mathbb{N}$, then $X$, and hence $Z_n$, is lattice on $\frac{1}{p}\mathbb{N}$. Assume now that $\alpha \in \mathbb{R}_+ \setminus \mathbb{Q}$; then we will show that $\{Z_n\} \xrightarrow{d} U$. However, $P(\{\alpha Z_n\} = 0) = 1$.
To prove that $\{Z_n\} \xrightarrow{d} U$, on account of Corollary 3.25(ii) it suffices to show that $\lim_{A\to\infty}\int_0^A \beta_k(H^-(s))\,ds$ exists and is finite for all $k \in \mathbb{Z}_0$, where $H^-$ denotes the inverse of the cumulative hazard function of $F_X$. Obviously, we have $H^-(\log y) = n/\alpha$ if $\log n < y - 1 \le \log(n+1)$. Let $B \in \mathbb{N}$ and $A := \log(1+\log B)$. Then
$$\int_0^A \beta_k(H^-(s))\,ds = \sum_{m=1}^{B-1} p_m\,\beta_k(m/\alpha),$$

where $p_m := \log\frac{1+\log(m+1)}{1+\log m}$. Define $z := \beta_k(1/\alpha)$, so that $\beta_k(m/\alpha) = z^m$ (note $z \ne 1$ since $\alpha$ is irrational), and $q_m := \sum_{j=0}^{m} z^j$ for $m \in \mathbb{N}$. Then $|q_m| = |(z^{m+1}-1)/(z-1)| \le 2/|z-1|$ $(m \in \mathbb{N})$. Hence we find for $a, b \in \mathbb{N}$, $a < b$,

$$\Bigl|\sum_{m=a}^{b} p_m z^m\Bigr| = \Bigl|\sum_{m=a}^{b} p_m(q_m - q_{m-1})\Bigr| = \Bigl|p_{b+1}q_b - p_a q_{a-1} + \sum_{m=a}^{b} q_m(p_m - p_{m+1})\Bigr| \to 0$$

as $a, b \to \infty$. So $\lim_{b\to\infty}\sum_{m=1}^{b} p_m z^m$ exists, and so $\lim_{A\to\infty}\int_0^A \beta_k(H^-(s))\,ds$ exists and is finite for all $k \in \mathbb{Z}_0$.
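The summation-by-parts argument in (e) can be checked numerically; a small sketch of ours (the choices $\alpha = \sqrt2$ and $k = 1$ are arbitrary):

```python
import cmath
import math

alpha = math.sqrt(2.0)                   # irrational scale
k = 1
z = cmath.exp(2j * math.pi * k / alpha)  # z = beta_k(1/alpha), z != 1

def p(m):
    # p_m = log((1 + log(m+1)) / (1 + log m)): the length of the interval
    # on which H^-(s) equals m/alpha
    return math.log((1.0 + math.log(m + 1)) / (1.0 + math.log(m)))

def partial_sum(b):
    return sum(p(m) * z ** m for m in range(1, b + 1))

s1, s2 = partial_sum(2000), partial_sum(4000)
# the bound |sum_{m=a}^b p_m z^m| <= 4 p_a / |z - 1| makes the tail arbitrarily small
```

The partial sums form a Cauchy sequence, as the Abel-summation bound predicts.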
3.3 Maxima do not converge
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. In this section, we give sufficient conditions on $F$ for $\{Z_n\}$ not to converge in distribution as $n \to \infty$. To formulate these conditions we use results from regular variation theory. We also need the following result.
Lemma 3.30.
Let $(b_n)$ be a sequence of real numbers such that $b_n \to \infty$ $(n \to \infty)$. If $\lim_{n\to\infty}(b_{n+1} - b_n) = 0$, then the sequence $(\{b_n\})$ is dense in $[0,1)$.

Proof: Let $s \in [0,1)$ and $0 < \delta < 1 - s$. Since $\lim_{n\to\infty}(b_{n+1} - b_n) = 0$, we can select $N \in \mathbb{N}$ such that $b_{n+1} - b_n < \delta$ for all $n \ge N$. Define $b := \lceil b_N \rceil + s$. Since $b_N \le b$ there is an integer $n \in \mathbb{N}$ such that $n \ge N$ and $b_n \le b < b_{n+1}$. Hence

$$[b] + s = b < b_{n+1} < b_n + \delta \le b + \delta = [b] + s + \delta < [b] + 1,$$

and so $s < \{b_{n+1}\} < s + \delta$. So $(\{b_n\})$ is dense in $[0,1)$. $\Box$
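As an illustration (a sketch of ours, not from the thesis): $b_n = \sqrt{n}$ satisfies $b_n \to \infty$ and $b_{n+1} - b_n \to 0$, so by the lemma $(\{\sqrt{n}\})$ is dense in $[0,1)$:

```python
import math

# b_n = sqrt(n): b_{n+1} - b_n = 1/(sqrt(n+1) + sqrt(n)) -> 0 as n -> infinity
fracs = sorted(math.sqrt(n) % 1.0 for n in range(10**4, 4 * 10**4))
gaps = [b - a for a, b in zip(fracs, fracs[1:])]
gaps.append(fracs[0] + 1.0 - fracs[-1])  # gap across the wrap-around of the circle
max_gap = max(gaps)                      # every point of [0,1) is approached
```

Over this range the consecutive increments are at most $1/(2\cdot 100)$, so no subinterval of length $0.01$ is missed.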
To prove another auxiliary result, we use a special class of functions from the theory of regular variation, introduced by de Haan (see Bingham et al. [9] and also Section 3.4.4).
Definition 3.31.
A nondecreasing function $L : (x_1, \infty) \to \mathbb{R}$ belongs to the class $\Pi$ (notation $L \in \Pi$) if there exists a function $g : (x_1, \infty) \to (0, \infty)$, an auxiliary function of $L$, such that

$$\lim_{t\to\infty} \frac{L(tx) - L(t)}{g(t)} = \log x \qquad (x > 0).$$
Lemma 3.32.
Let $L \in \Pi$ with auxiliary function $g$, and such that $L(n) \to \infty$ as $n \to \infty$. Suppose $\lim_{t\to\infty} g(t) = a$ for some $a \ge 0$. Then $(\{L(n)\})$ is dense in $[0,1)$.

Proof: Since $L(n) \to \infty$ $(n \to \infty)$, on account of Lemma 3.30 it suffices to show that $(L(n+1) - L(n)) \to 0$ as $n \to \infty$. Since $L \in \Pi$, we have $\lim_{t\to\infty}(L(ty) - L(t))/g(t) = \log y$, whence

$$\lim_{n\to\infty}(L(ny) - L(n)) = a\log y \qquad (y > 0).$$
Let $\varepsilon > 0$. Since $L$ is nondecreasing, we get for sufficiently large $n$

$$0 \le L(n+1) - L(n) \le L(n(1+\varepsilon)) - L(n);$$

the right-hand side tends to $a\log(1+\varepsilon)$ as $n \to \infty$. So, $(L(n+1) - L(n)) \to 0$ as $n \to \infty$. $\Box$
Theorem 3.33.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Suppose that $(1/(1-F))^- \in \Pi$ with auxiliary function $g$, and $\lim_{t\to\infty} g(t) = a$ for some $a \ge 0$. Then $\{Z_n\}$ does not converge in distribution as $n \to \infty$.
Proof: Let $L := (1/(1-F))^-$. As in the proof of Lemma 3.32 we have

$$\lim_{n\to\infty}(L(ny) - L(n)) = a\log y \qquad (y > 0). \qquad (3.26)$$

Let $H(x) := -\log(1 - F(x))$ $(x \in \mathbb{R})$, the cumulative hazard function of $F$. By Theorem 3.3 it suffices to show that, with $\Phi_1(y) := \exp(-1/y)$ $(y > 0)$,

$$\int_{1/n}^{\infty} \beta_k(H^-(\log ny))\,d\Phi_1(y) \qquad (3.27)$$

does not converge as $n \to \infty$. From (3.26) it follows by dominated convergence that

$$\lim_{n\to\infty}\int_0^{\infty} \beta_k(L(ny) - L(n))\,d\Phi_1(y) = \int_0^{\infty} \beta_k(a\log y)\,d\Phi_1(y),$$

whence $\bigl(H^-(\log ny) = L(ny)$ for $n \in \mathbb{N}$, $y > 1/n\bigr)$

$$\int_{1/n}^{\infty} \beta_k(H^-(\log ny))\,d\Phi_1(y) = \beta_k(L(n))\int_{1/n}^{\infty} \beta_k(L(ny) - L(n))\,d\Phi_1(y)$$
$$\sim \beta_k(L(n))\int_0^{\infty} \beta_k(a\log y)\,d\Phi_1(y) = \beta_k(L(n))\,\varphi_1(2\pi k a) \qquad (n \to \infty).$$

Clearly, $\varphi_1(2\pi k a) \ne 0$ for all $k \in \mathbb{Z}$ since $\varphi_1(2\pi k a) = \Gamma(1 - 2\pi i k a)$, where $\Gamma$ is the Gamma function. Lemma 3.32 yields that $(\{L(n)\})$ is dense in $[0,1)$; hence $\beta_k(L(n)) = \beta_k(\{L(n)\})$ does not converge, and so the sequence in (3.27) does not converge as $n \to \infty$. $\Box$
Examples 3.34. Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$.
(a) Let $F(x) = 1 - \exp(-x^v)$ $(x, v > 0)$. Then $(1/(1-F))^-(x) = (\log x)^{1/v}$, and hence $(1/(1-F))^- \in \Pi$ with auxiliary function $g(t) = \frac{1}{v}(\log t)^{1/v - 1}$. If $v \ge 1$, then $\lim_{t\to\infty} g(t)$ exists, and so, by Theorem 3.33, $\{Z_n\}$ does not converge as $n \to \infty$.
(b) Let $F = \Lambda$; then $1 - F(x) \sim e^{-x}$ $(x \to \infty)$, and $(1/(1-F))^-(x) \sim \log x$ $(x \to \infty)$. As a result, $(1/(1-F))^- \in \Pi$ with auxiliary function $g(t) = 1$. Then Theorem 3.33 implies that $\{Z_n\}$ does not converge as $n \to \infty$.
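The non-convergence in (a) with $v = 1$ (iid Exp(1)) is driven by the phase $\{\log n\}$, the factor $\beta_k(L(n))$ in the proof of Theorem 3.33. A hedged numerical sketch of ours compares two values of $n$ whose logarithms differ by $1/2$ modulo 1; the persistent oscillation is tiny, so the distribution is nevertheless close to uniform:

```python
import math

def frac_cdf_expmax(n, x, j_max=200):
    # exact df of {Z_n} for iid Exp(1): F_{Z_n} = F^n with F(y) = 1 - e^{-y}
    F = lambda y: 1.0 - math.exp(-y) if y > 0.0 else 0.0
    return sum(F(j + x) ** n - F(j) ** n for j in range(j_max))

# logs differ by 1/2 (mod 1), so the oscillating term has roughly opposite sign
n1, n2 = round(math.e ** 9), round(math.e ** 9.5)
xs = [i / 20.0 for i in range(1, 20)]
osc = max(abs(frac_cdf_expmax(n1, x) - frac_cdf_expmax(n2, x)) for x in xs)
```

Here `osc` stays bounded away from 0 however large the two values of $n$ are taken, while both df's remain within about $10^{-4}$ of the uniform df.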
3.4 Relation with extreme value theory
In the study of fractional parts of maxima of iid rv's the following question arises: is there a relation with classical extreme value theory?
In this section we give an answer to this question. To this end we assume that $F$ belongs to the domain of attraction of an extreme value distribution with norming constants $a_n$ and $b_n$ (see Definition 3.35 below). As in the foregoing sections, we will investigate what limit distributions are possible for $\{Z_n\}$ as $n \to \infty$. These results are based on Wilms ([73]).
We now give a brief overview of the contents of this section. In Section 3.4.1 we state some results from classical extreme value theory. In Section 3.4.2 we use the construction method from Theorem 3.13 for any given rv $W \in \mathcal{F}[0,1)$ to construct a df $F$ in the domain of attraction of an extreme value df such that $\{Z_n\} \xrightarrow{d} W$ as $n \to \infty$. In Section 3.4.3 we give additional sufficient conditions on $F$ with norming constants $a_n$ such that $a_n \to \infty$ for $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$, and in Section 3.4.4, on $F$ with bounded norming constants $a_n$, for $\{Z_n\}$ not to converge in distribution. In Section 3.4.5, we give conclusions, describing the (lack of) connection between the most important results of Section 3.4 and the foregoing sections.
3.4.1 Preliminaries
Extreme value theory is an important probabilistic theory, which provides stochastic models for random quantities such as high tide levels, the life span of humans, or athletic records (see Aarssen and de Haan [1], Ballerini and Resnick [4]). Here we deal with sequences of maxima of a sequence of iid rv's.
In extreme value theory, criteria for distributions to belong to the domains of attraction of a limit distribution are best understood within the framework of the theory of regular variation, which has been developed
by Karamata ([33]) (see Bingham et al. ([9])). The monograph by de Haan ([26]) exploits the relationship of extreme value theory and regular variation. Resnick ([51]) exposes the aspects of regular variation and point processes, which are essential for a proper understanding of extreme value theory. Extreme value results are mostly phrased in terms of maxima; Galambos ([19]) phrases theorems for minima, although one can transform results on maxima into minima by $\min(x,y) = -\max(-x,-y)$ $(x, y \in \mathbb{R})$.
We now present the class of possible limit distributions for normed maxima. Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$, where $(X_j)_1^{\infty}$ is a sequence of iid rv's with df $F$, and possibly finite right endpoint. The definition of the domain of attraction is given as follows.
Definition 3.35.
A df $F$ is said to belong to the domain of attraction of a nondegenerate df $G$ (notation $F \in \mathcal{D}(G)$) if there exist sequences of real numbers $(a_n)_1^{\infty}$ $(a_n > 0)$ and $(b_n)_1^{\infty}$ such that

$$F^n(a_n x + b_n) \xrightarrow{w} G(x) \qquad (n \to \infty).$$

The elements of $(a_n)$ and $(b_n)$ are called norming constants. The following lemma, due to Gnedenko ([21]), tells us to what extent we may change the norming constants.
Proposition 3.36. [Feller ([18], lemma 1, p. 253)]
Let $G$ and $G^*$ be nondegenerate df's. Let $(a_n)$ $(a_n > 0)$, $(b_n)$, $(\tilde{a}_n)$ $(\tilde{a}_n > 0)$, and $(\tilde{b}_n)$ be sequences of real numbers. If for a sequence $(F_n)$ of df's

$$F_n(a_n x + b_n) \xrightarrow{w} G(x), \qquad F_n(\tilde{a}_n x + \tilde{b}_n) \xrightarrow{w} G^*(x),$$

then there are $a > 0$ and $b \in \mathbb{R}$ such that

$$\frac{\tilde{a}_n}{a_n} \to a, \qquad \frac{\tilde{b}_n - b_n}{a_n} \to b \qquad (n \to \infty)$$

and

$$G^*(x) = G(ax + b) \qquad (x \in \mathbb{R}). \qquad (3.28)$$

If for some $a > 0$, $b \in \mathbb{R}$ (3.28) holds, we say that $G^*$ and $G$ are of the same type; they only differ by location and scale parameters. We now give a complete description of the class of limit laws.
Proposition 3.37. [de Haan [26], thm 2.2.1]
Let $F \in \mathcal{D}(G)$. Then $G$ is of the type of one of the following df's:

$$\Phi_{\alpha}(x) = \begin{cases} 0 & \text{if } x \le 0 \\ \exp(-x^{-\alpha}) & \text{if } x > 0 \end{cases} \qquad
\Psi_{\alpha}(x) = \begin{cases} \exp(-(-x)^{\alpha}) & \text{if } x < 0 \\ 1 & \text{if } x \ge 0 \end{cases} \qquad
\Lambda(x) = \exp(-e^{-x}) \quad (x \in \mathbb{R})$$

for some $\alpha > 0$. We refer to $\Phi_{\alpha}$, $\Psi_{\alpha}$ and $\Lambda$ as the extreme value distributions.

These df's are known as the Fréchet type, the Weibull type and the Gumbel type, respectively. We note that the limit distributions of the maxima can also be expressed as $G_{\gamma}(x) = \exp(-(1+\gamma x)^{-1/\gamma})$, which is of type $\Psi_{-1/\gamma}$ if $\gamma < 0$, of type $\Lambda$ if $\gamma = 0$, and of type $\Phi_{1/\gamma}$ if $\gamma > 0$ (for more information see e.g. de Haan [26], sect. 2.6).
In the study of fractional parts of maxima of iid rv's, a df in the domain of attraction of $\Psi_{\alpha}$ does not play a role since its right endpoint is finite (see de Haan [26], thm 2.3.2). Therefore, in this section we will only consider df's in the domains of attraction of $\Phi_{\alpha}$ or $\Lambda$.
Definition 3.38.
A function $L : (0, \infty) \to (0, \infty)$ is regularly varying at $\infty$ with index $\rho$ (notation $L \in RV_{\rho}$) if $\lim_{t\to\infty} L(tx)/L(t) = x^{\rho}$ $(x > 0)$.

If $\rho = 0$ we call $L$ slowly varying; slowly varying functions can be characterized by the Karamata Representation (see e.g. Resnick [51], p. 17). Examples of regularly varying functions with index $\rho$ are $x^{\rho}$ and $x^{\rho}\log(1+x)$; the function $\log(1+x)$ is slowly varying at $\infty$.
From de Haan ([26]) we take necessary and sufficient conditions for a df to belong to the domains of attraction of the various extreme value distributions; we also characterize the norming constants $a_n$ and $b_n$.
Proposition 3.39. [de Haan [26], thm 2.3.1, remark 2.3.1]
Let $F$ be a df. Then for every $\alpha > 0$:
(i) $F \in \mathcal{D}(\Phi_{\alpha})$ iff $1 - F \in RV_{-\alpha}$.
(ii) If $F \in \mathcal{D}(\Phi_{\alpha})$, then the norming constants can be chosen as $a_n = (1/(1-F))^-(n)$, and $b_n = 0$ $(n \in \mathbb{N})$.
Proposition 3.40. [de Haan [26], thm 2.5.4, thm 2.5.1, cor. 2.5.1]
Let $F$ be a df, and define

$$R(t) := (1 - F(t))^{-1}\int_t^{\omega(F)} (1 - F(y))\,dy \qquad (t < \omega(F)), \qquad (3.29)$$

possibly infinite. Then
(i) $F \in \mathcal{D}(\Lambda)$ iff $R(t) < \infty$ and

$$\lim_{t\to\omega(F)} \frac{1 - F(t + xR(t))}{1 - F(t)} = e^{-x} \qquad (x \in \mathbb{R}). \qquad (3.30)$$

(ii) If $F \in \mathcal{D}(\Lambda)$, then the norming constants can be chosen as $a_n = R(b_n)$, and $b_n = (1/(1-F))^-(n)$ $(n \in \mathbb{N})$.
Remark 3.41.
By Galambos ([19], remark 2.7.3), (3.30) is equivalent to

$$\lim_{t\to\omega(F)} P(X - t \le xR(t) \mid X \ge t) = 1 - e^{-x}.$$

In other words, $F \in \mathcal{D}(\Lambda)$ iff the conditional distribution of the 'remaining life' $X - t$ scaled by its conditional expectation $R(t) = E(X - t \mid X \ge t)$ is approximately unit exponential.

Regarding the problem of calculating norming constants, the following result shows that one can switch to an easy tail-equivalent distribution and compute the constants for that. We say that two df's $F$ and $G$ are tail equivalent if $\omega(F) = \omega(G) = \omega$, and $\lim_{x\to\omega} \frac{1-F(x)}{1-G(x)} = D$ for some $D > 0$.
Proposition 3.42. [Resnick [51], prop. 1.19]
Let $F$ and $G$ be df's, suppose that $H_1$ and $H_2$ are extreme value distributions, and let $(a_n)$ $(a_n > 0)$ and $(b_n)$ be sequences of real numbers. Suppose that $F^n(a_n x + b_n) \xrightarrow{w} H_1(x)$. If $G^n(a_n x + b_n) \xrightarrow{w} H_2(x)$, then the following conditions hold: for some $a > 0$, $b \in \mathbb{R}$ we have $H_2(x) = H_1(ax + b)$; $F$ and $G$ are tail equivalent with $\lim_{x\to\omega} \frac{1-F(x)}{1-G(x)} = D$; furthermore, if $H_1 = \Phi_{\alpha}$, then $b = 0$ and $D = a^{\alpha}$; if $H_1 = \Lambda$, then $a = 1$ and $D = e^{b}$.
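A numerical sketch of the norming constants in Propositions 3.39 and 3.40 (the particular df's and names are our own choices): for the Pareto df $F(x) = 1 - 1/x$ one may take $a_n = n$, $b_n = 0$; for Exp(1), $R(t) = 1$, so $a_n = 1$ and $b_n = \log n$:

```python
import math

n = 10**6

# Pareto: 1 - F in RV_{-1}, so F in D(Phi_1) with a_n = (1/(1-F))^-(n) = n, b_n = 0
pareto_err = max(abs((1.0 - 1.0 / (n * x)) ** n - math.exp(-1.0 / x))
                 for x in (0.5, 1.0, 2.0))

# Exp(1): R(t) = e^t * int_t^inf e^{-y} dy = 1, so a_n = 1, b_n = log n
exp_err = max(abs((1.0 - math.exp(-(x + math.log(n)))) ** n - math.exp(-math.exp(-x)))
              for x in (-1.0, 0.0, 2.0))
```

Both errors are of order $1/n$, in line with the weak convergence $F^n(a_n x + b_n) \to G(x)$.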
We remark that Von Mises ([47]) originally established sufficient conditions for a distribution to belong to a domain of attraction that are often more convenient to verify than some of the necessary and sufficient conditions so far presented (see also Proposition 3.54).
Simple examples are provided by: the exponential, the standard normal, and the lognormal distribution belong to $\mathcal{D}(\Lambda)$; the Cauchy and the Pareto distribution belong to $\mathcal{D}(\Phi_{\alpha})$. Normed maxima of independent copies of rv's with the df $F(x) = 1 - 1/\log x$ $(x > e)$ do not have a nondegenerate limit (cf. Galambos [19], example 2.6.1).
3.4.2 Convergence to some limit
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. In Example 3.29(d) it is shown that if $F = \Phi_{\alpha}$, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$. Therefore, one might expect that this is true for all distributions in $\mathcal{D}(\Phi_{\alpha})$. In addition, in Example 3.34(b) we have seen that if $F = \Lambda$, then $\{Z_n\}$ does not converge in distribution as $n \to \infty$. In this case, one would think that for all distributions in $\mathcal{D}(\Lambda)$ $\{Z_n\}$ does not converge in distribution. We will see, however, that for the distribution of $\{Z_n\}$ with $F \in \mathcal{D}(\Phi_{\alpha})$ or with $F \in \mathcal{D}(\Lambda)$ all limit distributions can occur.
In this section, we use the construction method from Theorem 3.13 (or Theorem 3.20). For any given rv $W \in \mathcal{F}[0,1)$ and $\alpha > 0$, we give a positive integer-valued rv $N$ independent of $W$, such that the rv $X \stackrel{d}{=} N + W$ has df $F \in \mathcal{D}(\Phi_{\alpha})$ and $\{Z_n\} \xrightarrow{d} W$ as $n \to \infty$. In a similar way, we can construct a df in $\mathcal{D}(\Lambda)$ for which $\{Z_n\} \xrightarrow{d} W$. We remark that the integer-valued rv $N$ is chosen as the integer part of a rv $Y_1$, which satisfies the property that $\{\max(Y_1,\ldots,Y_n)\}$ converges in distribution to $U$ as $n \to \infty$, where $(Y_j)$ is a sequence of iid rv's. To construct these df's we prove some results for discrete distributions, in particular, for integer parts of rv's.
Proposition 3.43. [Galambos [19], cor. 2.4.1]
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$, and let $X$ be a positive integer-valued rv with $p_m := P(X = m)$ $(m \in \mathbb{N})$. If $p_m / \sum_{k=m}^{\infty} p_k$ does not converge to 0 as $m \to \infty$, then there are no constants $a_n > 0$ and $b_n$ such that the distribution of $(Z_n - b_n)/a_n$ tends to a nondegenerate limit.

An immediate consequence of this corollary is that normed maxima of independent copies of the geometric or the Poisson distribution do not converge to nondegenerate limits. We note that the converse of Proposition 3.43 is not true, i.e. it may happen that $p_j / \sum_{k=j}^{\infty} p_k \to 0$ as $j \to \infty$, while no norming exists such that $(Z_n - b_n)/a_n$ tends to a nondegenerate limit as $n \to \infty$ (for an example see Galambos [19], pp. 86-87).
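A quick sketch of the criterion for the Poisson case (names ours): the ratio $p_m/\sum_{k\ge m} p_k$ tends to 1, not 0, so no norming produces a nondegenerate limit. For the geometric distribution the ratio is even constant, equal to $1 - q$.

```python
import math

def poisson_tail_ratio(lam, m, m_max=200):
    # p_m / sum_{k >= m} p_k for Poisson(lam); the pmf is built recursively,
    # p_k = p_{k-1} * lam / k, to avoid computing huge factorials
    p, pmf = math.exp(-lam), [math.exp(-lam)]
    for k in range(1, m_max):
        p *= lam / k
        pmf.append(p)
    return pmf[m] / sum(pmf[m:])

ratios = [poisson_tail_ratio(2.0, m) for m in (5, 10, 20, 40)]
# ratios increase toward 1, hence they do not converge to 0
```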
In the following lemmas, for a rv $X$ we describe the relation between the conditions $F_X \in \mathcal{D}(G)$ and $F_{[X]} \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$. The proof of the following lemma follows immediately from Proposition 3.36.

Lemma 3.44.
Let $F \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, with norming constants $a_n^*$ and $b_n^*$ such that $a_n^* \to \infty$ as $n \to \infty$. If $F^n(a_n x + b_n) \xrightarrow{w} G(x)$ $(n \to \infty)$ for some norming constants $a_n$ and $b_n$, then $a_n \to \infty$ as $n \to \infty$.
Lemma 3.45.
Let $F \in \mathcal{D}(\Lambda)$ with norming constants $a_n$, and $\omega(F) = \infty$. If $\lim_{t\to\infty} \frac{1-F(t+1)}{1-F(t)} = 1$, then $a_n \to \infty$ as $n \to \infty$.

Proof: Let $R$ be given as in (3.29). On account of Lemma 3.44 it suffices to show that if we choose $a_n = R(b_n)$ with $b_n = (1/(1-F))^-(n)$, then $a_n$ tends to $\infty$ as $n \to \infty$ (cf. Proposition 3.40). Clearly, $\frac{1-F(t+k)}{1-F(t)} \to 1$ as $t \to \infty$ for all $k \in \mathbb{Z}$. Let $\delta > 0$; then for $k \in \mathbb{N}$

$$R(t) \ge \int_t^{t+k} \frac{1 - F(y)}{1 - F(t)}\,dy \ge \frac{k(1 - F(t+k))}{1 - F(t)} \ge k(1 - \delta)$$

for sufficiently large $t$. So $R(t)$ tends to $\infty$ as $t \to \infty$; since $b_n \to \infty$ it follows that $a_n \to \infty$ $(n \to \infty)$. $\Box$
Lemma 3.46.
Let $X$ be a rv, and let $(a_n)$ $(a_n > 0)$ and $(b_n)$ be sequences of real numbers such that $a_n \to \infty$ $(n \to \infty)$. Then $F_X^n(a_n x + b_n) \xrightarrow{w} \Lambda(x)$ $(n \to \infty)$ iff $F_{[X]}^n(a_n x + b_n) \xrightarrow{w} \Lambda(x)$ $(n \to \infty)$.

Proof: Let $F_X^n(a_n x + b_n) \xrightarrow{w} \Lambda(x)$. From

$$F_X(x) \le F_{[X]}(x) \le F_X(x+1) \qquad (x \in \mathbb{R}), \qquad (3.31)$$

we get for $\varepsilon > 0$ and for sufficiently large $n$

$$F_X^n(a_n x + b_n) \le F_{[X]}^n(a_n x + b_n) \le F_X^n(a_n x + b_n + 1) \le F_X^n(a_n(x+\varepsilon) + b_n).$$

From the assumption we have $F_{[X]}^n(a_n x + b_n) \xrightarrow{w} \Lambda(x)$ $(n \to \infty)$. The proof of the other implication is similar. $\Box$

As a preparation for the following theorem we prove:
Lemma 3.47.
Let $X$ be a rv. The following assertions are equivalent.
(a) $\lim_{x\to\infty}(1 - F_X(x+1))/(1 - F_X(x)) = 1$.
(b) $\lim_{x\to\infty}(1 - F_{[X]}(x+1))/(1 - F_{[X]}(x)) = 1$.
(c) $\lim_{n\to\infty} p_n / \sum_{k=n}^{\infty} p_k = 0$ with $p_k = P([X] = k)$.

These assertions imply tail equivalence of $F_X$ and $F_{[X]}$.

Proof: (a) $\Leftrightarrow$ (b): Use inequality (3.31). (b) $\Leftrightarrow$ (c): Trivial. $\Box$
In the following theorem, for a rv $X$ we give additional sufficient conditions on $F_X$ for $F_{[X]}$ to belong to $\mathcal{D}(\Lambda)$. Combining Lemmas 3.45 and 3.46, the following theorem is proved.

Theorem 3.48.
Let $X$ be a rv with $\omega(F_X) = \infty$, and suppose that $\lim_{x\to\infty} \frac{1-F_X(x+1)}{1-F_X(x)} = 1$. Then $F_X \in \mathcal{D}(\Lambda)$ iff $F_{[X]} \in \mathcal{D}(\Lambda)$. In both cases, the norming constants $a_n$ satisfy $a_n \to \infty$ as $n \to \infty$.
In order to prove an analogous result concerning the domains of attraction of $\Phi_{\alpha}$, we need an auxiliary result.
Lemma 3.49.
Let $a, b \in \mathbb{R}$, $L \in RV_{\rho}$. Then
(i) $\lim_{t\to\infty} L(t+a)/L(t) = 1$.
(ii) Let $L_0$ be a function such that $L(t+a) \le L_0(t) \le L(t+b)$ $(t > 0)$. Then $L_0 \in RV_{\rho}$.

Proof: (i) Let $\delta > 0$. By Resnick ([51], prop. 0.5) we know that $L(tx)/L(t) \to x^{\rho}$ as $t \to \infty$, uniformly in $x \in (1-\delta, 1+\delta)$. Writing $t + a = t\,x(t)$ with $x(t) := 1 + a/t$, we get

$$\lim_{t\to\infty} \frac{L(t+a)}{L(t)} = \lim_{t\to\infty} \frac{L(t\,x(t))}{L(t)} = \lim_{t\to\infty} (x(t))^{\rho} = 1.$$

(ii) We have for $x > 0$

$$\frac{L(tx+a)}{L(tx)}\,\frac{L(tx)}{L(t)}\,\frac{L(t)}{L(t+b)} \le \frac{L_0(tx)}{L_0(t)} \le \frac{L(tx+b)}{L(tx)}\,\frac{L(tx)}{L(t)}\,\frac{L(t)}{L(t+a)}.$$

By part (i) and the fact that $L \in RV_{\rho}$ it follows that $L_0 \in RV_{\rho}$. $\Box$
Theorem 3.50.
Let $X$ be a rv. Then
(i) $1 - F_X \in RV_{\rho}$ iff $1 - F_{[X]} \in RV_{\rho}$.
(ii) $F_X \in \mathcal{D}(\Phi_{\alpha})$ iff $F_{[X]} \in \mathcal{D}(\Phi_{\alpha})$.

Proof: (i) By (3.31) and applying Lemma 3.49(ii), the proof follows easily.
(ii) This is an immediate consequence of part (i) and Proposition 3.39. $\Box$
For any given rv $W \in \mathcal{F}[0,1)$, we now construct a df $F$ in $\mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, with the property that $\{Z_n\} \xrightarrow{d} W$ $(n \to \infty)$.
Examples 3.51. Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$.
(a) Let $W \in \mathcal{F}[0,1)$ be a rv, and let $\alpha > 0$. Let $X \stackrel{d}{=} N + W$, where $N$ is independent of $W$ and $p_n = P(N = n) = n^{-\alpha} - (n+1)^{-\alpha}$ $(n \in \mathbb{N})$. Since $\lim_{n\to\infty} p_{n+1}/p_n = 1$, we have $\{Z_n\} \xrightarrow{d} W$ $(n \to \infty)$ (cf. Theorem 3.13). Clearly, $1 - F_N(t) = ([t]+1)^{-\alpha}$ $(t \ge 1)$, whence

$$\frac{1 - F_N(tx)}{1 - F_N(t)} = \Bigl(\frac{[t]+1}{[tx]+1}\Bigr)^{\alpha} \to x^{-\alpha} \qquad (x > 0).$$

So $1 - F_{[X]} = 1 - F_N \in RV_{-\alpha}$, i.e. $F_{[X]} \in \mathcal{D}(\Phi_{\alpha})$. Theorem 3.50(ii) yields that $F_X \in \mathcal{D}(\Phi_{\alpha})$.
(b) Similarly, let $Y$ be a rv with $P(Y \ge y) = \exp(-\sqrt{y-1})$ $(y \ge 1)$, and let $N := [Y]$. Let $W \in \mathcal{F}[0,1)$ be a rv, independent of $N$, and let $X \stackrel{d}{=} N + W$. Since $\lim_{n\to\infty} p_{n+1}/p_n = 1$ we have $\{Z_n\} \xrightarrow{d} W$ $(n \to \infty)$.
We now show that $F \in \mathcal{D}(\Lambda)$. Clearly, $F_Y \in \mathcal{D}(\Lambda)$, and by Theorem 3.48, $F_N \in \mathcal{D}(\Lambda)$; hence $F_{[X]} \in \mathcal{D}(\Lambda)$. By Theorem 3.48 it suffices to show that $\lim_{x\to\infty} \frac{1-F_X(x+1)}{1-F_X(x)} = 1$, which is equivalent to $\lim_{x\to\infty} \frac{1-F_{[X]}(x+1)}{1-F_{[X]}(x)} = 1$ (cf. Lemma 3.47). We have

$$\frac{1 - F_{[X]}(x+1)}{1 - F_{[X]}(x)} = \frac{P([X] \ge \lceil x\rceil + 1)}{P([X] \ge \lceil x\rceil)} = \frac{P(N \ge \lceil x\rceil + 1)}{P(N \ge \lceil x\rceil)}$$
$$= \exp\Bigl(-\sqrt{\lceil x\rceil}\,\bigl(1 - \sqrt{1 - 1/\lceil x\rceil}\bigr)\Bigr) \sim \exp\Bigl(-\frac{1}{2\sqrt{\lceil x\rceil}}\Bigr) \to 1 \qquad (x \to \infty).$$
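The construction in (a) can be checked numerically. With $W$ uniform on $[0,\tfrac12)$ (an arbitrary choice of ours) one checks that $F_X(j+x) = F_N(j-1) + p_j F_W(x)$ for $x \in [0,1)$, so the df of $\{Z_n\}$ is again a sum of $F_X^n$-increments; a hedged sketch:

```python
import math

alpha = 2.0
F_N = lambda j: 1.0 - (j + 1.0) ** (-alpha) if j >= 1 else 0.0  # P(N <= j)
p = lambda j: j ** (-alpha) - (j + 1.0) ** (-alpha)             # P(N = j)
F_W = lambda x: min(2.0 * x, 1.0)                               # W uniform on [0, 1/2)

def frac_max_cdf(n, x, j_max=10**5):
    # df of {Z_n} for X = N + W, using F_X(j + x) = F_N(j-1) + p(j) * F_W(x)
    total = 0.0
    for j in range(1, j_max):
        lo = F_N(j - 1)
        total += (lo + p(j) * F_W(x)) ** n - lo ** n
    return total

n = 10**6
dev = max(abs(frac_max_cdf(n, x) - F_W(x)) for x in (0.1, 0.25, 0.4, 0.75))
```

At this $n$ the df of $\{Z_n\}$ is already close to $F_W$, in line with $\{Z_n\} \xrightarrow{d} W$.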
Remark 3.52.
Similarly as in Example 3.51 we can show that for a positive integer-valued rv $N$ with $P(N = n) = \bigl(\frac{1}{1+\log n} - \frac{1}{1+\log(n+1)}\bigr)$ $(n \in \mathbb{N})$ we have $1 - F_N \in RV_0$. Note that $F(x) := 1 - \frac{1}{\log x}$ $(x \ge e)$ does not belong to the domain of attraction of one of the three possible extreme value distributions (see Example 3.29(c)).
3.4.3 Asymptotic uniformity in distribution (mod 1)
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. In this section we give additional sufficient conditions on $F \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, for $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$, under the assumption that the norming constants $a_n$ tend to infinity as $n \to \infty$. To prove our main results, we suppose that $F$ has a density $f$, and we need local uniformity of the convergence in

$$g_n(y) := n\,a_n f(a_n y + b_n)\,F^{n-1}(a_n y + b_n) \to G'(y) \qquad (n \to \infty). \qquad (3.32)$$

We recall that local uniform convergence is uniform convergence on compact subsets. This condition itself, unfortunately, is very hard to verify. However, it is known that local uniform convergence of $g_n$ to $G'$ is equivalent to the Von Mises condition (conditions (3.33) and (3.34) below). We note that the assumption $a_n \to \infty$ holds for all distributions in $\mathcal{D}(\Phi_{\alpha})$ (cf. Proposition 3.39); from Galambos (cf. [19], lemma 2.7.2) we know that for $F \in \mathcal{D}(\Lambda)$ with $\omega(F) = \infty$ we have $a_n/b_n \to 0$; so $a_n y + b_n = b_n(1 + a_n y/b_n) \sim b_n \to \infty$ $(n \to \infty)$ for all $y \in \mathbb{R}$.
Theorem 3.53.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Let $F$ have a density $f$, and suppose that $F \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, with norming constants $a_n$ and $b_n$ such that $a_n \to \infty$ $(n \to \infty)$. If (3.32) holds locally uniformly on $(0,\infty)$ or on $\mathbb{R}$, respectively, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
Proof: Local uniform convergence of (3.32) implies

$$\lim_{n\to\infty}\int_{-\infty}^{\infty} |g_n(y) - G'(y)|\,dy = 0.$$

Hence, the proof is an immediate consequence of Lemma 2.14 by taking $Y_n = (Z_n - b_n)/a_n$. $\Box$
We remark that de Haan and Resnick ([27]) give sufficient conditions for the density of the normed maximum to converge to the density of the appropriate extreme value df in the metric $L_p$, $0 < p \le \infty$. In particular, if $p = 1$, then the Von Mises condition is sufficient for $\int_{-\infty}^{\infty} |g_n(y) - G'(y)|\,dy \to 0$ as $n \to \infty$, where $g_n$ is defined as in (3.32). In Resnick ([51], prop. 2.5) it is even shown that local uniform convergence of $g_n$ to $G'$ is equivalent to the Von Mises condition. Combining results proved by Resnick leads to the following proposition.
Proposition 3.54. [Resnick [51], prop. 2.5, 1.15, 1.17]
Let $F$ be a df with density $f$ that is positive in some neighborhood of $\infty$ and nonincreasing. Suppose $F \in \mathcal{D}(G)$ with norming constants $a_n$ and $b_n$.
(i) Let $G = \Phi_{\alpha}$. Then

$$\lim_{x\to\infty} \frac{x f(x)}{1 - F(x)} = \alpha, \qquad (3.33)$$

and (3.32) holds locally uniformly on $(0,\infty)$.
(ii) Let $G = \Lambda$. Then

$$\lim_{t\to\omega(F)} \frac{f(t)}{(1 - F(t))^2}\int_t^{\omega(F)} (1 - F(y))\,dy = 1, \qquad (3.34)$$

and (3.32) holds locally uniformly on $\mathbb{R}$.
The foregoing proposition leads to the following theorem, which is an immediate consequence of Theorem 3.53 and Proposition 3.54. Lemma 3.44 implies that the following theorem holds for any choice of norming constants $a_n$ such that $a_n \to \infty$ as $n \to \infty$.
Theorem 3.55.
Let $Z_n := \max(X_1,\ldots,X_n)$ $(n \in \mathbb{N})$ with $(X_j) \in \mathcal{A}(F,X)$. Let $F$ have a density that is positive in some neighborhood of $\infty$ and nonincreasing. If $F \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, with norming constants $a_n \to \infty$, then $\{Z_n\} \xrightarrow{d} U$ $(n \to \infty)$.
The following lemma implies that tail equivalent distributions have the same properties with respect to the limit distribution of $\{Z_n\}$.
Lemma 3.56.
Let $F^* \in \mathcal{D}(G)$, where $G = \Phi_{\alpha}$ or $G = \Lambda$, with norming constants $a_n^*$ such that $a_n^* \to \infty$ as $n \to \infty$. Let $F$ be a df such that $F$ and $F^*$ are tail equivalent. Then $F \in \mathcal{D}(G)$, and the norming constants $a_n$ can be chosen such that $a_n \to \infty$ as $n \to \infty$.
Proof: Let $\lim_{x\to\infty} \frac{1-F(x)}{1-F^*(x)} = D$ for some $D > 0$. Suppose $G = \Lambda$; for $G = \Phi_{\alpha}$ the proof is similar. Let $a_n^*$, $b_n^*$ be the norming constants of $F^*$. Take $b_n = b_n^* + a_n^* \log D$ and $a_n = a_n^*$. From Proposition 3.42 it follows that

$$(F^*)^n(a_n x + b_n) \xrightarrow{w} \Lambda(x + \log D) \qquad (n \to \infty). \qquad (3.35)$$

Since $F^*(a_n x + b_n) \to 1$ as $n \to \infty$, expression (3.35) is equivalent to $n(1 - F^*(a_n x + b_n)) \to \exp(-(x + \log D))$ as $n \to \infty$. So

$$n(1 - F(a_n x + b_n)) = n(1 - F^*(a_n x + b_n))\,\frac{1 - F(a_n x + b_n)}{1 - F^*(a_n x + b_n)} \to e^{-x},$$

which is equivalent to $F \in \mathcal{D}(\Lambda)$. Since $a_n = a_n^* \to \infty$, the proof is now complete. $\Box$
Before we establish sufficient conditions in terms of the hazard rate of F for {Z_n} →d U as n → ∞, we prove an auxiliary result.
Lemma 3.57. Let F be a differentiable df having a hazard rate h that is positive in some neighborhood of ∞.
(i) Let 1 − F ∈ RV_ρ for some ρ ≤ 0. If x^σ h(x) is monotone in some neighborhood of ∞ for some σ ∈ ℝ, then

  lim_{x→∞} x h(x) = −ρ.   (3.36)

(ii) Let F ∈ D(Λ) with w(F) = ∞, and R given as in (3.29). If h is monotone in some neighborhood of ∞, then lim_{x→∞} R(x)h(x) = 1. The norming constants a_n can be chosen as a_n = 1/h(b_n) with b_n = (1/(1 − F))←(n).
Proof: Let f be the derivative of F. Let σ, x_0 ∈ ℝ be such that x^σ h(x) is monotone for x ≥ x_0.
(i) Suppose x^σ h(x) is nondecreasing; a similar proof works if x^σ h(x) is nonincreasing. Let x > 1 and t ≥ x_0. By applying the mean value theorem we get

  1 − F(t) − (1 − F(tx)) = t(x − 1) f(θ) ≤ t(x − 1) h(θ)(1 − F(t))   (θ ∈ (t, tx)),

and hence

  (1 − F(t) − (1 − F(tx))) / ((x − 1)(1 − F(t))) ≤ t h(θ) ≤ t θ^{−σ} (tx)^σ h(tx)
    ≤ t max_{u ∈ [t, tx]} u^{−σ} · (tx)^σ h(tx) = x^{σ−1} max(1, x^{−σ}) · t x h(tx).
Chapter 3. Maxima
Since 1 − F ∈ RV_ρ, the left-hand side tends to (1 − x^ρ)/(x − 1) as t → ∞, and hence

  (1 − x^ρ)/(x − 1) ≤ x^{σ−1} max(1, x^{−σ}) liminf_{t→∞} t x h(tx) = x^{σ−1} max(1, x^{−σ}) liminf_{t→∞} t h(t).

Now let x ↓ 1; then

  −ρ ≤ liminf_{t→∞} t h(t).   (3.37)

Similarly, by taking x < 1, it follows that

  −ρ ≥ limsup_{t→∞} t h(t).   (3.38)

Combining (3.37) and (3.38) we get (3.36).
(ii) From Galambos ([19], thm 2.7.5) we have lim_{x→w(F)} R(x)h(x) = 1. From Proposition 3.40 we can take the norming constants a_n as a_n = R(b_n) with b_n = (1/(1 − F))←(n). By Proposition 3.36 we may choose a_n = 1/h(b_n). □
Theorem 3.58. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). Let F have a hazard rate h that is positive and monotone in some neighborhood of ∞, and suppose its derivative h′(t) exists for sufficiently large t.
(i) If 1 − F ∈ RV_ρ for some ρ ≤ 0, then {Z_n} →d U as n → ∞.
(ii) If F ∈ D(Φ_α), then {Z_n} →d U as n → ∞.
(iii) If F ∈ D(Λ) with norming constants a_n → ∞, then {Z_n} →d U as n → ∞.
Proof: (i) From Lemma 3.57(i) we have x h(x) → −ρ (x → ∞), so that h(x) → 0 as x → ∞. Since h is monotone, the conditions of Theorem 3.27 hold, and so {Z_n} →d U (n → ∞).
(ii) By Proposition 3.39, this is an immediate consequence of part (i) of this theorem.
(iii) By Lemma 3.57(ii) we may choose a_n = 1/h(b_n) with b_n = (1/(1 − F))←(n) (cf. Proposition 3.40). Since h is monotone, and a_n → ∞, b_n → ∞, we find h(t) ↓ 0 (t → ∞). Now Theorem 3.27 applies. □
We now return to some distributions considered in Examples 3.29.
Examples 3.59. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X).
(a) Let F(x) = 1 − x^{−β} (x > 1, β > 0). Then F ∈ D(Φ_β) with a_n = n^{1/β}, and its density is positive and nonincreasing. Hence, by Theorem 3.55, {Z_n} →d U (n → ∞). Theorem 3.58(ii) is also applicable.
(b) Let F(x) = 1 − x^μ exp(−x^ν) (x > 1) for some μ ∈ ℝ and ν > 0. Then F ∈ D(Λ) with norming constants

  a_n = ν^{−1}(log n)^{1/ν − 1},   b_n = (log n)^{1/ν} + (μ log log n)/(ν²(log n)^{1 − 1/ν}).

Assume now that ν < 1. Clearly, a_n → ∞ as n → ∞. Then Theorem 3.55 implies that for this subclass of the distributions {Z_n} →d U (n → ∞). We note that Theorem 3.58(iii) also applies.
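The Pareto case of Example 3.59(a) can be checked numerically. The sketch below (an illustrative computation, not part of the thesis) evaluates the df of {Z_n} directly as H_n(x) = Σ_k (F(k+x)^n − F(k)^n) for F(y) = 1 − 1/y and compares it with the uniform df H(x) = x; the truncation point K is an ad-hoc choice and ignores a tail mass of roughly n/K.

```python
import math

def frac_max_df(x, n, K=10**5):
    """Approximate df of {Z_n} at x in [0,1) for the Pareto df F(y) = 1 - 1/y."""
    def F(y):
        return 1.0 - 1.0 / y if y > 1.0 else 0.0
    # df of {Z_n}: sum over integer cells of F^n(k + x) - F^n(k), truncated at K
    return sum(F(k + x) ** n - F(k) ** n for k in range(1, K))

for x in (0.25, 0.5, 0.75):
    print(x, frac_max_df(x, 1000))   # second column should be close to the first
```

The agreement improves with n, in line with {Z_n} →d U; for n = 1000 the deviation is already dominated by the truncation error.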
3.4.4 Maxima do not converge
Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). In Section 3.4.3 we have seen that for df's in D(Λ) with norming constants a_n → ∞, the distribution of {Z_n} often converges to U as n → ∞. In this section, this case is excluded by assuming that the sequence (a_n) is bounded, and we give sufficient conditions on F ∈ D(Λ) under this assumption for {Z_n} not to converge in distribution as n → ∞.
We now describe the relationship between D(Λ) and the class Π by specifying the norming constants and the auxiliary function; the definition of Π is given in Section 3.3 (cf. Definition 3.31). The proof of the following lemma follows easily from the proof by Resnick (cf. [51], prop. 0.10).
Lemma 3.60. Let F be a df, and a ≥ 0. Then the following assertions are equivalent.
(a) F ∈ D(Λ) with norming constants a_n such that lim_{n→∞} a_n = a.
(b) (1/(1 − F))← ∈ Π with auxiliary function g such that lim_{t→∞} g(t) = a.
The proof of the following theorem readily follows from Theorem 3.33 and Lemma 3.60.

Theorem 3.61. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). Let F ∈ D(Λ) with norming constants a_n and b_n such that lim_{n→∞} a_n = a for some a ≥ 0. Then the distribution of {Z_n} does not converge as n → ∞.
Next, to give a rather weak sufficient condition on F ∈ D(Λ) for {Z_n} not to converge, we need some auxiliary results.

Proposition 3.62. Let Y be a rv with df Λ, and let a, b ≥ 0, a ≠ b, and p, q ∈ ℝ. Then |c_{aY+p}(k)| ≠ |c_{bY+q}(k)| ≠ 0 for k ∈ ℤ_0.
Proof: Let φ be the chf of Λ. We know that φ(t) = Γ(1 − it) (t ∈ ℝ), where Γ is the Gamma function. From Abramowitz and Stegun ([2], p. 256) we obtain for all t ∈ ℝ

  |φ(t)|² = φ(t)φ(−t) = Γ(1 − it)Γ(1 + it) = πt / sinh(πt);

the function πt/sinh(πt) is decreasing on ℝ_+. So, for all s, t ∈ ℝ_+ with s ≠ t, we have |φ(t)| ≠ 0 and |φ(s)| ≠ |φ(t)|. Since |c_{aY+p}(k)| = |φ(2πka)|, we get |c_{aY+p}(k)| ≠ |c_{bY+q}(k)| ≠ 0 for k ∈ ℤ_0. □
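The identity |φ(t)|² = πt/sinh(πt) can be verified numerically (an illustrative check, not from the thesis): below, φ(t) = Γ(1 − it) is approximated by trapezoidal quadrature of the Gumbel density f(x) = exp(−x − e^{−x}); the grid and cut-offs are ad-hoc choices.

```python
import math

def gumbel_chf_sq(t, lo=-8.0, hi=40.0, step=1e-3):
    """|E exp(itY)|^2 for Y with df Lambda, by trapezoidal quadrature."""
    re = im = 0.0
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        x = lo + i * step
        w = 0.5 if i in (0, n) else 1.0       # trapezoid end-point weights
        f = math.exp(-x - math.exp(-x))        # Gumbel density
        re += w * math.cos(t * x) * f
        im += w * math.sin(t * x) * f
    return (re * step) ** 2 + (im * step) ** 2

for t in (0.5, 1.0, 2.0):
    print(t, gumbel_chf_sq(t), math.pi * t / math.sinh(math.pi * t))
```

The two printed columns agree to several decimals, and the strict decrease of πt/sinh(πt) on ℝ_+ is visible directly.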
Lemma 3.63. Let (a, b) ∈ [0, ∞) × [0, 1) and let X be a nondegenerate rv. Let (a_n) (a_n > 0) and (b_n) be sequences of real numbers such that (a_n, {b_n}) → (a, b) (n → ∞), and let (X_n) be a sequence of rv's such that F_{X_n} →w F_X (n → ∞). Then

  {a_n X_n + b_n} →d {aX + b}   (n → ∞).
Proof: Let c_n and c be the FS's of {a_n X_n} and {aX}, respectively. Then, for k ∈ ℤ,

  |c_{a_nX_n+b_n}(k) − c_{aX+b}(k)| = |e^{−2πik{b_n}} c_n(k) − e^{−2πikb} c(k)| ≤ |c_n(k) − c(k)| + |e^{−2πik{b_n}} − e^{−2πikb}|.   (3.39)

Since F_{X_n} →w F_X, we have φ_{X_n} → φ_X locally uniformly on ℝ as n → ∞; hence uniformly on the set {2πka_n : n ∈ ℕ} ∪ {2πka}. So c_n → c as n → ∞ (see Resnick [51], sect. 0.1). Since, by assumption, {b_n} → b, the right-hand side of (3.39) tends to zero as n → ∞. □
Lemma 3.64. Let F ∈ D(Λ) with w(F) = ∞. If F^n(a_n x + b_n) → Λ(x) (n → ∞) for some norming constants a_n and b_n, then the pairs (a_n, {b_n}) do not converge as n → ∞.
Proof: By Proposition 3.40 we can choose the norming constants b_n* = (1/(1 − F))←(n), and a_n* = R(b_n*) with R as in (3.29). Suppose that lim_{n→∞} a_n* = a for some a ≥ 0; then combining Lemmas 3.32 and 3.60 we find that ({b_n*}) is dense in [0, 1). Furthermore, Proposition 3.36 yields that

  a_n/a_n* → 1,   (b_n − b_n*)/a_n* → 0   (n → ∞).   (3.40)

Suppose now that lim_{n→∞} a_n = a for some a ≥ 0. Then a_n* → a, and hence ({b_n*}) is dense in [0, 1). From (3.40) it follows that (b_n* − b_n) → 0 as n → ∞. Clearly, the property (b_n* − b_n) → 0 together with the denseness of ({b_n*}) in [0, 1) implies that ({b_n}) is dense in [0, 1), and therefore the pairs (a_n, {b_n}) do not converge as n → ∞. □
Using Proposition 3.36 one easily sees that the following theorem holds for any choice of bounded norming constants an.
Theorem 3.65. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). Let F ∈ D(Λ) with norming constants a_n and b_n such that the sequence (a_n) is bounded. Then {Z_n} does not converge in distribution as n → ∞.
Proof: Let Y_n = (Z_n − b_n)/a_n. Then we have {Z_n} = {a_n Y_n + {b_n}} and F_{Y_n} →w F_Y, where Y has df Λ. By Lemma 3.64 the pairs (a_n, {b_n}) do not have a unique limit point; so let (α_1, β_1), (α_2, β_2) be distinct limit points in [0, ∞) × [0, 1). We distinguish two cases: 1. α_1 = α_2; 2. α_1 ≠ α_2.
1. Suppose that (α, β_1), (α, β_2) ∈ [0, ∞) × [0, 1) are limit points corresponding to subsequences (n_{1,j}) and (n_{2,j}). Then Lemma 3.63 says that for i = 1, 2

  {Z_{n_{i,j}}} →d {αY + β_i}   (j → ∞).

By Proposition 3.62, c_{αY}(k) ≠ 0 (k ∈ ℤ); hence, c_{αY+β_1}(k) ≠ c_{αY+β_2}(k) for all k ∈ ℤ_0 since β_1 ≠ β_2. Then Proposition 1.6 yields that {Z_{n_{1,j}}} and {Z_{n_{2,j}}} have different limit distributions. So {Z_n} does not converge in distribution.
2. Suppose that (α_1, β_1), (α_2, β_2) ∈ [0, ∞) × [0, 1) are limit points corresponding to subsequences (n_{1,j}) and (n_{2,j}) with α_1 ≠ α_2. Then Lemma 3.63 says that for i = 1, 2

  {Z_{n_{i,j}}} →d {α_i Y + β_i}   (j → ∞).

Proposition 3.62 implies that {α_1 Y + β_1} and {α_2 Y + β_2} do not have the same distribution. Hence {Z_n} does not converge in distribution. □
The following lemma says that tail equivalent distributions in D(Λ) have the same properties with respect to the limit distribution of {Z_n}.

Lemma 3.66. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). Let F* ∈ D(Λ) with norming constants a_n* such that (a_n*) is bounded. Let F and F* be tail equivalent. Then {Z_n} does not converge in distribution as n → ∞.

Proof: Similar to the proof of Lemma 3.56 it follows that F ∈ D(Λ) with bounded norming constants a_n. Then Theorem 3.65 applies. □
Next, we formulate sufficient conditions in terms of the hazard rate of F for {Z_n} not to converge.

Corollary 3.67. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X). Let F ∈ D(Λ) have a hazard rate h that is positive and monotone in some neighborhood of ∞. Suppose that lim_{x→∞} h(x) = c for some c ∈ (0, ∞]. Then {Z_n} does not converge in distribution as n → ∞.

Proof: Lemma 3.57 yields that the norming constants a_n can be chosen as a_n = 1/h(b_n) with norming constants b_n as in Proposition 3.40. Hence the sequence (a_n) is bounded. So Theorem 3.65 applies. □
Corollary 3.68. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X) be such that {Z_n} does not converge in distribution as n → ∞. Define Z_n′ := max(X_1′, ..., X_n′) with (X_j′) ∈ A(F′, X′).
(i) If X′ =d X + a for some a ∈ ℝ, then {Z_n′} does not converge in distribution as n → ∞.
(ii) Suppose that F satisfies the assumptions of Corollary 3.67. If X′ =d aX for some a > 0, then {Z_n′} does not converge in distribution as n → ∞.
Proof: The proof is analogous to that of Corollary 3.28. □
We work out a few examples to illustrate the results of this section.

Examples 3.69. Let Z_n := max(X_1, ..., X_n) (n ∈ ℕ) with (X_j) ∈ A(F, X).
(a) The standard normal distribution belongs to D(Λ) with a_n = 1/b_n and b_n = √(2 log n). Clearly, a_n → 0 and (b_{n+1} − b_n) → 0 (n → ∞); both Theorem 3.61 and Theorem 3.65 yield that the distribution of {Z_n} does not converge as n → ∞.
In addition, it is well known that 1 − F(t) ~ (t√(2π))^{−1} exp(−t²/2) (t → ∞); hence for the hazard rate h of F we have h(t) ~ t. So Corollary 3.67 also applies.
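The non-uniformity of {Z_n} in the standard normal case can be made visible numerically (an illustrative computation, not from the thesis): the modulus of the first Fourier coefficient c(1) = E exp(2πi Z_n) of {Z_n} is computed below by midpoint quadrature from the density n F^{n−1} f of Z_n; grid and cut-offs are ad-hoc choices. If {Z_n} converged to U, |c(1)| would tend to 0; instead it stays clearly positive.

```python
import math

def abs_c1(n, lo=0.0, hi=12.0, step=1e-4):
    """|E exp(2*pi*i*Z_n)| for the maximum Z_n of n iid standard normals."""
    re = im = 0.0
    steps = int((hi - lo) / step)
    for i in range(steps):
        x = lo + (i + 0.5) * step                       # midpoint rule
        F = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal df
        dens = n * F ** (n - 1) * math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        re += math.cos(2.0 * math.pi * x) * dens * step
        im += math.sin(2.0 * math.pi * x) * dens * step
    return math.hypot(re, im)

for n in (10**2, 10**3, 10**4):
    print(n, abs_c1(n))
```

The values stay bounded away from 0 over several orders of magnitude of n, consistent with a_n → 0.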
(b) Let the df F be given as in Example 3.59(b). Assume now that ν ≥ 1. Clearly, the sequence (a_n) is bounded, and so Theorem 3.65 implies that for this subclass of the distributions {Z_n} does not converge in distribution as n → ∞. We note that for μ = 0 and ν = 1, F is the unit exponential distribution.
3.4.5 Conclusions
Here we compare the most important results of this section with the results of the previous sections. We consider Z_n := max(X_1, ..., X_n), where (X_j) is a sequence of iid rv's with df F and w(F) = ∞, and we study the convergence in distribution of {Z_n} as n → ∞. One might think that for F ∈ D(Φ_α) (or F ∈ D(Λ)) the limit behavior of {Z_n} is identical to the limit behavior of {Z_n} when F = Φ_α (or F = Λ). It turns out that nothing of this sort is the case. We even find that the convergence of {Z_n} is not directly linked to F being or not being in the domains of attraction of Φ_α or Λ (Ψ_α does not play a role here). However, under the assumption that F belongs to the domain of attraction of Φ_α or Λ with norming constants a_n, there is a relation between the convergence of {Z_n} and the behavior of a_n. There is also a relation between the convergence of {Z_n} and the hazard rate of F. We now go further into these relations and review the most important results, not intending to be complete: sometimes additional conditions on F are needed.
For any given rv W on [0, 1) it is possible to construct a sequence (X_j) of iid rv's with F ∈ D(Φ_α) (or F ∈ D(Λ)) such that {Z_n} →d W (see Examples 3.51). As a result, any rv W on [0, 1) can occur as a limit for {Z_n},
with underlying df F ∈ D(Φ_α) (or F ∈ D(Λ)). However, there are also df's F with 1 − F ∈ RV_0, and hence F does not belong to the domains of attraction of Φ_α or Λ, such that {Z_n} →d W (see Remark 3.52). Next, assuming that F ∈ D(Φ_α) ∪ D(Λ) with norming constants a_n, we describe the connection between the convergence of {Z_n} and the behavior of a_n. If F ∈ D(Φ_α) (or F ∈ D(Λ)) with norming constants a_n such that a_n → ∞ as n → ∞, then {Z_n} →d U (see Theorems 3.55, 3.58). In Theorem 3.58(i) it is shown that there are also df's F with 1 − F ∈ RV_0 (i.e. F ∉ D(Φ_α) ∪ D(Λ)) such that {Z_n} →d U. We remark that all df's F ∈ D(Φ_α) satisfy a_n → ∞ as n → ∞.
On the other hand, if F ∈ D(Λ) with bounded norming constants a_n, then {Z_n} does not converge in distribution (see Theorem 3.65). In Theorem 3.33 we show that the condition (1/(1 − F))← ∈ Π with auxiliary function g such that lim_{t→∞} g(t) = a for some a ≥ 0 is sufficient for {Z_n} not to converge. By Lemma 3.60 it follows that this sufficient condition is equivalent to F ∈ D(Λ) with norming constants a_n and lim_{n→∞} a_n = a. Therefore, the sufficient condition formulated in Theorem 3.33 is weaker than that of Theorem 3.65. So, it seems that df's F for which {Z_n} does not converge do belong to the domain of attraction of Λ.
The behavior of the hazard rate h of F determines in many cases whether the distribution of {Z_n} does converge to U, or does not converge at all. In fact, in Theorem 3.27 we show that the assumption that h decreases monotonically to 0 is sufficient for {Z_n} →d U. In addition, in Corollary 3.67, under the condition that F ∈ D(Λ), we prove that if h(x) tends monotonically to some c ∈ (0, ∞] as x → ∞, then {Z_n} does not converge. This suggests that these conditions are not far away from being necessary.
Chapter 4
Covariances
The decomposition of multiples of rv's into a discrete part and a rounding error has recently been studied by Kolassa and McCullagh ([38]). Considering covariances, they show that under suitable conditions the discrete part and its rounding error are asymptotically uncorrelated. However, covariances of sums of iid rv's and maxima of iid rv's and their rounding errors have not been considered in the literature. In this chapter, for a sequence (Y_n) of rv's we are concerned with the limit behavior of covariances of {Y_n} and [Y_n], and covariances of {Y_n} and Y_n as n → ∞. We consider three types of rv's: Y_n = nX_1 with X_1 fixed, Y_n = Σ_{m=1}^{n} X_m, and Y_n = max(X_1, ..., X_n), where (X_m) is a sequence of iid rv's. We give sufficient conditions on the distribution of X_1 for {Y_n} and Y_n to be asymptotically uncorrelated, and {Y_n} and [Y_n] to be asymptotically negatively correlated. For these three types the limiting value of the covariance of {Y_n} and [Y_n] is equal to −1/12 (i.e., −Var U, where U denotes the rv with uniform distribution on [0, 1)). We use the Euler–Maclaurin sum formula to prove these results and, under appropriate conditions, to derive a high order of approximation.
In Chapter 3 it is shown that {Z_n} →d U as n → ∞, where Z_n is the n-th partial maximum of iid rv's having a hazard rate that tends monotonically to zero (cf. Theorem 3.27). We will also give rates for the convergence of the density of {Z_n} to 1 as n → ∞. For this we use some results from extreme value theory, and we see that the rates depend on the tail of the underlying df. By contrast, under weak conditions the density of the fractional part of sums of iid rv's tends to 1 exponentially fast (cf. Proposition 2.3).
We now give a brief outline of the contents of this chapter. In Section 4.1, we prove some preliminary results on moments and covariances of rv's having densities, using the Euler–Maclaurin sum formula. In the Sections 4.2, 4.3 and 4.4 we apply these results to multiples of a rv, sums of iid rv's and maxima of iid rv's, respectively. As counterexamples we see that if a rv X is uniformly distributed on [0, 1) or exponentially distributed, then [nX] and {nX} are uncorrelated for n ∈ ℕ. In Section 4.4, we also give rates for the convergence of the densities of the fractional parts of maxima of iid rv's to 1.
4.1 Preliminaries
Let (Y_n) be a sequence of rv's such that E|Y_n| < ∞ (n ∈ ℕ) and with corresponding sequence (f_n) of densities. In this section we give sufficient conditions on the derivatives of f_n for {Y_n} and [Y_n] to be asymptotically negatively correlated and for {Y_n} and Y_n to be asymptotically uncorrelated. We will use the Euler–Maclaurin sum formula to approximate the expectations E[Y_n], E([Y_n]{Y_n}), and E(Y_n{Y_n}) as n → ∞.
The following proposition can easily be proved.
Proposition 4.1. Let k ∈ ℕ, and let f : ℝ → ℝ be in C^{k−1}(ℝ).
(i) Let g_1 be defined as

  g_1(y) := y ∫_y^{y+1} f(x) dx   (y ∈ ℝ).   (4.1)

Then

  g_1^{(1)}(y) = ∫_y^{y+1} f(x) dx + y(f(y + 1) − f(y)),
  g_1^{(p)}(y) = p(f^{(p−2)}(y + 1) − f^{(p−2)}(y)) + y(f^{(p−1)}(y + 1) − f^{(p−1)}(y))   (p = 2, ..., k).

(ii) Let g_2 be defined as

  g_2(y) := y ∫_y^{y+1} (x − y) f(x) dx   (y ∈ ℝ).   (4.2)

Then

  g_2^{(1)}(y) = ∫_y^{y+1} (x − 2y) f(x) dx + y f(y + 1),
  g_2^{(2)}(y) = −2 ∫_y^{y+1} f(x) dx + (2 − y) f(y + 1) + y f(y) + y f′(y + 1),
  g_2^{(p)}(y) = p(f^{(p−3)}(y) − f^{(p−3)}(y + 1)) + (p − y) f^{(p−2)}(y + 1) + y f^{(p−2)}(y) + y f^{(p−1)}(y + 1)   (p = 3, ..., k).

(iii) Let g_3 be defined as

  g_3(y) := ∫_y^{y+1} x(x − y) f(x) dx   (y ∈ ℝ).   (4.3)

Then

  g_3^{(1)}(y) = −∫_y^{y+1} x f(x) dx + (y + 1) f(y + 1),
  g_3^{(p)}(y) = (p − 2)(f^{(p−3)}(y) − f^{(p−3)}(y + 1)) + (p − 2 − y) f^{(p−2)}(y + 1) + y f^{(p−2)}(y) + (y + 1) f^{(p−1)}(y + 1)   (p = 2, ..., k).
(iv) If x f^{(p)}(x) → 0 as |x| → ∞ (p = 0, ..., k − 1), then g_i^{(p)}(y) → 0 as |y| → ∞ (i = 1, 2, 3; p = 0, ..., k).
Lemma 4.2. Let Y be a rv with E|Y| < ∞ and density f. Let m ∈ ℕ; suppose that f ∈ C^{2m−1}(ℝ), and that

  lim_{|x|→∞} x f^{(k)}(x) = 0   (k = 0, ..., 2m − 2).   (4.4)

For i = 1, 2, 3, let g_i(y) (y ∈ ℝ) be defined as in (4.1), (4.2) and (4.3), and let

  α := max_{i=1,2,3} ∫_{−∞}^{∞} |g_i^{(2m)}(y)| dy.   (4.5)

Then
(i) |E[Y] − EY + 1/2| ≤ α,  |E([Y]{Y}) − (1/2)EY + 1/3| ≤ α,  |E(Y{Y}) − (1/2)EY| ≤ α.
In addition, suppose that f ∈ C^{2m}(ℝ) and that lim_{|x|→∞} x f^{(2m−1)}(x) = 0. Let

  β := ∫_{−∞}^{∞} |f^{(2m)}(y)| dy.

Then
(ii) For the density h of {Y}, sup_{x ∈ (0,1)} |h(x) − 1| ≤ β.
(iii) For all ν ∈ ℕ, |E{Y}^ν − EU^ν| ≤ β.
(iv) |Var{Y} − Var U| ≤ 3β.
(v) Let β′ := min{1/2, β}. Then

  |Cov([Y], {Y}) + 1/12| ≤ (3/2)α + β′(|EY| + 5/2 + 3α),
  |Cov(Y, {Y})| ≤ α + β′|EY|.   (4.6)
Proof: (i) By Proposition 4.1(i) we have g_1 ∈ C^{2m}(ℝ). From (4.4) and applying Proposition 4.1(iv) we get

  lim_{|y|→∞} g_1^{(k)}(y) = 0   (k = 0, ..., 2m − 1).

Applying the Euler–Maclaurin sum formula we obtain (cf. Proposition 1.26)

  E[Y] = Σ_{j=−∞}^{∞} j ∫_j^{j+1} f(z) dz = ∫_{−∞}^{∞} y ∫_y^{y+1} f(z) dz dy + R_1
       = ∫_{−∞}^{∞} f(z) ∫_{z−1}^{z} y dy dz + R_1 = EY − 1/2 + R_1,

where the remainder term R_1 satisfies (see Corollary 1.27(ii))

  |R_1| ≤ ∫_{−∞}^{∞} |g_1^{(2m)}(x)| dx ≤ α.
Similarly, we get

  E([Y]{Y}) = Σ_{j=−∞}^{∞} j ∫_j^{j+1} (z − j) f(z) dz = ∫_{−∞}^{∞} y ∫_y^{y+1} (z − y) f(z) dz dy + R_2
            = ∫_{−∞}^{∞} f(z) ∫_{z−1}^{z} y(z − y) dy dz + R_2 = (1/2)EY − 1/3 + R_2,

and

  E(Y{Y}) = Σ_{j=−∞}^{∞} ∫_j^{j+1} z(z − j) f(z) dz = ∫_{−∞}^{∞} ∫_y^{y+1} z(z − y) f(z) dz dy + R_3
          = ∫_{−∞}^{∞} z f(z) ∫_{z−1}^{z} (z − y) dy dz + R_3 = (1/2)EY + R_3,

where |R_2| ≤ α and |R_3| ≤ α.
(ii) Let x ∈ [0, 1); by (1.5) the density h of {Y} is given by

  h(x) = Σ_{j=−∞}^{∞} f(j + x).

Applying the Euler–Maclaurin sum formula for h we obtain (cf. Proposition 1.26)

  h(x) = Σ_{j=−∞}^{∞} f(j + x) = ∫_{−∞}^{∞} f(y + x) dy + R = 1 + R,

where the remainder term R satisfies (cf. Corollary 1.27(ii))

  |R| ≤ ∫_{−∞}^{∞} |f^{(2m)}(y + x)| dy = ∫_{−∞}^{∞} |f^{(2m)}(y)| dy = β.

So sup_{x ∈ (0,1)} |h(x) − 1| ≤ β.
(iii) For all ν ∈ ℕ,

  |E{Y}^ν − EU^ν| = |∫_0^1 x^ν (h(x) − 1) dx| ≤ sup_{x ∈ (0,1)} |h(x) − 1| ≤ β.
(iv) By part (iii),

  |Var{Y} − Var U| ≤ |E{Y}² − EU²| + |E{Y} − EU| · |E{Y} + EU| ≤ 3β.

(v) Let q := [EY − α − 1/2]. Then by part (i) we get

  0 ≤ EY − α − 1/2 − q ≤ E[Y] − q ≤ EY + α − 1/2 − q.   (4.7)

From part (iii) it follows that

  0 ≤ 1/2 − β′ ≤ E{Y} ≤ 1/2 + β′.   (4.8)

In addition, we have

  Cov([Y], {Y}) = Cov([Y] − q, {Y}) = E(([Y] − q){Y}) − E([Y] − q)E{Y} = E([Y]{Y}) − qE{Y} − E([Y] − q)E{Y}.   (4.9)

To derive lower and upper inequalities for Cov([Y], {Y}) we distinguish two cases: 1. q < 0; 2. q ≥ 0.
1. By (4.7), (4.8) and (4.9) and part (i), and, moreover, by the fact that q > EY − 3/2 − α, we obtain

  Cov([Y], {Y}) ≤ (1/2)EY − 1/3 + α − q(1/2 + β′) − (EY − 1/2 − α − q)(1/2 − β′)
               = −1/12 + (3/2)α + β′(EY − 1/2 − α − 2q)
               ≤ −1/12 + (3/2)α + β′(|EY| + 5/2 + 3α).   (4.10)

Similarly,

  Cov([Y], {Y}) ≥ (1/2)EY − 1/3 − α − q(1/2 − β′) − (EY − 1/2 + α − q)(1/2 + β′)
               ≥ −1/12 − (3/2)α − β′(|EY| + 5/2 + 3α).   (4.11)

Inequalities (4.10) and (4.11) lead to

  |Cov([Y], {Y}) + 1/12| ≤ (3/2)α + β′(|EY| + 5/2 + 3α).

2. The proof is similar to that of case 1.
Next, to show that |Cov(Y, {Y})| ≤ α + β′|EY| we distinguish two cases: 1. EY ≥ 0; 2. EY < 0.
1. Since Cov(Y, {Y}) = E(Y{Y}) − EY·E{Y}, part (i) and (4.8) imply

  Cov(Y, {Y}) ≤ (1/2)EY + α − EY(1/2 − β′) = α + β′EY,
  Cov(Y, {Y}) ≥ (1/2)EY − α − EY(1/2 + β′) = −α − β′EY,

whence |Cov(Y, {Y})| ≤ α + β′|EY|.
2. The proof is analogous to that of case 1. □
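Part (ii) of Lemma 4.2 is easy to see at work numerically (an ad-hoc illustration, not from the text): for Y standard normal, the density h(x) = Σ_j f(j + x) of {Y} is flat to about 2·exp(−2π²) ≈ 5e-9; the truncation of the sum below is an arbitrary choice.

```python
import math

def wrapped_density(x, terms=40):
    """Density of {Y} at x for Y standard normal: sum of f(j + x) over j."""
    c = 1.0 / math.sqrt(2.0 * math.pi)
    return sum(c * math.exp(-0.5 * (j + x) ** 2) for j in range(-terms, terms + 1))

dev = max(abs(wrapped_density(i / 100.0) - 1.0) for i in range(100))
print(dev)   # tiny: the wrapped standard normal is nearly flat
```

This matches the bound sup |h(x) − 1| ≤ β, with β the integral of |f^{(2m)}|, which is very small for the Gaussian.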
To derive asymptotic expressions for sequences of covariances, we need the two following lemmas.
Lemma 4.3. Let k ∈ ℕ, and let h : I → ℝ be in C^k(I) with I := (p, ∞) and p ∈ ℝ ∪ {−∞}. Let ξ_i : I → [0, 1] and 0 ≤ b_i ≤ b for some b > 0 (i = 1, 2, 3), and let g be defined as, for y ∈ I,

  g(y) := b_1 h^{(k−1)}(y + ξ_1(y)) + b_2 h^{(k−1)}(y + ξ_2(y)) + b_3 y h^{(k)}(y + ξ_3(y)).

Let 1 < ν ≤ 2, and define

  δ(j) := sup_{y ∈ I ∩ (−2,1)} |h^{(j)}(y)|,   ε(j) := sup_{y ∈ I} |y^{ν+1} h^{(j)}(y)|,   (4.12)
  γ(j) := max{δ(j), ε(j)},   γ := max_{j=k−1,k} γ(j).   (4.13)

Then

(i) ∫_I |g(y)| dy ≤ 17bγ/(ν − 1);

(ii) ∫_I |b_1 h^{(k)}(y)| dy ≤ 3bγ(k)/(ν − 1).
Proof: (i) Suppose ξ_i(y) is constant, say ξ_i(y) = ξ_i (i = 1, 2, 3); if the functions ξ_i(y) depend on y, then the proof is analogous. Let I_1 := I ∩ (−∞, −2), I_2 := I ∩ (−2, 1), and I_3 := I ∩ (1, ∞). Then we have

  ∫_I |g(y)| dy = ∫_{I_1} |g(y)| dy + ∫_{I_2} |g(y)| dy + ∫_{I_3} |g(y)| dy =: A_1 + A_2 + A_3.   (4.14)

By the definition of γ, it follows that A_2 ≤ 9bγ. Furthermore, by (4.12) we obtain, writing α = ν + 1,

  A_1 ≤ ∫_{I_1} (|b_1| |y + ξ_1|^α |h^{(k−1)}(y + ξ_1)| / |y + ξ_1|^α + |b_2| |y + ξ_2|^α |h^{(k−1)}(y + ξ_2)| / |y + ξ_2|^α
        + |b_3 y| |y + ξ_3|^α |h^{(k)}(y + ξ_3)| / |y + ξ_3|^α) dy
     ≤ γ ∫_{I_1} (|b_1| + |b_2| + |b_3 y|) |y + 1|^{−α} dy ≤ 2bγ ∫_{I_1} |y + 1|^{−α} dy + bγ ∫_{I_1} |y| |y + 1|^{−α} dy ≤ 4bγ/(ν − 1).

Analogously, A_3 ≤ 4bγ/(ν − 1). Since ν ≤ 2, we then obtain from (4.14)

  ∫_I |g(y)| dy ≤ 8bγ/(ν − 1) + 9bγ ≤ 8bγ/(ν − 1) + 9bγ/(ν − 1) = 17bγ/(ν − 1).

So part (i) holds.
(ii) The proof is similar to that of part (i). □
Remark 4.4. Under the conditions of Lemma 4.3, if the b_i (i = 1, 2, 3) are functions of y with values in [0, b], then Lemma 4.3 remains true.
Lemma 4.5. Let Y be a rv with E|Y| < ∞ and density f. Let q ≥ 1, k ∈ ℕ, k ≥ 2, and suppose that f ∈ C^k(J), where J := (pq, ∞) with p ∈ ℝ ∪ {−∞}. Let h denote the density of Y/q. Define I := (p, ∞), and let 1 < ν ≤ 2. Furthermore, let δ(j), ε(j), γ(j) (j = k − 1, k) and γ be given as in (4.12) and (4.13), and let g_i (i = 1, 2, 3) be given as in (4.1), (4.2) and (4.3). Then

(i) ∫_J |f^{(k)}(y)| dy ≤ 3 q^{−k} γ(k)/(ν − 1);

(ii) max_{i=1,2,3} ∫_J |g_i^{(k)}(y)| dy ≤ 17k q^{−k+1} γ/(ν − 1).
Proof: (i) Clearly,

  ∫_J |f^{(k)}(y)| dy = q ∫_I |f^{(k)}(qy)| dy = q^{−k} ∫_I |h^{(k)}(y)| dy.

Now Lemma 4.3(ii) applies with b_1 = 1 and ξ_i(y) = 0 (y ∈ I), and implies that part (i) holds.
(ii) By Proposition 4.1(i) and by the mean value theorem we get

  g_1^{(k)}(y) = k(f^{(k−2)}(y + 1) − f^{(k−2)}(y)) + y(f^{(k−1)}(y + 1) − f^{(k−1)}(y))
             = k f^{(k−1)}(y + ξ) + y f^{(k)}(y + θ)   (4.15)

for some ξ, θ ∈ (0, 1).
By Lemma 4.3(i) and (4.15) it follows that for some ξ, θ ∈ (0, 1)

  ∫_J |g_1^{(k)}(y)| dy = q ∫_I |g_1^{(k)}(qy)| dy = q ∫_I |k f^{(k−1)}(qy + ξ) + qy f^{(k)}(qy + θ)| dy
    = q^{−k+1} ∫_I |k h^{(k−1)}(y + ξ/q) + y h^{(k)}(y + θ/q)| dy ≤ 17k q^{−k+1} γ/(ν − 1).   (4.16)
Next, for the function g_2 we distinguish two cases: k ≥ 3 and k = 2. If k ≥ 3, then by Proposition 4.1(ii) and by the mean value theorem we get

  g_2^{(k)}(y) = k(f^{(k−3)}(y) − f^{(k−3)}(y + 1)) + (k − y) f^{(k−2)}(y + 1) + y f^{(k−2)}(y) + y f^{(k−1)}(y + 1)
    = −k(f^{(k−2)}(y + ξ_1) − f^{(k−2)}(y + 1)) − y(f^{(k−1)}(y + ξ_2) − f^{(k−1)}(y + 1))
    = k(1 − ξ_1) f^{(k−1)}(y + θ_1) + y(1 − ξ_2) f^{(k)}(y + θ_2)

for some θ_i ∈ (ξ_i, 1), ξ_i ∈ (0, 1) (i = 1, 2). If k = 2, then again by Proposition 4.1(ii) and by the mean value theorem we obtain

  g_2^{(2)}(y) = −2 ∫_y^{y+1} f(x) dx + (2 − y) f(y + 1) + y f(y) + y f′(y + 1)
    = 2(f(y + 1) − f(y + ξ_1)) + y(f(y) − f(y + 1)) + y f′(y + 1)
    = 2(1 − ξ_1) f′(y + θ_1) + y(1 − ξ_2) f″(y + θ_2)

for some θ_i ∈ (ξ_i, 1), ξ_i ∈ (0, 1) (i = 1, 2). Similarly to the calculation for the function g_1 it follows that, for k ≥ 2,

  ∫_J |g_2^{(k)}(y)| dy = q^{−k+1} ∫_I |k(1 − ξ_1) h^{(k−1)}(y + θ_1/q) + y(1 − ξ_2) h^{(k)}(y + θ_2/q)| dy

for some θ_i ∈ (ξ_i, 1), ξ_i ∈ (0, 1) (i = 1, 2). Since ξ_1 and ξ_2 depend on y, it follows from Remark 4.4 that

  ∫_J |g_2^{(k)}(y)| dy ≤ 17k q^{−k+1} γ/(ν − 1).   (4.17)
In addition, for g_3 it follows by Proposition 4.1(iii) and by the mean value theorem that

  g_3^{(k)}(y) = (k − 2)(f^{(k−3)}(y) − f^{(k−3)}(y + 1)) + y f^{(k−2)}(y) + (k − 2 − y) f^{(k−2)}(y + 1) + (y + 1) f^{(k−1)}(y + 1)
    = −(k − 2) f^{(k−2)}(y + ξ_1) − y f^{(k−1)}(y + ξ_2) + (k − 2) f^{(k−2)}(y + 1) + (y + 1) f^{(k−1)}(y + 1)
    = (k − 2)(1 − ξ_1) f^{(k−1)}(y + θ_1) + f^{(k−1)}(y + 1) + y(1 − ξ_2) f^{(k)}(y + θ_2)

for some θ_i ∈ (ξ_i, 1), ξ_i ∈ (0, 1) (i = 1, 2). Obviously,

  ∫_J |g_3^{(k)}(y)| dy ≤ 17k q^{−k+1} γ/(ν − 1).   (4.18)

Combining (4.16), (4.17) and (4.18) completes the proof. □

As a consequence of Lemmas 4.2 and 4.5 we have

Corollary 4.6. Let (Y_n) be a sequence of rv's with E|Y_n| < ∞ and density f_n (n ∈ ℕ). Let m ∈ ℕ; suppose that f_n ∈ C^{2m}(ℝ) for all n ∈ ℕ, and that

  lim_{|x|→∞} x f_n^{(k)}(x) = 0   (k = 0, ..., 2m − 1).   (4.19)

Let (q_n) (q_n ≥ 1) be a sequence of real numbers, and let h_n denote the density of Y_n/q_n for all n ∈ ℕ. Let

  γ_n := max_{j=2m−1,2m} γ_n(j),   γ_n(j) := max{δ_n(j), ε_n(j)}   (n ∈ ℕ),   (4.20)

where, for 1 < ν ≤ 2, δ_n(j) and ε_n(j) are defined as

  δ_n(j) := sup_{y ∈ (−2,1)} |h_n^{(j)}(y)|,   ε_n(j) := sup_{y ∈ ℝ} |y^{ν+1} h_n^{(j)}(y)|.   (4.21)

Suppose that q_n^{−2m} γ_n(2m) → 0 and q_n^{−2m+1} γ_n/(EY_n + 1/2) → 0 as n → ∞. Then

  Cov([Y_n], {Y_n}) = −1/12 + q_n^{−2m}(q_n O(γ_n) + (EY_n + 1/2) O(γ_n(2m))),
  Cov(Y_n, {Y_n}) = q_n^{−2m}(q_n O(γ_n) + EY_n O(γ_n(2m)))

as n → ∞.
Proof: Let

  α_n := max_{i=1,2,3} ∫_{−∞}^{∞} |g_{n,i}^{(2m)}(y)| dy,   β_n := ∫_{−∞}^{∞} |f_n^{(2m)}(y)| dy   (n ∈ ℕ),   (4.22)

where

  g_{n,1}(y) := y ∫_y^{y+1} f_n(x) dx,   g_{n,2}(y) := y ∫_y^{y+1} (x − y) f_n(x) dx,
  g_{n,3}(y) := ∫_y^{y+1} x(x − y) f_n(x) dx   (y ∈ ℝ, n ∈ ℕ).   (4.23)

Lemma 4.5 yields

  α_n = q_n^{−2m+1} O(γ_n),   β_n = q_n^{−2m} O(γ_n(2m))   (n → ∞).

Since q_n^{−2m} γ_n(2m) → 0 (n → ∞), we get β_n′ := min{1/2, β_n} → 0 as n → ∞. By the assumption q_n^{−2m+1} γ_n/(EY_n + 1/2) → 0 as n → ∞ and Lemma 4.2(v) it follows that

  Cov([Y_n], {Y_n}) = −1/12 + O(α_n) + O(β_n)(EY_n + 1/2 + O(α_n))
    = −1/12 + q_n^{−2m}(q_n O(γ_n) + (EY_n + 1/2) O(γ_n(2m))),
  Cov(Y_n, {Y_n}) = O(α_n) + O(β_n) EY_n = q_n^{−2m}(q_n O(γ_n) + EY_n O(γ_n(2m)))   (n → ∞). □
4.2 Multiples of rv's
Rounding errors of multiples of rv's on ℝ have been studied by Kolassa and McCullagh ([38]). They round real numbers y to the integer nearest to y. Specifically, let X be a rv with density f; for ε > 0, let D := ε⟨X/ε⟩, where ⟨x⟩ denotes the integer nearest to x, and ⟨n + 1/2⟩ := n for all n ∈ ℤ. Let E := X/ε − ⟨X/ε⟩ be the rounding error on the interval [−1/2, 1/2). Under the condition that, for some m ∈ ℕ, the first 2m derivatives of f exist and are integrable on ℝ, Kolassa and McCullagh ([38]) show that X and E are asymptotically uncorrelated to a high order of approximation for small ε. Also, under these conditions
(4.24)
Here we study slightly different covariances since we use another rounding method. To be specific, here we consider the covariances Cov([tX], {tX}) and Cov(tX, {tX}) as t → ∞.
Theorem 4.7. Let X be a rv with E|X| < ∞ and density f. Let m ∈ ℕ, and suppose that f ∈ C^{2m}(ℝ). Suppose further that

  lim_{|x|→∞} x f^{(k)}(x) = 0   (k = 0, ..., 2m − 1)

and for some 1 < ν ≤ 2

  max_{j=2m−1,2m} sup_{x ∈ ℝ} |x^{ν+1} f^{(j)}(x)| < ∞.   (4.25)

Then

  Cov([tX], {tX}) = −1/12 + O(t^{−2m+1})   (t → ∞),
  Cov(tX, {tX}) = O(t^{−2m+1})   (t → ∞).
Proof: Suppose l(t) := |Cov(tX, {tX})|/t^{−2m+1} is unbounded in some neighborhood of ∞. Then there exists a sequence (t_n) (t_n ≥ 1) with lim_{n→∞} t_n = ∞ and lim_{n→∞} l(t_n) = ∞, and hence the assertion is not true. This is in contradiction with Corollary 4.6: Let n ∈ ℕ, and let f_n denote the density of t_n X. Clearly, f_n^{(k)}(x) = t_n^{−(k+1)} f^{(k)}(x/t_n) for k = 0, ..., 2m − 1. Hence (4.25) implies (4.19). Applying Corollary 4.6 with h_n(x) = f(x), Y_n = t_n X, q_n = t_n, δ_n(j) = ε_n(j) = O(1) (j = 2m − 1, 2m), and hence γ_n = γ_n(2m) = O(1), we obtain Cov(t_n X, {t_n X}) = O(t_n^{−2m+1}) (n → ∞). This contradicts the assumption that the function l is unbounded. As a result, Cov(tX, {tX}) = O(t^{−2m+1}) (t → ∞). Similarly, the assertion Cov([tX], {tX}) = −1/12 + O(t^{−2m+1}) (t → ∞) can be proved. □
In Theorem 4.7 we assume that f is continuous on ℝ. Therefore the question arises what happens if f is not continuous on ℝ. To provide an answer to this question we consider two examples. We will see that if the rv X is uniformly distributed on [0, 1) or exponentially distributed, then the fractional part and the integer part of nX are uncorrelated for all n ∈ ℕ; this is in contrast with the results obtained in Theorem 4.7.
Examples 4.8. (a) Let n ∈ ℕ be fixed. It is easy to see that {nU} =d U, and that (1/n)[nU] =d U_n, where U_n is the discrete uniform rv on [0, 1). By some algebra it follows that [nU] and {nU} are independent, and hence

  Cov([nU], {nU}) = 0.

Furthermore, we get

  Cov(nU, {nU}) = Cov([nU], {nU}) + Cov({nU}, {nU}) = Var{nU} = 1/12.
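Both identities of part (a) can be checked deterministically (an illustrative computation, not from the text) by midpoint-rule quadrature over u ∈ [0, 1); the grid size m is an arbitrary choice, taken not to be a multiple of n so that no quadrature node lands on a jump of [nu].

```python
import math

def uniform_covs(n, m=200_001):
    """Quadrature estimates of Cov([nU], {nU}) and Cov(nU, {nU})."""
    y = [n * (i + 0.5) / m for i in range(m)]
    ints = [math.floor(v) for v in y]
    fr = [v - i for v, i in zip(y, ints)]
    def cov(a, b):
        return sum(p * q for p, q in zip(a, b)) / m - (sum(a) / m) * (sum(b) / m)
    return cov(ints, fr), cov(y, fr)

c_int, c_full = uniform_covs(7)
print(c_int, c_full, 1.0 / 12.0)   # approximately 0 and 1/12
```

The first value vanishes up to quadrature error, the second matches Var{nU} = 1/12, independently of n.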
(b) Let X ~ Exp(1). It is known that [Y] and {Y} are independent if Y is exponentially distributed (see Steutel and Thiemann [64]). So,

  Cov([nX], {nX}) = 0   (n > 0).

Furthermore, we will show that

  Cov(nX, {nX}) = 1/12 + O(n^{−1})   (n → ∞).   (4.26)

Straightforward computation yields, for the density f_{nX} of {nX},

  f_{nX}(x) = e^{−x/n} / (n(1 − e^{−1/n}))   (x ∈ [0, 1), n ∈ ℕ),
  sup_{x ∈ [0,1]} f_{nX}(x) = 1 + O(n^{−1}),   Var{nX} = 1/12 + O(n^{−1})   (n → ∞).

Then we have

  Cov(nX, {nX}) = Cov([nX], {nX}) + Cov({nX}, {nX}) = Var{nX}.

So (4.26) holds. In contrast to Theorem 4.7, the covariances of nX and {nX} do not tend to 0 as n → ∞.
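The independence used in part (b) has a closed form that is easy to verify (an illustrative check, not from the text): for X ~ Exp(1), P([X] = k, {X} ≤ x) = e^{−k}(1 − e^{−x}) factorizes into P([X] = k) = e^{−k}(1 − e^{−1}) times P({X} ≤ x) = (1 − e^{−x})/(1 − e^{−1}).

```python
import math

def joint(k, x):          # P([X] = k, {X} <= x) for X ~ Exp(1)
    return math.exp(-k) * (1.0 - math.exp(-x))

def p_int(k):             # P([X] = k)
    return math.exp(-k) * (1.0 - math.exp(-1.0))

def p_frac(x):            # P({X} <= x)
    return (1.0 - math.exp(-x)) / (1.0 - math.exp(-1.0))

for k in range(4):
    for x in (0.2, 0.5, 0.9):
        assert abs(joint(k, x) - p_int(k) * p_frac(x)) < 1e-15
print("factorization holds")
```

Since nX ~ Exp(1/n), the same computation (with e^{−1/n} in place of e^{−1}) applies to [nX] and {nX} for every n.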
4.3 Sums of iid rv's
In this section we deal with covariances of fractional parts and integer parts of sums of iid rv's and covariances of such sums and their fractional parts. We first prove an auxiliary result.
Lemma 4.9. Let (X_j) be a sequence of iid rv's with chf φ ∈ C³(ℝ) such that for some δ > 0

  φ(t) = O(|t|^{−δ})   (|t| → ∞),   (4.27)

and M := max_{p=0,1,2,3} sup_{t ∈ ℝ} |φ^{(p)}(t)| < ∞. Let k ∈ ℕ, k ≥ 3 be fixed, and define

  a_n := max_{j=0,1,2,3} max_{r=0,...,3} n^j ∫_{−∞}^{∞} |φ(t)|^{n−j} |t|^{k−r} dt,   (4.28)

  b_n := ∫_{−∞}^{∞} |φ(t)|^n |t|^k dt.   (4.29)
Let fn(x) ;= fir J~oo e-ib'<p,,(t)dt (x E 19:) be the density of I:j=l Xl· Then for 8ufficientllllarge 11., In E C2m(lR), and
(i) lirnlxl ... ooxf,\P)(x) = 0 (p = 0, ... , k).
(ii) sup k J f~k)(:r)1 ;:~ O(an ) (n ----> ex)), sup If~k)(:r)1 ~ b". xEK xER
Proof: Define

Ψ_{n,p}(t) := φ^n(t)(−it)^p  (t ∈ R, n ∈ N, p ∈ N ∪ {0}).

We first prove that f_n ∈ C^k(R) for sufficiently large n. Assumption (4.27) implies that ∫_{-∞}^{∞} |φ(t)|^n |t|^k dt < ∞ for sufficiently large n; hence

|f_n^{(k)}(x)| = (2π)^{-1} |∫_{-∞}^{∞} e^{-itx} Ψ_{n,k}(t) dt| ≤ ∫_{-∞}^{∞} |φ(t)|^n |t|^k dt < ∞.   (4.30)

So, for sufficiently large n, f_n ∈ C^k(R). We further note that (4.27) also implies that b_n is finite for sufficiently large n.

In the sequel of this proof we deal with f_n and Ψ_{n,p} for sufficiently large n. When these functions occur, we omit 'for sufficiently large n'. Next, to continue the proof of this lemma, we show the following claim.
Claim: Let p be a fixed nonnegative integer. Then for x ∈ R and j = 0, ..., 3

x^j f_n^{(p)}(x) = ((−i)^j/2π) ∫_{-∞}^{∞} e^{-itx} Ψ_{n,p}^{(j)}(t) dt.   (4.31)

Proof of the claim: Since f_n(x) = (2π)^{-1} ∫_{-∞}^{∞} e^{-itx} φ^n(t) dt (x ∈ R), we have

f_n^{(p)}(x) = (2π)^{-1} ∫_{-∞}^{∞} e^{-itx} Ψ_{n,p}(t) dt  (x ∈ R).

By induction on j and integration by parts we get (4.31); by (4.27) and finiteness of M, it follows that, for j = 0, 1, 2, Ψ_{n,p}^{(j)}(t) → 0 as |t| → ∞, so the boundary terms vanish. So (4.31) holds. This completes the proof of the claim.
(i) By (4.31), we have for p = 0, ..., k

|x f_n^{(p)}(x)| = (2π)^{-1} |∫_{-∞}^{∞} Ψ_{n,p}^{(1)}(t) e^{-itx} dt| =: I_1.

Once again, by (4.27), ∫_{-∞}^{∞} |Ψ_{n,p}^{(1)}(t)| dt < ∞, and hence applying the Riemann–Lebesgue lemma (cf. Proposition 1.5) we have I_1 → 0 as |x| → ∞.

(ii) We prove sup_{x∈R} |x^3 f_n^{(k)}(x)| = O(a_n); sup_{x∈R} |f_n^{(k)}(x)| ≤ b_n is proved in (4.30). Since k ≥ 3, we find after some algebra

|Ψ_{n,k}^{(3)}(t)| ≤ k^3 |φ(t)|^n |t|^{k−3} + Σ_{j=1}^{3} (Mn)^j |φ(t)|^{n−j} Σ_{r=0}^{3−j} k^r |t|^{k−r}.   (4.32)
By (4.31) and (4.32) we have

|x^3 f_n^{(k)}(x)| ≤ (2π)^{-1} ∫_{-∞}^{∞} |Ψ_{n,k}^{(3)}(t)| dt = O(a_n)  (n → ∞).
o
To prove the main result of this section we need a result about convergence of sums of independent rv's. Petrov ([49]) proves the following proposition for a sequence of independent, not necessarily iid, rv's. We only need his result for iid rv's.
Proposition 4.10. [Petrov [49], p.109] Let n ∈ N be fixed; let X_1, ..., X_n be independent rv's distributed as X with chf φ, and EX = 0, σ^2 := EX^2 > 0, and E|X|^3 < ∞. Then, for |t| ≤ σ^3 √n /(4E|X|^3),

|φ^n(t/(σ√n)) − e^{-t²/2}| ≤ 16 (E|X|^3/(σ^3 √n)) |t|^3 e^{-t²/3}.
Theorem 4.11. Let (X_j) be a sequence of iid rv's with density f and EX_1 = 0, E|X_1|^3 < ∞. Suppose the derivative f′ of f exists such that ∫_{-∞}^{∞} |f′(x)| dx < ∞. Let S_n := X_1 + ... + X_n, and denote its density by f_n (n ∈ N). Let m ∈ N, m ≥ 3, and suppose that f_n ∈ C^{2m}(R) for n ∈ N. Then

Cov([S_n], {S_n}) = −1/12 + O(n^{3−m})  (n → ∞),
Cov(S_n, {S_n}) = O(n^{3−m})  (n → ∞).
Proof: To prove this theorem we combine Lemma 4.9 and Corollary 4.6. We first check the conditions of Lemma 4.9, i.e. we show that the assumptions on the chf φ of X_1 hold. Secondly, we calculate a_n and b_n as defined in (4.28) and (4.29) (see Lemma 4.9).

It is well known that the finiteness of E|X_1|^3 implies that φ ∈ C^3(R), and hence, for j = 0, 1, 2, 3, |φ^{(j)}(t)| ≤ ∫_{-∞}^{∞} |x|^j f(x) dx < ∞ (see e.g. Lukacs ([43], cor. 2, p.22)). In addition, by Proposition 2.22, φ^{(j)}(t) = o(|t|^{-1}) as |t| → ∞.

Next, to calculate a_n and b_n, let A := σ^2/(4E|X_1|^3), where σ^2 := Var X_1. We obtain for j = 0, 1, 2, 3, and p ∈ N ∪ {0},

(n + j)^j ∫_{-∞}^{∞} |φ(t)|^n |t|^p dt = (n + j)^j ∫_{|t|>A} |φ(t)|^n |t|^p dt + (n + j)^j ∫_{|t|≤A} |φ(t)|^n |t|^p dt =: I_1 + I_2.   (4.33)
Since φ(t) = o(|t|^{-1}) as |t| → ∞, we have |φ(t)| ≤ (2|t|)^{-1} for |t| ≥ T and some T > max(A, 1). Furthermore, φ(t) = o(|t|^{-1}) also implies that |φ(t)| < 1 for all t ≠ 0 (see Proposition 1.15). Hence, c := sup_{A<|t|≤T} |φ(t)| < 1. As a result, for n > p + 1

I_1 ≤ (n + j)^j ∫_{A<|t|≤T} |φ(t)|^n |t|^p dt + (n + j)^j ∫_{|t|≥T} |φ(t)|^n |t|^p dt
  ≤ (n + j)^j c^n ∫_{A<|t|≤T} |t|^p dt + (n + j)^j 2^{-n} ∫_{|t|≥T} |t|^{p−n} dt.

So, I_1 tends to zero as n → ∞ exponentially fast. Next, we get

I_2 ≤ (n + j)^j (σ√n)^{-(p+1)} ∫_{|t|≤Aσ√n} |φ(t/(σ√n))|^n |t|^p dt.   (4.34)
Furthermore, by Proposition 4.10 it follows that, with T_n = Aσ√n,

∫_{|t|≤T_n} |φ(t/(σ√n))|^n |t|^p dt ≤ ∫_{|t|≤T_n} |φ^n(t/(σ√n)) − e^{-t²/2}| |t|^p dt + ∫_{|t|≤T_n} e^{-t²/2} |t|^p dt
  ≤ (16E|X_1|^3/(σ^3 √n)) ∫_{-∞}^{∞} e^{-t²/3} |t|^{p+3} dt + ∫_{-∞}^{∞} e^{-t²/2} |t|^p dt = O(1)  (n → ∞).   (4.35)

Hence, combining (4.34) and (4.35) we get I_2 = O((√n)^{2j−p−1}) as n → ∞.

So, for k ≥ 3, we have a_n = O((√n)^{5−k}) and b_n = O((√n)^{-(k+1)}) as n → ∞, where a_n and b_n are given as in (4.28) and (4.29). Then Lemma 4.9 yields

sup_{x∈R} |x^3 f_n^{(k)}(x)| = O((√n)^{5−k}),  sup_{x∈R} |f_n^{(k)}(x)| = O((√n)^{-(k+1)}),

as n → ∞.

We now complete the proof as follows. Lemma 4.9(i) implies that (4.19) holds. Applying Corollary 4.6 with Y_n = S_n, q_n = 1, b_n(j) = O((√n)^{-(j+1)}), c_n(j) = O((√n)^{5−j}) (j = 2m − 1, 2m), and hence λ_n(2m) = O((√n)^{5−2m}) and γ_n = O(n^{3−m}) as n → ∞, the proof is complete.
o
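The limits −1/12 and 0 in Theorem 4.11 can be checked numerically for a smooth textbook case. The sketch below is an added Monte Carlo illustration, not part of the original; taking standard normal summands with n = 4 and this sample size are arbitrary choices. It estimates Cov([S_n], {S_n}) and Cov(S_n, {S_n}) using the fact that S_n is then exactly N(0, n).

```python
import math
import random

def sample_cov(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def sum_covariances(n=4, samples=300_000, seed=3):
    # S_n = X_1 + ... + X_n with X_i ~ N(0,1), so S_n ~ N(0, n); returns
    # estimates of Cov([S_n], {S_n}) (limit -1/12) and Cov(S_n, {S_n}) (limit 0)
    rng = random.Random(seed)
    s = [rng.gauss(0.0, math.sqrt(n)) for _ in range(samples)]
    frac = [v - math.floor(v) for v in s]
    intpart = [math.floor(v) for v in s]
    return sample_cov(intpart, frac), sample_cov(s, frac)
```

Because the normal chf decays so fast, the error terms O(n^{3−m}) are negligible already at n = 4, and the estimates should land within Monte Carlo noise of −1/12 and 0.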
We note that the conditions given in Theorem 4.11 are rather weak. Many textbook examples such as the standard normal distribution and the shifted Gamma distribution of order larger than 1 satisfy these conditions. But the shifted Gamma distribution of order at most 1, or the uniform distribution on [-1/2, 1/2], does not satisfy these conditions, since its density is not continuous on R. This problem is solved below by considering convolutions of such densities.
Theorem 4.12. Let (X_j) be a sequence of iid rv's with density f and EX_1 = 0, E|X_1|^3 < ∞. Suppose that there exists a positive integer p such that the density g of Y := X_1 + ... + X_p has the following property: the derivative g′ of g exists such that ∫_{-∞}^{∞} |g′(x)| dx < ∞. Let m ∈ N, m ≥ 3, and let S_n := X_1 + ... + X_n (n ∈ N). Then

Cov([S_n], {S_n}) = −1/12 + O(n^{3−m})  (n → ∞),
Cov(S_n, {S_n}) = O(n^{3−m})  (n → ∞).
Proof: To prove this theorem we follow the proof of Theorem 4.11. As in the proof of Theorem 4.11, we now verify whether the chf φ of X_1 satisfies φ ∈ C^3(R), sup_{j=0,1,2,3} sup_{t∈R} |φ^{(j)}(t)| < ∞, and φ(t) = o(|t|^{-1/p}) as |t| → ∞. Clearly, φ_Y(t) = φ^p(t) (t ∈ R). Since E|X_1|^3 < ∞, it follows that φ ∈ C^3(R) and sup_{j=0,1,2,3} sup_{t∈R} |φ^{(j)}(t)| < ∞. Furthermore, Proposition 2.22 implies that φ_Y(t) = o(|t|^{-1}) as |t| → ∞; hence φ(t) = o(|t|^{-1/p}) as |t| → ∞.

Next, since φ(t) = o(|t|^{-1/p}) (|t| → ∞) implies that ∫_{-∞}^{∞} |φ(t)|^n dt < ∞ for n ≥ p + 1, we obtain from Proposition 1.4 that the density of S_n is given by f_n(x) = (2π)^{-1} ∫_{-∞}^{∞} e^{-itx} φ^n(t) dt. So, f_n^{(2m)}(x) = (2π)^{-1} ∫_{-∞}^{∞} (−it)^{2m} e^{-itx} φ^n(t) dt, and hence f_n ∈ C^{2m}(R) for n > (2m + 1)p. Analogous to the proof of Theorem 4.11, the proof can be completed.
D
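Theorem 4.12 covers exactly the case excluded after Theorem 4.11: summands whose own density is discontinuous, but whose p-fold convolution is smooth. The sketch below is an added Monte Carlo illustration, not from the thesis; the choices n = 6 and the sample size are arbitrary. For X_i uniform on [-1/2, 1/2) the density of X_1 + X_2 is the triangular density, whose derivative is integrable, so the theorem applies with p = 2 and Cov([S_n], {S_n}) should be near −1/12.

```python
import math
import random

def sample_cov(xs, ys):
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def uniform_sum_cov(n=6, samples=300_000, seed=5):
    # X_i uniform on [-1/2, 1/2): f is discontinuous, but the density of
    # X_1 + X_2 already has an integrable derivative (Theorem 4.12, p = 2)
    rng = random.Random(seed)
    s = [sum(rng.random() - 0.5 for _ in range(n)) for _ in range(samples)]
    return sample_cov([math.floor(v) for v in s],
                      [v - math.floor(v) for v in s])
```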
4.4 Maxima of iid rv's
Here we are concerned with covariances of fractional parts and integer parts of maxima of iid rv's, and covariances of such maxima and their fractional parts. To prove the main result of this section, we use extreme value theory; for a brief review of the relevant extreme value theory we refer to Section 3.4.1. We recall that there are three possible extreme value df's, viz. Φ_α(x) = 1_{(0,∞)}(x) exp(−x^{-α}), Ψ_α(x) = 1_{(−∞,0]}(x) exp(−(−x)^α), and Λ(x) = exp(−e^{-x}) (cf. Proposition 3.37).
We also make use of the notion of differentiable domains of attraction; Pickands ([50]) gives a definition assuming that the df's concerned are ultimately k-times differentiable. We define the differentiable domains of attraction as follows.

Definition 4.13. Let k ∈ N, and let F and G be nondegenerate df's with F ∈ C^k((x_0, ω(F))) for some sufficiently large x_0 < ω(F). The df F is said to lie in the k-times differentiable domain of attraction of G (notation F ∈ D_k(G; x_0)) with norming constants a_n (a_n > 0) and b_n if, for q = 0, ..., k,

lim_{n→∞} a_n^q (F^n)^{(q)}(a_n x + b_n) = G^{(q)}(x)   (4.36)

locally uniformly in x ∈ (x_0, ω(F)).
We note that de Haan and Resnick ([27]) study the once differentiable domains of attraction in depth. Sweeting ([65]) provides shorter proofs for these results. Pickands ([50]) characterizes the differentiable domains of attraction in terms of the generalized inverse of the cumulative hazard function (see (3.1)). He also gives a simple characterization for the twice differentiable domain of attraction.

To show that the norming constants can be chosen at our convenience, we need the following lemma. Compare this result with Faà di Bruno's formula (cf. Abramowitz and Stegun [2], p.823).
Lemma 4.14. Let n and k be fixed nonnegative integers such that n > k; let F be a df with density f ∈ C^k((α(F), ω(F))). Then, with π(p) := Π_{i=0}^{p} (n − i),

(F^n)^{(k+1)}(x) = Σ_{p=0}^{k} π(p) F^{n−1−p}(x) [ Σ_{β∈A_k(p)} ξ_β(p) Π_{j=0}^{k} (f^{(j)}(x))^{β_j} ]   (4.37)

for some constants ξ_β(p) ∈ N ∪ {0} (p = 1, ..., k), ξ_β(0) = 1, A_k(0) := {β = (β_0, ..., β_k) ∈ (N ∪ {0})^{k+1} : β_k = 1, β_j = 0 (j = 0, ..., k − 1)}, and for p = 1, ..., k

A_k(p) := {β = (β_0, ..., β_k) ∈ (N ∪ {0})^{k+1} : β_k = 0, Σ_{j=0}^{k−1} β_j = p + 1,

Σ_{j=0}^{k−1} (j + 1)β_j = k + 1}.
Proof: We prove this by induction on k. Clearly, if k = 0, then (4.37) holds. Suppose that (4.37) holds for k = r; then we have to show that (4.37) holds with k = r + 1. We obtain

(F^n)^{(r+2)}(x) = d/dx (F^n)^{(r+1)}(x) = d/dx (π(0) F^{n−1}(x) f^{(r)}(x))
  + d/dx ( Σ_{p=1}^{r} π(p) F^{n−1−p}(x) [ Σ_{β∈A_r(p)} ξ_β(p) Π_{j=0}^{r} (f^{(j)}(x))^{β_j} ] ).

Carrying out the differentiation produces the terms

Σ_{p=1}^{r} π(p)(n − 1 − p) F^{n−2−p}(x) f(x) [ Σ_{β∈A_r(p)} ξ_β(p) Π_{j=0}^{r} (f^{(j)}(x))^{β_j} ]
  = Σ_{p=2}^{r+1} π(p) F^{n−1−p}(x) [ Σ_{β∈A_r(p−1)} ξ_β(p − 1) f(x) Π_{j=0}^{r} (f^{(j)}(x))^{β_j} ]   (4.38)

and

Σ_{p=1}^{r} π(p) F^{n−1−p}(x) [ Σ_{β∈A_r(p)} ξ_β(p) (d/dx) Π_{j=0}^{r} (f^{(j)}(x))^{β_j} ].   (4.39)

For β in (4.38) we have the following: if β′ = (β′_0, ..., β′_{r+1}) ∈ (N ∪ {0})^{r+2} with β′_0 = β_0 + 1, β′_j = β_j (j = 1, ..., r), β′_{r+1} = 0, and (β_0, ..., β_r) ∈ A_r(p − 1), then β′ ∈ A_{r+1}(p). For β in (4.39) we get: let ν ∈ {0, ..., r − 1}; if β′ = (β′_0, ..., β′_{r+1}) ∈ (N ∪ {0})^{r+2} with β′_{r+1} = 0, β′_ν = β_ν − 1, β′_{ν+1} = β_{ν+1} + 1, β′_j = β_j (j ≠ ν, ν + 1), and (β_0, ..., β_r) ∈ A_r(p), then β′ ∈ A_{r+1}(p).

Then it follows that, with ξ_β(0) = 1, (4.37) holds with k = r + 1.
o
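For k = 1 formula (4.37) reduces to (F^n)'' = n F^{n−1} f′ + n(n − 1) F^{n−2} f², which is easy to verify numerically. The sketch below is an added illustration, not part of the thesis; the standard logistic df is an arbitrary smooth choice (it has the convenient closed forms f = F(1 − F) and f′ = f(1 − 2F)). It compares the closed form with a central finite difference of F^n.

```python
import math

def F(x):
    # standard logistic df
    return 1.0 / (1.0 + math.exp(-x))

def second_derivative_formula(x, n):
    # k = 1 instance of (4.37): (F^n)'' = n F^{n-1} f' + n(n-1) F^{n-2} f^2
    Fx = F(x)
    f = Fx * (1.0 - Fx)          # logistic density
    fp = f * (1.0 - 2.0 * Fx)    # its derivative
    return n * Fx ** (n - 1) * fp + n * (n - 1) * Fx ** (n - 2) * f ** 2

def second_derivative_numeric(x, n, h=1e-3):
    # central second difference of F^n
    g = lambda y: F(y) ** n
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / h ** 2

def max_error(n=6, points=(-1.0, 0.0, 0.5, 2.0)):
    return max(abs(second_derivative_formula(x, n) - second_derivative_numeric(x, n))
               for x in points)
```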
Lemma 4.15. Let k ∈ N, k ≥ 2, be fixed, and let F ∈ D_k(G; x_0) with norming constants a_n and b_n, and suppose that its derivative F′ is positive in some neighborhood of ω(F). If for some sequences (a*_n) (a*_n > 0) and (b*_n) (b*_n ∈ R)

lim_{n→∞} a_n/a*_n = 1,  lim_{n→∞} (b*_n − b_n)/a_n = 0,

then F ∈ D_k(G; x_0) with norming constants a*_n and b*_n.
Proof: Let f := F′. By assumption we know that, for q = 0, ..., k, lim_{n→∞} a_n^q (F^n)^{(q)}(a_n x + b_n) = G^{(q)}(x) locally uniformly in x ∈ (x_0, ω(F)). We have to show: for x > x_0 and every sequence (x_n) with x_n → x (n → ∞), it follows that for q = 0, ..., k

lim_{n→∞} (a*_n)^q (F^n)^{(q)}(a*_n x_n + b*_n) = G^{(q)}(x).   (4.40)

For the case q = 0 we refer to Proposition 3.36, and the case q = 1 is proved by Resnick ([51], lemma 2.4).

Let z_n := a_n x_n + b_n and z*_n := a*_n x_n + b*_n. We first prove that for x such that G′(x) > 0

f(z*_n)/f(z_n) → 1  (n → ∞).   (4.41)

Since a*_n (F^n)^{(1)}(z*_n) → G′(x) and a_n (F^n)^{(1)}(z_n) → G′(x) as n → ∞, and since (F^n)^{(1)} = n F^{n−1} f with F^{n−1}(z_n)/F^{n−1}(z*_n) → 1 and a_n/a*_n → 1, we obtain for x such that G′(x) > 0

f(z*_n)/f(z_n) = (a*_n (F^n)^{(1)}(z*_n))/(a_n (F^n)^{(1)}(z_n)) · (a_n/a*_n) · (F^{n−1}(z_n)/F^{n−1}(z*_n)) → 1

as n → ∞. So (4.41) holds. To prove (4.40) for q ≥ 2, we get from (4.41) and Lemma 4.14 for x > x_0

(a*_n)^q (F^n)^{(q)}(z*_n) = (a*_n)^q Σ_{p=0}^{q−1} (Π_{i=0}^{p} (n − i)) F^{n−1−p}(z*_n) Θ(z*_n; p, q − 1)
  ~ a_n^q Σ_{p=0}^{q−1} (Π_{i=0}^{p} (n − i)) F^{n−1−p}(z_n) Θ(z_n; p, q − 1)
  = a_n^q (F^n)^{(q)}(z_n) → G^{(q)}(x)  (n → ∞),

where

Θ(x; p, q) := Σ_{β∈A_q(p)} ξ_β(p) Π_{j=0}^{q} (f^{(j)}(x))^{β_j}  (x ∈ R),

and since z*_n = a_n(a*_n a_n^{-1} x_n + (b*_n − b_n)/a_n) + b_n, so that (z*_n − z_n)/a_n → 0 as n → ∞. □
Let Z_n := max(X_1, ..., X_n) (n ∈ N), where (X_j) is a sequence of iid rv's with df F. We consider here df's with ω(F) = ∞. The assumption ω(F) = ∞ is justified by Lemma 3.1, which says that otherwise {Z_n} converges almost surely either to 1 or to {ω(F)} as n → ∞. The df Ψ_α plays no role here, since df's in its domain of attraction have a finite right endpoint. We now prove a lemma saying that the left tail of the df F has only an exponentially small influence on the limit behavior of the covariances.
Lemma 4.16. Let (X_j) be a sequence of iid rv's with df F, ω(F) = ∞ and E|X_1| < ∞. Furthermore, let (X′_j) be a sequence of iid rv's with a df that coincides with F on some neighborhood of ∞, and such that E|X′_1| < ∞. Define Z_n := max(X_1, ..., X_n) and Z′_n := max(X′_1, ..., X′_n) (n ∈ N). Then the differences Cov(Z_n, {Z_n}) − Cov(Z′_n, {Z′_n}) and Cov([Z_n], {Z_n}) − Cov([Z′_n], {Z′_n}) tend to zero as n → ∞ exponentially fast.
Proof: Let G be the df of X′_1, and let ξ ∈ R be such that F(x) = G(x) if x ≥ ξ. Suppose that F(ξ) > 0; otherwise the result is trivial. Since ξ < ω(F), we have F(ξ) < 1. For n ∈ N we obtain

E(|Z_n| | Z_n < ξ) ≤ E(|ξ| + |Z_n − ξ| | Z_n < ξ) ≤ |ξ| + E(ξ − Z_n | Z_n < ξ)
  ≤ |ξ| + E(ξ − X_1 | Z_n < ξ) = |ξ| + E(ξ − X_1 | X_1 < ξ, ..., X_n < ξ)
  = |ξ| + E(ξ − X_1 | X_1 < ξ) < ∞.

Hence, E(|Z_n| | Z_n < ξ) = O(1) as n → ∞; this is also true for Z′_n. Moreover, E|Z_n| ≤ E(|X_1| + ... + |X_n|) = nE|X_1| = O(n) (n → ∞). Since Z_n and Z′_n have the same distribution on [ξ, ∞), the foregoing implies that

|EZ′_n − EZ_n| = |E(Z′_n | Z′_n < ξ)P(Z′_n < ξ) − E(Z_n | Z_n < ξ)P(Z_n < ξ)|
  ≤ (E(|Z′_n| | Z′_n < ξ) + E(|Z_n| | Z_n < ξ)) F^n(ξ) = O(F^n(ξ))  (n → ∞);

this is also true for |E{Z′_n} − E{Z_n}| and |E(Z′_n{Z′_n}) − E(Z_n{Z_n})|. As a result,

|Cov(Z′_n, {Z′_n}) − Cov(Z_n, {Z_n})|
  ≤ |E(Z′_n{Z′_n}) − E(Z_n{Z_n})| + E{Z′_n}|EZ′_n − EZ_n| + |EZ_n||E{Z′_n} − E{Z_n}| = O(nF^n(ξ))  (n → ∞),

i.e. the difference Cov(Z′_n, {Z′_n}) − Cov(Z_n, {Z_n}) tends to zero as n → ∞ exponentially fast. A similar proof applies to Cov([Z_n], {Z_n}) − Cov([Z′_n], {Z′_n}).
o
We now prove an auxiliary result for Φ_α and Λ.
Lemma 4.17. Let (X_j) be a sequence of iid rv's with df F, α(F) ∈ R and ω(F) = ∞. Furthermore, define I := (α(F), ∞). Let k ∈ N ∪ {0}, and suppose that F ∈ C^{k+1}(R) and F ∈ D_{k+1}(G; α(F)) with norming constants a_n. Let f denote the density of F and f_n the derivative of F^n, and let ν ∈ R_+.

(i) Let G = Φ_α and k + 1 > ν. If for j = 0, ..., k

f^{(j)}(x) ~ ρ_j (1 − F(x))/x^{j+1}  (x → ∞),   (4.42)

for some constants ρ_j ∈ R \ {0}, then

sup_{x∈I} |x^ν f_n^{(k)}(x)| = O(a_n^{-(k+1−ν)})  (n → ∞).

(ii) Let G = Λ, and suppose that f is positive in some neighborhood of ∞. Let 0 < μ < 1 be such that μ(k + 1) > ν. If for j = 0, ..., k

f^{(j)}(x) ~ ρ_j (1 − F(x))/x^{μ(j+1)}  (x → ∞),   (4.43)

for some constants ρ_j ∈ R \ {0}, then

sup_{x∈I} |x^ν f_n^{(k)}(x)| = O(a_n^{-(k+1)+ν/μ})  (n → ∞).
Proof: Clearly, by assumption, we have f, f_n ∈ C^k(R). Let g_n(x; a, b) := (ax + b)^ν f_n^{(k)}(ax + b) (a, b, x ∈ R), and τ := α(F).

(i) On account of Lemma 4.15 we can choose the norming constants a_n and b_n as a_n := (1/(1 − F))^{←}(n) and b_n := 0. From (4.36) we have lim_{n→∞} a_n^{k+1} f_n^{(k)}(a_n x) = Φ_α^{(k+1)}(x) locally uniformly in x ∈ (τ, ∞). Let θ > τ; hence there exists a constant M > 0 such that for all x ∈ [τ, θ] and for all sufficiently large n

|x^ν a_n^{k+1} f_n^{(k)}(a_n x)| ≤ |x^ν (a_n^{k+1} f_n^{(k)}(a_n x) − Φ_α^{(k+1)}(x))| + |x^ν Φ_α^{(k+1)}(x)| ≤ M,

since x^ν Φ_α^{(k+1)}(x) is bounded. So, for x ∈ [τ, θ]

|g_n(x; a_n, 0)| = O(a_n^{-(k+1−ν)})  (n → ∞).   (4.44)

To prove that (4.44) holds for x in neighborhoods of ∞ it suffices to show that if x_n → ∞ then a_n^{k+1−ν} g_n(x_n; a_n, 0) → 0 as n → ∞. We obtain from Lemma 4.14 and (4.42):

lim_{n→∞} a_n^{k+1−ν} g_n(x_n; a_n, 0) = 0.
(ii) By Lemma 4.15 we can choose b_n := (1/(1 − F))^{←}(n) and a_n := R(b_n), where R is defined as (see (3.29))

R(t) := (1 − F(t))^{-1} ∫_t^{∞} (1 − F(y)) dy  (t ∈ R).   (4.45)

We first show that

sup_{x∈I} |x^ν f_n^{(k)}(x)| = O(b_n^ν a_n^{-(k+1)})  (n → ∞).   (4.46)

Let θ > τ; since a_n/b_n → 0 as n → ∞ (cf. Galambos [19], lemma 2.7.2), we have ((a_n x)/b_n + 1)^ν → 1 as n → ∞ uniformly in x ∈ [τ, θ]. Hence there is a constant M > 0 such that for all x ∈ [τ, θ] and for all sufficiently large n

|((a_n x + b_n)/b_n)^ν a_n^{k+1} f_n^{(k)}(a_n x + b_n)| ≤ M,

since Λ^{(k+1)}(x) is bounded, whence for x ∈ [τ, θ]

|g_n(x; a_n, b_n)| = O(b_n^ν a_n^{-(k+1)})  (n → ∞).   (4.47)
Analogous to the case G = Φ_α, writing β(q) := ((a_n x_n)/b_n + 1)^q and γ := ν − μ(k + 1), we now find that if x_n → ∞ as n → ∞, then

|b_n^{-ν} a_n^{k+1} g_n(x_n; a_n, b_n)| ≤ C n(1 − F(a_n x_n + b_n)) β(γ)

for some constant C independent of n. Since f is ultimately positive we have a_n ~ (1 − F(b_n))/f(b_n) ~ ρ_0^{-1} b_n^μ (n → ∞) (Resnick [51], p.88 and condition (4.43)). Since n(1 − F(a_n x_n + b_n)) → 0, and μ(k + 1) > ν, it follows that

b_n^{-ν} a_n^{k+1} g_n(x_n; a_n, b_n) → 0  (n → ∞).   (4.48)

Combining (4.47) with (4.48) yields (4.46). By the fact that a_n ~ ρ_0^{-1} b_n^μ as n → ∞, the proof is complete.
o
Remark 4.18. Condition (4.42) with j = 0 (and ρ_0 = α) is equivalent to the Von Mises condition (see (3.33)). If G = Λ, then the Von Mises condition is equivalent to f(x) ~ (1 − F(x))/R(x) (x → ∞) with R as in (4.45) (see (3.34)). Instead of (4.43) it suffices to assume that lim_{x→∞} R′(x) = 0, and that for some constants ρ_j ∈ R \ {0}

f^{(j)}(x) ~ ρ_j (1 − F(x))/R(x)^{j+1}  (x → ∞).

To show that, for ν > 0, a_n^{(k+1)−ν/μ} (a_n x_n + b_n)^ν f_n^{(k)}(a_n x_n + b_n) → 0 as x_n → ∞, it is sufficient that R(x) ~ ax^μ (x → ∞) for some a > 0, 0 < μ < 1.
Resnick ([51], prop. 2.1) gives sufficient conditions for the moments of the normed maxima to converge to the moments of the extreme value distribution. Here we only state sufficient conditions for the convergence of the first moment.
Proposition 4.19. [Resnick [51], prop. 2.1] Let (X_j) be a sequence of iid rv's with df F ∈ D(G) with norming constants a_n and b_n, and ∫_{-∞}^{∞} |x| dF(x) < ∞. Let Z_n := max(X_1, ..., X_n) (n ∈ N).

(i) Let G = Φ_α with α > 1. Then

lim_{n→∞} E(Z_n/a_n) = ∫_0^{∞} x dΦ_α(x) = Γ(1 − α^{-1}).

(ii) Let G = Λ. Then

lim_{n→∞} E((Z_n − b_n)/a_n) = ∫_{-∞}^{∞} x dΛ(x) = γ,

where γ is Euler's constant.
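Part (i) can be illustrated numerically. The sketch below is an added Monte Carlo illustration, not from the original; the Pareto df F(x) = 1 − x^{-2} (x > 1), n = 2000 and the sample size are arbitrary choices. For this df, F ∈ D(Φ_2) with a_n = √n, so E(Z_n/a_n) should approach Γ(1 − 1/2) = Γ(1/2) = √π ≈ 1.772.

```python
import math
import random

def normed_max_mean(n=2000, samples=200_000, seed=11):
    # F(x) = 1 - x^{-2} (x > 1) has F^{-1}(u) = (1 - u)^{-1/2}; the maximum
    # of n iid copies has df F^n, so Z_n can be sampled as F^{-1}(U^{1/n})
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        u = rng.random()
        total += (1.0 - u ** (1.0 / n)) ** -0.5
    return total / (samples * math.sqrt(n))
```

The limiting Fréchet distribution Φ_2 has infinite variance, so the sample mean converges slowly; a generous tolerance is appropriate when checking the estimate.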
Lemma 4.20. Let F ∈ D(G), where G = Φ_α or G = Λ, with norming constants a_n, and ω(F) = ∞. Then for every ε, q > 0

lim_{n→∞} a_n^q e^{-εn} = 0.   (4.49)

Proof: Let G = Φ_α. On account of Lemma 3.39 we can set a_n := L(n) with L := (1/(1 − F))^{←}. By de Haan ([26], part G of thm. 1.2.1, p.22) we know that L ∈ RV_ρ with ρ := 1/α. From the Karamata representation we have for x ≥ 1 (see Resnick [51], remark, p.19)

L(x) = c(x) exp(∫_1^{x} y^{-1} ρ(y) dy),

where ρ, c : R_+ → R_+, and c(x) → c > 0, ρ(x) → ρ as x → ∞. There exists a positive integer n_0 such that |ρ(x) − ρ| < 1 and |c(x) − c| < 1 for x ≥ n_0. Hence for n ≥ n_0

0 < a_n = L(n) = c(n) L(n_0) (c(n_0))^{-1} exp(∫_{n_0}^{n} x^{-1} ρ(x) dx)
  ≤ (c + 1) L(n_0) (c(n_0))^{-1} exp((ρ + 1) ∫_{n_0}^{n} x^{-1} dx) = O(n^{ρ+1})  (n → ∞).   (4.50)

Hence a_n^q = O(n^{q(ρ+1)}) as n → ∞. So (4.49) holds.
Next, let G = Λ; set b_n := (1/(1 − F))^{←}(n) and a_n := R(b_n), where R is defined as in (4.45). By Resnick ([51], prop. 0.12) we know that the 'continuous' norming function a(t) is slowly varying at ∞. Now, the proof is similar to that of part (i).
o
We now come to the main theorem of this section.
Theorem 4.21. Let (X_j) be a sequence of iid rv's with df F, ω(F) = ∞, and E|X_1| < ∞. Let m ∈ N, and suppose that F ∈ D_{2m+1}(G; x_0) with norming constants a_n. Let f denote a density of F and assume that f is positive in some neighborhood of ∞. Furthermore, let sup_{x∈R, k=0,...,2m−1} |f^{(k)}(x)| < ∞, and suppose that

lim_{x→∞} x f^{(k)}(x) = 0  (k = 0, ..., 2m − 1).   (4.51)

Let Z_n := max(X_1, ..., X_n) (n ∈ N).

(i) Let G = Φ_α with α > 1. Suppose that (4.42) holds for j = 0, ..., 2m. If m ≥ 2, then for any q < 2(m − 1)

Cov([Z_n], {Z_n}) = −1/12 + O(a_n^{-q})  (n → ∞),
Cov(Z_n, {Z_n}) = O(a_n^{-q})  (n → ∞).

(ii) Let G = Λ. Suppose that (4.43) holds for j = 0, ..., 2m and some 0 < μ < 1. If m > (1/2)(3/μ − 1), then for any q < 2m + 1 − 3/μ

Cov([Z_n], {Z_n}) = −1/12 + O(a_n^{-q})  (n → ∞),
Cov(Z_n, {Z_n}) = O(a_n^{-q})  (n → ∞).
Proof: (i) By Lemma 4.15 we may choose a_n := (1/(1 − F))^{←}(n). Moreover, on account of Lemmas 4.16 and 4.20 it is no restriction to assume that F satisfies the following properties: for some p > x_0 we have p := α(F) ∈ Z and F ∈ C^{2m+1}(R), and hence F ∈ D_{2m+1}(G; p). Note that a_n = O(n^{1+1/α}) (n → ∞) (cf. (4.50)).

Let f_n(x) := n F^{n−1}(x) f(x) denote the density of Z_n (n ∈ N). Here we will apply part (i) of Corollary 1.27. Let (compare (4.23))

g_{n,1}(y) := y ∫_y^{y+1} f_n(x) dx,  g_{n,2}(y) := y ∫_y^{y+1} (x − y) f_n(x) dx,

g_{n,3}(y) := ∫_y^{y+1} x(x − y) f_n(x) dx.
Note that g_{n,i}^{(k)}(y) can be derived from Proposition 4.1. As in Lemma 4.2, we first calculate E[Z_n], E([Z_n]{Z_n}), E(Z_n{Z_n}), and E{Z_n}. By (4.51) and (4.42), lim_{x→∞} x f_n^{(k)}(x) = 0 for k = 0, ..., 2m − 1 (see also Lemma 4.14); the latter condition yields lim_{y→∞} g_{n,i}^{(k)}(y) = 0 for k = 0, ..., 2m − 1 and i = 1, 2, 3. So, we can apply Corollary 1.27(i).

Similar to the calculation in the proof of Lemma 4.2(i) we now derive from the Euler–MacLaurin sum formula (cf. Proposition 1.26)

E[Z_n] = Σ_{j=p}^{∞} j ∫_j^{j+1} f_n(z) dz = Σ_{j=p}^{∞} g_{n,1}(j)
  = ∫_p^{∞} y ∫_y^{y+1} f_n(z) dz dy + R_{n,1} = ∫_p^{∞} f_n(z) ∫_{z−1}^{z} y dy dz + R_{n,1}
  = ∫_p^{∞} (z − 1/2) f_n(z) dz + R_{n,1} = EZ_n − 1/2 + R_{n,1},

where the remainder term R_{n,1} satisfies (see Corollary 1.27(i))

|R_{n,1}| ≤ |g_{n,1}(p)| + Σ_{j=1}^{m} |g_{n,1}^{(2j−1)}(p)| + ∫_p^{∞} |g_{n,1}^{(2m)}(y)| dy.

Hence

E[Z_n] = EZ_n − 1/2 + R_{n,1}.   (4.52)
Similarly, we get

E([Z_n]{Z_n}) = Σ_{j=p}^{∞} j ∫_j^{j+1} (z − j) f_n(z) dz = (1/2)EZ_n − 1/3 + R_{n,2},

where

|R_{n,2}| ≤ |g_{n,2}(p)| + Σ_{j=1}^{m} |g_{n,2}^{(2j−1)}(p)| + ∫_p^{∞} |g_{n,2}^{(2m)}(x)| dx.

Hence

E([Z_n]{Z_n}) = (1/2)EZ_n − 1/3 + R_{n,2}.   (4.53)

Moreover,

E(Z_n{Z_n}) = Σ_{j=p}^{∞} ∫_j^{j+1} z(z − j) f_n(z) dz = (1/2)EZ_n + R_{n,3},

where

|R_{n,3}| ≤ |g_{n,3}(p)| + Σ_{j=1}^{m} |g_{n,3}^{(2j−1)}(p)| + ∫_p^{∞} |g_{n,3}^{(2m)}(x)| dx.

Hence

E(Z_n{Z_n}) = (1/2)EZ_n + R_{n,3}.   (4.54)
To get an asymptotic expression for E{Z_n} as n → ∞, we consider the following. By (1.5) the density f_{{Z_n}} is given by f_{{Z_n}}(x) := Σ_{j≥p} f_n(j + x) (x ∈ [0,1)). Let x ∈ [0,1); applying the Euler–MacLaurin sum formula for f_{{Z_n}} we obtain (cf. Proposition 1.26; compare with the proof of Lemma 4.2(ii))

f_{{Z_n}}(x) = Σ_{j=p}^{∞} f_n(j + x) = ∫_p^{∞} f_n(y + x) dy + R_n(x)
  = ∫_{p+x}^{∞} f_n(y) dy + R_n(x) = 1 + R_n(x) − F^n(p + x),

where the remainder term R_n(x) satisfies (cf. Corollary 1.27(i))

|R_n(x)| ≤ f_n(p + x) + Σ_{j=1}^{m} |f_n^{(2j−1)}(p + x)| + ∫_p^{∞} |f_n^{(2m)}(y + x)| dy.   (4.55)

Obviously, we have

sup_{x∈[0,1)} ∫_p^{∞} |f_n^{(2m)}(y + x)| dy ≤ ∫_p^{∞} |f_n^{(2m)}(y)| dy.   (4.56)

Let

β*_n := ∫_p^{∞} |f_n^{(2m)}(y)| dy
  + sup_{x∈[0,1)} { f_n(p + x) + F^n(p + x) + Σ_{j=1}^{m} |f_n^{(2j−1)}(p + x)| }.   (4.57)
Then from (4.55) and (4.56) we get

sup_{x∈[0,1)} |f_{{Z_n}}(x) − 1| ≤ β*_n,   (4.58)

and hence (see the proof of Lemma 4.2(iii))

E{Z_n} = 1/2 + O(β*_n)  (n → ∞).   (4.59)

We now define

α*_n := max_{i=1,2,3} { |g_{n,i}(p)| + Σ_{j=1}^{m} |g_{n,i}^{(2j−1)}(p)| + ∫_p^{∞} |g_{n,i}^{(2m)}(y)| dy }.   (4.60)

As in the proof of Lemma 4.2(v), by (4.52), (4.53), (4.54) and (4.59) we now find, with β̃_n := min{1/2, β*_n},

Cov([Z_n], {Z_n}) = −1/12 + O(α*_n) + O(β̃_n)(EZ_n + 1/2 + O(α*_n)),
Cov(Z_n, {Z_n}) = O(α*_n) + O(β*_n) EZ_n,   (4.61)

as n → ∞.

Below we will show that max_{i=1,2,3} ∫_p^{∞} |g_{n,i}^{(2m)}(y)| dy and ∫_p^{∞} |f_n^{(2m)}(y)| dy determine the rate of convergence of α*_n and β*_n, respectively. Specifically, we show that these rates are slower than exponentially fast. By Lemma 4.5 it follows that

max_{i=1,2,3} ∫_p^{∞} |g_{n,i}^{(2m)}(y)| dy = O(ε_n(2m − 1) + ε_n(2m)),   (4.62)

∫_p^{∞} |f_n^{(2m)}(y)| dy = O(ε_n(2m)),   (4.63)

as n → ∞, where

δ_n(j) := sup_{y∈(p,∞)} |f_n^{(j)}(y)|,  ε_n(j) := sup_{y∈(p,∞)} |y^{ν+1} f_n^{(j)}(y)|.   (4.64)

Lemma 4.17(i) yields

ε_n(j) = O(a_n^{-(j+1)+ν+1})  (j = 2m − 1, 2m),

and hence δ_n(j) = O(ε_n(j)). So, α*_n = O(a_n^{ν+1−2m}) and β*_n = O(a_n^{ν−2m}) as n → ∞. Lemma 4.20 implies that these rates are slower than exponentially fast.
By part (i) of Proposition 4.19 we know that EZ_n = O(a_n) (n → ∞). Then it follows from (4.61) that

Cov([Z_n], {Z_n}) = −1/12 + O(a_n^{-(2m−ν−1)})  (n → ∞),
Cov(Z_n, {Z_n}) = O(a_n^{-(2m−ν−1)})  (n → ∞).
Next, to complete the proof we have to show that max_{i=1,2,3} ∫_p^{∞} |g_{n,i}^{(2m)}(y)| dy and ∫_p^{∞} |f_n^{(2m)}(y)| dy determine the rate of convergence of α*_n and β*_n, respectively, as n → ∞. To this end we prove:

(a) g_{n,i}(p) and g_{n,i}^{(1)}(p) (i = 1, 2, 3) tend to zero exponentially fast;

(b) for k = 2, ..., 2m − 1, g_{n,i}^{(k)}(p) (i = 1, 2, 3) tend to zero exponentially fast.

(a) Proposition 4.1 implies

|g_{n,1}(p)| = |p ∫_p^{p+1} f_n(x) dx| ≤ |p| F^n(p + 1),

|g_{n,2}(p)| = |p ∫_p^{p+1} (x − p) f_n(x) dx| ≤ |p| ∫_p^{p+1} |x − p| f_n(x) dx ≤ |p| F^n(p + 1),

|g_{n,3}(p)| = |∫_p^{p+1} x(x − p) f_n(x) dx| ≤ ∫_p^{p+1} |x| f_n(x) dx ≤ (|p| + 1) F^n(p + 1).

So, g_{n,i}(p) (i = 1, 2, 3) tends to zero as n → ∞ exponentially fast. Analogously, we can show that this is also true for g_{n,i}^{(1)}(p) (i = 1, 2, 3).

(b) For k = 2, ..., 2m − 1 and i = 1, 2, 3, from Proposition 4.1 we can deduce expressions for g_{n,i}^{(k)}(p). By Lemma 4.14 it follows that g_{n,i}^{(k)}(p) tends to zero exponentially fast.

In conclusion, max_{i=1,2,3} ∫_p^{∞} |g_{n,i}^{(2m)}(y)| dy determines the rate of convergence of α*_n.

Secondly, to prove that ∫_p^{∞} |f_n^{(2m)}(y)| dy determines the rate of convergence of β*_n as n → ∞, we consider two cases:

(a) sup_{x∈[0,1)} (f_n(p + x) + F^n(p + x)) tends to zero exponentially fast;

(b) for k = 1, ..., 2m − 1, sup_{x∈[0,1)} |f_n^{(k)}(p + x)| tends to zero exponentially fast.
To this end, let

M := sup_{x∈R, k=0,...,2m−1} |f^{(k)}(x)|.   (4.65)

(a) Clearly, sup_{x∈[0,1)} F^n(p + x) ≤ F^n(p + 1), and

sup_{x∈[0,1)} f_n(p + x) = sup_{x∈[0,1)} n F^{n−1}(p + x) f(p + x) ≤ M n F^{n−1}(p + 1).

Hence sup_{x∈[0,1)} (f_n(p + x) + F^n(p + x)) tends to 0 exponentially fast.

(b) By Lemma 4.14 and (4.65) we get for k = 1, ..., 2m − 1

sup_{x∈[0,1)} |f_n^{(k)}(p + x)| ≤ C sup_{x∈[0,1)} [ Σ_{j=0}^{k−1} n^{j+1} F^{n−1−j}(p + x) ] ≤ C(k + 1) n^k F^{n−k}(p + 1)

for some constant C independent of n. So, sup_{x∈[0,1)} |f_n^{(k)}(p + x)| tends to zero exponentially fast for k = 1, ..., 2m − 1.

Consequently, ∫_p^{∞} |f_n^{(2m)}(y)| dy determines the rate of convergence of β*_n.
(ii) This proof is similar to that of part (i). On account of Lemma 4.15 we can take b_n := (1/(1 − F))^{←}(n) and a_n := R(b_n), where R is given as in (4.45). Now, by Lemma 4.17(ii), it follows that ε_n(j) = O(a_n^{-(j+1)+(ν+1)/μ}) as n → ∞ (j = 2m − 1, 2m) (see (4.63)), and hence α*_n = O(a_n^{-2m+(ν+1)/μ}), β*_n = O(a_n^{-(2m+1)+(ν+1)/μ}) as n → ∞. By Lemma 4.20 it follows that these rates are slower than exponentially fast. In addition, we can use EZ_n ~ b_n (n → ∞) (cf. Proposition 4.19(ii)). As in the proof of Lemma 4.17(ii) we have b_n ~ ρ_0^{1/μ} a_n^{1/μ} (n → ∞).
o
In part (i) of the foregoing theorem the assumptions ∫_{-∞}^{∞} |x| dF(x) < ∞ and α > 1 are sufficient for E|X_1| < ∞. If F ∈ D(Φ_α), then we know that 1 − F(x) = x^{-α} L(x) (x → ∞), where L is slowly varying at ∞; this implies ∫_0^{∞} x^k dF(x) < ∞ if k < α, but it is possible that ∫_{-∞}^{0} |x|^k dF(x) = ∞ for any k > 0 (see Resnick [51], p.77).

In Sections 3.2 and 3.1.3 we established sufficient conditions for the fractional parts of maxima to converge in distribution to a uniform rv on [0,1). By (4.58) it follows that the sequence β̃_n := min{1/2, β*_n}, where β*_n is defined by (4.57), majorizes the rate of convergence of sup_{x∈[0,1)} |f_n(x) − 1| as n → ∞, where f_n here denotes the density of the fractional part of the n-th partial maximum. In the proof of Theorem 4.21 we determined this sequence β*_n. So we have the following corollary.
Corollary 4.22. Under the conditions of Theorem 4.21: let h_n denote the density of {Z_n}.

(i) Let G = Φ_α with α > 1, and suppose that (4.42) holds for j = 0, ..., 2m. If m ≥ 2, then for any q < 2m − 1

sup_{x∈[0,1)} |h_n(x) − 1| = O(a_n^{-q})  (n → ∞).

(ii) Let G = Λ, and suppose that (4.43) holds for j = 0, ..., 2m and some 0 < μ < 1. If m > (1/2)(3/μ − 1), then for any q < 2m + 1 − 2/μ

sup_{x∈[0,1)} |h_n(x) − 1| = O(a_n^{-q})  (n → ∞).
In the following examples we see that the orders of approximation for the covariances considered in Theorem 4.21 for distributions in the domain of attraction of Φ_α and Λ are different. Obviously, this is also true for the density considered in Corollary 4.22. Furthermore, we show that it is possible that this density tends to 1 as n → ∞ exponentially fast.
Examples 4.23. Let (X_j) be a sequence of iid rv's with df F and density f, and define Z_n := max(X_1, ..., X_n) (n ∈ N). Let f_n and h_n denote the density of Z_n and {Z_n}, respectively. We consider three df's F with the property that {Z_n} converges in distribution to U as n → ∞ (see Examples 3.29).

(a) For the Pareto df F(x) = 1 − x^{-2} (x > 1) we have F ∈ D(Φ_2) with norming constants a_n = √n (see Example 3.59(a)). It is easy to show by induction that for all nonnegative integers p and for some integers c_p(j)

(F^n)^{(p+1)}(x) = Σ_{j=1}^{p+1} F^{n−j}(x) (Π_{k=0}^{j−1} (n − k)) c_p(j) x^{-(2j+p+1)}.

For each m ∈ N, m ≥ 2, F ∈ D_{2m+1}(Φ_2) with a_n = √n. Furthermore, it is easily verified that

f^{(j)}(x) ~ ((−1)^j Π_{k=0}^{j} (2 + k)) (1 − F(x))/x^{j+1}  (x → ∞)

for j = 0, ..., 2m; so (4.42) holds. Then Theorem 4.21 yields that for q < m − 1

Cov([Z_n], {Z_n}) = −1/12 + O(n^{-q})  (n → ∞),
Cov(Z_n, {Z_n}) = O(n^{-q})  (n → ∞),

and by Corollary 4.22: sup_{x∈[0,1)} |h_n(x) − 1| = O(n^{-q-1/2}) (n → ∞). We note that similar results are obtained for F(x) = 1 − x^{-θ} (x > 1) for some θ > 1.
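Corollary 4.22 predicts that for this Pareto df the density of {Z_n} is already very close to uniform for moderate n. The sketch below is an added Monte Carlo illustration, not part of the thesis; n = 1000, the sample size and the seed are arbitrary choices. It samples Z_n exactly via the inverse df, Z_n distributed as F^{-1}(U^{1/n}) with U uniform on (0,1), and checks the first two moments of {Z_n} against the uniform values 1/2 and 1/3.

```python
import random

def frac_moments(n=1000, samples=100_000, seed=7):
    # Pareto df F(x) = 1 - x^{-2} (x > 1), so F^{-1}(u) = (1 - u)^{-1/2};
    # the maximum of n iid copies has df F^n, hence Z_n =d F^{-1}(U^{1/n})
    rng = random.Random(seed)
    m1 = m2 = 0.0
    for _ in range(samples):
        u = rng.random()
        z = (1.0 - u ** (1.0 / n)) ** -0.5
        w = z - int(z)  # fractional part (z > 0 here)
        m1 += w
        m2 += w * w
    return m1 / samples, m2 / samples
```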
(b) For F(x) = 1 − exp(−√x) (x > 0) we have F ∈ D(Λ) with norming constants a_n = 2 log n and b_n = (log n)² (see Example 3.59(b)). It is easily shown by induction that for all nonnegative integers p and for some integers c_{p,j}(s)

(F^n)^{(p+1)}(x) = (4x)^{-(p+1)/2} Σ_{j=1}^{p+1} F^{n−j}(x) e^{-j√x} Σ_{s=0}^{p+1−j} c_{p,j}(s) x^{-s/2} Π_{k=0}^{j−1} (n − k),

and Λ^{(p+1)}(x) = Λ(x) Σ_{j=1}^{p+1} c_p(j) e^{-jx} for some integers c_p(j); moreover, c_{p,j}(0) = c_p(j) for all j = 1, ..., p + 1. For each m ∈ N, m ≥ 3, F ∈ D_{2m+1}(Λ) with a_n = 2 log n and b_n = (log n)². It is easy to verify that (4.43) holds with μ = 1/2. Then Theorem 4.21 yields that, for q < 2m − 5,

Cov([Z_n], {Z_n}) = −1/12 + O((log n)^{-q})  (n → ∞),
Cov(Z_n, {Z_n}) = O((log n)^{-q})  (n → ∞),

and by Corollary 4.22: sup_{x∈[0,1)} |h_n(x) − 1| = O((log n)^{-q-2}) (n → ∞). We note that analogous results can be obtained for F(x) = 1 − exp(−x^μ) (x > 0) for some 0 < μ < 1.
(c) Let F(:c) = 1- 1/log:1: (T > e). We note that F does no! hf'long to one of t.ltp tlLn~e possihh· domains of att.raction. It. is n';idily verihcd by in<i1l<.:tioIl
that. for all nonnegative integers J!
1 P (i. 1 ) F(:t:)" i. i-I . (F'l)(p)(:r) = - '"' II(n - k) _M."_ L. (:,(,)(lng:t))
Ti' D (log :c)p +, t . ,=1 10=0 )=0
( 1,66)
for SOllle integer£::: cpcn. LeI. THEN; writing C = ('2u/+ 1)2. ma~ ic2,MIU)1 - .J :::(») ... ):lTrL
and 1\" = exp ( -'11, + jn/(2m + 1)), we gd ('0 .I 1],\'lm) (:J:) Id:t f~n ll.f,~2m)(en:r)I(!:J: + f.lIll.fS2m\(~TI:r)Id.T ~ ~I-n KTI
II + I 1 , (4.67)
TlwlL by (4.66)
C'c-·1m71 /00 1 ~L"'+111.i ('.1 __ 1_)"-' .. - z 12 ::; -_... I -,---------::-)."...' ---'--:::-2 d:r
(2rn + 1)2 :r.2m+1 ,,,1 n+ ,,~:r: (n 1··log:1: 1,tl+ ,.... ~~
'1'.:Xl __ 2'11~.
- (' 2Hl+I -2m11. f . 1 I" - ("»2m+ I , "1TII.7L~ ..:.... .'1). (' --( •. 1.- .'/. (. . X;lm-fl 2n/.
,0(11
£n?",+l exp( -2m 1;;/(2nl + 1)) = 0((,-';"(2'" I)). 2ni V
(4,68)
4.4- Ma.xima of iid TV'S 113
Similarly, we obtain
IJ ::::; Cn2m.t·]c-~mn l gn(:r)dx j 9n(X);= ~(1 - n+l~gxt-(2m+l)-1£1 ... ."
Clearly, g" is increasing on (e 1-
r\ I),n), and hence
( 4.69)
It is easy to see that sup_{x ∈ ℝ} |f^{(k)}(x)| < ∞ for k = 0, ..., 2m − 1, and that (4.51) holds. As in Corollary 4.22, the rate of convergence of sup_{x ∈ [0,1)} |h_n(x) − 1| (n → ∞) is majorized by β_n', given as in (4.57). Since the rate of convergence of β_n' is determined by the term ∫ |f_n^{(2m)}(y)| dy, we get, by combining (4.62), (4.63) and (4.64) with (4.67), (4.68) and (4.69),

sup_{x ∈ [0,1)} |h_n(x) − 1| = O(e^{−mn/(2m+1)})    (n → ∞).
Finally, we note that the covariances as considered in Theorem 4.21 do not exist, since 𝔼Z_n^2 = ∞ for all n ∈ N.
Remark 4.24. In the foregoing examples it seems that the rate of convergence of h_n(x) → 1 as n → ∞ depends on the tail of the underlying distribution: the thicker the tail, the faster the convergence. We did not find a theoretical result of this type. By contrast, for the density f_n of the fractional part of the n-th partial sum of iid rv's it was shown that f_n(x) → 1 as n → ∞ exponentially fast, independently of the tail behavior (cf. Proposition 2.3).
114 Chapter 4. Covariances
Chapter 5
Infinite divisibility
The classical theory of infinite divisibility plays an important role in probability theory (see e.g. Feller [18]). Schatte ([55]) studies infinite divisibility in the modulo 2π sense. In this chapter we deal with infinite divisibility in the (equivalent) modulo 1 sense. We prove a new characterization of infinitely divisible (mod 1) distributions; this gives further insight into infinite divisibility (mod 1). To prove this characterization, we represent infinitely divisible (mod 1) FSS's in a Lévy-Khinchine-type canonical form and use a characterization of distributions with finite replication number. The latter characterization also leads to a generalization of an equation studied by Goldman ([23]) and Arnold and Meeden ([3]).
The organization of this chapter is as follows. In Section 5.1 we give a new characterization of distributions with finite replication number to prove a generalization of an equation studied by Goldman ([23]) and Arnold and Meeden ([3]) (see Section 5.2). The results of Sections 5.1 and 5.2 are based on Wilms and Thiemann ([75]). In Section 5.3 we characterize infinitely divisible (mod 1) distributions, and we prove some new properties of these distributions. We also give a Lévy-Khinchine-type, a Lévy-type and a Kolmogorov-type canonical form for infinitely divisible (mod 1) FSS's. Finally, in Section 5.4, we briefly consider self-decomposability and stability (mod 1).
5.1 Replication number of a distribution
The concept of replication number, due to Schatte ([55]), is defined as follows.
Definition 5.1. The replication number of a rv X is defined by
Rep(X) := sup{n ∈ N : {X} =_d {X + 1/n}}.
Clearly, Rep({X}) = Rep(X). When c_X = c we sometimes write Rep(c) instead of Rep(X). We note that for every X we have Rep(X) ≥ 1, and that Rep(U) = ∞. Furthermore, if X ∈ 𝒳[0,1) has replication number r ∈ N, then the rv X has the same distribution on each interval of the form [j/r, (j+1)/r) (j = 0, ..., r − 1).
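By the characterization below (cf. Theorem 5.4), the replication number can be read off from the Fourier–Stieltjes coefficients: c_{{X}}(k) vanishes exactly off Rep(X)·Z. A minimal numerical sketch of this; the particular X, the sample size and the tolerance are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def fss(samples, k):
    """Empirical Fourier-Stieltjes coefficient c(k) = E exp(2*pi*i*k*{X})."""
    return np.mean(np.exp(2j * np.pi * k * np.mod(samples, 1.0)))

# X = U_3 + (1/3){Z} with an illustrative Z: here {Z} uniform on [0, 0.4).
n = 200_000
x = rng.integers(0, 3, size=n) / 3.0 + rng.uniform(0.0, 0.4, size=n) / 3.0

# c(k) should be (near) zero exactly when k is not a multiple of r = 3.
support = [k for k in range(1, 10) if abs(fss(x, k)) > 0.01]
print(support)  # the surviving frequencies are the multiples of 3
```

The empirical coefficients at k = 1, 2, 4, 5, 7, 8 are of order n^{-1/2}, while those at k = 3, 6, 9 stay bounded away from zero.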
To characterize distributions with finite replication number we need the following two properties of the continuous and discrete uniform distributions.
Lemma 5.2. (i) Let Y be a rv independent of U. Then {U + Y} =_d U.
(ii) Let r ∈ N. Let X and Y be rv's that are independent of U_r, and let Y be lattice on (1/r)Z. Then

{X + Y + U_r} =_d {X + U_r}.
Proof: (i) By (1.10) and Proposition 1.19(i), we have for k ∈ Z_0

c_{{U+Y}}(k) = c_{{U}}(k) c_{{Y}}(k) = 0.

By Proposition 1.6, this is equivalent to {U + Y} =_d U.
(ii) By (1.10) and Proposition 1.19(ii), c_{{X+Y+U_r}}(k) = c_{{X+Y}}(k) c_{U_r}(k) = 0 for all k ∉ rZ; for k ∈ rZ we have kY ∈ Z (compare Lemma 1.16), whence

c_{{X+Y+U_r}}(k) = c_{{X}}(k) c_{U_r}(k).

So c_{{X+Y+U_r}} does not depend on Y, and is therefore equal to c_{{X+U_r}}. By Proposition 1.6 this implies {X + Y + U_r} =_d {X + U_r}. □
Remarks 5.3. (i) In part (ii) of Lemma 5.2, X and Y need not be independent. This leads to the following result, which is of some interest by itself: let Z be a rv independent of U_r. Since (1/r)[Z] is lattice on (1/r)Z, from Lemma 5.2(ii) we obtain

{U_r + (1/r)Z} = {U_r + (1/r){Z} + (1/r)[Z]} =_d {U_r + (1/r){Z}} = U_r + (1/r){Z}.    (5.1)
(ii) Taking P(X = 0) = 1 in Lemma 5.2(ii) we obtain

{U_r + Y} =_d U_r

for all Y independent of U_r and lattice on (1/r)Z.
Next, we characterize distributions with finite replication number.
Theorem 5.4. Let r ∈ N, and let X be independent of U_r. Then the following statements are equivalent.
(a) Rep(X) ∈ rN or Rep(X) = ∞.
(b) {X + 1/r} =_d {X}.
(c) {X + U_r} =_d {X}.
(d) c_{{X}}(k) = 0 if k ∉ rZ.
(e) {X} =_d U_r + (1/r){Z} for some Z independent of U_r.
Proof: (b) ⇔ (c): This follows immediately from

c_{{X}}(k) = c_{{X}}(k) e^{2πik/r}  (k ∈ Z)  iff  c_{{X}}(k) = c_{{X}}(k) c_{U_r}(k)  (k ∈ Z).
(d) ⇒ (c): Using part (d) and (1.10) we obtain (cf. Proposition 1.19(ii))

c_{{X+U_r}}(k) = c_{{X}}(k) c_{U_r}(k) = c_{{X}}(k)    (k ∈ Z),

and by Proposition 1.6, {X + U_r} =_d {X}. Thus (c) holds.
(c) ⇒ (e): Since (1/r)[rX] is lattice on (1/r)Z, we have from Lemma 5.2(ii)

{X} =_d {X + U_r} = {(1/r){rX} + (1/r)[rX] + U_r} =_d {(1/r){rX} + U_r} = U_r + (1/r){rX}.

Since X is independent of U_r, this is also true for {rX}. Hence, choosing Z := rX yields {X} =_d U_r + (1/r){Z}. Thus (e) holds.
(e) ⇒ (d): We have c_{{X}}(k) = c_{U_r}(k) c_{(1/r){Z}}(k) = 0 for all k ∉ rZ. Thus (d) holds.
The equivalence of (b), (c), (d) and (e) has now been proved.
(a) ⇒ (b): If Rep(X) = ∞, then for arbitrarily large p, {X} =_d {X + 1/p}. Since (b) is equivalent to (d), for arbitrarily large p we get c_{{X}}(k) = 0 if k ∉ pZ; hence c_{{X}}(k) = 0 if k ∈ Z_0, i.e. {X} =_d U. So {X + 1/r} =_d {X} holds. If Rep(X) = mr for some m ∈ N, then we have {X} =_d {X + 1/(mr)}, and hence {X} =_d {X + m/(mr)} = {X + 1/r}. Thus (b) holds.
(b) ⇒ (a): Suppose p := Rep(X) < ∞. Then {X} =_d {X + 1/p}, and also {X} =_d {X + 1/r}. So, by the equivalence of (b) and (d), it follows that
c_{{X}}(k) = 0 if k ∉ rZ or k ∉ pZ. Hence c_{{X}}(k) = 0 if k ∉ qZ with q := lcm(p, r);
again by (b) ⇔ (d) we have {X} =_d {X + 1/q}. From the definition of Rep(X) we have q ≤ p, and hence p = q ∈ rN. Thus (a) holds.
Corollary 5.5. Let X be a rv, and let q ∈ N.
(i) {X} =_d U iff Rep(X) = ∞.
(ii) If {X} =_d U_q + (1/q){Z} for some Z independent of U_q, then Rep(X) = q Rep(Z).
(iii) If Rep(X) = r ∈ N, then {X} =_d U_r + (1/r){Z} for some Z independent of U_r with Rep(Z) = 1.
Proof: (i) Clearly, if {X} =_d U, then Rep(X) = ∞. The converse implication is shown in the proof of "(a) ⇒ (b)" of Theorem 5.4.
(ii) If Rep(X) = ∞, then {X} =_d U; hence c_{{Z}}(k) = c_{{X}}(qk) = 0 for all k ∈ Z_0, or equivalently, {Z} =_d U. So Rep(X) = q Rep(Z). If r := Rep(X) < ∞, then r ∈ qN (cf. Theorem 5.4). Since {qX} =_d {Z}, we have Rep(qX) = Rep(Z). By Theorem 5.4 we have c_{{qX}}(k) = c_{{X}}(qk) = 0 if k ∉ (r/q)Z, and hence Rep(qX) ∈ (r/q)N. Suppose now that Rep(qX) = ar/q for some a > 1. Theorem 5.4 implies that c_{{qX}}(k) = 0 if k ∉ (ar/q)Z, or equivalently, c_{{X}}(qk) = 0 if qk ∉ arZ; since {X} =_d U_q + (1/q){Z}, Theorem 5.4 also implies that c_{{X}}(k) = 0 if k ∉ qZ. Hence it follows that c_{{X}}(k) = 0 if k ∉ arZ. Once again, Theorem 5.4 now yields that Rep(X) ∈ arN, which contradicts the assumption that Rep(X) = r. So Rep(qX) = r/q.
(iii) Combining Theorem 5.4 and part (ii) of this corollary completes the proof. □
Remark 5.6. (i) In fact, in the proof of "(b) ⇒ (a)" of Theorem 5.4 we show that, for Rep(X) < ∞, the set {r ∈ N : {X} =_d {X + 1/r}} coincides with the set of all divisors of Rep(X).
(ii) By Theorem 5.4 and Corollary 5.5(i) we have the following implication: if X =_d {X + 1/r} for all r ∈ N, then X =_d U.
Next, we prove some properties of distributions with finite replication number.
Lemma 5.7. Let X and Y be independent rv's with Rep(X) = r ∈ N and Rep(Y) = p ∈ N.
(i) Then Rep(X + Y) ∈ mN with m = lcm(p, r).
(ii) Let q ∈ N. Then Rep(qX) ∈ mN with m = r/gcd(q, r).
(iii) Let (Z_n) be a sequence of rv's with Rep(Z_n) ∈ mN (n ∈ N). Suppose that {Z_n} →_d Z as n → ∞ for some Z ∈ 𝒳[0,1). Then Rep(Z) ∈ mN.
Proof: (i) Applying Corollary 5.5(iii) and Proposition 1.20(i) we get

{X + Y} =_d {U_r + (1/r){Z} + U_p + (1/p){Z_1}} =_d {U_m + (1/r){Z} + (1/p){Z_1}}

for some Z independent of U_r, Z_1 independent of U_p, (Z, Z_1) independent of U_m, and Rep(Z) = Rep(Z_1) = 1. Clearly, c_{{X+Y}}(k) = 0 if k ∉ mZ. Now Theorem 5.4 implies that Rep(X + Y) ∈ mN.
(ii) From Corollary 5.5(iii) we obtain

{qX} =_d {q{X}} =_d {qU_r + (q/r){Z}}    (5.2)

for some Z, independent of U_r, with Rep(Z) = 1. Let s := lcm(q, r). By the facts that ps = qr, p = gcd(q, r) and gcd(q/p, r/p) = 1, and applying Proposition 1.20(ii) and (iii), we obtain {qU_r} =_d U_{r/p}. By Lemma 5.2(ii) we get

{U_{r/p} + (q/r){Z}} = {U_{r/p} + (p/r){q{Z}/p} + (p/r)[q{Z}/p]} =_d U_{r/p} + (p/r){q{Z}/p},

whence by (5.2)

{qX} =_d U_{r/p} + (p/r){q{Z}/p}.

So Theorem 5.4 yields that Rep(qX) ∈ (r/p)N.
(iii) Since Rep(Z_n) ∈ mN for all n ∈ N, we have c_{Z_n}(k) = 0 if k ∉ mZ; hence c_Z(k) = 0 if k ∉ mZ. Once again, Theorem 5.4 yields the assertion. □
In the following examples we show that in each of the three parts of the foregoing lemma it is really possible that the replication number under consideration is a nontrivial multiple of m.
Examples 5.8. (a) Let X := U_r + (1/(rq)){Z} with r, q ∈ N, q ≠ 1, and 1 < p := Rep(Z) < ∞. Since 0 ≤ (1/q){Z} < 1/q, we have Rep((1/q){Z}) = 1, and hence, by Corollary 5.5(ii), Rep(X) = r. Let j = lcm(q, r) and m = gcd(q, r). Then we find from Proposition 1.20(i)

{U_q + X} =_d {U_j + (1/(rq)){Z}} = {U_j + (1/j)·(1/m){Z}}.

By Theorem 5.4: if m > 1, then Rep(U_q + X) = j, and if m = 1, then Rep(U_q + X) = pqr. So, if m = 1, then Rep(U_q + X) is a multiple of lcm(Rep(X), Rep(U_q)).
(b) Let X and Z be defined as in (a). Then {qrX} =_d {q{rX}} =_d {Z}. Hence Rep(qrX) = p = p·r/gcd(r, rq).
(c) Let X := (1/r)U with r ∈ N \ {1}. Then it is easily seen that Rep(X) = 1, Rep(rX) = ∞, and Rep(U_r + X) = ∞.
(d) Let X_n be defined by P(X_n = 0) := 1/2 − 1/n and P(X_n = 1/2) := 1/2 + 1/n (n ∈ N, n ≥ 2). Then we have X_n →_d U_2, Rep(X_n) = 1, and Rep(U_2) = 2.
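Example 5.8(d) can be checked directly on the FSS's: c_{X_n}(1) = −2/n is nonzero for every n (so Rep(X_n) = 1 by Theorem 5.4), yet it tends to c_{U_2}(1) = 0. A small sketch; the particular values of n are arbitrary:

```python
import numpy as np

def c(k, n):
    """FSS of X_n, which has mass 1/2 - 1/n at 0 and 1/2 + 1/n at 1/2."""
    return (0.5 - 1.0 / n) + (0.5 + 1.0 / n) * np.exp(2j * np.pi * k * 0.5)

# c(1, n) = -2/n: never zero (hence Rep(X_n) = 1), but -> 0 = c_{U_2}(1).
vals = [round(abs(c(1, n)), 6) for n in (10, 100, 1000)]
print(vals)  # [0.2, 0.02, 0.002]
```

This is an instance of Lemma 5.7(iii) with m = 1: the limit U_2 has replication number 2, a nontrivial multiple of m.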
5.2 Generalization of Goldman's equation
In Goldman ([23]), and Arnold and Meeden ([3]), equations of the form

{X + Y} =_d X    (5.3)

are studied, where X and Y are independent. Arnold and Meeden prove the following result: if Y in (5.3) does not have its distribution concentrated on (1/r)Z for any r ∈ N, then X =_d U. Goldman considers the case that X and Y are identically distributed; he shows that U and U_r are the only solutions of (5.3). However, he poses the problem to prove this result by elementary means.
As an application of equation (5.3), consider the setting of fractional container loads as in Goldman ([23]): on successive days, a facility receives material from several shippers bound for the same destination, consolidates and packs it into discrete containers of uniform capacity, and sends the containers on to the destination; choose the 'container-full' as unit of volume, and proceed under the assumption that only full containers are sent out; suppose then that the distribution of the fractional backlog after two days is the same as the distribution after one day. The question then is: what are the possible distributions of the fractional backlog after one day?
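The fixed point can be explored by simulation: for daily volumes with a non-lattice law, iterating backlog → {backlog + arrivals} drives the fractional backlog toward the uniform distribution, the "trivial" solution of (5.3). A sketch under illustrative assumptions (gamma-distributed daily volumes, 30 days; none of these parameters come from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

days, runs = 30, 100_000
backlog = np.zeros(runs)                # fractional backlog, start empty
for _ in range(days):
    # hypothetical daily volumes in container units (non-lattice law)
    arrivals = rng.gamma(shape=2.7, scale=1.3, size=runs)
    backlog = np.mod(backlog + arrivals, 1.0)   # full containers are shipped

# Kolmogorov-Smirnov-type distance to the uniform df on [0, 1)
grid = np.sort(backlog)
ks = np.max(np.abs(grid - (np.arange(1, runs + 1) - 0.5) / runs))
print(ks < 0.01)  # the fractional backlog is numerically uniform
```

Non-uniform fixed points require lattice arrivals, which is exactly the dichotomy formalized below.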
In this section we study the following generalization of (5.3), using FSS's as an elementary tool: let n ∈ N be fixed, let X, Y_1, ..., Y_n be independent rv's, and suppose that

{X + Y_1 + ... + Y_n} =_d X.    (5.4)

The main result of this section (cf. Theorem 5.9) is a characterization of the distributions of X and Y_j (j = 1, ..., n) satisfying equation (5.4).
Clearly, from Lemma 5.2(i) we know that, for any Y_1, ..., Y_n, X =_d U is a solution of (5.4). In addition, we prove some corollaries, of which Corollary 5.13 shows the result of Goldman ([23]).
This main result also implies the result of Arnold and Meeden ([3]): take n = 1; then part (b) says that there are r ∈ N and a rv Z independent of U_r such that {Y_1} is lattice on (1/r)Z and X =_d U_r + (1/r){Z}. If we now suppose that {Y_1} is not lattice on (1/r)Z for any r ∈ N, then Arnold and Meeden's result follows, since X =_d U is a trivial solution of (5.4). The main result is as follows.
Theorem 5.9. Let n ∈ N, and let X, Y_1, ..., Y_n be independent rv's with X ≠_d U. Then the following statements are equivalent.
(a) {X + Y_1 + ... + Y_n} =_d X.
(b) There exist r ∈ N, β_1, ..., β_n ∈ [0,1) with Σ_{j=1}^n β_j ∈ Z, and a rv Z independent of U_r, such that {Y_j} is lattice on β_j/r + (1/r)Z (j = 1, ..., n) and X =_d U_r + (1/r){Z}.
Proof: Let (a) hold. Then c_X(k) = c_{{X+Y_1+...+Y_n}}(k) = c_X(k) Π_{j=1}^n c_{{Y_j}}(k), so

c_X(k) (1 − Π_{j=1}^n c_{{Y_j}}(k)) = 0    (k ∈ Z).    (5.5)

Since X ≠_d U, for some k ∈ N we have c_X(k) ≠ 0, and hence Π_{j=1}^n c_{{Y_j}}(k) = 1. Now let r ∈ N be such that

Π_{j=1}^n c_{{Y_j}}(r) = 1  and  Π_{j=1}^n c_{{Y_j}}(k) ≠ 1  (k = 1, ..., r − 1).    (5.6)

Then, as |c_{{Y_j}}(r)| ≤ 1, we even have |c_{{Y_j}}(r)| = 1 for j = 1, ..., n, and it follows that there are β_1, ..., β_n ∈ [0,1) such that c_{{Y_j}}(r) = exp(2πiβ_j) and
Σ_{j=1}^n β_j ∈ Z. Lemma 1.16 then yields that c_{{Y_j}}(k + r) = exp(2πiβ_j) c_{{Y_j}}(k) (k ∈ Z) and that {Y_j} is lattice on β_j/r + (1/r)Z. Hence

Π_{j=1}^n c_{{Y_j}}(k + r) = Π_{j=1}^n c_{{Y_j}}(k)    (k ∈ Z).

From (5.6) we obtain Π_{j=1}^n c_{{Y_j}}(k) = 1 if k ∈ rZ, and Π_{j=1}^n c_{{Y_j}}(k) ≠ 1 if k ∉ rZ; from this, using (5.5), we find c_X(k) = 0 for all k ∉ rZ. Theorem 5.4 now implies that there is a rv Z independent of U_r such that X =_d U_r + (1/r){Z}. So (a) ⇒ (b) has been proved.
Now let (b) hold, and write S = Y_1 + ... + Y_n. Then S is lattice on (1/r)Z. Applying Lemma 5.2(ii) we find

{X + S} =_d {U_r + (1/r){Z} + S} =_d {U_r + (1/r){Z}} =_d {X} = X

for some Z independent of U_r. So (b) ⇒ (a) has been proved. □
We now state some immediate consequences of Theorem 5.9.
Corollary 5.10. Let n ∈ N, and let X, Y_1, ..., Y_n be independent rv's such that U ≠_d X =_d {Y_1}, and {X + Y_1 + ... + Y_n} =_d X. Then there exist r ∈ N and β ∈ [0,1) such that X =_d U_r + β/r.

Proof: Since X =_d {Y_1}, we know that there is a real number β ∈ [0,1) such that the distribution of X is concentrated on β/r + (1/r)Z. Since X =_d U_r + (1/r){Z}, the assertion follows. □
Taking Y_1, ..., Y_n iid in Theorem 5.9 and putting p = nβ, we obtain
Corollary 5.11. Let n ∈ N, and let Y_1, ..., Y_n be iid rv's independent of X ≠_d U. Then the following statements are equivalent.
(a) {X + Y_1 + ... + Y_n} =_d X.
(b) There exist r ∈ N, p ∈ Z, and a rv Z independent of U_r such that {Y_1} is lattice on p/(nr) + (1/r)Z and X =_d U_r + (1/r){Z}.
Remarks 5.12. (i) The special case n = 1 of Corollary 5.11 solves equation (5.3) studied by Arnold and Meeden.
(ii) Corollary 5.11 also characterizes the uniform distribution on [0,1): if {X + Y} =_d X for all Y independent of X, then X =_d U. For a generalization to rv's assuming values in a compact topological group we refer to Stapleton ([60]). It is well known that no algorithm exists for generating truly uniform rv's. Characterization of the uniform distribution often provides useful tools for testing the quality of uniform pseudorandom number generators. Deng and George ([12], and the papers in its bibliography) provide methods for improving this quality by taking the fractional parts of sums of generated pseudorandom numbers. For generating pseudorandom numbers the goal is to produce strings of numbers which behave like a sequence of independent uniform rv's on [0,1). The generators yield integers in the set {0, ..., m − 1}, which are transformed to [0,1) by division by m. For large m one should have a good approximation of the uniform distribution on [0,1).
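The effect behind the Deng–George improvement is visible already at the level of FSS's: for independent summands, c_{{V_1+...+V_j}}(k) = c_{V_1}(k) ··· c_{V_j}(k), so any bias |c(k)| < 1 shrinks geometrically in the number of summands. A sketch with a deliberately biased stand-in generator; the beta(2,1) choice (density 2x on [0,1)) is illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def biased(size):
    """A poor 'uniform' generator: beta(2,1) variates, density 2x on [0,1)."""
    return rng.beta(2.0, 1.0, size=size)

def c_abs(samples, k=1):
    """|c(k)|, the k-th Fourier-Stieltjes coefficient; 0 for the uniform law."""
    return abs(np.mean(np.exp(2j * np.pi * k * samples)))

n = 200_000
single = biased(n)
summed = np.mod(biased(n) + biased(n) + biased(n), 1.0)

# One variate has |c(1)| = 1/pi; the fractional part of three has (1/pi)^3.
print(round(c_abs(single), 2), round(c_abs(summed), 2))  # about 1/pi vs (1/pi)**3
```

Each extra summand multiplies the distance from uniformity (as measured by any fixed FSS coefficient) by a factor less than one.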
The case n = 1 and α = 0 of the following corollary is the result proved by Goldman.
Corollary 5.13. Let n ∈ N, α ∈ ℝ, and let X_0, ..., X_n be iid rv's with X_0 ≠_d U. Then the following statements are equivalent.
(a) {X_0 + X_1 + ... + X_n + α} =_d X_0.
(b) There exist r ∈ N and p ∈ Z such that X_0 =_d U_r + (1/r){p/n − r{α/n}}.

Proof of (a) ⇒ (b): Put Y_i =_d X_0 + α/n. Then applying Corollary 5.11 we find that {Y_1} has its distribution concentrated on p/(nr) + (1/r)Z for some p ∈ Z. Hence the distribution of {X_0 + α/n} is concentrated on p/(nr) + (1/r)Z, and therefore X_0 is concentrated on (1/r){p/n − r{α/n}} + (1/r)Z. Analogous to the proof of Corollary 5.10, part (b) follows.

Proof of (b) ⇒ (a): Applying Lemma 5.2(ii) several times we get

{X_0 + X_1 + ... + X_n + α} =_d {X_0 + U_r + (n/r){p/n − r{α/n}} + α}
=_d {X_0 + U_r + (n/r)(p/n − r{α/n}) + n({α/n} + [α/n])}
=_d {X_0 + U_r + p/r} =_d {X_0 + U_r} =_d X_0. □
We now give some simple examples to illustrate the scope of these results.
Examples 5.14. (a) Let n ∈ N. Let X := U_2 + 1/3, and let Y_1 be a rv with its distribution concentrated on the points 1/(2n) and 1/(2n) + 1/2. Let further Y_1, ..., Y_n be iid rv's independent of X. Then {Y_1 + ... + Y_n} has its distribution concentrated on 0 and 1/2, and so {X + Y_1 + ... + Y_n} =_d X. This also follows from Corollary 5.11 by taking Z := 2/3.
(b) Let k ∈ N, k ≥ 3. Let (X_j)_{j=0}^∞ be a sequence of iid rv's with X_0 := U_2 + 1/k, and let S_n := {X_0 + ... + X_n} (n ∈ N). We distinguish two cases: 1. k is even; 2. k is odd. Then:
1. Corollary 5.13 implies S_n =_d X_0 if n ∈ (k/2)N. Furthermore, we have

S_n =_d U_2 + (j + 1)/k  if n = j (mod k/2) for some j ∈ {0, 1, ..., k/2 − 1}.

If we define a_n := j/k if n = j (mod k/2) for some j ∈ {0, 1, ..., k/2 − 1}, then X_0 =_d {S_n − a_n}.
2. Similarly, we get S_n =_d U_2 + (j + 2)/(2k) if n = s (mod k) for some s ∈ {0, ..., k − 1}, and j = 2s if s < k/2 − 1, j = 2s − k if s > k/2 − 1.
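Example 5.14(a) is easy to verify by simulation; the equal weights on the two support points of Y_i below are one admissible choice (the example only fixes the support), and n = 5 is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

n_vars, runs = 5, 200_000
u2 = rng.integers(0, 2, size=runs) / 2.0
x = u2 + 1.0 / 3.0                 # X = U_2 + 1/3, taking values 1/3 and 5/6

# Y_i on the points 1/(2n) and 1/(2n) + 1/2, with equal (illustrative) weights
y = (rng.integers(0, 2, size=(n_vars, runs)) / 2.0
     + 1.0 / (2 * n_vars)).sum(axis=0)
w = np.mod(x + y, 1.0)             # {X + Y_1 + ... + Y_n}

# the result should again put mass 1/2 on each of the points 1/3 and 5/6
vals, counts = np.unique(np.round(w, 6), return_counts=True)
print(vals.tolist(), (counts / runs).round(2).tolist())
```

The sum Y_1 + ... + Y_n lands on 1/2 + (1/2)Z, so it shifts U_2 + 1/3 by a lattice amount that U_2 absorbs, exactly as Lemma 5.2(ii) predicts.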
5.3 Infinite divisibility (mod 1)
Schatte ([55]) deals with infinite divisibility (infdiv) in the modulo 2π (mod 2π) sense. He gives a representation theorem for infdiv (mod 2π) FSS's, and a limit theorem for sequences of infdiv (mod 2π) FSS's. Furthermore, under an infinite smallness condition in the modulo 2π sense, he considers convergence of sums to infdiv (mod 2π) distributions. In this section, to get further insight into infinite divisibility (mod 1), we prove a new characterization of infdiv (mod 1) distributions.

We now give an outline of the contents of this section. After a brief introduction to infinite divisibility in the classical sense (see Section 5.3.1), we reformulate, in Section 5.3.2, Schatte's representation theorem for infdiv (mod 1) FSS's. Furthermore, from Schatte's representation theorem we deduce some other representations; three of those are similar to the Lévy-Khinchine-type, the Lévy-type and the Kolmogorov-type canonical form for infdiv (mod 1) FSS's. In Section 5.3.3, we prove our main result of this chapter, characterizing infdiv (mod 1) distributions (cf. Theorem 5.28). To prove this characterization, we represent infdiv (mod 1) FSS's in the Lévy-Khinchine-type canonical form, and use the characterization of distributions with finite replication number as given in Section 5.1. In addition, we prove some new properties of replication numbers of infdiv (mod 1) distributions.
In Section 5.3.4 we consider the construction of infdiv chf's in the customary sense. The main part of the results given in Section 5.3 has been considered in Wilms ([72]).
5.3.1 Preliminaries
The concept of infinite divisibility (on ℝ) in the customary sense is very important in probability theory, particularly in the context of central limit theorems. The basic properties, such as canonical representations, and applications in the theory of limit distributions of sums of independent rv's have been studied in e.g. Gnedenko and Kolmogorov ([22]) or Petrov ([49]). In Feller ([18]) infinitely divisible distributions on N ∪ {0} and on [0,∞) have been studied in more detail. Lukacs ([43]) discusses stable and self-decomposable distributions, and factorization problems. In addition, much attention has been paid to the construction of infinitely divisible distributions, and to giving necessary and (or) sufficient conditions for infinite divisibility in terms of df's rather than chf's; see e.g. the survey paper by Steutel ([61]). There are also some applications, especially in statistical modeling (see Steutel [63]). (For a generalization of the notion of infinite divisibility on locally compact abelian groups or semigroups see Parthasarathy [48] or Rassai [52].)
The concept of infinite divisibility and its basic properties will be given in terms of chf's, the most important tool in this field.
Definition 5.15. A rv X (its chf φ_X) is said to be infinitely divisible (infdiv) if for every n ∈ N there exists an n-tuple (X_{n,i})_{i=1}^n of iid rv's such that

X =_d X_{n,1} + ... + X_{n,n},

or equivalently, if for every n ∈ N there is a chf φ_n such that φ_X(t) = (φ_n(t))^n for all t ∈ ℝ.
Simple examples of infdiv chf's are provided by the degenerate, Poisson, geometric, Gamma, normal and Cauchy distributions. Next, we state three well-known canonical representations.
Proposition 5.16. [Lévy-Khinchine canonical representation] The function φ is an infdiv chf iff φ has the form

φ(t) = exp(ita + ∫_{−∞}^{∞} (e^{itx} − 1 − itx/(1 + x^2)) ((1 + x^2)/x^2) dθ(x))    (t ∈ ℝ),    (5.7)
where a ∈ ℝ, and θ is a nondecreasing, left-continuous, bounded function on ℝ such that θ(−∞) = 0; for x = 0 the integrand is defined to be equal to −t^2/2. The representation is unique.
Proposition 5.17. [Lévy canonical representation] The function φ is an infdiv chf iff φ has the form

φ(t) = exp(ita − (σt)^2/2 + ∫_{ℝ\{0}} (e^{itx} − 1 − itx/(1 + x^2)) dL(x))    (t ∈ ℝ),

where a ∈ ℝ, σ^2 ∈ ℝ_+, and L is nondecreasing and left-continuous on (−∞, 0) and on (0, ∞), with L(−∞) = L(∞) = 0; the integral ∫_{(−ε,ε)\{0}} x^2 dL(x) is finite for every ε > 0. The representation is unique.
The above-mentioned canonical representations are generalizations of the following representation, due to Kolmogorov.
Proposition 5.18. [Kolmogorov canonical representation] The function φ is a chf of an infdiv distribution with finite second moment iff φ has the form

φ(t) = exp(ita + ∫_{−∞}^{∞} (e^{itx} − 1 − itx) (1/x^2) dK(x))    (t ∈ ℝ),    (5.8)

where a ∈ ℝ, and K is a nondecreasing, left-continuous, bounded function on ℝ such that K(−∞) = 0. The representation is unique.
5.3.2 Representation theorems
The concept of infinite divisibility (mod 1) can be introduced as follows:
Definition 5.19. The rv X (∈ 𝒳[0,1)) (its FSS c_X) is said to be infinitely divisible (mod 1) if for each n ∈ N there exists an n-tuple (X_{n,i})_{i=1}^n (X_{n,i} ∈ 𝒳[0,1)) of iid rv's such that

X =_d {X_{n,1} + ... + X_{n,n}},

or equivalently, if for each n ∈ N there is a FSS c_n such that c_X(k) = (c_n(k))^n for all k ∈ Z.
Clearly, from Proposition 1.19(i) it follows that U is infdiv (mod 1); it will turn out that U is an exceptional infdiv (mod 1) rv since Rep(U) = ∞.
Now, let Y be infdiv; then there exists an n-tuple (Y_{n,i})_{i=1}^n of iid rv's such that Y =_d Y_{n,1} + ... + Y_{n,n}; hence

{Y} =_d {{Y_{n,1}} + ... + {Y_{n,n}}}.

So, if Y is infdiv, then {Y} is infdiv (mod 1). In Section 5.3.3 we will provide an answer to the following question: are there any infdiv (mod 1) rv's that are not fractional parts of infdiv rv's?
Throughout Section 5.3, we shall specify the FSS c(k) with Rep(c) = r ∈ N only for k ∈ rZ, because in this case we have c(k) = 0 if k ∉ rZ (cf. Theorem 5.4). Moreover, infdiv (mod 1) distributions with infinite replication number will not be considered, since U is the only distribution with this property (cf. Corollary 5.5(i)).
We now give Schatte's representation theorem.
Proposition 5.20. [Schatte [55], thm 4.3] Let c be a FSS with Rep(c) = r ∈ N. Then c is infdiv (mod 1) iff c has the form

c(kr) = exp(ikα + ∫_{[0,1)} ((e^{2πikx} − 1 − ik sin 2πx)/(1 − cos 2πx)) dS(x))    (k ∈ Z),    (5.9)

where α ∈ [0, 2π), and S is a nondecreasing, left-continuous, bounded function on [0,1) with S(0) = 0; for x = 0 the integrand is defined to be equal to −k^2. The representation is unique.
As a consequence of Gnedenko ([21], corollary on p.80) we have the following corollary. We leave out its proof since it is similar to that of Gnedenko.
Corollary 5.21. If the FSS c is representable in the form (5.9), where S is a function of bounded variation (not necessarily nondecreasing), then such a representation is unique.
Remarks 5.22. We review some elementary properties of infdiv (mod 1) FSS's.
(i) The product of a finite number of infdiv (mod 1) FSS's is an infdiv (mod 1) FSS (corollary of thm 1.1 by Schatte [55]).
(ii) Let (c_n) be a sequence of infdiv (mod 1) FSS's such that c_n → c (n → ∞) for some FSS c. Then c is an infdiv (mod 1) FSS (cf. Schatte [55], thm 4.2).
To come to the main result of this chapter, we need a few other canonical forms; we first prove a representation theorem, which can be derived from Schatte's representation (cf. Proposition 5.20).
Lemma 5.23. Let c be a FSS with Rep(c) = r ∈ N. Then c is infdiv (mod 1) iff c has the form

c(kr) = exp(ikα + ∫_{[−1/2,1/2)} ((e^{2πikx} − 1 − ik sin 2πx)/(1 − cos 2πx)) dT(x))    (k ∈ Z),    (5.10)

where α ∈ [0, 2π), and T is a nondecreasing, left-continuous, bounded function on [−1/2, 1/2) with T(−1/2) = 0. The representation is unique.
Proof: Let c be infdiv (mod 1), and set

f_k(x) := (e^{2πikx} − 1 − ik sin 2πx)/(1 − cos 2πx)    (x ∈ ℝ, k ∈ Z).

By Proposition 5.20, c can be represented by (5.9). By the fact that f_k(x) has period 1 we obtain for k ∈ Z

c(kr) = exp(ikα + ∫_{[0,1)} f_k(x) dS(x))
      = exp(ikα + ∫_{[−1/2,0)} f_k(x) dS(x + 1) + ∫_{[0,1/2)} f_k(x) dS(x))
      = exp(ikα + ∫_{[−1/2,1/2)} f_k(x) dT(x)),

where

T(x) = S(x + 1) − S(1/2)  for x ∈ [−1/2, 0),  and  T(x) = S(x) + S(1) − S(1/2)  for x ∈ [0, 1/2).
Obviously, the function T is bounded and nondecreasing, and T(−1/2) = 0. Similarly, if (5.10) holds, then (5.9) holds; hence c is infdiv (mod 1). □
Next, we give three representations similar to the Lévy-Khinchine, the Lévy and the Kolmogorov canonical form, respectively (see also Propositions 5.16, 5.17 and 5.18).
Lemma 5.24. Let c be a FSS with Rep(c) = r ∈ N. Then c is infdiv (mod 1) iff c has the form

c(kr) = exp(ikα + ∫_{[−1/2,1/2)} (e^{2πikx} − 1 − 2πikx/(1 + x^2)) ((1 + x^2)/x^2) dθ(x))    (k ∈ Z),    (5.11)

where α ∈ [0, 2π), and θ is a nondecreasing, left-continuous, bounded function on [−1/2, 1/2) with θ(−1/2) = 0. The representation is unique.
Proof: Let c be infdiv (mod 1). By Lemma 5.23, c can be represented by (5.10); hence for k ∈ Z

c(kr) = exp(ikα + ∫_{[−1/2,1/2)} ((e^{2πikx} − 1 − ik sin 2πx)/(1 − cos 2πx)) dT(x))
      = exp(ikα + ∫_{[−1/2,1/2)} (e^{2πikx} − 1 − 2πikx/(1 + x^2)) ((1 + x^2)/x^2) dθ(x)),

where

θ(x) := ∫_{[−1/2,x)} (y^2/(1 + y^2)) (1/(1 − cos 2πy)) dT(y).

Obviously, the function θ is bounded and nondecreasing. So (5.11) holds. Analogously, if (5.11) holds, then (5.10) holds. □
The proof of the following lemma is similar to that of the analogous theorem for chf's (see e.g. Lukacs [43], p.118).
Lemma 5.25. Let c be a FSS with Rep(c) = r ∈ N. Then c is infdiv (mod 1) iff c has the form

c(kr) = exp(ikα − 2(πkσ)^2 + ∫_{[−1/2,1/2)\{0}} (e^{2πikx} − 1 − 2πikx/(1 + x^2)) dL(x))    (k ∈ Z),    (5.12)

where α ∈ [0, 2π), σ^2 ∈ ℝ_+, and the function L is nondecreasing and left-continuous on [−1/2, 0) and on (0, 1/2), with L(−1/2) = L(1/2) = 0; the integral ∫_{[−1/2,1/2)\{0}} x^2 dL(x) is finite. The representation is unique.
Proof: Let c be infdiv (mod 1). By Lemma 5.24, c can be represented by (5.11). Then take σ^2 = θ(0+) − θ(0) and express L in terms of θ. The proof of the converse is evident. □
Similarly, we prove
Lemma 5.26. Let c be a FSS with Rep(c) = r ∈ N. Then c is infdiv (mod 1) iff c has the form

c(kr) = exp(ikγ + ∫_{[−1/2,1/2)} (e^{2πikx} − 1 − 2πikx) (1/x^2) dK(x))    (k ∈ Z),    (5.13)

where γ ∈ [0, 2π), and K is a nondecreasing, left-continuous, bounded function on [−1/2, 1/2) with K(−1/2) = 0. The representation is unique.
Proof: Let c be infdiv (mod 1). By Lemma 5.24, c can be represented by (5.11); hence c(kr) takes the form (5.13), where

γ = γ' (mod 2π),  γ' = α + 2π ∫_{[−1/2,1/2)} x dθ(x),
K(−1/2) := 0,  K(x) := ∫_{[−1/2,x)} (1 + y^2) dθ(y)    (x ∈ (−1/2, 1/2)).

Obviously, the function K is bounded and nondecreasing. So (5.13) holds. Analogously, if (5.13) holds, then (5.11) holds. □
Remark 5.27. The representations given in Lemmas 5.24 and 5.26 will be used in the proofs of Theorem 5.28 and Corollary 5.31 to define a rv Y by an infdiv chf. Schatte ([55]) gives the representation

q(2πt) = exp(itα + ∫_{[0,1)} ((e^{2πitx} − 1 − it sin 2πx)/(1 − cos 2πx)) dS(x))    (t ∈ ℝ)

for an infdiv chf. Since the integrand tends to infinity if x ↑ 1 and t ∉ Z, this formula is not correct. As shown in Lemma 5.23, the difficulty can be removed by transferring the function S to the interval [−1/2, 1/2); the integrand is then bounded on [−1/2, 1/2) for all t ∈ ℝ. This means that the function

q(2πt) = exp(itα + ∫_{[−1/2,1/2)} ((e^{2πitx} − 1 − it sin 2πx)/(1 − cos 2πx)) dS(x))    (t ∈ ℝ)

is an infdiv chf.
5.3.3 Characterization of infinitely divisible (mod 1) distributions
We now give the main theorem of this chapter, a new characterization of infdiv (mod 1) distributions; we also prove some new properties of these distributions.
Theorem 5.28. The rv X is infdiv (mod 1) with Rep(X) = r ∈ N iff

X =_d U_r + (1/r){Y}    (5.14)

for some infdiv rv Y, independent of U_r; necessarily, Rep(Y) = 1.
Proof: Let c_X be infdiv (mod 1) with Rep(X) = r ∈ N. By Lemma 5.24 we know that c_X can be represented by (5.11); so

c_X(kr) = exp(ikα + ∫_{[−1/2,1/2)} (e^{2πikx} − 1 − 2πikx/(1 + x^2)) ((1 + x^2)/x^2) dθ(x))    (k ∈ Z).

Define a rv Y, independent of U_r, by its chf (see Remark 5.27):

φ_Y(2πt) = exp(itα + ∫_{[−1/2,1/2)} (e^{2πitx} − 1 − 2πitx/(1 + x^2)) ((1 + x^2)/x^2) dθ(x))    (t ∈ ℝ);

the Lévy-Khinchine canonical representation (cf. Proposition 5.16) then says that φ_Y is an infdiv chf. Furthermore,

c_X(kr) = φ_Y(2πk) = c_{{Y}}(k)    (k ∈ Z).    (5.15)

By Corollary 5.5(iii) we know that X =_d U_r + (1/r){Z} for some Z, independent of U_r, with Rep(Z) = 1, and so

c_X(kr) = c_{{Z}}(k)    (k ∈ Z).    (5.16)

Combining Proposition 1.6, (5.15) and (5.16) yields {Y} =_d {Z}. So (5.14) holds for some infdiv Y independent of U_r.

Conversely, let (5.14) hold. From Theorem 5.4 we know that c_X(k) = 0 if k ∉ rZ, and using c_{U_r}(kr) = 1 it follows that c_X(kr) = c_{{Y}}(k) (k ∈ Z). Since Y is infdiv, {Y} is infdiv (mod 1); hence c_{{Y}} can be represented by (5.9). So X is infdiv (mod 1) with Rep(X) = r.

Finally, if Y is infdiv, then φ_Y has no real zeros (see e.g. Lukacs [43], thm 5.3.1). Hence c_{{Y}}(k) ≠ 0 for k ∈ Z. So Rep(Y) = 1 (cf. Theorem 5.4). □
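Theorem 5.28 lends itself to a quick numerical check: build X = U_r + (1/r){Y} from a normal (hence infdiv) Y and compare c_X with φ_Y sampled at 2πk. The parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

r, mu, sigma = 3, 0.4, 0.25
n = 400_000
ur = rng.integers(0, r, size=n) / r
y = rng.normal(mu, sigma, size=n)           # an infdiv Y (normal law)
x = ur + np.mod(y, 1.0) / r                 # X = U_r + (1/r){Y}

def c(samples, k):
    return np.mean(np.exp(2j * np.pi * k * samples))

# c_X vanishes off rZ, and on rZ it reproduces the chf of Y at 2*pi*k:
for k in (1, 2):
    assert abs(c(x, k)) < 0.01              # k not a multiple of r = 3
phi = np.exp(2j * np.pi * mu - 2 * (np.pi * sigma) ** 2)  # phi_Y(2*pi)
print(abs(c(x, r) - phi) < 0.01)            # c_X(r) = phi_Y(2*pi)
```

Here φ_Y(2π) = exp(2πiμ − 2π²σ²), the normal chf at t = 2π, matching (5.15) with k = 1.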
Remarks 5.29. (i) For no infdiv rv Y do we have {Y} =_d U, since an infdiv chf has no real zeros (see e.g. Lukacs [43], thm 5.3.1).
(ii) Obviously, from (5.14) it follows that {rX} =_d {Y} for some infdiv Y, independent of U_r, and Rep(rX) = 1. Moreover, for all k ∈ Z_0, we have c_X(kr) = c_{{rX}}(k) = c_{{Y}}(k) ≠ 0, since an infdiv chf has no real zeros. So, for all k ∈ Z_0, c_X(k) ≠ 0 iff k ∈ rZ.
(iii) We give a more structural proof of the implication: if equation (5.14) holds, then X is infdiv (mod 1). Since Y is infdiv, there exists an n-tuple (Y_{n,i})_{i=1}^n of iid rv's such that Y =_d Y_{n,1} + ... + Y_{n,n}. Let (U_r^{(i)})_{i=1}^n be an n-tuple of independent rv's, all distributed as U_r and independent of (Y_{n,i}). Take X_{n,i} =_d U_r^{(i)} + (1/r){Y_{n,i}}; then X =_d {X_{n,1} + ... + X_{n,n}} (cf. Lemma 5.2).
Corollary 5.30. The rv X is infdiv (mod 1) with Rep(X) = 1 iff X =_d {Y} for some infdiv rv Y with Rep(Y) = 1.
The following corollary says that, in studying an infdiv (mod 1) rv X, it is no restriction to assume that X has finite moments.
Corollary 5.31. The rv X is infdiv (mod 1) with Rep(X) = r ∈ ℕ iff X =_d U_r + (1/r){Y} for some infdiv rv Y, independent of U_r, with Rep(Y) = 1 and 𝔼Y^m < ∞ (m ∈ ℕ).
Proof: Let X be infdiv (mod 1) with Rep(X) = r. Then c_X can be represented by (5.13). Define a rv Y, independent of U_r, by its chf:
φ_Y(2πt) = exp(itα + ∫_{[-1/2,1/2)} ((e^{2πitx} − 1 − 2πitx)/x²) dK₀(x))   (t ∈ ℝ),   (5.17)

where α ∈ [0, 2π), and K₀ is nondecreasing, left-continuous, and bounded on the interval [-1/2, 1/2) such that K₀(-1/2) = 0. The Kolmogorov canonical representation then says that φ_Y is an infdiv chf (cf. Proposition 5.18). As in the proof of Theorem 5.28 it follows that X =_d U_r + (1/r){Y} (use Lemma 5.26). To prove that 𝔼Y^m < ∞ for all m ∈ ℕ, on account of Lukacs ([43], thm 2.3.1) it suffices to show that the m-th derivative of φ_Y(2πt) at t = 0 is finite in absolute value for all m ∈ ℕ. This is true since the function φ_Y(2πt) in (5.17) may be differentiated under the integral sign.
The converse is trivial (see Theorem 5.28). □
Remark 5.32. The foregoing corollary can also be proved by using Wolfe ([76]). He shows: let Y be an infdiv df represented by (5.8), and let p ∈ ℕ; then Y has an absolute moment of the (p+2)-th order iff K has an absolute moment of the p-th order. Obviously, since the function K₀ in (5.17) is bounded on [-1/2, 1/2), it follows that all absolute moments of K₀ exist.
Example 5.33. Let X ~ Exp(λ). The Kolmogorov canonical representation for φ_X is given by (cf. Lukacs [43], p. 121)
φ_X(t) = exp(itλ^{-1} + ∫_0^∞ (e^{itx} − 1 − itx) (e^{-λx}/x) dx)   (t ∈ ℝ).   (5.18)
134 Chapter 5. Infinite divisibility
By Corollary 5.30, {X} is infdiv (mod 1) with Rep(X) = 1; its FSS c_{{X}} can be represented as follows:
c_{{X}}(k) = exp(∫_0^∞ (e^{2πikx} − 1) (e^{-λx}/x) dx)

  = exp(ikα + Σ_{m=0}^∞ ∫_m^{m+1} f_k(x) (e^{-λx}/x) dx)

  = exp(ikα + ∫_0^1 f_k(y) Σ_{m=0}^∞ e^{-λ(m+y)}/(m+y) dy)

  = exp(ikα + ∫_0^1 (f_k(y)/(1 − cos 2πy)) dS(y)),

where

S(x) = ∫_0^x (1 − cos 2πu) Σ_{n=0}^∞ e^{-λ(n+u)}/(n+u) du   (x ∈ (0,1)).
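The closed form of this FSS can be checked numerically. The following sketch is not part of the thesis: it compares c_X(k) = 𝔼e^{2πikX} = λ/(λ − 2πik) for X ~ Exp(λ) with the exponential of the Lévy-type (Frullani) integral in the first line above, evaluated by a midpoint rule (truncation at x = 30 and λ = 1 are arbitrary choices).

```python
import cmath
import math

def fss_closed_form(k, lam):
    # c_X(k) = E exp(2*pi*i*k*X) for X ~ Exp(lam); this also equals c_{ {X} }(k)
    return lam / (lam - 2j * math.pi * k)

def fss_levy_integral(k, lam, n=120000, upper=30.0):
    # exp( int_0^upper (e^{2 pi i k x} - 1) e^{-lam x} / x dx ), midpoint rule
    h = upper / n
    s = 0.0 + 0.0j
    for j in range(n):
        x = (j + 0.5) * h
        s += (cmath.exp(2j * math.pi * k * x) - 1.0) * math.exp(-lam * x) / x
    return cmath.exp(s * h)

lam = 1.0
for k in (1, 2, 3):
    a, b = fss_closed_form(k, lam), fss_levy_integral(k, lam)
    assert abs(a - b) < 1e-3, (k, a, b)
print("exp of the Levy integral matches lambda/(lambda - 2*pi*i*k)")
```

This confirms the starting identity of the chain; the remaining equalities are bookkeeping (folding the integral over unit intervals).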
Next, we prove some new elementary properties of the replication numbers of infdiv (mod 1) rv's. In contrast to Lemma 5.7, in this case we can precisely specify their replication numbers.
Theorem 5.34. Let X₁ and X₂ be independent infdiv (mod 1) rv's with Rep(X_m) = r_m ∈ ℕ (m = 1, 2). Then {X₁ + X₂} is infdiv (mod 1) with Rep(X₁ + X₂) = lcm(r₁, r₂).
Proof: Let q := lcm(r₁, r₂) and p := gcd(r₁, r₂). Further, let c_m and c_{r_m} be the FSS of X_m and U_{r_m} (m = 1, 2), respectively. By Remark 5.22(i), {X₁ + X₂} is infdiv (mod 1). Lemma 5.7 implies that Rep(X₁ + X₂) ∈ qℕ. Theorem 5.28 yields that, for m = 1, 2, X_m =_d U_{r_m} + (1/r_m){Y_m} for some infdiv Y_m independent of U_{r_m} and U_q. Hence

{X₁ + X₂} =_d {U_q + (1/q){(r₂Y₁ + r₁Y₂)/p}}.

Since Y := (r₂Y₁ + r₁Y₂)/p is infdiv, we know that Rep(Y) = 1 (cf. Corollary 5.30). By Corollary 5.5(ii), we have Rep(X₁ + X₂) = q. □
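The lcm rule of Theorem 5.34 can be illustrated numerically. In the following sketch (not from the thesis) we take X_m = U_{r_m} + (1/r_m){Y_m} with Y_m ~ Exp(1), r₁ = 4 and r₂ = 6, all choices being for illustration only; the FSS of the sum (mod 1) is the product of the FSS's, and the replication number is read off from its zero pattern (cf. Theorem 5.4).

```python
import math

def fss(k, r, lam=1.0):
    # FSS of X = U_r + (1/r){Y}, Y ~ Exp(lam): zero off r*Z, else lam/(lam - 2*pi*i*j)
    if k % r != 0:
        return 0j
    return lam / (lam - 2j * math.pi * (k // r))

def rep_from_fss(values):
    # replication number = smallest positive k with c(k) != 0 (cf. Theorem 5.4)
    return next(k for k in range(1, len(values)) if abs(values[k]) > 1e-12)

r1, r2 = 4, 6
c_sum = [fss(k, r1) * fss(k, r2) for k in range(30)]  # FSS of {X_1 + X_2}
assert rep_from_fss(c_sum) == math.lcm(r1, r2) == 12
print("Rep(X1 + X2) = lcm(4, 6) = 12")
```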
Theorem 5.35. Let q ∈ ℕ, and let X be infdiv (mod 1) with Rep(X) = r ∈ ℕ. Then {qX} is infdiv (mod 1) with Rep(qX) = r/gcd(q, r).
Proof: Let p := gcd(q, r). From Theorem 5.28 we have X =_d U_r + (1/r){Y} for some infdiv Y; hence {qX} =_d {qU_r + (q/r){Y}}. As in the proof of Lemma 5.7(ii) we get {qX} =_d U_{r/p} + (p/r){qY/p}. Since {qY/p} is infdiv (mod 1) with Rep(qY/p) = 1, it follows from Corollary 5.5(ii) that Rep(qX) = r/p.
o
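Theorem 5.35 can also be illustrated with a small numerical sketch (not from the thesis; the choices Y ~ Exp(1), q = 6 and r = 4 are arbitrary). For integer q one has c_{{qX}}(k) = c_X(kq), so the zero pattern of c_X determines Rep(qX) directly.

```python
import math

def fss_X(k, r, lam=1.0):
    # FSS of X = U_r + (1/r){Y}, Y ~ Exp(lam)
    if k % r != 0:
        return 0j
    return lam / (lam - 2j * math.pi * (k // r))

def fss_qX(k, q, r):
    # c_{ {qX} }(k) = E exp(2*pi*i*k*q*X) = c_X(k*q) for integer q
    return fss_X(k * q, r)

q, r = 6, 4  # gcd(q, r) = 2, so Rep(qX) should be r/gcd(q, r) = 2
vals = [abs(fss_qX(k, q, r)) for k in range(20)]
rep = next(k for k in range(1, 20) if vals[k] > 1e-12)
assert rep == r // math.gcd(q, r) == 2
print("Rep(qX) = r/gcd(q, r) = 2")
```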
We note that with a small adjustment the assertion of Theorem 5.35 is also true if q ∈ ℤ \ ℕ. In general, {qX} is not infdiv (mod 1) if q ∈ ℝ \ ℤ and X is infdiv (mod 1). For example, if X =_d U and q = 3/2, then c_{{3X/2}}(1) ≠ 0; hence Rep(3X/2) = 1. Since c_{{3X/2}}(2) = 0, it follows from Remark 5.29(ii) that {3X/2} is not infdiv (mod 1).
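The two Fourier coefficients used in this counterexample are elementary to compute; the following check (not from the thesis) uses 𝔼e^{2πik·qU} = (e^{2πikq} − 1)/(2πikq) for U uniform on [0,1).

```python
import cmath
import math

def fss_scaled_uniform(k, q):
    # c_{ {qU} }(k) = E exp(2*pi*i*k*q*U) = (e^{2*pi*i*k*q} - 1)/(2*pi*i*k*q)
    z = 2j * math.pi * k * q
    return (cmath.exp(z) - 1.0) / z

c1 = fss_scaled_uniform(1, 1.5)  # nonzero, so Rep(3U/2) = 1
c2 = fss_scaled_uniform(2, 1.5)  # zero, so {3U/2} cannot be infdiv (mod 1)
assert abs(c1) > 0.1
assert abs(c2) < 1e-12
print("c(1) != 0 but c(2) = 0 for {3U/2}")
```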
When q = 1/2 and X =_d {V} with V exponentially distributed, we shall show that q{X} is not infdiv (mod 1). To this end we use the fact that if V ~ Exp(λ), then {V} and [V] are independent, and [V] is geometrically distributed, i.e. P([V] = k) = (1 − e^{-λ})e^{-λk} (k ∈ ℕ ∪ {0}). The Kolmogorov canonical representation for φ_{[V]} can then be written as (see Lukacs [43], p. 121)

φ_{[V]}(t) = exp(Σ_{m=1}^∞ (e^{itm} − 1) e^{-λm}/m)   (t ∈ ℝ).   (5.19)

By (5.18) and (5.19) we find for k ∈ ℤ

c_{(1/2){X}}(k) = φ_V(πk)/φ_{[V]}(πk)

  = exp(πik/λ + ∫_0^∞ (e^{πikx} − 1 − πikx) (e^{-λx}/x) dx − Σ_{m=1}^∞ (e^{πikm} − 1) e^{-λm}/m)

  = exp(ikα + ∫_0^1 (e^{2πikx} − 1 − ik sin 2πx) g(x) dx − (1/2)(e^{πik} − 1) g(1/2))

for a certain α ∈ ℝ and a certain function g ≥ 0 on (0,1); in the canonical representation (5.13) the corresponding function θ is given by

θ(x) = ∫_0^x (1 − cos 2πy) g(y) dy   (0 ≤ x < 1/2),

θ(x) = −(1/2) g(1/2) + ∫_0^x (1 − cos 2πy) g(y) dy   (1/2 ≤ x < 1).
However, θ has a negative jump at 1/2; hence θ is not nondecreasing. So, on account of Corollary 5.21, (1/2){X} cannot be infdiv (mod 1). In conclusion, if X is exponentially distributed, then {X} is infdiv (mod 1) with Rep(X) = 1, whereas (1/2){X} is not infdiv (mod 1).
Using the new characterization of Theorem 5.28, we can prove a limit theorem for infdiv (mod 1) distributions.
Theorem 5.36. Let r ∈ ℕ, and let (X_n) be a sequence of infdiv (mod 1) rv's with Rep(X_n) ∈ rℕ, and X_n =_d U_r + (1/r){Y_n} for some Y_n independent of U_r (n ∈ ℕ). Let X ∈ 𝒳[0,1), and suppose that X_n →_w X (n → ∞). Then X is infdiv (mod 1) with Rep(X) ∈ rℕ, or Rep(X) = ∞ (i.e. X =_d U), and {Y_n} →_w {rX} as n → ∞.
Proof: By Remark 5.22(ii), X is infdiv (mod 1). For k ∉ rℤ we have c_{X_n}(k) = 0 for all n ∈ ℕ; so c_X(k) = 0 if k ∉ rℤ. Then Theorem 5.4 yields that Rep(X) ∈ rℕ, or Rep(X) = ∞, i.e. X =_d U (cf. Corollary 5.5(i)). Furthermore, we have c_{X_n}(kr) = c_{{Y_n}}(k) (k ∈ ℤ). Since X_n →_w X (n → ∞), it follows that c_{{Y_n}}(k) → c_{{rX}}(k) (k ∈ ℤ), or equivalently, {Y_n} →_w {rX} (n → ∞).
o
Before we consider some examples for which the replication number of the limit distribution X is a nontrivial multiple of Rep(X_n), we first construct some infdiv (mod 1) distributions.
Remarks 5.37. By Corollary 5.30 and the analogues for chf's (see Lukacs [43], pp. 111-112), we can construct infdiv (mod 1) FSS's:

(i) If c is an infdiv (mod 1) FSS with Rep(c) = r ∈ ℕ, then c^λ is a FSS with Rep(c^λ) = r ∈ ℕ for any λ > 0.

(ii) It is also possible to construct infdiv (mod 1) FSS's in the same way as one can construct infdiv chf's. Let c be a FSS with Rep(c) = r ∈ ℕ. The FSS ĉ with compound Poisson distribution, generated by c, defined by (see e.g. Feller [18], p. 555)

ĉ(k) = exp(λ(c(k) − 1))   (k ∈ ℤ, λ > 0),   (5.20)

is then infdiv (mod 1) with Rep(ĉ) = 1.

(iii) We also find an analogue of De Finetti's theorem (see e.g. Lukacs [43], thm 5.4.1): a FSS c is infdiv (mod 1) iff exp(λ_n(c_n(k) − 1)) → c(k) (n → ∞) for all k ∈ ℤ, for some λ_n > 0 and some FSS's c_n (n ∈ ℕ).
Examples 5.38. (a) Let the FSS ĉ of the rv Y be defined by (5.20) with c = c_{U_r} for some r ∈ ℕ. Then

ĉ(k) = 1 if k ∈ rℤ, and ĉ(k) = e^{-λ} otherwise.

The distribution of Y is given by P(Y = j/r) = 1_{{0}}(j) e^{-λ} + (1/r)(1 − e^{-λ}) for j = 0, ..., r − 1.
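The stated distribution of Y follows from inverting ĉ on the lattice {0, 1/r, ..., (r−1)/r}; the following numerical sketch (not from the thesis; λ = 0.7 and r = 4 are arbitrary) performs that inversion.

```python
import cmath
import math

lam, r = 0.7, 4

def c_hat(k):
    # c_hat(k) = exp(lam*(c_{U_r}(k) - 1)): equal to 1 on r*Z and to e^{-lam} otherwise
    return 1.0 if k % r == 0 else math.exp(-lam)

# invert the FSS on the lattice and compare with the stated law
for m in range(r):
    p = sum(c_hat(k) * cmath.exp(-2j * math.pi * k * m / r) for k in range(r)).real / r
    expected = (math.exp(-lam) if m == 0 else 0.0) + (1 - math.exp(-lam)) / r
    assert abs(p - expected) < 1e-12, (m, p, expected)
print("P(Y = j/r) = 1_{j=0} e^{-lam} + (1 - e^{-lam})/r")
```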
(b) Let (X_m) be given by P(X_m = 0) = (1/2)(1 + 2^{-m}), P(X_m = 1/2) = (1/2)(1 − 2^{-m}), with FSS

c_m(k) = 2^{-m} if k is odd, and c_m(k) = 1 if k is even,

and (X_{n,m}) by P(X_{n,m} = 0) = (1/2)(1 + 2^{-m/n}), P(X_{n,m} = 1/2) = (1/2)(1 − 2^{-m/n}), with FSS c_{n,m} (n ∈ ℕ). Then it is easy to verify that c_m(k) = (c_{n,m}(k))^n. As a result, X_m is infdiv (mod 1) with Rep(X_m) = 1 (m ∈ ℕ), X_m →_w U₂ (m → ∞), and Rep(U₂) = 2.
(c) Let X_n =_d {Y_n} with Y_n := Z₁ + ... + Z_n, Z_j ~ Exp(λ) (j ∈ ℕ). Since Y_n is infdiv, we have Rep(X_n) = 1. Furthermore, {Y_n} →_w U. So X_n →_w U.
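The verification asked for in example (b) is a one-line computation; the following check (not from the thesis) confirms c_m(k) = (c_{n,m}(k))^n for a few values.

```python
# FSS's from Example 5.38(b): c_m(k) = 2^{-m} for odd k and 1 for even k,
# and c_{n,m}(k) = 2^{-m/n} for odd k and 1 for even k
def c_m(k, m):
    return 2.0 ** (-m) if k % 2 else 1.0

def c_nm(k, n, m):
    return 2.0 ** (-m / n) if k % 2 else 1.0

m = 3
for n in (1, 2, 5):
    for k in range(-6, 7):
        assert abs(c_m(k, m) - c_nm(k, n, m) ** n) < 1e-12
print("c_m(k) = (c_{n,m}(k))^n, so X_m is infdiv (mod 1) with Rep(X_m) = 1")
```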
5.3.4 Attempted construction of infinitely divisible (mod 1) distributions
Since we have infdiv distributions on ℕ ∪ {0}, and infdiv (mod 1) distributions, one could hope to construct new infdiv distributions in the following way: let N be an infdiv nonnegative integer-valued rv, and let W be infdiv (mod 1) with Rep(W) = 1 and independent of N. Let X =_d N + W. Is then X infdiv in the customary sense? Unfortunately, this is not true in general, as we shall see. Before we show this, we first note that the assertion 'if X is infdiv (mod 1) then [X] is infdiv' does also not hold in general. We use a well-known result concerning infdiv distributions (see e.g. Steutel [62], thm 3): if X is an infdiv rv with df F, then, unless X is normal,

−log(F(−x) + 1 − F(x)) = O(x log x)   (x → ∞).

So if N = [X], where X has a standard normal distribution, then N is not infdiv. Alternatively, let N be a nonnegative integer-valued rv that is not
infdiv, and let X := N + U. Then {X} is infdiv (mod 1), while [X] is not infdiv.
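The violation of the O(x log x) tail bound by the normal distribution can be made visible numerically. The following sketch is not from the thesis: for Z ~ N(0,1) it computes −log P(|Z| > x), which grows like x²/2 and therefore eventually dominates any multiple of x log x.

```python
import math

def neg_log_tail(x):
    # -log P(|Z| > x) for Z ~ N(0,1), using P(|Z| > x) = erfc(x/sqrt(2))
    return -math.log(math.erfc(x / math.sqrt(2.0)))

# the ratio to x*log(x) keeps growing, so the tail bound O(x log x) fails
ratios = [neg_log_tail(x) / (x * math.log(x)) for x in (5.0, 10.0, 20.0)]
assert ratios[0] < ratios[1] < ratios[2]
assert ratios[2] > 3.0
print("normal tails are too thin: [X] cannot be infdiv for X ~ N(0,1)")
```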
We now return to the above proposed method. Let X ~ Exp(λ). It is known that {X} and [X] are independent, and that [X] is geometrically distributed, i.e. P([X] = k) = (1 − p)p^k, p = e^{-λ} (k ∈ ℕ ∪ {0}). Furthermore, {X} is infdiv (mod 1) (with Rep(X) = 1) (see also Example 5.33), and [X] and X = [X] + {X} are infdiv. Therefore, in this case the construction method is successful. However, we will show that this construction fails when we take W =_d {X}, and N geometrically distributed with p ≠ e^{-λ}.
To this end we need a definition and a characterization of infdiv distributions on [0, ∞).
Definition 5.39. [cf. Feller [18], p. 439] A function ψ on (0, ∞) is said to be completely monotone if ψ possesses derivatives of all orders on (0, ∞) with

(−1)^n (d^n/dt^n) ψ(t) ≥ 0   (n ∈ ℕ ∪ {0}, t > 0).
Proposition 5.40. [cf. Feller [18], thm 1, p. 450] Let ψ be a positive function on (0, ∞) that possesses derivatives of all orders on (0, ∞), and let φ(t) = −(d/dt) log ψ(t) (t > 0). Then ψ is the Laplace-Stieltjes transform of an infdiv df on [0, ∞) iff ψ(0) = 1 and φ is completely monotone.
We now show that the above proposed construction method does not work in general.
Claim: Let V ~ Exp(1). Let N be independent of V, and geometrically distributed with p = e^{-θ}, i.e. P(N = k) = (1 − e^{-θ})e^{-θk} (k ∈ ℕ ∪ {0}). Let X =_d N + {V}. Then X is not infdiv.
Proof: By Proposition 5.40 it suffices to prove that φ(t) = −(d/dt) log ψ(t) is not completely monotone, where ψ is the Laplace-Stieltjes transform of X. Clearly,

ψ(t) = 𝔼e^{-tN} 𝔼e^{-t{V}} = ((1 − e^{-θ})(1 − e^{-(1+t)})) / ((1 − e^{-(θ+t)})(1 + t)(1 − e^{-1})),

whence −φ'''(t) = a(t) − β(t, 1) + β(t, θ), where a(t) := 6/(1+t)⁴ and

β(t, η) = (e^{-(η+t)}(1 + 4e^{-(η+t)} + e^{-2(η+t)})) / (1 − e^{-(η+t)})⁴   (η, t > 0).

It turns out that −φ'''(t) < 0 for certain t > 0, i.e. φ is not completely monotone. □
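The sign change of −φ''' can be exhibited numerically. The following sketch is not from the thesis: the closed form of φ is derived from ψ above (and cross-checked against a finite difference of −log ψ), and the choice θ = 6 is for illustration only.

```python
import math

theta = 6.0  # p = e^{-theta}; this particular value is chosen only for illustration

def psi(t):
    # Laplace-Stieltjes transform of X = N + {V}, V ~ Exp(1), N geometric(p = e^{-theta})
    num = (1 - math.exp(-theta)) * (1 - math.exp(-(1 + t)))
    den = (1 - math.exp(-(theta + t))) * (1 + t) * (1 - math.exp(-1))
    return num / den

def phi(t):
    # phi = -(d/dt) log psi, written out; 1/(e^s - 1) = e^{-s}/(1 - e^{-s})
    f = lambda s: 1.0 / math.expm1(s)
    return 1.0 / (1 + t) + f(theta + t) - f(1 + t)

# sanity check: closed-form phi agrees with a finite difference of -log psi
t, h = 0.5, 1e-5
fd = -(math.log(psi(t + h)) - math.log(psi(t - h))) / (2 * h)
assert abs(fd - phi(t)) < 1e-6

def beta(t, eta):
    e = math.exp(-(eta + t))
    return e * (1 + 4 * e + e * e) / (1 - e) ** 4

def minus_phi3(t):
    # -phi'''(t) = 6/(1+t)^4 - beta(t, 1) + beta(t, theta)
    return 6.0 / (1 + t) ** 4 - beta(t, 1.0) + beta(t, theta)

assert minus_phi3(0.001) < 0  # violates complete monotonicity of phi
print("phi is not completely monotone, so X is not infdiv")
```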
5.4 Stability (mod 1)
Stable distributions in the customary sense are defined as follows (cf. Feller [18], p. 170): a nondegenerate rv X₁ is said to be stable if for each n ∈ ℕ there exist real numbers α_n > 0 and β_n such that

X₁ + ... + X_n =_d α_n X₁ + β_n,   (5.21)

where X₁, ..., X_n are iid rv's. X₁ is strictly stable if for each n (5.21) holds with β_n = 0. When φ denotes the chf of X₁, then (5.21) is equivalent to φ^n(t) = φ(α_n t) exp(itβ_n). As is well known, the constant α_n is of the form n^{1/α} with 0 < α ≤ 2. Naturally, one would like to define stability (mod 1) as follows: X₁ ∈ 𝒳[0,1) is said to be stable (mod 1) if for each n ∈ ℕ there are constants α_n > 0 and β_n such that

{X₁ + ... + X_n} =_d {α_n X₁ + β_n},   (5.22)
where X₁, ..., X_n are iid rv's. However, if X₁ is stable, then {X₁} is stable (mod 1) only when (5.21) holds with α_n = n or α_n = n², since only then

{{X₁} + ... + {X_n}} = {X₁ + ... + X_n} =_d {α_n X₁ + β_n} = {α_n {X₁} + β_n}.

Note that α_n = n and α_n = n² correspond to α = 1 and α = 1/2, respectively. If α = 1, then X₁ has a Cauchy distribution (cf. Lukacs [43], p. 139); if α = 1/2, then X₁ =_d 1/Z², where Z ~ N(0, 1) (cf. Feller [17], p. 231).

FSS's are very useful for studying distributions (mod 1), but unfortunately they cannot be used to define stability (mod 1) similarly to (5.22), since a FSS is only defined for integers. When trying to define self-decomposability (mod 1) one encounters the same problem; for the concept of self-decomposability in the customary sense, we refer to Lukacs ([43], p. 161). Therefore, we consider stability modulo 1 according to the following definition, which seems to be the most general possible (see also Mardia [46], sect. 4.2.4(b)).
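The Cauchy case α = 1 can be checked through chf's. The following sketch (not from the thesis) uses φ(t) = e^{-|t|} for the standard Cauchy distribution: the FSS of {X₁ + ... + X_n} is φ(2πk)^n, and the FSS of {nX₁} is φ(2πkn), and these coincide.

```python
import math

def fss_cauchy(k, n=1):
    # FSS of {n*X} for X standard Cauchy: E e^{2*pi*i*k*n*X} = e^{-2*pi*|k|*n}
    return math.exp(-2 * math.pi * abs(k) * n)

n = 7
for k in range(-5, 6):
    lhs = fss_cauchy(k) ** n      # FSS of {X_1 + ... + X_n}
    rhs = fss_cauchy(k, n)        # FSS of {n X_1}
    assert abs(lhs - rhs) < 1e-15
print("{X_1 + ... + X_n} and {n X_1} have the same FSS for Cauchy X_i")
```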
Definition 5.41. Let X ∈ 𝒳[0,1) with FSS c. Then X is said to be stable (mod 1) if for each n ∈ ℕ there exists a real number β_n ∈ [0, 1) such that

{X₁ + ... + X_n − β_n} =_d X,   (5.23)

where X₁, ..., X_n are independent copies of X.
It is evident that (5.23) is equivalent to c(k) = (c(k) exp(−2πikβ_n/n))^n (k ∈ ℤ). Therefore, if X is stable (mod 1), then X is infdiv (mod 1).
In the following two lemmas we characterize stable (mod 1) distributions.
Lemma 5.42. The following statements are equivalent:
(a) X is stable (mod 1).
(b) X =_d U, or there exist r ∈ ℕ and ξ ∈ [0, 1/r) such that X =_d U_r + ξ.
Proof: Let X be stable (mod 1). Then, by (5.23),

c_X(k) = (c_X(k))^n exp(−2πikβ_n)   (k ∈ ℤ)

for some β_n ∈ [0, 1). If c_X(k) = 0 (k ∈ ℤ₀), then X =_d U. Alternatively, suppose X ≠_d U; then Corollary 5.13 applies. Conversely, it is clear that the rv's U and U_r + ξ are stable (mod 1). □
Lemma 5.43. The set of stable (mod 1) distributions coincides with the set of distributions that are limits of {X₁ + ... + X_n − β_n} as n → ∞, where (X_j)₁^∞ are iid rv's in 𝒳[0,1) and β_n ∈ [0, 1).
Proof: Let X₁ be stable (mod 1). Then equation (5.23) holds. Hence the distribution of X₁ is a limit of the distributions of {X₁ + ... + X_n − β_n} as n → ∞.
Suppose there exist a sequence (X_n)₁^∞ of iid rv's in 𝒳[0,1), Y ∈ 𝒳[0,1) and a sequence (β_n)₁^∞ such that {X₁ + ... + X_n − β_n} →_w Y (n → ∞), i.e. for k ∈ ℤ

(c_{X₁}(k))^n exp(−2πikβ_n) → c_Y(k)   (n → ∞).

If c_Y(k) = 0 for all k ∈ ℤ₀, then Y =_d U. Hence Y is stable (mod 1) (cf. Lemma 5.42). Alternatively, suppose that there exists an integer k ∈ ℕ such that c_Y(k) ≠ 0. Take r = min{k ∈ ℕ : c_Y(k) ≠ 0}. Then |c_{X₁}(r)| = 1, and thus |c_Y(r)| = 1. By Corollary 1.17(ii), Y is concentrated on {ε + j/r : j = 0, ..., r − 1} for some ε ∈ [0, 1/r). Lemma 1.16 yields c_Y(k) = 0 if k ∉ rℤ, and hence, by Theorem 5.4, Y =_d U_r + ε. Lemma 5.42 implies that Y is stable (mod 1). □
References
[1] AARSSEN, K., AND L. DE HAAN, "On the maximal life span of humans", Mathematical Population Studies, (1994), to appear.
[2] ABRAMOWITZ, M., AND I.A. STEGUN, Handbook of mathematical functions, Dover Publications, Inc., New York, ninth ed., 1972.
[3] ARNOLD, B.C., AND G. MEEDEN, "A characterization of the uniform distribution based on summation modulo one, with application to fractional backlogs", Australian Journal of Statistics, 18 (1976), pp. 173-175.
[4] BALLERINI, R., AND S.I. RESNICK, "Records in the presence of a linear trend", Advances in Applied Probability, 19 (1987), pp. 801-828.
[5] BARNDORFF-NIELSEN, O., "On the rate of growth of the partial maxima sequence of independent identically distributed random variables", Mathematica Scandinavica, 9 (1961), pp. 383-394.
[6] BARYSHNIKOV, Y., B. EISENBERG, AND G. STENGLE, "A necessary and sufficient condition for the existence of the limiting probability of a tie for first place", unpublished paper, (1994).
[7] BHATTACHARYA, R.N., "Speed of convergence of the n-fold convolution of a probability measure on a compact group", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 25 (1972), pp. 1-10.
[8] BILLINGSLEY, P., Probability and measure, Wiley, New York, 1979.
[9] BINGHAM, N.H., C.M. GOLDIE, AND J.L. TEUGELS, Regular variation, Cambridge University Press, Cambridge, 1987.
[10] BRANDS, J.J.A.M., An asymptotic problem in extremal processes, tech. report, RANA 91-10, Eindhoven University of Technology, Eindhoven, The Netherlands, 1991.
[11] BRANDS, J.J.A.M., F.W. STEUTEL, AND R.J.G. WILMS, "On the number of maxima in a discrete sample", Statistics and Probability Letters, 20 (1994), pp. 209-217.
[12] DENG, L.Y., AND E.O. GEORGE, "Some characterizations of the uniform distribution with applications to random number generation", Annals of the Institute of Statistical Mathematics, 44 (1992), pp. 379-385.
[13] DVORETZKY, A., AND J. WOLFOWITZ, "Sums of random integers reduced modulo m", Duke Mathematical Journal, 18 (1951), pp. 501-507.
[14] EISENBERG, B., G. STENGLE, AND G. STRANG, "The asymptotic probability of a tie for first place", Annals of Applied Probability, 3 (1993), pp. 731-745.
[15] ELLIOTT, P.D.T.A., Probabilistic number theory, vol. 1, Grundlehren der mathematischen Wissenschaften 239, Springer-Verlag, New York, 1979.
[16] FAINLEIB, A.S., "A generalization of Esseen's inequality and its application to probabilistic number theory", Izvestiya Akademii Nauk SSSR Seriya Matematicheskaya, 32 (1968), pp. 859-879.
[17] FELLER, W., An introduction to probability theory and its applications, vol. 1, Wiley, New York, second ed., 1965.
[18] --, An introduction to probability theory and its applications, vol. 2, Wiley, New York, second ed., 1971.
[19] GALAMBOS, J., The asymptotic theory of extreme order statistics, Krieger, Malabar, Florida, 1987.
[20] GENUGTEN, VAN DER, B.B., "The distribution of random variables reduced modulo a", Statistica Neerlandica, 26 (1972), pp. 1-13.
[21] GNEDENKO, B.V., "Sur la distribution limite du terme maximum d'une série aléatoire", Annals of Mathematics, 44 (1943), pp. 423-453.
[22] GNEDENKO, B.V., AND A.N. KOLMOGOROV, Limit distributions for sums of independent random variables, Addison-Wesley, Cambridge, Mass., USA, 1954.
[23] GOLDMAN, A.J., "Fractional container-loads and topological groups", Operations Research, 16 (1968), pp. 1218-1221.
[24] GRENANDER, U., Probabilities on algebraic structures, Wiley, New York, 1963.
[25] GYIRES, B., "On the sums of random variables mod 2π", in Proceedings of the 5th Pannonian Symposium on Mathematical Statistics, W. Grossman et al. (eds.), Visegrád, Hungary, 1985, pp. 75-86.
[26] HAAN, DE, L., On regular variation and its application to the weak convergence of sample extremes, PhD thesis, Mathematical Centre Tract 32, Mathematical Centre, Amsterdam, The Netherlands, 1970.
[27] HAAN, DE, L., AND S.I. RESNICK, "Local limit theorems for sample extremes", Annals of Probability, 10 (1982), pp. 396-413.
[28] HEYER, H., Probability measures on locally compact groups, Springer-Verlag, Berlin, 1977.
[29] HOLEWIJN, P.J., Contributions to the theory of asymptotic distribution modulo 1, PhD thesis, Delft University of Technology, Delft, The Netherlands, 1965.
[30] --, "Note on Weyl's criterion and the uniform distribution of independent random variables", Annals of Mathematical Statistics, 40 (1969), pp. 1124-1125.
[31] --, "On the uniform distribution of sequences of random variables", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 14 (1969), pp. 89-92.
[32] JAGERS, A.A., AND F.W. STEUTEL, "Problem 247 and solution", Statistica Neerlandica, 44 (1990), p. 180.
[33] KARAMATA, J., "Sur un mode de croissance régulière des fonctions", Mathematica (Cluj), 4 (1930), pp. 38-53.
[34] KAWADA, Y., AND K. ITÔ, "On the probability distributions on a compact group", Proceedings of the Physico-Mathematical Society of Japan, Series 3, 22 (1940), pp. 977-998.
[35] KAWATA, T., Fourier analysis in probability theory, Academic Press, London, 1972.
[36] KEMPERMAN, J.H.B., "Probability methods in the theory of distributions modulo one", Compositio Mathematica, 16 (1964), pp. 106-137.
[37] KOKSMA, J.F., "The theory of asymptotic distribution modulo one", in Nuffic international summer session in science, J.F. Koksma and L. Kuipers (eds.), Breukelen, The Netherlands, 1962, Noordhoff, pp. 1-22.
[38] KOLASSA, J.E., AND P. MCCULLAGH, "Edgeworth series for lattice distributions", Annals of Statistics, 18 (1990), pp. 981-985.
[39] KUIPERS, L., AND H. NIEDERREITER, Uniform distribution of sequences, Wiley, New York, 1974.
[40] LEWIS, T., "The factorisation of the rectangular distribution", Journal of Applied Probability, 4 (1967), pp. 529-542.
[41] LOSSERS, O.P., AND L. RADE, "Problem E3436 and solution", The American Mathematical Monthly, 101 (1994), pp. 78-80.
[42] LOYNES, R.M., "Some results in the probabilistic theory of asymptotic uniform distribution modulo 1", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 26 (1973), pp. 33-41.
[43] LUKACS, E., Characteristic functions, Griffin, London, 1970.
[44] LYONS, R., "Characterizations of measures whose Fourier-Stieltjes transforms vanish at infinity", Bulletin of the American Mathematical Society, 10 (1984), pp. 93-96.
[45] --, "Fourier-Stieltjes coefficients and asymptotic distribution modulo 1", Annals of Mathematics, 122 (1985), pp. 155-170.
[46] MARDIA, K.V., Statistics of directional data, Academic Press, London, 1972.
[47] MISES, VON, R., "La distribution de la plus grande de n valeurs", Selected Papers II, The American Mathematical Society, (1936), pp. 271-294.
[48] PARTHASARATHY, K.R., Probability measures on metric spaces, Academic Press, New York, 1967.
[49] PETROV, V.V., Sums of independent random variables, Springer-Verlag, Berlin, 1975.
[50] PICKANDS, J., "The continuous and differentiable domains of attraction of the extreme value distributions", Annals of Probability, 14 (1986), pp. 996-1004.
[51] RESNICK, S.I., Extreme values, regular variation and point processes, Springer-Verlag, New York, 1987.
[52] RESSEL, P., "Semigroups in probability theory", in Probability measures on groups, Oberwolfach, Germany, 1991, pp. 337-363.
[53] ROBBINS, H., "On the equidistribution of sums of independent random variables", Proceedings of the American Mathematical Society, 4 (1953), pp. 786-799.
[54] SCHATTE, P., "Zur Verteilung der Mantisse in der Gleitkommadarstellung einer Zufallsgrösse", Zeitschrift für Angewandte Mathematik und Mechanik, 53 (1973), pp. 553-565.
[55] --, "On sums modulo 2π of independent random variables", Mathematische Nachrichten, 110 (1983), pp. 243-262.
[56] --, "On the asymptotic uniform distribution of sums reduced mod 1", Mathematische Nachrichten, 115 (1984), pp. 275-281.
[57] --, "On the asymptotic uniform distribution of the n-fold convolution mod 1 of a lattice distribution", Mathematische Nachrichten, 128 (1986), pp. 233-241.
[58] --, "On mantissa distributions in computing and Benford's law", Journal of Information Processing and Cybernetics, 24 (1988), pp. 443-455.
[59] --, "On measures of uniformly distributed sequences and Benford's law", Monatshefte für Mathematik, 107 (1989), pp. 245-256.
[60] STAPLETON, J.H., "A characterization of the uniform distribution on a compact topological group", Annals of Mathematical Statistics, 34 (1963), pp. 319-326.
[61] STEUTEL, F.W., "Some recent results in infinite divisibility", Stochastic Processes and their Applications, 1 (1973), pp. 125-143.
[62] --, "On the tails of infinitely divisible distributions", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 28 (1974), pp. 273-276.
[63] --, "Infinite divisibility in theory and practice", Scandinavian Journal of Statistics, 6 (1979), pp. 57-64.
[64] STEUTEL, F.W., AND J.G.F. THIEMANN, "On the independence of integer and fractional parts", Statistica Neerlandica, 43 (1989), pp. 53-59.
[65] SWEETING, T.J., "On domains of uniform local attraction in extreme value theory", Annals of Probability, 13 (1985), pp. 196-205.
[66] TITCHMARSH, E.C., The theory of functions, Oxford University Press, London, second ed., 1960.
[67] TUCKER, H.G., A graduate course in probability, Academic Press, New York, 1967.
[68] WERMUTH, E.M.E., "Some elementary properties of infinite products", The American Mathematical Monthly, 99 (1992), pp. 530-537.
[69] WEYL, H., "Über die Gleichverteilung von Zahlen mod. Eins", Mathematische Annalen, 77 (1916), pp. 313-352.
[70] WHITTAKER, E.T., AND G.N. WATSON, A course of modern analysis, Cambridge University Press, New York, fourth ed., 1962.
[71] WILMS, R.J.G., Properties of Fourier-Stieltjes sequences of distributions with support in [0,1), tech. report, Memorandum COSOR 91-06, Eindhoven University of Technology, Eindhoven, The Netherlands, 1991.
[72] --, Infinitely divisible and stable distributions modulo 1, tech. report, Memorandum COSOR 93-22, Eindhoven University of Technology, Eindhoven, The Netherlands, 1993.
[73] --, "On the limiting distribution of fractional parts of extreme order statistics", in Extreme Value Theory and Applications, J. Galambos et al. (eds.), Gaithersburg, Maryland, USA, 1993, Kluwer Academic Publishers, pp. 433-446.
[74] WILMS, R.J.G., AND J.J.A.M. BRANDS, "On the asymptotically uniform distribution modulo 1 of extreme order statistics", Statistica Neerlandica, 48 (1994), pp. 63-70.
[75] WILMS, R.J.G., AND J.G.F. THIEMANN, "Characterizations of shift-invariant distributions based on summation modulo one", Australian Journal of Statistics, 36 (1994), to appear.
[76] WOLFE, S.J., "On moments of infinitely divisible distribution functions", Annals of Mathematical Statistics, 42 (1971), pp. 2036-2043.
Notation - Index
Abbreviations

chf      characteristic function, 6
df       distribution function, 4
FSS      Fourier-Stieltjes sequence, 6
iff      if and only if
iid      independent and identically distributed
infdiv   infinitely divisible, 125
mod a    modulo a (a > 0)
rv       random variable, 4
□        end of proof

Notations

ℕ            the set {1, 2, ...} of positive integers
ℤ            the set of integers; ℤ₀ := ℤ \ {0}
ℝ            the set of real numbers; ℝ₊ := {x ∈ ℝ : x ≥ 0}
ℂ            the set of complex numbers
[a, b]       the closed interval {x ∈ ℝ : a ≤ x ≤ b}
(a, b)       the open interval {x ∈ ℝ : a < x < b}; similarly, [a, b) and (a, b]
⌈x⌉          n if x ∈ (n−1, n] with n ∈ ℤ
β_k(x)       e^{2πikx} (k ∈ ℤ, x ∈ ℝ)
Φ(x)         (2π)^{-1/2} ∫_{-∞}^x e^{-y²/2} dy (x ∈ ℝ)
Λ(x)         exp(−e^{-x}) (x ∈ ℝ)
gcd(p, q)    the greatest common divisor of p and q; gcd(p, 0) := p (p, q ∈ ℕ)
lcm(p, q)    the least common multiple of p and q (p, q ∈ ℕ)
ξA           the set {ξa : a ∈ A} (A ⊂ ℝ, ξ ∈ ℝ)
1_A          the indicator function of the set A (A ⊂ ℝ)
O, o, ~      Landau's O, o and ~ symbols
α(F)         inf{x ∈ ℝ : F(x) > 0}, the left endpoint of a df F
ω(F)         sup{x ∈ ℝ : F(x) < 1}, the right endpoint of a df F
X ~ Exp(λ)   a rv X is exponentially distributed with 𝔼X = 1/λ (λ > 0)
X ~ N(μ, σ²) a rv X is normally distributed with 𝔼X = μ and Var X = σ² (μ ∈ ℝ, σ > 0)

Conventions

{x}          the fractional part of x (x ∈ ℝ), 6
[x]          the integer part of x (x ∈ ℝ), 6
(n over k)   the binomial coefficient n!/(k!(n−k)!) (n, k ∈ ℕ ∪ {0}, n ≥ k)
→_w          weak convergence, 4
→_rw         reduced weak convergence, 5
→_rd         reduced convergence in distribution, 5
=_d          equality in distribution, 4
φ_X          the chf of a rv X, 6
ψ_X          the Laplace-Stieltjes transform of a rv X, 6
c_X, c_F     the FSS of a rv X and a df F, respectively, 6
𝒳[0,1)       the set of rv's with df in ℱ([0,1)), 4
ℱ(A)         the set of df's with support in A (A ⊂ ℝ), 4
F_X, f_X     the df and the density of a rv X, respectively, 4
F̃, F̄         the reduced and the conjugate df of F, respectively, 5, 10
F ∗ G        convolution of df's F and G, 9
F ⊛ G        convolution (mod 1) of df's F and G, 9
Λ, Φ_α, Ψ_α  extreme value distributions, 63
F ∈ D(G)     the df F belongs to the domain of attraction of an extreme value df G, 62
F ∈ D_m(G)   the df F belongs to the m-times differentiable domain of attraction of an extreme value df G, 96
C^m(A)       the set of m-times continuously differentiable functions on A ⊂ ℝ (m ∈ ℕ), 17
ℓ ∈ RV_ρ     the function ℓ is regularly varying at ∞ with index ρ (ρ ∈ ℝ), 63
G^←          the (generalized) inverse of a function G : ℝ → ℝ, 40
A(F, X)      the set of sequences (X_j)₁^∞ that are independent copies of a rv X with df F and ω(F) = ∞, 38
Rep(X)       the replication number of a rv X, 115
U            a rv with (continuous) uniform distribution on [0,1), 13
U_r          a rv with discrete uniform distribution on [0,1), 13
Author-Index
Aarssen, K. 61
Abramowitz, M. 17,30,74,97
Arnold, B.C. 3,115,120,121
Ballerini, R. 61
Barndorff-Nielsen, O. 49
Baryshnikov, Y. 47,51
Bhattacharya, R.N. 20
Billingsley, P. 4,27
Bingham, N.H. 59,62
Brands, J.J.A.M. 47,51,52,56
Deng, L.Y. 3,123
Dvoretzky, A. 4
Eisenberg, B. 47,48,51
Elliott, P.D.T.A. 7,10,20
Fainleib, A.S. 20
Feller, W. 7,9,12,62,115,125,136,138,139
Galambos, J. 62,64,65,69,72,102
Genugten, van der, B.B. 4
George, E.O. 3,123
Gnedenko, B.V. 62,125,127
Goldie, C.M. 59,62
Goldman, A.J. 3,115,120,121
Grenander, U. 10
Gyires, B. 4
Haan, de, L. 61-64,70,97,104
Heyer, H. 10
Holewijn, P.J. 1,2,15
Itô, K. 20
Jagers, A.A. 2,47
Karamata, J. 62
Kawada, Y. 20
Kawata, T. 7,8,10,33
Kemperman, J.H.B. 1
Koksma, J.F. 1
Kolassa, J.E. 3,79,89
Kolmogorov, A.N. 25,125,127
Kuipers, L. 14,15
Lewis, T. 26
Lossers, O.P. 2,47
Loynes, R.M. 1
Lukacs, E. 7-9,94,125,129,132,133,135,136,139
Lyons, R. 4
Mardia, K.V. 4,7,8,10,20,139
McCullagh, P. 3,79,89
Meeden, G. 3,115,120,121
Mises, von, R. 65
Niederreiter, H. 14,15
Parthasarathy, K.R. 125
Petrov, V.V. 94,125
Pickands, J. 96,97
Rade, L. 2,47
Resnick, S.I. 39,40,61-64,67,70,73,74,97,99,103-105,110
Ressel, P. 125
Robbins, H. 4
Schatte, P. 2,3,5,11,16,18-20,22,23,115,124,127,128,131
Stapleton, J.H. 123
Stegun, I.A. 17,30,74,97
Stengle, G. 47,48,51
Steutel, F.W. 2,47,51,52,91,125,137
Strang, G. 47,48,51
Sweeting, T.J. 97
Teugels, J.L. 59,62
Thiemann, J.G.F. 91,115
Titchmarsh, E.C. 24
Tucker, H.G. 19,25
Watson, G.N. 17
Wermuth, E.M.E. 21
Weyl, H. 1
Whittaker, E.T. 17
Wilms, R.J.G. 10,47,51,52,61,115,125
Wolfe, S.J. 133
Wolfowitz, J. 4
Subject-Index
absolutely continuous 4
asymptotically negatively correlated 80
asymptotically uncorrelated 80
asymptotically uniform in distribution (mod 1) 16
  maxima 52,69
  products 30
  sums 20,26
athletic records 61
auxiliary function 59
Benford's law 3
binomial coefficient 148
Borel-Cantelli lemma 49
bounded variation 127
canonical representation 125
  Kolmogorov 126,129,133
  Lévy-Khinchine 125,129,132
  Lévy 126,129
central limit theorem 125
characteristic function 6
coin tossing 47
completely monotone 138
continuity theorem 8
convergence on a circle 5
convergent 23
convolution 9
convolution (mod 1) 9
convolution theorem 9
Coriolis' conditions 24
cumulative hazard function 40,52,97
De Finetti's theorem 136
dense 59,75
density 4
disc 4
distribution function 4
  conjugate 10
  reduced 5
distribution
  arithmetic 12
  Cauchy 29,65,125,139
  compound Poisson 136
  discrete 11
  exponential 6,29,40,47,52,64,65,77,91,133,135,148
  Gamma 35,95,125
  geometric 47,65,125,135
  integer-valued 46-51,57,65,137
  lattice 11
  lognormal 65
  normal 65,77,95,125,139,147,148
  Pareto 57,65,73,111
  Poisson 65,125
divergent 23
domain of attraction 62
  differentiable 97
endpoint
  left 147
  right 147
Euler's constant 104
Euler-MacLaurin sum formula 17,80,82,106
extreme value distributions 63,96
extreme value theory 61,96
factorization problems 125
Faà di Bruno's formula 30,97
first significant digit 3
Fourier transform 7
Fourier-Stieltjes sequence 6,7
fractional backlog 120
fractional part 6
fractional parts of
  maxima 31
  products 30
  sums 20,23
Gamma function 40,74
golf tournament 48
greatest common divisor 147
hazard rate 40,57,72,76
high tide levels 61
indicator function 147
inequality of Fainleib 20
infinite divisibility (mod 1) 124,131,137
infinite smallness condition 124
infinitely divisible 125
infinitely divisible (mod 1) 126
integer part 6
inverse (generalized) 40
inversion formula 10
Karamata's representation 63,104
Landau's symbols 147
Laplace-Stieltjes transform 6
lattice 12,22
least common multiple 147
life span 61
local uniform convergence 69,97
logarithmic series distribution 51
mantissa distribution 3
metric 76
minimum 41
moments of maxima 103
needle 4
norming constants 62,97
number of maxima 46,47
number theory 14
Poincaré's theorem 4
pre-packing problem 4
probability generating function 51
probability space 4
random number generator 123
random variable 4
rate of convergence 20,57,94,96,100,105,110,113
reduced convergence in distribution 5
reduced weak convergence 5
regular variation 59,61
regularly varying 63
remaining life 64
replication number 115
Riemann-Lebesgue lemma 7
rounding error 89
same type 62
Scheffé's theorem 27
self-decomposability 139
self-decomposability (mod 1) 139
slowly varying 63
span 12
stable 139
stable (mod 1) 139
statistical modeling 125
tail equivalent 64,67,70,76
  locally 45
three series theorem 25
type
  Fréchet 63
  Gumbel 38,63
  Weibull 63
uniform distribution
  continuous 13,91,95,116
  discrete 13,91,116
uniformly distributed (mod 1) 14
  almost surely 15
uniqueness theorem 7
Von Mises condition 65,69,103
weak convergence 4
weighted sums 28
Weyl criterion 14,15
Samenvatting (Summary)
This dissertation treats the behaviour of fractional parts of random variables, in particular the limit behaviour of a sequence of fractional parts, and the behaviour with respect to infinite divisibility in the modulo 1 (mod 1) sense. The fractional part of a real number is defined as the difference between that number and its integer part. In the study of the behaviour of a sequence of fractional parts of random variables, the notion of asymptotic uniform distribution (mod 1) plays an important role. The development of asymptotic uniform distribution (mod 1) began in number theory. In the 1960s, methods and results from probability theory were used to develop further the theory of distribution (mod 1) and the theory of asymptotic uniform distribution (mod 1).
Schatte devoted much attention to asymptotic uniformity in distribution (mod 1). A sequence of random variables is asymptotically uniform in distribution (mod 1) if the sequence of fractional parts of these random variables converges in distribution to a uniformly distributed random variable on [0,1). Schatte studied asymptotic uniformity in distribution (mod 1) of sums of independent, identically distributed random variables. A generalization of this, the study of sums of independent, not necessarily identically distributed, random variables, had however received no attention. Schatte also considered infinite divisibility (mod 1) by means of the notion of replication number; the importance of this notion has, however, remained underexposed.
Classical extreme value theory, for which many mathematical methods have been developed, plays an important role in probability theory. The study of fractional parts of maxima, on the other hand, is new, and mathematical tools from the classical theory can often be applied to it. Fractional parts of random variables have been studied in the context of Operations Research, and they play a role in the generation of (pseudo-)random numbers and in the analysis of rounding errors.
The aim of this dissertation is to contribute to the development of the theory of distribution (mod 1) and that of asymptotic uniformity in distribution (mod 1). Chapter 1 gives a short overview of some definitions of asymptotic uniform distribution (mod 1) that have been studied in the literature. This chapter also discusses properties of Fourier-Stieltjes sequences; these sequences are of essential importance in the study of fractional parts of random variables.
The main part of this dissertation, namely Chapters 2, 3 and 4, treats the limit behaviour of a sequence of fractional parts of random variables. Chapter 2 first considers the limit behaviour of sums of independent, not necessarily identically distributed, random variables. This chapter also determines conditions for the convergence of products of independent, identically distributed random variables to the uniformly distributed limit.
Chapter 3 studies maxima of independent, identically distributed random variables. In order to give sufficient conditions for the convergence of a sequence of fractional parts of maxima, asymptotic properties of the number of maxima in a sample of positive integer-valued (independent, identically distributed) random variables are investigated. To guarantee asymptotic uniformity in distribution (mod 1), conditions on the hazard rate of the underlying distribution are determined. Contrary to what one would expect, it is shown here that the limit behaviour of fractional parts of maxima is not directly related to this theory.
Chapter 4 considers the limit behaviour of covariances of the fractional part and the integer part of a random variable, and of covariances of the fractional part of a random variable and the random variable itself. The following three types of random variables are examined: sums and maxima of independent, identically distributed random variables, and multiples of a random variable. Sufficient conditions are determined under which the fractional part and the integer part are asymptotically maximally correlated, and under which the fractional part of a random variable and the random variable itself are asymptotically uncorrelated.
Finally, Chapter 5 is devoted to infinite divisibility in the modulo 1 sense. To obtain a clearer insight into infinite divisibility (mod 1), a new characterization of infinitely divisible distributions (mod 1) is given. To this end, representation theorems analogous to the well-known canonical representations are derived, together with a characterization of distributions with a finite replication number. The latter characterization is essential in the study of a generalization of a certain stochastic equation. Stable distributions in the modulo 1 sense are also briefly discussed.
Curriculum Vitae
The author of this dissertation, Roeland Joannes Gerardus Wilms, was born on 2 October 1964 in Maasbracht. In 1983 he obtained his VWO diploma at the Bisschoppelijk College in Echt, and in the same year he began his study of mathematics at the Katholieke Universiteit te Nijmegen. At the end of 1988 he graduated in the applied track Statistics and Operations Research. He first worked on an internship assignment carried out at the Mathematisch-Statistische Adviesafdeling in Nijmegen under the responsibility of drs. Ph. van Elteren, commissioned by KEMA in Arnhem. Under contract with KEMA this work was continued for four months. In addition, he wrote a thesis on a mathematical-economic subject under the supervision of dr. J.A.M. Potters and drs. P.E.M. Dorm.
After completing his military service, he was appointed in September 1990 as a research assistant (assistent in opleiding) in the Besliskunde en Stochastiek group at the Technische Universiteit te Eindhoven. This dissertation reflects the research carried out there during four years under the supervision of prof.dr. F.W. Steutel and dr. J.G.F. Thiemann.
STELLINGEN (Propositions) accompanying the dissertation
Fractional parts of random variables Limit theorems and infinite divisibility
by
Roel Wilms
1. For a given positive number x, let d(x) be the first significant decimal digit, i.e.

    d(x) := [ 10^({log_10 x}) ],

where [y] is the integer part of y, and {y} := y - [y] the fractional part of y. We say that a sequence of random variables (Z_n)_{n>=1} satisfies 'Benford's law' if

    lim_{n->oo} P(d(Z_n) = k) = log_10((k+1)/k)    (k = 1, ..., 9).

If {log_10 Z_n} converges in distribution, as n -> oo, to a uniformly distributed random variable on [0,1), then (Z_n)_{n>=1} satisfies Benford's law. For Z_n := max(X_1, ..., X_n), with (X_j)_{j>=1} a sequence of independent, identically distributed random variables, sufficient conditions can be derived under which (Z_n)_{n>=1} satisfies Benford's law (see Chapter 3 of this dissertation).
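The definition of d(x) above can be checked mechanically; the following is a minimal sketch in Python (the function names are mine, not the dissertation's):

```python
import math

def integer_part(y):
    """[y], the integer part of y."""
    return math.floor(y)

def fractional_part(y):
    """{y} := y - [y], the fractional part of y."""
    return y - math.floor(y)

def first_significant_digit(x):
    """d(x) = [10^({log10 x})], the first significant decimal digit of x > 0."""
    return integer_part(10 ** fractional_part(math.log10(x)))

def benford_prob(k):
    """Limiting probability log10((k+1)/k) of first digit k under Benford's law."""
    return math.log10((k + 1) / k)
```

Note that the limiting probabilities log_10((k+1)/k) sum to 1 over k = 1, ..., 9, by telescoping.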
2. Let m in N \ {1}, and let X and Y be independent random variables taking values only in the set {0, ..., m-1}. Rizzi ([1]) and Scozzafava ([2]) give formulas for the probability distribution, the expectation and the variance of S := X + Y (mod m). Scozzafava moreover studies T := X - Y (mod m), and by analogous computations derives formulas for the probability distribution, the expectation and the variance of T. These last computations become superfluous by writing T as X + Z (mod m), where Z := -Y (mod m).

[1] Rizzi, A., Some theorems on the sum modulo m of two independent random variables, Metron, 48 (1990), pp. 149-160.

[2] Scozzafava, P., Sum and difference modulo m between two independent random variables, Metron, 49 (1991), pp. 495-511.
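The reduction of T to a sum is a pointwise identity on residues, so no new calculations are needed; a quick exhaustive check in Python (helper names are mine):

```python
def diff_mod(x, y, m):
    """T = X - Y (mod m), computed directly."""
    return (x - y) % m

def sum_with_negated(x, y, m):
    """The same T written as X + Z (mod m) with Z := -Y (mod m)."""
    z = (-y) % m
    return (x + z) % m

# The two expressions agree for every pair of residues, so the distributions
# of T and of X + Z (mod m) coincide for any independent X and Y.
m = 7
all_equal = all(diff_mod(x, y, m) == sum_with_negated(x, y, m)
                for x in range(m) for y in range(m))
```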
3. Let m, X, Y and S be given as in Proposition 2. Suppose that m is prime, that P(X = k), P(Y = k) != 0 (k = 0, ..., m-1), and that S is uniformly distributed over the set {0, ..., m-1}, i.e. P(S = k) = 1/m (k = 0, ..., m-1). Scozzafava ([3]) proves that under these conditions X or Y is uniformly distributed over this set.

In contrast with the above, for every integer m it is possible to construct independent, non-uniformly distributed random variables X and Y such that X + Y (mod m) is uniformly distributed over the set {0, ..., m-1}.

[3] Scozzafava, P., Uniform distribution and sum modulo m of independent random variables, Statistics and Probability Letters, 18 (1993), pp. 313-314.
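One concrete instance of such a construction (my own illustration for m = 4, not necessarily the construction intended in the proposition): let X be uniform on {0, 2} and Y uniform on {0, 1}. Neither is uniform on {0, 1, 2, 3}, yet X + Y (mod 4) is. A sketch with exact arithmetic:

```python
from fractions import Fraction
from itertools import product

m = 4
px = {0: Fraction(1, 2), 2: Fraction(1, 2)}  # X: mass on {0, 2} only
py = {0: Fraction(1, 2), 1: Fraction(1, 2)}  # Y: mass on {0, 1} only

# Exact distribution of S = X + Y (mod m) by direct convolution.
ps = {k: Fraction(0) for k in range(m)}
for (x, p), (y, q) in product(px.items(), py.items()):
    ps[(x + y) % m] += p * q
```

This does not contradict the first part of the proposition: here m is not prime, and some of the probabilities P(X = k), P(Y = k) vanish.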
4. Consider the following problem: toss n coins, each with probability p of 'heads'; then continue tossing only those coins that did not come up 'heads', and so on, until all coins have come up 'heads'. Råde ([5]) is interested in the behaviour, as n -> oo, of the number of coins K_n tossed in the last round. The random variable K_n does not converge in distribution, but for large n and p not close to 1, K_n is well approximated by a logarithmic series distribution (cf. [4]).

[4] Brands, J.J.A.M., F.W. Steutel and R.J.G. Wilms, On the number of maxima in a discrete sample, Statistics and Probability Letters, 20 (1994).

[5] Lossers, O.P. and L. Råde, Problem E3436 and solution, The American Mathematical Monthly, (1994), pp. 78-80.
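The quantity K_n is easy to simulate; a minimal Monte Carlo sketch in Python (the function name and interface are mine):

```python
import random

def last_round_count(n, p, rng=None):
    """Toss n coins with P(heads) = p; re-toss only the coins that are not
    yet heads, until every coin has come up heads. Return K_n, the number
    of coins tossed in the final round (assumes 0 < p <= 1)."""
    rng = rng or random.Random()
    remaining = n
    while True:
        heads = sum(rng.random() < p for _ in range(remaining))
        if heads == remaining:   # all remaining coins show heads: last round
            return remaining
        remaining -= heads
```

For p = 1 every coin shows heads immediately, so K_n = n; for n = 1 the last round always consists of the single remaining coin.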
5. Let c be a Fourier-Stieltjes sequence, and let (c_n) be a sequence of infinitely divisible (mod 1) Fourier-Stieltjes sequences with finite replication number r in N, such that c_n -> c as n -> oo. It is known that c_n can be represented by

    c_n(k) = exp( i k alpha_n + int_{[0,1)} (e^{2 pi i k x} - 1) dH_n(x) )    if k is a multiple of r,
    c_n(k) = 0                                                               otherwise,

with alpha_n in [0, 2 pi), and H_n a non-decreasing bounded function on [0,1) such that H_n(0) = 0 (see Chapter 5 of this dissertation).

Then c is infinitely divisible (mod 1) with replication number m if and only if

    int_{[0,1)} (1 - cos 2 pi k x) / (1 - cos 2 pi x) dH_n(x) -> oo    (n -> oo)

precisely when k is not a multiple of m (see [7]). Schatte ([6]) proves this proposition for r = 1.

[6] Schatte, P., On sums modulo 2 pi of independent random variables, Mathematische Nachrichten, 110 (1983), pp. 213-262.

[7] Wilms, R.J.G., Infinitely divisible and stable distributions modulo 1, technical report, Memorandum COSOR 93-22, TU Eindhoven, 1993.
6. Let V be a random variable with Fourier-Stieltjes sequence c satisfying c(k) != 0 for all k in Z. Let V and W be independent random variables with values in [0,1), and let X := {V + W}, where {y} is the fractional part of y. A random variable that is uniformly distributed over the interval [0,1) is denoted by U.

Then X =_d U if and only if W =_d U (see Chapter 5 of this dissertation). This result generalizes Theorem 2 of Deng and George ([8]).

[8] Deng, L.Y. and E.O. George, Generation of uniform variates from several nearly uniformly distributed variables, Communications in Statistics - Simulation and Computation, 19 (1990), pp. 143-155.
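The 'if' direction can be illustrated exactly on a lattice (my own discretization, with hypothetical values for V): if W is uniform on {0, 1/n, ..., (n-1)/n}, then X = {V + W} is uniform on that lattice for any independent V supported on it. Working with the lattice indices:

```python
from fractions import Fraction

n = 6
# Index j stands for the lattice point j/n.
pv = {0: Fraction(1, 2), 1: Fraction(1, 3), 4: Fraction(1, 6)}  # arbitrary V
pw = {j: Fraction(1, n) for j in range(n)}                      # uniform W

# Distribution of X = {V + W}: addition of the indices modulo n.
px = {j: Fraction(0) for j in range(n)}
for a, p in pv.items():
    for b, q in pw.items():
        px[(a + b) % n] += p * q
```

The converse direction (X uniform forces W uniform) is the substance of the proposition and uses the assumption c(k) != 0 for all k.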
7. The quality of the music of every well-known symphonic rock group whose distinctive lead singer has started a solo career has declined sharply. The popularity of the group, by contrast, has increased sharply.
8. Just as designated strips have been laid out for walking dogs, designated strips should be laid out for launching kites.
9. In a shopping centre, the opening hours of all shops should be the same.
10. When an organization functions inadequately, the difference between management and the 'shop floor' is that a manager who withdraws for personal reasons receives a golden handshake, while the lower-ranking employee gets an iron boot.
11. In running, speed is best increased by lengthening the stride; the number of strides per unit of time then remains the same (see also proposition 8 accompanying the dissertation [9]). In walking, by contrast, speed is best increased by raising the number of strides per unit of time. There is a simple explanation for this.

[9] Hansen, B.G. (1988), Monotonicity properties of infinitely divisible distributions, PhD thesis, TU Eindhoven.