Complex Brownian Motion Representation of the Dyson Model

Makoto Katori∗ and Hideki Tanemura†

3 August 2011

arXiv:1008.2821v3 [math.PR] 3 Aug 2011
Abstract

Dyson's Brownian motion model with the parameter β = 2, which we simply call the Dyson model in the present paper, is realized as an h-transform of the absorbing Brownian motion in a Weyl chamber of type A. Depending on the initial configuration of the Dyson model with a finite number of particles, we define a set of entire functions and introduce a martingale for a system of independent complex Brownian motions (CBMs), which is expressed by a determinant of a matrix whose elements are given by the conformal transformations of the CBMs by the entire functions. We prove that the Dyson model can be represented by the system of independent CBMs weighted by this determinantal martingale. From this CBM representation, the Eynard-Mehta-type correlation kernel is derived and the Dyson model is shown to be determinantal. The CBM representation is a useful extension of the h-transform, since it also works for infinite particle systems. Using this representation, we prove the tightness of a series of processes which converges to the Dyson model with an infinite number of particles, and the noncolliding property of the limit process.

AMS 2000 Subject classifications: 60J70, 60G44, 82C22, 32A15, 15A52
Keywords: the Dyson model, h-transform, complex Brownian motions, entire functions, conformal martingales, determinantal process, tightness, noncolliding property

1  Introduction and Results

Dyson's Brownian motion model is a one-parameter family of systems of one-dimensional Brownian motions with long-ranged repulsive interactions, whose strength is represented by a parameter β > 0. The system solves the following stochastic differential equations (SDEs),

$$dX_i(t) = dB_i(t) + \frac{\beta}{2} \sum_{1 \le j \le n,\, j \ne i} \frac{dt}{X_i(t) - X_j(t)}, \qquad 1 \le i \le n, \quad t \in [0, \infty), \tag{1.1}$$

∗ Department of Physics, Faculty of Science and Engineering, Chuo University, Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan; e-mail: [email protected]
† Department of Mathematics and Informatics, Faculty of Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan; e-mail: [email protected]

where B_i(t)'s are independent one-dimensional standard Brownian motions [3, 19]. In the present paper we consider the case with β = 2, since in this special case the system is realized by the following three processes [9, 10]:

(i) the process of eigenvalues of the Hermitian matrix-valued diffusion process in the Gaussian unitary ensemble (GUE) [3, 16, 5],

(ii) the system of one-dimensional Brownian motions conditioned never to collide with each other [6],

(iii) the harmonic transform of the absorbing Brownian motion in the Weyl chamber of type A_{n−1} [6], W^A_n = {x ∈ R^n : x_1 < x_2 < · · · < x_n}, with a harmonic function given by the Vandermonde determinant

$$h(\mathbf{x}) = \prod_{1 \le i < j \le n} (x_j - x_i) = \det_{1 \le i,j \le n}\left[x_i^{j-1}\right]. \tag{1.2}$$

Let M be the space of nonnegative integer-valued Radon measures on R, whose element is identified with a particle configuration ξ = Σ_i δ_{x_i}. For ξ ∈ M and u ∈ supp ξ = {x ∈ R : ξ({x}) > 0}, we introduce the entire function

$$\Phi^{u}_{\xi}(z) = \prod_{u' \in \operatorname{supp} \xi,\, u' \ne u} \frac{u' - z}{u' - u}, \qquad z \in \mathbb{C}. \tag{1.3}$$

The zero set of the function (1.3) is supp ξ ∩ {u}^c.
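As an illustration (not part of the original paper), the SDEs (1.1) with β = 2 can be simulated by a simple Euler-Maruyama scheme; the function and parameter names below are ours, and the final assertion reflects the noncolliding property of realization (ii):

```python
import numpy as np

def dyson_step(x, dt, beta=2.0, rng=None):
    """One Euler-Maruyama step for the SDEs (1.1)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    diff = x[:, None] - x[None, :]        # X_i(t) - X_j(t)
    np.fill_diagonal(diff, np.inf)        # drop the j = i term
    drift = (beta / 2.0) * np.sum(1.0 / diff, axis=1)
    return x + drift * dt + np.sqrt(dt) * rng.standard_normal(n)

rng = np.random.default_rng(0)
x = np.array([-1.0, 0.0, 1.0])            # n = 3 particles, ordered
for _ in range(500):
    x = dyson_step(x, dt=1e-4, rng=rng)
# for beta = 2 the particles repel and the order is preserved
assert np.all(np.diff(x) > 0)
```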

With the SDEs (1.1), we consider the diffusion process Ξ(t) = Σ_{i∈I} δ_{X_i(t)} in M, and the process under the initial configuration ξ = Σ_{i∈I} δ_{x_i} ∈ M is denoted by (Ξ(t), P_ξ). We write the expectation with respect to P_ξ as E_ξ. We introduce a filtration {F(t)}_{t∈[0,∞)} on the space C([0,∞) → M) defined by F(t) = σ(Ξ(s), s ∈ [0, t]). Let C_0(R) be the set of all continuous real-valued functions with compact supports. For any integer M ∈ N, a sequence of times t = (t_1, t_2, ..., t_M) with 0 < t_1 < · · · < t_M < T < ∞, and a sequence of functions f = (f_{t_1}, f_{t_2}, ..., f_{t_M}) ∈ C_0(R)^M, the moment generating function of the multitime distribution of (Ξ(t), P_ξ) is defined by

$$\Psi^{\mathbf{t}}_{\xi}[\mathbf{f}] \equiv \mathrm{E}_{\xi}\left[\exp\left\{\sum_{m=1}^{M} \int_{\mathbb{R}} f_{t_m}(x)\, \Xi(t_m, dx)\right\}\right]. \tag{1.4}$$

We put M_0 = {ξ ∈ M : ξ({x}) ≤ 1 for any x ∈ R}. Since any element ξ of M_0 is determined uniquely by its support, it is identified with a countable subset {x_i}_{i∈I} of R. For a finite index set I and v = (v_i)_{i∈I}, v_i ∈ R, let Z_i(t), t ≥ 0, i ∈ I, be a sequence of independent complex Brownian motions (CBMs) on a probability space (Ω, F, P_v) with Z_i(0) = v_i, i ∈ I. We write the expectation with respect to P_v as E_v. The real part and the imaginary part of Z_i(t) are denoted by V_i(t) = Re Z_i(t) and W_i(t) = Im Z_i(t), respectively, i ∈ I; they are independent one-dimensional standard Brownian motions. For any sequence (u_i)_{i∈I} and x ∈ R, if we set ξ = Σ_{i∈I} δ_{u_i}, then Φ^x_ξ(Z_i(·)), i ∈ I, are independent conformal local martingales, since Φ^x_ξ is an entire function, and each of them is a time change of a CBM [18]:

$$\mathrm{E}_{\mathbf{v}}\left[\Phi^{x}_{\xi}(Z_i(t)) \,\middle|\, \mathcal{F}(s)\right] = \Phi^{x}_{\xi}(Z_i(s)), \qquad 0 \le s \le t. \tag{1.5}$$

A key observation for the present study is that the equality

$$\det_{1 \le i,j \le \xi(\mathbb{R})}\left[\Phi^{u_i}_{\xi}(z_j)\right] = \det_{1 \le i,j \le \xi(\mathbb{R})}\left[\prod_{1 \le k \le \xi(\mathbb{R}),\, k \ne i} \frac{u_k - z_j}{u_k - u_i}\right] = \frac{h(\mathbf{z})}{h(\mathbf{u})} \tag{1.6}$$

holds for any ξ = Σ_{i=1}^{ξ(R)} δ_{u_i} with ξ(R) ∈ N, u = (u_1, ..., u_{ξ(R)}) ∈ W^A_{ξ(R)}, and z = (z_1, ..., z_{ξ(R)}) ∈ C^{ξ(R)}. It implies that from the harmonic function h(z) given by (1.2) we have a martingale for a system of independent CBMs {Z_i(·) : 1 ≤ i ≤ ξ(R)} in the determinantal form det_{1≤i,j≤ξ(R)}[Φ^{u_i}_ξ(Z_j(·))].

Let 1(ω) be the indicator function of a condition ω: 1(ω) = 1 if ω is satisfied and 1(ω) = 0 otherwise, and let I_p = {1, 2, ..., p} for p ∈ N. The main theorem of the present paper is the following.

Theorem 1.1  Suppose that ξ = Σ_{i=1}^{ξ(R)} δ_{u_i} ∈ M_0 with ξ(R) ∈ N. Let 0 < t < T < ∞. For any F(t)-measurable bounded function F we have

$$\mathrm{E}_{\xi}\big[F(\Xi(\cdot))\big] = \mathrm{E}_{\mathbf{u}}\left[F\left(\sum_{i=1}^{\xi(\mathbb{R})} \delta_{V_i(\cdot)}\right) \det_{1 \le i,j \le \xi(\mathbb{R})}\left[\Phi^{u_i}_{\xi}(Z_j(T))\right]\right]. \tag{1.7}$$
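The determinantal identity (1.6), together with the normalization Φ^{u_i}_ξ(u_j) = δ_{ij} implied by (1.3), can be verified numerically; the sample points below are ours:

```python
import numpy as np

def h(v):
    """Vandermonde product h(v) = prod_{i<j} (v_j - v_i), cf. (1.2)."""
    n = len(v)
    out = 1.0 + 0.0j
    for i in range(n):
        for j in range(i + 1, n):
            out = out * (v[j] - v[i])
    return out

def phi(u, i, z):
    """Phi^{u_i}_xi(z) = prod_{k != i} (u_k - z)/(u_k - u_i), cf. (1.3)."""
    return np.prod([(u[k] - z) / (u[k] - u[i])
                    for k in range(len(u)) if k != i])

u = np.array([-1.0, 0.5, 2.0])                       # u in the Weyl chamber
z = np.array([0.2 + 0.3j, -1.5 + 1.0j, 3.0 - 0.7j])  # z in C^3
M = np.array([[phi(u, i, z[j]) for j in range(3)] for i in range(3)])
assert np.allclose(np.linalg.det(M), h(z) / h(u))    # identity (1.6)
Id = np.array([[phi(u, i, u[j]) for j in range(3)] for i in range(3)])
assert np.allclose(Id, np.eye(3))                    # Phi^{u_i}(u_j) = delta_ij
```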


In particular, the moment generating function (1.4) is given by

$$\Psi^{\mathbf{t}}_{\xi}[\mathbf{f}] = \sum_{p=0}^{\xi(\mathbb{R})} \sum_{J_1, J_2, \ldots, J_M \subset I_p} \mathbf{1}\!\left(\bigcup_{m=1}^{M} J_m = I_p\right) \int_{\mathbb{W}^{\mathrm{A}}_p} \xi^{\otimes p}(d\mathbf{v})\, \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in J_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in I_p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right], \tag{1.8}$$

where χ_{t_m}(·) = e^{f_{t_m}(·)} − 1, 1 ≤ m ≤ M.

We call the above results the complex Brownian motion (CBM) representations of the Dyson model. To show the simplest application of this representation, we consider the density function at a single time for (Ξ(t), P_ξ), denoted by ρ_ξ(t, x). It is defined as a continuous function of x ∈ R for 0 < t < T such that for any χ ∈ C_0(R)

$$\mathrm{E}_{\xi}\left[\int_{\mathbb{R}} \chi(x)\, \Xi(t, dx)\right] = \int_{\mathbb{R}} dx\, \chi(x)\, \rho_{\xi}(t, x). \tag{1.9}$$

Since E_v[Φ^{v_i}_ξ(Z_j(T))] = E_v[Φ^{v_i}_ξ(Z_j(0))] = Φ^{v_i}_ξ(v_j) = δ_{ij} by (1.3) and (1.5), the equality (1.7) gives the following expression for (1.9):

$$\int_{\mathbb{R}} \xi(dv)\, \mathrm{E}_{v}\left[\chi(V(t))\, \Phi^{v}_{\xi}(Z(t))\right] = \int_{\mathbb{R}} \xi(dv) \int_{\mathbb{R}} dx\, p_{0,t}(v, x) \int_{\mathbb{R}} dw\, p_{0,t}(0, w)\, \chi(x)\, \Phi^{v}_{\xi}\big(x + \sqrt{-1}\, w\big),$$

where p_{s,t}(x, y) = e^{−(y−x)²/2(t−s)}/\sqrt{2π(t−s)}, 0 ≤ s < t, x, y ∈ R, since V(0) = Re Z(0) = v ∈ supp ξ and W(0) = Im Z(0) = 0. Then, if we define the function

$$G_{s,t}(x, y) = \int_{\mathbb{R}} \xi(dv)\, p_{0,s}(v, x) \int_{\mathbb{R}} dw\, p_{0,t}(0, w)\, \Phi^{v}_{\xi}\big(y + \sqrt{-1}\, w\big) \tag{1.10}$$

for (x, y) ∈ R², (s, t) ∈ (0, T)², we obtain the expression for the density function

$$\rho_{\xi}(t, x) = G_{t,t}(x, x), \qquad x \in \mathbb{R}, \quad 0 < t < T.$$

In the following four sections, we give the proofs of Theorem 1.1, Corollary 1.2, Proposition 1.3, and Proposition 1.5, respectively.
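The fact E_v[Φ^{v_i}_ξ(Z_j(T))] = δ_{ij} used above can be checked by Monte Carlo for a three-point configuration; this is an illustrative sketch, and the sample size, seed, and tolerance are ours:

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([-1.0, 0.0, 1.5])     # supp xi
t, n_samp = 0.5, 200_000

def phi(i, z):
    """Phi^{v_i}_xi(z) of (1.3), evaluated elementwise on an array z."""
    ks = [k for k in range(len(v)) if k != i]
    return np.prod([(v[k] - z) / (v[k] - v[i]) for k in ks], axis=0)

# complex Brownian motion at time t, started at v_1
Z1 = v[0] + np.sqrt(t) * (rng.standard_normal(n_samp)
                          + 1j * rng.standard_normal(n_samp))
assert abs(phi(0, Z1).mean() - 1.0) < 0.05   # E[Phi^{v_1}(Z_1(t))] = 1
assert abs(phi(1, Z1).mean()) < 0.05         # E[Phi^{v_2}(Z_1(t))] = 0
```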

2  Proof of Theorem 1.1

It is sufficient for the proof of Theorem 1.1 to consider the case that F is given as F(Ξ(t)) = ∏_{i=1}^{M} g_i(X(t_i)) for M ∈ N, 0 < t_1 < · · · < t_M < T < ∞, with symmetric bounded measurable functions g_i on R^{ξ(R)}, 1 ≤ i ≤ M. We give the proof for the case with M = 2, i.e., for ξ = Σ_{i=1}^{ξ(R)} δ_{u_i}, u = (u_1, ..., u_{ξ(R)}) ∈ W^A_{ξ(R)},

$$\mathrm{E}_{\xi}\big[g_1(\mathbf{X}(t_1))\, g_2(\mathbf{X}(t_2))\big] = \mathrm{E}_{\mathbf{u}}\left[g_1(\mathbf{V}(t_1))\, g_2(\mathbf{V}(t_2)) \det_{1 \le i,j \le \xi(\mathbb{R})}\left[\Phi^{u_i}_{\xi}(Z_j(T))\right]\right]. \tag{2.1}$$

The generalization to M > 2 is straightforward by the Markov property of Brownian motion, as implied in the following proof. We use the fact that the Dyson model is obtained as the h-transform of the absorbing Brownian motion in the Weyl chamber W^A_{ξ(R)} [6]. Put τ = inf{t > 0 : V(t) ∉ W^A_{ξ(R)}}; then the LHS of (2.1) is given by

$$\mathrm{E}_{\mathbf{u}}\left[\mathbf{1}(\tau > t_2)\, g_1(\mathbf{V}(t_1))\, g_2(\mathbf{V}(t_2))\, \frac{h(\mathbf{V}(t_2))}{h(\mathbf{u})}\right] = \mathrm{E}_{\mathbf{u}}\left[\mathbf{1}(\tau > t_2)\, g_1(\mathbf{V}(t_1))\, g_2(\mathbf{V}(t_2))\, \frac{|h(\mathbf{V}(t_2))|}{h(\mathbf{u})}\right]. \tag{2.2}$$

For a finite set S, we write the collection of all permutations of elements in S as S(S). In particular, we express S(I_p) simply by S_p, p ∈ N. Then by the Karlin-McGregor formula [8], (2.2) is given by

$$\mathrm{E}_{\mathbf{u}}\left[\sum_{\sigma \in \mathcal{S}_{\xi(\mathbb{R})}} \operatorname{sgn}(\sigma)\, g_1(\mathbf{V}(t_1))\, \mathbf{1}_{\mathbb{W}^{\mathrm{A}}_{\xi(\mathbb{R})}}\big(\sigma(\mathbf{V}(t_1))\big)\, \mathrm{E}_{\sigma(\mathbf{V}(t_1))}\left[\mathbf{1}(\tau > t_2 - t_1)\, g_2(\mathbf{V}(t_2 - t_1))\, \frac{|h(\mathbf{V}(t_2 - t_1))|}{h(\mathbf{u})}\right]\right],$$

with σ(u) = (u_{σ(1)}, ..., u_{σ(ξ(R))}) for each permutation σ ∈ S_{ξ(R)}, in which all contributions from the paths with τ ≤ t_1 are canceled out by taking the signed sum Σ_{σ∈S_{ξ(R)}} sgn(σ) 1_{W^A_{ξ(R)}}(σ(V(t_1)))(·). Here the Markov property of Brownian motion has been used. The Karlin-McGregor formula is used again to realize the condition 1(τ > t_2 − t_1), and the above is written as

$$\mathrm{E}_{\mathbf{u}}\left[\sum_{\sigma \in \mathcal{S}_{\xi(\mathbb{R})}} \operatorname{sgn}(\sigma)\, g_1(\mathbf{V}(t_1))\, \mathbf{1}_{\mathbb{W}^{\mathrm{A}}_{\xi(\mathbb{R})}}\big(\sigma(\mathbf{V}(t_1))\big)\, \mathrm{E}_{\sigma(\mathbf{V}(t_1))}\left[\sum_{\sigma' \in \mathcal{S}_{\xi(\mathbb{R})}} \operatorname{sgn}(\sigma')\, g_2(\mathbf{V}(t_2 - t_1))\, \mathbf{1}_{\mathbb{W}^{\mathrm{A}}_{\xi(\mathbb{R})}}\big(\sigma'(\mathbf{V}(t_2 - t_1))\big)\, \frac{|h(\mathbf{V}(t_2 - t_1))|}{h(\mathbf{u})}\right]\right]$$
$$= \sum_{\sigma, \sigma' \in \mathcal{S}_{\xi(\mathbb{R})}} \mathrm{E}_{\mathbf{u}}\left[\operatorname{sgn}(\sigma)\, g_1(\mathbf{V}(t_1))\, \mathbf{1}_{\mathbb{W}^{\mathrm{A}}_{\xi(\mathbb{R})}}\big(\sigma(\mathbf{V}(t_1))\big)\, \operatorname{sgn}(\sigma')\, g_2(\mathbf{V}(t_2))\, \mathbf{1}_{\mathbb{W}^{\mathrm{A}}_{\xi(\mathbb{R})}}\big(\sigma' \circ \sigma^{-1}(\mathbf{V}(t_2))\big)\, \frac{|h(\mathbf{V}(t_2))|}{h(\mathbf{u})}\right]$$
$$= \mathrm{E}_{\mathbf{u}}\left[g_1(\mathbf{V}(t_1))\, g_2(\mathbf{V}(t_2))\, \frac{h(\mathbf{V}(t_2))}{h(\mathbf{u})}\right]. \tag{2.3}$$

Note that the average of g_1(V(t_1)) g_2(V(t_2)) with the positive weight |h(V(t_2))|/h(u) over the paths conditioned on τ > t_2 in (2.2) is now replaced in (2.3) by that with the signed weight h(V(t_2))/h(u) over all paths. Then we use the equality (1.6) in (2.3). Note that V_i(t), 1 ≤ i ≤ ξ(R), and W_i(t), 1 ≤ i ≤ ξ(R), are independent. Then we can regard the probability space (Ω, F, P_v) as a product of two probability spaces (Ω_1, F_1, P_1) and (Ω_2, F_2, P_2) such that V_i(t), 1 ≤ i ≤ ξ(R), are F_1-measurable and W_i(t), 1 ≤ i ≤ ξ(R), are F_2-measurable. We write E^α for the expectation with respect to P_α, α = 1, 2. Then

$$\mathrm{E}^{2}\big[h(\mathbf{Z}(t))\big] = \mathrm{E}^{2}\left[\det_{1 \le i,j \le \xi(\mathbb{R})}\left[(Z_j(t))^{i-1}\right]\right] = \det_{1 \le i,j \le \xi(\mathbb{R})}\left[\mathrm{E}^{2}\left[(Z_j(t))^{i-1}\right]\right],$$

where we have used the independence of Z_j(t), 1 ≤ j ≤ ξ(R), in the last equality. By the binomial expansion, E^2[(Z_j(t))^{i−1}] = G(V_j(t)) with

$$G(x) = \sum_{p=0}^{i-1} \binom{i-1}{p}\, \mathrm{E}^{2}\left[\big(\sqrt{-1}\, W_j(t)\big)^{i-1-p}\right] x^{p}.$$

Since G(x) is a monic polynomial of degree i − 1,

$$\mathrm{E}^{2}\big[h(\mathbf{Z}(t))\big] = h(\mathbf{V}(t)). \tag{2.4}$$
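Equation (2.4) — averaging the Vandermonde determinant over the imaginary parts returns the Vandermonde of the real parts — can be checked by Monte Carlo; the configuration, sample size, and tolerance below are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
V = np.array([-0.8, 0.3, 1.1])                      # fixed real parts V(t)
t, n_samp = 0.4, 100_000
W = np.sqrt(t) * rng.standard_normal((n_samp, 3))   # imaginary parts W(t)
Z = V + 1j * W                                      # samples of Z(t) given V(t)

def h(z):
    """Vandermonde product for three points, on the last axis."""
    return (z[..., 1] - z[..., 0]) * (z[..., 2] - z[..., 0]) * (z[..., 2] - z[..., 1])

# E^2[h(Z(t))] = h(V(t)): averaging over W recovers the real Vandermonde
assert abs(h(Z).mean() - h(V)) < 0.1
```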

Combining (1.6), (2.3), (2.4), and the martingale property (1.5), we have (2.1).

For the proof of (1.8) with M = 2, we first prove that for any N_1, N_2 ∈ N

$$\mathrm{E}_{\xi}\left[\sum_{\substack{J_1, J_2 \subset I_{\xi(\mathbb{R})} \\ \sharp J_1 = N_1,\, \sharp J_2 = N_2}} \prod_{m=1}^{2} \prod_{j_m \in J_m} \chi_{t_m}(X_{j_m}(t_m))\right] = \sum_{p=\max\{N_1, N_2\}}^{N_1 + N_2} \sum_{\substack{J_1, J_2 \subset I_p \\ \sharp J_1 = N_1,\, \sharp J_2 = N_2}} \mathbf{1}(J_1 \cup J_2 = I_p) \int_{\mathbb{W}^{\mathrm{A}}_p} \xi^{\otimes p}(d\mathbf{v})\, \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{2} \prod_{j_m \in J_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in I_p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]. \tag{2.5}$$

Applying (2.1) with g_m(x) = Σ_{J_m ⊂ I_{ξ(R)}, ♯J_m = N_m} ∏_{j_m ∈ J_m} χ_{t_m}(x_{j_m}), m = 1, 2, we see that the LHS of (2.5) equals

$$\sum_{\substack{J_1, J_2 \subset I_{\xi(\mathbb{R})} \\ \sharp J_1 = N_1,\, \sharp J_2 = N_2}} \mathrm{E}_{\mathbf{u}}\left[\prod_{m=1}^{2} \prod_{j_m \in J_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in I_{\xi(\mathbb{R})}}\left[\Phi^{u_i}_{\xi}(Z_j(T))\right]\right] = \sum_{p=\max\{N_1, N_2\}}^{N_1 + N_2} \sum_{\substack{J_1, J_2 \subset I_{\xi(\mathbb{R})},\, \sharp(J_1 \cup J_2) = p \\ \sharp J_1 = N_1,\, \sharp J_2 = N_2}} \mathrm{E}_{\mathbf{u}}\left[\prod_{m=1}^{2} \prod_{j_m \in J_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in J_1 \cup J_2}\left[\Phi^{u_i}_{\xi}(Z_j(T))\right]\right],$$

where we have used the fact (1.5) with Φ^{v_i}_ξ(Z_j(0)) = δ_{ij}. Then the RHS of the last equation coincides with the RHS of (2.5). By using the relation

$$\prod_{m=1}^{2} \prod_{j_m=1}^{\xi(\mathbb{R})} \exp f_{t_m}(x_{j_m}) = \prod_{m=1}^{2} \prod_{j_m=1}^{\xi(\mathbb{R})} \left\{\chi_{t_m}(x_{j_m}) + 1\right\} = \sum_{J_1, J_2 \subset I_{\xi(\mathbb{R})}} \prod_{m=1}^{2} \prod_{j_m \in J_m} \chi_{t_m}(x_{j_m}),$$

the equality (1.8) with M = 2 is readily derived from (2.5). By a similar argument, (1.8) is concluded from (1.7) for any M > 2.
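The expansion ∏(χ + 1) = Σ_{J} ∏_{j∈J} χ_j used in the last step can be verified directly for a small vector of values (the sample numbers are ours):

```python
from itertools import combinations
import numpy as np

chi = np.array([0.3, -0.7, 1.2, 0.05])   # sample values chi_t(x_j)
n = len(chi)
lhs = np.prod(chi + 1.0)
rhs = sum(np.prod(chi[list(J)])           # empty product J = {} contributes 1
          for r in range(n + 1) for J in combinations(range(n), r))
assert np.isclose(lhs, rhs)
```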

3  Proof of Corollary 1.2

Since the Fredholm determinant (1.12) is explicitly given by (1.13) with (1.14), (1.8) in Theorem 1.1 implies that for the proof of Corollary 1.2 it is enough to show that the following equality holds for any M ∈ N, (N_1, ..., N_M) ∈ N^M:

$$\int_{\prod_{m=1}^{M} \mathbb{W}^{\mathrm{A}}_{N_m}} \prod_{m=1}^{M} d\mathbf{x}^{(m)}_{N_m} \prod_{m=1}^{M} \prod_{i=1}^{N_m} \chi_{t_m}\big(x^{(m)}_i\big) \det_{\substack{1 \le i \le N_m,\, 1 \le j \le N_n \\ 1 \le m, n \le M}}\left[\mathbb{K}_{\xi}\big(t_m, x^{(m)}_i;\, t_n, x^{(n)}_j\big)\right] = \sum_{p=\max_m\{N_m\}}^{N} \sum_{\substack{\sharp J_m = N_m \\ 1 \le m \le M}} \mathbf{1}\!\left(\bigcup_{m=1}^{M} J_m = I_p\right) \int_{\mathbb{W}^{\mathrm{A}}_p} \xi^{\otimes p}(d\mathbf{v})\, \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in J_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in I_p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]. \tag{3.1}$$

If we take the summation of (3.1) over all 0 ≤ N_m ≤ ξ(R), 1 ≤ m ≤ M, the LHS gives (1.13) with (1.14) and the RHS gives (1.8). In this section we will prove (3.1), so in the following we fix M ∈ N, (N_1, ..., N_M) ∈ N^M.

Let I^(1) = I_{N_1} and I^(m) = I_{Σ_{k=1}^m N_k} \ I_{Σ_{k=1}^{m−1} N_k}, 2 ≤ m ≤ M. Put N = Σ_{m=1}^M N_m and τ_i = Σ_{m=1}^M t_m 1(i ∈ I^(m)), 1 ≤ i ≤ N. Then the integrand in the LHS of (3.1) is simply written as ∏_{i=1}^N χ_{τ_i}(x_i) det_{1≤i,j≤N}[K_ξ(τ_i, x_i; τ_j, x_j)], and the integral ∫_{∏_{m=1}^M W^A_{N_m}} ∏_{m=1}^M dx^{(m)}_{N_m}(·) can be replaced by {∏_{m=1}^M N_m!}^{−1} ∫_{R^N} dx(·). The determinant is defined using the notion of permutations, and we note that any permutation σ ∈ S_N can be decomposed into a product of exclusive cyclic permutations. Let the number of cycles in the decomposition be ℓ(σ) and express σ by σ = c_1 c_2 · · · c_{ℓ(σ)}, where c_λ denotes a cyclic permutation c_λ = (c_λ(1) c_λ(2) · · · c_λ(q_λ)), 1 ≤ q_λ ≤ N, 1 ≤ λ ≤ ℓ(σ). For each 1 ≤ λ ≤ ℓ(σ), we write the set of entries {c_λ(i)}_{i=1}^{q_λ} of c_λ simply as {c_λ}, in which the periodicity c_λ(i + q_λ) = c_λ(i), 1 ≤ i ≤ q_λ, is assumed. By definition, for each 1 ≤ λ ≤ ℓ(σ), the c_λ(i), 1 ≤ i ≤ q_λ, are distinct indices chosen from I_N, {c_λ} ∩ {c_λ'} = ∅ for 1 ≤ λ ≠ λ' ≤ ℓ(σ), and Σ_{λ=1}^{ℓ(σ)} q_λ = N. The determinant det_{1≤i,j≤N}[K_ξ(τ_i, x_i; τ_j, x_j)] is written as

$$\sum_{\sigma \in \mathcal{S}_N} (-1)^{N - \ell(\sigma)} \prod_{\lambda=1}^{\ell(\sigma)} \prod_{i=1}^{q_\lambda} \mathbb{K}_{\xi}\big(\tau_{c_\lambda(i)}, x_{c_\lambda(i)};\, \tau_{c_\lambda(i+1)}, x_{c_\lambda(i+1)}\big) = \sum_{\sigma \in \mathcal{S}_N} (-1)^{N - \ell(\sigma)} \prod_{\lambda=1}^{\ell(\sigma)} \prod_{i=1}^{q_\lambda} \left\{G_{\tau_{c_\lambda(i)}, \tau_{c_\lambda(i+1)}}\big(x_{c_\lambda(i)}, x_{c_\lambda(i+1)}\big) - \mathbf{1}\big(\tau_{c_\lambda(i)} > \tau_{c_\lambda(i+1)}\big)\, p_{\tau_{c_\lambda(i+1)}, \tau_{c_\lambda(i)}}\big(x_{c_\lambda(i+1)}, x_{c_\lambda(i)}\big)\right\}, \tag{3.2}$$
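The sign convention used in (3.2), sgn(σ) = (−1)^{N−ℓ(σ)} with ℓ(σ) the number of cycles, can be verified exhaustively for small N (the helper names below are ours):

```python
from itertools import permutations

def cycles(sigma):
    """Decompose sigma (tuple of images of 0..N-1) into disjoint cycles."""
    seen, out = set(), []
    for s in range(len(sigma)):
        if s in seen:
            continue
        c, j = [], s
        while j not in seen:
            seen.add(j)
            c.append(j)
            j = sigma[j]
        out.append(c)
    return out

def sign(sigma):
    """sgn via the inversion count, independent of the cycle structure."""
    n = len(sigma)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if sigma[i] > sigma[j])
    return (-1) ** inv

N = 4
for sigma in permutations(range(N)):
    assert sign(sigma) == (-1) ** (N - len(cycles(sigma)))
```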

where the definition (1.11) of the correlation kernel K_ξ is used. In order to express binomial expansions for (3.2), we introduce the following notations. For each cyclic permutation c_λ, we consider a subset of {c_λ}, C(c_λ) = {c_λ(i) ∈ {c_λ} : τ_{c_λ(i)} > τ_{c_λ(i+1)}}. Choose M_λ such that {c_λ} \ C(c_λ) ⊂ M_λ ⊂ {c_λ}, and define M^c_λ = {c_λ} \ M_λ. Therefore, if we put

$$\mathcal{G}(c_\lambda, \mathrm{M}_\lambda) = \int_{\mathbb{R}^{\{c_\lambda\}}} \prod_{i=1}^{q_\lambda} dx_{c_\lambda(i)}\, \chi_{\tau_{c_\lambda(i)}}(x_{c_\lambda(i)}) \left\{p_{\tau_{c_\lambda(i+1)}, \tau_{c_\lambda(i)}}\big(x_{c_\lambda(i+1)}, x_{c_\lambda(i)}\big)\right\}^{\mathbf{1}(c_\lambda(i) \in \mathrm{M}^{\mathrm{c}}_\lambda)} \left\{G_{\tau_{c_\lambda(i)}, \tau_{c_\lambda(i+1)}}\big(x_{c_\lambda(i)}, x_{c_\lambda(i+1)}\big)\right\}^{\mathbf{1}(c_\lambda(i) \in \mathrm{M}_\lambda)}, \tag{3.3}$$

the LHS of (3.1) is expanded as

$$\frac{1}{\prod_{m=1}^{M} N_m!} \sum_{\sigma \in \mathcal{S}_N} (-1)^{N - \ell(\sigma)} \prod_{\lambda=1}^{\ell(\sigma)} \sum_{\{c_\lambda\} \setminus C(c_\lambda) \subset \mathrm{M}_\lambda \subset \{c_\lambda\}} (-1)^{\sharp \mathrm{M}^{\mathrm{c}}_\lambda}\, \mathcal{G}(c_\lambda, \mathrm{M}_\lambda). \tag{3.4}$$

From now on, we will explain how to rewrite G(c_λ, M_λ) up to (3.8). We note that if we set

$$F\big(\{x_{c_\lambda(j)} : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda\}\big) = \int_{\mathbb{R}^{\mathrm{M}_\lambda}} \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \left\{dx_{c_\lambda(i)}\, \chi_{\tau_{c_\lambda(i)}}(x_{c_\lambda(i)})\, G_{\tau_{c_\lambda(i)}, \tau_{c_\lambda(i+1)}}\big(x_{c_\lambda(i)}, x_{c_\lambda(i+1)}\big)\right\} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda} p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(x_{c_\lambda(j+1)}, x_{c_\lambda(j)}\big), \tag{3.5}$$

which is the integral over R^{M_λ}, then (3.3) is obtained by performing the integral of it over R^{M^c_λ} = R^{{c_λ}} \ R^{M_λ}:

$$\mathcal{G}(c_\lambda, \mathrm{M}_\lambda) = \int_{\mathbb{R}^{\mathrm{M}^{\mathrm{c}}_\lambda}} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda} \left\{dx_{c_\lambda(j)}\, \chi_{\tau_{c_\lambda(j)}}(x_{c_\lambda(j)})\right\} F\big(\{x_{c_\lambda(j)} : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda\}\big). \tag{3.6}$$

In (3.5), use the integral representation (1.10) for G_{τ_{c_λ(i)}, τ_{c_λ(i+1)}}(x_{c_λ(i)}, x_{c_λ(i+1)}), putting the integral variables to be v = v_{c_λ(i)} and w = w_{c_λ(i+1)}. We obtain

$$F\big(\{x_{c_\lambda(j)} : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda\}\big) = \int \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)}) \int \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \left\{dx_{c_\lambda(i)}\, p_{0, \tau_{c_\lambda(i)}}\big(v_{c_\lambda(i)}, x_{c_\lambda(i)}\big)\, \chi_{\tau_{c_\lambda(i)}}(x_{c_\lambda(i)})\right\} \int \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \left\{dw_{c_\lambda(i+1)}\, p_{0, \tau_{c_\lambda(i+1)}}\big(0, w_{c_\lambda(i+1)}\big)\, \Phi^{v_{c_\lambda(i)}}_{\xi}\big(x_{c_\lambda(i+1)} + \sqrt{-1}\, w_{c_\lambda(i+1)}\big)\right\} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda} p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(x_{c_\lambda(j+1)}, x_{c_\lambda(j)}\big)$$

$$= \int \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)})\, \mathrm{E}_{\mathbf{v}}\Bigg[\prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \chi_{\tau_{c_\lambda(i)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(i)})\big) \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \left\{\Phi^{v_{c_\lambda(i)}}_{\xi}\big(Z_{c_\lambda(i+1)}(\tau_{c_\lambda(i+1)})\big)\right\}^{\mathbf{1}(c_\lambda(i+1) \in \mathrm{M}_\lambda)} \left\{\Phi^{v_{c_\lambda(i)}}_{\xi}\big(x_{c_\lambda(i+1)} + \sqrt{-1}\, W_{c_\lambda(i+1)}(\tau_{c_\lambda(i+1)})\big)\right\}^{\mathbf{1}(c_\lambda(i+1) \in \mathrm{M}^{\mathrm{c}}_\lambda)} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda} \left\{p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(V_{c_\lambda(j+1)}(\tau_{c_\lambda(j+1)}), x_{c_\lambda(j)}\big)\right\}^{\mathbf{1}(c_\lambda(j+1) \in \mathrm{M}_\lambda)} \left\{p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(x_{c_\lambda(j+1)}, x_{c_\lambda(j)}\big)\right\}^{\mathbf{1}(c_\lambda(j+1) \in \mathrm{M}^{\mathrm{c}}_\lambda)}\Bigg].$$

Using Fubini's theorem, (3.6) is given by

$$\int \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)})\, \mathrm{E}_{\mathbf{v}}\Bigg[\prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \chi_{\tau_{c_\lambda(i)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(i)})\big) \prod_{i : c_\lambda(i),\, c_\lambda(i+1) \in \mathrm{M}_\lambda} \Phi^{v_{c_\lambda(i)}}_{\xi}\big(Z_{c_\lambda(i+1)}(\tau_{c_\lambda(i+1)})\big) \int_{\mathbb{R}^{\mathrm{M}^{\mathrm{c}}_\lambda}} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda} \left\{dx_{c_\lambda(j)}\, \chi_{\tau_{c_\lambda(j)}}(x_{c_\lambda(j)})\right\} \prod_{j : c_\lambda(j) \in \mathrm{M}^{\mathrm{c}}_\lambda,\, c_\lambda(j+1) \in \mathrm{M}_\lambda} p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(V_{c_\lambda(j+1)}(\tau_{c_\lambda(j+1)}), x_{c_\lambda(j)}\big) \prod_{j : c_\lambda(j),\, c_\lambda(j+1) \in \mathrm{M}^{\mathrm{c}}_\lambda} p_{\tau_{c_\lambda(j+1)}, \tau_{c_\lambda(j)}}\big(x_{c_\lambda(j+1)}, x_{c_\lambda(j)}\big) \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda,\, c_\lambda(i+1) \in \mathrm{M}^{\mathrm{c}}_\lambda} \Phi^{v_{c_\lambda(i)}}_{\xi}\big(x_{c_\lambda(i+1)} + \sqrt{-1}\, W_{c_\lambda(i+1)}(\tau_{c_\lambda(i+1)})\big)\Bigg]. \tag{3.7}$$

For each 1 ≤ i ≤ q_λ with c_λ(i) ∈ M_λ, we define $\overline{i} = \min\{j > i : c_\lambda(j) \in \mathrm{M}_\lambda\}$ and $\underline{i} = \max\{j < i : c_\lambda(j) \in \mathrm{M}_\lambda\}$. Then we perform the integration over the x_{c_λ(j)}'s for c_λ(j) ∈ M^c_λ before taking the expectation E_v; that is, the integrals over the x_{c_λ(j)}'s with indices in the intervals $\underline{i} < j < i$ for all i such that c_λ(i) ∈ M_λ are done. For each i such that c_λ(i) ∈ M_λ, if $\underline{i} < i - 1$,

$$\chi_{\tau_{c_\lambda(i)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(i)})\big) \prod_{j=\underline{i}+1}^{i-1} \int_{\mathbb{R}} dx_{c_\lambda(j)}\, \chi_{\tau_{c_\lambda(j)}}(x_{c_\lambda(j)})\; p_{\tau_{c_\lambda(i)}, \tau_{c_\lambda(i-1)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(i)}), x_{c_\lambda(i-1)}\big) \prod_{k=\underline{i}+2}^{i-1} p_{\tau_{c_\lambda(k)}, \tau_{c_\lambda(k-1)}}\big(x_{c_\lambda(k)}, x_{c_\lambda(k-1)}\big)\, \Phi^{v_{c_\lambda(\underline{i})}}_{\xi}\big(x_{c_\lambda(\underline{i}+1)} + \sqrt{-1}\, W_{c_\lambda(\underline{i}+1)}(\tau_{c_\lambda(\underline{i}+1)})\big)$$

coincides with the conditional expectation of

$$\prod_{j=\underline{i}+1}^{i} \chi_{\tau_{c_\lambda(j)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(j)})\big)\; \Phi^{v_{c_\lambda(\underline{i})}}_{\xi}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)}) + \sqrt{-1}\, W_{c_\lambda(\underline{i}+1)}(\tau_{c_\lambda(\underline{i}+1)})\big)$$

with respect to $\mathrm{E}_{\mathbf{v}}[\,\cdot\, |\, V_{c_\lambda(i)}, W_{c_\lambda(\underline{i}+1)}]$. Since W_i(·), i ∈ {c_λ}, are i.i.d. processes which are independent of V_i(·), i ∈ {c_λ}, the variable $V_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)}) + \sqrt{-1}\, W_{c_\lambda(\underline{i}+1)}(\tau_{c_\lambda(\underline{i}+1)})$ has the same distribution as $V_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)}) + \sqrt{-1}\, W_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)}) = Z_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)})$. Since $\prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \Phi^{v_{c_\lambda(\underline{i})}}_{\xi}\big(Z_{c_\lambda(i)}(\tau_{c_\lambda(\underline{i}+1)})\big) = \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \Phi^{v_{c_\lambda(i)}}_{\xi}\big(Z_{c_\lambda(\overline{i})}(\tau_{c_\lambda(i+1)})\big)$, (3.7) is equal to

$$\int_{\mathbb{R}^{\mathrm{M}_\lambda}} \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \left\{\prod_{j=\underline{i}+1}^{i} \chi_{\tau_{c_\lambda(j)}}\big(V_{c_\lambda(i)}(\tau_{c_\lambda(j)})\big)\right\} \Phi^{v_{c_\lambda(i)}}_{\xi}\big(Z_{c_\lambda(\overline{i})}(\tau_{c_\lambda(i+1)})\big)\right].$$

Using only the entries of M_λ, we can define a subcycle ĉ_λ of c_λ uniquely as follows. Since c_λ is a cyclic permutation, q̂_λ ≡ ♯M_λ ≥ 1. Let i_1 = min{1 ≤ i ≤ q_λ : c_λ(i) ∈ M_λ}. If q̂_λ ≥ 2, define $i_{j+1} = \overline{i_j}$, 1 ≤ j ≤ q̂_λ − 1. Then ĉ_λ = (ĉ_λ(1) ĉ_λ(2) · · · ĉ_λ(q̂_λ)) ≡ (c_λ(i_1) c_λ(i_2) · · · c_λ(i_{q̂_λ})). Moreover, we decompose the set M_λ into M subsets, M_λ = ∪_{m=1}^M J^λ_m, by letting

$$\mathrm{J}^\lambda_m = \mathrm{J}^\lambda_m(c_\lambda, \mathrm{M}_\lambda) = \left\{c_\lambda(i) \in \mathrm{M}_\lambda : \exists\, j,\; i < j \le \overline{i},\; \text{s.t.}\; c_\lambda(j) \in \mathrm{I}^{(m)}\right\}, \qquad 1 \le m \le M.$$

Note that by definition J^λ_m ∩ J^λ_{m'} ≠ ∅, m ≠ m', in general, and that J^λ_1 = I_{N_1} ∩ M_λ = I_{N_1} ∩ {c_λ}; for 2 ≤ m ≤ M, J^λ_m ⊂ I_{Σ_{k=1}^m N_k} and J^λ_m ∩ I^(k) ⊂ J^λ_k for 1 ≤ k < m ≤ M. Finally we arrive at the following expression of G(c_λ, M_λ):

$$\int_{\mathbb{R}^{\mathrm{M}_\lambda}} \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in \mathrm{J}^\lambda_m} \chi_{t_m}(V_{j_m}(t_m)) \prod_{i=1}^{\hat{q}_\lambda} \Phi^{v_{\hat{c}_\lambda(i)}}_{\xi}\big(Z_{\hat{c}_\lambda(i+1)}(T)\big)\right], \tag{3.8}$$

where the martingale property (1.5) is used. Let M ≡ ∪_{λ=1}^{ℓ(σ)} M_λ, so that M_λ = M ∩ {c_λ}. Since N − Σ_{λ=1}^{ℓ(σ)} ♯M^c_λ = ♯M, the LHS of (3.1), which is written as (3.4), now becomes

$$\frac{1}{\prod_{m=1}^{M} N_m!} \sum_{\sigma \in \mathcal{S}_N} \sum_{I_N \setminus \bigcup_{\lambda=1}^{\ell(\sigma)} C(c_\lambda) \subset \mathrm{M} \subset I_N} (-1)^{\sharp \mathrm{M} - \ell(\sigma)} \int_{\mathbb{R}^{\mathrm{M}}} \prod_{\lambda=1}^{\ell(\sigma)} \prod_{i : c_\lambda(i) \in \mathrm{M}_\lambda} \xi(dv_{c_\lambda(i)})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{\lambda=1}^{\ell(\sigma)} \left\{\prod_{m=1}^{M} \prod_{j_m \in \mathrm{J}^\lambda_m} \chi_{t_m}(V_{j_m}(t_m)) \prod_{i=1}^{\hat{q}_\lambda} \Phi^{v_{\hat{c}_\lambda(i)}}_{\xi}\big(Z_{\hat{c}_\lambda(i+1)}(T)\big)\right\}\right]. \tag{3.9}$$

We define σ̂ ≡ ĉ_1 ĉ_2 · · · ĉ_{ℓ(σ)} and J_m ≡ ∪_{λ=1}^{ℓ(σ)} J^λ_m, 1 ≤ m ≤ M. Note that ℓ(σ̂) = ℓ(σ). The obtained (J_m)_{m=1}^M 's form a collection of series of index sets satisfying the following conditions, which we write as J({N_m}_{m=1}^M):

(C.J)  J_1 = I_{N_1}; for 2 ≤ m ≤ M, J_m ⊂ I_{Σ_{k=1}^m N_k} and J_m ∩ I^(k) ⊂ J_k for 1 ≤ k < m; and ♯J_m = N_m for 1 ≤ m ≤ M.

For each (J_m)_{m=1}^M ∈ J({N_m}_{m=1}^M), we put A_1 = 0 and A_m = ♯(J_m ∩ I_{Σ_{k=1}^{m−1} N_k}) = ♯(J_m ∩ ∪_{k=1}^{m−1} J_k), 2 ≤ m ≤ M. Then, if we put M = ∪_{m=1}^M J_m, ♯M = Σ_{m=1}^M (N_m − A_m), which means that from the original index set I_N = ∪_{m=1}^M I^(m) with ♯I^(m) = N_m, 1 ≤ m ≤ M, we obtain a subset M by eliminating A_m elements at each level 1 ≤ m ≤ M. By this reduction, we obtain σ̂ ∈ S(M) from σ ∈ S_N. It implies that, for all σ̂ ∈ S(M), the number of σ's in S_N which give the same σ̂ and (J_m)_{m=1}^M by this reduction is given by ∏_{m=1}^M A_m!, where 0! ≡ 1. Then (3.9) is equal to

$$\sum_{\mathrm{M} : \max_m\{N_m\} \le \sharp \mathrm{M} \le N}\; \sum_{(\mathrm{J}_m)_{m=1}^M \subset \mathcal{J}(\{N_m\}_{m=1}^M)} \mathbf{1}\!\left(\bigcup_{m=1}^{M} \mathrm{J}_m = \mathrm{M}\right) \frac{\prod_{m=1}^{M} A_m!}{\prod_{m=1}^{M} N_m!} \sum_{\hat{\sigma} \in \mathcal{S}(\mathrm{M})} (-1)^{\sharp \mathrm{M} - \ell(\hat{\sigma})}\, \sharp \mathrm{M}! \int_{\mathbb{W}^{\mathrm{A}}_{\sharp \mathrm{M}}} \xi^{\otimes \mathrm{M}}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in \mathrm{J}_m} \chi_{t_m}(V_{j_m}(t_m)) \prod_{\lambda=1}^{\ell(\hat{\sigma})} \prod_{i=1}^{\hat{q}_\lambda} \Phi^{v_{\hat{c}_\lambda(i)}}_{\xi}\big(Z_{\hat{c}_\lambda(i+1)}(T)\big)\right]$$

$$= \sum_{\mathrm{M} : \max_m\{N_m\} \le \sharp \mathrm{M} \le N}\; \sum_{(\mathrm{J}_m)_{m=1}^M \subset \mathcal{J}(\{N_m\}_{m=1}^M)} \mathbf{1}\!\left(\bigcup_{m=1}^{M} \mathrm{J}_m = \mathrm{M}\right) \sharp \mathrm{M}! \prod_{m=1}^{M} \frac{A_m!}{N_m!} \int_{\mathbb{W}^{\mathrm{A}}_{\sharp \mathrm{M}}} \xi^{\otimes \mathrm{M}}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in \mathrm{J}_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in \mathrm{M}}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]. \tag{3.10}$$

Assume max_m{N_m} ≤ p ≤ N, 0 ≤ A_m ≤ N_m, 2 ≤ m ≤ M, and set A_1 = 0. Consider

$$\Lambda_1 = \left\{(\mathrm{J}_m)_{m=1}^M \subset \mathcal{J}(\{N_m\}_{m=1}^M) : \sharp\Big(\bigcup_{m=1}^{M} \mathrm{J}_m\Big) = p,\; \sharp\Big(\mathrm{J}_m \cap \bigcup_{k=1}^{m-1} \mathrm{J}_k\Big) = A_m,\; 2 \le m \le M\right\},$$

$$\Lambda_2 = \left\{(\mathrm{J}_m)_{m=1}^M : \sharp \mathrm{J}_m = N_m,\; 1 \le m \le M,\; \bigcup_{m=1}^{M} \mathrm{J}_m = I_p,\; \sharp\Big(\mathrm{J}_m \cap \bigcup_{k=1}^{m-1} \mathrm{J}_k\Big) = A_m,\; 2 \le m \le M\right\}.$$

Since the CBMs are i.i.d. under P_v, the integral in (3.10) has the same value for all (J_m)_{m=1}^M ∈ Λ_1 with ∪_{m=1}^M J_m = M, and it is also equal to

$$\int_{\mathbb{W}^{\mathrm{A}}_p} \xi^{\otimes p}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{m=1}^{M} \prod_{j_m \in \mathrm{J}_m} \chi_{t_m}(V_{j_m}(t_m)) \det_{i,j \in I_p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]$$

for all (J_m)_{m=1}^M ∈ Λ_2. In Λ_1, for each 2 ≤ m ≤ M, A_m elements in J_m are chosen from ∪_{k=1}^{m−1} J_k, in which ♯(∪_{k=1}^{m−1} J_k) = Σ_{k=1}^{m−1}(N_k − A_k), and the remaining N_m − A_m elements in J_m are from I^(m) with ♯I^(m) = N_m. Then

$$\sharp \Lambda_1 = \prod_{m=2}^{M} \binom{\sum_{k=1}^{m-1}(N_k - A_k)}{A_m} \binom{N_m}{N_m - A_m}.$$

In Λ_2, on the other hand, the N_1 elements in J_1 are chosen from I_p, and then, for each 2 ≤ m ≤ M, A_m elements in J_m are chosen from ∪_{k=1}^{m−1} J_k with ♯(∪_{k=1}^{m−1} J_k) = Σ_{k=1}^{m−1}(N_k − A_k), and the remaining N_m − A_m elements in J_m are from I_p \ ∪_{k=1}^{m−1} J_k with ♯(I_p \ ∪_{k=1}^{m−1} J_k) = p − Σ_{k=1}^{m−1}(N_k − A_k). Then

$$\sharp \Lambda_2 = \binom{p}{N_1} \prod_{m=2}^{M} \binom{\sum_{k=1}^{m-1}(N_k - A_k)}{A_m} \binom{p - \sum_{k=1}^{m-1}(N_k - A_k)}{N_m - A_m}.$$

Since Σ_{m=1}^M (N_m − A_m) = p, we see that ♯Λ_2/♯Λ_1 = p! ∏_{m=1}^M A_m!/N_m!. Then (3.10) is equal to the RHS of (3.1), and the proof is completed.
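The counting identity ♯Λ_2/♯Λ_1 = p! ∏_m A_m!/N_m! can be spot-checked against the two binomial formulas with concrete sizes (the numbers below are ours):

```python
from math import comb, factorial, prod

N = [3, 2, 2]          # (N_1, N_2, N_3)
A = [0, 1, 1]          # A_1 = 0 by definition
p = sum(n - a for n, a in zip(N, A))                 # p = sum_m (N_m - A_m)

S = lambda m: sum(N[k] - A[k] for k in range(m))     # sum_{k<m+1} (N_k - A_k)

L1 = prod(comb(S(m), A[m]) * comb(N[m], N[m] - A[m])
          for m in range(1, len(N)))
L2 = comb(p, N[0]) * prod(comb(S(m), A[m]) * comb(p - S(m), N[m] - A[m])
                          for m in range(1, len(N)))
# sharp(Lambda_2)/sharp(Lambda_1) = p! prod_m A_m!/N_m!, checked in integers
assert (L2 * prod(factorial(n) for n in N)
        == L1 * factorial(p) * prod(factorial(a) for a in A))
```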

4  Proof of Proposition 1.3

Suppose that the initial configuration ξ ∈ M_0 satisfies the conditions (C.1) and (C.2) with constants C_0, C_1, C_2 and indices α, β. Since Theorem 1.1 can be generalized to the case with an infinite number of particles, we see that the LHS of (1.20) is given by

$$\sum_{p=1}^{4} \int_{\mathbb{R}^p} \xi^{\otimes p}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[F_p\Big(\big\{\varphi(V_i(t)) - \varphi(V_i(s))\big\}_{i \in I_p}\Big) \det_{1 \le i,j \le p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]$$

with F_1(x_1) = x_1^4, F_2(x_1, x_2) = 3x_1^2 x_2^2 + 2x_1 x_2 (x_1 + x_2)^2, F_3(x_1, x_2, x_3) = 2x_1 x_2 x_3 (x_1 + x_2 + x_3), and F_4(x_1, x_2, x_3, x_4) = x_1 x_2 x_3 x_4. Then Proposition 1.3 is concluded from the following estimate.

Lemma 4.1  Let {a_i}_{i=1}^p be a sequence of positive integers with length p ∈ N. Then for any T > 0 and ϕ ∈ C_0^∞(R) there exists a positive constant C = C(C_0, C_1, C_2, α, β, T, ϕ), independent of s and t, such that

$$\left|\int_{\mathbb{R}^p} \xi^{\otimes p}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{i=1}^{p} \big(\varphi(V_i(t)) - \varphi(V_i(s))\big)^{a_i} \det_{1 \le i,j \le p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right]\right| \le C\, |t - s|^{\sum_{i=1}^{p} a_i/2}, \qquad s, t \in [0, T]. \tag{4.1}$$

Proof.  Choose L ∈ N so that supp ϕ ⊂ [−L, L], and put 1_L(x, y) = 0 if |x| > L and |y| > L, and 1_L(x, y) = 1 otherwise. By the Schwarz inequality the LHS of (4.1) is bounded from above by

$$\int_{\mathbb{R}^p} \xi^{\otimes p}(d\mathbf{v})\; \mathrm{E}_{\mathbf{v}}\left[\prod_{i=1}^{p} \big(\varphi(V_i(t)) - \varphi(V_i(s))\big)^{2a_i}\right]^{1/2} \mathrm{E}_{\mathbf{v}}\left[\prod_{i=1}^{p} \mathbf{1}_L\big(V_i(s), V_i(t)\big)\right]^{1/4} \mathrm{E}_{\mathbf{v}}\left[\det_{1 \le i,j \le p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]^4\right]^{1/4}.$$

Since V_i(t), i ∈ N, are independent Brownian motions, E_v[∏_{i=1}^p (ϕ(V_i(t)) − ϕ(V_i(s)))^{2a_i}] ≤ c_1 |t − s|^{Σ_{i=1}^p a_i} and E_v[∏_{i=1}^p 1_L(V_i(s), V_i(t))] ≤ c_2 e^{−c_2' Σ_{i=1}^p |v_i|^2}, s, t ∈ [0, T], for some c_1, c_2, c_2', which are independent of s and t. And from the estimate (1.19) we have

$$\mathrm{E}_{\mathbf{v}}\left[\det_{1 \le i,j \le p}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]^4\right]^{1/4} \le c_3 \exp\left(c_3' \sum_{i=1}^{p} |v_i|^{\theta}\right)$$

for some c_3, c_3', which are also independent of s and t, and θ ∈ (max{α, 2 − β}, 2). Combining the above estimates with the fact that ∫_R ξ(dv) e^{c|v|^θ − c'|v|^2} < ∞ for any c, c' > 0, which is derived from the condition (C.2)(i) and the fact θ < 2, we obtain the lemma. ∎
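The moment bound behind Lemma 4.1 — for a Brownian increment, E[(B(t) − B(s))^{2a}] = (2a − 1)!! (t − s)^a — can be seen by Monte Carlo; the parameters, seed, and tolerance below are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
s, t, a = 0.2, 0.7, 2
inc = np.sqrt(t - s) * rng.standard_normal(500_000)   # samples of B(t) - B(s)
emp = np.mean(inc ** (2 * a))
exact = 3.0 * (t - s) ** a      # (2a - 1)!! = 3 for a = 2
assert abs(emp - exact) / exact < 0.05
```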

5  Proof of Proposition 1.5

Let τ = inf{t > 0 : Ξ(t) ∉ M_0} and τ_{ij} = inf{t > 0 : V_i(t) = V_j(t)}. From the formula (1.7), for any ξ ∈ M_0 with ξ(R) ∈ N,

$$\mathrm{P}_{\xi}\big[\tau \le T\big] \le \mathrm{E}_{\mathbf{u}}\left[\sum_{1 \le i < j \le \xi(\mathbb{R})} \mathbf{1}(\tau_{ij} \le T) \det_{1 \le k,l \le \xi(\mathbb{R})}\left[\Phi^{u_k}_{\xi}(Z_l(T))\right]\right] = \sum_{1 \le i < j \le \xi(\mathbb{R})} \mathrm{E}_{\mathbf{u}}\left[\mathbf{1}(\tau_{ij} \le T) \det_{1 \le k,l \le \xi(\mathbb{R})}\left[\Phi^{u_k}_{\xi}(Z_l(T))\right]\right]. \tag{5.1}$$

For any a > 0, the convergence lim_{L→∞} Φ^a_{ξ∩[−L,L]}(Z_1(t)) = Φ^a_ξ(Z_1(t)) in L^k(Ω, P_v) holds for any k ∈ N. Hence the inequality (5.1) holds for ξ ∈ M_0 with the conditions (C.1) and (C.2). By the strong Markov property of the CBMs, for v ∈ W^A_2,

$$\mathrm{E}_{\mathbf{v}}\left[\mathbf{1}(\tau_{12} \le T) \det_{1 \le i,j \le 2}\left[\Phi^{v_i}_{\xi}(Z_j(T))\right]\right] = \mathrm{E}_{\mathbf{v}}\left[\mathbf{1}(\tau_{12} \le T)\, \mathrm{E}_{\mathbf{Z}(\tau_{12})}\left[\det_{1 \le i,j \le 2}\left[\Phi^{v_i}_{\xi}(Z_j(T - \tau_{12}))\right]\right]\right].$$

By the martingale property of Φ^{v_i}_ξ(Z_j(·)) we can apply the optional stopping theorem and see that the RHS of the above equation coincides with

$$\mathrm{E}_{\mathbf{v}}\left[\mathbf{1}(\tau_{12} \le T) \det_{1 \le i,j \le 2}\left[\Phi^{v_i}_{\xi}(Z_j(\tau_{12}))\right]\right] = \mathrm{E}^{1}\left[\mathbf{1}(\tau_{12} \le T)\, \mathrm{E}^{2}\left[\det_{1 \le i,j \le 2}\left[\Phi^{v_i}_{\xi}(Z_j(\tau_{12}))\right]\right]\right]$$

$$= \frac{\sqrt{-1}}{v_1 - v_2} \prod_{a \in \operatorname{supp} \xi \setminus \{v_1, v_2\}} \frac{1}{(a - v_1)(a - v_2)}\; \mathrm{E}^{1}\left[\mathbf{1}(\tau_{12} \le T)\, \mathrm{E}^{2}\left[\big(W_1(\tau_{12}) - W_2(\tau_{12})\big) \prod_{k=1,2} G\big(V_k(\tau_{12}), W_k(\tau_{12})\big)\right]\right],$$

where G(v, w) = ∏_{a ∈ supp ξ \ {v_1, v_2}} (a − v − √−1 w), and the fact that V_1(τ_{12}) = V_2(τ_{12}) almost surely was used in the last equality. Since W_k(τ_{12}), k = 1, 2, are i.i.d. under P_2, we have E^2[(W_1(τ_{12}) − W_2(τ_{12})) ∏_{k=1,2} G(V_k(τ_{12}), W_k(τ_{12}))] = 0. This completes the proof. ∎
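The final step rests on the antisymmetry of the integrand under W_1 ↔ W_2 for i.i.d. W_1, W_2 (with V_1(τ_{12}) = V_2(τ_{12}) frozen); a sketch with a stand-in function g (ours, playing the role of G(v, ·) with v fixed) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(w):                      # stand-in for G(v, .) with v frozen
    return np.cos(w) + 0.3 * w**2

W1 = rng.standard_normal(100_000)
W2 = rng.standard_normal(100_000)
f = (W1 - W2) * g(W1) * g(W2)
# swapping W1 <-> W2 negates the integrand but preserves the joint law,
# so the expectation is zero
assert np.allclose(f + (W2 - W1) * g(W2) * g(W1), 0.0)
assert abs(f.mean()) < 0.05    # Monte Carlo mean is near 0
```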

Acknowledgements. A part of the present work was done during the participation of the authors in the international conference “Selfsimilar Processes and Their Applications”, Angers, July 20-24, 2009, whose last day was the special day for the 60th birthday of Professor Marc Yor. The authors would like to dedicate the present paper to Professor Marc Yor. M.K. is supported in part by the Grant-in-Aid for Scientific Research (C) (No.21540397) of Japan Society for the Promotion of Science. H.T. is supported in part by the Grant-in-Aid for Scientific Research (KIBAN-C, No.19540114) of Japan Society for the Promotion of Science.

References

[1] P. Billingsley, Convergence of Probability Measures, 2nd ed., John Wiley & Sons, New York, 1999.
[2] P. M. Bleher, A. B. Kuijlaars, Integral representations for multiple Hermite and multiple Laguerre polynomials, Ann. Inst. Fourier 55, 2001-2014 (2005).
[3] F. J. Dyson, A Brownian-motion model for the eigenvalues of a random matrix, J. Math. Phys. 3, 1191-1198 (1962).
[4] B. Eynard, M. L. Mehta, Matrices coupled in a chain: I. Eigenvalue correlations, J. Phys. A 31, 4449-4456 (1998).
[5] P. J. Forrester, Log-Gases and Random Matrices, London Mathematical Society Monographs, Princeton University Press, Princeton, 2010.
[6] D. J. Grabiner, Brownian motion in a Weyl chamber, non-colliding particles, and random matrices, Ann. Inst. Henri Poincaré, Probab. Stat. 35, 177-204 (1999).
[7] O. Kallenberg, Foundations of Modern Probability, Springer, Berlin, 1997.
[8] S. Karlin, J. McGregor, Coincidence probabilities, Pacific J. Math. 9, 1141-1164 (1959).
[9] M. Katori, H. Tanemura, Symmetry of matrix-valued stochastic processes and noncolliding diffusion particle systems, J. Math. Phys. 45, 3058-3085 (2004).
[10] M. Katori, H. Tanemura, Noncolliding Brownian motion and determinantal processes, J. Stat. Phys. 129, 1233-1277 (2007).
[11] M. Katori, H. Tanemura, Zeros of Airy function and relaxation process, J. Stat. Phys. 136, 1177-1204 (2009).
[12] M. Katori, H. Tanemura, Non-equilibrium dynamics of Dyson's model with an infinite number of particles, Commun. Math. Phys. 293, 469-497 (2010).
[13] M. Katori, H. Tanemura, Noncolliding squared Bessel processes, J. Stat. Phys. 142, 592-615 (2011).
[14] M. Katori, H. Tanemura, Noncolliding processes, matrix-valued processes and determinantal processes, Sugaku Expositions (AMS), in press; arXiv:1005.0533 [math.PR].
[15] B. Ya. Levin, Lectures on Entire Functions, Translations of Mathematical Monographs 150, Amer. Math. Soc., Providence, 1996.
[16] M. L. Mehta, Random Matrices, 3rd ed., Elsevier, Amsterdam, 2004.
[17] T. Nagao, P. Forrester, Multilevel dynamical correlation functions for Dyson's Brownian motion model of random matrices, Phys. Lett. A 247, 42-46 (1998).
[18] D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, 3rd ed., Springer, New York, 1998.
[19] H. Spohn, Interacting Brownian particles: a study of Dyson's model, in: Hydrodynamic Behavior and Interacting Particle Systems, G. Papanicolaou (ed.), pp. 151-179, IMA Volumes in Mathematics and its Applications 9, Springer, Berlin, 1987.