appeared in Statistica Sinica, 7, 93-108 (1997)

A New Look at Optimal Stopping Problems related to Mathematical Finance

M. Beibel and H. R. Lerche
Institut für Mathematische Stochastik
Albert-Ludwigs-Universität Freiburg i. Br., Germany

Abstract: A method is proposed to solve optimal stopping problems. Several examples, classical and new ones, are discussed. In particular, the values of American options (straddle and strangle) with infinite horizon are calculated.

Keywords and phrases: Generalized parking problems, martingales, optimal stopping, perpetual options.

Postal Address: Institut für Mathematische Stochastik, Universität Freiburg, Eckerstr. 1, 79104 Freiburg, Germany. Email addresses: [email protected] [email protected] Corresponding author: H. R. Lerche, Phone: 0049 761 203 5662, FAX: 0049 761 203 5661

1 General Ideas

In several papers on sequential Bayes testing and change-point detection (see for instance Beibel (1996), Chapter II of Lerche (1986), or Woodroofe, Lerche, and Keener (1993)) the following argument is used: The Bayes risk R(T) is represented for all stopping times T with R(T) < ∞ as

R(T) = E g(L_T) ,   (1)

where L_t denotes a certain stochastic process connected to the likelihood process evaluated at time t and where g is a positive function with a unique minimum, let us say at a*. Then we have R(T) = E g(L_T) ≥ g(a*). If L_t is a time-continuous process and passes a* with probability one, the optimal stopping time will be T* = inf{t > 0 | L_t = a*}. If L_t is discrete in time, one will usually not hit a* exactly; therefore one has to stop ahead of a*. This is also the case for the 'parking problem' described in Chow, Robbins and Siegmund (1971, pages 45 and 60). There g(x) = |x| and L_n = X_1 + ... + X_n, where the X_i are geometrically distributed. Therefore M. Woodroofe has called situations as described above 'generalized parking problems' (see Woodroofe, Lerche, and Keener (1993)). Of course, for time-continuous processes L_t the solution is trivial once one has the representation (1). Nevertheless, finding a representation of this type is sometimes not obvious (see e.g. Beibel (1996)). One can combine the above technique with the one recently used by Shepp and Shiryaev (1993). This yields an easy method for handling some tricky optimal stopping problems as well. Since our examples are formulated more naturally as maximization problems, we switch for convenience from minimization to maximization. To explain our technique



more thoroughly, let Z = (Z_t; 0 ≤ t < ∞) denote a continuous stochastic process for which we want to maximize E(Z_T) over all stopping times T with respect to some filtration F = (F_t; 0 ≤ t < ∞) with P(T < ∞) = 1. We will discuss a general approach to transform such a problem into a generalized parking problem. The basic idea is to find another continuous stochastic process Y adapted to F, a function g with a maximum uniquely located at some point y*, and a positive martingale M with M_0 = 1 such that

Z_t = g(Y_t) M_t for 0 ≤ t < ∞. By the properties of g we have Z_t ≤ g(y*) M_t. Since M is a positive martingale, we obtain for any stopping time T with P(T < ∞) = 1 that E(Z_T) ≤ g(y*). In order to prove the optimality of T* = inf{t > 0 | Y_t = y*}, one only needs to show that P(T* < ∞) = 1 and E(M_{T*} 1_{T* < ∞}) = 1.

Since {x ≥ 0; h(x) > 0} and {x ≤ 0; h(x) > 0} are both nonempty, we have

sup_{x≥0} G_p(x) = sup_{x≥0; h(x)>0} G_p(x) = [ inf_{x≥0; h(x)>0} (p e^{α1 x} + (1 − p) e^{α2 x}) / h(x) ]^{−1}

and

sup_{x≤0} G_p(x) = sup_{x≤0; h(x)>0} G_p(x) = [ inf_{x≤0; h(x)>0} (p e^{α1 x} + (1 − p) e^{α2 x}) / h(x) ]^{−1} .

Note that for all p ∈ (0, 1) it holds

0 < sup_{x≥0} G_p(x) ≤ (1/p) sup_{x≥0} e^{−α1 x} h(x) < ∞

and

0 < sup_{x≤0} G_p(x) ≤ (1/(1 − p)) sup_{x≤0} e^{−α2 x} h(x) < ∞ .

For fixed x with h(x) > 0 the function p ↦ [p e^{α1 x} + (1 − p) e^{α2 x}]/h(x) is linear. Therefore the functions m1(p) and m2(p) given by

m1(p) = inf_{x≥0; h(x)>0} [p e^{α1 x} + (1 − p) e^{α2 x}] / h(x)

and

m2(p) = inf_{x≤0; h(x)>0} [p e^{α1 x} + (1 − p) e^{α2 x}] / h(x)

are concave functions on (0, 1) with values in (0, ∞). The function m1 is nondecreasing and the function m2 is nonincreasing. Conditions (2) and (3) yield

lim_{p→1} m1(p) = 1 / sup_{x≥0}(e^{−α1 x} h(x)) < ∞ and lim_{p→0} m2(p) = 1 / sup_{x≤0}(e^{−α2 x} h(x)) < ∞ .

Since lim_{p→0} m1(p) < lim_{p→0} m2(p), we obtain lim_{p→0}(m1(p) − m2(p)) < 0. In a similar way we can show that lim_{p→1}(m1(p) − m2(p)) > 0. Therefore m1(p) − m2(p) has at least one zero in (0, 1).


□
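Lemma 1 lends itself to a direct numerical check. The sketch below is a minimal illustration, not part of the paper: the parameters σ, μ, r, K, L are hypothetical, the infima defining m1 and m2 are approximated on finite grids, and h is taken to be the strangle payoff h(x) = max(L − e^x, 0, e^x − K) that appears below. Bisection then locates the zero p* of m1 − m2.

```python
import math

# Hypothetical parameters for illustration (not taken from the paper).
sigma, mu, r = 0.4, 0.0, 0.05
K, L = 1.2, 0.8

# alpha1 > 0 > alpha2 are the roots of (sigma^2/2) a^2 + (mu - sigma^2/2) a - r = 0,
# so that exp(alpha_i X_t - r t) is a martingale for X_t = sigma W_t + (mu - sigma^2/2) t.
half = sigma ** 2 / 2
disc = math.sqrt((mu - half) ** 2 + 4 * half * r)
alpha1 = (-(mu - half) + disc) / (2 * half)
alpha2 = (-(mu - half) - disc) / (2 * half)

def h(x):
    """Strangle payoff in log-price coordinates."""
    return max(L - math.exp(x), 0.0, math.exp(x) - K)

def m(p, xs):
    """Grid approximation of inf over xs of (p e^{a1 x} + (1-p) e^{a2 x}) / h(x)."""
    return min((p * math.exp(alpha1 * x) + (1 - p) * math.exp(alpha2 * x)) / h(x)
               for x in xs if h(x) > 0)

xs_pos = [0.002 * i for i in range(1, 4001)]     # grid on (0, 8] for m1
xs_neg = [-0.002 * i for i in range(1, 4001)]    # grid on [-8, 0) for m2

# m1 - m2 is nondecreasing, negative near p = 0 and positive near p = 1
# (Lemma 1), so plain bisection locates p* with m1(p*) = m2(p*).
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(60):
    mid = (lo + hi) / 2
    if m(mid, xs_pos) - m(mid, xs_neg) < 0:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2
C_star = 1.0 / m(p_star, xs_pos)   # common value sup G_{p*} = 1/m1(p*)
print(p_star, C_star)
```

The monotonicity of m1 − m2 used here is exactly what the lemma's proof establishes, so bisection is guaranteed to converge to a zero.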

Remark In general p* is not unique. We will now show that p* is unique if there exists a point x̃ > 0 with e^{−α1 x̃} h(x̃) = sup_{x≥0}(e^{−α1 x} h(x)). Suppose there exist p* and p** with 0 < p* < p** < 1 such that m1(p*) − m2(p*) = 0 = m1(p**) − m2(p**). This implies 0 ≥ m1(p*) − m1(p**) = m2(p*) − m2(p**) ≥ 0 and so m1(p*) − m1(p**) = m2(p*) − m2(p**) = 0. Since m1 is concave and nondecreasing this yields m1(p**) = m1(p) for all p ∈ (p**, 1). Therefore we have m1(p**) = lim_{p→1} m1(p) = 1/sup_{x≥0}(e^{−α1 x} h(x)). This is a contradiction to

m1(p**) ≤ [p** e^{α1 x̃} + (1 − p**) e^{α2 x̃}] / h(x̃) < e^{α1 x̃} / h(x̃) = 1 / sup_{x≥0}(e^{−α1 x} h(x)) .

Theorem 4 Let p* be chosen according to Lemma 1 and let C* = sup_{x∈R} G_{p*}(x). If there exist points x1 > 0 and x2 < 0 such that G_{p*}(x1) = C* = G_{p*}(x2), then the supremum

sup_T E{ e^{−rT} h(X_T) 1_{T<∞} }

is attained for T* = inf{t > 0 | X_t = x1 or X_t = x2}.

Proof: For all stopping times T it holds

E{ e^{−rT} h(X_T) 1_{T<∞} } = E{ G_{p*}(X_T) e^{−rT} (p* e^{α1 X_T} + (1 − p*) e^{α2 X_T}) 1_{T<∞} } ≤ C* E{ e^{−rT} (p* e^{α1 X_T} + (1 − p*) e^{α2 X_T}) 1_{T<∞} } ≤ C* ,

since e^{−rt}(p* e^{α1 X_t} + (1 − p*) e^{α2 X_t}) is a positive martingale starting at 1.

Let x1 > log K, x2 < log L and 0 < p* < 1 be the solution of the following system of equations:

(e^{x1} − K) / (p* e^{α1 x1} + (1 − p*) e^{α2 x1}) = (L − e^{x2}) / (p* e^{α1 x2} + (1 − p*) e^{α2 x2}) ,

e^{x1} / (e^{x1} − K) = (p* α1 e^{α1 x1} + (1 − p*) α2 e^{α2 x1}) / (p* e^{α1 x1} + (1 − p*) e^{α2 x1}) ,

−e^{x2} / (L − e^{x2}) = (p* α1 e^{α1 x2} + (1 − p*) α2 e^{α2 x2}) / (p* e^{α1 x2} + (1 − p*) e^{α2 x2}) .

Let C* denote the common value of

(e^{x1} − K) / (p* e^{α1 x1} + (1 − p*) e^{α2 x1}) and (L − e^{x2}) / (p* e^{α1 x2} + (1 − p*) e^{α2 x2}) .

Then

sup_T E{ e^{−rT} max( L − e^{σW_T + (μ − σ²/2)T}, 0, e^{σW_T + (μ − σ²/2)T} − K ) } = C*

and the supremum is attained for

T* = inf{ t > 0 | σW_t + (μ − σ²/2)t = x1 or σW_t + (μ − σ²/2)t = x2 } .

Proof: Let G_p(·) be as in (4). Since sup_{x≥0} e^{−α1 x} h(x) = e^{−α1 x̃} h(x̃) for x̃ = log(α1/(α1 − 1)) + log K, there exists a unique p* with sup_{x≥0} G_{p*}(x) = C* = sup_{x≤0} G_{p*}(x). For any p ∈ (0, 1) we have lim_{x→∞} G_p(x) = 0. As h(x) ≥ 0 for all x and h(log K) = 0, the function G_p(·) assumes for any fixed p ∈ (0, 1) its maximum over (log K, ∞) at some point x in (log K, ∞). Each such point is a solution of G_p′(x) = 0. On (log K, ∞) this equation is equivalent to

e^x / (e^x − K) = (p α1 e^{α1 x} + (1 − p) α2 e^{α2 x}) / (p e^{α1 x} + (1 − p) e^{α2 x}) .

The function e^x/(e^x − K) is strictly decreasing on (log K, ∞) and the function (p α1 e^{α1 x} + (1 − p) α2 e^{α2 x})/(p e^{α1 x} + (1 − p) e^{α2 x}) is strictly increasing. Therefore there is at most one solution of G_p′(x) = 0 in (log K, ∞). With similar arguments one can show that for any fixed p ∈ (0, 1) the function G_p(x) assumes its maximum over the interval (−∞, log L) at the point which is the unique solution of

−e^x / (L − e^x) = (p α1 e^{α1 x} + (1 − p) α2 e^{α2 x}) / (p e^{α1 x} + (1 − p) e^{α2 x})

in (−∞, log L). □
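The monotonicity argument of this proof translates directly into a numerical scheme: for fixed p each first-order condition has a unique root, found by bisection, and an outer bisection over p equates the two maxima of G_p. The sketch below uses hypothetical parameter values (σ, μ, r, K, L are placeholders, not from the paper) with α1 > 0 > α2 the roots of (σ²/2)α² + (μ − σ²/2)α − r = 0.

```python
import math

# Hypothetical parameters (for illustration only; not from the paper).
sigma, mu, r = 0.4, 0.0, 0.05
K, L = 1.2, 0.8

half = sigma ** 2 / 2
disc = math.sqrt((mu - half) ** 2 + 4 * half * r)
alpha1 = (-(mu - half) + disc) / (2 * half)   # positive root
alpha2 = (-(mu - half) - disc) / (2 * half)   # negative root

def R(p, x):
    """Right-hand side of both first-order conditions (strictly increasing in x)."""
    num = p * alpha1 * math.exp(alpha1 * x) + (1 - p) * alpha2 * math.exp(alpha2 * x)
    den = p * math.exp(alpha1 * x) + (1 - p) * math.exp(alpha2 * x)
    return num / den

def bisect(f, lo, hi, n=80):
    """Root of an increasing f with f(lo) < 0 < f(hi)."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def x1_of(p):   # unique critical point of G_p on (log K, oo)
    return bisect(lambda x: R(p, x) - math.exp(x) / (math.exp(x) - K),
                  math.log(K) + 1e-9, 20.0)

def x2_of(p):   # unique critical point of G_p on (-oo, log L)
    return bisect(lambda x: R(p, x) + math.exp(x) / (L - math.exp(x)),
                  -40.0, math.log(L) - 1e-9)

def G(p, x):
    return max(L - math.exp(x), 0.0, math.exp(x) - K) / (
        p * math.exp(alpha1 * x) + (1 - p) * math.exp(alpha2 * x))

# Outer bisection on p: G_p(x2(p)) - G_p(x1(p)) is increasing in p.
p_star = bisect(lambda p: G(p, x2_of(p)) - G(p, x1_of(p)), 1e-9, 1 - 1e-9)
x1, x2 = x1_of(p_star), x2_of(p_star)
C_star = G(p_star, x1)
print(x1, x2, p_star, C_star)
```

At the computed point all three equations of the system hold simultaneously, which is how the output can be validated.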

2.5 Parabolic Boundaries

Let h be a measurable function such that sup_{x∈R} h(x)/H(x) < ∞, where

H(x) = ∫_0^∞ e^{ux − u²/2} u^{2β−1} du ,

with β ∈ R_+. We further assume that the supremum of h(x)/H(x) over R is attained at a unique point x* and that this supremum is strictly positive. Let C* = h(x*)/H(x*). Let x0 < x* and let X_t = W_t + x0 for 0 ≤ t < ∞, where W is a standard Brownian motion with W_0 = 0.


Problem 3 Find a stopping time T of X that maximizes

E{ (T + 1)^{−β} h( X_T / √(T + 1) ) } .

This problem is treated in van Moerbeke (1974a) (under different assumptions on h).

Theorem 5 Under the above assumptions it holds

sup_T E{ (T + 1)^{−β} h( X_T / √(T + 1) ) } = E{ (T* + 1)^{−β} h( X_{T*} / √(T* + 1) ) } = H(x0) C* ,

where

T* = inf{ t > 0 | X_t / √(t + 1) = x* } .

Proof: Let M_t denote the process (t + 1)^{−β} H( X_t / √(t + 1) ) / H(x0). It holds

(t + 1)^β ∫_0^∞ e^{uX_t − u²t/2 − u²/2} u^{2β−1} du = H( X_t / √(t + 1) ) .

Moreover exp{uX_t − u²t/2} = exp{ux0} exp{uW_t − u²t/2} and E(exp{uX_t − u²t/2}) = exp{ux0}. Therefore (M_t; 0 ≤ t < ∞) is a positive martingale with EM_0 = 1 and by the definition of C* it holds that

(t + 1)^{−β} h( X_t / √(t + 1) ) = H(x0) [ h( X_t / √(t + 1) ) / H( X_t / √(t + 1) ) ] M_t ≤ H(x0) C* M_t .

This implies

E{ (T + 1)^{−β} h( X_T / √(T + 1) ) } ≤ H(x0) C* E(M_T) ≤ H(x0) C*

for all F^X–stopping times T. On the set {T* < ∞} one has

h( X_{T*} / √(T* + 1) ) / H( X_{T*} / √(T* + 1) ) = C*

and so

(T* + 1)^{−β} h( X_{T*} / √(T* + 1) ) = C* H(x0) M_{T*} .

In order to complete the proof it is therefore sufficient to show P(T* < ∞) = 1 and E(M_{T*}) = 1. The law of the iterated logarithm immediately yields P(T* < ∞) = 1. Let ρ denote the probability measure on R_+ with Lebesgue density exp{ux0 − u²/2} u^{2β−1} / H(x0). Let Q denote the probability measure on σ(W_s; 0 ≤ s < ∞) with

dQ/dP |_{F_t^W} = ∫_0^∞ e^{uW_t − u²t/2} ρ(du) = (1/H(x0)) ∫_0^∞ e^{uX_t − u²t/2 − u²/2} u^{2β−1} du = M_t

for 0 ≤ t < ∞, where F_t^W = σ(W_s; 0 ≤ s ≤ t). Let B denote a standard Brownian motion and Θ a random variable with distribution ρ that is independent of B. Under Q the process (X_t; 0 ≤ t < ∞) has the same distribution as (x0 + B_t + Θt; 0 ≤ t < ∞). Therefore Q(T* < ∞) = 1 and so the assertion follows. □
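The change of variable v = u√(t + 1) behind the identity (t + 1)^β ∫_0^∞ e^{uX − u²t/2 − u²/2} u^{2β−1} du = H(X/√(t + 1)) used in the proof can be checked numerically. The sketch below compares the two sides for arbitrary sample values of β, t and X (chosen purely for illustration).

```python
import math

def H(x, beta, n=40000, umax=15.0):
    # H(x) = ∫_0^∞ exp(ux - u²/2) u^{2β-1} du  (right-endpoint Riemann sum;
    # the integrand vanishes at u = 0 for β > 1/2 and decays rapidly before umax)
    du = umax / n
    return sum(math.exp(i * du * x - (i * du) ** 2 / 2) * (i * du) ** (2 * beta - 1)
               for i in range(1, n + 1)) * du

def lhs(X, t, beta, n=40000, umax=15.0):
    # (t+1)^β ∫_0^∞ exp(uX - u²t/2 - u²/2) u^{2β-1} du
    du = umax / n
    s = sum(math.exp(i * du * X - (i * du) ** 2 * (t + 1) / 2) * (i * du) ** (2 * beta - 1)
            for i in range(1, n + 1))
    return (t + 1) ** beta * s * du

beta, t, X = 0.75, 3.0, 1.1    # arbitrary sample values, for illustration only
print(lhs(X, t, beta), H(X / math.sqrt(t + 1), beta))   # the two sides agree
```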

Example (A classical stopping problem) We now consider the special case h(x) = x, x0 = 0, and β = 1/2. That means we want to maximize E{W_T/(T + 1)}. This problem is treated in Shepp (1969) and Taylor (1968) and was initiated by Chow and Robbins (1965) and Dvoretzky (1965). An easy calculation shows that

∫_0^∞ e^{ux − u²/2} du = e^{x²/2} ∫_{−∞}^x e^{−z²/2} dz .

Differentiation yields the following transcendental equation for the threshold x* (see Shepp (1969)):

0 = (1 − x²) ∫_0^∞ e^{ux − u²/2} du − x .
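The threshold x* can be evaluated numerically from this equation. The sketch below treats the β = 1/2 case, so H(x) = ∫_0^∞ e^{ux−u²/2} du: it computes H via the Gaussian identity above, cross-checks it against direct quadrature, and bisects the transcendental equation.

```python
import math

def H(x):
    # H(x) = ∫_0^∞ exp(ux - u²/2) du, via the Gaussian identity above:
    # = e^{x²/2} ∫_{-∞}^x e^{-z²/2} dz = e^{x²/2} sqrt(pi/2) (1 + erf(x/sqrt(2)))
    return math.exp(x * x / 2) * math.sqrt(math.pi / 2) * (1 + math.erf(x / math.sqrt(2)))

def H_num(x, n=20000, umax=12.0):
    # direct trapezoidal evaluation of the same integral, as a cross-check
    du = umax / n
    s = 0.5 * (1.0 + math.exp(umax * x - umax * umax / 2))
    for i in range(1, n):
        u = i * du
        s += math.exp(u * x - u * u / 2)
    return s * du

def f(x):
    # Shepp's transcendental equation: 0 = (1 - x²) H(x) - x
    return (1 - x * x) * H(x) - x

lo, hi = 0.0, 1.0      # f(0) = H(0) > 0 and f(1) = -1 < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2
print(x_star)          # ≈ 0.8399, Shepp's threshold
```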

Remark Let T_a denote the stopping time T_a = inf{t > 0 | x0 + W_t ≥ a√(t + 1)}. Since M_t is a martingale, the optional stopping theorem yields

E(T_a + 1)^{−β} = H(x0)/H(a)

for a > x0 and β > 0. Note that sup_{−∞<z≤a} H(z) = H(a) < ∞. This is a special case of the ...

Let τ be a random time, independent of W, with P(τ > t) = e^{−λt} for some λ > 0 and all t > 0. Let

X_t = θ0 t + σW_t for t ≤ τ ,
X_t = θ0 τ + θ1 (t − τ) + σW_t for t > τ ,

where σ > 0 and θ0 ≠ θ1 ∈ R. This model has been considered by Shiryayev (1963), who studied the problem of detecting the change of drift as soon and as reliably as possible if one observes X sequentially. Now let r ≥ 0 and θ0 and θ1 be such that θ0 + σ²/2 > r > θ1 + σ²/2. Obviously e^X is a geometric Brownian motion whose mean changes at the random time point τ. We will discuss the problem of maximizing E(e^{−rT} e^{X_T} 1_{T<∞}). If θ0 + σ²/2 − r > λ, we have for any t > 0 that

E(e^{−rt} e^{X_t}) ≥ P(τ ≥ t) e^{(θ0 − r)t} E(e^{σW_t}) = e^{(θ0 − r + σ²/2 − λ)t}

and lim sup_{t→∞} E(e^{−rt} e^{X_t}) = +∞. This implies sup_T E(e^{−rT} e^{X_T} 1_{T<∞}) = +∞ for θ0 + σ²/2 − r > λ, since the change occurs 'too late'. If θ0 + σ²/2 − r = λ and θ0 − θ1 ≤ λ, then

E(e^{−rt} e^{X_t}) = e^{(θ0 + σ²/2 − r)t} E(e^{(θ1 − θ0)(t − τ)^+}) ≥ e^{(θ0 + σ²/2 − r)t} e^{−λt} ∫_0^t λ du = λt .

Therefore we have again limt→∞ E(e−rt eXt ) = +∞. The case ‘θ0 + σ 2 /2 − r = λ and θ0 − θ1 > λ’ is less obvious. Unfortunately the method which we use below cannot be applied in that case. Therefore we shall assume from now on that θ0 + σ 2 /2 − r < λ.
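Since τ is exponential and independent of W, E(e^{−rt} e^{X_t}) = e^{(θ0 + σ²/2 − r)t} E(e^{(θ1 − θ0)(t − τ)^+}) is available in closed form, and the case distinction above can be checked numerically. The sketch below uses hypothetical parameter values satisfying θ0 + σ²/2 > r > θ1 + σ²/2 and compares a regime with θ0 + σ²/2 − r > λ (divergence) against one with θ0 + σ²/2 − r < λ (decay).

```python
import math

def discounted_mean(t, theta0, theta1, sigma, r, lam):
    # E[e^{-rt} e^{X_t}] = e^{(θ0+σ²/2-r)t} * E[e^{(θ1-θ0)(t-τ)^+}] with τ ~ Exp(λ):
    # E[e^{a(t-τ)^+}] = e^{-λt} + λ e^{at}(1 - e^{-(a+λ)t})/(a+λ)   (a = θ1-θ0, a+λ ≠ 0)
    a = theta1 - theta0
    tail = math.exp(-lam * t) + lam * math.exp(a * t) * (1 - math.exp(-(a + lam) * t)) / (a + lam)
    return math.exp((theta0 + sigma ** 2 / 2 - r) * t) * tail

sigma, r, theta0, theta1 = 0.3, 0.1, 0.2, -0.1   # hypothetical values
# Here θ0 + σ²/2 - r = 0.145 and r > θ1 + σ²/2 as required.

div = [discounted_mean(t, theta0, theta1, sigma, r, lam=0.1) for t in (10, 50, 100)]
bnd = [discounted_mean(t, theta0, theta1, sigma, r, lam=0.2) for t in (10, 50, 100)]
print(div)   # grows without bound  (λ = 0.1 < 0.145)
print(bnd)   # decays toward 0      (λ = 0.2 > 0.145)
```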


Problem 4 Let θ0 + σ²/2 − r < λ. Find a stopping time T of X that maximizes

E( e^{−rT} e^{X_T} 1_{T<∞} ) .