arXiv:1802.06347v1 [math.OC] 18 Feb 2018

Optimal stopping, randomized stopping and singular control with partial information flow

N. Agram¹,², S. Haadem¹, B. Øksendal¹ and F. Proske¹

18 February 2018

Abstract

The purpose of this paper is two-fold:

• We extend the well-known relation between optimal stopping and randomized stopping of a given stochastic process to a situation where the available information flow is a sub-filtration of the filtration of the process. We call these problems optimal stopping and randomized stopping with partial information.

• Following an idea of Krylov [K] we introduce a special singular stochastic control problem with partial information and show that it is also equivalent to the partial information optimal stopping and randomized stopping problems. We then show that the solution of this singular control problem can be expressed in terms of (partial information) variational inequalities, which in turn can be rewritten as a reflected backward stochastic differential equation (RBSDE) with partial information.

MSC [2010]: 93EXX; 93E20; 60J75; 62L15; 60H10; 60H20; 49N30.
Keywords: Optimal stopping; Optimal control; Singular control; RBSDEs; Partial information.

¹ Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway. Email: [email protected], [email protected], [email protected], [email protected]. This research was carried out with support of the Norwegian Research Council, within the research project Challenges in Stochastic Control, Information and Applications (STOCONINF), project number 250768/F20.
² University of Biskra, Algeria.


1 Introduction

There are several classic papers in the literature on the relation between optimal stopping, randomized stopping and singular control of a given stochastic process with filtration F := {F_t}_{t≥0}. See e.g. Krylov [K], Wang [W], Gyöngy and Šiška [GS] and the references therein. The purpose of this paper is to extend this relation to a situation where the admissible stopping times are required to be stopping times with respect to a given partial information flow H = {H_t}_{t≥0} with H_t ⊆ F_t for all t, and the admissible controls are required to be H-adapted. This is a common situation in many applications, and one of our motivations for this paper is to be able to study such more realistic optimal stopping problems. To the best of our knowledge, the only paper in the literature that deals with this type of partial information optimal stopping is Øksendal and Sulem [ØS2], where the study is based on a maximum principle for singular stochastic control of jump diffusions, associated reflected backward stochastic differential equations and optimal stopping. In the current paper we extend the result of Øksendal and Sulem [ØS2] to a more general setting, using a more direct approach. Our main idea is based on the following two key elements:

(i) We prove an extension of Lemma 2 (a), p. 36, in Krylov [K] to a partial information flow situation.

(ii) We extend the results in Gyöngy and Šiška [GS] to partial information.

2 Framework and problem formulations

Let (Ω, F, F = {F_t}_{t≥0}, P) be a filtered probability space satisfying the usual conditions. Let T ≤ ∞ be a fixed terminal time and let H := {H_t}_{t≥0} ⊂ F be another filtration satisfying the usual conditions. We assume that

    H_t ⊆ F_t for all t.

Further, let T_H = T_H^(T) denote the set of all H-stopping times τ ≤ T, i.e. the set of all functions

    τ : Ω → [0, T]

such that {ω : τ(ω) ≤ t} ∈ H_t for all t ∈ [0, T]. For example, we could have H_t = F_{(t−δ)+}; t ≥ 0. In the following we let {k(t)}_{t≥0} be a given F-predictable process which is continuous at t = 0 and satisfies

    sup_{τ∈T_H} E[|k(τ)|] =: κ < ∞.
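To illustrate the delayed-information example H_t = F_{(t−δ)+}, here is a minimal simulation sketch; the drifted random walk, the threshold rule and all variable names are our own illustration, not from the paper. A rule that acts only on the δ-delayed observation is an H-stopping time, and for a threshold rule it fires exactly δ steps after the corresponding F-stopping time.

```python
import random

random.seed(1)
delta, barrier = 10, 5.0   # information delay and stopping threshold (illustrative)

# One path of a random walk with upward drift (so the barrier is hit a.s.).
X = [0.0]
while X[-1] < barrier:
    X.append(X[-1] + random.choice([-1.0, 1.0, 1.0]))
tau_full = len(X) - 1            # F-stopping time: first t with X[t] >= barrier

# Extend the path so the delayed observer has data up to tau_full + delta.
for _ in range(delta):
    X.append(X[-1] + random.choice([-1.0, 1.0, 1.0]))

# H-stopping time with H_t = F_{(t-delta)+}: the rule may only use X[(t-delta)+].
delayed = [X[max(t - delta, 0)] for t in range(len(X))]
tau_delayed = next(t for t, x in enumerate(delayed) if x >= barrier)

# The delayed rule fires exactly delta steps after the fully informed one.
assert tau_delayed == tau_full + delta
```

The point of the sketch is only that {τ_delayed ≤ t} depends on the path through time (t − δ)+, so τ_delayed is a legitimate H-stopping time while τ_full is not.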

2.1 Partial information optimal stopping problem

We first consider the following partial information optimal stopping problem:

Problem 2.1 Find Φ ∈ R and τ* ∈ T_H such that

    Φ := sup_{τ∈T_H} E[k(τ)] = E[k(τ*)].    (2.1)

Since H_t ⊆ F_t for all t, we call this a partial information optimal stopping problem. If we had H_t ⊇ F_t for all t, this would be an inside information optimal stopping problem. A special inside information optimal stopping problem is studied (and solved) in Hu and Øksendal [HØ], based on Malliavin calculus and forward integration theory.

2.2 Partial information randomized stopping

Next we formulate the corresponding partial information randomized stopping problem:

Problem 2.2 Let G_H be the set of H-adapted, right-continuous and non-decreasing processes G(t) such that G(0−) = 0 and G(T) = 1 a.s. Find Λ ∈ R and G* ∈ G_H such that

    Λ := sup_{G∈G_H} E[∫_0^T k(t) dG(t)] = E[∫_0^T k(t) dG*(t)].

2.3 Partial information singular control

Finally, we introduce our corresponding partial information singular control problem:

Problem 2.3 Let A_H denote the set of all H-adapted, non-decreasing, right-continuous processes ξ(t) : [0, T] → [0, ∞] such that ξ(0−) = 0 and ξ(T) = ∞. Find Ψ ∈ R and ξ* ∈ A_H such that

    Ψ := sup_{ξ∈A_H} E[∫_0^T k(t) exp(−ξ(t)) dξ(t)] = E[∫_0^T k(t) exp(−ξ*(t)) dξ*(t)].

We will prove that all three problems are equivalent, in the sense that Φ = Λ = Ψ, and we will find explicit relations between the optimal τ*, G* and ξ*.
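As a heuristic for why the singular-control and randomized-stopping values should coincide, note that the map ξ ↦ G := 1 − exp(−ξ) sends a continuous non-decreasing ξ to a non-decreasing G with dG = exp(−ξ)dξ pathwise, so the two objectives match integrand by integrand. The following sketch checks this numerically; the particular ξ and k are our own illustrative choices, and the requirement ξ(T) = ∞ is dropped here since only the integrand identity is being tested.

```python
import math

# Grid on [0, T], a smooth non-decreasing path xi, and a payoff process k.
T, N = 1.0, 200_000
t = [T * i / N for i in range(N + 1)]
xi = [2.0 * s + 0.3 * math.sin(3.0 * s) + 3.0 * s * s for s in t]  # increasing
k = [math.cos(2.0 * s) for s in t]

G = [1.0 - math.exp(-x) for x in xi]   # candidate randomized-stopping "clock"

# Riemann-Stieltjes sums for the two objectives: ∫ k e^{-xi} dxi and ∫ k dG.
I_sing = sum(k[i] * math.exp(-xi[i]) * (xi[i + 1] - xi[i]) for i in range(N))
I_rand = sum(k[i] * (G[i + 1] - G[i]) for i in range(N))

assert abs(I_sing - I_rand) < 1e-3
```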

3 Randomized stopping and optimal stopping with partial information flow

In this section we prove that Problem 2.1 and Problem 2.2 are equivalent. The following result may be regarded as an extension of Theorem 2.1 in Gyöngy and Šiška [GS] to partial information:

Theorem 3.1

    Λ := sup_{G∈G_H} E[∫_0^T k(t) dG(t)] = sup_{τ∈T_H} E[k(τ)] =: Φ.

Proof. Choose τ ∈ T_H and define

    G^(n)(t) = 1_{t≥τ>0} + (1 − e^{−nt}) 1_{τ=0}, n = 1, 2, . . . .

Then G^(n) ∈ G_H and we see that

    E[k(τ)] = lim_{n→∞} E[∫_0^T k(t) dG^(n)(t)] ≤ sup_{G∈G_H} E[∫_0^T k(t) dG(t)].
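The role of the (1 − e^{−nt})1_{τ=0} term can be checked numerically; this sketch, with our own illustrative choice of k, verifies that on {τ = 0} one has dG^(n)(t) = n e^{−nt} dt, and since k is continuous at 0, ∫_0^T k(t) n e^{−nt} dt → k(0) as n → ∞.

```python
import math

def k(t):
    # Any payoff process continuous at 0 (illustrative choice); here k(0) = 1.
    return math.cos(t)

def I(n, T=1.0, N=200_000):
    # Midpoint Riemann sum for ∫_0^T k(t) n e^{-n t} dt, i.e. ∫ k dG^(n) on {tau = 0}.
    dt = T / N
    return sum(k((i + 0.5) * dt) * n * math.exp(-n * (i + 0.5) * dt) * dt
               for i in range(N))

# The integrals approach k(0) = 1 as n grows.
vals = [I(n) for n in (5, 50, 500)]
assert abs(vals[-1] - k(0.0)) < 1e-2
assert abs(vals[-1] - 1.0) < abs(vals[0] - 1.0)
```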

Since τ ∈ T_H was arbitrary, this proves that

    sup_{τ∈T_H} E[k(τ)] ≤ sup_{G∈G_H} E[∫_0^T k(t) dG(t)].

To get the opposite inequality, we define for each G ∈ G_H and r ∈ [0, G(T)) = [0, 1) the time change α(r) by

    α(r) = inf{s ≥ 0; G(s) > r}.

Then {ω; α(r) < t} = {ω; G(t) > r} ∈ H_t, so α(r) ∈ T_H for all r. Moreover, G(α(t)) = t for all t and hence

    E[∫_0^T k(t) dG(t)] = E[∫_0^{G(T)} k(α(r)) dr] ≤ ∫_0^1 sup_{τ∈T_H} E[k(τ)] dr = sup_{τ∈T_H} E[k(τ)]. □
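The time change in the proof can be checked on a discrete example; this is our own illustration, not from the paper. For a right-continuous step process G with G(T) = 1, α(r) = inf{s : G(s) > r} is the generalized inverse of G, and the substitution ∫_0^T k(t) dG(t) = ∫_0^1 k(α(r)) dr holds exactly, since α is piecewise constant.

```python
import bisect, math, random

random.seed(0)

# A right-continuous, non-decreasing step "clock" G with G(0-) = 0, G(T) = 1:
# jumps of random sizes at random times in [0, T], normalized to sum to 1.
T = 1.0
times = sorted(random.uniform(0.0, T) for _ in range(8))
raw = [random.uniform(0.1, 1.0) for _ in times]
total = sum(raw)
jumps = [w / total for w in raw]
cumG = [sum(jumps[: i + 1]) for i in range(len(jumps))]   # G at each jump time

def k(t):
    return math.cos(5.0 * t)    # illustrative payoff process

def alpha(r):
    """alpha(r) = inf{s >= 0 : G(s) > r} -- the generalized inverse of G."""
    return times[bisect.bisect_right(cumG, r)]

# Left side: Stieltjes integral of k against the step function G.
lhs = sum(k(t) * dg for t, dg in zip(times, jumps))

# Right side: ∫_0^1 k(alpha(r)) dr, exact because alpha(r) = times[j]
# for r in [G(t_j -), G(t_j)); evaluate on each such interval.
grid = [0.0] + cumG
rhs = sum(k(alpha((grid[j] + grid[j + 1]) / 2.0)) * (grid[j + 1] - grid[j])
          for j in range(len(jumps)))

assert abs(lhs - rhs) < 1e-12
```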

4 Singular control and optimal stopping with partial information

In this section we prove that Problem 2.1 and Problem 2.3 are equivalent. First we establish an auxiliary result:

Lemma 4.1 Let ξ ∈ A_H and t ∈ [0, T]. Then

    ∫_{0+}^t exp(−ξ(s)) dξ(s) ≤ 1 − exp(−ξ(t)) ≤ ∫_{0+}^t exp(−ξ(s−)) dξ(s); t ∈ [0, T],

i.e.,

    exp(−ξ(t)) dξ(t) ≤ d(− exp(−ξ(t))) ≤ exp(−ξ(t−)) dξ(t); t ∈ [0, T].

Proof. (i) We first prove the second inequality: Let ξ ∈ A_H. The Itô formula for semimartingales (see e.g. Theorem II.32 in Protter [P]) gives that for all f ∈ C²(R) we have

    f(ξ(t)) = f(0) + ∫_{0+}^t f′(ξ(s−)) dξ(s) + (1/2) ∫_{0+}^t f″(ξ(s−)) d[ξ, ξ]_s^c
              + Σ_{0<s≤t} {f(ξ(s)) − f(ξ(s−)) − f′(ξ(s−)) Δξ(s)}.

Since ξ has finite variation, we have d[ξ, ξ]_s^c = 0. Therefore, applying the Itô formula to the concave function f(x) = 1 − exp(−x); x ∈ R, we get

    1 − exp(−ξ(t)) = ∫_{0+}^t exp(−ξ(s−)) dξ(s) + Σ_{0<s≤t} {− exp(−ξ(s)) + exp(−ξ(s−)) − exp(−ξ(s−)) Δξ(s)}
                   ≤ ∫_{0+}^t exp(−ξ(s−)) dξ(s),

since each term of the sum is nonpositive by the convexity of x ↦ exp(−x).
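The sandwich in Lemma 4.1 is easy to verify numerically for a pure-jump path; this is our own illustration. For a piecewise-constant ξ, both Stieltjes integrals reduce to finite sums over the jump times, using ξ(s−) just before and ξ(s) just after each jump.

```python
import math, random

random.seed(2)

# A pure-jump, non-decreasing path: xi jumps by d > 0 at each of 10 jump times.
jumps = [random.uniform(0.0, 1.5) for _ in range(10)]
xi_right = []      # xi(s_j)  = value just after the j-th jump
xi_left = []       # xi(s_j-) = value just before the j-th jump
x = 0.0
for d in jumps:
    xi_left.append(x)
    x += d
    xi_right.append(x)

lower = sum(math.exp(-xr) * d for xr, d in zip(xi_right, jumps))
upper = sum(math.exp(-xl) * d for xl, d in zip(xi_left, jumps))
middle = 1.0 - math.exp(-xi_right[-1])      # 1 - exp(-xi(t)), with xi(0-) = 0

# Lemma 4.1:  ∫ e^{-xi(s)} dxi(s)  <=  1 - e^{-xi(t)}  <=  ∫ e^{-xi(s-)} dxi(s).
assert lower <= middle <= upper
```

Each jump contributes e^{−ξ(s)}Δξ ≤ e^{−ξ(s−)} − e^{−ξ(s)} ≤ e^{−ξ(s−)}Δξ by the mean value theorem, and the middle terms telescope to 1 − e^{−ξ(t)}.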

For δ > 0, choose ζ₀ ∈ V(ξ̂) given by

    ζ₀(s) = 0 for s < t,
    ζ₀(s) = (s − t)a/δ for t ≤ s ≤ t + δ,
    ζ₀(s) = a for s ≥ t + δ,

for some t ∈ [0, T] and some bounded H_t-measurable random variable a(·) ≥ 0. Then ζ₀ ∈ V(ξ̂) and (5.4) gives

    E[∫_t^{t+δ} k(s) exp(−ξ̂(s)) ((s − t)a/δ) dξ̂(s) + ∫_{t+δ}^T k(s) exp(−ξ̂(s)) a dξ̂(s)
      − ∫_t^{t+δ} k(s) exp(−ξ̂(s)) (a/δ) ds] ≥ 0.

Since this holds for all such a(·) and all δ > 0, we conclude that

    E[∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s) − k(t) exp(−ξ̂(t)) | H_t] ≥ 0; t ∈ [0, T].

Next, let us choose

1. dζ₁(s) = dξ̂(s), and
2. dζ₂(s) = −dξ̂(s).

Then ζᵢ ∈ V(ξ̂) for i = 1, 2, and (5.4) gives

    E[∫_0^T k(s) exp(−ξ̂(s)) {−ξ̂(s) dξ̂(s) + dξ̂(s)}] = 0.    (5.5)

Note that by the Fubini theorem we have

    ∫_0^T (∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s)) dξ̂(t)
      = ∫_0^T (∫_0^s dξ̂(t)) k(s) exp(−ξ̂(s)) dξ̂(s)
      = ∫_0^T k(s) exp(−ξ̂(s)) ξ̂(s) dξ̂(s).    (5.6)

Substituting (5.6) into (5.5) we get

    E[∫_0^T (∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s) − k(t) exp(−ξ̂(t))) dξ̂(t)] = 0.

This proves part a) of the following theorem:

Theorem 5.2 (Variational inequalities)
a) Suppose ξ̂ ∈ A_H is optimal for (5.1)-(5.2). Then

    E[∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s) − k(t) exp(−ξ̂(t)) | H_t] ≥ 0; t ∈ [0, T],    (5.7)

and

    E[∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s) − k(t) exp(−ξ̂(t)) | H_t] dξ̂(t) = 0; t ∈ [0, T].    (5.8)

b) Conversely, suppose (5.7)-(5.8) hold for some ξ̂ ∈ A_H. Then

    D(ξ̂, ζ) ≤ 0 for all ζ ∈ V(ξ̂).    (5.9)

Proof. Statement b) is proved by reversing the argument used to prove that (5.9) ⇒ (5.7)-(5.8). We omit the details. □


5.2 Reflected BSDEs with partial information

We recall a direct approach to optimal stopping with partial information, as presented in e.g. Øksendal and Zhang [ØZ]: Define the Snell envelope Y(t) for 0 ≤ t ≤ T by

    Y(t) = sup_{τ∈T^H_{t,T}} E[k(τ) | H_t],

where T^H_{t,T} is the family of H-stopping times τ such that t ≤ τ ≤ T.

Theorem 5.3 If Y(t) is the Snell envelope defined above, then there exist an H-adapted, non-decreasing, right-continuous process K(t) and an H-martingale M(t) such that (Y(t), K(t), M(t)) is the unique solution of the RBSDE given by the following equations and inequalities:

• dY(t) = −dK(t) + dM(t);
• Y(t) ≥ E[k(t) | H_t];
• Y(T) = E[k(T) | H_T];
• (Y(t) − E[k(t) | H_t]) dK(t) = 0; t ∈ [0, T].
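In discrete time, the triple of Theorem 5.3 can be built by the classical backward recursion Y_n = max(h_n, E[Y_{n+1} | H_n]); the sketch below is our own illustration on a binomial walk with H equal to its full filtration, and the barrier h is an arbitrary illustrative choice for E[k(n) | H_n]. The recursion produces a non-decreasing compensator K exactly where the barrier is attained, matching the four bullet points.

```python
from fractions import Fraction

N = 6                          # time steps of a symmetric +/-1 binomial walk
half = Fraction(1, 2)

def h(n, x):
    # Barrier process E[k(n) | H_n] as a function of the walk's state x
    # (an illustrative choice, not from the paper).
    return Fraction(max(4 - abs(x), 0))

# Snell envelope by backward recursion on the recombining tree.
# Y[n] maps each reachable state x (same parity as n) to Y_n(x).
Y = {N: {x: h(N, x) for x in range(-N, N + 1, 2)}}
for n in range(N - 1, -1, -1):
    Y[n] = {}
    for x in range(-n, n + 1, 2):
        cont = half * (Y[n + 1][x + 1] + Y[n + 1][x - 1])  # E[Y_{n+1} | X_n = x]
        Y[n][x] = max(h(n, x), cont)

# Check the RBSDE characterization of Theorem 5.3 at every node.
for n in range(N):
    for x in range(-n, n + 1, 2):
        cont = half * (Y[n + 1][x + 1] + Y[n + 1][x - 1])
        dK = Y[n][x] - cont                    # compensator increment
        assert Y[n][x] >= h(n, x)              # Y dominates the barrier
        assert dK >= 0                         # K is non-decreasing
        assert dK * (Y[n][x] - h(n, x)) == 0   # Skorokhod complementarity
```

Exact rational arithmetic (`Fraction`) makes the complementarity condition hold with equality rather than up to floating-point error.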

5.3 Singular control and related RBSDE under partial information

It is possible to express Theorem 5.2 in terms of a reflected backward stochastic differential equation (RBSDE) with respect to a partial filtration, as follows: Consider the problem of finding H-adapted processes p(t), ξ(t) such that

    p(·) is càdlàg,    (5.10)
    ξ ∈ A_H,    (5.11)
    p(t) = E[∫_t^T dξ(s) | H_t]; t ∈ [0, T],    (5.12)
    p(t) − E[k(t) | H_t] ≥ 0 for all t ∈ [0, T], and    (5.13)
    (p(t) − E[k(t) | H_t]) dξ(t) = 0 for all t ∈ [0, T].    (5.14)

If such a pair (p, ξ) exists, we call it the solution of the singular RBSDE (5.10)-(5.14) with filtration H. The process k(t) is called a reflecting barrier.

Remark 5.4 Note that the RBSDE (5.12) can be written in the following equivalent way:

    dp(t) = −dξ(t) + dM(t); t ∈ [0, T], p(T) = 0,    (5.15)

for some (unique) H-martingale M. It now follows from (5.15) and (4.1) that

    d(exp(−ξ̂(t)) p̂(t)) = exp(−ξ̂(t)) {−k(t) dξ̂(t) + dM(t)} + p(t) dZ(t),

where

    Z(t) = Σ_{0≤l≤t} {exp(−ξ̂(l)) − exp(−ξ̂(l−)) + exp(−ξ̂(l)) Δξ̂(l)}.

Let {τ_i} be the jump times of ξ̂, let n := max{i ≥ 0 : τ_i ≤ T} and put r_i := τ_i. Define

    g(t) = exp(−ξ̂(t)) − exp(−ξ̂(t−)) + exp(−ξ̂(t)) Δξ̂(t).

Integrating and taking the conditional expectation with respect to H_t, we get

    exp(−ξ̂(t)) p̂(t) = E[∫_t^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_t] − E[Σ_{i=1}^n p(τ_i) g(τ_i) | H_t]; t ∈ [0, T].

Now let ǫ > 0 be so small that ξ̂ has no jump in (r₁ − ǫ, r₁). Then

    exp(−ξ̂(r₁ − ǫ)) p̂(r₁ − ǫ) = E[∫_{(r₁−ǫ)+}^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_{r₁−}] − p(r₁−) E[g(r₁) | H_{r₁−}].

Letting ǫ → 0 gives

    exp(−ξ̂(r₁−)) p̂(r₁−) = E[∫_{r₁}^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_{r₁−}] − p(r₁−) E[g(r₁) | H_{r₁−}],

or

    p̂(r₁−) = E[∫_{r₁}^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_{r₁−}] / (E[g(r₁) | H_{r₁−}] + exp(−ξ̂(r₁−))).

Further,

    exp(−ξ̂(r₂ − ǫ)) p(r₂ − ǫ) = ∫_{(r₂−ǫ)+}^T k(s) exp(−ξ̂(s)) dξ̂(s) − ∫_{r₂−ǫ}^T exp(−ξ̂(s)) dM(s)
                                 − p(r₂−) g(r₂) − p(r₁−) g(r₁).

This gives

    exp(−ξ̂(r₂−)) p(r₂−) = E[∫_{r₂}^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_{r₂−}]
                            − p(r₂−) E[g(r₂) | H_{r₂−}] − p(r₁−) E[g(r₁) | H_{r₂−}].

So

    p(r₂−) = E[∫_{r₂}^T k(s) exp(−ξ̂(s)) dξ̂(s) | H_{r₂−}] / (exp(−ξ̂(r₂−)) + E[g(r₂) | H_{r₂−}])
             − p(r₁−) E[g(r₁) | H_{r₂−}] / (exp(−ξ̂(r₂−)) + E[g(r₂) | H_{r₂−}]).

On the other hand we have that

    p̂(t) = E[∫_{t+}^T (k(s) exp(−ξ̂(s)) / exp(−ξ̂(t))) dξ̂(s) | H_t] − E[Σ_{i=1}^n p(r_i−) g(r_i) 1_{(t,T]}(r_i) | H_t],

with explicitly known p(r_i−) for all i.

Substituting this into (5.7) and (5.8) we get the following theorem.

Theorem 5.5 (RBSDE formulation)
1. Suppose ξ̂ ∈ A_H is optimal for (5.1)-(5.2). Let p̂(t) be the solution of the corresponding RBSDE (5.12), i.e.

    p̂(t) = E[∫_t^T dξ̂(s) | H_t]; t ∈ [0, T].

Then

    E[exp(−ξ̂(t)) p̂(t) + Σ_{i=1}^n p(r_i−) g(r_i) 1_{(t,T]}(r_i) − exp(−ξ̂(t)) k(t) | H_t] ≥ 0 for all t ∈ [0, T],

and

    E[exp(−ξ̂(t)) p̂(t) + Σ_{i=1}^n p(r_i−) g(r_i) 1_{(t,T]}(r_i) − exp(−ξ̂(t)) k(t) | H_t] dξ̂(t) = 0 for all t ∈ [0, T].

Thus (p̂(t), ξ̂(t)) solves the RBSDE (5.10)-(5.14) up to the time

    T∞ := inf{t > 0; ξ̂(t) = ∞} ≤ T.

2. Conversely, suppose (p̂(t), ξ̂(t)) is the solution of the RBSDE (5.10)-(5.14) up to the time T∞. Then

    D(ξ̂, ζ) ≤ 0 for all ζ ∈ V(ξ̂).

References

[GS] Gyöngy, I., & Šiška, D. (2008). On randomized stopping. Bernoulli, 352-361.

[HØ] Hu, Y., & Øksendal, B. (2008). Optimal stopping with advanced information flow: selected examples. Banach Center Publications, 1(83), 107-116.

[K] Krylov, N. V. (1980). Controlled Diffusion Processes. Springer Science & Business Media.

[ØS2] Øksendal, B., & Sulem, A. (2012). Singular stochastic control and optimal stopping with partial information of Itô-Lévy processes. SIAM Journal on Control and Optimization, 50(4), 2254-2287.

[ØZ] Øksendal, B., & Zhang, T. (2012). Backward stochastic differential equations with respect to general filtrations and applications to insider finance. Communications on Stochastic Analysis, 6(4), 13.

[P] Protter, P. (2003). Stochastic Integration and Differential Equations (2nd ed.). Springer.

[W] Wang, B. (2004). Singular control of stochastic linear systems with recursive utility. Systems & Control Letters, 51(2), 105-122.
