AN OPTIMAL ORDER REGULARIZATION METHOD WHICH DOES NOT USE ADDITIONAL SMOOTHNESS ASSUMPTIONS

MARKUS HEGLAND

Abstract. This paper defines an optimal method to reconstruct the solutions of operator equations of the first kind. Only the case of compact operators is considered. The method is in principle a discrepancy method. It does not require any additional knowledge about the solution and is optimal for all standard smoothness assumptions. In order to analyze the properties of the new regularization method, variable Hilbert scales are introduced and several well-known results for Hilbert scales are generalized. Convergence theorems for classes of optimal and suboptimal methods are derived from a generalization of the interpolation inequality.

Key Words. Ill-posed problems, Fredholm integral equations of the first kind, optimal reconstruction, regularization, discrepancy principle, Hilbert scales.

AMS(MOS) subject classification. 65R10, 65J10, 46E35, 45B05

1. Introduction. We will study new methods to reconstruct the solution $x$ of an operator equation

(1)  $Ax = y$

from inexact information about the right-hand side $y$. Here $A \in L(X, Y)$ is a (given) linear operator between the Hilbert spaces $X$ and $Y$. We assume that we know $y_\delta \in Y$ and $\delta \in \mathbb{R}$ such that $\|y - y_\delta\| \le \delta$. The exact right-hand side $y$ is not known. We further assume that the singular values and vectors of $A$ are known. In practice often only approximations of the singular values and vectors are known. It is planned to discuss the consequences of this incomplete knowledge in a future paper. If $\mathrm{range}(A)$ is not closed this problem is ill-posed, and regularization methods are used to compute approximate solutions $x_\alpha$ of Equation (1). They are defined by

(2)  $x_\alpha = R_\alpha(A^*A)\, A^* y_\delta,$

where $A^*$ is the adjoint of $A$ and the operators $R_\alpha(A^*A)$ are continuous approximations of the inverse of $A^*A$ if the generating functions $R_\alpha(t)$ are bounded on the spectrum $\sigma(A^*A)$. In this general setting, the theory is developed in [12, 2, 32, 33, 5]; see also [14]. We study Tikhonov regularization, which defines approximate solutions by

(3)  $x_{\alpha,\delta} := (A^*A + \alpha\,\theta(\|A\|^2 (A^*A)^+))^+ A^* y_\delta.$

For any linear operator $B$ we denote by $B^+$ its Moore-Penrose inverse. The generating function of Tikhonov regularization is $R_\alpha(t) = (t + \alpha\,\theta(\|A\|^2/t))^{-1}$.

Centre for Information Science Research, The Australian National University, Canberra, ACT 2601. [email protected]. Appeared in SIAM J. Numer. Anal., 29 (5), pp. 1446-1461.

The regularized solution $x_{\alpha,\delta}$ may equally be defined as the minimizer of the Tikhonov functional:

$\|Ax_{\alpha,\delta} - y_\delta\|^2 + \alpha\,\Omega(x_{\alpha,\delta}) = \inf\{\|Ax - y_\delta\|^2 + \alpha\,\Omega(x) \mid x \in D(\Omega)\}.$

Tikhonov [31, 30] suggested that the regularizing functional $\Omega(x)$ should be chosen such that the sets $\Omega^{-1}[0, r]$ are compact, so that the corresponding $R_\alpha(A^*A)$ are compact for $\alpha > 0$. For the solution of Fredholm integral equations of the first kind he suggests using squared norms of derivatives. This was also proposed by Phillips in [26]. Our choice for the regularizing functional is

$\Omega(x) = (x,\, \theta(\|A\|^2 (A^*A)^+)\, x).$

An amazing fact is that what is usually called Tikhonov regularization is the case $\theta(t) = 1$, which does not have the property that the $\Omega^{-1}[0, r]$ are compact for $r > 0$. The theory for this case is developed in [14]. We will use the term Tikhonov regularization for the slightly more general case as defined in Equation (3). Alternative regularization methods to solve $Ax = y$ have been constructed by truncating the eigenvalue expansion of $A^*A$, by polynomial approximation (iterative methods) of $(A^*A)^+$, or by projection onto finite-dimensional spaces. These methods are discussed and further literature is cited in [14, 18]. It is well known that the form of $R_\alpha$ (in our case of $\theta$) determines the best possible convergence rates. These depend also on the smoothness properties of the exact solutions, which are usually expressed as $x \in \mathrm{range}((A^*A)^{\nu/2})$. (In [15, 18] these spaces are provided with a norm.) It is essential that the regularization parameter is chosen in an optimal way to get optimal convergence rates. Best possible convergence rates for general methods are discussed in [16, 29, 34, 15]. Optimal strategies to choose the regularization parameter for general regularization methods are described in [32, 5]. For Tikhonov regularization with $\theta(t) = 1$ the problem of the optimal choice of $\alpha$ has been studied extensively [27, 28, 6, 7, 8, 3, 9], and it is well known that the best convergence rate obtained for $x \in \mathrm{range}((A^*A)^{\nu/2})$ is $O(\delta^{\nu/(\nu+1)})$ if $\nu \le 2$ and $O(\delta^{2/3})$ otherwise. If the exact solution is smoother ($\nu > 2$), iterated Tikhonov regularization may be used to get the optimal orders $O(\delta^{\nu/(\nu+1)})$ [4, 9, 11]. Another possibility uses Hilbert scales [17]. In our context this means $\theta(t) = t^s$. If an upper bound for the smoothness parameter $\nu$ is known it is possible to get optimal rates, as shown in [22, 23, 7, 25]. Furthermore, an order-optimal method which does not need any further information is described in [25, 24].

We will choose $\theta$ such that for any $\nu > 0$ and $x \in \mathrm{range}((A^*A)^{\nu/2})$ we get optimal convergence rates. This is achieved by

(4)  $\theta(t) = t^{\mu(t)}, \quad t \ge 1,$

where $\mu(t)$ is monotonically increasing. For practical computations it is important that $\mu$ is only slowly growing. In the following we always assume that $A$ is injective and compact and that $\mathrm{range}(A)$ is dense in $Y$. We choose $\mu$ such that

(5)  $\mu((\sigma_1/\sigma_i)^2) = \log(i), \quad i \in \mathbb{N},$

where $\{\sigma_i\}$ is the non-increasing sequence of singular values of $A$. Between these knots we assume $\mu$ to be interpolated in such a way that it is monotonically increasing.

Now we present some methods to choose the regularization parameter $\alpha$. If $\alpha$ is too large, $x_{\alpha,\delta}$ may not be a good approximation of the exact solution $x$. Regularization smoothes the error term, but it also smoothes the exact solution and so introduces a bias. Balancing this bias against the disastrous effect of the error (the Moore-Penrose inverse $A^+$ is unbounded) is the art of choosing the regularization parameter as a function of the given error level $\delta$. For many methods explicit (a priori) formulas $\alpha(\delta)$ are known [31, 30, 14, 22, 23, 32]. However, such formulas always need some knowledge about the solution which may not be accessible. Therefore implicit (a posteriori) formulas are much more common. A first a posteriori method is described in the paper by Phillips [26], which predates even Tikhonov's papers. The method was reinvented by Morozov [21] and later by Marti [19] for the solution in finite-dimensional spaces. It is generally known as the discrepancy principle and computes $\alpha$ as a solution of

(6)  $\|Ax_{\alpha,\delta} - y_\delta\| = \delta.$

It is equally well known that in the case $\theta = 1$ this method at best leads to convergence order $O(\sqrt{\delta})$ [14]. It is discussed in various contexts in [33, 19, 15, 13]. We will use a variant of this method which leads to optimal order convergence for our $\theta$:

(7)  $\|Ax_{\alpha,\delta} - y_\delta\| = 2\delta.$

This method was suggested in [32, 18]. For Hilbert scales ($\theta(t) = t^s$) this method was used by Neubauer in [25]. To get optimal order for any $\nu$ and $x \in \mathrm{range}((A^*A)^{\nu/2})$ he combines this regularization with a finite-dimensional approximation. An alternative strategy was proposed by Engl [3, 4, 7, 8] for $\theta = 1$:

(8)  $\|A^*(Ax_{\alpha,\delta} - y_\delta)\| = \delta^q \alpha^{-p}.$

Here $p$ and $q$ have to be chosen depending on the smoothness parameter to get convergence of optimal order $O(\delta^{2/3})$. An optimal order choice for the general class of regularization methods is described in [5]. Here $\alpha$ is chosen as the solution of

(9)  $g'(\alpha)^{-1}\,\big(dR_\alpha(AA^*)/d\alpha\,[I - AA^* R_\alpha(AA^*)]\,y_\delta,\; y_\delta\big) = C\delta^2,$

where

$g(\alpha) = \sup\{|R_\alpha(t)| : t \ge 0\}.$

The basis of all analysis presented here is the singular value decomposition of the compact operator $A$:

(10)  $A = \sum_{i=1}^{\infty} \sigma_i\, u_i\, v_i^*.$

The sequence $\{\sigma_i\}_{i=1}^{\infty}$ is a monotonically decreasing null sequence. As $A$ is assumed injective, the vectors $v_i$ form a complete orthonormal system in $X$, and as $\mathrm{range}(A)$ is dense in $Y$, the vectors $u_i$ also form a complete orthonormal system in $Y$. Thus

$\|x\|^2 = \sum_{i=1}^{\infty} (x, v_i)^2, \quad x \in X,$

and

$\|y\|^2 = \sum_{i=1}^{\infty} (y, u_i)^2, \quad y \in Y.$
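As a quick numerical illustration (not part of the original paper; the NumPy setup and names are ours), the expansion (10) and the two Parseval identities can be checked on a small random matrix, using a full SVD so that both orthonormal systems are complete:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Full SVD: A = U diag(s) V^T with complete orthonormal systems
# {u_i} in Y = R^6 and {v_i} in X = R^4.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

x = rng.standard_normal(4)
y = rng.standard_normal(6)

# Singular value expansion (10): A x = sum_i sigma_i (x, v_i) u_i.
Ax = U[:, :4] @ (s * (Vt @ x))
assert np.allclose(Ax, A @ x)

# Parseval: ||x||^2 = sum_i (x, v_i)^2 and ||y||^2 = sum_i (y, u_i)^2.
assert np.isclose(np.sum((Vt @ x) ** 2), x @ x)
assert np.isclose(np.sum((U.T @ y) ** 2), y @ y)
```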

In Section 2 we will describe an implementation of our method using the singular value decomposition. The third section introduces variable Hilbert scales, which are a generalization of the Hilbert scales with respect to $A$ discussed in [15, 18]. For these variable scales optimal convergence rates have been computed by [16]. We generalize the interpolation inequality of Hilbert scales to get convergence rates of our methods. A general convergence theorem, which is a variation of the general principle of numerical analysis "consistency + stability $\Rightarrow$ convergence", is proved in Section 4. This theory extends the original Hilbert scales theory in two directions: First, for extremely smooth solutions it gives convergence rates which are $o(\delta^s)$ for all $s > 0$. Second, it also goes in the opposite direction: for extremely unsmooth solutions convergence rates are found which are slower than any power $O(\delta^s)$. Results of this kind were also obtained by Franklin [10]. Several convergence theorems for optimal and suboptimal methods in the context of variable Hilbert scales are proved. In the last section the optimal method is compared with the original discrepancy method for some examples.

2. The Optimal Discrepancy Method. In this section we give formulas using the singular value decomposition of $A$ to compute $x_{\alpha,\delta}$ and $\alpha$. If the function $1/\theta$ is bounded, the inverse in Equation (3), which may be written as

(11)  $(A^*A + \alpha\,\theta(\|A\|^2 (A^*A)^{-1}))^{-1} = \sum_{i=1}^{\infty} \big(\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)\big)^{-1}\, v_i\, v_i^*,$

is a continuous mapping on $X$ for $\alpha > 0$. This follows from the principle of uniform boundedness [1]. Thus $x_{\alpha,\delta}$ exists for any $\alpha > 0$, $\delta > 0$, $y_\delta \in Y$. The regularized solution is computed as:

(12)  $x_{\alpha,\delta} = \sum_{i=1}^{\infty} \frac{\sigma_i}{\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)}\,(y_\delta, u_i)\, v_i.$
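In matrix terms, Equation (12) is a filtered SVD solution. The following sketch (the function name and defaults are ours, not from the paper) implements it for a finite-dimensional $A$; the keyword `theta` defaults to the classical choice $\theta \equiv 1$:

```python
import numpy as np

def tikhonov_svd(U, s, Vt, y_delta, alpha, theta=lambda t: np.ones_like(t)):
    """Evaluate Eq. (12): x_{alpha,delta} =
    sum_i sigma_i/(sigma_i^2 + alpha*theta((sigma_1/sigma_i)^2)) (y_delta,u_i) v_i."""
    t = (s[0] / s) ** 2                      # spectrum of ||A||^2 (A*A)^{-1}
    filt = s / (s ** 2 + alpha * theta(t))   # spectral filter factors
    return Vt.T @ (filt * (U.T @ y_delta))

# Usage: for exact data and alpha -> 0 the least-squares solution is recovered.
A = np.array([[2.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_true = np.array([1.0, -2.0])
x_rec = tikhonov_svd(U, s, Vt, A @ x_true, 1e-12)
```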

Using the singular value expansion we get the following expression for the discrepancy function:

(13)  $d(\alpha) = \|Ax_{\alpha,\delta} - y_\delta\|^2 = \sum_{i=1}^{\infty} \left(\frac{\alpha\,\theta((\sigma_1/\sigma_i)^2)}{\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)}\right)^2 (y_\delta, u_i)^2 = \big\|\big(\alpha^{-1}\,AA^*\,\theta(\|A\|^2(AA^*)^{-1})^{-1} + I\big)^{-1} y_\delta\big\|^2.$

The basis for the solvability of $d(\alpha) = 4\delta^2$ is given by the following proposition, which in its form is quite standard [14]. The proof given there may easily be adapted to our case and is therefore omitted.

Proposition 2.1. Suppose that $\theta(t) \ge C > 0$ is defined on $\sigma(\|A\|^2 (A^*A)^{-1})$. Then the discrepancy function $d(\alpha)$ is a monotonically increasing continuous mapping from $(0, \infty)$ onto $(0, \|y_\delta\|^2)$.

Thus we can specify $x_{\alpha,\delta}$ as follows:

(14)  $x_{\alpha,\delta} = (A^*A + \alpha\,\theta(\|A\|^2(A^*A)^{-1}))^{-1} A^* y_\delta$ with $\alpha = d^{-1}(4\delta^2)$ if $\|y_\delta\| > 2\delta$, and $x_{\alpha,\delta} = 0$ if $\|y_\delta\| \le 2\delta$.

(This follows from the previous proposition and the intermediate value theorem.) The condition $\|y_\delta\| > 2\delta$ always holds asymptotically (as $\delta \to 0$) if $y \ne 0$, so we assume it is valid in the following. To compute $d^{-1}(4\delta^2)$ we use Newton's method to find the zero of

(15)  $f(\beta) = 1/\sqrt{d(1/\beta)} - 1/(2\delta).$
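Proposition 2.1 guarantees that $d$ is continuous and monotonically increasing, so $d(\alpha) = 4\delta^2$ can also be solved by a simple bracketing search. The sketch below (all names are ours) uses log-scale bisection instead of the Newton iteration on (15); it trades the paper's fast convergence for an implementation that is trivially robust:

```python
import numpy as np

def solve_discrepancy(d, delta, alpha_lo=1e-16, alpha_hi=1e16, iters=200):
    """Return alpha with d(alpha) ~= 4*delta**2, assuming the target value
    lies in the range of the monotonically increasing function d."""
    target = 4.0 * delta ** 2
    lo, hi = alpha_lo, alpha_hi
    for _ in range(iters):
        mid = np.sqrt(lo * hi)        # bisect on a logarithmic scale
        if d(mid) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Hypothetical discrete problem: d built from Eq. (13) with theta = 1.
s = np.array([1.0, 0.5, 0.25, 0.125])
yc = np.array([1.0, 0.5, 0.3, 0.1])   # coefficients (y_delta, u_i)

def d(alpha):
    return np.sum((alpha / (s ** 2 + alpha)) ** 2 * yc ** 2)

alpha = solve_discrepancy(d, delta=0.05)
```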

Obviously this function is also continuous and monotone. The global convergence of Newton's method in this case is guaranteed, as may be seen by an argument similar to, e.g., the one in [4]. Practical tests showed convergence in about six steps for a wide range of problems and functions $\theta$, with starting value $\beta = 0$. If we choose $\theta$ by Equations (4) and (5), we get the following formula for the new method:

(16)  $x_{\alpha,\delta} = \sum_{i=1}^{\infty} \frac{\sigma_i}{\sigma_i^2 + \alpha\,(\sigma_1/\sigma_i)^{2\log(i)}}\,(u_i, y_\delta)\, v_i.$

The following convergence theorem, which states the optimality of the new regularization method under all standard smoothness assumptions, will be proved in Section 4.

Theorem 2.2. Let $x_{\alpha,\delta}$ be defined as in (16) with $\alpha = d^{-1}(4\delta^2)$. Then for any $\nu > 0$ and $x \in \mathrm{range}((A^*A)^{\nu/2})$ such that $\|Ax - y_\delta\| \le \delta$ we get optimal convergence:

(17)  $\|x_{\alpha,\delta} - x\| = O(\delta^{\nu/(\nu+1)}).$

Note that $x_{\alpha,\delta}$ is computed without using any knowledge of $\nu$.
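The construction (4)-(5) of $\theta$ is easy to realize numerically. The sketch below (names are ours; piecewise-linear interpolation of $\mu$ in $\log t$ is one admissible monotone choice between the knots) builds $\mu$ and $\theta$ for a hypothetical singular value sequence:

```python
import numpy as np

# Hypothetical singular values of a discretized problem (our choice).
n = 40
idx = np.arange(1, n + 1)
sigma = idx ** -2.0

# Knots t_i = (sigma_1/sigma_i)^2 with mu(t_i) = log(i), Eq. (5).
t_knots = (sigma[0] / sigma) ** 2
mu_vals = np.log(idx)

def mu(t):
    # One admissible monotone interpolation: piecewise linear in log t.
    return np.interp(np.log(t), np.log(t_knots), mu_vals)

def theta(t):
    # Eq. (4): theta(t) = t**mu(t), t >= 1.
    t = np.asarray(t, dtype=float)
    return t ** mu(t)
```

Since $\mu$ is nonnegative and increasing in $\log t$, the product $\mu(t)\log t$ is increasing for $t \ge 1$, so this $\theta$ is monotonically increasing as required.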

3. Variable Scales. In [17] Hilbert scales are defined as the completion of the space $\bigcap_n D(L^n)$ with respect to the norms $\|x\|_s = \|L^s x\|$ for some densely defined, unbounded, strictly positive operator $L$. In the case of ill-posed problems, mainly differential operators have been used for computational purposes [23, 25]. In [18, 15] Hilbert scales are used for convergence analysis with $L = (A^*A)^{-1/2}$. In this section we generalize these concepts and use $L = \|A\|^2 (A^*A)^{-1}$ as norm generating operator. First we define functions of the symmetric compact operator $A^*A$ as in the last section. The corresponding operators may not be defined for all $x \in X$. However, any function which is defined on the spectrum $\sigma(A^*A)$ determines an operator on the dense subset

(18)  $D = \Big\{ \sum_{i=1}^{n} \xi_i v_i \;\Big|\; \xi_i \in \mathbb{R},\ n \in \mathbb{N} \Big\} \subset X.$

Consequently any function $\psi : [1, \infty) \to (0, \infty)$ establishes a scalar product on $D$ by

(19)  $(x_1, x_2)_\psi = (\psi(\|A\|^2 (A^*A)^{-1})\,x_1,\, x_2) = \sum_{i=1}^{\infty} \psi((\sigma_1/\sigma_i)^2)\,(x_1, v_i)(v_i, x_2), \quad x_1, x_2 \in D.$

This makes $D$ a pre-Hilbert space, and the completion of this space will be denoted by $X_\psi$. We call the set $\{X_\psi \mid \psi : [1, \infty) \to (0, \infty)\}$ a variable Hilbert scale. An ordinary Hilbert scale is the subset of a variable Hilbert scale obtained by restricting $\psi$ to power functions. We call the $\psi$ index functions (in the case of ordinary Hilbert scales the index of the space $X_\psi$ in the scale is $s = \log\psi(t)/\log t$). The indices in ordinary Hilbert scales are just the real numbers, which form an ordered additive group. The index functions of variable Hilbert scales constitute an ordered group with respect to multiplication. Furthermore, the set of index functions is closed with respect to addition and composition. It also has a lattice structure defined by

(20)  $(\psi \vee \varphi)(t) = \max\{\psi(t), \varphi(t)\},$
(21)  $(\psi \wedge \varphi)(t) = \min\{\psi(t), \varphi(t)\}.$

An important point about ordinary Hilbert scales is the existence of embeddings. This is also valid for variable Hilbert scales. Obviously there always is a mapping $E : X_\psi \to X_\varphi$ with $X_\psi \supset D(E) \supset D$ such that $E$ restricted to $D$ is the identity. If this mapping is continuously extendible to the whole space $X_\psi$, we call this extension an embedding.

Definition 3.1. An embedding $E : X_\psi \to X_\varphi$ is a continuous linear mapping such that $Ex = x$, $x \in D$.

In the next theorem necessary and sufficient conditions are given for an embedding to exist. First we characterize $E$ as a (formal) limit of the finite rank operators

(22)  $E_n x = \sum_{i=1}^{n} (x, v_i)\, v_i \in D, \quad x \in X_\psi.$

Theorem 3.1. There exists an embedding $E : X_\psi \to X_\varphi$ if and only if

(23)  $\sup_i \frac{\varphi((\sigma_1/\sigma_i)^2)}{\psi((\sigma_1/\sigma_i)^2)} = C^2 < \infty$

for some $C \in \mathbb{R}$.

Proof. If an embedding exists we get from the definition of the scalar product and the embedding:

$\|E\|^2 \ge \frac{(v_i, v_i)_\varphi}{(v_i, v_i)_\psi} = \frac{\varphi((\sigma_1/\sigma_i)^2)}{\psi((\sigma_1/\sigma_i)^2)}.$

This proves necessity. For sufficiency we first remark that $\|E_n x\|_\varphi \le C\,\|x\|_\psi$. Thus the $E_n$ are uniformly bounded. Furthermore, $E_n x$ converges to $x$ for every $x \in D$. Thus an embedding exists by the principle of uniform boundedness (see, e.g., Theorem II.3.6 in [1]).

In the case of ordinary Hilbert scales and compact $A$ the embeddings are compact. The next theorem gives necessary and sufficient conditions for the compactness of the embedding in the case of variable Hilbert scales.

Theorem 3.2. There exists a compact embedding $E : X_\psi \to X_\varphi$ if and only if

(24)  $\lim_{i \to \infty} \frac{\varphi((\sigma_1/\sigma_i)^2)}{\psi((\sigma_1/\sigma_i)^2)} = 0.$

Proof. Assume a compact embedding exists. Then the $z_i = v_i/\sqrt{\psi((\sigma_1/\sigma_i)^2)}$ form a Hilbert base. Thus they converge weakly to zero. As $E$ is compact, we get $\lim_i \|E z_i\|_\varphi = 0$, which is just the condition we wish to prove. For sufficiency we first use the last theorem, which guarantees the existence of $E$. Furthermore, from the definitions of the norms:

$\|(E_n - E)x\|_\varphi^2 \le \sup_{i > n} \frac{\varphi((\sigma_1/\sigma_i)^2)}{\psi((\sigma_1/\sigma_i)^2)}\; \|x\|_\psi^2.$

Thus $E_n$ converges in the operator norm to $E$. As the set of compact operators is closed, $E$ is also compact.

We conclude this section with some important inequalities related to variable Hilbert scales. For ordinary Hilbert scales the following inequality holds:

(25)  $\|x\|_{\lambda\sigma + (1-\lambda)\tau} \le \|x\|_\sigma^{\lambda}\, \|x\|_\tau^{1-\lambda}.$

This means that the norm is a logarithmically convex function of the indices. This inequality is essential in the convergence theory and is often called the interpolation inequality. The next theorem generalizes this inequality to variable Hilbert scales. First we define the $\psi$-norm in $X_\varphi$:

(26)  $\|x\|_\psi = \|Ex\|_\psi, \quad x \in X_\varphi,$

if the embedding $E : X_\varphi \to X_\psi$ exists.

Theorem 3.3. Let $\varphi$, $\psi$ and $\chi$ be three continuous index functions such that $\psi/\varphi$ and $\chi/\varphi$ are strictly monotonically increasing and $(\psi/\varphi) \circ (\chi/\varphi)^{-1}$ is convex. Then

(27)  $\left(\frac{\psi}{\varphi}\right) \circ \left(\frac{\chi}{\varphi}\right)^{-1}\!\big(\|x\|_\chi^2/\|x\|_\varphi^2\big) \;\le\; \|x\|_\psi^2/\|x\|_\varphi^2, \quad x \in X_{\varphi \vee \chi}.$

Equality holds for $x = v_i$, $i \in \mathbb{N}$.

Proof. Equality for $x = v_i$ follows directly from the definition of the norms. If $x \in D$, the formula

$\lambda_i = \frac{\varphi((\sigma_1/\sigma_i)^2)\,(x, v_i)^2}{\sum_{k=1}^{\infty} \varphi((\sigma_1/\sigma_k)^2)\,(x, v_k)^2}$

defines a nonnegative sequence with only finitely many nonzero elements which sums to one. As all functions are continuous, by the intermediate value theorem

$\frac{\|x\|_\chi^2}{\|x\|_\varphi^2} = \sum_{i=1}^{\infty} \lambda_i\, \frac{\chi((\sigma_1/\sigma_i)^2)}{\varphi((\sigma_1/\sigma_i)^2)}$

is in the range of $\chi/\varphi$. Thus from the convexity of $f = (\psi/\varphi) \circ (\chi/\varphi)^{-1}$ we get

$f\big(\|x\|_\chi^2/\|x\|_\varphi^2\big) \le \|x\|_\psi^2/\|x\|_\varphi^2,$

and by the monotonicity of $\chi/\varphi$ we get the inequality. The inequality for $x \in X_{\varphi \vee \chi}$ follows by continuity.

The interpolation inequality (25) of the ordinary Hilbert scales is obtained from the last theorem by setting $\varphi(t) = t^{\tau}$, $\chi(t) = t^{\lambda\sigma + (1-\lambda)\tau}$ and $\psi(t) = t^{\sigma}$. Finally we will also need a variant of Schwarz's inequality:

Theorem 3.4. For any index functions $\psi$ and $\varphi$ the following holds:

(28)  $(x, y)_\psi \le \|x\|_{\psi^2/\varphi}\, \|y\|_\varphi, \quad x \in X_{\psi \vee (\psi^2/\varphi)},\ y \in X_{\psi \vee \varphi}.$

Proof. The inequality is obtained from Schwarz's inequality in $\mathbb{R}^n$ if $x \in D$. A continuity argument proves the theorem.

For the convergence theory of suboptimal methods we need the following consequence of the last theorem:

Corollary 3.5. For any two index functions $\psi$ and $\varphi$ and any $x \in X_{\psi \vee \varphi}$, $e \in X_{\varphi \vee (\varphi^2/\psi)}$ with $\|x + e\|_\varphi \le \|x\|_\varphi$ the following inequality holds:

(29)  $\|e\|_\varphi^2 \le 2\,\|e\|_{\varphi^2/\psi}\, \|x\|_\psi.$

Proof. From Schwarz's inequality we get

$\|e\|_\varphi^2 = \|x + e\|_\varphi^2 - \|x\|_\varphi^2 - 2(e, x)_\varphi \le 2\,\|e\|_{\varphi^2/\psi}\, \|x\|_\psi.$
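Theorems 3.1 and 3.2 can be illustrated numerically. For a hypothetical pair of power-type index functions (our choice), the embedding condition is the boundedness of $\varphi/\psi$ along the points $(\sigma_1/\sigma_i)^2$, and compactness corresponds to this ratio tending to zero:

```python
import numpy as np

i = np.arange(1, 201)
sigma = i ** -2.0                     # hypothetical singular values
t = (sigma[0] / sigma) ** 2           # points where index functions are evaluated

psi = lambda u: u                     # the smoother space X_psi
phi = lambda u: np.sqrt(u)            # the coarser space X_phi

ratio = phi(t) / psi(t)               # Theorem 3.1: embedding iff sup < infinity;
                                      # Theorem 3.2: compact iff ratio -> 0
```

Here `ratio` equals $t^{-1/2}$, so its supremum is attained at $t = 1$ and it decays to zero, i.e., the embedding $X_\psi \to X_\varphi$ exists and is compact.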

4. Convergence. The worst case error of the problem of reconstructing the solution of $Ax = y$ from $y_\delta$ is equal to the infimum of the worst case errors taken over all possible reconstruction algorithms. It is well known [20, 18] to be

(30)  $e_\psi(r, \delta) = \sup\{\|x\| \mid \|x\|_\psi \le r,\ \|Ax\| \le \delta\}.$

Let $t_\psi(t) = t\,\psi(t)$, $r_\delta = \delta/\|A\|$, and $r = \|x\|_\psi$. Then we get the following approximation for $e_\psi(r, \delta)$ from the interpolation inequality if $t_\psi$ is strictly monotonically increasing and convex:

(31)  $e_\psi(r, \delta) \le r_\delta\, \sqrt{t_\psi^{-1}\big(r^2/r_\delta^2\big)}.$

As in [18], we define a family of approximations $\{x_\delta\}$ to converge with optimal order to the exact solution $x$ if

(32)  $\|x_\delta - x\| = O\big(r_\delta\, \sqrt{t_\psi^{-1}(r^2/r_\delta^2)}\big).$

But the worst case error is also known exactly for this problem.

Theorem 4.1 (Ivanov and Korolyuk [16]). Let

(33)  $\omega^2(\delta, R; \{\kappa_i\}, \{\lambda_i\}) = \max_{\{\xi_i\}} \Big\{ \sum_{i=1}^{\infty} \kappa_i \xi_i^2 \;\Big|\; \sum_{i=1}^{\infty} \xi_i^2 \le R^2,\ \sum_{i=1}^{\infty} \lambda_i \xi_i^2 \le \delta^2 \Big\}.$

Then $\omega^2(\delta, R)/R^2 = s(\delta^2/R^2)$ is a linear spline in $\delta^2/R^2$ with $s(\lambda_i) = \kappa_i$, $i \in \mathbb{N}$.

By scaling we get:

(34)  $e_\psi^2(r, \delta) = \omega^2\big(r_\delta, r;\ \{1/\psi((\sigma_1/\sigma_i)^2)\},\ \{1/t_\psi((\sigma_1/\sigma_i)^2)\}\big).$

The knots of the spline are at $r_\delta^2/r^2 = 1/t_\psi((\sigma_1/\sigma_i)^2)$, with corresponding function values $s(r_\delta^2/r^2) = (\sigma_1/\sigma_i)^2 / t_\psi((\sigma_1/\sigma_i)^2)$. From this we get at the knots $e_\psi(r, \delta) = r_\delta \sqrt{t_\psi^{-1}(r^2/r_\delta^2)}$. This is just the estimate we got from the interpolation inequality, and it

confirms our definition of optimal convergence order.

Now we call a family of approximations $x_\delta$ consistent if there exists a $c_c \ge 0$ such that for all $\delta > 0$ and $y \in \mathrm{range}(A)$, $y_\delta \in Y$ with $\|y - y_\delta\| \le \delta$ the following holds:

(35)  $\|Ax_\delta - y_\delta\| \le c_c\,\delta.$

Furthermore, we call $x_\delta$ a $\psi$-stable family of approximations if there exists a $c_s \ge 0$ such that for all $x \in X_\psi$:

(36)  $\|x_\delta\|_\psi \le c_s\, \|x\|_\psi.$

Then we have the following convergence theorem for general reconstruction methods:

Theorem 4.2. Let $t_\psi$ be strictly monotonically increasing and convex, and let $x_\delta$ be consistent and $\psi$-stable. Then $x_\delta$ converges with optimal order for any $x \in X_\psi$, and

(37)  $\|x_\delta - x\| \le (c_c \vee c_s + 1)\,\frac{\delta}{\|A\|}\, \sqrt{t_\psi^{-1}\big(\|x\|_\psi^2\,\|A\|^2/\delta^2\big)}.$

Proof. With the triangle inequality we get from the stability condition

$\|x_\delta - x\|_\psi \le \|x_\delta\|_\psi + \|x\|_\psi \le (c_c \vee c_s + 1)\,\|x\|_\psi,$

and from the consistency

$\|A(x_\delta - x)\| \le \|Ax_\delta - y_\delta\| + \|y_\delta - y\| \le (c_c \vee c_s + 1)\,\delta.$

The theorem is then a consequence of the interpolation inequality.

Now we apply this theorem to our discrepancy method. From Equation (7) we get consistency with $c_c = 3$. It remains to prove stability. To this end we first prove a lemma with a lower bound for the regularization parameter. It is similar to Lemma 2.5 of [5].

Lemma 4.3. Let $\|y_\delta\| \ge 2\delta$ and let $\alpha$ be defined as in Equation (7). Then the following (implicit) bound holds for the regularization parameter $\alpha$:

(38)  $\delta \le \sup_i \left\{ \frac{\alpha\,\theta((\sigma_1/\sigma_i)^2)\,\sigma_i}{\big(\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)\big)\sqrt{\psi((\sigma_1/\sigma_i)^2)}} \right\} \|x\|_\psi.$

Proof. First let

(39)  $\tilde{x}_{\alpha} = (A^*A + \alpha\,\theta(\|A\|^2(A^*A)^{-1}))^{-1} A^* y$

be the approximation of $x$ obtained by applying the regularization to the exact right-hand side $y$, with the parameter $\alpha$ computed from $\delta$ and $y_\delta$. Then we get

$\|A\tilde{x}_{\alpha} - y\| \ge \|Ax_{\alpha,\delta} - y_\delta\| - \|A(x_{\alpha,\delta} - \tilde{x}_{\alpha}) - (y_\delta - y)\| \ge 2\delta - \Big\| \sum_{i=1}^{\infty} \frac{\alpha\,\theta((\sigma_1/\sigma_i)^2)}{\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)}\,(y_\delta - y, u_i)\,u_i \Big\| \ge \delta.$

Now we proceed to get an upper bound for $\|A\tilde{x}_{\alpha} - y\|$:

$\|A\tilde{x}_{\alpha} - y\| = \left( \sum_{i=1}^{\infty} \left( \frac{\alpha\,\theta((\sigma_1/\sigma_i)^2)\,\sigma_i}{\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)} \right)^2 (x, v_i)^2 \right)^{1/2} \le \sup_i \left\{ \frac{\alpha\,\theta((\sigma_1/\sigma_i)^2)\,\sigma_i}{\big(\sigma_i^2 + \alpha\,\theta((\sigma_1/\sigma_i)^2)\big)\sqrt{\psi((\sigma_1/\sigma_i)^2)}} \right\} \|x\|_\psi.$

2

Combining the two bounds completes the proof. The next lemma gives a simple but useful inequality for sequences which are nonincreasing "at in nity": Lemma 4.4. Let fi g and fi g be two positive sequences which are nonincreasing for all i  n for some n 2 N. Then ! ! C; > 0   j i  (40) ii + j j + 10

and

C = maxf max 1= max  ; max 1= max  ; 1g: i n. The proof is completed by application of Theorem 4.6. We conjecture that for a Tikhonov method to be optimal for all  > 0 it is also necessary that Sequence (43) is a null sequence. However, this remains to be proved. Note that no assumption is made about  which has any connection with the value of . Theorem 2.2 is a direct consequence of the last theorem, in this case we have log((1 =i)2) = 1= log(i): (44) log ((1 =i)2 ) Now it may happen that > , i.e. the smoothness of the solution is larger than the possible smoothness from the regularization method. This is called the suboptimal case. The theory for this case was developed in [19] for the original Tikhonov regularization (see also [14]). The case of Hilbert scales is treated in [23, 15]. Our theorems proved so far don't give any convergence rate whatsoever for the case  = 1. For increasing  however we may get convergence rates. The remaining part of this section shows how to improve these estimates. We will prove two theorems on suboptimal convergence, the rst one with still smooth x and the second one with extremely unsmooth x. Let t(t) = t(t). Theorem 4.8. Let  and be two index functions such that t is strictly monotonically increasing. Furthermore t = shall be bounded by C > 0. Then if x ; is de ned as in Equation (14) we get q (45) kx ; ? xk  2r ?t 1(C=(2r )) i


Proof. In the following let $e = x_{\alpha,\delta} - x$. From the interpolation inequality we get

$\|e\| \le 2 r_\delta\, \sqrt{t_\varphi^{-1}\big((\|e\|_\varphi/(2 r_\delta))^2\big)}.$

From the corollary of Schwarz's inequality we get

$\|e\|_\varphi^2 \le 2\,\|e\|_{\varphi^2/\psi}\,\|x\|_\psi,$

and from the embedding theorem we get

$\|e\|_{\varphi^2/\psi} \le C\,\|x\|_\psi.$

The theorem follows if all these inequalities are combined. An application of this theorem for $\varphi = 1$ and $\psi(t) = t$ yields $\|x_{\alpha,\delta} - x\| = O(\delta^{1/2})$, which is well known [14]. Finally we prove the theorem on suboptimal methods for very unsmooth data.

Theorem 4.9. Let $\varphi$ and $\psi$ be two index functions such that $t_\varphi$ and $t_\varphi/\psi$ are strictly monotonically increasing and $\psi \circ (t_\varphi/\psi)^{-1}$ is convex. Then if $x_{\alpha,\delta}$ is defined as in Equation (14) we get

(46)  $\|x_{\alpha,\delta} - x\| \le 2 r_\delta\, \sqrt{t_\varphi^{-1}\!\left(\frac{\|x\|_\psi}{r_\delta}\,\sqrt{(t_\varphi/\psi)\circ t_\varphi^{-1}\big((\|x\|_\psi/r_\delta)^2\big)}\right)}.$

Proof. First we apply the interpolation inequality and Schwarz's inequality as in the preceding theorem. Then we apply the interpolation inequality a second time to get:

$\|e\|_{\varphi^2/\psi}^2 \le 2\, r_\delta^2\,(t_\varphi/\psi)\circ t_\varphi^{-1}\big((\|x\|_\psi/r_\delta)^2\big).$

This completes the proof. An application of this theorem for $\varphi = 1$ and $\psi = t^\lambda$ with $\lambda < 1$ yields the convergence rate $\|x_{\alpha,\delta} - x\| = O(\delta^{\lambda/2})$. In this simple case the convergence rate may be improved to $O(\delta^{\lambda/(\lambda+1)})$ [33]. In general the convergence rates of suboptimal methods are rather complicated to compute. However, the evaluation of formula (46) is no problem if a program for symbolic computation like Maple or Mathematica is used.

5. Examples. In four examples we compare the convergence of our Tikhonov regularization, with $\theta$ defined as in Equations (4) and (5), with the ordinary Tikhonov regularization ($\theta = 1$). The examples are in principle the same as in [25]. We assume for our computations that we know the singular value decomposition of $A$. As we do finite computations, we assume that only a finite number of the $\sigma_i$ are nonzero. In our case we choose


$\sigma_i = 0, \quad i > n = 40.$

The perturbation of the right-hand side $y$ is done such that

$(y_\delta, u_i) = (y, u_i) + \delta\,\mathrm{rand}(i)/\sqrt{n}, \quad i = 1, \ldots, n,$
$(y_\delta, u_i) = (y, u_i) = 0, \quad i > n.$

The sequence $\mathrm{rand}(i)$ is chosen by a random number generator from $[-1, 1]$ with uniform distribution. All computations were done with Matlab on a Sun 3/60. In all the tables, $e_m$ is always the error of the method with $\theta = 1$ (Morozov) and $e_{\log}$ is the error of the method with the optimal $\theta$.

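This experimental setup can be reproduced in a few lines. The following sketch (ours, using NumPy instead of the paper's Matlab, with a bisection search for the discrepancy equation) mirrors Example 5.1 with $\sigma_i = i^{-2}$ and compares the classical choice $\theta = 1$ with $\theta(t) = t^{\log(t)/4}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
idx = np.arange(1, n + 1)
s = idx ** -2.0                         # sigma_i = i^-2 as in Example 5.1
xc = (-1.0) ** idx / idx                # (x, v_i) = (-1)^i / i
yc = s * xc                             # exact data coefficients (y, u_i)
delta = 1e-3 * np.linalg.norm(yc)
yd = yc + delta * rng.uniform(-1.0, 1.0, n) / np.sqrt(n)

def solve(theta):
    """Discrepancy-principle Tikhonov solution, Eqs. (7) and (12)-(14)."""
    th = theta((s[0] / s) ** 2)
    def d(alpha):                        # discrepancy function, Eq. (13)
        return np.sum((alpha * th / (s ** 2 + alpha * th)) ** 2 * yd ** 2)
    lo, hi = 1e-20, 1e20
    for _ in range(200):                 # log-scale bisection for d = 4 delta^2
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if d(mid) < 4 * delta ** 2 else (lo, mid)
    alpha = np.sqrt(lo * hi)
    return s / (s ** 2 + alpha * th) * yd    # coefficients of x_{alpha,delta}

err_morozov = np.linalg.norm(solve(lambda t: np.ones_like(t)) - xc)
err_optimal = np.linalg.norm(solve(lambda t: t ** (np.log(t) / 4.0)) - xc)
```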

Table 1
Results of Example 5.1

  $\delta/\|y\|$   $\|e_m\|/\delta^{1/5}$   $\|e_{\log}\|/\delta^{1/5}$
  $10^{-4}$        0.73                     0.93
  $10^{-3}$        1.01                     1.17
  $10^{-2}$        1.12                     1.25
  $10^{-1}$        1.15                     1.21
  $1$              1.27                     1.27

Table 2
Results of Example 5.2

  $\delta/\|y\|$   $\|e_m\|/\delta^{1/2}$   $\|e_m\|/\delta^{13/17}$   $\|e_{\log}\|/\delta^{13/17}$
  $10^{-5}$        0.13                     2.82                       0.21
  $10^{-4}$        0.12                     1.37                       0.30
  $10^{-3}$        0.10                     0.63                       0.44
  $10^{-2}$        0.20                     0.68                       0.67
  $10^{-1}$        0.58                     1.07                       1.08
  $1$              1.00                     1.00                       1.00

Example 5.1. Let $\sigma_i = i^{-2}$ and $(x, v_i) = (-1)^i/i$, $i = 1, \ldots, n$. Then $x \in \mathrm{range}((A^*A)^{\nu/2})$ with $\nu < 0.25$. The optimal convergence rate for this case is $\delta^{1/5}$. Table 1 shows the errors from the simulation. For this case of very unsmooth data and an operator which gives rise to a very mildly ill-posed problem, both methods do equally well. For the optimal method the regularizing function $\theta$ is here

$\theta(t) = t^{\log(t)/4}.$

Example 5.2. If $\sigma_i = 1/i^2$ and $(x, v_i) = 1/i^7$, $i = 1, \ldots, n$, we get from theory the convergence rates

$\|e_m\| = O(\delta^{1/2}), \qquad \|e_{\log}\| = O(\delta^{13/17}).$

As the data are very smooth, the optimal convergence rates are high. Tikhonov regularization with $\theta = 1$ is suboptimal. The theoretical predictions are confirmed by the experiment, as can be seen in Table 2.

Example 5.3. Here we set $\sigma_i = i^{-4}$ and $(x, v_i) = i^{-3}$. In this example the operator $A$ is much more strongly smoothing, and although the data are smooth compared with the operator of the last example, they are not smooth compared with $A$. In this case the optimal convergence order is slightly lower than $O(\delta^{5/13})$. Both methods show similar performance in the experiments; see Table 3. The function $\theta$ for this case is given by

$\theta(t) = t^{\log(t)/8}.$

Table 3
Results of Example 5.3

  $\delta/\|y\|$   $\|e_m\|/\delta^{5/13}$   $\|e_{\log}\|/\delta^{5/13}$
  $10^{-5}$        0.64                      0.87
  $10^{-4}$        0.66                      0.83
  $10^{-3}$        0.63                      0.72
  $10^{-2}$        0.64                      0.75
  $10^{-1}$        0.53                      0.53
  $1$              1.00                      1.00

Table 4
Results of Example 5.4

  $\delta/\|y\|$   $\|e_m\|/\delta^{1/2}$   $\|e_m\|/\delta$   $\|e_{\log}\|/\delta$
  $10^{-5}$        0.057                    18.0               3.0
  $10^{-4}$        0.055                    5.5                2.7
  $10^{-3}$        0.065                    2.1                1.9
  $10^{-2}$        0.19                     1.9                1.9
  $10^{-1}$        0.63                     2.0                2.0
  $1$              1.00                     1.0                1.0

Example 5.4. This example uses the same singular values as the last example. However, we now have the case of extremely smooth data, namely $x = v_1$. The ordinary discrepancy method can only converge with order $\delta^{1/2}$, but the optimal method converges with a convergence rate very near $\delta$. This is confirmed in Table 4.

Acknowledgment. This research was started at the Seminar für Angewandte Mathematik and completed at the Interdisciplinary Project Center. I would like to thank Professor J. Marti for many helpful suggestions and encouragement, and the head of the Project Center for Supercomputing, PD Dr. M. Gutknecht, for his interest. Thanks also go to Prof. A. Louis and R. Plato and the reviewers for many helpful suggestions.

REFERENCES

[1] N. Dunford and J. T. Schwartz, Linear Operators. Part I: General Theory, Wiley, 1988.
[2] H. W. Engl, Necessary and sufficient conditions for convergence of regularization methods for solving linear operator equations of the first kind, Numer. Funct. Anal. & Optimiz., 3 (1981), pp. 201-222.
[3] H. W. Engl, Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates, J. Opt. Th. Appl., 52 (1987), pp. 209-215.
[4] H. W. Engl, On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems, J. Appr. Th., 49 (1987), pp. 55-63.
[5] H. W. Engl and H. Gfrerer, A posteriori parameter choice for general regularization methods for solving linear ill-posed problems, Appl. Num. Math., 4 (1988), pp. 395-417.
[6] H. W. Engl and A. Neubauer, An improved version of Marti's method for solving ill-posed linear integral equations, Math. Comp., 45 (1985), pp. 405-416.
[7] H. W. Engl and A. Neubauer, Optimal discrepancy principles for the Tikhonov regularization of integral equations of the first kind, in Constructive Methods for the Practical Treatment of Integral Equations, 1985, pp. 120-141.
[8] H. W. Engl and A. Neubauer, Eine Variante der Marti-Methode zur Lösung inkorrekt gestellter linearer Integralgleichungen, die optimale Konvergenzraten liefert, ZAMM, 66 (1986), pp. T406-T408.
[9] H. W. Engl and A. Neubauer, Optimal parameter choice for ordinary and iterated Tikhonov regularization, in Inverse and Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., 1987, pp. 97-125.
[10] J. N. Franklin, On Tikhonov's method for ill-posed problems, Math. Comp., 28 (1974), pp. 889-907.
[11] H. Gfrerer, Parameter choice for Tikhonov regularization of ill-posed problems, in Inverse and Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., 1987, pp. 127-149.
[12] C. W. Groetsch, On a class of regularization methods, Boll. Un. Mat. Ital., Ser. 17-B (1980), pp. 1411-1419.
[13] C. W. Groetsch, Comments on Morozov's principle, in Improperly Posed Problems and Their Numerical Treatment, 1982, pp. 97-104.
[14] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, 1984.
[15] M. Hegland, Numerische Lösung von Fredholmschen Integralgleichungen erster Art bei ungenauen Daten, PhD thesis, ETHZ, 1988.
[16] V. Ivanov and T. Korolyuk, Error estimates for solutions of incorrectly posed linear problems, USSR Comp. Maths. Math. Phys., 9 (1967), pp. 35-49.
[17] S. Krein and Y. I. Petunin, Scales of Banach spaces, Russian Mathematical Surveys, 21 (1966), pp. 85-159.
[18] A. Louis, Inverse und schlecht gestellte Probleme, Teubner, 1989.
[19] J. T. Marti, An algorithm for computing the minimum norm solutions of Fredholm integral equations of the first kind, SIAM J. Numer. Anal., 15 (1978), pp. 1071-1076.
[20] A. A. Melkmann and C. A. Michelli, Optimal estimation of linear operators in Hilbert spaces from inaccurate data, SIAM J. Numer. Anal., 16 (1979), pp. 87-105.
[21] V. Morozov, The error principle in the solution of operational equations by the regularization method, USSR Comput. Math. and Math. Phys., 8 (1968), pp. 63-87.
[22] F. Natterer, On the order of regularization methods, in Improperly Posed Problems and Their Numerical Treatment, 1982, pp. 189-203.
[23] F. Natterer, Error bounds for Tikhonov regularization in Hilbert scales, Applicable Analysis, 18 (1984), pp. 29-37.
[24] A. Neubauer, Numerical realization of an optimal discrepancy principle for Tikhonov regularization in Hilbert scales, Computing, 39 (1987), pp. 43-55.
[25] A. Neubauer, An a posteriori parameter choice for Tikhonov regularization in Hilbert scales leading to optimal convergence rates, SIAM J. Numer. Anal., 25 (1988), pp. 1313-1326.
[26] D. L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, J. Ass. Comp. Mach., 9 (1962).
[27] E. Schock, On the asymptotic order of accuracy of Tikhonov regularization, J. Opt. Th. Appl., 44 (1984), pp. 95-104.
[28] E. Schock, Parameter choice by discrepancy principles for the approximate solution of ill-posed problems, Integral Equations, 7 (1984), pp. 895-898.
[29] E. Schock, Approximate solution of ill-posed equations: Arbitrarily slow convergence vs. superconvergence, in Constructive Methods for the Practical Treatment of Integral Equations, 1985, pp. 234-243.
[30] A. Tikhonov, Regularization of incorrectly posed problems, Soviet Math. Doklady, 4 (1963), pp. 1624-1627.
[31] A. Tikhonov, Solution of incorrectly formulated problems and the regularization method, Soviet Math. Doklady, 4 (1963), pp. 1035-1038.
[32] G. M. Vainikko, The discrepancy principle for a class of regularization methods, USSR Comput. Maths. Math. Phys., 22 (1982), pp. 1-19.
[33] G. M. Vainikko, The critical level of discrepancy in regularization methods, USSR Comput. Maths. Math. Phys., 23 (1983), pp. 1-6.
[34] G. M. Vainikko, On the optimality of regularization methods, in Inverse and Ill-Posed Problems, H. W. Engl and C. W. Groetsch, eds., 1987, pp. 77-95.