COMPUTING SOLUTIONS TO MEDICAL PROBLEMS via SINC CONVOLUTION

Frank Stenger
Department of Computer Science, University of Utah, Salt Lake City, Utah 84112

Mike O'Reilly
Member of Technical Staff, CeTech, Inc., 8196 SW Hall Blvd, Ste 304, Beaverton, Oregon 97008

Abstract

In this article we illustrate some novel procedures for using Sinc methods to compute solutions to three types of medical problems. The first of these is a novel way to solve optimal control problems, the second is a novel way to reconstruct images for X-ray tomography, and the third is a novel way to do ultrasonic tomography inversion. Each of these procedures uses Sinc convolution, a novel computational procedure for obtaining accurate approximations to indefinite convolutions.

1 Introduction and Summary

In this paper we discuss the use of Sinc methods as tools for the following:

1. Solving optimal control problems. Such problems are becoming increasingly important, especially with respect to the use of robots to do surgery, where the robot is controlled remotely by, e.g., a physician in another city.

2. Carrying out X-ray tomography inversion. The development of accurate X-ray imaging took many years of "fine tuning" and alterations of the original procedure, which is based on a difficult-to-approximate Fourier transform. A recent method based on Sinc approximation [5] yielded a highly efficient algorithm which, without fine tuning, was almost as good as the best existing algorithms, and which has the potential of being more efficient and producing more accurate pictures than existing algorithms.

3. Solving inverse problems in ultrasonic tomography. Carrying out inversion based on data obtained by firing a single transducer while all of the others "listen" involves a computationally intensive, ill-posed computational problem. The appropriate choice of a set of sources which are fired at once can transform this problem into a well-posed one, requiring almost no computation.

In Section 2 below, we briefly review the Sinc methods which we require for the solution of the above problems. It is perhaps interesting to mention, at the outset, the connection of Sinc methods with wavelets. It is well known that the original wavelets are based on Sinc functions, although Sinc methods have been studied longer, and much more extensively, than wavelets. For example, engineers are now discovering that wavelets provide accurate and efficient tools for solving partial differential equations, although one rarely finds a paper that shows why wavelets are so accurate. In the text [8] we find clear explanations of why Sinc methods are accurate for solving partial differential equations. It is also easy to show that wavelet methods are accurate for solving partial differential equations if and only if Sinc methods are accurate for solving such problems. Indeed, it may be shown that for every wavelet method there is a corresponding Sinc method with the exact same complexity.

In Section 3, we illustrate the use of Sinc convolution to solve the following medical problems: some optimal control problems, some X-ray tomography problems, and ultrasonic tomography problems, i.e., the inversion of the Helmholtz equation.

(Supported by NSF grant # CCR-9307602.)


2 Sinc Methods

This section contains a summary of some currently existing Sinc methods that are applicable to the solution of computational problems in medicine. Most of these results and their proofs may be found in [3]; we include these results (but without their proofs) here for the sake of completeness. Our manner of description of the methods is symbolic. We include methods for collocation, for function interpolation and approximation, for approximate definite and indefinite integration, for approximation and inversion of Fourier and Laplace transforms, for the approximation of definite and indefinite convolutions, and for the approximate solution of integral equations.

2.1 Sinc Spaces of Approximation

Sinc spaces are motivated by the premise that most scientists and engineers use calculus to model differential and integral equation problems, and under this premise the solutions to these problems are (at least piecewise) analytic. The Sinc spaces which we shall describe below house nearly all solutions to such problems, including solutions with singularities at end points of (finite or infinite) intervals (or at boundaries of finite or infinite domains in more than one dimension). Although these spaces also house singularities, they are not as large as Sobolev spaces, which assume the existence of only a finite number of derivatives in a solution; consequently, when Sinc methods are used to approximate solutions of differential or integral equations, they are usually more efficient than finite difference or finite element methods. In addition, Sinc methods are replete with interconnecting simple identities, including the DFT (which is one of the Sinc methods, enabling the use of the FFT), making it possible to use a Sinc approximation for nearly every type of operation arising in the solution of differential and integral equations. Let $D$ denote a simply-connected domain in the complex plane $\mathbb{C}$, let $1 \le p \le \infty$, and let $H^p(D)$ denote the family of all functions $f$ that are analytic in $D$, such that

\[
N_p(f, D) =
\begin{cases}
\left( \displaystyle\int_{\partial D} |f(z)|^p \, |dz| \right)^{1/p} < \infty, & 1 \le p < \infty, \\[6pt]
\displaystyle\sup_{z \in D} |f(z)| < \infty, & p = \infty.
\end{cases}
\tag{2.1}
\]

For purposes of Sinc approximation, let $\Gamma = (a, b)$ be a finite or semi-infinite interval, or the real line $\mathbb{R}$; let $\varphi$ be a conformal mapping of a simply connected domain $D$ onto $D_d = \{ z \in \mathbb{C} : |\Im z| < d \}$, where $d$ is a positive number, and $\mathbb{C}$ denotes the complex plane, such that $\varphi$ is also a one-to-one map of $\Gamma$ onto $\mathbb{R}$. Letting $\mathbb{Z}$ denote the integers, we define the Sinc points for $h > 0$ and $k \in \mathbb{Z}$ by $z_k = \varphi^{-1}(kh)$, and we define $\rho$ by $\rho = e^{\varphi}$. Note that $\rho(z)$ increases from $0$ to $\infty$ as $z$ traverses $\Gamma$ from $a$ to $b$. Let $\alpha$, $\beta$ and $d$ denote arbitrary, fixed positive numbers. We denote by $M_{\alpha,\beta}(D)$ the family of all functions that are analytic and uniformly bounded in $D$, which have finite limits (taken from within $D$) at $a$ and $b$, and such that

\[
\begin{aligned}
f(z) - f(a) &= O(|\rho(z)|^{\alpha}), &&\text{uniformly as } z \to a \text{ from within } D, \\
f(z) - f(b) &= O(|\rho(z)|^{-\beta}), &&\text{uniformly as } z \to b \text{ from within } D.
\end{aligned}
\tag{2.2}
\]

For the complete definition of the class $M_{\alpha,\beta}(D)$, we shall furthermore add the restriction that $\alpha \in (0,1]$, $\beta \in (0,1]$ and $d \in (0,\pi)$. The class $L_{\alpha,\beta}(D)$ is the subset of those functions belonging to $M_{\alpha,\beta}(D)$ which vanish at $a$ and also at $b$. For the complete definition of this class, we furthermore allow $\alpha$, $\beta$ and $d$ to be arbitrary fixed positive numbers (with unrestricted range). It thus follows that if, for a given function $g \in M_{\alpha,\beta}(D)$, we define a "linear form" $Lg$ by

\[
Lg(z) = \frac{g(a) + \rho(z)\, g(b)}{1 + \rho(z)}, \qquad \rho = e^{\varphi},
\tag{2.3}
\]

then $f$ defined by

\[
f = g - Lg
\tag{2.4}
\]

belongs to $L_{\alpha,\beta}(D)$. The main reason for restricting the range of $\alpha$, $\beta$ and $d$ in the definition of the class $M_{\alpha,\beta}(D)$ is the resulting simple and suitable form of the "linear interpolant" $Lg$ defined above. We would have to alter this linear form if these constants were left unrestricted. Note that if $0 < d < \pi$, then $Lg$ is uniformly bounded in $\bar{D}$, the closure of $D$; moreover, $Lg(z) - g(a) = O(|\rho(z)|)$ as $z \to a$, and $Lg(z) - g(b) = O(1/|\rho(z)|)$ as $z \to b$, i.e., $Lg \in M_{1,1}(D)$. In addition, $M_{1,1}(D) \subseteq M_{\alpha,\beta}(D)$ for any $\alpha \in (0,1]$, $\beta \in (0,1]$, and $d \in (0,\pi)$; moreover, for these restrictions on $\alpha$, $\beta$ and $d$, the class $L_{\alpha,\beta}(D)$ is contained in the class $M_{\alpha,\beta}(D)$.

For example, for the case of a finite interval $(a, b)$, we can take $\varphi(z) = \log[(z-a)/(b-z)]$; this function provides a conformal transformation of the "eye-shaped" region $D = \{ z \in \mathbb{C} : |\arg[(z-a)/(b-z)]| < d \}$ onto the strip $D_d$. The same function also provides a one-to-one transformation of $(a, b)$ onto the real line $\mathbb{R}$. The Sinc points are defined by

\[
z_k = \varphi^{-1}(kh) = \frac{a + b\, e^{kh}}{1 + e^{kh}}.
\]

In this case, the linear form $Lg$ is given explicitly by

\[
Lg(z) = \frac{(b-z)\, g(a) + (z-a)\, g(b)}{b - a},
\tag{2.5}
\]

and $M_{\alpha,\beta}(D)$ includes all those functions $g \in \mathrm{Hol}(D)$ which are of class $\mathrm{Lip}_\alpha$ in that part of $D$ within a distance $R \le (b-a)/2$ from $a$, and which are of class $\mathrm{Lip}_\beta$ in that part of $D$ within a distance $R$ from $b$. The class $M_{\alpha,\beta}(D)$ thus includes functions that are analytic in $D$, but which may have singularities at the end points of $(a, b)$. The spaces $L_{\alpha,\beta}(D)$ and $M_{\alpha,\beta}(D)$ are invariant, in the sense that if for $j = 1, 2$ we have conformal mappings $\varphi_j : D_j \to D_d$, and if $f \in L_{\alpha,\beta}(D_1)$ (resp., $f \in M_{\alpha,\beta}(D_1)$), then $f \circ \varphi_1^{-1} \circ \varphi_2 \in L_{\alpha,\beta}(D_2)$ (resp., $f \circ \varphi_1^{-1} \circ \varphi_2 \in M_{\alpha,\beta}(D_2)$). We may note that if the same function $\varphi$ provides the conformal mappings $\varphi : D' \to D_{d'}$ and $\varphi : D \to D_d$, with $0 < d < d'$, then $D \subseteq D'$.
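As a concrete illustration of the finite-interval map, the following sketch (ours, not from the paper) evaluates the Sinc points $z_k = \varphi^{-1}(kh) = (a + b\,e^{kh})/(1 + e^{kh})$ and exhibits their geometric clustering toward the endpoints of $(a, b)$:

```python
import math

def sinc_points(a, b, h, N):
    """Sinc points z_k = phi^{-1}(kh), k = -N..N, for phi(z) = log((z-a)/(b-z))."""
    return [(a + b * math.exp(k * h)) / (1.0 + math.exp(k * h))
            for k in range(-N, N + 1)]

# 2N+1 = 9 points on (0, 1): z_0 is the midpoint, and the points
# accumulate geometrically at both endpoints as |k| grows.
pts = sinc_points(0.0, 1.0, h=1.0, N=4)
```

On $(0,1)$ the points are symmetric about the midpoint, $z_{-k} = 1 - z_k$, reflecting the symmetry of the map $\varphi$.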

Let us next summarize some important properties of the spaces $L_{\alpha,\beta}(D)$ and $M_{\alpha,\beta}(D)$. A proof of the following theorem may be found in [1, pp. 119-121].

Theorem 2.1. Let $\alpha \in (0,1]$, $\beta \in (0,1]$, $d' \in (0,\pi)$, let $D_{d'}$ be defined as above, let $D' = \varphi^{-1}(D_{d'})$, and for some fixed $d \in (0, d')$, let $D = \varphi^{-1}(D_d)$. Let $f \in \mathrm{Hol}(D)$, and let $\mathcal{J} f$ denote the indefinite integral of $f$. Then:

1. If $f \in H^1(D')$, then $f'/\varphi' \in H^1(D)$;
2. If $f \in H^1(D')$, and if $(1/\varphi')'$ is uniformly bounded in $D'$, then $f^{(n)}/(\varphi')^n \in H^1(D)$, $n = 1, 2, 3, \ldots$;
3. If $f \in M_{\alpha,\beta}(D')$, then $f'/\varphi' \in L_{\alpha,\beta}(D)$;
4. If $f \in M_{\alpha,\beta}(D')$, and if $(1/\varphi')'$ is uniformly bounded in $D'$, then $f^{(n)}/(\varphi')^n \in L_{\alpha,\beta}(D)$, $n = 1, 2, 3, \ldots$;
5. If $f \in H^1(D)$, then $\mathcal{J} f \in H^\infty(D)$;
6. If $f'/\varphi' \in L_{\alpha,\beta}(D)$, then $f \in M_{\alpha,\beta}(D)$; and
7. If $f \in L_{\alpha,\beta}(D)$, then $\varphi' f \in H^1(D)$.

Let us describe some important specific spaces for Sinc approximation.

Example 2.1: If $\Gamma = (0, 1)$, and if $D$ is the "eye-shaped" region $D = \{ z \in \mathbb{C} : |\arg[z/(1-z)]| < d \}$, then $\varphi(z) = \log[z/(1-z)]$, and the relation (2.3)

reduces to $f(z) = g(z) - (1-z)\,g(0) - z\,g(1)$, and $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \mathrm{Hol}(D)$ such that for all $z \in D$, $|f(z)| < c\,|z|^{\alpha}\,|1-z|^{\beta}$. In this case, if, e.g., $\gamma = \max\{\alpha, \beta\}$, and a function $w$ is such that $w \in \mathrm{Hol}(D)$ and $w \in \mathrm{Lip}_\gamma(D)$, then $w \in M_{\alpha,\beta}(D)$. The Sinc points are $z_j = e^{jh}/(1 + e^{jh})$, and $1/\varphi'(z_j) = e^{jh}/(1 + e^{jh})^2$.

Example 2.2: If $\Gamma = (0, \infty)$, and if $D$ is the "sector" $D = \{ z \in \mathbb{C} : |\arg(z)| < d \}$, then $\varphi(z) = \log(z)$, the relation (2.3) reduces to $f(z) = g(z) - [g(0) + z\,g(\infty)]/(1 + z)$, and the class $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \mathrm{Hol}(D)$ such that if $z \in D$ and $|z| \le 1$ then $|f(z)| \le c\,|z|^{\alpha}$, while if $z \in D$ and $|z| \ge 1$, then $|f(z)| \le c\,|z|^{-\beta}$. This map thus allows for algebraic decay at both $x = 0$ and $x = \infty$. The Sinc points are $z_j = e^{jh}$, and $1/\varphi'(z_j) = e^{jh}$.

Example 2.3: If $\Gamma = (0, \infty)$, and if $D$ is the "bullet-shaped" region $D = \{ z \in \mathbb{C} : |\arg(\sinh(z))| < d \}$, then $\varphi(z) = \log(\sinh(z))$. The relation (2.3) then reduces to $f(z) = g(z) - [g(0) + \sinh(z)\,g(\infty)]/(1 + \sinh(z))$, and $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \mathrm{Hol}(D)$ such that if $z \in D$ and $|z| \le 1$ then $|f(z)| \le c\,|z|^{\alpha}$, while if $z \in D$ and $|z| \ge 1$, then $|f(z)| \le c \exp\{-\beta |z|\}$. This map thus allows for algebraic decay at $x = 0$ and exponential decay at $x = \infty$. The Sinc points are $z_j = \log[e^{jh} + (1 + e^{2jh})^{1/2}]$, and $1/\varphi'(z_j) = (1 + e^{-2jh})^{-1/2}$.

Example 2.4: If $\Gamma = \mathbb{R}$, and if $D$ is the above-defined "strip", $D = D_d$, take $\varphi(z) = z$. The relation (2.3) then reduces to $f(z) = g(z) - [g(-\infty) + e^{z}\,g(\infty)]/(1 + e^{z})$. The class $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \mathrm{Hol}(D)$ such that [...]

[...]

\[
\mathrm{ev}(x) =
\begin{cases}
\dfrac{h}{1 - e^{-ix}}, & x \ne 0, \\[6pt]
\dfrac{hM}{2}, & x = 0.
\end{cases}
\]

The complex numbers $s_{m,j}$, which approximate the eigenvalues of $h\, I^{(-1)}$, can be chosen by evaluating $\mathrm{ev}(x)$ at equally spaced points $x \in (-\pi, \pi)$, i.e.,

\[
s_{m,j} = \mathrm{ev}(jh),
\tag{3.14}
\]

where $h$ is chosen so that we have $m$ (with $m = 2N+1$ an odd integer) numbers equally spaced in $(-\pi, \pi)$. We can then diagonalize the square matrices of order $m$, $A_m = h\, I^{(-1)}$ and $A_m^T$, using the same eigenvectors, according to the scheme

\[
A_m = F^{-1}\,\mathrm{diag}[s_{m,-N}, \ldots, s_{m,N}]\, F, \qquad
A_m^T = F\,\mathrm{diag}[s_{m,-N}, \ldots, s_{m,N}]\, F^{-1},
\]

where $F$ is the matrix used in computing the DFT (discrete Fourier transform). The backprojected image $b(x, y)$ is related to the original image $i(x, y)$ by the equation

\[
b(x, y) = \int_{\mathbb{R}} \int_{\mathbb{R}} f(x - \xi,\, y - \eta)\, i(\xi, \eta)\, d\xi\, d\eta,
\tag{3.15}
\]

where, in the notation of (2.20), the "kernel" $f(x, y)$ is given by

\[
f(x, y) = (x^2 + y^2)^{-1/2}.
\tag{3.16}
\]

The definite convolution in (3.15) can be split into four indefinite convolutions, viz.


\[
b(x, y) = \int_{\mathbb{R}} \int_{\mathbb{R}}
= \int_{y}^{\infty}\!\int_{-\infty}^{x}
+ \int_{y}^{\infty}\!\int_{x}^{\infty}
+ \int_{-\infty}^{y}\!\int_{-\infty}^{x}
+ \int_{-\infty}^{y}\!\int_{x}^{\infty}.
\]

We apply the procedure of Example 2.7 to the integral

\[
b_1(x, y) = \int_{y}^{\infty}\!\int_{-\infty}^{x} f(x - \xi,\, y - \eta)\, i(\xi, \eta)\, d\xi\, d\eta.
\]

To this end, we define

\[
A_1 = h\, I^{(-1)} = F\, S\, F^{-1}, \qquad
A_2 = h\, \big(I^{(-1)}\big)^{T} = F\, \bar{S}\, F^{-1},
\]

where $F$ is the Fourier matrix used in computing a DFT, and $\bar{S}$ denotes the complex conjugate of $S$ (i.e., $\overline{x + iy} = x - iy$). The last expression on each line follows since $I^{(-1)}$ is a Toeplitz matrix. Here $S$ is the diagonal matrix $S = \mathrm{diag}(s_{m,-N}, \ldots, s_{m,N})$ whose entries are defined in Equation (3.14). In this case it is possible to explicitly evaluate the Fourier transform as defined in (2.25), i.e., with $f$ defined as in (3.16), we get

\[
F(s^{(1)}, s^{(2)})
= \int_0^{\infty}\!\int_0^{\infty} f(x, y)\, e^{-x/s^{(1)} - y/s^{(2)}}\, dx\, dy
= \frac{s^{(1)} s^{(2)}}{2\ell} \log\!\left\{ \frac{(\ell + s^{(1)})(\ell + s^{(2)})}{(\ell - s^{(1)})(\ell - s^{(2)})} \right\},
\qquad \ell = \sqrt{(s^{(1)})^2 + (s^{(2)})^2}.
\tag{3.17}
\]
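The diagonalization of $A_1$ and $A_2$ by the DFT matrix holds exactly for circulant matrices, and the Toeplitz matrix $h\, I^{(-1)}$ is handled through the same mechanism. A minimal numerical check of the circulant case (the matrix below is illustrative only; it is not the actual $I^{(-1)}$):

```python
import numpy as np

# A circulant matrix C (first column c) is diagonalized by the DFT:
# C = F^{-1} diag(fft(c)) F, where F is the DFT matrix.
c = np.array([1.0, 0.5, 0.25, 0.125])   # illustrative first column
C = np.array([[c[(i - j) % 4] for j in range(4)] for i in range(4)])

eigs = np.fft.fft(c)                  # the eigenvalues of C
F = np.fft.fft(np.eye(4), axis=0)     # DFT matrix: F[k, j] = exp(-2*pi*1j*k*j/4)
C_rebuilt = np.linalg.inv(F) @ np.diag(eigs) @ F
```

Since applying $F$ is an FFT, multiplying by such a matrix costs $O(m \log m)$ rather than $O(m^2)$, which is the point of the factorization above.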

The use of this DFT procedure enables a simplified version of the algorithm in Example 2.7, namely,

\[
b_1 \simeq (F \otimes F)\, G\, (F \otimes F)\, i,
\tag{3.18}
\]

where $\otimes$ refers to the tensor product. Similarly, it is easily seen that: for $b_2$, we need only use $G(\bar{s}^{(1)}, s^{(2)})$; for $b_3$, we need only use $G(s^{(1)}, \bar{s}^{(2)})$; and for $b_4$, we need only use $G(\bar{s}^{(1)}, \bar{s}^{(2)})$. Letting

\[
K = G(s^{(1)}, s^{(2)}) + G(\bar{s}^{(1)}, s^{(2)}) + G(s^{(1)}, \bar{s}^{(2)}) + G(\bar{s}^{(1)}, \bar{s}^{(2)})
\]

and $b = b_1 + b_2 + b_3 + b_4$, we can write

\[
b \simeq (F \otimes F)\, K\, (F \otimes F)\, i,
\]

where $K$ is real. Letting $\lambda$ be a regularizing parameter, we can reconstruct our image using

\[
i \simeq (F \otimes F)\, \frac{K}{\lambda + K^2}\, (F \otimes F)\, b.
\]

In addition, we point out that, for each of the matrices multiplying our image, the matrix multiplication can be implemented by applying $m$ fast Fourier transforms in parallel.
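The reconstruction formula above is, in spirit, Tikhonov-regularized deconvolution carried out in the Fourier domain. A one-dimensional numpy sketch under illustrative choices of kernel, sizes and $\lambda$ (the paper's actual algorithm uses the two-dimensional Sinc-convolution factorizations above, not this toy model):

```python
import numpy as np

n = 256
img = np.zeros(n)
img[60], img[140] = 1.0, 0.7                  # "image": two point sources
d = np.arange(n) - n // 2
kernel = np.exp(-0.5 * d**2)                  # narrow Gaussian blur kernel
kernel /= kernel.sum()

# Forward model: periodic convolution b = kernel * img (the analogue of b).
b = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(img)))

# Regularized inversion in the Fourier domain.  For a complex symbol K,
# the real factor K/(lam + K^2) of the text becomes conj(K)/(lam + |K|^2).
K = np.fft.fft(kernel)
lam = 1e-8                                    # regularizing parameter
img_rec = np.real(np.fft.ifft(np.conj(K) / (lam + np.abs(K)**2) * np.fft.fft(b)))
```

The factor $\bar K/(\lambda + |K|^2)$ behaves like $1/K$ wherever $|K|^2 \gg \lambda$ and is damped to zero where the blur annihilates the data, which is exactly the stabilizing role of $\lambda$ in the reconstruction above.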

3.3 Solving Ill-Posed Problems via Sinc Sources

Many consider the solution of the ultrasonic tomography problem to be an ill-posed problem. We, however, take the view that ill-posedness is procedure dependent: inversion involves the construction of an approximating inversion operator using a particular basis, and while the condition number of the resulting operator may be large for a particular basis (i.e., the ill-posed situation), it can be relatively small for another basis. We illustrate this point in this section with two inverse problem examples: the moment problem, which is the mathematically simplest to state; and the inversion of the Helmholtz equation, which is a frequently used model in ultrasonic tomography inversion.

Example 3.4: Moment Problems. We use a simple moment problem to illustrate the idea of the above paragraph. The problem we consider is that of reconstructing the function $w$ given the moments

\[
\int_{\Gamma} t_k(x)\, dw(x) = \mu_k, \qquad (k \in \mathbb{N} \text{ or } k \in \mathbb{Z}),
\tag{3.19}
\]

where $\mathbb{N}$ (resp., $\mathbb{Z}$) denotes the set of non-negative integers (resp., the set of all integers). If $t_k(x) = x^k$ for every non-negative integer $k$, then the problem is computationally difficult. For example, for the case of the function $w_0(x) = 1 + \arcsin(x)$, one approach is to use the matrix $A = [\mu_{i+j-1}]$, where $\mu_{i+j-1}$ denotes the $(i,j)$th element of the matrix. When the order of $A$ is 8, the condition number of $A$ is already approximately $3 \times 10^{10}$. On the other hand, by using the orthogonal Sinc moments $t_k(x) = S(k, h) \circ \varphi(x)$, the situation is considerably different. If we assume, for example, that $w \in M_{\alpha,\beta}(D)$, and set $u = (\mu_{-N}, \ldots, \mu_N)^T$, we find, in the notation of (2.8), (2.9) and (2.15), that

\[
\| w - \omega_m\, I^{(-1)} u \| \le C\, \varepsilon_N.
\tag{3.20}
\]

This stable and efficient procedure for constructing such a $w$ requires $O(N \log N)$ work to achieve a result accurate to within $\varepsilon = O(\varepsilon_N)$, and hence the complexity is $O\big((\log(1/\varepsilon))^2 \log(\log(1/\varepsilon))\big)$.
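The ill-conditioning of the power-moment matrix is easy to reproduce. As an illustration we use the classical Hilbert matrix, i.e., the moment matrix $[\mu_{i+j-1}]$ with entries $1/(i+j-1)$ arising from $dw(x) = dx$ on $(0,1)$ (a stand-in for the paper's $w_0$); its order-8 condition number is already about $1.5 \times 10^{10}$:

```python
import numpy as np

def moment_matrix(n):
    """Hilbert matrix A[i, j] = 1/(i + j - 1) (1-based indices): the
    power-moment matrix of dw(x) = dx on (0, 1)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

cond8 = np.linalg.cond(moment_matrix(8))   # already on the order of 1e10
```

At this condition number roughly ten of the sixteen digits of double precision are lost before any inversion even begins, which is the sense in which the power-moment basis is "computationally difficult".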

Thus, a novel method of approximating the solution to other moment problems of the form (3.19) is to first create nearly orthogonal "source" functions such as, e.g., $S(k, h) \circ \varphi$, using linear combinations of the usual sources (which lead to ill-posed problems).

Example 3.5: Ultrasonic Tomography Inversion. The Helmholtz equation is a frequently used model for imaging a part of the interior of a human being via ultrasound. This equation takes the form

\[
\nabla^2 U + \kappa^2 (1 + f)\, U = 0,
\tag{3.21}
\]

where $f$ is a function of $\mathbf{r} \in \mathbb{R}^3$ that has support in the half-space $\mathcal{H} = \{ \mathbf{r} = (x, y, z) \in \mathbb{R}^3 : (x, y) \in \mathbb{R}^2,\ z > 0 \}$, and where we shall assume that

\[
\kappa = |\kappa|\, e^{i\theta}, \qquad 0 \le \theta \le \pi/2.
\tag{3.22}
\]

As an aid in describing our inversion procedure, we let $\mathcal{K}$ denote the exterior of $\mathcal{H}$, i.e., $\mathcal{K} = \mathbb{R}^3 \setminus \mathcal{H}$. The problem of inverting (3.21) is to reconstruct $f$ on $\mathcal{H}$ using sources $u$ in $\mathcal{K}$. These sources can be expressed in the form

\[
u(\mathbf{r}) = \iiint_{\mathcal{K}} G_I(\mathbf{r} - \mathbf{r}')\, w(\mathbf{r}')\, d\mathbf{r}',
\tag{3.23}
\]

where $w$ is at this stage some generalized function, e.g., a sum of delta functions, with support on $\mathcal{K}$, and where $G_I$ denotes the free-space Green's function,

\[
G_I(\mathbf{r}) = \frac{e^{i\kappa r}}{4\pi r}, \qquad r = |\mathbf{r}|.
\tag{3.24}
\]

We remark here that, considered as a function of $\mathbf{r}'$, this function $G_I$ satisfies the equation

\[
\nabla^2 G_I(\mathbf{r}'' - \mathbf{r}') + \kappa^2\, G_I(\mathbf{r}'' - \mathbf{r}') = -\delta^3(\mathbf{r}'' - \mathbf{r}').
\tag{3.25}
\]

The inversion procedure we shall describe is based on two integral equations, whose derivation we now sketch. At the outset, we mention that the input field $u$, defined as in (3.23) above, satisfies in $\mathcal{H}$ the equation

\[
\nabla^2 u + \kappa^2 u = 0.
\tag{3.26}
\]

Then, letting $v$ denote the scattered field, the total field $U$, as defined in (3.21), is related to $u$ and $v$ by the equation $U = u + v$. Moreover, it then follows from (3.21) and (3.26) that

\[
\nabla^2 v + \kappa^2 v = -\kappa^2 f u
\tag{3.27}
\]

in $\mathcal{H}$. Next, for $\mathbf{r}$ and $\mathbf{r}' \in \mathbb{R}^3$, let $G_S = G_S(\mathbf{r}, \mathbf{r}')$ be the outward-going Green's function defined by the equation

\[
\nabla^2 G_S + \kappa^2 (1 + f)\, G_S = -\delta^3(\mathbf{r} - \mathbf{r}'),
\tag{3.28}
\]

where we assume that this equation is considered as a function of $\mathbf{r}'$, with $\mathbf{r}$ fixed, and where $\delta^3(\mathbf{r} - \mathbf{r}')$ denotes the three-dimensional delta function. Then $G_S(\mathbf{r}, \mathbf{r}') \to 0$ as $|\mathbf{r} - \mathbf{r}'| \to \infty$, and indeed, the rate of approach may be shown to be exponential if $\Im \kappa > 0$. We can then deduce the integral expression

\[
v(\mathbf{r}) = \kappa^2 \iiint_{\mathcal{H}} G_S(\mathbf{r}, \mathbf{r}')\, f(\mathbf{r}')\, u(\mathbf{r}')\, d\mathbf{r}'.
\tag{3.29}
\]

We remark here that $G_S$ is unknown at this point, and hence (3.29) is, in fact, an integral equation. By applying Green's theorem to $G_I$ and $G_S$, and using (3.25) and (3.28) above, we arrive at the integral equation

\[
G_S(\mathbf{r}, \mathbf{r}'') = G_I(\mathbf{r} - \mathbf{r}'') + \kappa^2 \iiint_{\mathcal{H}} G_I(\mathbf{r}'' - \mathbf{r}')\, f(\mathbf{r}')\, G_S(\mathbf{r}, \mathbf{r}')\, d\mathbf{r}'.
\tag{3.30}
\]

These last two equations form the basis of our inversion scheme. Suppose, for example, that $\mathbf{r}$ is a fixed point on the boundary of $\mathcal{H}$, and that the function $u$ in (3.29) is a delta function, $\delta(\mathbf{r}' - \mathbf{r}'')$, so that measuring $v(\mathbf{r})$ gives us the product $G_S(\mathbf{r}, \mathbf{r}'')\, f(\mathbf{r}'')$. (Of course, $u$ as defined in (3.23) above cannot be an exact delta function, since $u$ must, in fact, satisfy (3.26) in $\mathcal{H}$. On the other hand, using (3.26), it is possible [1] to obtain an accurate delta function, in the absence of noise.) By repeating such measurements $v$ for a fixed family of Sinc points $\mathbf{r}''$ in $\mathcal{H}$, we are able to use these numbers in the second equation, (3.30) above, to simultaneously evaluate the block of all numbers $G_S(\mathbf{r}, \mathbf{r}'')$ at the Sinc points $\mathbf{r}''$ via efficient Sinc indefinite integral convolution. We thus obtain both sets of values $G_S(\mathbf{r}, \mathbf{r}'')\, f(\mathbf{r}'')$ and $G_S(\mathbf{r}, \mathbf{r}'')$ at the set of Sinc points in $\mathcal{H}$, and from these, we get the values $f(\mathbf{r}'')$ via a simple division at each point $\mathbf{r}''$. Next, we sketch how we can select the generalized function $w$ in (3.31) below so that the resulting function $u$ defined by (3.31) is an approximate delta function corresponding to a fixed point in $\mathcal{H}$. While it has already been established that such sources do, in fact, exist [1], we mention, at this point, that functions $w$ which give rise to stable and efficient inversion procedures have not yet been determined. Along lines that are more constructive than those of [1], we now describe two families of sources for the case of equation (3.21), when the body is a half-space:

1. A linear combination of point sources, of the form

\[
u(\mathbf{r}) = \iiint_{\mathcal{K}} G_I(\mathbf{r} - \mathbf{r}')\, w(\mathbf{r}')\, d\mathbf{r}',
\tag{3.31}
\]

where $w$ is a sum of delta functions with support on $\mathcal{K} = \mathbb{R}^3 \setminus \mathcal{H}$; and

2. A linear combination of plane wave sources,

\[
u(\mathbf{r}) = \iint_{S} \omega(\boldsymbol{\xi})\, e^{\,i \boldsymbol{\xi} \cdot (\boldsymbol{\rho} - \boldsymbol{\rho}') \,+\, i \sqrt{\kappa^2 - \xi^2}\,(z - z')}\, d\boldsymbol{\xi},
\tag{3.32}
\]

with $S \subseteq \mathbb{R}^2$.

Both sources, (3.31) and (3.32), satisfy (3.26) in $\mathcal{H}$. It was shown in [2], for the two-dimensional case, that the one-dimensional integral equivalent of (3.32) does indeed reduce to a Sinc function source when $z = z'$, $S$ is a suitably selected finite interval, and $\omega(\xi) \equiv 1$; moreover, it was shown in that same article that the resulting input field is a reasonably accurate approximation to a Sinc function source in the region of practical use, even when $z \ne z'$. It thus follows that for a suitably selected square $S$, the source (3.32), with, e.g., $\omega(\boldsymbol{\xi}) \equiv 1$, is an accurate approximation to a product of Sinc functions, i.e.,

\[
u(x, y, z) = \mathrm{sinc}\!\left( \frac{x - x_0}{h} \right) \mathrm{sinc}\!\left( \frac{y - y_0}{h} \right) W(z - z_0),
\tag{3.33}
\]

where $W(z - z_0) \approx 1$ in a neighborhood of $z = z_0$. Of course, our reason for wanting Sinc function sources is that they are highly accurate approximations to delta functions, i.e. (see [8, Eq. (3.2.11)], or [9]), for an arbitrary integrable function $g$ defined on $\mathbb{R}$,

\[
\int_{\mathbb{R}} g(x)\, \mathrm{sinc}\!\left( \frac{x - x_0}{h} \right) dx = h\, [g(x_0) + \varepsilon],
\tag{3.34}
\]

with $\varepsilon$ a very small number relative to 1. Similarly, we can select the generalized function $w$ in (3.31) above so that the resulting source function $u$ defined by (3.31) is an approximate delta function corresponding to a fixed point in $\mathcal{H}$ (see [5, 6, 12]).
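Relation (3.34) is easy to check numerically. A small sketch of ours, using a Gaussian test function $g$ (numpy's `np.sinc` uses the same convention as the paper, $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$):

```python
import numpy as np

def g(x):
    return np.exp(-x**2)          # a smooth, rapidly decaying test function

h, x0 = 0.5, 0.3
x = np.linspace(-30.0, 30.0, 200001)
vals = g(x) * np.sinc((x - x0) / h)          # np.sinc(t) = sin(pi t)/(pi t)
integral = np.sum(vals) * (x[1] - x[0])      # simple quadrature on a fine grid

err = abs(integral - h * g(x0))   # the "eps" of (3.34); tiny for analytic g
```

For analytic $g$ the discrepancy $\varepsilon$ decays exponentially as $h \to 0$, which is why the Sinc source behaves like a delta function for sampling purposes.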

We thus describe the following procedure, which, while requiring large coefficients $c_m$, illustrates at least one constructive method for determining the functions $w$ and the corresponding sources $u$. At the outset, we could represent $w$ via a finite sum,

\[
w(\mathbf{r}) = v(\boldsymbol{\rho}) \sum_{m} c_m\, e^{-i\kappa z_m}\, \delta(z + z_m),
\tag{3.35}
\]

where $\boldsymbol{\rho} = (x, y)$, where the $c_m$ are constants, and where the $z_m$ are a fixed set of discrete positive numbers. In this case, the function $u$ takes the form

\[
u(\mathbf{r}) = \iint_{\mathbb{R}^2} v(\boldsymbol{\rho}')\, \sum_m K_m\, d\boldsymbol{\rho}',
\tag{3.36}
\]

where

\[
K_m = c_m\, e^{-i\kappa z_m}\,
\frac{\exp\!\big\{ i\kappa \sqrt{ |\boldsymbol{\rho} - \boldsymbol{\rho}'|^2 + (z + z_m)^2 } \big\}}
     {4\pi \sqrt{ |\boldsymbol{\rho} - \boldsymbol{\rho}'|^2 + (z + z_m)^2 }}.
\tag{3.37}
\]

We may note that when $\boldsymbol{\rho}' = \boldsymbol{\rho}$, then $K_m$ in this last expression reduces to

\[
\sum_m K_m \Big|_{\boldsymbol{\rho}' = \boldsymbol{\rho}} = e^{i\kappa z} \sum_m \frac{c_m}{4\pi (z + z_m)}.
\tag{3.38}
\]

It has been shown that the constants $c_m$ and $z_m$ in the expression on the right-hand side of this equation may be selected to arbitrarily closely approximate a Sinc function, $S(k, h) \circ \log(z)$ (see Sections 5.3 and 5.4, and especially Problem 5.4.4 of [8]), where this function is an arbitrarily close approximation of the one-dimensional delta function in the $z$-direction (see Corollary 4.2.15 of [8]). Once the $c_m$ have been selected in this manner, we can next turn to the selection of $v(\boldsymbol{\rho})$; while this selection is somewhat more complicated, we can nevertheless suitably select the function $v$ such that, e.g., $u$ is an approximate two-dimensional delta function. The three-dimensional recovery is thus somewhat more complicated, since we need to sample the source along a line in the plane $\{ \mathbf{r} = (x, y, z) : (x, y) \in \mathbb{R}^2,\ z = 0 \}$. Nevertheless, it reduces, in effect, to the simple procedure outlined above. We mention that once we have found an explicit function $w$ for determining a source $u$ via (3.31), we can determine the same source via a two-dimensional expression of the form

\[
u(\mathbf{r}) = \iint_{\mathbb{R}^2} G_I(\boldsymbol{\rho} - \boldsymbol{\rho}',\, z)\, \omega(\boldsymbol{\rho}')\, d\boldsymbol{\rho}'.
\tag{3.39}
\]

For, if $\hat{w}(\xi, \eta, \zeta)$ denotes the Fourier transform of $w(x, y, z)$, and if $\tilde{\omega}(\xi, \eta)$ denotes the two-dimensional Fourier transform of $\omega(x, y)$, then we need merely make the identification

\[
\tilde{\omega}(\xi, \eta) = \hat{w}\big(\xi,\, \eta,\, -\sqrt{\kappa^2 - \xi^2 - \eta^2}\,\big),
\tag{3.40}
\]

where we take that square root of $\kappa^2 - \xi^2 - \eta^2$ for which the imaginary part is non-negative.

4 References

1. M.I. Belishev, On an Approach to Multidimensional Inverse Problems for the Wave Equation, Soviet Math. Dokl., Vol. 36 (1988), No. 3, pp. 481-484.
2. Mok Keun Jeong, Tae Kynong Song, Song Bai Park, and Jong Beom Ra, Generation of Sinc Plane Wave by One Dimensional Array for Application in Ultrasonic Imaging, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, V. 43, No. 2 (1996), 285-295.
3. M. Kowalski, K. Sikorski and F. Stenger, Selected Topics in Approximation and Computation, Oxford University Press (1993).
4. J. Lund and K.L. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM (1992).
5. M. O'Reilly, A Backprojection Filter Algorithm by Solving the Convolution Equation, submitted for publication.
6. M. O'Reilly and F. Stenger, A New Approach to Inverse Problems Using Sinc Approximation, accepted for publication in Inverse Problems.
7. S.W. Rowland, Computer Implementation of Image Reconstruction Formulas, in Image Reconstructions from Projections, Springer-Verlag, Berlin, 1979, Chapter 2.
8. F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer-Verlag, N.Y. (1993).
9. F. Stenger, Collocating Convolutions, Math. Comp., 64 (1995), 211-235.
10. F. Stenger, B. Barkey and R. Vakili, Sinc Convolution Method of Solution of Burgers' Equation, pp. 341-354 of "Proceedings of Computation and Control III", edited by K. Bowers and J. Lund, Birkhauser, Basel (1993).
11. F. Stenger, B. Keyes, M. O'Reilly, and K. Parker, ODE-IVP-PACK via Sinc Indefinite Integration and Newton Iteration, to appear in Numerical Algorithms.
12. F. Stenger, Sinc Inversion of the Helmholtz Equation without Computing the Forward Solution, pp. 149-157 of "Proceedings of the International Workshop on Inverse Problems", Ho Chi Minh City (1995).
13. S. Suzuki and S. Yamaguchi, Comparison Between an Image Reconstruction Method of Filtering Backprojection and the Filtered Backprojection Method, Appl. Optics, 27, No. 14 (1988), 2867-2870.

28

Department of Computer Science University of Utah Salt Lake City, Utah 84112

Mike O'Reilly

Member of Technical Sta CeTech, Inc. 8196 SW Hall Blvd, Ste 304 Beaverton, Oregon 97008

Abstract

In this article we illustrate some novel procedures of using Sinc methods to compute solutions to three types of medical problems. The rst of these is a novel way to solve optimal control problems. the second is a novel way to reconstruct images for X-ray tomography, and the third is a novel way to do ultrasonic tomography inversion. Each of these procedures uses Sinc convolution, which is a novel computational procedure for obtaining accurate approximations to inde nite convolutions.

1 Introduction and Summary In this paper we discuss the use of Sinc methods as tools of the following: 1. Solving optimal control problems. Such problems are becoming increasingly important, especially with respect to the use of robots to Supported by NSF grant # CCR-9307602 1

do surgery, where the robot is controlled remotely by e.g., a physician in another city. 2. Carrying out X{ray tomography inversion. The development of accurate X{ray imaging took many years of \ ne tuning" and alterations of the original procedure, which is based on a dicult to approximate Fourier transform. A recent method based on Sinc approximation [5] yielded a highly ecient algorithm which without ne tuning was almost as good as those of the best existing algorithms, and which has the potential of being more ecient and producing more accurate pictures than existing algorithms. 3. Solving inverse problems in ultrasonic tomography. The procedure of carrying out inversion based on data obtained by ring a single transducer while all of the others \listen" involves a computationally intensive, ill posed computational problem. The appropriate choice of a set of sources which are red at once can transform this problem into a well posed one, requiring almost no computation. In Section 2 below, we brie y review the Sinc methods which we require for solution of the above problems. It is perhaps interesting to mention, at the outset the connection of Sinc methods with wavelets. It is well known that the original wavelet are based on Sinc functions, although Sinc methods have been studied longer, and much more extensively than wavelets. For example, engineers are now discovering that wavelets provide accurate and ecient tools for solving partial dierential equations, although one rarely nds a paper that shows why wavelets are so accurate. In the text [8] we nd clear explanations of why Sinc methods are accurate for solving partial dierential equations. It is also easy to show that wavelet methods are accurate for solving partial dierential equations if and only if Sinc methods are accurate for solving such problems. Indeed, it may be shown that for every wavelet method there is a corresponding Sinc method with the exact same complexity. 
In Section 3, we illustrate the use of Sinc convolution to solve the following medical problems: some optimal control problems some X-ray tomography problems, and ultrasonic tomography problems, i.e., the inversion of the Helmholtz equation.

2

2 Sinc Methods This section contains a summary of some currently existing Sinc methods that are applicable to the solution of computational problems in medicine. Most of these results and their proofs may be found in [3]; we include these results (but without their proofs here for sake of completeness. Our manner of description of the methods is in symbolic form. We include methods for collocation, function interpolation and approximation, for approximate definite and inde nite integration, for approximation and inversion of Fourier and Laplace transforms, for the approximation of de nite and inde nite convolutions, and for the approximate solution of integral equations.

2.1 Sinc Spaces of Approximation

Sinc spaces are motivated by the premise that most scientists and engineers use calculus to model dierential and integral equation problems, and under this premise the solution to these problems are (at least piecewise) analytic. The Sinc spaces which we shall describe below house nearly all solutions to such problems, including solutions with singularities at end points of ( nite or in nite) intervals (or at boundaries of nite or in nite domains in more than one dimension). Although these spaces also house singularities, they are not as large as Sobolev spaces which assume the existence of only a nite number of derivatives in a solution, and consequently when Sinc methods are used to approximate solutions of dierential or integral equations, they are usually more ecient than nite dierence or nite element methods. In addition, Sinc methods are replete with interconnecting simple identities, including DFT (which is one of the Sinc methods, enabling the use of FFT), making it possible to use a Sinc approximation for nearly every type of operation arising in the solution of dierential and integral equations. Let D denote simply{connected domain in the complex plane C, let 1 p 1, and let Hp (D) denote the family of all functions f that are analytic in D, such that

8 Z 1=p > pjdz j > < 1 if 1 p < 1; j f ( z ) j < @D Np(f; D) > > : sup jf (z)j < 1 if p = 1:

(2.1)

z2D

For purposes of Sinc approximation, let ? = (a; b) be a nite, semi{ 3

in nite interval, or the real line R, let be a conformal mapping of a simply connected domain D onto Dd = fz 2 C : j=z j < dg, where d is a positive number, and C denotes the complex plane, such that is also a one-to-one map of ? onto R. Letting Z denote the integers, we de ne Sinc points for h > 0 and k 2 Z by zk = ?1 (kh), and we de ne by = e. Note that (z) increases from 0 to 1 as z traverses ? from a to b. Let , and d denote arbitrary, xed positive numbers. We denote by M; (D) the family of all functions that are analytic and uniformly bounded in D, which have nite limits (taken from within D) at a and b, and such that

f (z) ? f (a) = O(j(z)j); uniformly as z ! a from within D; f (z) ? f (b) = O(j(z)j? ); uniformly as z ! b from within D:

(2.2) For complete de nition of the class M; (D), we shall furthermore add the restriction that 2 (0; 1], 2 (0; 1] and d 2 (0; ). The class L; (D), is the subset of those functions belonging to M; (D) which vanish at a and also at b. For complete de nition of this class, we furthermore allow , and d to be arbitrary xed positive numbers (with unrestricted range). It thus follows that if for a given function g 2 M; (D), we de ne a \linear form" Lg by Lg(z) = f (a)1++((zz))f (b) ; = e; (2.3) then f de ned by

$$
f = g - Lg \tag{2.4}
$$

belongs to $L_{\alpha,\beta}(D)$. The main reason for restricting the range of $\alpha$, $\beta$ and $d$ in the definition of the class $M_{\alpha,\beta}(D)$ is the resulting simple and suitable form of the "linear interpolant" $Lg$ defined above; we would have to alter this linear form if these constants were left unrestricted. Note that if $0 < d < \pi$, then $Lg$ is uniformly bounded in $\bar{D}$, the closure of $D$, and moreover, $Lg(z) - g(a) = O(|\rho(z)|)$ as $z \to a$, and $Lg(z) - g(b) = O(1/|\rho(z)|)$ as $z \to b$, i.e., $Lg \in M_{1,1}(D)$. In addition, $M_{1,1}(D) \subseteq M_{\alpha,\beta}(D)$ for any $\alpha \in (0,1]$, $\beta \in (0,1]$ and $d \in (0,\pi)$; moreover, for these restrictions on $\alpha$, $\beta$ and $d$, the class $L_{\alpha,\beta}(D)$ is contained in the class $M_{\alpha,\beta}(D)$.

For example, for the case of a finite interval $(a,b)$, we can take $\phi(z) = \log[(z-a)/(b-z)]$; this function provides a conformal transformation of the "eye-shaped" region $D = \{z \in \mathbb{C} : |\arg[(z-a)/(b-z)]| < d\}$ onto the strip $D_d$. The same function also provides a one-to-one transformation of $(a,b)$ onto the real line $\mathbb{R}$. The Sinc points are given by $z_k = \phi^{-1}(kh) = (a + b\,e^{kh})/(1 + e^{kh})$. In this case, the linear form $Lg$ is given explicitly by

$$
Lg(z) = \frac{(b-z)\, g(a) + (z-a)\, g(b)}{b-a}, \tag{2.5}
$$

and $M_{\alpha,\beta}(D)$ includes all those functions $g \in \operatorname{Hol}(D)$ which are of class $\operatorname{Lip}_{\alpha}$ in that part of $D$ within a distance $R \le (b-a)/2$ from $a$, and which are of class $\operatorname{Lip}_{\beta}$ in that part of $D$ within a distance $R$ from $b$. The class $M_{\alpha,\beta}(D)$ thus includes functions that are analytic in $D$, but which may have singularities at the end points of $(a,b)$. The spaces $L_{\alpha,\beta}(D)$ and $M_{\alpha,\beta}(D)$ are invariant, in the sense that if for $j = 1,2$ we have conformal mappings $\phi_j : D_j \to D_d$, and if $f \in L_{\alpha,\beta}(D_1)$ (resp., $f \in M_{\alpha,\beta}(D_1)$), then $f \circ \phi_1^{-1} \circ \phi_2 \in L_{\alpha,\beta}(D_2)$ (resp., $f \circ \phi_1^{-1} \circ \phi_2 \in M_{\alpha,\beta}(D_2)$). We may note that if the same function $\phi$ provides the conformal mappings $\phi : D' \to D_{d'}$ and $\phi : D \to D_d$, with $0 < d < d'$, then $D \subseteq D'$.
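Concretely, the Sinc points of this finite-interval map are easy to compute and check numerically. The following short sketch (helper names are ours) evaluates $z_k = (a + b\,e^{kh})/(1 + e^{kh})$ and verifies that $\phi$ maps each point back to the uniform grid $kh$:

```python
import math

# Sinc points z_k = phi^{-1}(kh) for the finite-interval map
# phi(z) = log((z - a)/(b - z)); illustrative sketch with our own helper names.

def sinc_points(a, b, h, N):
    """Return the Sinc points z_{-N}, ..., z_N on (a, b)."""
    pts = []
    for k in range(-N, N + 1):
        e = math.exp(k * h)
        pts.append((a + b * e) / (1.0 + e))
    return pts

def phi(z, a, b):
    return math.log((z - a) / (b - z))

a, b, h, N = 0.0, 1.0, 0.5, 8
z = sinc_points(a, b, h, N)

# phi maps each Sinc point back to the equispaced grid kh on R
for k, zk in zip(range(-N, N + 1), z):
    assert abs(phi(zk, a, b) - k * h) < 1e-12

print(z[0], z[N], z[-1])   # points cluster toward a and b; z_0 is the midpoint
```

The clustering of the $z_k$ toward the endpoints is exactly what accommodates the endpoint singularities admitted by $M_{\alpha,\beta}(D)$.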

Let us next summarize some important properties of the spaces $L_{\alpha,\beta}(D)$ and $M_{\alpha,\beta}(D)$. A proof of the following theorem may be found in [3, pp. 119–121].

Theorem 2.1. Let $\alpha \in (0,1]$, $\beta \in (0,1]$, $d' \in (0,\pi)$, let $D_{d'}$ be defined as above, let $D' = \phi^{-1}(D_{d'})$, and for some fixed $d \in (0,d')$, let $D = \phi^{-1}(D_d)$. Let $f \in \operatorname{Hol}(D)$, and let $I f$ denote the indefinite integral of $f$. Then:

1. If $f \in H_\infty(D')$, then $f'/\phi' \in H_\infty(D)$;
2. If $f \in H_\infty(D')$, and if $(1/\phi')'$ is uniformly bounded in $D'$, then $f^{(n)}/(\phi')^n \in H_\infty(D)$, $n = 1, 2, 3, \ldots$;
3. If $f \in M_{\alpha,\beta}(D')$, then $f'/\phi' \in L_{\alpha,\beta}(D)$;
4. If $f \in M_{\alpha,\beta}(D')$, and if $(1/\phi')'$ is uniformly bounded in $D'$, then $f^{(n)}/(\phi')^n \in L_{\alpha,\beta}(D)$, $n = 1, 2, 3, \ldots$;
5. If $f \in H_1(D)$, then $I f \in H_\infty(D)$;
6. If $f'/\phi' \in L_{\alpha,\beta}(D)$, then $f \in M_{\alpha,\beta}(D)$; and
7. If $f \in L_{\alpha,\beta}(D)$, then $\phi' f \in H_1(D)$.

Let us describe some important specific spaces for Sinc approximation.

Example 2.1: If $\Gamma = (0,1)$, and if $D$ is the "eye-shaped" region $D = \{z \in \mathbb{C} : |\arg[z/(1-z)]| < d\}$, then $\phi(z) = \log[z/(1-z)]$, the relation (2.3)

reduces to $f(z) = g(z) - (1-z)\,g(0) - z\,g(1)$, and $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \operatorname{Hol}(D)$ such that for all $z \in D$, $|f(z)| \le c\,|z|^{\alpha}\,|1-z|^{\beta}$. In this case, if, e.g., $\gamma = \max\{\alpha,\beta\}$, and a function $w$ is such that $w \in \operatorname{Hol}(D)$ and $w \in \operatorname{Lip}_{\gamma}(D)$, then $w \in M_{\alpha,\beta}(D)$. The Sinc points are $z_j = e^{jh}/(1 + e^{jh})$, and $1/\phi'(z_j) = e^{jh}/(1 + e^{jh})^2$.

Example 2.2: If $\Gamma = (0,\infty)$, and if $D$ is the "sector" $D = \{z \in \mathbb{C} : |\arg(z)| < d\}$, then $\phi(z) = \log(z)$, the relation (2.3) reduces to $f(z) = g(z) - [g(0) + z\,g(\infty)]/(1+z)$, and the class $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \operatorname{Hol}(D)$ such that if $z \in D$ and $|z| \le 1$ then $|f(z)| \le c\,|z|^{\alpha}$, while if $z \in D$ and $|z| \ge 1$, then $|f(z)| \le c\,|z|^{-\beta}$. This map thus allows for algebraic decay at both $x = 0$ and $x = \infty$. The Sinc points are $z_j = e^{jh}$, and $1/\phi'(z_j) = e^{jh}$.

Example 2.3: If $\Gamma = (0,\infty)$, and if $D$ is the "bullet-shaped" region $D = \{z \in \mathbb{C} : |\arg(\sinh(z))| < d\}$, then $\phi(z) = \log(\sinh(z))$. The relation (2.3) then reduces to $f(z) = g(z) - [g(0) + \sinh(z)\,g(\infty)]/(1 + \sinh(z))$, and $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \operatorname{Hol}(D)$ such that if $z \in D$ and $|z| \le 1$ then $|f(z)| \le c\,|z|^{\alpha}$, while if $z \in D$ and $|z| \ge 1$, then $|f(z)| \le c\,\exp\{-\beta|z|\}$. This map thus allows for algebraic decay at $x = 0$ and exponential decay at $x = \infty$. The Sinc points are $z_j = \log[e^{jh} + (1 + e^{2jh})^{1/2}]$, and $1/\phi'(z_j) = (1 + e^{-2jh})^{-1/2}$.

Example 2.4: If $\Gamma = \mathbb{R}$, and if $D$ is the above-defined "strip", $D = D_d$, take $\phi(z) = z$. The relation (2.3) then reduces to $f(z) = g(z) - [g(-\infty) + e^{z}\,g(\infty)]/(1 + e^{z})$. The class $L_{\alpha,\beta}(D)$ is the class of all functions $f \in \operatorname{Hol}(D)$ such that if $z \in D$ and $\Re z \le 0$ then $|f(z)| \le c\,e^{\alpha \Re z}$, while if $z \in D$ and $\Re z \ge 0$, then $|f(z)| \le c\,e^{-\beta \Re z}$. The Sinc points are $z_j = jh$, and $1/\phi'(z_j) = 1$.

Turning to the eigenvalues of the Sinc indefinite-integration matrix $h\,I^{(-1)}$, we define

$$
\mathrm{ev}(x) = \begin{cases} \dfrac{h}{1 - e^{-ix}}, & x \ne 0, \\[6pt] \dfrac{hm}{2}, & x = 0. \end{cases}
$$

The complex numbers $s_{m,j}$, which approximate the eigenvalues of $h\,I^{(-1)}$, can be chosen by evaluating $\mathrm{ev}(x)$ at equally spaced points $x \in (-\pi,\pi)$, i.e.,

$$
s_{m,j} = \mathrm{ev}(jh), \tag{3.14}
$$

where $h$ is chosen so that we have $m$ (with $m = 2N+1$ an odd integer) numbers equally spaced in $(-\pi,\pi)$. We can then diagonalize the square matrices of order $m$, $A_m = h\,I^{(-1)}$ and $A_m^{T}$, using the same eigenvectors, according to the scheme

$$
A_m = F \,\mathrm{diag}[s_{m,-N}, \ldots, s_{m,N}]\, \bar{F}, \qquad A_m^{T} = \bar{F} \,\mathrm{diag}[s_{m,-N}, \ldots, s_{m,N}]\, F,
$$

where $F$ is the matrix used in computing the DFT (discrete Fourier transform) and the bar denotes complex conjugation. The backprojected image $b(x,y)$ is related to the original image $i(x,y)$ by the equation

$$
b(x,y) = \int_{\mathbb{R}} \int_{\mathbb{R}} f(x - \xi,\, y - \eta)\, i(\xi,\eta)\, d\xi\, d\eta, \tag{3.15}
$$

where, in the notation of (2.20), the "kernel" $f(x,y)$ is given by

$$
f(x,y) = (x^2 + y^2)^{-1/2}. \tag{3.16}
$$

The definite convolution in (3.15) can be split into four indefinite convolutions, viz.

$$
b(x,y) = \int_{\mathbb{R}} \int_{\mathbb{R}} = \int_{y}^{\infty} \int_{-\infty}^{x} + \int_{y}^{\infty} \int_{x}^{\infty} + \int_{-\infty}^{y} \int_{-\infty}^{x} + \int_{-\infty}^{y} \int_{x}^{\infty}.
$$
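Each of the four pieces above is an indefinite convolution built from the Sinc indefinite-integration matrix $I^{(-1)}$, whose Toeplitz entries are $\delta^{(-1)}_{j-k}$ with $\delta^{(-1)}_k = \tfrac12 + \int_0^k \operatorname{sinc}(t)\,dt$ (see [8]). As a minimal sketch, assuming $\Gamma = \mathbb{R}$ with $\phi(z) = z$ (so the quadrature weights $h/\phi'(z_k)$ reduce to $h$), the following fragment builds $I^{(-1)}$ and checks that $h\,I^{(-1)}$ applied to samples of a Gaussian reproduces its indefinite integral:

```python
import numpy as np
from math import erf, pi, sqrt

# Sinc indefinite integration on Gamma = R (phi(z) = z, weights h).
# The Toeplitz matrix I^(-1) has entries delta_{j-k}, where
#   delta_k = 1/2 + integral_0^k sin(pi t)/(pi t) dt;
# the integral is evaluated here by a fine trapezoid rule.

N, h = 40, 0.25
k = np.arange(-N, N + 1)
x = k * h                                # Sinc points on the real line

# cumulative integral of sinc over [0, 2N], sampled at the integers
t = np.linspace(0.0, 2 * N, 2 * N * 1000 + 1)
y = np.sinc(t)                           # numpy's sinc(t) = sin(pi t)/(pi t)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * (t[1] - t[0]))))
Si = cum[::1000]                         # Si[m] = integral_0^m sinc(t) dt

idx = np.subtract.outer(k, k)            # j - k
Iinv = 0.5 + np.sign(idx) * Si[np.abs(idx)]   # I^(-1)_{jk} = delta_{j-k}

f = np.exp(-x ** 2)
approx = h * Iinv @ f                    # ~ integral_{-inf}^{x_j} f(t) dt
exact = np.array([sqrt(pi) / 2.0 * (1.0 + erf(xi)) for xi in x])
err = np.max(np.abs(approx - exact))
print(err)                               # small
```

The same construction, composed with the conformal maps of Examples 2.1 to 2.3 and the weights $1/\phi'(z_j)$, yields indefinite integration over finite and semi-infinite intervals.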

We apply the procedure of Example 2.7 to the integral

$$
b_1(x,y) = \int_{y}^{\infty} \int_{-\infty}^{x} f(x - \xi,\, y - \eta)\, i(\xi,\eta)\, d\xi\, d\eta.
$$

To this end, we define

$$
A_1 = h\,I^{(-1)} = F S \bar{F}, \qquad A_2 = h\,(I^{(-1)})^{T} = \bar{F} S F,
$$

where $F$ is the Fourier matrix used in computing a DFT, and the bar denotes complex conjugation ($\overline{x + iy} = x - iy$). The last expression on each line follows since $I^{(-1)}$ is a Toeplitz matrix. Here $S$ is the diagonal matrix $S = \mathrm{diag}(s_{m,-N}, \ldots, s_{m,N})$ whose entries are defined in Equation (3.14). In this case it is possible to evaluate explicitly the transform defined in (2.25), i.e., with $f$ defined as in (3.16), we get

$$
\begin{aligned}
F(s^{(1)}, s^{(2)}) &= \int_{0}^{\infty} \int_{0}^{\infty} f(x,y)\, e^{-x/s^{(1)} - y/s^{(2)}}\, dx\, dy \\
&= \frac{s^{(1)} s^{(2)}}{2\ell} \log\left\{ \frac{(\ell + s^{(1)})(\ell + s^{(2)})}{(\ell - s^{(1)})(\ell - s^{(2)})} \right\}, \qquad \ell = \sqrt{(s^{(1)})^2 + (s^{(2)})^2}.
\end{aligned} \tag{3.17}
$$

The use of this DFT procedure enables a simplified version of the algorithm of Example 2.7, namely,

$$
b_1 \simeq (F \otimes F)\, G\, (\bar{F} \otimes \bar{F})\, i, \tag{3.18}
$$

where $\otimes$ refers to the tensor product. Similarly, it is easily seen that:

For $b_2$, we need only use $G(\bar{s}^{(1)}, s^{(2)})$; for $b_3$, we need only use $G(s^{(1)}, \bar{s}^{(2)})$; and for $b_4$, we need only use $G(\bar{s}^{(1)}, \bar{s}^{(2)})$. Letting

$$
K = G(s^{(1)}, s^{(2)}) + G(\bar{s}^{(1)}, s^{(2)}) + G(s^{(1)}, \bar{s}^{(2)}) + G(\bar{s}^{(1)}, \bar{s}^{(2)})
$$

and $b = b_1 + b_2 + b_3 + b_4$, we can write

$$
b \simeq (F \otimes F)\, K\, (\bar{F} \otimes \bar{F})\, i,
$$

where $K$ is real. Letting $\lambda$ be a regularizing parameter, we can reconstruct our image using

$$
i \simeq (F \otimes F)\, \frac{K}{\lambda + K^2}\, (\bar{F} \otimes \bar{F})\, b.
$$

In addition, we point out that each of the matrix multiplications applied to our image can be implemented by applying $m$ fast Fourier transforms in parallel.
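A minimal one-dimensional sketch of this regularized reconstruction is given below; the real, positive DFT symbol $K$ is a toy choice of our own, standing in for the diagonalized backprojection kernel:

```python
import numpy as np

# Toy sketch of the regularized reconstruction i ~ K/(lam + K^2) applied in
# the DFT domain, with our own real, positive symbol K and regularizer lam.

n = 128
rng = np.random.default_rng(0)
i_true = rng.standard_normal(n)

omega = np.fft.fftfreq(n, d=1.0 / n)             # integer frequencies
K = 1.0 / (1.0 + (2 * np.pi * omega / n) ** 2)   # real positive symbol

b = np.fft.ifft(K * np.fft.fft(i_true)).real     # simulated backprojected data

lam = 1e-10
i_rec = np.fft.ifft(np.fft.fft(b) * K / (lam + K ** 2)).real

rel = np.linalg.norm(i_rec - i_true) / np.linalg.norm(i_true)
print(rel)    # essentially zero for noise-free data
```

With noisy data, $\lambda$ trades resolution against noise amplification, exactly as in the two-dimensional tensor-product form above.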

3.3 Solving Ill-Posed Problems via Sinc Sources

Many consider the solution of the ultrasonic tomography problem to be an ill-posed problem. We, however, take the view that ill-posedness is procedure dependent: inversion involves the construction of an approximate inversion operator using a particular basis, and while the condition number of the resulting operator may be large for one basis (i.e., the ill-posed situation), it can be relatively small for another. We illustrate this point in this section with two inverse problem examples: the moment problem, which is the mathematically simplest to state; and the inversion of the Helmholtz equation, which is a frequently used model in ultrasonic tomography inversion.

Example 3.4: Moment Problems. We use a simple moment problem to illustrate the idea of the above paragraph. The problem we consider is that of reconstructing the function $w$ given the moments

$$
\int_{\Gamma} t_k(x)\, dw(x) = \mu_k, \qquad (k \in \mathbb{N} \text{ or } k \in \mathbb{Z}), \tag{3.19}
$$

where $\mathbb{N}$ (resp., $\mathbb{Z}$) denotes the set of non-negative integers (resp., the set of all integers). If $t_k(x) = x^k$ for every non-negative integer $k$, then the problem is computationally difficult. For example, for the case of the function $w_0(x) = 1 + \arcsin(x)$, one approach is to use the matrix $A = [\mu_{i+j-1}]$, where $\mu_{i+j-1}$ denotes the $(i,j)$th element of the matrix. When the order of $A$ is $8$, the condition number of $A$ is already approximately $3 \times 10^{10}$. On the other hand, by using the orthogonal Sinc moments $t_k(x) = S(k,h) \circ \phi(x)$, the situation is considerably different. If we assume, for example, that $w \in M_{\alpha,\beta}(D)$, and set $u = (\mu_{-N}, \ldots, \mu_N)^T$, we find, in the notation of (2.8), (2.9) and (2.15), that

$$
\| w - \omega_m\, I^{(-1)} u \| \le C\, \varepsilon_N. \tag{3.20}
$$

This stable and efficient procedure for constructing such a $w$ requires $O(N \log N)$ work to achieve a result accurate to within $\varepsilon = O(\varepsilon_N)$, and hence the complexity is $O\big( (\log(1/\varepsilon))^2 \log(\log(1/\varepsilon)) \big)$.
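The conditioning contrast can be reproduced in a few lines. The Hilbert matrix below is our illustrative stand-in for the moment matrix of $w_0$ (it arises from $t_k(x) = x^k$ with $dw = dx$ on $(0,1)$), and the second computation checks the near-orthogonality of translated sinc functions numerically:

```python
import numpy as np

# Power moments lead to Hilbert-type matrices: with t_k(x) = x^k and dw = dx
# on (0,1), the moment matrix is A_{ij} = integral_0^1 x^{i+j-2} dx = 1/(i+j-1).
n = 8
A = np.array([[1.0 / (i + j - 1.0) for j in range(1, n + 1)]
              for i in range(1, n + 1)])
print(np.linalg.cond(A))                  # ~1.5e10 already at order 8

# By contrast, translated sinc functions S(k,h)(x) = sinc((x - kh)/h) are
# orthogonal: their Gram matrix is h times the identity, so Sinc moments
# decouple and no ill-conditioned solve is needed.
h, dx = 1.0, 0.01
x = np.arange(-60.0, 60.0, dx)
S = np.array([np.sinc((x - kk * h) / h) for kk in range(-3, 4)])
G = (S * dx) @ S.T                        # quadrature approximation to the Gram matrix
print(np.max(np.abs(G / h - np.eye(7))))  # small: near-perfect orthogonality
```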

Thus, a novel method of approximating the solution to other moment problems of the form (3.19) is to first create nearly orthogonal "source" functions such as, e.g., $S(k,h) \circ \phi$, using linear combinations of the usual sources (that lead to ill-posed problems).

Example 3.5: Ultrasonic Tomography Inversion. The Helmholtz equation is a frequently used model for imaging a part of the interior of a human being via ultrasound. This equation takes the form

$$
\nabla^2 U + \kappa^2 (1 + f)\, U = 0, \tag{3.21}
$$

where $f$ is a function of $\mathbf{r} \in \mathbb{R}^3$ that has support in the half-space $\mathbb{H} = \{\mathbf{r} = (x,y,z) \in \mathbb{R}^3 : (x,y) \in \mathbb{R}^2,\ z > 0\}$, and where we shall assume that

$$
\kappa = |\kappa|\, e^{i\theta}, \qquad 0 \le \theta \le \pi. \tag{3.22}
$$

As an aid to describing our inversion procedure, we let $\mathbb{K}$ denote the exterior of $\mathbb{H}$, i.e., $\mathbb{K} = \mathbb{R}^3 \setminus \mathbb{H}$. The problem of inverting (3.21) is to reconstruct $f$ on $\mathbb{H}$ using sources $u$ in $\mathbb{K}$. These sources can be expressed in the form

$$
u(\mathbf{r}) = \iiint_{\mathbb{K}} G_I(\mathbf{r} - \mathbf{r}')\, w(\mathbf{r}')\, d\mathbf{r}', \tag{3.23}
$$

where $w$ is at this stage some generalized function, e.g., a sum of delta functions, with support in $\mathbb{K}$, and where $G_I$ denotes the free-space Green's function,

$$
G_I(\mathbf{r}) = \frac{e^{i\kappa r}}{4\pi r}, \qquad r = |\mathbf{r}|. \tag{3.24}
$$

We remark here that, considered as a function of $\mathbf{r}'$, this function $G_I$ satisfies the equation

$$
\nabla^2 G_I(\mathbf{r}'' - \mathbf{r}') + \kappa^2\, G_I(\mathbf{r}'' - \mathbf{r}') = -\delta^3(\mathbf{r}'' - \mathbf{r}'). \tag{3.25}
$$
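Away from the singularity, (3.25) reduces to the homogeneous Helmholtz equation, which can be spot-checked by finite differences; $\kappa$ and the evaluation point below are our own sample choices, with $\kappa$ given a small positive imaginary part (producing the exponential decay noted below):

```python
import numpy as np

# Finite-difference check that G_I(r) = e^{i kappa r}/(4 pi r) satisfies
# laplacian(G_I) + kappa^2 G_I = 0 for r != 0.

kappa = 2.0 + 0.1j

def G(p):
    r = np.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return np.exp(1j * kappa * r) / (4.0 * np.pi * r)

p0 = np.array([0.7, -0.4, 0.5])   # a point away from the origin
h = 1e-2
lap = sum(
    (G(p0 + h * e) - 2.0 * G(p0) + G(p0 - h * e)) / h ** 2
    for e in np.eye(3)
)
residual = lap + kappa ** 2 * G(p0)
rel = abs(residual) / abs(kappa ** 2 * G(p0))
print(rel)    # O(h^2), i.e. small
```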

The inversion procedure we shall describe is based on two integral equations, whose derivation we now sketch. At the outset, we mention that the input field $u$ defined as in (3.23) above satisfies, in $\mathbb{H}$, the equation

$$
\nabla^2 u + \kappa^2 u = 0. \tag{3.26}
$$

Then, letting $v$ denote the scattered field, the total field $U$, as defined in (3.21), is related to $u$ and $v$ by the equation $U = u + v$. Moreover, it then follows from (3.21) and (3.26) that

$$
\nabla^2 v + \kappa^2 v = -\kappa^2 f u \tag{3.27}
$$

in $\mathbb{H}$. Next, for $\mathbf{r}$ and $\mathbf{r}' \in \mathbb{R}^3$, let $G_S = G_S(\mathbf{r}, \mathbf{r}')$ be the outward-going Green's function defined by the equation

$$
\nabla^2 G_S + \kappa^2 (1 + f)\, G_S = -\delta^3(\mathbf{r} - \mathbf{r}'), \tag{3.28}
$$

where we assume that this equation is considered as a function of $\mathbf{r}'$, with $\mathbf{r}$ fixed, and where $\delta^3(\mathbf{r} - \mathbf{r}')$ denotes the three-dimensional delta function. Then $G_S(\mathbf{r}, \mathbf{r}') \to 0$ as $|\mathbf{r} - \mathbf{r}'| \to \infty$, and indeed, the rate of approach may be shown to be exponential if $\Im \kappa > 0$. We can then deduce the integral expression

$$
v(\mathbf{r}) = \kappa^2 \iiint_{\mathbb{H}} G_S(\mathbf{r}, \mathbf{r}')\, f(\mathbf{r}')\, u(\mathbf{r}')\, d\mathbf{r}'. \tag{3.29}
$$

We remark here that $G_S$ is unknown at this point, and hence (3.29) is, in fact, an integral equation. By applying Green's theorem to $G_I$ and $G_S$, and using (3.25) and (3.28) above, we arrive at the integral equation

$$
G_S(\mathbf{r}, \mathbf{r}'') = G_I(\mathbf{r} - \mathbf{r}'') + \kappa^2 \iiint_{\mathbb{H}} G_I(\mathbf{r}'' - \mathbf{r}')\, f(\mathbf{r}')\, G_S(\mathbf{r}, \mathbf{r}')\, d\mathbf{r}'. \tag{3.30}
$$

These last two equations form the basis of our inversion scheme. Suppose, for example, that $\mathbf{r}$ is a fixed point on the boundary of $\mathbb{H}$, and that the function $u$ in (3.29) is a delta function $\delta(\mathbf{r}' - \mathbf{r}'')$, so that measuring $v(\mathbf{r})$ gives us the product $G_S(\mathbf{r}, \mathbf{r}'')\, f(\mathbf{r}'')$. (Of course, $u$ as defined in (3.23) above cannot be an exact delta function, since $u$ must, in fact, satisfy (3.26) in $\mathbb{H}$. On the other hand, using (3.26), it is possible [1] to obtain an accurate approximate delta function, in the absence of noise.) By repeating such measurements of $v$ for a fixed family of Sinc points $\mathbf{r}''$ in $\mathbb{H}$, we are able to use these numbers in the second equation, (3.30) above, to simultaneously evaluate the block of all numbers $G_S(\mathbf{r}, \mathbf{r}'')$ at the Sinc points $\mathbf{r}''$ via efficient Sinc indefinite integral convolution. We thus obtain both sets of values $G_S(\mathbf{r}, \mathbf{r}'')\, f(\mathbf{r}'')$ and $G_S(\mathbf{r}, \mathbf{r}'')$ at the set of Sinc points in $\mathbb{H}$, and from these we get the values $f(\mathbf{r}'')$ via a simple division at each point $\mathbf{r}''$.

Next, we sketch how we can select the generalized function $w$ in (3.23) above so that the resulting function $u$ defined by (3.23) is an approximate delta function corresponding to a fixed point in $\mathbb{H}$. While it has already been established that such sources do, in fact, exist [1], we mention, at this point, that functions $w$ which give rise to stable and efficient inversion procedures have not yet been determined. Along lines that are more constructive than those of [1], we now describe two families of sources for the case of equation (3.21), when the body is a half-space:

1. A linear combination of point sources, of the form

$$
u(\mathbf{r}) = \iiint_{\mathbb{K}} G_I(\mathbf{r} - \mathbf{r}')\, w(\mathbf{r}')\, d\mathbf{r}', \tag{3.31}
$$

where $w$ is a sum of delta functions with support in $\mathbb{K} = \mathbb{R}^3 \setminus \mathbb{H}$; and

2. A linear combination of plane-wave sources,

$$
u(\mathbf{r}) = \iint_{S} \omega(\boldsymbol{\xi})\, e^{\,i \boldsymbol{\xi} \cdot (\boldsymbol{\rho} - \boldsymbol{\rho}') + i \sqrt{\kappa^2 - |\boldsymbol{\xi}|^2}\, (z - z')}\, d\boldsymbol{\xi}, \tag{3.32}
$$

with $S \subseteq \mathbb{R}^2$, where $\boldsymbol{\rho} = (x,y)$.

Both sources, (3.31) and (3.32), satisfy (3.26) in $\mathbb{H}$. It was shown in [2], for the two-dimensional case, that the one-dimensional analogue of the integral (3.32) does indeed reduce to a Sinc-function source when $z = z'$, $S$ is a suitably selected finite interval, and $\omega(\boldsymbol{\xi}) \equiv 1$; moreover, it was shown in that same article that the resulting input field is a reasonably accurate approximation to a Sinc-function source in the region of practical use, even when $z \ne z'$. It thus follows that for a suitably selected square $S$, the source (3.32), with, e.g., $\omega(\boldsymbol{\xi}) \equiv 1$, is an accurate approximation to a product of Sinc functions, i.e.,

$$
u(x,y,z) = \operatorname{sinc}\left( \frac{x - x'}{h} \right) \operatorname{sinc}\left( \frac{y - y'}{h} \right) W(z - z'), \tag{3.33}
$$

where $W(z - z') \approx 1$ in a neighborhood of $z = z'$. Of course, our reason for wanting Sinc-function sources is that they are highly accurate approximations to delta functions, i.e. (see [8, Eq. (3.2.11)], or [9]), for an arbitrary integrable function $g$ defined on $\mathbb{R}$,

$$
\int_{\mathbb{R}} g(x)\, \operatorname{sinc}\left( \frac{x - x'}{h} \right) dx = h\, [\, g(x') + \varepsilon \,], \tag{3.34}
$$

with $\varepsilon$ a very small number relative to $1$. Similarly, we can select the generalized function $w$ in (3.31) above so that the resulting source function $u$ defined by (3.31) is an approximate delta function corresponding to a fixed point in $\mathbb{H}$ (see [5, 6, 12]).
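Relation (3.34) is easy to confirm numerically; in the following sketch, $g$, $h$ and $x'$ are our own sample choices:

```python
import numpy as np

# Numerical check of (3.34): the sinc kernel acts as an approximate (scaled)
# delta function, here against a Gaussian g.
h, xp = 0.5, 0.3
dx = 1e-3
x = np.arange(-20.0, 20.0, dx)
g = np.exp(-x ** 2)
integral = dx * np.sum(g * np.sinc((x - xp) / h))
err = abs(integral - h * np.exp(-xp ** 2))   # this is h*|eps| in (3.34)
print(err)                                   # very small relative to h*g(x')
```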

We now describe the following procedure which, while requiring large coefficients $c_m$, illustrates at least one constructive method for determining the functions $w$ and the corresponding sources $u$. At the outset, we could represent $w$ via a finite sum,

$$
w(\mathbf{r}) = v(\boldsymbol{\rho}) \sum_{m} c_m\, e^{-i\kappa z_m}\, \delta(z + z_m), \tag{3.35}
$$

where $\boldsymbol{\rho} = (x,y)$, the $c_m$ are constants, and the $z_m$ are a fixed set of discrete positive numbers. In this case, the function $u$ takes the form

$$
u(\mathbf{r}) = \iint_{\mathbb{R}^2} v(\boldsymbol{\rho}')\, \sum_m K_m\, d\boldsymbol{\rho}', \tag{3.36}
$$

where

$$
K_m = c_m\, e^{-i\kappa z_m}\, \frac{\exp\left\{ i\kappa \left[ |\boldsymbol{\rho} - \boldsymbol{\rho}'|^2 + (z + z_m)^2 \right]^{1/2} \right\}}{4\pi \left[ |\boldsymbol{\rho} - \boldsymbol{\rho}'|^2 + (z + z_m)^2 \right]^{1/2}}. \tag{3.37}
$$

We may note that when $\boldsymbol{\rho}' = \boldsymbol{\rho}$, then $\sum_m K_m$ in this last expression reduces to

$$
\left. \sum_m K_m \right|_{\boldsymbol{\rho}' = \boldsymbol{\rho}} = e^{i\kappa z} \sum_m \frac{c_m}{4\pi (z + z_m)}. \tag{3.38}
$$
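The collapse in (3.38) follows directly from (3.37) and can be verified in a few lines; the values of $\kappa$, $z$, $c_m$ and $z_m$ below are arbitrary sample choices:

```python
import numpy as np

# Check of (3.38): at rho' = rho, each term K_m of (3.37) collapses to
# c_m e^{i kappa z} / (4 pi (z + z_m)).
kappa = 1.5 + 0.2j
z = 0.8
zm = np.array([0.3, 0.7, 1.1, 2.0])
cm = np.array([2.0, -1.0, 0.5, 3.0])

rho_diff = 0.0                          # |rho - rho'|^2 = 0
R = np.sqrt(rho_diff + (z + zm) ** 2)   # reduces to z + z_m (> 0)
Km = cm * np.exp(-1j * kappa * zm) * np.exp(1j * kappa * R) / (4.0 * np.pi * R)
closed = np.exp(1j * kappa * z) * np.sum(cm / (4.0 * np.pi * (z + zm)))
print(abs(Km.sum() - closed))           # agrees to machine precision
```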

It has been shown that the constants $c_m$ and $z_m$ in the expression on the right-hand side of this equation may be selected so as to approximate arbitrarily closely a Sinc function $S(k,h) \circ \log(z)$ (see Sections 5.3 and 5.4, and especially Problem 5.4.4 of [8]), and this function is in turn an arbitrarily close approximation of the one-dimensional delta function in the $z$-direction (see Corollary 4.2.15 of [8]). Once the $c_m$ have been selected in this manner, we can next turn to the selection of $v(\boldsymbol{\rho})$; while this selection is somewhat more complicated, we can nevertheless choose the function $v$ so that, e.g., $u$ is an approximate two-dimensional delta function. The three-dimensional recovery is thus somewhat more complicated, since we need to sample the source along a line in the plane $\{\mathbf{r} = (x,y,z) : (x,y) \in \mathbb{R}^2,\ z = 0\}$. Nevertheless it reduces, in effect, to the simple procedure outlined above. We mention that once we have found an explicit function $w$ for determining a source $u$ via (3.31), we can determine the same source via a two-dimensional expression of the form

$$
u(\mathbf{r}) = \iint_{\mathbb{R}^2} G_I(\boldsymbol{\rho} - \boldsymbol{\rho}',\, z)\, \omega(\boldsymbol{\rho}')\, d\boldsymbol{\rho}'. \tag{3.39}
$$

For, if $\hat{w}(\xi, \eta, \zeta)$ denotes the three-dimensional Fourier transform of $w(x,y,z)$, and if $\tilde{\omega}(\xi, \eta)$ denotes the two-dimensional Fourier transform of $\omega(x,y)$, then we need merely make the identification

$$
\tilde{\omega}(\xi, \eta) = \hat{w}\left( \xi, \eta, -\sqrt{\kappa^2 - \xi^2 - \eta^2} \right), \tag{3.40}
$$

where we take that square root of $\kappa^2 - \xi^2 - \eta^2$ for which the imaginary part is non-negative.

4 References

1. M.I. Belishev, On an Approach to Multidimensional Inverse Problems for the Wave Equation, Soviet Math. Dokl. 36 (1988), no. 3, pp. 481–484.
2. Mok Keun Jeong, Tae Kynong Song, Song Bai Park, and Jong Beom Ra, Generation of Sinc Plane Wave by One Dimensional Array for Application in Ultrasonic Imaging, IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 43 (1996), no. 2, pp. 285–295.
3. M. Kowalski, K. Sikorski and F. Stenger, Selected Topics in Approximation and Computation, Oxford University Press (1993).
4. J. Lund and K.L. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM (1992).
5. M. O'Reilly, A Backprojection Filter Algorithm by Solving the Convolution Equation, submitted for publication.
6. M. O'Reilly and F. Stenger, A New Approach to Inverse Problems Using Sinc Approximation, accepted for publication in Inverse Problems.
7. S.W. Rowland, Computer Implementation of Image Reconstruction Formulas, in Image Reconstruction from Projections, Springer-Verlag, Berlin (1979), Chapter 2.
8. F. Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer-Verlag, New York (1993).
9. F. Stenger, Collocating Convolutions, Math. Comp. 64 (1995), pp. 211–235.
10. F. Stenger, B. Barkey and R. Vakili, Sinc Convolution Method of Solution of Burgers' Equation, pp. 341–354 of Proceedings of Computation and Control III, edited by K. Bowers and J. Lund, Birkhauser, Basel (1993).
11. F. Stenger, B. Keyes, M. O'Reilly, and K. Parker, ODE-IVP-PACK via Sinc Indefinite Integration and Newton Iteration, to appear in Numerical Algorithms.
12. F. Stenger, Sinc Inversion of the Helmholtz Equation without Computing the Forward Solution, pp. 149–157 of Proceedings of the International Workshop on Inverse Problems, Ho Chi Minh City (1995).
13. S. Suzuki and S. Yamaguchi, Comparison Between an Image Reconstruction Method of Filtering Backprojection and the Filtered Backprojection Method, Appl. Optics 27 (1988), no. 14, pp. 2867–2870.
