On the Accuracy of Surface Spline Approximation and Interpolation to Bump Functions

Aurelian Bejancu*

February 2000

* Work supported by Trinity College and Clare College, Cambridge.

Abstract

Let $\Omega$ be the closure of a bounded open set in $\mathbb{R}^d$ and, for a sufficiently large integer $\mu$, let $f \in C^{\mu}(\Omega)$ be a real-valued "bump" function, i.e. $\mathrm{supp}(f) \subset \mathrm{int}(\Omega)$. First, for each $h > 0$, we construct a surface spline function $\sigma_h$ whose centres are the vertices of the grid $V_h = \Omega \cap h\mathbb{Z}^d$, such that $\sigma_h$ approximates $f$ uniformly over $\Omega$ with the maximal asymptotic accuracy rate for $h \to 0$. Second, if $\ell_1, \ell_2, \ldots, \ell_n$ are the Lagrange functions for surface spline interpolation on the grid $V_h$, we prove that $\max_{x\in\Omega}\sum_{j=1}^{n}\ell_j^2(x)$ is bounded above independently of the mesh-size $h$. An interesting consequence of these two results for the case of interpolation on $V_h$ to the values of a bump data function $f$ is obtained by means of the Lebesgue inequality.

2000 Mathematics subject classification: 41A05, 41A15, 41A25, 41A63.

1 Introduction

Surface spline interpolation belongs to the class of radial basis function methods for multivariable approximation. Let $d$ be a positive integer and let $\Omega$ be the closure of a bounded open set in $\mathbb{R}^d$. In order to describe the surface spline method for interpolation at the vertices of the uniform $d$-dimensional grid $V_h = \Omega \cap h\mathbb{Z}^d$ of mesh-size $h > 0$, we denote the elements of $V_h$ by $hz_1, hz_2, \ldots, hz_n$, where $\{z_1, z_2, \ldots, z_n\} \subset \mathbb{Z}^d$. Notice that $n = O(h^{-d})$, as $h \to 0$. For any real parameter $\beta > 0$, define the basis function $\phi : [0,\infty) \to \mathbb{R}$ by the formula

\[
  \phi(r) \;=\;
  \begin{cases}
    r^{\beta}, & \text{if } \beta \notin 2\mathbb{N},\\
    r^{\beta}\ln r, & \text{if } \beta \in 2\mathbb{N},
  \end{cases}
  \tag{1.1}
\]

and let $m$ be the integer part of $\beta/2$. Further, let $\Pi^d_m$ be the space of polynomials on $\mathbb{R}^d$ of total degree not exceeding $m$, let $N = \dim \Pi^d_m = (d+m)!/(d!\,m!)$, and denote by $\{P_1, P_2, \ldots, P_N\}$ the monomial basis of $\Pi^d_m$. We also let $S_h$ be the linear space of functions $s$ of the form

\[
  s(x) \;=\; \sum_{j=1}^{n} c_j\,\phi(\|x - hz_j\|) \;+\; \sum_{l=1}^{N} c_{n+l}\,P_l(x),
  \qquad x \in \mathbb{R}^d,
  \tag{1.2}
\]

where $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^d$ and the first $n$ of the real coefficients $c_1, c_2, \ldots, c_{n+N}$ satisfy the constraints

\[
  \sum_{j=1}^{n} c_j\,P_l(hz_j) \;=\; 0,
  \qquad l = 1, 2, \ldots, N.
  \tag{1.3}
\]
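For concreteness (an illustration of (1.2)–(1.3) not spelled out in the paper), in the familiar thin plate spline case $d = 2$, $\beta = 2$ we have $\phi(r) = r^2\ln r$, $m = 1$, $N = 3$, and the monomial basis $\{1, x_1, x_2\}$ of $\Pi^2_1$, so that, writing $x = (x_1, x_2)$,

\[
  s(x) \;=\; \sum_{j=1}^{n} c_j\,\|x - hz_j\|^2 \ln\|x - hz_j\| \;+\; c_{n+1} + c_{n+2}x_1 + c_{n+3}x_2,
\]

subject to the three side conditions $\sum_j c_j = 0$, $\sum_j c_j (hz_j)_1 = 0$ and $\sum_j c_j (hz_j)_2 = 0$.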

Given any arbitrary function $f : \Omega \to \mathbb{R}$, it is known that, for each sufficiently small $h$, there exists a unique $s_h \in S_h$ satisfying

\[
  s_h(hz_k) \;=\; f(hz_k), \qquad k = 1, 2, \ldots, n.
  \tag{1.4}
\]

Moreover, this statement remains true if, in the definition of $S_h$, the place of $\Pi^d_m$ is taken by any other polynomial space $\Pi^d_k$ with $k \ge m$. It has become customary to call $s_h$ the surface spline interpolant to the values of $f$ at the vertices of the finite uniform grid $V_h$. For every parameter $\beta > 0$, the existence and uniqueness of the surface spline interpolant has been established theoretically by Duchon [6] in the general case of scattered interpolation points, using a variational approach. (In the scattered data case, $h$ is replaced by the Hausdorff distance between the set of interpolation points and the domain $\Omega$.) The same result also follows from the work of Micchelli [18], whose arguments apply to any basis function $\phi$ for which the derivative of order $m+1$ of the function $\psi := \phi(\sqrt{\cdot}\,)$ is strictly completely monotonic, i.e. $(-1)^k \psi^{(k+m+1)}(r) > 0$, $\forall\, r > 0$, $\forall\, k \in \mathbb{N}$. As a consequence of this existence and uniqueness result, for each $j = 1, 2, \ldots, n$, there exists a unique function $\ell_j \in S_h$ satisfying the Lagrange conditions

\[
  \ell_j(hz_k) \;=\; \delta_{kj}, \qquad k = 1, 2, \ldots, n,
  \tag{1.5}
\]

where $\delta_{kj}$ is the Kronecker delta. Thus we have the following Lagrange representation formula for the surface spline interpolant $s_h$ to the values of $f$ at the vertices of $V_h$:

\[
  s_h(x) \;=\; \sum_{j=1}^{n} f(hz_j)\,\ell_j(x), \qquad x \in \mathbb{R}^d.
  \tag{1.6}
\]

Note that each $\ell_j$ depends on $\beta$, $\Omega$ and $h$. A basic problem from the point of view of approximation theory is to study the accuracy to which $s_h$ approximates $f$ over $\Omega$ when $h \to 0$, under various smoothness assumptions on $f$. This problem and its version for scattered interpolation points have been investigated by Duchon [7], Arcangeli and Rabut [1], Madych and Nelson [14], Wu and Schaback [27], Powell [22], Matveev [16], Light and Wayne [13], Schaback [24, 25] and Johnson [9]–[12], who estimated the dependence on $h$ of the error (or of some of its derivatives) in the uniform or $L_p$-norm ($1 \le p < \infty$) over the domain $\Omega$. Further, Matveev [17] and Bejancu [2, 3] proved that the decay of the error as $h \to 0$ is significantly faster over a compact subset $K$ of the interior of $\Omega$. Specifically, for any sufficiently differentiable function $f$, we have

\[
  \max_{x\in K} |f(x) - s_h(x)| \;=\; O(h^{\beta+d}), \qquad \text{as } h \to 0,
  \tag{1.7}
\]

which matches the maximal convergence rate over $\mathbb{R}^d$ obtained by Buhmann [5] and Powell [21, Theorem 8.5] in the ideal case of interpolation on the infinite grid $h\mathbb{Z}^d$.

    h^{-1}      p = 4          p = 5
      16      0.00052648     0.00029267
      32      0.00021351     0.00011937
      64      0.00007959     0.00004445
     128      0.00002879     0.00001607
     256      0.00001029     0.00000574

Table 1: Values of the error $|(g_p - s_h)(\tfrac{1}{2}h)|$ for $d = 1$, $\beta = 2$.

In the present paper, we assume that $f$ is a "bump" function, i.e. $f$ is nonzero only on a compact subset of the interior of $\Omega$. Under this hypothesis, a natural question is whether the decay rate $h^{\beta+d}$ of (1.7) can also be attained by the uniform error $\max_{x\in\Omega}|f(x) - s_h(x)|$. We can check numerically that the answer to this question is negative for $d = 1$ and $\beta = 2$ (in which case $\phi(r) = r^2\ln r$, $m = 1$ and $N = \dim \Pi^1_1 = 2$). Indeed, let $\Omega = [0,1]$, $h := 1/(n-1)$, $n \in \{2, 3, \ldots\}$, and, for each positive integer $p$, define the product

\[
  g_p(x) \;=\; 10^{p+1}\,[\max\{0,\, x - 1/4\}]^{p}\,[\max\{0,\, 3/4 - x\}]^{p},
  \qquad x \in [0,1].
  \tag{1.8}
\]

Thus $g_p \in C^{p-1}(\Omega)$ and $\mathrm{supp}(g_p) = [1/4, 3/4]$. We choose the data function $f := g_p$ and we estimate the magnitude of the uniform error over $\Omega$ by evaluating the error function $e_h := |g_p - s_h|$ at $x = \tfrac{1}{2}h$. The coefficients of the representation of type (1.2) of $s_h$ are computed by solving the $(n+2)\times(n+2)$ system given by (1.3) and (1.4). For $p \in \{4, 5\}$, Table 1 shows that $e_h(\tfrac{1}{2}h)$ is reduced by a factor of approximately $2\sqrt{2}$ when $h^{-1}$ doubles, which corresponds to a decay of magnitude $h^{3/2}$. It can also be checked that the same rate of decay is usual for larger values of $p$ or even for $C^{\infty}$ bump functions $f$.
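The following minimal Python/NumPy sketch (not part of the original paper; the helper names `phi`, `g` and `error_at_half_h` are ours) reproduces this experiment: it assembles the $(n+2)\times(n+2)$ system given by (1.3) and (1.4) for $d = 1$, $\beta = 2$, solves for the coefficients of $s_h$, and prints $e_h(\tfrac12 h)$, which should decay at the $h^{3/2}$ rate reported in Table 1.

```python
import numpy as np

def phi(r):
    # Surface spline basis for beta = 2: phi(r) = r^2 ln r, with phi(0) := 0.
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    mask = r > 0.0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def g(x, p):
    # Bump data function (1.8) on [0, 1], supported on [1/4, 3/4].
    return 10.0 ** (p + 1) * np.maximum(0.0, x - 0.25) ** p * np.maximum(0.0, 0.75 - x) ** p

def error_at_half_h(inv_h, p):
    h = 1.0 / inv_h
    x = np.linspace(0.0, 1.0, inv_h + 1)        # grid Omega cap hZ = {0, h, ..., 1}
    n = x.size
    A = phi(np.abs(x[:, None] - x[None, :]))    # phi(|x_k - x_j|)
    P = np.column_stack([np.ones(n), x])        # monomial basis {1, x} of Pi^1_1
    M = np.zeros((n + 2, n + 2))
    M[:n, :n], M[:n, n:], M[n:, :n] = A, P, P.T
    rhs = np.concatenate([g(x, p), np.zeros(2)])
    coef = np.linalg.solve(M, rhs)              # interpolation (1.4) + constraints (1.3)
    c, d = coef[:n], coef[n:]
    xe = 0.5 * h
    s = c @ phi(np.abs(xe - x)) + d[0] + d[1] * xe
    return abs(g(xe, p) - s)

for p in (4, 5):
    for inv_h in (16, 32, 64, 128, 256):
        print(f"p={p}, 1/h={inv_h:4d}, error={error_at_half_h(inv_h, p):.8f}")
```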

However, the univariate natural spline case ($d = 1$, $\beta \in 2\mathbb{N}+1$) shows that a positive answer to the question above is sometimes possible. Much insight has been obtained recently by Johnson [11] for the multivariable case in which $\beta$ is a positive integer such that $\beta + d$ is even, and for the version of the surface spline interpolation method that uses $\Pi^d_{(\beta+d)/2-1}$ instead of $\Pi^d_m$ in the definition of $S_h$. His approach is based on the best approximation property that characterizes the surface spline interpolant in the variational framework of Duchon. When applied to the uniform norm of the error and to the grid of interpolation points $\Omega \cap h\mathbb{Z}^d$, Johnson's results imply

\[
  \max_{x\in\Omega} |f(x) - s_h(x)| \;=\; O(h^{\beta + d/2}), \qquad \text{as } h \to 0,
  \tag{1.9}
\]

for a sufficiently differentiable bump function $f$. In Section 2 we prove, under the same restriction on $\beta$ and $d$, that the maximal rate $O(h^{\beta+d})$ can hold uniformly over $\Omega$ for bump data functions if interpolation is replaced by approximation with a suitably constructed element of $S_h$ (cf. Theorem 1). On the other hand, there are no restrictions on $\beta$ or $d$ in the main result of Section 3, which states that the expression $\max_{x\in\Omega}\sum_{j=1}^{n}\ell_j^2(x)$ is bounded from above independently of the mesh-size $h$ (cf. Theorem 2). As a consequence, we obtain a new proof of the convergence rate (1.9) via the Lebesgue inequality (cf. Corollary 1). Furthermore, our approach shows that the order of convergence (1.9) may be improved to the maximal one $O(h^{\beta+d})$, provided that the Lebesgue constant of the surface spline interpolation operator admits an upper bound that is independent of $h$. This conjecture, which is a topic of current research for the author, is based on encouraging numerical evidence (cf. Remark 6).

Notation: $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^d$, $x^T y$ denotes the dot product of two vectors $x$ and $y$ in $\mathbb{R}^d$, $i$ is the upper-half plane root of $-1$, and $\exp(\cdot)$ is the usual complex exponential function of base $e$. Also, $\mathrm{const}(\beta, \Omega, \ldots)$ is a generic notation for various constants which depend only on the indicated arguments $\beta$, $\Omega$, etc.

2 Approximation of Maximal Order

In this section, we work under the assumption that $\beta + d$ is a positive even integer and we relate the error of surface spline interpolation on the finite grid $\Omega \cap h\mathbb{Z}^d$ to the error of interpolation on the infinite grid $h\mathbb{Z}^d$. Multivariable interpolation on the cardinal grid $\mathbb{Z}^d$ by means of a basis function of the form (1.1) in the case when $\beta + d$ is even has been considered by Madych and Nelson [15], extending the univariate cardinal spline theory of Schoenberg [26]. In the following, we need a basic result from [15], namely the existence of a unique set $\{\lambda_z : z \in \mathbb{Z}^d\}$ of real coefficients that satisfy

\[
  |\lambda_z| \;\le\; A \exp(-a\|z\|), \qquad \forall\, z \in \mathbb{Z}^d,
  \tag{2.1}
\]

for some positive constants $A$ and $a$, such that the function $\chi : \mathbb{R}^d \to \mathbb{R}$,

\[
  \chi(x) \;=\; \sum_{z\in\mathbb{Z}^d} \lambda_z\,\phi(\|x - z\|), \qquad x \in \mathbb{R}^d,
  \tag{2.2}
\]

which is defined by an absolutely and uniformly convergent series on any compact subset of $\mathbb{R}^d$, achieves the Lagrange conditions $\chi(0) = 1$ and $\chi(z) = 0$ for $z \in \mathbb{Z}^d\setminus 0$. In addition, the cardinal function $\chi$ has the property $|\chi(x)| \le B\exp(-b\|x\|)$, $\forall\, x \in \mathbb{R}^d$, for some positive constants $B$ and $b$.

A comprehensive treatment of multivariable cardinal interpolation with radial basis functions has been given by Buhmann [4, 5], whose theory applies to virtually all of the radial basis functions that are in current use. For example, if the parameter $\beta > 0$ in (1.1) is not a positive integer of the parity of $d$, Buhmann established that the corresponding coefficients $\lambda_z$, $z \in \mathbb{Z}^d$, of the cardinal function $\chi$ decay at least as fast as $O(\|z\|^{-(\beta+2d)})$, for large $\|z\|$. He went further to consider interpolation at the vertices of the scaled grid $h\mathbb{Z}^d$ ($h > 0$) and, based on polynomial reproduction properties, to derive convergence orders for such a scheme when $h \to 0$. In the particular case when the parameter $\beta$ of (1.1) is a positive integer of the same parity as $d$, Buhmann's results imply that, for any function $f^* \in C^{\beta+d}(\mathbb{R}^d)$ whose partial derivatives of order $\beta+d$ are bounded, the sum

\[
  I_h f^*(x) \;:=\; \sum_{\xi\in\mathbb{Z}^d} f^*(h\xi)\,\chi(h^{-1}x - \xi), \qquad x \in \mathbb{R}^d,
  \tag{2.3}
\]

which is absolutely and uniformly convergent in every compact subset of $\mathbb{R}^d$, provides not only the interpolation conditions

\[
  I_h f^*(hz) \;=\; f^*(hz), \qquad \forall\, z \in \mathbb{Z}^d,
  \tag{2.4}
\]

but also the approximation property

\[
  \max_{x\in\mathbb{R}^d} |f^*(x) - I_h f^*(x)| \;\le\; \mathrm{const}(f^*, \beta)\, h^{\beta+d}, \qquad \text{as } h \to 0.
  \tag{2.5}
\]
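As background (a standard description from the cardinal-interpolation literature cited above, Madych and Nelson [15] and Buhmann [4, 5]; it is not needed in the arguments below), the cardinal function $\chi$ can be characterized, at least formally, through its Fourier transform: since the generalized Fourier transform of $\phi(\|\cdot\|)$ is a constant multiple of $\|\cdot\|^{-\beta-d}$ away from the origin, one has

\[
  \hat{\chi}(t) \;=\; \frac{\|t\|^{-\beta-d}}{\sum_{k\in\mathbb{Z}^d}\|t + 2\pi k\|^{-\beta-d}},
  \qquad t \in \mathbb{R}^d \setminus 2\pi\mathbb{Z}^d,
\]

and it is the smoothness of this periodized quotient that lies behind the fast decay of $\chi$ and of its coefficients $\lambda_z$.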

To return to the case of surface spline interpolation on a finite grid, we let $\Omega$ be the closure of a bounded open set in $\mathbb{R}^d$ (note that no boundary conditions are imposed on the domain $\Omega$ in this section). For a fixed parameter $\beta > 0$ and for each $h > 0$, recall that $S_h$ is the linear space of surface spline functions of the form (1.2)–(1.3) associated with the grid $\Omega \cap h\mathbb{Z}^d$.

Theorem 1 Assume that $\beta + d$ is a positive even integer and let $f \in C^{\beta+d}(\Omega)$ be a bump data function, i.e. $\mathrm{supp}(f) = K$, for some compact set $K \subset \mathrm{int}(\Omega)$. Then, for every sufficiently small $h$, there exists a surface spline approximant $\sigma_h \in S_h$, such that

\[
  \max_{x\in\Omega} |f(x) - \sigma_h(x)| \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\beta+d}, \qquad \text{as } h \to 0.
  \tag{2.6}
\]

Proof. We construct $\sigma_h$ by using the above interpolant $I_h f^*$ to $f^*$ on the infinite grid $h\mathbb{Z}^d$, where $f^* \in C^{\beta+d}(\mathbb{R}^d)$ is the trivial extension of $f$ to $\mathbb{R}^d$ which takes the constant value zero outside $\Omega$. Since $\mathrm{supp}(f) = K$, equation (2.3) becomes

\[
  I_h f^*(x) \;=\; \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} f(h\xi)\,\chi(h^{-1}x - \xi), \qquad x \in \mathbb{R}^d.
  \tag{2.7}
\]

Before using (2.2) to substitute the corresponding series for $\chi(h^{-1}x - \xi)$ into (2.7), we observe that, for any $z \in \mathbb{Z}^d$, we have

\[
  \phi(\|h^{-1}x - z\|) \;=\;
  \begin{cases}
    h^{-\beta}\,\phi(\|x - hz\|), & \text{if } \beta \text{ is odd},\\
    h^{-\beta}\,\phi(\|x - hz\|) - (h^{-\beta}\ln h)\,\|x - hz\|^{\beta}, & \text{if } \beta \text{ is even}.
  \end{cases}
  \tag{2.8}
\]

Further, when $\beta$ is even, we use the "moment" properties of the coefficients $\lambda_z$, $z \in \mathbb{Z}^d$ (cf. Buhmann [5, p. 245]), namely

\[
  \sum_{z\in\mathbb{Z}^d} \lambda_z\, p(z) \;=\; 0, \qquad p \in \Pi^d_{d+\beta-1},
  \tag{2.9}
\]

to deduce

\[
  \sum_{z\in\mathbb{Z}^d} \lambda_z\,\|x - hz\|^{\beta} \;=\; 0.
  \tag{2.10}
\]

Consequently, for any positive integers $\beta$, $d$ which have the same parity and for any $\xi \in \mathbb{Z}^d$, (2.2) provides

\[
  \chi(h^{-1}x - \xi)
  \;=\; \sum_{z\in\mathbb{Z}^d} \lambda_z\,\phi(\|h^{-1}x - \xi - z\|)
  \;=\; \sum_{z\in\mathbb{Z}^d} h^{-\beta}\lambda_{z-\xi}\,\phi(\|x - hz\|),
  \qquad x \in \mathbb{R}^d.
  \tag{2.11}
\]

Since the index set $\mathbb{Z}^d \cap h^{-1}K$ of the sum in (2.7) is finite, we may make the change in the order of summation that provides the formula

\[
  I_h f^*(x)
  \;=\; \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} f(h\xi) \sum_{z\in\mathbb{Z}^d} h^{-\beta}\lambda_{z-\xi}\,\phi(\|x - hz\|)
  \;=\; \sum_{z\in\mathbb{Z}^d} \mu_z\,\phi(\|x - hz\|),
  \tag{2.12}
\]

where

\[
  \mu_z \;:=\; h^{-\beta} \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} f(h\xi)\,\lambda_{z-\xi},
  \qquad z \in \mathbb{Z}^d.
  \tag{2.13}
\]
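In one space dimension, (2.13) is simply a discrete convolution of the sampled data with the cardinal coefficients. A minimal sketch of this viewpoint (not from the paper; the helper name and indexing convention are ours, and the array `lam` of coefficients $\lambda_z$ is assumed to be precomputed and truncated, which the exponential decay (2.1) justifies):

```python
import numpy as np

def mu_coefficients(f_samples, lam, h, beta):
    """Discrete convolution form of (2.13) in one dimension.

    f_samples[i] : f(h * xi_i) for consecutive integers xi_i in h^{-1}K,
    lam[j]       : lambda_{j - r} for j = 0, ..., 2r (a truncated, centred
                   window of cardinal coefficients, assumed given).
    Returns the coefficients mu_z on the index range of the full convolution.
    """
    return h ** (-beta) * np.convolve(f_samples, lam, mode="full")
```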

The series on the last line of (2.12) is absolutely convergent for any $x \in \mathbb{R}^d$, because it is a finite sum of absolutely convergent series. Note that the coefficients $\mu_z$, $z \in \mathbb{Z}^d$, depend on $h$. We now write $I_h f^*(x)$ as

\[
  I_h f^*(x) \;=\; \tilde{I}_h f^*(x) \;+\; \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \mu_z\,\phi(\|x - hz\|),
  \tag{2.14}
\]

where the truncation operator $\tilde{I}_h$ is defined by

\[
  \tilde{I}_h f^*(x) \;:=\; \sum_{z\in\mathbb{Z}^d\cap h^{-1}\Omega} \mu_z\,\phi(\|x - hz\|),
  \qquad x \in \mathbb{R}^d,
  \tag{2.15}
\]

and we seek to modify $\tilde{I}_h f^*$ in order to obtain the required approximant $\sigma_h$. First, however, we estimate the difference $\tilde{I}_h f^* - I_h f^*$.

Lemma 1 The hypotheses of Theorem 1 imply the condition

\[
  \max_{x\in\Omega} |\tilde{I}_h f^*(x) - I_h f^*(x)| \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\beta+d}, \qquad \text{as } h \to 0.
  \tag{2.16}
\]

Proof. The essential ingredient is the exponential decay (2.1) of the coefficients $\lambda_z$, $z \in \mathbb{Z}^d$, of the cardinal function $\chi$, which holds only when $\beta$ and $d$ are positive integers of the same parity. As a consequence of this property, for any positive integer $p$, we have

\[
  |\lambda_z| \;\le\; \mathrm{const}(d, \beta, p)\,\|z\|^{-p}, \qquad \forall\, z \in \mathbb{Z}^d\setminus 0.
  \tag{2.17}
\]
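Indeed (a one-line justification of (2.17), not written out in the paper), the elementary bound $s^{p}e^{-as} \le (p/(ae))^{p}$ for $s > 0$, applied with $s = \|z\|$, gives

\[
  |\lambda_z| \;\le\; A\,e^{-a\|z\|} \;\le\; A\Bigl(\frac{p}{a e}\Bigr)^{p}\,\|z\|^{-p},
  \qquad z \in \mathbb{Z}^d\setminus 0 .
\]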

We will choose a suitable value of $p$ later in the proof, and until then we shall work with some large enough value of $p$. Let $z$ be any element of $\mathbb{Z}^d\setminus h^{-1}\Omega$. Using (2.13), (2.17) and the fact that, asymptotically for $h \to 0$, the set $\mathbb{Z}^d \cap h^{-1}K$ has $O(h^{-d})$ elements, we find the estimate

\[
  |\mu_z| \;\le\; h^{-\beta} \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} |f(h\xi)|\,|\lambda_{z-\xi}|
  \;\le\; \mathrm{const}(d, \beta, p)\, h^{-\beta} \max_{y\in K}|f(y)| \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} \|z - \xi\|^{-p}
\]
\[
  \;\le\; \mathrm{const}(f, d, \beta, p)\, h^{-d-\beta} \max_{\xi\in\mathbb{Z}^d\cap h^{-1}K} \|z - \xi\|^{-p}
  \;\le\; \mathrm{const}(f, d, \beta, p)\, h^{-d-\beta}\, d_E(z, h^{-1}K)^{-p},
  \tag{2.18}
\]

where $d_E(z, h^{-1}K)$ denotes the Euclidean distance from $z$ to the set $h^{-1}K$. Therefore the sum of expression (2.14) satisfies

\[
  \Bigl|\sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \mu_z\,\phi(\|x - hz\|)\Bigr|
  \;\le\; \mathrm{const}(f, d, \beta, p) \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} h^{-d-\beta}\,\frac{|\phi(\|x - hz\|)|}{d_E(z, h^{-1}K)^{p}}
\]
\[
  \;\le\; \mathrm{const}(f, d, \beta, p)\, h^{-d-\beta} \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \frac{1 + |\phi(\|x\|)| + |\phi(\|hz\|)|}{d_E(z, h^{-1}K)^{p}},
  \tag{2.19}
\]

where we have used the inequality

\[
  \phi(\|x - y\|) \;\le\; \mathrm{const}(d, \beta)\,\bigl(1 + |\phi(\|x\|)| + |\phi(\|y\|)|\bigr),
  \qquad \forall\, x, y \in \mathbb{R}^d.
  \tag{2.20}
\]

The term $|\phi(\|x\|)|$ that appears in the numerators of (2.19) is bounded above for $x \in \Omega$ by a constant that depends on $\Omega$. To estimate the term $|\phi(\|hz\|)|$ of (2.19), we assume $h < 1$ without loss of generality. Consider first the case when $\beta$ is even. Since $h < 1$, we have $h|\ln h| < 1$. Further, $\ln h$ and $\ln\|z\|$ have opposite signs and $\ln\|z\| < \|z\|$ for $z \in \mathbb{Z}^d\setminus 0$. Therefore

\[
  |\phi(\|hz\|)| \;=\; h^{\beta}\|z\|^{\beta}\,\bigl|\ln h + \ln\|z\|\bigr| \;\le\; h^{\beta-1}\|z\|^{\beta+1},
  \qquad \forall\, z \in \mathbb{Z}^d.
  \tag{2.21}
\]

When $\beta$ is odd, we also have $|\phi(\|hz\|)| = h^{\beta}\|z\|^{\beta} \le h^{\beta-1}\|z\|^{\beta+1}$, for all $z \in \mathbb{Z}^d$. It follows that, irrespective of the parity of $\beta$, the term $|\phi(\|hz\|)|$ of each numerator of (2.19) is bounded by

\[
  |\phi(\|hz\|)| \;\le\; h^{\beta-1}\|z\|^{\beta+1}.
  \tag{2.22}
\]

Moreover, for any $z \in \mathbb{Z}^d\setminus h^{-1}\Omega$, there exists $u_z \in h^{-1}K$ such that $d_E(z, h^{-1}K) = \|z - u_z\|$. Let $\delta = d_E(\partial\Omega, K) > 0$ be the Euclidean distance between $K$ and the boundary $\partial\Omega$ of $\Omega$. Since $h^{-1}\delta \le d_E(z, h^{-1}K)$ for $z \in \mathbb{Z}^d\setminus h^{-1}\Omega$, we have

\[
  \|z\| \;\le\; \|z - u_z\| + \|u_z\|
  \;\le\; d_E(z, h^{-1}K) + \mathrm{const}(K)\, h^{-1}
  \;\le\; \bigl(1 + \mathrm{const}(K)\,\delta^{-1}\bigr)\, d_E(z, h^{-1}K),
  \qquad \forall\, z \in \mathbb{Z}^d\setminus h^{-1}\Omega.
  \tag{2.23}
\]

Using (2.22) and (2.23) to bound the last sum in (2.19), we obtain

\[
  \Bigl|\sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \mu_z\,\phi(\|x - hz\|)\Bigr|
  \;\le\; \mathrm{const}(f, \Omega, \beta, p)\, h^{-d-\beta} \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \frac{1 + h^{\beta-1}\, d_E(z, h^{-1}K)^{\beta+1}}{d_E(z, h^{-1}K)^{p}}
\]
\[
  \;\le\; \mathrm{const}(f, \Omega, \beta, p)\, h^{-d-\beta} \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \frac{1}{d_E(z, h^{-1}K)^{p}}
  \;+\; \mathrm{const}(f, \Omega, \beta, p)\, h^{-d-1} \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \frac{1}{d_E(z, h^{-1}K)^{p-\beta-1}}.
  \tag{2.24}
\]

Furthermore, since (2.23) and the inequality $h^{-1}\delta \le d_E(z, h^{-1}K)$ imply $\|z\| + h^{-1}\delta \le \mathrm{const}(\Omega, K)\, d_E(z, h^{-1}K)$ for $z \in \mathbb{Z}^d\setminus h^{-1}\Omega$, we have

\[
  \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \frac{1}{d_E(z, h^{-1}K)^{p}}
  \;\le\; \mathrm{const}(\Omega, K) \sum_{z\in\mathbb{Z}^d} \frac{1}{(\|z\| + h^{-1}\delta)^{p}}
  \;\le\; \mathrm{const}(\Omega, K, p) \int_{\mathbb{R}^d} \frac{dt}{(\|t\| + h^{-1}\delta)^{p}}
\]
\[
  \;\le\; \mathrm{const}(\Omega, K, p) \int_{s = h^{-1}\delta}^{\infty} \frac{ds}{s^{\,p-d+1}}
  \;=\; \mathrm{const}(\Omega, K, p)\, h^{\,p-d}.
  \tag{2.25}
\]
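For completeness, here is a short calculation behind the integral estimate in (2.25), with $c := h^{-1}\delta$ and $\omega_{d-1}$ denoting the surface area of the unit sphere (the details are not spelled out in the paper):

\[
  \int_{\mathbb{R}^d} \frac{dt}{(\|t\| + c)^{p}}
  \;=\; \omega_{d-1} \int_{0}^{\infty} \frac{s^{d-1}\,ds}{(s + c)^{p}}
  \;\le\; \omega_{d-1} \int_{0}^{\infty} (s + c)^{d-1-p}\,ds
  \;=\; \frac{\omega_{d-1}}{p - d}\, c^{\,d-p},
\]

which equals $\mathrm{const}(d, p)\,\delta^{\,d-p}\, h^{\,p-d}$ for $c = h^{-1}\delta$, provided $p > d$.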

From (2.14), (2.24) and (2.25), we obtain

\[
  \max_{x\in\Omega} |\tilde{I}_h f^*(x) - I_h f^*(x)|
  \;\le\; \mathrm{const}(f, \Omega, \beta, p)\,\bigl(h^{\,p-2d-\beta} + h^{\,p-2d-\beta-2}\bigr)
  \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\,d+\beta},
  \tag{2.26}
\]

by choosing the value $p = 3d + 2\beta + 2$ in (2.17). The proof of Lemma 1 is complete. □

We return to the construction of a suitable approximant $\sigma_h \in S_h$. We recall that $N$ denotes the dimension of the space $\Pi^d_m$, $\{P_1, P_2, \ldots, P_N\}$ is the monomial basis of $\Pi^d_m$, and $\Omega \cap h\mathbb{Z}^d = \{hz_1, hz_2, \ldots, hz_n\}$, where $n$ depends on $h$. Let $V = \{y_1, y_2, \ldots, y_N\}$ be a fixed subset of $\Omega$ such that interpolation on $V$ from the linear space $\Pi^d_m$ has a unique solution (for example, $V$ may be the principal lattice grid of order $m$ in any simplex that is included in $\Omega$, cf. [19]). Then, as in [2, Proof of Proposition 1], there are positive constants $h_0$, $\rho_0$ and $\omega_0$, which depend only on $\Omega$ and $\beta$, such that, for every $h \le h_0$, there exists a set $J_h$ with $N$ elements, $J_h = \{t(1), t(2), \ldots, t(N)\} \subset \{1, 2, \ldots, n\}$, that has the properties $\|hz_{t(j)} - y_j\| < \rho_0$, $j = 1, 2, \ldots, N$, and

\[
  \bigl|\det\bigl(P_k(hz_{t(j)})\bigr)_{1\le j,k\le N}\bigr| \;\ge\; \omega_0.
  \tag{2.27}
\]

The last inequality guarantees the existence of a unique solution $\{\gamma_j : j \in J_h\}$ of the system

\[
  \sum_{j\in J_h} \gamma_j\, P_l(hz_j) \;=\; -\sum_{k=1}^{n} \mu_{z_k}\, P_l(hz_k),
  \qquad l = 1, 2, \ldots, N,
  \tag{2.28}
\]

where the coefficients $\mu_z$, $z \in \mathbb{Z}^d$, are defined by formula (2.13). Thus the function

\[
  \sigma_h(x) \;:=\; \sum_{k=1}^{n} \mu_{z_k}\,\phi(\|x - hz_k\|) \;+\; \sum_{j\in J_h} \gamma_j\,\phi(\|x - hz_j\|),
  \qquad x \in \mathbb{R}^d,
  \tag{2.29}
\]

belongs to $S_h$. On the other hand, the "moment" conditions (2.9), and the fact that $\mathbb{Z}^d \cap h^{-1}K$ is a finite set in (2.13), imply that similar "moment" conditions are satisfied by the coefficients $\mu_z$, $z \in \mathbb{Z}^d$, namely

\[
  \sum_{z\in\mathbb{Z}^d} \mu_z\, p(z) \;=\; 0, \qquad p \in \Pi^d_{d+\beta-1}.
  \tag{2.30}
\]
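Indeed (spelling out the short computation), for any $p \in \Pi^d_{d+\beta-1}$, interchanging the finite outer sum with the absolutely convergent inner sum and applying (2.9) to the shifted polynomial $p(\cdot + \xi) \in \Pi^d_{d+\beta-1}$ gives

\[
  \sum_{z\in\mathbb{Z}^d} \mu_z\, p(z)
  \;=\; h^{-\beta} \sum_{\xi\in\mathbb{Z}^d\cap h^{-1}K} f(h\xi) \sum_{w\in\mathbb{Z}^d} \lambda_w\, p(w + \xi)
  \;=\; 0 .
\]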

Thus the right-hand side entries of the system (2.28) can be written as

\[
  -\sum_{k=1}^{n} \mu_{z_k}\, P_l(hz_k) \;=\; \sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \mu_z\, P_l(hz),
  \qquad l = 1, 2, \ldots, N.
  \tag{2.31}
\]

Further, for each $l = 1, 2, \ldots, N$, the method of proof of Lemma 1 shows that

\[
  \Bigl|\sum_{z\in\mathbb{Z}^d\setminus h^{-1}\Omega} \mu_z\, P_l(hz)\Bigr| \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\,d+\beta},
  \qquad \text{as } h \to 0,
  \tag{2.32}
\]

for a sufficiently large choice of $p$ in (2.17). Now the properties (2.27), (2.28), (2.31) and (2.32) imply $|\gamma_j| \le \mathrm{const}(f, \Omega, \beta)\, h^{\,d+\beta}$, as $h \to 0$, for all $j \in J_h$. Therefore the definitions (2.15) and (2.29) give

\[
  \max_{x\in\Omega} |\tilde{I}_h f^*(x) - \sigma_h(x)|
  \;\le\; \max_{x\in\Omega} \sum_{j\in J_h} |\gamma_j|\,\bigl|\phi(\|x - hz_j\|)\bigr|
  \;\le\; \max_{x,y\in\Omega} |\phi(\|x - y\|)| \sum_{j\in J_h} |\gamma_j|
  \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\,d+\beta}.
  \tag{2.33}
\]

Finally, (2.5), (2.16) and (2.33) imply

\[
  \max_{x\in\Omega} |f(x) - \sigma_h(x)| \;=\; \max_{x\in\Omega} |f^*(x) - \sigma_h(x)|
  \;\le\; \max_{x\in\Omega} |f^*(x) - I_h f^*(x)| + \max_{x\in\Omega} |I_h f^*(x) - \tilde{I}_h f^*(x)|
  + \max_{x\in\Omega} |\tilde{I}_h f^*(x) - \sigma_h(x)|
  \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\,d+\beta},
  \tag{2.34}
\]

which completes the proof of Theorem 1. □

Remark 1 The exponent $\beta + d$ in the approximation order (2.6) is maximal, in the sense that there exists a sufficiently smooth bump data function $f$ for which the left-hand side of (2.6) does not tend to zero faster than $O(h^{\beta+d})$, as $h \to 0$ (cf. Bejancu [2, 3]).

Remark 2 In the case $d = 3$ and $\beta = 1$, but under different hypotheses on the data function, the maximal convergence order $O(h^4)$ for approximation with the corresponding type of surface splines has also been obtained by Hardy and Nelson [8].

3 The Lebesgue Inequality and Kriging Functions

For the purpose of the next theorem, we consider the general case $\beta > 0$, $d \ge 1$. Let $\Omega \subset \mathbb{R}^d$ be a bounded closed domain with nonempty interior and, for any sufficiently small $h > 0$, recall that $\Omega \cap h\mathbb{Z}^d = \{hz_1, hz_2, \ldots, hz_n\}$, where

\[
  n \;\le\; \mathrm{const}(\Omega)\, h^{-d}, \qquad \text{as } h \to 0.
  \tag{3.1}
\]

Denote by $T_h$ the linear operator that associates to each continuous function $f : \Omega \to \mathbb{R}$ the unique surface spline $T_h f := s_h \in S_h$ which satisfies the interpolation conditions (1.4). The induced $\infty$-norm $\|T_h\|_\infty$ of this operator has the value

\[
  \|T_h\|_\infty \;=\; \sup\Bigl\{ \max_{x\in\Omega} |T_h f(x)| \;:\; f \in C(\Omega),\; \max_{x\in\Omega}|f(x)| \le 1 \Bigr\},
  \tag{3.2}
\]

and is called the Lebesgue constant of $T_h$. Using the Lagrange representation formula (1.6), a standard argument shows that $\|T_h\|_\infty$ is finite and that

\[
  \|T_h\|_\infty \;=\; \max_{x\in\Omega} \sum_{j=1}^{n} |\ell_j(x)|,
  \tag{3.3}
\]

where $\{\ell_1, \ell_2, \ldots, \ell_n\} \subset S_h$ is the set of surface spline functions that are defined by the Lagrange conditions (1.5) (recall that each function $\ell_j$ depends on $\beta$, $\Omega$ and $h$). Moreover, since the interpolation operator $T_h$ is a linear, bounded and idempotent map with domain $C(\Omega)$ and range $S_h$, we have the Lebesgue inequality (cf. [20, Theorem 3.1])

\[
  \max_{x\in\Omega} |f(x) - s_h(x)| \;\le\; \bigl(1 + \|T_h\|_\infty\bigr)\, d_\infty(f, S_h),
  \tag{3.4}
\]

where $d_\infty(f, S_h)$ is the least distance from $f$ to an element of $S_h$, in the uniform norm over $\Omega$. The following result will provide an upper estimate on $\|T_h\|_\infty$.

Theorem 2 Let $\Omega \subset \mathbb{R}^d$ be the closure of a connected, open and bounded set, which satisfies a cone property (see Duchon [7] for a suitable definition of the latter condition). Then, for any parameter $\beta > 0$, there exists a constant $h_0 > 0$ such that the surface spline functions $\ell_1, \ell_2, \ldots, \ell_n$, which satisfy the Lagrange equations (1.5) on the grid $\Omega \cap h\mathbb{Z}^d$, have the property

\[
  \max_{x\in\Omega} \sum_{j=1}^{n} \ell_j^2(x) \;\le\; \mathrm{const}(\Omega, \beta),
  \qquad \forall\, h \le h_0.
  \tag{3.5}
\]

Proof. We use a well-known property of the so-called Kriging function associated with the grid $\Omega \cap h\mathbb{Z}^d$. For a fixed parameter $\beta > 0$ and for each sufficiently small $h > 0$, the Kriging function $P_h : \mathbb{R}^d \to [0,\infty)$ is given by (cf. Wu and Schaback [27])

\[
  P_h^2(x) \;:=\; \int_{\mathbb{R}^d} |u_x(t)|^2\,\|t\|^{-\beta-d}\,dt,
  \tag{3.6}
\]

where

\[
  u_x(t) \;=\; \exp(i x^T t) \;-\; \sum_{j=1}^{n} \ell_j(x)\exp(i h z_j^T t),
  \qquad x \in \mathbb{R}^d,\; t \in \mathbb{R}^d.
  \tag{3.7}
\]

In order to show that the above integral is finite for each $x \in \mathbb{R}^d$, we establish the conditions

\[
  |u_x(t)| \;=\;
  \begin{cases}
    O(\|t\|^{m+1}), & \text{for } \|t\| \to 0,\\
    O(1), & \text{for } \|t\| \to \infty.
  \end{cases}
  \tag{3.8}
\]

Indeed, $u_x$ is bounded for $\|t\| \to \infty$, being a trigonometric polynomial. Further, the uniqueness of the surface spline interpolation method and (1.6) imply that, for any $p \in \Pi^d_m$, we have

\[
  p(x) \;=\; \sum_{j=1}^{n} p(hz_j)\,\ell_j(x), \qquad x \in \mathbb{R}^d.
  \tag{3.9}
\]

Thus, the Taylor expansion of the exponential and (3.9) provide the bound (3.8) for $\|t\|$ near zero. Consequently, the function $g := |u_x(\cdot)|^2\,\|\cdot\|^{-d-\beta}$, defined a.e. on $\mathbb{R}^d$ (everywhere except the origin), satisfies

\[
  g(t) \;=\;
  \begin{cases}
    O(\|t\|^{2m+2-\beta-d}), & \text{for } \|t\| \to 0,\\
    O(\|t\|^{-\beta-d}), & \text{for } \|t\| \to \infty.
  \end{cases}
  \tag{3.10}
\]

Since $m + 1 > \beta/2$, we have $g \in L_1(\mathbb{R}^d)$, so the integral (3.6) is finite, as required. Using the change of variables $v = ht$ in (3.6), we find

\[
  P_h^2(x) \;=\; h^{\beta} \int_{\mathbb{R}^d} |u_x(h^{-1}v)|^2\,\|v\|^{-d-\beta}\,dv.
  \tag{3.11}
\]
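To see this (a routine check omitted in the paper), write $t = h^{-1}v$, so that $dt = h^{-d}\,dv$ and

\[
  \int_{\mathbb{R}^d} |u_x(t)|^2\,\|t\|^{-\beta-d}\,dt
  \;=\; \int_{\mathbb{R}^d} |u_x(h^{-1}v)|^2\,\bigl(h^{-1}\|v\|\bigr)^{-\beta-d}\, h^{-d}\,dv
  \;=\; h^{\beta} \int_{\mathbb{R}^d} |u_x(h^{-1}v)|^2\,\|v\|^{-d-\beta}\,dv .
\]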

Thus the cone condition on $\Omega$ implies the existence of $h_0 > 0$ such that the following estimate holds (cf. Wu and Schaback [27], Light and Wayne [13]):

\[
  \max_{x\in\Omega} P_h^2(x) \;\le\; \mathrm{const}(\Omega, \beta)\, h^{\beta},
  \qquad \forall\, h \le h_0.
  \tag{3.12}
\]

Note that, if $\beta \in (0, 2)$, then (3.12) can be established without assuming the cone condition for $\Omega$, as demonstrated by the author in [3, Section 5.3]. The last two displays imply

\[
  \max_{x\in\Omega} \int_{\mathbb{R}^d} |u_x(h^{-1}v)|^2\,\|v\|^{-d-\beta}\,dv \;\le\; \mathrm{const}(\Omega, \beta),
  \qquad \forall\, h \le h_0.
  \tag{3.13}
\]

Further, since $1 \le (\pi\sqrt{d}\,)\|v\|^{-1}$ holds for any $v \in [-\pi,\pi]^d$, $v \ne 0$, we deduce

\[
  \int_{[-\pi,\pi]^d} |u_x(h^{-1}v)|^2\,dv
  \;\le\; (\pi\sqrt{d}\,)^{\beta+d} \int_{[-\pi,\pi]^d} |u_x(h^{-1}v)|^2\,\|v\|^{-d-\beta}\,dv
  \;\le\; (\pi\sqrt{d}\,)^{\beta+d} \int_{\mathbb{R}^d} |u_x(h^{-1}v)|^2\,\|v\|^{-d-\beta}\,dv,
  \tag{3.14}
\]

for all $h \le h_0$. From (3.13), (3.14) and the triangle inequality for $L_2$-norms, we obtain

\[
  \Bigl\{ \int_{[-\pi,\pi]^d} \Bigl| \sum_{j=1}^{n} \ell_j(x)\exp(i z_j^T v) \Bigr|^2 dv \Bigr\}^{1/2}
  \;\le\; \Bigl\{ \int_{[-\pi,\pi]^d} |u_x(h^{-1}v)|^2\,dv \Bigr\}^{1/2}
  + \Bigl\{ \int_{[-\pi,\pi]^d} \bigl|\exp(i h^{-1}x^T v)\bigr|^2\,dv \Bigr\}^{1/2}
  \;\le\; \mathrm{const}(\Omega, \beta) + (2\pi)^{d/2},
  \qquad \forall\, h \le h_0.
  \tag{3.15}
\]

On the other hand, since $z_j \in \mathbb{Z}^d$, $j = 1, 2, \ldots, n$, the orthogonality of the trigonometric polynomials provides

\[
  \int_{[-\pi,\pi]^d} \Bigl| \sum_{j=1}^{n} \ell_j(x)\exp(i z_j^T v) \Bigr|^2 dv
  \;=\; \sum_{j=1}^{n}\sum_{k=1}^{n} \ell_j(x)\ell_k(x) \int_{[-\pi,\pi]^d} \exp\bigl(i (z_j - z_k)^T v\bigr)\,dv
  \;=\; (2\pi)^d \sum_{j=1}^{n} \ell_j^2(x).
  \tag{3.16}
\]

Therefore (3.15) and (3.16) imply the required conclusion (3.5). □

Remark 3 Since $\sum_{j=1}^{n} \ell_j^2(hz_k) = 1$, $\forall\, k \in \{1, 2, \ldots, n\}$, we have the lower bound $1 \le \max_{x\in\Omega}\sum_{j=1}^{n}\ell_j^2(x)$. It follows that (3.5) captures the true asymptotic behaviour of $\max_{x\in\Omega}\sum_{j=1}^{n}\ell_j^2(x)$, as $h \to 0$.

Remark 4 Bounds on the expression $\sum_{j=1}^{n}\ell_j^2(x)$ have also been considered by Schaback [23] for more general sets of interpolation points. For the case of interpolation at the vertices of the grid $\Omega \cap h\mathbb{Z}^d$, the upper bounds of [23] can be made independent of $h$ only if the minimum distance from $x$ to any one of the interpolation points is greater than a constant times $h$. The advantage of the bound (3.5) is that it holds uniformly for $x \in \Omega$.
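The quantities in (3.3) and (3.5) are easy to examine numerically. The following self-contained Python/NumPy sketch (ours, not from the paper; it re-assembles the same interpolation system as the earlier sketch) evaluates the Lagrange functions $\ell_j$ on a fine grid for $d = 1$, $\beta = 2$ on $\Omega = [0,1]$ and prints estimates of $\max_{x}\sum_j \ell_j^2(x)$ and of the Lebesgue constant $\max_{x}\sum_j |\ell_j(x)|$.

```python
import numpy as np

def phi(r):
    # Surface spline basis for beta = 2: phi(r) = r^2 ln r, with phi(0) := 0.
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    mask = r > 0.0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def lagrange_values(inv_h, n_eval=2001):
    h = 1.0 / inv_h
    x = np.linspace(0.0, 1.0, inv_h + 1)          # interpolation grid {0, h, ..., 1}
    n = x.size
    A = phi(np.abs(x[:, None] - x[None, :]))
    P = np.column_stack([np.ones(n), x])
    M = np.zeros((n + 2, n + 2))
    M[:n, :n], M[:n, n:], M[n:, :n] = A, P, P.T
    # Columns of C solve the systems with unit data e_j, i.e. they hold the
    # coefficients of the Lagrange functions ell_1, ..., ell_n.
    rhs = np.zeros((n + 2, n))
    rhs[:n, :] = np.eye(n)
    C = np.linalg.solve(M, rhs)
    xe = np.linspace(0.0, 1.0, n_eval)            # fine evaluation grid
    B = phi(np.abs(xe[:, None] - x[None, :]))     # phi(|x_eval - hz_j|)
    Pe = np.column_stack([np.ones(n_eval), xe])
    return B @ C[:n, :] + Pe @ C[n:, :]           # L[i, j] = ell_j(xe[i])

for inv_h in (16, 32, 64):
    L = lagrange_values(inv_h)
    print(f"1/h={inv_h:3d}  max sum ell_j^2 = {np.max(np.sum(L**2, axis=1)):.4f}"
          f"  Lebesgue constant ~ {np.max(np.sum(np.abs(L), axis=1)):.4f}")
```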

The following error estimate is an application of Theorems 1 and 2 and the Lebesgue inequality.

Corollary 1 Let $\beta \in \mathbb{N}\setminus 0$ be such that $\beta + d$ is even and let $\Omega \subset \mathbb{R}^d$ be the closure of a connected, open and bounded set, which satisfies a cone condition. Let $f \in C^{\beta+d}(\Omega)$ and assume that $\mathrm{supp}(f) \subset \mathrm{int}(\Omega)$. Further, let $s_h \in S_h$ be the surface spline that interpolates $f$ on $\Omega \cap h\mathbb{Z}^d$. Then

\[
  \max_{x\in\Omega} |f(x) - s_h(x)| \;\le\; \mathrm{const}(f, \Omega, \beta)\, h^{\,\beta + d/2}, \qquad \text{as } h \to 0.
  \tag{3.17}
\]

Proof. Combining the discrete Cauchy–Schwarz inequality