NON-OVERLAPPING DOMAIN DECOMPOSITION METHODS

JINCHAO XU AND JUN ZOU

Abstract. Our intention in this paper is to give a unified investigation of a class of non-overlapping domain decomposition methods for solving second order elliptic problems in two and three dimensions. The methods under scrutiny fall into two major categories: the substructuring type methods and the Neumann-Neumann type methods. The basic framework used for the analysis is the parallel subspace correction method or additive Schwarz method, and other technical tools include local-global and global-local techniques. The analyses for the two and three dimensional cases are carried out simultaneously. Some internal relationships between various algorithms are observed and several new variants of the algorithms are also derived.

Key Words. Non-overlapping domain decomposition, Schur complement, local-global and global-local techniques, jumps in coefficients, substructuring, Neumann-Neumann, balancing methods.

Department of Mathematics, Pennsylvania State University, University Park, PA 16802. Email: [email protected], http://www.math.psu.edu/xu/

Department of Mathematics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong. Email: [email protected], http://www.math.cuhk.edu.hk/ezou/

This paper has been submitted for possible publication. For further revisions of the paper, the authors would very much appreciate any comments on technical details and especially on possible relevant references that are not cited or improperly cited.

The work of the first author was partially supported by NSF DMS94-03915-1 through Penn State and by NSF ASC-92-01266 and ONR-N00014-92-J-1890 through UCLA.

The work of the second author was partially supported by Hong Kong RGC Grant No. CUHK 338/96E and the Direct Grant of CUHK, and NSF under contract ASC 92-01266 and ARO under contract DAAL03-91-C-0047 through UCLA.

Contents.
1 Introduction
2 Algebraic aspects of preconditioning techniques
  2.1 Preconditioned conjugate gradient method
  2.2 Framework of parallel subspace correction
  2.3 Two special techniques
    2.3.1 Global-local technique
    2.3.2 Local-global technique
  2.4 Auxiliary and fictitious space methods
3 A model problem and outline
4 Preliminaries of Sobolev spaces and finite element spaces
  4.1 Some Sobolev spaces
  4.2 Properties of the finite element space
    4.2.1 Interpolation operators
    4.2.2 Discrete harmonic functions
    4.2.3 Discrete Sobolev norms
    4.2.4 Discrete Sobolev inequalities
  4.3 Vertex-edge-face lemmas for finite element functions
5 Substructuring methods
  5.1 Outline
  5.2 Joint-set coarse subspaces
    5.2.1 Two dimensional case
    5.2.2 Three dimensional case
  5.3 Substructuring method-I
    5.3.1 Basic space decomposition
    5.3.2 Three variants
  5.4 Substructuring method-II
6 Neumann-Neumann methods
  6.1 Basic ideas
  6.2 Neumann-Neumann methods
  6.3 Balancing DD method
7 Some other interface preconditioners
  7.1 Multilevel preconditioner
  7.2 Interface overlapping additive Schwarz method
8 Methods with inexact subdomain solvers
  8.1 Examples
9 Implementation issues
  9.1 Substructuring algorithms
    9.1.1 Substructuring-I: S-implementation
    9.1.2 Substructuring-I: BPS-implementation
    9.1.3 Substructuring-I: DW-implementation
    9.1.4 Substructuring method-II
  9.2 Neumann-Neumann algorithms
    9.2.1 Neumann-Neumann algorithm with weighted coarse space
    9.2.2 Balancing DD algorithm
  9.3 Some other interface algorithms
    9.3.1 Hierarchical basis algorithm
    9.3.2 Interface BPX algorithm
    9.3.3 Interface overlapping additive Schwarz algorithms

1. Introduction. In this paper, an overview will be given of several major non-overlapping domain decomposition methods for solving large sparse linear systems of equations which arise from finite element discretizations of second order self-adjoint elliptic problems defined on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$ ($n = 2, 3$). There is a vast literature on this topic, and the theoretical presentations of these methods are most often quite technical. A primary goal of this paper is to sort out the existing techniques and give a unified presentation of the theoretical aspects of these methods. As a result, some internal connections between various different algorithms will be observed by means of some elementary technical tools presented in the paper.

Non-overlapping domain decomposition methods refer to methods defined on a decomposition of the domain consisting of a collection of mutually disjoint subdomains. Such methods are related to the more traditional block Gaussian elimination methods with the blocks given by subdomains. Loosely speaking, the domain decomposition methods discussed in this paper are algorithms for preconditioning the Schur complement resulting from the block Gaussian elimination; this excludes some other interesting non-overlapping domain decomposition methods, for example, the methods based on Lagrangian multiplier techniques, for which we refer to Liang [48], Farhat and Roux [36, 37], Le Tallec and Sassi [68] and the references therein. Non-overlapping methods are obviously well suited for parallel computing architectures and they also have some advantages over the overlapping methods.

We are not attempting to present a detailed history and literature of the methods concerned. For more thorough references, the reader is referred to the recent survey papers and books by Chan and Mathew [21], Le Tallec [66], Smith, Bjorstad and Gropp [64], and the domain decomposition proceedings [17, 18, 20, 39, 40, 46, 47]. We also note that the references [21] and [64] use the matrix representation, while the reference [66] and the current survey use the operator representation.

The techniques for analyzing DD methods consist of, roughly speaking, two aspects: algebraic techniques and analytic techniques. These techniques will be summarized in two separate sections. The algebraic techniques will be presented in §2. This section gives a brief account of some basic facts about conjugate gradient methods and preconditioning techniques. In particular, the framework of the subspace correction method is summarized. Also, some very simple techniques, known as global-local and local-global techniques, will be presented; the global-local technique is for the construction of a subspace preconditioner based on a preconditioner on the whole space, whereas the local-global technique is for constructing a preconditioner on the whole space based on a subspace preconditioner.

These techniques prove to be instrumental in understanding the close relationship between different algorithms. The analytic techniques, on the other hand, will be presented in §4. This section gives a summary of some basic properties of Sobolev spaces and finite element spaces. In particular, special discussions will be devoted to continuous and discrete harmonic functions and to the norm equivalence for finite element functions restricted to the boundary of a domain. Major technical results are included in this section.

The main idea of non-overlapping DD methods will be introduced in §3. With a model elliptic boundary value problem, the discussions therein lead to the two major types of DD methods to be studied in this paper: the substructuring type methods and the Neumann-Neumann type methods.

The substructuring methods are presented in §5. Discussions begin with the simpler two-subdomain case; then the major different methods are presented for the general multi-subdomain case. The main algorithms originate from the papers of Bramble, Pasciak and Schatz [8, 10], and other types of substructuring methods are presented as natural variants of the methods of [8, 10]. Nevertheless, our presentation is somewhat different from the existing literature and hopefully a little easier for the reader to comprehend.

The Neumann-Neumann methods, including the so-called balancing DD method, are addressed in §6. We attempt to present the Neumann-Neumann method in an intuitively natural way. With the earlier-mentioned global-local and local-global techniques, the so-called balancing method is derived in a simple and straightforward fashion. In particular, the technical estimates for this method are derived as a direct consequence of those for the Neumann-Neumann methods.

§7 is devoted to DD methods derived from other existing methods such as the hierarchical basis multigrid method, the BPX multigrid preconditioner and additive overlapping Schwarz methods. Some of these methods are treated again by the global-local and local-global techniques. §8 gives a brief discussion of non-overlapping DD methods based on inexact subdomain solvers. Finally, §9 discusses the implementation issues for some major algorithms in the paper.

For convenience, following [71], the symbols $\lesssim$, $\gtrsim$ and $\simeq$ will be used in this paper. $x_1 \lesssim y_1$, $x_2 \gtrsim y_2$ and $x_3 \simeq y_3$ mean that $x_1 \le C_1 y_1$, $x_2 \ge c_2 y_2$ and $c_3 x_3 \le y_3 \le C_3 x_3$ for some constants $C_1$, $c_2$, $c_3$ and $C_3$ that are independent of the mesh parameters.

2. Algebraic aspects of preconditioning techniques. All the domain decomposition methods discussed in this paper are based on a very important algebraic method: the preconditioned conjugate gradient method (PCG for short). This section contains some basic facts about the PCG method and some basic techniques for constructing and analyzing preconditioners. The presentation in this section is purely algebraic.

Let $V$ be a finite dimensional linear vector space with an inner product $(\cdot,\cdot)$ and let $A$ be a linear operator defined on $V$. Consider the following linear equation on $V$:

(2.1)    $Au = f.$

We assume that $A$ is symmetric positive definite (SPD) with respect to the inner product $(\cdot,\cdot)$ on $V$. Next, we shall give a brief overview of the conjugate gradient method (CG) or preconditioned conjugate gradient method (PCG), and of methods for constructing and analyzing preconditioners for the above linear equation. For convenience, we are going to use the following convention: if $B_1$ and $B_2$ are two SPD operators on $V$ such that

$\lambda_1 (B_1 v, v) \le (B_2 v, v) \le \lambda_2 (B_1 v, v) \quad \forall\, v \in V,$

we then write

$\lambda_1 B_1 \le B_2 \le \lambda_2 B_1;$

if furthermore $\lambda_1$ and $\lambda_2$ are constants independent of any parameters associated with $V$, then we write $B_1 \simeq B_2$.

2.1. Preconditioned conjugate gradient method. The well-known conjugate gradient method is the basis of all the preconditioning techniques to be studied in this paper. The preconditioned conjugate gradient (PCG) method can be viewed as a conjugate gradient method applied to the preconditioned system

(2.2)    $BAu = Bf.$

Here $B : V \mapsto V$ is another SPD operator, known as a preconditioner for $A$. Note that $BA$ is symmetric with respect to the inner product $(B^{-1}\cdot,\cdot)$. Let $\kappa(BA)$ be the condition number of $BA$, i.e. the ratio between the largest and the smallest eigenvalues of $BA$. It is well known that

(2.3)    $\|u - u^k\|_A \le 2 \left( \frac{\sqrt{\kappa(BA)} - 1}{\sqrt{\kappa(BA)} + 1} \right)^k \|u - u^0\|_A,$

which implies that PCG will generally converge faster when the condition number $\kappa(BA)$ is smaller.

Observing the formulae in the PCG method and the convergence estimate (2.3), one sees that the efficiency of a PCG method depends on two main factors: the action of $B$ and the size of $\kappa(BA)$. Hence, a good preconditioner should have the properties that the action of $B$ is relatively easy to compute and that $\kappa(BA)$ is relatively small (at least smaller than $\kappa(A)$).
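As an illustration of these two factors, the following minimal NumPy sketch of the PCG iteration treats the preconditioner simply as a routine `apply_B` that returns the action of $B$ on a residual; the routine and the parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def pcg(A, f, apply_B, u0=None, tol=1e-8, maxit=200):
    """Minimal preconditioned conjugate gradient iteration for A u = f.

    A is an SPD matrix (dense here for simplicity) and apply_B(r) returns
    the action of the SPD preconditioner B on a residual r.
    """
    u = np.zeros_like(f, dtype=float) if u0 is None else u0.astype(float).copy()
    r = f - A @ u                  # residual
    z = apply_B(r)                 # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break
        z = apply_B(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u
```

Any of the preconditioners constructed later can, in principle, be plugged in through `apply_B`; the iteration itself never needs $B$ in matrix form.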

2.2. Framework of parallel subspace correction. All the domain decomposition preconditioners considered in this paper will be interpreted and analyzed within the framework of the parallel subspace correction method (PSC for short) (see Xu [71]) or additive Schwarz method (see Dryja and Widlund [32]). This framework is based on a sequence of subspaces $V_i$, $0 \le i \le p$, of $V$ such that

$V = \sum_{i=0}^{p} V_i.$

The above space decomposition is understood in the sense that for any $v \in V$ there exist $v_i \in V_i$ such that $v = \sum_{i=0}^{p} v_i$.

For each $i$, we define two orthogonal projections $Q_i, P_i : V \mapsto V_i$ by

$(Q_i u, v_i) = (u, v_i), \quad (A P_i u, v_i) = (A u, v_i) \quad \forall\, u \in V,\ v_i \in V_i,$

and the restriction $A_i$ of $A$ on $V_i$ by

$(A_i u_i, v_i) = (A u_i, v_i) \quad \forall\, u_i, v_i \in V_i.$

It follows from the definitions that $A_i P_i = Q_i A$. Let $R_i : V_i \mapsto V_i$ be an SPD operator that approximates the inverse of $A_i$ in some sense. The PSC (parallel subspace correction) preconditioner for $A$ is formulated as follows:

(2.4)    $B = \sum_{i=0}^{p} R_i Q_i.$

Set $T_i = R_i Q_i A$, $0 \le i \le p$. Note that $T_i : V \mapsto V_i$ is symmetric with respect to $(A\cdot,\cdot)$ and non-negative. We obtain from the above that

$BA = \sum_{i=0}^{p} R_i Q_i A = \sum_{i=0}^{p} R_i A_i P_i = \sum_{i=0}^{p} T_i.$

In implementation, the action of $B$ on an element $g \in V$ can be realized by assembling the local contributions $w_i$: $Bg = \sum_{i=0}^{p} w_i$, where $w_i$ solves the inexact subproblem

(2.5)    $(R_i^{-1} w_i, v_i) = (g, v_i) \quad \forall\, v_i \in V_i.$
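In matrix form, the assembly of $Bg$ from the local contributions can be sketched as follows. This is a simplified illustration, assuming each subspace $V_i$ is spanned by columns of a Boolean prolongation matrix (columns of the identity), with `local_solves[i]` standing for the action of $R_i$ in subspace coordinates; these names are not from the paper.

```python
import numpy as np

def psc_apply(g, prolongations, local_solves):
    """Apply the PSC preconditioner B g = sum_i w_i, cf. (2.4)-(2.5).

    prolongations[i] is the matrix of the embedding V_i -> V (n x n_i; for
    a nodal subspace, a subset of the columns of the identity, so that its
    transpose realizes Q_i in the Euclidean inner product), and
    local_solves[i](r_i) returns R_i applied to the restricted residual r_i.
    """
    w = np.zeros_like(g, dtype=float)
    for I_i, solve_i in zip(prolongations, local_solves):
        r_i = I_i.T @ g          # restrict the residual to the subspace
        w += I_i @ solve_i(r_i)  # apply R_i locally and add the correction
    return w
```

With exact local solvers, i.e. `local_solves[i]` performing a solve with the submatrix $I_i^T A I_i$, this realizes the additive Schwarz operator $BA = \sum_i P_i$ discussed next; the routine can be passed as the preconditioner action to a PCG iteration such as the sketch in §2.1.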

If each subproblem solver is exact, i.e. $R_i = A_i^{-1}$ for each $i$, then $T_i = P_i$ and we obtain the so-called additive Schwarz method:

$BA = \sum_{i=0}^{p} P_i.$

In this case, the subproblem (2.5) becomes $(A_i w_i, v_i) = (g, v_i)$ for all $v_i \in V_i$. The preconditioner (2.4) can be analyzed by the following theorem.

Theorem 2.1. Let $K_0$ be a positive constant such that, for any $v \in V$, there exists a decomposition $v = \sum_{i=0}^{p} v_i$ with $v_i \in V_i$ and

(2.6)    $\sum_{i=0}^{p} (R_i^{-1} v_i, v_i) \le K_0 (Av, v),$

and let $K_1$ be a positive constant given by

(2.7)    $K_1 = \max_{1 \le j \le p} \sum_{i=1}^{p} \varepsilon_{ij},$

where, for $1 \le i, j \le p$, $\varepsilon_{ij} = 0$ if $P_i P_j = 0$ (namely $V_i \perp V_j$) and $\varepsilon_{ij} = 1$ otherwise. Then the preconditioner (2.4) admits the estimate

$\kappa(BA) \le \omega_1 K_0 (1 + K_1),$

where $\omega_1 = \max_{0 \le i \le p} \lambda_{\max}(R_i A_i)$.

For a proof of the above theorem and a more general theory, we refer to Xu [71]. For related theory, we refer to Dryja and Widlund [33, 30], Bramble, Pasciak, Wang and Xu [12, 11], and Griebel and Oswald [42].

Remark 2.2. The parameter $\omega_1$ measures the resolution of $R_i$ for the upper spectrum of $A_i$, and $\omega_1 = 1$ if exact solvers are used on the subspaces, namely $R_i = A_i^{-1}$ for all $0 \le i \le p$. In many domain decomposition methods, the boundedness of $\omega_1$ comes as an assumption. The estimate for $K_0$ often dominates the analysis in domain decomposition methods.

Remark 2.3. In domain decomposition methods, $K_1$ measures the degree of overlap among the subspaces $V_i$ for $1 \le i \le p$. The space $V_i$ is said to overlap with $V_j$ if $V_i$ is not orthogonal to $V_j$ with respect to $(A\cdot,\cdot)$. Hence $K_1$ is the largest number

of subspaces that a subspace can overlap with (excluding the subspace $V_0$). The term "overlapping" comes from the fact that, in domain decompositions, $V_i$ overlaps with $V_j$ if and only if the two subdomains that define these two subspaces overlap with each other. Since in all the domain decomposition methods considered in this paper each subdomain overlaps with only a fixed number of other subdomains, $K_1$ is always bounded by a fixed constant.

Remark 2.4. For other applications such as multigrid methods, the parameter $K_1$ should be defined in a more precise way than here; for example, $K_1$ can be defined as in (2.7) but with $\varepsilon_{ij}$ ($1 \le i, j \le p$) defined by

$(A T_i u, T_j v) \le \omega_1 \varepsilon_{ij} (A T_i u, u)^{1/2} (A T_j v, v)^{1/2} \quad \forall\, u, v \in V.$

Remark 2.5. The parallel subspace correction method as presented here is in fact essentially the same as the so-called additive Schwarz method with inexact local solvers. The terminology "additive Schwarz method" is attributed to Dryja and Widlund [32] and reflects the fact that this method is a variant of an alternating algorithm proposed by Schwarz [59] in 1870. The terminology "parallel subspace correction method" was introduced in Xu [71] to reflect more directly the nature of this type of method within the general framework studied in [71]. The basic idea of this type of method can be found, for example, in Nepomnyaschikh [55] and Matsokin and Nepomnyaschikh [54]. Some history of the theoretical development of this method can be traced in Matsokin and Nepomnyaschikh [54], Dryja and Widlund [32, 34], Lions [50], Bramble, Pasciak and Xu [13], Bramble, Pasciak, Wang and Xu [12, 11], and Xu [71].

2.3. Two special techniques. For investigating the relationship between different domain decomposition preconditioners, two special techniques formulated here will prove to be instrumental (see also Xu [74]). These techniques are based on a single subspace $\hat V$ of $V$. Let $\hat A$ be the restriction of $A$ on $\hat V$ defined by

(2.8)    $(\hat A \hat u, \hat v) = (A \hat u, \hat v) \quad \forall\, \hat u, \hat v \in \hat V,$

and let $\hat Q, \hat P : V \mapsto \hat V$ be the two orthogonal projections with respect to $(\cdot,\cdot)$ and $(A\cdot,\cdot)$, respectively. Let $\hat P^t$ be the adjoint of $\hat P$ with respect to $(\cdot,\cdot)$.

2.3.1. Global-local technique. This simple technique, known as the global-local technique, is for constructing a preconditioner on a subspace $\hat V$ of $V$ from a known preconditioner on the whole space $V$.

Theorem 2.6. Given an SPD operator $B$ on $V$, define $\hat B = \hat P B \hat P^t$. Then $\hat B$ is SPD on $\hat V$ and, on the subspace $\hat V$, $\hat B \hat A = \hat P B A$. As a consequence, $\kappa(\hat B \hat A) \le \kappa(BA)$.

Proof. By noting $\hat A \hat P = \hat Q A$, we have on the subspace $\hat V$

$\hat B \hat A = \hat P B \hat P^t \hat A = \hat P B (\hat A \hat P)^t = \hat P B (\hat Q A)^t = \hat P B A \hat Q^t = \hat P B A,$

where we have used the fact that $\hat Q^t : \hat V \mapsto V$, the adjoint of $\hat Q$, is an injection.
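The construction of Theorem 2.6 is easy to reproduce numerically. In the following sketch, $A$ and $B$ are randomly generated SPD matrices and the subspace is spanned by the orthonormal columns of a matrix `Z`; the coordinate formulas for $\hat P^t$ and $\hat B$ assume this orthonormality and are illustrative, not part of the paper. The computed condition numbers illustrate $\kappa(\hat B \hat A) \le \kappa(BA)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5

# Randomly generated SPD operator A and SPD preconditioner B on V = R^n.
M = rng.standard_normal((n, n)); A = M @ M.T + n * np.eye(n)
N = rng.standard_normal((n, n)); B = np.linalg.inv(N @ N.T + n * np.eye(n))

# Subspace V_hat spanned by the orthonormal columns of Z.
Z, _ = np.linalg.qr(rng.standard_normal((n, m)))

A_hat = Z.T @ A @ Z                        # restriction of A to V_hat, cf. (2.8)
P_hat_t = A @ Z @ np.linalg.inv(A_hat)     # P_hat^t acting on V_hat-coordinates
B_hat = np.linalg.inv(A_hat) @ Z.T @ A @ B @ P_hat_t   # B_hat = P_hat B P_hat^t

def cond(op):
    ev = np.sort(np.linalg.eigvals(op).real)
    return ev[-1] / ev[0]

# Theorem 2.6 predicts kappa(B_hat A_hat) <= kappa(B A).
print(cond(B_hat @ A_hat), "<=", cond(B @ A))
```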

2.3.2. Local-global technique. This simple technique, known as the local-global technique, is for constructing a preconditioner on the whole space $V$ by using some known local preconditioner on a subspace $\hat V$ of $V$. Given a preconditioner $\hat B : \hat V \mapsto \hat V$ for the operator $\hat A$ on $\hat V$, assume that it is spectrally equivalent to $\hat A^{-1}$ in the sense that there exist two constants $\beta_0$ and $\beta_1$ such that

(2.9)    $\beta_0 (\hat A \hat u, \hat u) \le (\hat B \hat A \hat u, \hat A \hat u) \le \beta_1 (\hat A \hat u, \hat u) \quad \forall\, \hat u \in \hat V.$

Let $\hat V^\perp$ be the orthogonal complement of $\hat V$ in $V$ with respect to $(A\cdot,\cdot)$; then the following algorithm provides a preconditioner $B$ for $A$ on the space $V$.

Algorithm 2.1. Given $g \in V$, $u = Bg = u_P + u_R$ is computed as follows:
1. Solve the local problem: find $u_P \in \hat V^\perp$ satisfying $(A u_P, v) = (g, v)$ for all $v \in \hat V^\perp$.
2. Compute $u_R$ by $u_R = \hat B \hat Q (g - A u_P)$.

Theorem 2.7. For Algorithm 2.1 defined above, we have

$B = (\hat P^\perp + \hat P \hat B \hat A \hat P) A^{-1},$

where $\hat P^\perp \equiv I - \hat P : V \mapsto \hat V^\perp$ is the orthogonal projection with respect to $(A\cdot,\cdot)$, and

$\kappa(BA) \le \frac{\max(1, \beta_1)}{\min(1, \beta_0)}.$

Proof. By Step 1 of Algorithm 2.1, $u_P = \hat P^\perp A^{-1} g$. Substituting $u_P$ into $u_R$ in Step 2 gives

$u_R = \hat B \hat Q (A A^{-1} g - A \hat P^\perp A^{-1} g) = \hat B \hat Q A \hat P A^{-1} g.$

Theorem 2.7 then follows from (2.9) and the expressions for $u_P$ and $u_R$.

Remark 2.8. We remark that the subproblem in Step 1 of Algorithm 2.1 is solved exactly. When $V = \hat V$, we set $u_P = 0$, and the preconditioner $B$ degenerates to the original $\hat B$ on $\hat V$.
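A small NumPy sketch of Algorithm 2.1 may help fix ideas. Here $A$ is a random SPD matrix, `Z` holds an orthonormal basis of $\hat V$, $\hat B$ is a simple Jacobi approximation of $\hat A^{-1}$, and the $A$-orthogonal complement is parametrized through a null-space basis computed by SVD; all of these choices are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10, 4

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                        # SPD operator on V = R^n

Z, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal basis of V_hat
A_hat = Z.T @ A @ Z                                # restriction of A to V_hat
B_hat = np.diag(1.0 / np.diag(A_hat))              # crude (Jacobi) preconditioner for A_hat

_, _, Vt = np.linalg.svd(Z.T @ A)
W = Vt[m:].T                 # basis of the A-orthogonal complement V_hat^perp
A_perp = W.T @ A @ W

def apply_B(g):
    """Algorithm 2.1: u = u_P + u_R."""
    u_P = W @ np.linalg.solve(A_perp, W.T @ g)     # step 1: exact solve in V_hat^perp
    u_R = Z @ (B_hat @ (Z.T @ (g - A @ u_P)))      # step 2: u_R = B_hat Q_hat (g - A u_P)
    return u_P + u_R

# Assemble B column by column and compare kappa(BA) with the bound of Theorem 2.7.
B = np.column_stack([apply_B(e) for e in np.eye(n)])
eig = np.sort(np.linalg.eigvals(B @ A).real)
beta = np.sort(np.linalg.eigvals(B_hat @ A_hat).real)
print(eig[-1] / eig[0], "<=", max(1.0, beta[-1]) / min(1.0, beta[0]))
```

The printed inequality is the bound of Theorem 2.7, with $\beta_0, \beta_1$ recovered as the extreme eigenvalues of $\hat B \hat A$.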

2.4. Auxiliary and fictitious space methods. The method to be introduced in this section may be viewed as a two-level nonnested multigrid preconditioner, or as a parallel subspace correction method associated with two nonnested spaces. Assume that these two nonnested linear vector spaces are $V$ and $W$ with inner products $(\cdot,\cdot)$ and $[\cdot,\cdot]$ respectively. Let $A$ and $S$ be two given SPD operators on $V$ and $W$ respectively, with respect to their inner products. Assume that $T$ is a good preconditioner for $S$ on $W$ in the sense that

(2.10)    $\beta_0 [w, w]_S \le [TSw, w]_S \le \beta_1 [w, w]_S \quad \forall\, w \in W.$

The technique given here is for constructing a preconditioner $B$ for $A$ on $V$ from the known preconditioner $T$ for $S$ on $W$. For this purpose, we introduce a "prolongation" operator $\Pi : W \mapsto V$ and a "restriction" operator $Q : V \mapsto W$. For a good preconditioner, these two operators are assumed to be bounded in their "energy" norms, i.e.

(2.11)    $\|Qv\|_S^2 \le \gamma_0^{-1} \|v\|_A^2, \quad \|\Pi w\|_A^2 \le \gamma_1 \|w\|_S^2 \quad \forall\, v \in V,\ w \in W,$

and the action of $\Pi Q$ is not far from that of the identity, i.e.

(2.12)    $\|v - \Pi Q v\|^2 \le \delta_0^{-1} \rho_A^{-1} \|v\|_A^2 \quad \forall\, v \in V,$

where $\rho_A$ is the spectral radius of $A$. Moreover, let $R$ be an SPD operator on $V$, spectrally equivalent to a scaling of the identity, namely

(2.13)    $\mu_0 \rho_A^{-1} (v, v) \le (Rv, v) \le \mu_1 \rho_A^{-1} (v, v) \quad \forall\, v \in V.$

Then we have the following theorem.

Theorem 2.9. Under the assumptions (2.10)-(2.13), the SPD operator defined by

$B = R + \Pi T \Pi^t$

can be chosen as a preconditioner for $A$, and the condition number of $BA$ is bounded by

$\kappa(BA) \le (\mu_1 + \gamma_1 \beta_1) \left( (\beta_0 \gamma_0)^{-1} + (\delta_0 \mu_0)^{-1} \right).$

In particular, if $Q$ is a right inverse of $\Pi$, namely $\Pi Q = I$, then

(2.14)    $\kappa((\Pi T \Pi^t) A) \le \frac{\beta_1 \gamma_1}{\beta_0 \gamma_0}.$

Theorem 2.9 was presented in Xu [73] to analyze an optimal preconditioner for general unstructured grids by using some structured grids. The last estimate in the theorem corresponds to the "Fictitious Space Lemma" (see Nepomnyaschikh [56]). In this case, the space $W$ has to be at least as rich as the original space $V$ and hence the construction of $\Pi$ needs more caution.

3. A model problem and outline. For clarity of the presentation, we devote this section to explaining the basic ideas and motivation of the main algorithms to be discussed in the paper. The style of presentation of this section is informal; technical details and relevant literature are given in subsequent sections. The following boundary value problem is the model problem to be studied:

(3.1)    $-\nabla \cdot (\omega(x) \nabla U(x)) = F(x) \ \text{in } \Omega, \qquad U(x) = 0 \ \text{on } \partial\Omega.$

Here $\Omega \subset \mathbb{R}^n$ is a polygon for $n = 2$ or a polyhedron for $n = 3$, and $\omega(x)$ is piecewise constant in $\Omega$ or $\omega(x) \equiv 1$. Without loss of generality, we assume that $\Omega$ is of unit diameter. From a preconditioning point of view, the above model problem is appropriate for the study of a large class of second order self-adjoint elliptic boundary value problems. Problems with slowly varying variable coefficients may be preconditioned by the Poisson equation, namely $\omega(x) \equiv 1$ in (3.1); problems with large jumps in the coefficients between subdomains may also be preconditioned by the above problem with $\omega$ being piecewise constant.

Let $\mathcal{T}^h = \{\tau_i\}$ be a quasi-uniform triangulation of $\Omega$ with the $\tau_i$'s being non-overlapping simplexes of size $h$, with $h \in (0, 1]$; namely, there exist constants $C_0$ and $C_1$ independent of $h$ such that each simplex $\tau_i$ contains (resp. is contained in) a ball of radius $C_0 h$ (resp. $C_1 h$). We then define $V^h$ to be the piecewise linear finite element subspace of $H_0^1(\Omega)$ associated with $\mathcal{T}^h$, i.e.

$V^h = \{ v \in H_0^1(\Omega) : v|_{\tau_i} \in P_1(\tau_i) \ \forall\, \tau_i \in \mathcal{T}^h \},$

where $P_1$ is the space of linear polynomials. Then the finite element approximation of (3.1) is to find $u \in V^h$ such that

(3.2)    $A(u, v) = (F, v) \quad \forall\, v \in V^h,$

where $(\cdot,\cdot)$ is the scalar product in $L^2(\Omega)$, and

$A(u, v) = \int_\Omega \omega(x) \nabla u \cdot \nabla v \, dx.$

Later on, we shall also use $A$ to denote the operator on $V^h$ defined by $(Au, v) = A(u, v)$ for all $u, v \in V^h$; this operator $A$ may be called the stiffness operator.

A domain decomposition without overlapping consists of a number of mutually disjoint open subdomains $\Omega_i$ such that

(3.3)    $\bar\Omega = \bigcup_{i=1}^{p} \bar\Omega_i.$

When the coefficient $\omega(x)$ is piecewise constant, each subdomain $\Omega_i$ is chosen in such a way that $\omega(x)$ equals a constant $\omega_i$ in $\Omega_i$. We assume that the triangulation $\mathcal{T}^h$ is consistent with (3.3) in the sense that each $\partial\Omega_i$ can be written as a union of boundaries of elements in $\mathcal{T}^h$ (see Fig. 1). Moreover, we assume that all subdomains

$\Omega_i$ are of size $h_0$ ($> h$) in the sense that there exist constants $C_0$ and $C_1$ independent of $h$ and $h_0$ such that each $\Omega_i$ contains (resp. is contained in) a ball of radius $C_0 h_0$ (resp. $C_1 h_0$). Corresponding to each subdomain $\Omega_i$, we shall use the notation

$A_i(u, v) = \int_{\Omega_i} \omega_i \nabla u \cdot \nabla v \, dx \quad \forall\, u, v \in H^1(\Omega).$

Obviously $A(u, v) = \sum_{i=1}^{p} A_i(u, v)$.

Fig. 1. Fine mesh $\mathcal{T}^h$ and subdomains $\{\Omega_i\}_{i=1}^{p}$: some are simplices and some quadrilaterals.

The non-overlapping domain decomposition methods discussed in this paper construct preconditioners for the system (3.2) by solving certain small problems associated with the subdomains of the domain decomposition, possibly plus a small-scale global problem (often called the coarse problem). Apparently the first natural step is to solve for $u_{P,i} \in V_0^h(\Omega_i)$ on each subdomain $\Omega_i$ the following local homogeneous Dirichlet problem:

$A(u_{P,i}, v) = (F, v) \quad \forall\, v \in V_0^h(\Omega_i),$

where the local subspace $V_0^h(\Omega_i) \subset V^h$ is defined by (with a slight abuse of notation)

$V_0^h(\Omega_i) = \{ v \in V^h : v(x) = 0 \ \forall\, x \in \Omega \setminus \Omega_i \}.$

Notice that the computation of $u_{P,i}$ can be carried out on each subdomain concurrently; this is typically how the parallelization is realized in non-overlapping domain decomposition methods. Let $u_P \in V^h$ be the function that is equal to $u_{P,i}$ on the subdomain $\Omega_i$; then $u_P$ is clearly a finite element function in the subspace

(3.4)    $V_P = \{ v \in V^h : v(x) = 0 \ \forall\, x \in \Gamma \}.$

Here $\Gamma = \bigcup_{i=1}^{p} \Gamma_i$, with $\Gamma_i = \partial\Omega_i \setminus \partial\Omega$, is the interface among all the subdomains $\{\Omega_i\}$ (cf. Fig. 1). Obviously, $u_P \in V_P$ is the solution of the following problem:

(3.5)    $A(u_P, v) = (F, v) \quad \forall\, v \in V_P.$

Of course, $u_P$ is only "part" of the entire finite element solution $u$ that we are seeking. The remaining part of the solution, $u_H = u - u_P$, lies in the orthogonal complement of $V_P$ in $V^h$:

(3.6)    $V_H = \{ v \in V^h : A(v, \phi) = 0 \ \forall\, \phi \in V_P \},$

and $u_H$ obviously satisfies

$A(u_H, v) = (F, v) - A(u_P, v) \quad \forall\, v \in V^h,$

or equivalently,

(3.7)    $A(u_H, v_H) = (F, v) - A(u_P, v) \quad \forall\, v \in V^h.$

Here $v_H \in V_H$ is understood similarly to $u_H$. We note that

(3.8)    $A(u, u) = A(u_P, u_P) + A(u_H, u_H).$

The function $u_H \in V_H$ is called a piecewise discrete harmonic function, since it satisfies, on each subdomain $\Omega_i$,

$A_i(u_H, \phi) = \omega_i (\nabla u_H, \nabla \phi)_{0,\Omega_i} = 0 \quad \forall\, \phi \in V_0^h(\Omega_i).$

From this, we also know that the value of $u_H$ in $\Omega$ is uniquely determined by its value on the interface $\Gamma$. Therefore, the process conducted above actually reduces the global finite element problem (3.2) to a much smaller problem (3.7) on the interface. The main concern of this paper is to construct preconditioners for the system (3.7). Note that this system involves only functions defined on the interface $\Gamma$, so it is convenient to consider those functions on $\Gamma$, namely the finite element space $V^h(\Gamma)$. Given $u, v \in V^h(\Gamma)$, let $u_H, v_H \in V_H$ be the discrete harmonic extensions of $u, v$. The relation between interface functions and their discrete harmonic extensions can be established through the following bilinear form on $\Gamma$:

(3.9)    $S(u, v) = A(u_H, v_H) \quad \forall\, u, v \in V^h(\Gamma),$

which may be analysed using its equivalent $H^{1/2}$-norm on the boundaries of all subdomains (cf. (4.13) later):

(3.10)    $S(v, v) \simeq \sum_{i=1}^{p} \omega_i |v|_{1/2,\partial\Omega_i}^2.$

We shall also use $S : V^h(\Gamma) \mapsto V^h(\Gamma)$ to denote the interface operator induced by

(3.11)    $\langle Su, v \rangle = S(u, v) \quad \forall\, u, v \in V^h(\Gamma).$

Here and in the sequel, $\langle \cdot, \cdot \rangle$ denotes the $L^2$ scalar product on $\Gamma$. Our task is therefore to construct good preconditioners for the interface operator $S$ to improve its condition number. The following lemma shows how the condition number of $S$ depends on the jumps in the coefficients of the model problem, the subdomain size $h_0$ and the finite element mesh size $h$. The proof of the lemma is given in the appendix.

Lemma 3.1. For the interface operator $S$, we have

$\kappa(S) \lesssim \frac{\max_i \omega_i}{\min_i \omega_i} (h_0 h)^{-1}.$

We end this section with a presentation of the matrix form of the interface operator $S$. Let $\mathbf{S}$ be the stiffness matrix associated with the bilinear form $S(\cdot,\cdot)$ under the standard nodal basis functions in $V^h(\Gamma)$. One important observation is that $\mathbf{S}$ is a Schur complement of the stiffness matrix $\mathbf{A}$ associated with $A(\cdot,\cdot)$ under the nodal basis functions in $V^h$. More specifically, if we write the stiffness matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ block-wise as

$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} \\ A_{12}^t & A_{22} \end{pmatrix},$

where $A_{11} \in \mathbb{R}^{n_1 \times n_1}$ is the stiffness matrix associated with the nodes in $\Omega \setminus \Gamma$ and $A_{22} \in \mathbb{R}^{n_2 \times n_2}$ the stiffness matrix associated with the nodes on $\Gamma$, then

$\mathbf{S} = A_{22} - A_{12}^t A_{11}^{-1} A_{12}.$

It is well known that the condition number of the Schur complement $\mathbf{S}$ is always less than that of the matrix $\mathbf{A}$, namely $\kappa(\mathbf{S}) \le \kappa(\mathbf{A})$. However, using the conditioning estimate for the operator $S$ in Lemma 3.1, we have

$\kappa(\mathbf{S}) \lesssim \frac{\max_i \omega_i}{\min_i \omega_i} (h_0 h)^{-1}.$

This indicates that the condition number $\kappa(\mathbf{S})$ deteriorates with respect to the subdomain size $h_0$, the finite element mesh size $h$, and the coefficients $\omega_i$ of the model problem. The concern of the paper is to construct preconditioners $T$ such that $\kappa(TS)$ depends only weakly on $h_0$ and $h$ and is independent of the $\omega_i$; namely, we shall prove estimates of the following kind:

$\kappa(TS) \lesssim \log^\alpha \frac{h_0}{h},$

which hold uniformly with respect to the $\omega_i$, for some $\alpha \ge 0$.
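The Schur complement and its conditioning are easy to observe in a toy computation. The following sketch uses a one-dimensional analogue of (3.1) (piecewise linear elements on a uniform mesh, a coefficient with one jump, and equal-size subdomains) purely for illustration; the paper's setting is two- and three-dimensional, and all parameters here are illustrative.

```python
import numpy as np

# 1-D model: -(omega u')' = f on (0,1), u(0)=u(1)=0, piecewise linear elements
# on a uniform mesh with n interior nodes; p subdomains of equal size.
n, p = 63, 4                       # (n+1)/p = 16 elements per subdomain
h = 1.0 / (n + 1)
omega = np.ones(n + 1)             # one coefficient value per element
omega[: (n + 1) // 2] = 100.0      # a jump in the coefficient

# Element-wise assembly of the stiffness matrix (local matrix omega_e/h * [[1,-1],[-1,1]]).
A = np.zeros((n + 2, n + 2))
for e in range(n + 1):
    A[e:e+2, e:e+2] += omega[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
A = A[1:-1, 1:-1]                  # eliminate the two Dirichlet boundary nodes

# Interface = the p-1 subdomain endpoints; interior = all remaining nodes.
gamma = [(k * (n + 1)) // p - 1 for k in range(1, p)]
interior = [i for i in range(n) if i not in gamma]

A11 = A[np.ix_(interior, interior)]
A12 = A[np.ix_(interior, gamma)]
A22 = A[np.ix_(gamma, gamma)]
S = A22 - A12.T @ np.linalg.solve(A11, A12)   # Schur complement on the interface

print(np.linalg.cond(S), "<=", np.linalg.cond(A))   # kappa(S) <= kappa(A)
```

For this 1-D analogue the interface consists of the $p-1$ subdomain endpoints, so the Schur complement is only $(p-1) \times (p-1)$; nevertheless the elimination pattern and the inequality $\kappa(\mathbf{S}) \le \kappa(\mathbf{A})$ are exactly those discussed above.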

4. Preliminaries of Sobolev spaces and finite element spaces. In this section, we shall discuss the analytical tools for studying domain decomposition methods, namely Sobolev spaces and finite element spaces. We consider a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$ ($n = 2, 3$), which plays the role of a general subdomain, e.g. $\Omega_i$ in Section 3 and in the subsequent sections, where we apply the results of this section to domain decompositions. We assume that $\Omega$ is a polygonal ($n = 2$) or polyhedral ($n = 3$) domain with each edge of length of size $d$, and with boundary $\partial\Omega$ consisting of faces $\{F\}$, edges $\{E\}$ and vertices $\{v\}$ (e.g., see Fig. 2 and Fig. 3). Most of the inequalities of this section will be stated with explicit dependence on the diameter $d$ of $\Omega$. It is implicitly assumed here that the constants in certain Sobolev inequalities for a unit size domain are uniform for a reasonable class of such domains.

Fig. 2. Left: 2-simplex with its vertices and edges; right: 3-simplex with its vertices, edges and faces.

Fig. 3. Left: 2-cube with its vertices and edges; right: 3-cube with its vertices, edges and faces.

4.1. Some Sobolev spaces. Let $H^1(\Omega)$ be the standard Sobolev space consisting of square integrable functions with square integrable first order weak derivatives, equipped with the usual semi-norm $|\cdot|_{1,\Omega}$ and the scaled full norm $\|\cdot\|_{1,\Omega}$:

$|u|_{1,\Omega}^2 = \int_\Omega |\nabla u|^2 \, dx, \qquad \|u\|_{1,\Omega}^2 = d^{-2} \|u\|_{0,\Omega}^2 + |u|_{1,\Omega}^2,$

where $\|u\|_{0,\Omega} = (\int_\Omega u^2 \, dx)^{1/2}$, $\nabla u = (\partial_1 u, \dots, \partial_n u)$, the derivatives $\partial_i u$, $i = 1, 2, \dots, n$, are understood in the sense of distributions, and $|\nabla u|$ is the Euclidean norm of $\nabla u$ in $\mathbb{R}^n$. We remark that the scaling factor $d^{-2}$ in the norm $\|\cdot\|_{1,\Omega}$ is there to make the two terms appearing in the definition have the same scaling with respect to $d$. $W^{1,\infty}(\Omega)$ will denote the Sobolev space consisting of essentially bounded functions with essentially bounded first order weak derivatives, equipped with the norm

$\|u\|_{1,\infty,\Omega} = \max\bigl( d^{-1} \|u\|_{0,\infty,\Omega},\ \|\nabla u\|_{0,\infty,\Omega} \bigr), \qquad \|u\|_{0,\infty,\Omega} = \operatorname*{ess\,sup}_{x \in \Omega} |u(x)|.$
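The remark about the scaling factor $d^{-2}$ can be verified by a standard change of variables (spelled out here for completeness, as it is not detailed in the text). Under $x = d\hat x$, which maps $\Omega$ onto a domain $\hat\Omega$ of unit size, a function $u$ on $\Omega$ corresponds to $\hat u(\hat x) = u(d\hat x)$ and

$\|u\|_{0,\Omega}^2 = \int_\Omega u^2 \, dx = d^{\,n}\, \|\hat u\|_{0,\hat\Omega}^2, \qquad |u|_{1,\Omega}^2 = \int_\Omega |\nabla u|^2 \, dx = d^{\,n-2}\, |\hat u|_{1,\hat\Omega}^2,$

so both terms in $\|u\|_{1,\Omega}^2 = d^{-2}\|u\|_{0,\Omega}^2 + |u|_{1,\Omega}^2$ scale like $d^{\,n-2}$, as claimed.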

The Sobolev space $H_0^1(\Omega)$, a subspace of $H^1(\Omega)$, is defined to be the closure, with respect to the norm $\|\cdot\|_{1,\Omega}$, of $C_0^\infty(\Omega)$ (infinitely differentiable functions with compact support in $\Omega$). In other words, $H_0^1(\Omega)$ consists of functions in $H^1(\Omega)$ that vanish on $\partial\Omega$ (in the sense of trace).

The Sobolev spaces $H^{1/2}$ and $H_{00}^{1/2}$ are most often used on the boundary of a domain. Let $\Sigma \subset \partial\Omega$. The space $H^{1/2}(\Sigma)$ can be defined as follows:

$H^{1/2}(\Sigma) = \{ u \in L^2(\Sigma) : |u|_{1/2,\Sigma} < \infty \},$

equipped with the norm

$\|u\|_{1/2,\Sigma} = \left( d^{-1} \|u\|_{0,\Sigma}^2 + |u|_{1/2,\Sigma}^2 \right)^{1/2},$

where, with $ds$ denoting the surface element on $\Sigma$,

$|u|_{1/2,\Sigma}^2 = \int_\Sigma \int_\Sigma \frac{(u(x) - u(y))^2}{|x - y|^n} \, ds(x)\, ds(y), \qquad \|u\|_{0,\Sigma}^2 = \int_\Sigma u^2 \, ds.$

For any face $F$ of $\Omega$, the space $H_{00}^{1/2}(F)$ is defined, with $\tilde v$ denoting the zero extension of $v$ onto $\partial\Omega \setminus F$, as follows:

$H_{00}^{1/2}(F) = \{ v \in L^2(F) : \tilde v \in H^{1/2}(\partial\Omega) \}.$

By a direct calculation (cf. Grisvard [43]), for any $v \in H_{00}^{1/2}(F)$,

(4.1)    $|\tilde v|_{1/2,\partial\Omega}^2 \simeq |v|_{H_{00}^{1/2}(F)}^2,$

with

(4.2)    $|v|_{H_{00}^{1/2}(F)}^2 = \int_F \int_F \frac{(v(x) - v(y))^2}{|x - y|^n} \, ds(x)\, ds(y) + \int_F \frac{v^2(x)}{\operatorname{dist}(x, \partial F)} \, ds(x).$

The space $H_{00}^{1/2}(F)$ is then a Hilbert space with the norm given by

$\|v\|_{H_{00}^{1/2}(F)}^2 = d^{-1} \|v\|_{0,F}^2 + |v|_{H_{00}^{1/2}(F)}^2.$

The space $H_{00}^{1/2}$ can also be obtained by Hilbert scaling between the spaces $L^2$ and $H_0^1$. Let $-\Delta_F : H_0^1(F) \mapsto H^{-1}(F)$ be the Laplacian operator. The following equivalence holds:

(4.3)    $\|v\|_{H_{00}^{1/2}(F)} \simeq \|(-\Delta_F)^{1/4} v\|_{0,F} \quad \forall\, v \in H_{00}^{1/2}(F).$

For proofs of the above results, we refer to Lions and Magenes [49] for smooth domains and Bramble [7] for Lipschitz domains. By the equivalence between Hilbert scales and the real method of interpolation (cf. Lions and Magenes [49]), (4.3) is equivalent to the statement that $H_{00}^{1/2}(F)$ is the interpolated space halfway between the spaces $H_0^1(F)$ and $L^2(F)$.

We are now in a position to introduce some Sobolev inequalities. The following relations are well known:

(4.4)    $\inf_{\alpha \in \mathbb{R}^1} \|u + \alpha\|_{1,\Omega} \simeq |u|_{1,\Omega},$

and

(4.5)    $\inf_{\alpha \in \mathbb{R}^1} \|u + \alpha\|_{1/2,\partial\Omega} \simeq |u|_{1/2,\partial\Omega}.$

We have the well-known Poincare inequality:

(4.6)    $\|u\|_{0,\Omega} \lesssim d\, |u|_{1,\Omega} \quad \forall\, u \in H_0^1(\Omega),$

and Friedrichs' inequality:

(4.7)    $\|u - \gamma(u)\|_{0,\Omega} \lesssim d\, |u|_{1,\Omega} \quad \forall\, u \in H^1(\Omega),$

where $\gamma(u)$ denotes the mean value of $u$ over either $\Omega$ or a subset $\Gamma_0 \subset \partial\Omega$ with $|\Gamma_0| > 0$, namely

$\gamma_\Omega(u) = \frac{1}{|\Omega|} \int_\Omega u \, dx, \qquad \gamma_{\Gamma_0}(u) = \frac{1}{|\Gamma_0|} \int_{\Gamma_0} u \, ds.$

We shall have occasion to use the following elementary inequality (cf. Grisvard [43]):

(4.8)    $\|u\|_{0,\partial\Omega} \lesssim \varepsilon^{-1} \|u\|_{0,\Omega} + \varepsilon\, |u|_{1,\Omega} \quad \forall\, u \in H^1(\Omega),\ \varepsilon \in (0, 1).$

The following theorem is an important case of the so-called trace theorem (cf. Necas [57]).

Theorem 4.1. The mapping $u \mapsto u|_{\partial\Omega}$, which is defined for $u \in C^1(\bar\Omega)$, has a unique continuous extension as an operator from $H^1(\Omega)$ onto $H^{1/2}(\partial\Omega)$, namely

(4.9)    $\|v\|_{1/2,\partial\Omega} \lesssim \|v\|_{1,\Omega} \quad \forall\, v \in H^1(\Omega).$

Appendix. Proof of Lemma 3.1. Let $\tilde u \in V_H$ denote the piecewise discrete harmonic extension of $u \in V^h(\Gamma)$. By (3.9), Friedrichs' inequality and the trace theorem,

$\langle Su, u \rangle = \sum_{i=1}^{p} \omega_i |\tilde u|_{1,\Omega_i}^2 \ge (\min_i \omega_i) \sum_{i=1}^{p} |\tilde u|_{1,\Omega_i}^2 \gtrsim (\min_i \omega_i)\, h_0^2 \sum_{i=1}^{p} \|\tilde u\|_{1,\Omega_i}^2 \gtrsim (\min_i \omega_i)\, h_0^2 \sum_{i=1}^{p} \|\tilde u\|_{1/2,\partial\Omega_i}^2 \gtrsim (\min_i \omega_i)\, h_0 \sum_{i=1}^{p} \|u\|_{0,\partial\Omega_i}^2 \simeq (\min_i \omega_i)\, h_0 \langle u, u \rangle,$

which implies $\lambda_{\min}(S) \gtrsim h_0 \min_i \omega_i$. On the other hand, by (3.10) and (4.28),

$(\max_i \omega_i)^{-1} \langle Su, u \rangle \lesssim \sum_{i=1}^{p} |u|_{1/2,\partial\Omega_i}^2 \lesssim h^{-1} \sum_{i=1}^{p} \|u\|_{0,\partial\Omega_i}^2 \simeq h^{-1} \langle u, u \rangle.$

This shows $\lambda_{\max}(S) \lesssim h^{-1} \max_i \omega_i$. The desired estimate then follows.

Acknowledgment. The authors wish to thank Tony Chan and Barry Smith for their

helpful discussions on the paper, and also to thank the anonymous referees for their constructive comments.


REFERENCES

[1] R. Bank and T. Dupont. An optimal order process for solving elliptic finite element equations. Math. Comp., 36:35-51, 1981.
[2] R. Bank, T. Dupont, and H. Yserentant. The hierarchical basis multigrid method. Numer. Math., 52:427-458, 1988.
[3] R. Bank and J. Xu. A hierarchical basis multigrid method for unstructured meshes. In W. Hackbusch and G. Wittum, editors, Tenth GAMM-Seminar Kiel on Fast Solvers for Flow Problems. Vieweg-Verlag, Braunschweig, 1995.
[4] P. Bjorstad and O. Widlund. Iterative methods for the solution of elliptic problems on regions partitioned into substructures. SIAM J. Numer. Anal., 23:1097-1120, 1986.
[5] J. Bourgat, R. Glowinski, P. Le Tallec, and M. Vidrascu. Variational formulation and algorithm for trace operator in domain decomposition calculations. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Second International Symposium on Domain Decomposition Methods, Philadelphia, PA, 1989. SIAM.
[6] J. Bramble. A second order finite difference analogue of the first biharmonic boundary value problem. Numer. Math., 9:236-249, 1966.
[7] J. Bramble. Interpolation between Sobolev spaces in Lipschitz domains with an application to multigrid theory. Math. Comp., 64(212):1359-1366, 1995.
[8] J. Bramble, J. Pasciak, and A. Schatz. The construction of preconditioners for elliptic problems by substructuring, I. Math. Comp., 47:103-134, 1986.
[9] J. Bramble, J. Pasciak, and A. Schatz. The construction of preconditioners for elliptic problems by substructuring, III. Math. Comp., 51:415-430, 1988.
[10] J. Bramble, J. Pasciak, and A. Schatz. The construction of preconditioners for elliptic problems by substructuring, IV. Math. Comp., 53:1-24, 1989.
[11] J. Bramble, J. Pasciak, J. Wang, and J. Xu. Convergence estimates for multigrid algorithms without regularity assumptions. Math. Comp., 57(195):23-45, 1991.
[12] J. Bramble, J. Pasciak, J. Wang, and J. Xu. Convergence estimates for product iterative methods with applications to domain decomposition. Math. Comp., 57(195):1-21, 1991.
[13] J. Bramble, J. Pasciak, and J. Xu. Parallel multilevel preconditioners. Math. Comp., 55:1-21, 1990.
[14] J. Bramble, J. Pasciak, and J. Xu. The analysis of multigrid algorithms with nonnested spaces or noninherited quadratic forms. Math. Comp., 56(193):1-34, 1991.
[15] J. Bramble, J. Pasciak, and J. Xu. A domain decomposition method for elliptic boundary value problems: Applications to unsteady incompressible fluid flow. In Proceedings of the 10th International Conference on Computing Methods in Applied Sciences and Engineering, New York, 1992. Nova Sciences.
[16] X. Cai. The use of pointwise interpolation in domain decomposition methods with nonnested meshes. SIAM J. Sci. Comp., 16(1), 1995.
[17] T. Chan, R. Glowinski, J. Periaux, and O. Widlund, editors. Second International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1989.
[18] T. Chan, R. Glowinski, J. Periaux, and O. Widlund, editors. Third International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1990.
[19] T. Chan and T. Hou. Eigendecompositions of domain decomposition interface operators for constant coefficient elliptic problems. SIAM J. Sci. Stat. Comp., 12(6):1471-1479, 1991.
[20] T. Chan, D. Keyes, G. Meurant, J. Scroggs, and R. Voigt, editors. Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1992.
[21] T. Chan and T. Mathew. Domain decomposition algorithms. Acta Numerica, pages 61-143, 1994.
[22] T. Chan, T. Mathew, and J.-P. Shao. Efficient variants of the vertex space domain decomposition algorithm. SIAM J. Sci. Comp., 15:1349-1374, 1994.
[23] T. Chan and J. Zou. Additive Schwarz domain decomposition methods for elliptic problems on unstructured meshes. Numerical Algorithms, 8:329-346, 1994.
[24] P. Ciarlet. The Finite Element Method for Elliptic Problems. North-Holland, 1978.
[25] P. Clement. Approximation by finite element functions using local regularization. R.A.I.R.O. Numer. Anal., R-2:77-84, 1975.
[26] L. Cowsar, J. Mandel, and M. Wheeler. Balancing domain decomposition for mixed finite elements. Math. Comp., 64:989-1015, 1995.
[27] L. Cowsar and M. Wheeler. Parallel domain decomposition method for mixed finite elements for elliptic partial differential equations. In R. Glowinski, Y. Kuznetsov, G. Meurant, J. Periaux, and O. Widlund, editors, Fourth International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, PA, 1991. SIAM.
[28] Q. Dinh, R. Glowinski, and J. Periaux. Solving elliptic problems by domain decomposition methods with applications. In G. Birkhoff and A. Schoenstadt, editors, Elliptic Problem Solvers II, pages 395-426, New York, 1984. Academic Press.
[29] M. Dryja. A method of domain decomposition for 3-D finite element problems. In R. Glowinski, G. Golub, G. Meurant, and J. Periaux, editors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, PA, 1988. SIAM.
[30] M. Dryja. An additive Schwarz algorithm for two- and three-dimensional finite element elliptic problems. In Tony Chan, Roland Glowinski, Jacques Periaux, and Olof Widlund, editors, Second International Symposium on Domain Decomposition Methods, Philadelphia, PA, 1989. SIAM.
[31] M. Dryja, B. Smith, and O. Widlund. Schwarz analysis of iterative substructuring algorithms for elliptic problems in three dimensions. SIAM J. Numer. Anal., 31:1662-1694, 1994.
[32] M. Dryja and O. Widlund. An additive variant of the Schwarz alternating method for the case of many subregions. Technical Report 339, also Ultracomputer Note 131, Department of Computer Science, Courant Institute, 1987.
[33] M. Dryja and O. Widlund. Some domain decomposition algorithms for elliptic problems. In L. Hayes and D. Kincaid, editors, Iterative Methods for Large Linear Systems, pages 273-291, San Diego, 1989. Academic Press.
[34] M. Dryja and O. Widlund. Towards a unified theory of domain decomposition algorithms for elliptic problems. In T. Chan, R. Glowinski, J. Periaux, and O. Widlund, editors, Third International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1990.
[35] M. Dryja and O. Widlund. Schwarz methods of Neumann-Neumann type for three-dimensional elliptic finite element problems. Comm. Pure Appl. Math., 48:121-155, 1995.
[36] C. Farhat. A Lagrange multiplier based divide and conquer finite element algorithm. J. Comput. Sys. Engrg., 2:149-156, 1991.
[37] C. Farhat and F. Roux. A method of finite element tearing and interconnecting and its parallel solution algorithm. Int. J. Numer. Meth. Engrg., 32:1205-1227, 1991.
[38] C. Farhat and F. Roux. An unconventional domain decomposition method for an efficient parallel solution of large scale finite element systems. SIAM J. Sci. Stat. Comp., 13:379-396, 1992.
[39] R. Glowinski, G. Golub, G. Meurant, and J. Periaux, editors. First International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1988.
[40] R. Glowinski, Y. Kuznetsov, G. Meurant, J. Periaux, and O. Widlund, editors. Fourth International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1991.
[41] R. Glowinski and M. Wheeler. Domain decomposition and mixed finite element methods for elliptic problems. In R. Glowinski, G. Golub, G. Meurant, and J. Periaux, editors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, Philadelphia, PA, 1988. SIAM.
[42] M. Griebel and P. Oswald. On the abstract theory of additive and multiplicative Schwarz algorithms. Numer. Math., 70:163-180, 1995.
[43] P. Grisvard. Elliptic Problems in Nonsmooth Domains. Pitman Advanced Publishing Program, Boston, 1985.
[44] G. Haase, U. Langer, and A. Meyer. A new approach to the Dirichlet domain decomposition method. In S. Hengst, editor, Fifth Multigrid Seminar, pages 1-59. Eberswalde, 1990.
[45] G. Haase, U. Langer, A. Meyer, and S. Nepomnyaschikh. Hierarchical extension operators and local multigrid methods in domain decomposition preconditioners. East-West J. Numer. Math., 2(3), 1994.
[46] D. Keyes and J. Xu, editors. Seventh International Symposium on Domain Decomposition Methods in Science and Engineering. American Mathematical Society, Providence, R.I., 1995.
[47] Y. Kuznetsov, J. Periaux, A. Quarteroni, and O. Widlund, editors. Sixth International Symposium on Domain Decomposition Methods in Science and Engineering. American Mathematical Society, Providence, R.I., 1994.
[48] G. Liang and J.-H. He. The non-conforming domain decomposition method for elliptic problems with Lagrangian multiplier. Chinese J. Numer. Meth. Appl., 15:8-19, 1993.
[49] J.-L. Lions and E. Magenes. Nonhomogeneous Boundary Value Problems and Applications, volume I. Springer, New York, Heidelberg, Berlin, 1972.
[50] P. Lions. On the Schwarz alternating method. I. In R. Glowinski, G. Golub, G. Meurant, and J. Periaux, editors, First International Symposium on Domain Decomposition Methods for Partial Differential Equations, SIAM, Philadelphia, 1988.
[51] J. Mandel. Balancing domain decomposition. Comm. in Numer. Meth. in Engr., 9:233-241, 1993.
[52] J. Mandel and M. Brezina. Balancing domain decomposition: Theory and performance in two and three dimensions. Technical report, Computational Mathematics Group, University of Colorado at Denver, March 1993.
[53] J. Mandel and M. Brezina. Balancing domain decomposition for problems with large jumps in coefficients. Math. Comp., 65:1387-1401, 1996.
[54] A. Matsokin and S. Nepomnyaschikh. A Schwarz alternating method in a subspace. Soviet Mathematics, 29(10):78-84, 1985.
[55] S. Nepomnyaschikh. Domain Decomposition and Schwarz Methods in a Subspace for the Approximate Solution of Elliptic Boundary Value Problems. PhD thesis, Computing Center of the Siberian Branch of the USSR Academy of Sciences, Novosibirsk, USSR, 1986.
[56] S. Nepomnyaschikh. Decomposition and fictitious domains methods for elliptic boundary value problems. In D. E. Keyes, T. F. Chan, G. Meurant, J. S. Scroggs, and R. G. Voigt, editors, Fifth International Symposium on Domain Decomposition Methods for Partial Differential Equations, pages 62-72, Philadelphia, 1992. SIAM.
[57] J. Necas. Les methodes directes en theorie des equations elliptiques. Academia, Prague, 1967.
[58] Y.-H. De Roeck and P. Le Tallec. Analysis and test of a local domain decomposition preconditioner. In R. Glowinski, Y. Kuznetsov, G. Meurant, J. Periaux, and O. Widlund, editors, Fourth International Symposium on Domain Decomposition Methods for Partial Differential Equations. SIAM, Philadelphia, PA, 1991.
[59] H. Schwarz. Gesammelte Mathematische Abhandlungen, volume 2, pages 133-143. Springer, Berlin, 1890. First published in Vierteljahrsschrift der Naturforschenden Gesellschaft in Zurich, volume 15, 1870, pp. 272-286.
[60] L. Scott and S. Zhang. Finite element interpolation of nonsmooth functions satisfying boundary conditions. Math. Comp., 54:483-493, 1990.
[61] B. Smith. Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity. PhD thesis, Courant Institute of Mathematical Sciences, September 1990. Tech. Rep. 517, Department of Computer Science, Courant Institute.
[62] B. Smith. A domain decomposition algorithm for elliptic problems in three dimensions. Numer. Math., 60(2):219-234, 1991.
[63] B. Smith. An optimal domain decomposition preconditioner for the finite element solution of linear elasticity problems. SIAM J. Sci. Stat. Comput., 13(1):364-378, January 1992.
[64] B. Smith, P. Bjorstad, and W. Gropp. Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, 1996.
[65] B. Smith and O. Widlund. A domain decomposition algorithm using a hierarchical basis. SIAM J. Sci. Stat. Comput., 11(6):1212-1220, 1990.
[66] P. Le Tallec. Domain decomposition methods in computational mechanics. Comput. Mech. Adv., 2:121-220, 1994.
[67] P. Le Tallec, Y. De Roeck, and M. Vidrascu. Domain-decomposition methods for large linearly elliptic three dimensional problems. J. of Comp. and Appl. Math., 34, 1991. Elsevier Science Publishers, Amsterdam.
[68] P. Le Tallec and T. Sassi. Domain decomposition with nonmatching grids: Augmented Lagrangian approach. Math. Comp., 64:1367-1396, 1995.
[69] C. Tong, T. Chan, and C. Kuo. A domain decomposition preconditioner based on a change to a multilevel nodal basis. SIAM J. Sci. Stat. Comput., 12(6), 1991.
[70] J. Xu. Theory of Multilevel Methods. PhD thesis, Cornell University, May 1989.
[71] J. Xu. Iterative methods by space decomposition and subspace correction. SIAM Review, 34:581-613, December 1992.
[72] J. Xu. Multigrid and domain decomposition methods. Lecture notes, Department of Mathematics, Penn State University, 1994.
[73] J. Xu. An auxiliary space preconditioning technique with application to unstructured grids. Computing, 56:215-235, 1996.
[74] J. Xu. Iterative methods by multigrid and domain decomposition. In S. D. Margenov and P. S. Vassilevski, editors, IMACS Series in Computational and Applied Mathematics, Volume 3, Iterative Methods in Linear Algebra, II, pages 229-253. IMACS, 1996.
[75] J. Xu and S. Zhang. Preconditioning the Steklov-Poincare operator by using Green's function. Math. Comp., 66:125-138, 1997.
[76] H. Yserentant. On the multi-level splitting of finite element spaces. Numer. Math., 49:379-412, 1986.
[77] H. Yserentant. Old and new convergence proofs for multigrid methods. Acta Numerica, 1992.