
EMC'96, Roma Tutorial session: "Numerical methods in EMC"

Weak formulations and edge-elements

A. Bossavit
Électricité de France, 1 Av. du Gal de Gaulle, 92141 Clamart, and Laboratoire de Génie Électrique de Paris (CNRS), Plateau du Moulon, 91192 Gif-sur-Yvette. Fax 33 1 4765 4118. Alain.Bossavit@der.edf.fr

1 Maxwell equations: Overview
1.1 Maxwell's model
1.2 Constitutive laws
1.3 Models derived from Maxwell's

2 The conduction problem: "complementary" formulations
2.1 The model
2.2 Complementary variational formulations
2.2.1 The scalar potential approach
2.2.2 The vector potential approach
2.3 "Standard" discretization (in scalar potential)

3 Whitney elements
3.1 Definitions and first properties
3.2 The Whitney complex
3.3 Metric properties of the complex
Bibliographical comments

4 Conduction: vector potential approach
4.1 Approximation by edge elements
4.1.1 A "constrained linear system"
4.1.2 Removing the constraints
4.2 Why not classical Lagrangian elements?
4.2.1 Boundary conditions are not easily enforced
4.2.2 On the same mesh, accuracy is poorer
4.2.3 The condition number of the final matrix is larger
4.3 Conclusion
Bibliographical comments



1.1 MAXWELL'S MODEL

Numerical EMC is part of a more general subject, the numerical study of Maxwell equations,

(1) − ∂td + rot h = j,

(2) ∂tb + rot e = 0,

as completed by constitutive laws, like these:

(3) d = ε e,

(4) b = µ h,

and more to come. The notation for the vector fields e, h, d, b (electric field1, magnetic field, magnetic induction and electric induction respectively) departs from the traditional E, H, D, B, capital and boldface, because of the "functional" viewpoint adopted in these notes: lower-case symbols denote individual objects (like vector fields), whereas capitals denote sets of such objects. Other notational conventions (SMALL CAPITALS for complex-valued fields and boldface for vectors of degrees of freedom) will be commented upon in due time. Given the current density j, as well as initial values (at time t = 0, for instance) for e and h, eqs. (1—4) do determine e, h, d, b for t ≥ 0. These four vector fields, taken together, should be construed as the mathematical representation of a physical phenomenon, that we shall call the electromagnetic field. The distinction thus made between the physical reality one wants to model, on the one hand, and the mathematical structure thanks to which this modelling is done, on the other hand, is important. The above mathematical structure (eqs. (1—4), together with the mathematical framework in which they make sense) is what one may call Maxwell's model. It accounts for situations in which one wishes to determine the field created by a known current distribution: wave propagation, radar problems, etc. Obviously, currents are not all known in most engineering problems, so this is only the head in a series of models, all based on (1—4), but with added features.

One defines electric charge by

(5) q = div d,

a scalar field. According to (1), one has thus

(6) ∂tq + div j = 0,

which is the local expression of charge conservation. Note that if j is given, one gets the charge by integrating in time, provided the charge is known at some instant. So if one assumes that j and q are zero before time 0, then q(t, x) = − ∫0t (div j)(s, x) ds.

There is a substantial difference between equations (1) and (2) on the one hand, and the constitutive laws (3) and (4) on the other hand. The first two are universal, always valid, without any exception, whatever the nature of the matter in which the field develops. In contrast, eqs. (3) and (4), the form of which, as given here, is far from being the most general one, account for material properties, inasmuch as the terms µ and ε may depend on position. Treatises of electromagnetism often add two equations to (1—4), namely (5) and div b = 0. But the latter stems from Faraday's law (2), if one assumes a null b (or even just a null div b) before the initial time, so there is little justification in according to these relations the same status as to (1) and (2). A (rightful) concern for formal symmetry might suggest to write (2) as ∂tb + rot e = − k, where k would be a given field, the magnetic current, and to define magnetic charge as qm = div b (electric charge q would then be denoted by qe), hence the equation ∂tqm + div k = 0, which would express magnetic charge conservation. Since k and qm are null in all known physical situations, this generalization seems pointless. But the symmetry in the equations thus revealed is worth noticing, and we shall observe some of its consequences.
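Maxwell's model lends itself directly to numerical time integration. As an illustration only (not part of the original notes), here is a minimal one-dimensional leapfrog sketch of (1)—(2) with the laws (3)—(4), in normalized units ε = µ = 1 and with j = 0; the grid size, units and pulse shape are my own assumptions. It also monitors the discrete counterpart of the electromagnetic energy W, which this non-dissipative scheme keeps nearly constant.

```python
# 1D leapfrog integration of Maxwell's equations (1)-(2) with d = eps*e,
# b = mu*h, in normalized units eps = mu = 1 and with j = 0 (a sketch,
# not the scheme of these notes).  E lives at integer grid points, H at
# half-integer points, on a periodic grid.
import math

n, dx = 200, 1.0
dt = 0.5 * dx            # Courant number 0.5 (stable, since c = 1 here)

E = [math.exp(-((i - 50) / 8.0) ** 2) for i in range(n)]  # initial pulse
H = [0.0] * n

def energy(E, H):
    # discrete counterpart of W = 1/2 * integral of (eps|e|^2 + mu|h|^2)
    return 0.5 * dx * (sum(e * e for e in E) + sum(h * h for h in H))

w0 = energy(E, H)
for step in range(800):
    # eq. (2): mu * dH/dt = -dE/dx   (H[i] sits between E[i] and E[i+1])
    for i in range(n):
        H[i] -= dt / dx * (E[(i + 1) % n] - E[i])
    # eq. (1): eps * dE/dt = -dH/dx  (no current, j = 0)
    for i in range(n):
        E[i] -= dt / dx * (H[i] - H[(i - 1) % n])
w1 = energy(E, H)

print(w0, w1)  # the leapfrog scheme is non-dissipative: w1 stays close to w0
```

The staggering of E and H in space (and time) is what makes the two curl equations discretize into simple centered differences.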

1.2 CONSTITUTIVE LAWS

In all concrete problems, one deals with composite systems, analyzable into subsystems, or compartments: electromagnetic, mechanical, thermal, chemical, etc. Each compartment is subject to its own equations (partial differential equations, most often), whose right-hand sides are obtained by solving the equations relative to other compartments. Equations (1—4) govern the electromagnetic compartment. But the latter is rarely the only one concerned. There is at least another one in most situations, which means that one has to deal with a coupled system of partial differential equations, in general. For instance, the given j in (1—4) may come from the resolution of a problem of dynamics of the material particles that bear the charges. To solve this problem, one would apply the laws of dynamics (f = m a, etc.) to these particles. Given j, system (1—4) would determine e and b, hence Lorentz forces, hence the movement of charged matter, hence j again, and since this j must be the same we started from, we have there a "fixed point condition" that could in principle determine j, hence the field, by solving (1—4) with this j. So the coupled problem may prove to be well-posed, though overwhelmingly difficult to solve. Needless to say, this is rarely done, and one takes shortcuts that consist in assuming simple solutions of the problem of charge dynamics. One is Ohm's law,

(9) j = σ e,

deemed valid in non-moving passive conductors (and insulators, for which σ = 0). Another, valid in generators, or "active" conductors, consists simply in assuming that the current density (then denoted by jg, g for "given") is imposed, independently of the local electromagnetic field, in the corresponding region of space. It is then convenient to set σ = 0 in such regions, which allows one to write a generalized Ohm's law, valid for generators, conductors and insulators alike, and thus, most often, uniformly valid in all space:

(10) j = σ e + jg.

It all goes then as if the charge dynamics problem was solved in advance, the result being given by (10). One may then append (10) to (1—4). The system of equations thus obtained (or "Maxwell's model with Ohm's law") thus subsumes the theory of non-moving (active and passive) conductors. Most constitutive laws in electromagnetism have thus the character of a simple summary of what is in reality a complex interaction, a coupled problem with several compartments. Actually, model (1—4) itself is not "purely electromagnetic" unless ε = ε0 and µ = µ0: otherwise, it's already the simplified description of a coupled system, with polarization (hence ε ≠ ε0) and magnetization (hence µ ≠ µ0) phenomena. There would be much to say about these interactions between the electromagnetic and other compartments. Let's just mention this, which is implied by the expression of Lorentz forces, but which one can as well consider as a postulate as regards the energy balance between compartments:

Proposition 1.1. The power density yielded by the electromagnetic compartment of a system to other compartments is at each time given by

(12) p = j · e

(that is, the power density p(x) is j(x) · e(x) at all points x, at all times). In the case of a passive conductor, this is Joule loss, and therefore, thermal power. In the case of generators, it is up to sign the power needed to push charges up the electric field. (In the case of moving conductors, j · e would be in part Joule heating and for the other part, mechanical work.) The total yielded power is thus P = ∫E3 p(x) dx, that is P = ∫E3 j · e.2 After (1) and (2), integrating by parts, P = − ∫E3 (h · ∂tb + e · ∂td). If (3) and (4) also hold, then P = − ∂tW, where

(13) W(t) = 1/2 ∫E3 (µ |h(t)|2 + ε |e(t)|2),

a quantity that may then legitimately be called electromagnetic energy: indeed, it appears to be the energy stored in the electromagnetic compartment of the system.

1.3 MODELS DERIVED FROM MAXWELL'S

Concrete problems in electromagnetism rarely require the solution of Maxwell equations in full generality, because of various simplifications due to the smallness of some terms. The displacement-current term ε ∂te, for instance, is often negligible (not so often in EMC modelling, but often enough), hence an important submodel, eddy-currents theory:

(14) ∂tb + rot e = 0, rot h = j, j = σ e + jg,

with in particular, in passive conductors (where one may eliminate e from (9) after division by σ), ∂t(µh) + rot(σ−1 rot h) = 0. Another frequent simplification is the passage to complex-number representations. If the source current jg is sinusoidal in time3, that is, of the form jg(t, x) = Re[Jg(x) exp(iωt)], where Jg is a complex-valued vector field, and if all constitutive laws are linear, one may look for the electromagnetic field in similar form, h(x) = Re[H(x) exp(iωt)], etc., the unknowns now being the complex fields H, E, etc., independent of time. Maxwell's model with Ohm's law then assumes the following form:

(15) − iω D + rot H = Jg + σ E, iω B + rot E = 0, D = ε E, B = µ H.

It is convenient there to redefine ε by assigning to this symbol the complex value ε + σ/iω, which allows the incorporation of the term σ E into iω D, whence the model

(15') − iω D + rot H = Jg, iω B + rot E = 0, D = ε E, B = µ H,

which is, with appropriate boundary conditions, the microwave oven problem. In (15'), ε is now complex, and one often writes it as ε = ε' − iε", where the real coefficients ε' and ε", of same physical dimension as ε0, are nonnegative. (They often depend on temperature.) Nothing forbids accepting complex µ's as well, and not only for the sake of symmetry. This case really occurs with ferrites, and also in some modellings, a bit simplistic perhaps, of hysteresis.

An even more drastic simplification obtains when one may consider the phenomena as independent of time (steady direct current at the terminals), or with slow enough variations. Let us review these models, dubbed stationary, derived from Maxwell's model by assuming that all fields are independent of time. In this case, one has in particular ∂tb = 0, and thus rot e = 0. So, after (3) and (5),

(16) rot e = 0, d = ε e, div d = q,

and this is enough to determine e and d in all space, if the electric charge q is known: setting e = − grad ψ, where ψ is the electric potential, one has indeed − div(ε grad ψ) = q, a well-posed4 Poisson problem. In the case where ε = ε0 all over, the solution is given by ψ(x) = (4π ε0)−1 ∫ |x − y|−1 q(y) dy, as one will check by differentiating under the summation sign in order to compute ∆ψ. Model (16) is the core of electrostatics. In a similar way, one has rot h = j, after (1), whence, taking into account div b = 0 and (4), the model of magnetostatics:

(17) rot h = j, b = µ h, div b = 0,

and this determines b and h in all space when j is given. If µ = µ0 all over, the solution is obtained in closed form by introducing the vector field a(x) = (4π)−1 µ0 ∫ |x − y|−1 j(y) dy, called magnetic vector potential, and by setting b = rot a. (By differentiating inside the integral, one will find Biot and Savart's formula, which directly gives b in integral form.) Constitutive laws more involved than b = µ h, nonlinear, or even hysteretic, often occur, in the case of ferromagnetic materials, and problem (17) then appears as an intermediate in calculations (one step in an iterative process, for instance), with then, in general, a position-dependent permeability µ.

Still under the hypothesis of stationarity, one has ∂tq = 0, and thus div j = 0, after (6), hence

(18) rot e = 0, j = σ e, div j = 0,

in passive conductors. This is the conduction, or electrokinetics model. Unlike the previous ones, it does not concern the whole space, barring exceptions, and thus requires boundary conditions, at the air-conductor interfaces, in order to be properly posed. This is a convenient model problem in order to introduce weak formulations and finite elements, for it has, in this respect, two main virtues. First, it escapes most difficulties inherent in Maxwell's model, because of the stationarity of the fields. And yet, the complexity of setting up boundary conditions is enough (more than enough, many will feel) to be representative of what one encounters in more general situations. Let's explore this model further.

1 Italics, besides their standard use for emphasis, signal notions which are implicitly defined by the context.
2 E3 (for "three-dimensional Euclidean affine space") denotes ordinary space, equipped with the dot-product here denoted by "·", and therefore with a measure of distances, areas, volumes, etc. One denotes by dx the volume element, or the area element, according to whether the integral is over a volume or a surface. Each time this does not promote confusion, I'll omit this symbol, and write e.g. ∫ p rather than ∫ p(x) dx.
3 One often says "harmonic", but beware of this use, not always devoid of ambiguity.
4 "Well posed": refers to a problem of which one can prove it has a solution and a unique one, with, moreover, continuity of this solution with respect to the data.
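The integral formula for the electrostatic potential quoted above can be checked numerically. The following sketch (my own illustration, in units where ε0 = 1; the blob geometry is an arbitrary assumption) evaluates ψ(x) = (4πε0)−1 ∫ |x − y|−1 q(y) dy by direct summation for a small cubic charge blob, and compares it, far from the blob, with the point-charge potential Q/(4πε0 r).

```python
# Direct evaluation of psi(x) = (4*pi*eps0)^-1 * integral of q(y)/|x-y| dy
# for a small cubic blob of total charge Q, in units eps0 = 1 (an assumption
# of this sketch).  Far from the blob, psi should approach Q/(4*pi*r).
import math

Q, h, half = 1.0, 0.2, 2          # 5x5x5 point-charge cloud, spacing h
cells = [(i * h, j * h, k * h)
         for i in range(-half, half + 1)
         for j in range(-half, half + 1)
         for k in range(-half, half + 1)]
q = Q / len(cells)                # equal share of the total charge per cell

def psi(x, y, z):
    return sum(q / (4 * math.pi * math.dist((x, y, z), c)) for c in cells)

r = 10.0
exact = Q / (4 * math.pi * r)     # potential of a point charge Q at distance r
print(psi(r, 0.0, 0.0), exact)
```

By the cubic symmetry of the blob, dipole and quadrupole corrections vanish, so the agreement at r = 10 is already very close.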



The conduction problem: "complementary" formulations

We study here pb. (18) of Chap. 1: rot e = 0, j = σ e, and div j = 0, in a simple configuration, with finite element discretization as our aim. As we shall see, there are essentially two ways to proceed, dual in some sense (or, as one says for reasons that will be explained, "complementary"), and the two techniques rely, one on standard finite elements, the other one on edge elements.

FIGURE 2.1. A simple model problem: to compute the resistance of a conductive workpiece, pinned between two perfectly conductive electrodes. On the right, the symbols in use and their meaning. Path γ goes from Se0 to Se1, with τ as its unit tangent vector. The "cutting surface" Σ has its boundary in Sj, with n as its normal unit field (oriented in the same sense as γ). The unit vector τ, tangent to the boundary ∂Σ, is such that n × τ points towards the inside of Σ ("Ampère's right-hand rule"), which thus orients ∂Σ, once n is given over Σ.

2.1 THE MODEL

The problem is as follows (Fig. 2.1). Let D be a bounded region of space, representing a moderately conductive piece in which one wishes to know the distribution of currents, in order to compute (for instance) its resistance. Let S be its surface, split into two subregions Se and Sj: the former (made of two pieces: Se0 and Se1) is the surface in contact with the electrodes, and the latter is the lateral, insulating surface. The problem consists in finding two vector fields j and e in D, verifying

(1) rot e = 0,    (2) n × e = 0 on Se,

(3) div j = 0,    (4) n · j = 0 on Sj,

as well as Ohm's law

(5) j = σ e,

where σ is the conductivity (positive in D, and possibly a function of position). Eq. (2) is characteristic of perfectly conductive surfaces. Eq. (4) characterizes insulating surfaces: no current flows across the free surface of the piece. Obviously, this modelling is still incomplete, since the information about the causes of the passage of current through the piece is missing. This information can be given in two distinct ways, at will. The first one consists in imposing the circulation of e along a path γ joining Se0 to Se1: it should equal the voltage drop V between the upper and the lower electrodes. Then the required resistance is R = V/I, where I is the intensity, which itself is obtained by taking the flux of field j across a "cutting surface" such as Σ (Fig. 2.1), hence I = ∫Σ n · j. But since the problem is linear, one may as well take I as given. The resistance is then R = V/I, where V is the circulation of field e along γ: V = ∫γ τ · e.

Remark 2.1. According to (1) [resp. (3)], the circulation of e [resp. the flux of j] only depends on the endpoints of γ [resp. on the boundary of Σ]. But there is more: because of (2) [resp. (4)], V [resp. I] is the same for all paths that go, like γ, from Se1 to Se0 [resp. all surfaces that hinge, like Σ, on Sj, and are similarly oriented by their normal field]. ◊

The problem is thus, finally: find e and j that satisfy (1—5) and one of the two relations

(6) ∫γ τ · e = V,    (7) ∫Σ n · j = I,

with V or I, as the case may be, given. The resistance looked for is then R = V/I, the other quantity, be it I or V, being given by the one among relations (6) and (7) that has not been enforced.

2.2 COMPLEMENTARY VARIATIONAL FORMULATIONS

There are two ways of attack, and for each, two variants, according to whether I or V is taken as given. The first one consists in beginning by setting e = − grad ψ, which enforces rot e = 0, and the second one, in setting j = rot u, which enforces div j = 0. Let's tackle the first one first.

2.2.1 The scalar potential approach

Of course, ψ must possess a gradient, which itself must be square-integrable, since this corresponds to a finite Joule loss. The most natural functional space in which to look for ψ is therefore Sobolev's space, made of square-summable5 functions ψ, whose gradient is itself square-summable over D. This space is commonly denoted by H1(D), but for consistency with other notation, we shall name it L2grad(D). We shall call admissible (electric) potentials those which also satisfy (2), that is, those which belong to the space

(8) Ψ = {ψ ∈ L2grad(D) : n × grad ψ = 0 on Se},

which are those for which ψ = const. separately on Se0 and Se1, the values ψ0 and ψ1 of these constants not being fixed in advance. A first variant consists in fixing these constants in such a way that condition (6) be satisfied. One thus restricts to the ψs of the affine subspace

ΨV = {ψ ∈ Ψ : ψ = 0 on Se0, ψ = V on Se1}.

Then, one enforces (3) and (4) in the weak sense6 by requiring

(9) ∫D j · grad ψ' = 0  ∀ ψ' ∈ Ψ0,

where Ψ0 is the vector subspace parallel to ΨV, that is

Ψ0 = {ψ ∈ Ψ : ψ = 0 on Se}.

As Ohm's law (5) must still be enforced, one finally sets j = − σ grad ψ in (9), whence the final formulation: find ψ ∈ ΨV such that

(10) ∫D σ grad ψ · grad ψ' = 0  ∀ ψ' ∈ Ψ0.

This is the weak formulation, in electric potential, of the conduction problem. Let us now set, for any ψ ∈ L2grad(D),

P(ψ) = ∫D σ |grad ψ|2.

If ψ is solution to (10), this quantity is Joule power. But one easily verifies (differentiate P with respect to ψ) that problem (10) is equivalent to

(11) find ψ ∈ ΨV such that P(ψ) ≤ P(ψ') ∀ ψ' ∈ ΨV,

which is the variational7 formulation, in electric potential, of the conduction problem. The electric potential that effectively settles is thus the one, among all admissible potentials, that minimizes Joule loss.

Second variant: this time, we don't impose the values of potential ψ on Se0 and Se1 from the outset. For ψ ∈ L2grad(D), let us set V(ψ) = (∫Se1 ψ)/area(Se1) − (∫Se0 ψ)/area(Se0). The value assumed by this functional for ψ ∈ Ψ is the potential drop ψ1 − ψ0, which is also the circulation ∫γ τ · e. In the first variant, when V was given, we had thus selected one of the subspaces of Ψ characterized by V(ψ) = V (the one for which ψ0 = 0 and ψ1 = V). Now that I is given, one replaces (10) by: find ψ ∈ Ψ such that8

(10') ∫D σ grad ψ · grad ψ' = I V(ψ')  ∀ ψ' ∈ Ψ,

and (11) by: find ψ ∈ Ψ such that

(11') P(ψ) − 2 I V(ψ) ≤ P(ψ') − 2 I V(ψ')  ∀ ψ' ∈ Ψ.

One finds V by setting V = V(ψ). Letting ψ' become equal to ψ in (10'), one remarks that

P(ψ) = ∫D σ |grad ψ|2 = − ∫D div(σ grad ψ) ψ + ∫S n · (σ grad ψ) ψ = − ∫S n · j ψ = − ∫Se0 n · j ψ − ∫Se1 n · j ψ = (ψ1 − ψ0) ∫Se0 n · j = I V(ψ) = I V.

Computing P(ψ) thus yields I in the first variant, and V in the second one. As for the resistance, one has, in the first variant, R−1 = P(ψ)/V2, where ψ is the solution of (11), hence the following variational characterization:

R−1 = inf{P(ψ)/V2 : ψ ∈ ΨV}.

In the second variant, P(ψ) − 2 I V(ψ) = − I V = − R I2, so

R = sup{[2 I V(ψ) − P(ψ)]/I2 : ψ ∈ Ψ},

whence, in both cases, a lower bound for the resistance.

2.2.2 The vector potential approach

The second way to deal with the problem consists in setting j = rot u, with u ∈ IL2rot(D),9 the vector potential, which enforces div j = 0. In order to satisfy condition (4), one requires n · rot u = 0 on Sj, hence n · j = 0. Let us denote by U the space of admissible vector potentials, here those that satisfy (2) and (4):

U = {u ∈ IL2rot(D) : n · rot u = 0 on Sj}

(compare with (8)). Set J(u) = ∫Σ n · rot u. By Stokes' theorem,

J(u) = ∫∂Σ τ · u,

the circulation of u on ∂Σ (cf. Fig. 2.1). Condition (7) about the intensity thus amounts to J(u) = I. First variant: let us look for u in UI = {u ∈ U : ∫∂Σ τ · u = I}. Thanks to the weak formulation

(14) ∫D e · rot u' = 0  ∀ u' ∈ U0,

where U0 = {u ∈ U : ∫∂Σ τ · u = 0} is the subspace parallel to UI, conditions (1) and (2) are enforced: rot e = 0 and n × e = 0 on Se. Indeed, (14) implies (integrate by parts)

0 = ∫D e · rot u' = ∫D rot e · u' − ∫S (n, e, u')  ∀ u' ∈ U0

(where ( , , ) denotes the mixed product), whence rot e = 0 by taking u' supported in D, then n × e = 0 on Se by taking u' null on Sj. Last, e = σ−1 rot u in (14) yields the weak formulation, in vector potential, of the conduction problem: find u ∈ UI such that

(16) ∫D σ−1 rot u · rot u' = 0  ∀ u' ∈ U0.

Joule dissipation, as a function of u, is Q(u) = ∫D σ−1 |rot u|2, so problem (16) amounts to

(17) find u ∈ UI such that Q(u) ≤ Q(u') ∀ u' ∈ UI,

a variational formulation again, but this time, in terms of the vector potential. The u in (16) or (17) is not unique. (This is why it was denoted by u, instead of h: the magnetic field h, which does satisfy rot h = j, is but one of the possible vector potentials for j.) But this is of no import, for all solutions deliver the same j = rot u. The second variant would consist in replacing (16) by: find u ∈ U such that

(16') ∫D σ−1 rot u · rot u' = V J(u')  ∀ u' ∈ U,

and (17) by: find u ∈ U such that

(17') Q(u) − 2 V J(u) ≤ Q(u') − 2 V J(u')  ∀ u' ∈ U.

This time, V is given, and one gets I = J(u) afterwards. By analogy with what happened before, one expects that Q(u) = VI = RI2, if u is solution to (16) or (17). Such is indeed the case. Let us first remark that, since rot e = 0, the field − e = − σ−1 rot u is equal to the gradient of some function ψ, which can always be taken to be 0 on Se0, but the value of which on Se1 is precisely the V one is looking for. Consequently,

Q(u) = ∫D σ−1 |rot u|2 = − ∫D grad ψ · rot u = − ∫S ψ n · rot u = − ∫Se1 ψ n · rot u = − V ∫Se1 n · rot u = V ∫Σ n · rot u = V ∫∂Σ τ · u = V I,

by the very definition of UI. (The change of sign is due to the normals being of opposite orientations on Se1 and on Σ.) So Q(u) = RI2, for each u solution to (16). (One will check without trouble that Q(u) = V2/R for any solution u of (17').) Since P(ψ) and Q(u) were obtained as lower bounds on the sets ΨV and UI respectively, one has the following bilateral estimates for R:

(18) V2/P(ψ') ≤ V2/P(ψ) = R = Q(u)/I2 ≤ Q(u')/I2  ∀ ψ' ∈ ΨV, u' ∈ UI.

One immediately sees the practical value of this: it will be enough to find some function ψ' in ΨV and some vector field u' ∈ UI to get a bilateral estimate10 of the quantity of interest ("complementarity" of the two formulations). This is what the approximate resolution of (10) and (16) by Galerkin's method will allow one to do.

5 The fondness of numericists for such functional spaces is justified by the comfortable working conditions provided by Hilbert spaces, in which the same amenities as in ordinary Euclidean space are available (distance, dot product, orthogonality ...), except finite dimension. Moreover, there is often an interesting physical interpretation for the norm of an element of such a space.
6 The "weak formulation" of a system of equations Au = f is the predicate (Au, u') = (f, u') ∀ u', where ( , ) denotes some scalar product. The u' are called "test-functions" (or "test-fields", etc.). The idea is to "test" the equality Au = f by comparing the weighted averages (or "moments") of both sides for all possible "weights" u'.
7 A problem is "variational" if it can be expressed as the requirement to find x that minimizes some functional f. If f is differentiable, solving the equation ∂f(x) = 0 will solve the problem. If there is a weak form of this equation (as for instance (10) above), one calls it the Euler equation of the variational problem. By abuse of language, one often refers to the Euler equation as the "variational formulation", but "weak formulation" is more correct.
8 According to the "trace theorem" of the theory of Sobolev spaces, V is a continuous functional over L2grad(D), so we are in the clear.
9 By definition, IL2rot(D) = {u ∈ IL2(D) : rot u ∈ IL2(D)}, and IL2div(D) = {u ∈ IL2(D) : div u ∈ L2(D)}. These are Hilbert spaces when equipped with scalar products imitated from the one of L2grad(D).
10 One may formulate and prove an adequate nonlinear generalization of this bilateral estimate [B4]. In the linear case, the idea can be traced back to Synge [PS].
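The bilateral estimate (18) can be illustrated on a toy case where everything is computable by hand. The sketch below is my own, not from the text: D is the unit square (unit thickness, electrodes on the faces x = 0 and x = 1), and, as a simplification, the upper bound is evaluated directly from a trial divergence-free current j' rather than from a vector potential u' with rot u' = j' (such a u' exists, since div j' = 0). The trial fields, a linear potential and a uniform current, are arbitrary admissible choices.

```python
# Bilateral estimate (18) on a toy conductor: D = unit square (unit thickness),
# electrodes on x = 0 and x = 1, insulated on y = 0 and y = 1.
# sigma depends on x only (two layers in series), so the exact solution is
# one-dimensional and R_exact is the series (harmonic-mean) resistance.
# Trial fields (my own choice for this sketch, not from the text):
#   psi'(x, y) = V*(1 - x)   -> admissible potential, lower bound V^2/P(psi')
#   j'(x, y)   = I * x_hat   -> admissible div-free current, upper bound Q/I^2
# Integrals are evaluated by a midpoint rule on an n x n grid.
n, V, I = 400, 1.0, 1.0

def sigma(x, y):
    return 1.0 if x < 0.5 else 4.0       # conductivity of the two layers

h = 1.0 / n
pts = [((i + 0.5) * h, (j + 0.5) * h) for i in range(n) for j in range(n)]

P = sum(sigma(x, y) * V * V for x, y in pts) * h * h   # P(psi') = V^2 int sigma
Q = sum(I * I / sigma(x, y) for x, y in pts) * h * h   # Q(j') = I^2 int 1/sigma

lower = V * V / P             # lower bound for R, left side of (18)
upper = Q / (I * I)           # upper bound for R, right side of (18)
R_exact = 0.5 / 1.0 + 0.5 / 4.0   # layers in series: R = sum of l_i / sigma_i

print(lower, R_exact, upper)  # lower <= R_exact <= upper; here upper is tight
```

For layers in series the uniform current happens to be the exact one, so the upper bound is sharp, while the linear potential is not exact and the lower bound is strict: precisely the "pincer" that complementarity promises.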


2.3 "STANDARD" DISCRETIZATION (SCALAR POTENTIAL)

To discretize (10) or (11), we'll replace ΨV and Ψ0, in these formulations of the problem, by appropriate subspaces, of finite dimension, designed so as to approximate the boundary conditions well. The reader is probably familiar with the standard "hat functions", or "nodal Lagrange finite elements (of polynomial order 1)", of finite elements theory: given a tetrahedral mesh11 m of D, such functions (here denoted by wn, for reasons that will only become clear in the next Chapter) are associated with each node n. They are continuous, piecewise affine, with wn(x) = 1 when point x coincides with node n, and wn(x) = 0 at other nodes. Let N be the set of nodes, and N(Σ), for any surface Σ, the subset of nodes belonging to Σ (and its boundary). The number of nodes is denoted N or N(Σ) as the case may be. Again anticipating the next Chapter, we denote by W0m the functional space (of finite dimension) generated by the wn: it's thus the space of piecewise affine (relative to mesh m) continuous functions over D. Let us define

Ψ0m = {ψ = ∑n ∈ N ψn wn : ψn = 0 ∀ n ∈ N(Se)} ≡ Ψ0 ∩ W0m,

ΨVm = {ψ = ∑n ∈ N ψn wn : ψn = 0 ∀ n ∈ N(Se0), ψn = V ∀ n ∈ N(Se1)} ≡ ΨV ∩ W0m,

and consider the problem: find ψ ∈ ΨVm such that

(19) ∫D σ grad ψ · grad ψ' = 0 ∀ ψ' ∈ Ψ0m.

This linear system with respect to the nodal unknowns ψn, index n spanning the set of nodes in N − N(Se), is equivalent to

(20) find ψ ∈ ΨVm such that P(ψ) ≤ P(ψ') ∀ ψ' ∈ ΨVm,

which is a quadratic optimization problem with respect to these same nodal variables. The properties of this system (symmetric matrix of entries ∫D σ grad wn · grad wm, positive definiteness, etc.) are well known. Clearly, this method provides a lower bound for R in (18). So we face the task of finding an upper bound, by following a path similar to (16)—(17). This will be done in Chap. 4, after we have presented, in Chap. 3, the appropriate finite elements.

11 If D is not a polyhedron, which after all is the general case, one first defines a "reference mesh" of a polyhedral region homeomorphic to D, that is then continuously mapped onto D.
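The entries ∫D σ grad wn · grad wm of the matrix in (19) are assembled tetrahedron by tetrahedron. Since the wn are affine on each tetrahedron, their gradients are constant there, so the element contribution is simply σ vol(T) ∇wn · ∇wm. The following sketch of this local computation is my own illustration, under the assumption of a σ constant per tetrahedron.

```python
# Local contribution of one tetrahedron T to the matrix of (19):
#   K[i][j] = sigma * vol(T) * grad(w_i) . grad(w_j),
# since the barycentric functions w_i are affine on T (constant gradients).
# Sketch only; assumes sigma constant on T.

def sub(a, b):  return [a[k] - b[k] for k in range(3)]
def dot(a, b):  return sum(a[k] * b[k] for k in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def grads_and_vol(v):
    """Gradients of the 4 barycentric coordinates of tetrahedron v[0..3]."""
    a, b, c = sub(v[1], v[0]), sub(v[2], v[0]), sub(v[3], v[0])
    det = dot(a, cross(b, c))          # 6 * signed volume
    g1 = [x / det for x in cross(b, c)]
    g2 = [x / det for x in cross(c, a)]
    g3 = [x / det for x in cross(a, b)]
    g0 = [-(g1[k] + g2[k] + g3[k]) for k in range(3)]   # since sum of w_i = 1
    return [g0, g1, g2, g3], abs(det) / 6.0

def local_stiffness(v, sigma=1.0):
    g, vol = grads_and_vol(v)
    return [[sigma * vol * dot(g[i], g[j]) for j in range(4)] for i in range(4)]

# Reference tetrahedron (0, e1, e2, e3):
K = local_stiffness([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(K[0][0], K[1][1], K[0][1])   # 0.5, 1/6, -1/6 on the reference element
```

Each row of K sums to zero, the discrete trace of the identity ∑n wn = 1 (constant potentials dissipate nothing), which is also why the global matrix is only positive definite once the boundary nodes of N(Se) are constrained.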



(∇ is short for "gradient"), and let us denote by W 1 the space of vector fields generated by the wes. Similarly, W2 will be the space generated by the wf s, one per face f = {l, m, n} (cf. Fig. 3.2), with



Whitney elements

wf = 2(wl ∇wm × ∇wn + wm ∇wn × ∇wl + wn ∇wl × ∇wm) .

Last, W 3 is generated by functions wT , one for each tetrahedron T, equal to 1/volume(T) on T and 0 elsewhere. (Its analytical expression in the style of (2) and (3), which one may guess about as an exercise, is of little importance.)

Being a little more thorough than would be strictly required by the intended application to the conduction problem, we'll present in this Chapter a family of geometrical objects introduced around 1957 [ W h] by Whitney (Hassler Whitney, 1907-1989, one of the masters of differential geometry), known as "Whitney (differential) forms". They constitute a rich structure, which happens to be the right framework in which to develop a finite element discretization of electromagnetic theory. I call them "Whitney elements" here for this reason.




Let D ⊂ E3 be a bounded region of space, with a (piecewise) smooth boundary S. Consider a tesselation of D by tetrahedra (or, by an easy generalization, curved images of tetrahedra under smooth mappings). It must be a "mesh", as required by finite element practice, that is, two tetrahedra should intersect, if they do, along a common face, edge or node. One denotes by N , E, F, T the sets of simplices of one and the same dimension of this mesh, that is nodes, edges, faces and tetrahedra respectively, and by m the mesh itself. To each node n, let us attach the function wn, continuous, piecewise affine, equal to 1 at n and to 0 at other nodes. (This way, wn(x) is the barycentric coordinate of point x with respect to node n, if x and n are part of the same tetrahedron. The definition is devised in such a way that the domain of function wn be all D.) Remark the identity (1)

3.1. The "edge element", or Whitney element of degree 1 associated with edge e = {n, m}, here shown on a single tetrahedron with e as one of its edges. The arrows suggest how the vector field we defined in (2) behaves. At point n, for instance, we = grad wm , after (2), and this vector is orthogonal to the face opposite to m. FIGURE

Thus to each simplex s its attached field, scalar- or vector-valued. These fields are Whitney elements. We'll review their main properties, all easy to prove. (The proposed exercises may help.) First, - the value of wn at node n is 1 (and 0 at other nodes), - the circulation of we along edge e is 1, - the flux of wf across face f is 1, - the integral of wT over tetrahedron T is 1, (and also, in each case, 0 for other simplices).

∑ n ∈ N wn = 1

over D. One will denote by W0 the finite-dimensional space generated by the wns. (It depends on m , and should be denoted by W0(m), or W 0m, but index m will be understood.) Now, let e be an edge, joining n to m (in this order 12). To edge e, let us associate the vector field (cf. Fig. 3.1) (2)

That is, a 1-simplex of m. More generally, a p-simplex of m is a list of p nodes, plus the choice of one of the two equivalence classes obtained by considering as equivalent two lists which derive from each other by an even permutation. So {i, j, k} and {j, k, i} are the same face, but {i, k, j} is the face of opposite orientation. Only one of them is supposed to belong to m. It goes the same in all dimensions: all simplices of the mesh are oriented. Orientations are not supposed to "match" when a simplex is contained into a higher dimensional one. (This point will be made precise later, when we introduce "incidence matrices".) 12

we = wn ∇wm − wm ∇wn -8-

the standard P 1 Lagrange elements (polynomial on each tetrahedron, of polynomial degree 1) of finite elements theory. For p = 1 or 2, however, this calls for an unconventional interpretation of the degrees of freedom (DoF). Take h in W 1 for instance. Then, by definition,




h = ∑ e ∈ E he we,   (4)

where the h e are real or complex-valued coefficients. As the circulation of we is 1 along edge e and 0 along others, the circulation of h along edge e is the degree of freedom h e. So the DoF are associated with edges of the mesh, not with nodes, as is customary. This way, if b ∈ W 2, one has b = ∑ f ∈ F bf wf, and the bfs are the fluxes of b across faces. So in this case, degrees of freedom sit at faces. Last, the DoF of functions in W 3, one for each tetrahedron, can be localized at the centers of these.
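The circulation properties just listed are easy to check numerically. The following sketch (ours, not part of the original text) evaluates the edge element (2) on a reference tetrahedron, with the barycentric coordinates playing the role of the wn, and verifies that its circulation is 1 along its own edge and 0 along the others:

```python
import numpy as np

# Reference tetrahedron with vertices p0..p3; barycentric coordinate
# gradients are constant on the element.
grads = {0: np.array([-1., -1., -1.]),   # grad w0, with w0 = 1 - x - y - z
         1: np.array([1., 0., 0.]),      # grad w1, w1 = x
         2: np.array([0., 1., 0.]),      # grad w2, w2 = y
         3: np.array([0., 0., 1.])}      # grad w3, w3 = z
verts = {0: np.zeros(3), 1: np.array([1., 0., 0.]),
         2: np.array([0., 1., 0.]), 3: np.array([0., 0., 1.])}

def w(n, x):
    """Barycentric coordinate wn at point x (an affine function)."""
    return 1.0 - x.sum() if n == 0 else x[n - 1]

def w_edge(n, m, x):
    """Whitney edge element for edge {n, m}: wn grad wm - wm grad wn."""
    return w(n, x) * grads[m] - w(m, x) * grads[n]

def circulation(n, m, a, b, steps=1000):
    """Midpoint-rule line integral of w_{n,m} along the segment [a, b]."""
    t = (np.arange(steps) + 0.5) / steps
    pts = a + np.outer(t, b - a)
    return sum(w_edge(n, m, p) @ (b - a) for p in pts) / steps

# Circulation of w_{0,1} along its own edge {0,1} is 1 ...
print(circulation(0, 1, verts[0], verts[1]))   # ~1.0
# ... and 0 along any other edge, e.g. {0,2}:
print(circulation(0, 1, verts[0], verts[2]))   # ~0.0
```

The same check, applied to any of the six edges, reproduces the Kronecker-delta pattern of the DoF.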


FIGURE 3.2. The "face element", or Whitney element of degree 2 associated with face f = {l, m, n}, here shown on a single tetrahedron with f as one of its faces. The arrows suggest how the vector field wf defined in (3) behaves. At point m, for instance, w f = ∇w n × ∇w l, after (3), and this vector is orthogonal to both ∇w n and ∇w l, hence parallel to the planes that support faces {l, m, k} and {k, m, n}, that is, to their intersection, which is edge {k, m}.

Remark 3.1. So the wes (as well as other Whitney elements) are linearly independent, for h = 0 in (4) implies he = 0 for each e. ◊

The convergence properties of Whitney elements are what one may guess. First, in the already known case of W0, let ϕ be a function of L2grad(D), continuous, and let ϕm = ∑ n ∈ N ϕn wn, where ϕn is the value of ϕ at node n. Now, when the "grain" of the mesh tends to zero in an appropriate sense13 (cf. [Cl]), ϕm converges towards ϕ in L2grad(D). This is the well-known property of P1 finite elements. In the same way, if h is given, regular enough, in IL2rot(D), if he is the circulation of h along edge e, and if one sets hm = ∑ e ∈ E he we, then hm converges to h in IL2rot(D). Same thing for bm = ∑ f ∈ F bf wf, where bf is the flux of b through f, with convergence in IL2div(D). See [Do] for proofs.

Exercise 3.1. Show the volume of a tetrahedron T = {k, l, m, n} is vol(T) = 4 ∫T wn. Show that the area of face {k, l, m} is 3 vol(T) |∇wn|, that the length of edge {k, l} is 6 vol(T) |∇wm × ∇wn|, and that 6 vol(T) det(∇wk, ∇wl, ∇wm) = 1.

Exercise 3.2. Compute ∫T ∇wn · ∇wm. Derive from this the values of ∫D we · we' according to the respective positions of edges e and e'.

Exercise 3.3. Show that field (2) is of the form x → α × x + β in a given tetrahedron, where α and β are vectors of IR3, with α parallel to the edge opposite to {n, m}. Show that field (3) is of the form x → γ x + β (γ ∈ IR).

A second group of properties concerns the continuity, or lack of it, across faces of the mesh. Function wn is continuous. For the field we, it's more involved. Let us consider two tetrahedra with face {l, m, n} in common, and let x be a point of this face. Then the vector field ∇wn is not continuous at x, since wn is not differentiable. But on the other hand, the tangential part of ∇wn, that is, its projection onto face {l, m, n}, changes continuously when one crosses the face from one tetrahedron to its neighbor: indeed, it only depends on the values of wn on this face, whatever the tetrahedron one considers. As this goes the same for ∇wm, and for all faces of the mesh, one may conclude that the tangential part of we is continuous across faces. Similar reasoning shows that the normal part of wf is continuous across faces. As for wT, it is discontinuous.

Thanks to these properties of continuity, W0 is contained in L2grad, W1 in IL2rot, and W2 in IL2div. The Wp are of finite dimension, and can therefore play the role of Galerkin approximation spaces for the latter functional spaces. We knew that as far as W0 is concerned, since the wns are nothing else than the standard P1 Lagrange elements of finite element theory.
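Exercise 3.1 can likewise be verified on a concrete element. The sketch below (a toy check of ours, on the reference tetrahedron with vertices at the origin and the three unit points) uses the fact that one-point centroid quadrature is exact for affine integrands:

```python
import numpy as np

# Reference tetrahedron T = {0, 1, 2, 3}
p = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
vol = abs(np.linalg.det(p[1:] - p[0])) / 6.0            # = 1/6
grads = {0: np.array([-1., -1, -1]), 1: np.array([1., 0, 0]),
         2: np.array([0., 1, 0]), 3: np.array([0., 0, 1])}  # grad wn

# Centroid quadrature is exact for affine integrands, so
# int_T w0 = vol * w0(centroid) = vol/4, i.e. vol(T) = 4 int_T wn:
w0_centroid = 1.0 - p.mean(axis=0).sum()                # w0 at centroid
assert np.isclose(4 * vol * w0_centroid, vol)

# Area of face {1,2,3} versus 3 vol(T) |grad w0|:
area = 0.5 * np.linalg.norm(np.cross(p[2] - p[1], p[3] - p[1]))
assert np.isclose(area, 3 * vol * np.linalg.norm(grads[0]))

# Length of edge {0,1} versus 6 vol(T) |grad w2 x grad w3|:
assert np.isclose(np.linalg.norm(p[1] - p[0]),
                  6 * vol * np.linalg.norm(np.cross(grads[2], grads[3])))

# 6 vol(T) det(grad w1, grad w2, grad w3) = 1:
assert np.isclose(
    6 * vol * np.linalg.det(np.array([grads[1], grads[2], grads[3]])), 1.0)
print("Exercise 3.1 identities verified")
```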

3.2 THE WHITNEY COMPLEX

The properties we have noticed (nature of the degrees of freedom, continuity, convergence) concerned the spaces Wp taken one by one, for different values of p. But there is more: properties of the structure made by all the Wps taken together (the "Whitney complex"), which are even more remarkable. First,

Proposition 3.1. One has the following inclusions:

grad(W0) ⊂ W1, rot(W1) ⊂ W2, div(W2) ⊂ W3.   (5)



13 Including in particular a condition which aims at avoiding "asymptotic flattening" of the simplices. A sufficient condition in this respect is the one due to Zlamal: the ratio of the radii of the circumscribed and inscribed spheres should stay bounded. The necessary condition, of algebraic character, is less easy to describe in general. In dimension 2, it simply means that there are no "too flat" obtuse angles.


Proof. We need some new definitions. The incidence number of a node n in edge e is the integer i(n, e) equal to 1 if n is the end of e, to − 1 if n is its origin, and to 0 in other cases. For instance, i(n, {n, m}) = −1. Similarly, i(e, f) is 0 if f does not contain e, and 1 or −1 according to whether, assuming e = {n, m}, n comes just before or just after m in the circular list of vertices of f: so, for instance, i({n, m}, {m, l, n}) = 1. For i(f, T), we set i({m, n, k}, {k, l, m, n}) = −1 and i({l, m, n}, {k, l, m, n}) = 1, which is enough to exhaust all possibilities 14. We shall call incidence matrices the rectangular matrices G, R, D thus constructed: Ge n = i(n, e), Rf e = i(e, f), DT f = i(f, T). Let us now consider node m. By the very definition of incidence numbers,

above-defined incidence matrix, with N columns and E rows, which thus appears as the discrete analogue to the gradient operator. Similarly (Fig. 3.3), if h = ∑ e ∈ E h e we, then j ≡ rot h = ∑ f ∈ F j f wf, where the j fs are the components of vector j = Rh, of dimension F (the number of faces). Last, if b = ∑ f ∈ F bf wf, one has div b = ∑ T ∈ T ψT wT , where ψ = Db. Matrices R and D, of respective dimensions F × E and T × F (where T is the number of tetrahedra), correspond to the curl and the divergence. Remark the equalities D R = 0 and R G = 0. We'll denote by W p , p = 0 to 3, the spaces IR N, IR E, IR F, IR T , isomorphic to the Cartesian products IR N, IR E , etc. These spaces are isomorphic, for a given m, to the Wp m, but are conceptually distinct. We have thus the following structure:

∑e ∈ E i(m, e) we = ∑n ∈ N (wn ∇wm − wm ∇wn) = (∑n ∈ N wn) ∇wm − wm ∇(∑ n ∈ N wn) ≡ ∇wm


after (1), and thus grad wm ∈ W1, whence the first inclusion by linearity. Similarly (Exercise 3.4), for e = {n, m}, one has rot we = 2 ∇wn × ∇wm = ∑ f ∈ F i(e, f) wf, hence rot we ∈ W2, and div wf = ∑ T ∈ T i(f, T) wT, hence div wf ∈ W3, hence (5). ◊
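The incidence matrices can be assembled and checked on the smallest possible example. In the sketch below (ours; the orientation conventions are one consistent choice, not necessarily those of the footnote on i(f, T)), a single tetrahedron {0, 1, 2, 3} suffices to observe the identities R G = 0 and D R = 0 noted above:

```python
import numpy as np
from itertools import combinations

# Single oriented tetrahedron {0,1,2,3}; edges {n,m} with n < m,
# faces stored with an orientation consistent with the tetrahedron.
edges = list(combinations(range(4), 2))                 # 6 edges
faces = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]    # 4 faces
face_sign = [1, -1, 1, -1]                              # i(f, T)

G = np.zeros((6, 4), int)          # Ge n = i(n, e)
for i, (n, m) in enumerate(edges):
    G[i, n], G[i, m] = -1, 1       # n origin, m end

R = np.zeros((4, 6), int)          # Rf e = i(e, f)
for i, f in enumerate(faces):
    cyc = [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]
    for j, (n, m) in enumerate(edges):
        if (n, m) in cyc: R[i, j] = 1
        elif (m, n) in cyc: R[i, j] = -1

D = np.array([face_sign])          # DT f = i(f, T), one tetrahedron

print((R @ G == 0).all(), (D @ R == 0).all())   # True True
```

These identities are the discrete counterparts of rot grad = 0 and div rot = 0.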



W0 −G→ W1 −R→ W2 −D→ W3,   (6)

where the image of each space by the matrix of the upper row is included in the kernel of the next matrix in the row. But (5), or the diagram (6), is not the whole story. We'll say that a domain of E3 is contractible if it is simply connected with a connected boundary15. Then: Proposition 3.2. If the set-union of all tetrahedra in the mesh is contractible, one has the following identities:


W 1 ∩ ker(rot) = grad W0, W2 ∩ ker(div) = rot W1,


in addition to (5). (In other words, sequence (6) is exact.)

Proof. Let h be an element of W1 such that rot h = 0. Then (D being simply connected), there exists a function ϕ such that h = grad ϕ. The ϕns being the values of ϕ at nodes, let us form k = grad(∑ n ∈ N ϕn wn). Then k ∈ W1 by the first inclusion of Prop. 3.1, and its DoF are those of h by construction, so (Remark 3.1) h = k ∈ grad W0. As for the second equality, take an element b of W2 such that div b = 0. There exists a vector field a such that b = rot a. (Beware, "D simply connected" is not enough for that, and the hypothesis "S connected" cannot be forgotten. For instance, if D = {x ∈ E3 : 1 < |x| < 2}, the field grad(x → 1/|x|) is divergence-free, since the function x → 1/|x| is harmonic, but is not a curl, since its flux across the closed surface {x : |x| = 1} does not vanish.) The aes being the circulations of a along the edges, let us form c = rot(∑ e ∈ E ae we). Then, by Prop. 3.1, c ∈ W2, and its DoF are those of b by construction, hence b ≡ c ∈ rot W1. ◊
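Proposition 3.2 can also be observed numerically: a single tetrahedron is a contractible "mesh", and comparing dimensions there shows ker(R) = im(G). (A sketch of ours; since R G = 0 already gives im G ⊆ ker R, equality of the dimensions is enough.)

```python
import numpy as np
from itertools import combinations

# One tetrahedron is a contractible mesh: check ker(R) = im(G)
# by comparing dimensions.
edges = list(combinations(range(4), 2))
faces = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
G = np.zeros((6, 4)); R = np.zeros((4, 6))
for i, (n, m) in enumerate(edges):
    G[i, n], G[i, m] = -1, 1
for i, f in enumerate(faces):
    cyc = [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]
    for j, (n, m) in enumerate(edges):
        R[i, j] = 1 if (n, m) in cyc else (-1 if (m, n) in cyc else 0)

rank = np.linalg.matrix_rank
dim_ker_R = 6 - rank(R)      # dim of discrete "curl-free" edge fields
print(rank(G), dim_ker_R)    # 3 3 -> ker R = im G: the sequence is exact
```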


FIGURE 3.3. Computing jf (flux through face f of field j = rot h) from the DoF of h (hi is the circulation of h along edge ei; the orientation of the eis with respect to f is indicated by the arrows). The terms of the incidence matrix R are here Rfe1 = 1, Rfe2 = Rfe3 = −1.

This result has the following important corollary. If ϕ = ∑ n ∈ N ϕn wn is an element of W0, and if one sets h = grad ϕ, this field h can also be expressed as in (4), the edge DoF being h{n, m} = ϕm − ϕn. Let h be the vector (of dimension E, the number of edges) formed by taking the hes, and ϕ the one of the ϕns (of dimension N, the number of nodes). Then h = Gϕ, where G is the

15 Because it can then be contracted onto one of its points by continuous deformation. "Connected" means: in one piece. "Simply connected" means that any closed path can be contracted to a point by continuous deformation. (This is not the case, for instance, of the inside of a torus, which is connected but not simply connected.)


14 The idea is: if edges {i, j}, {i, k}, {i, l} of T = {i, j, k, l} form a direct frame, faces f for which the vector {i, k} × {k, j} is outgoing with respect to T, like for example {i, k, j}, verify i(f, T) = 1. It's a matter of relative orientation of two simplices, one of which is a face of the other.


3.3 METRIC PROPERTIES OF THE COMPLEX

What precedes was of combinatorial character. Matrices G, R, D encompass all the knowledge on the topology of the mesh, but say nothing of metric properties: lengths, angles, areas, etc. To take these into account, we introduce the following "mass matrices" (thus named because of a mechanical analogy). Let α be a function over D, strictly positive. (For our needs here, it will be one of the coefficients ε, µ, etc., or its reciprocal.) We denote by Md(α), d = 0, 1, 2, 3, the square matrices of size N × N, E × E, F × F, T × T, whose entries are

(Md(α))ss' = ∫D α ws · ws'   if d = 1 or 2,
(Md(α))ss' = ∫D α ws ws'    if d = 0 or 3,

where s and s' are two simplices of dimension d. Note that in the first line, the coefficient α can be replaced by a symmetric tensor of Cartesian components αij: (Md(α))ss' = ∫D ∑ i, j = 1, 2, 3 αij wis wjs'. This allows the consideration of anisotropic materials.

Exercise 3.5. Compute all the terms of the Md when α = 1 (cf. Exer. 3.2).
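As a small illustration of these mass matrices (our computation, using the classical exact integration formula ∫T wi wj = vol(T)(1 + δij)/20 for P1 barycentric functions), here is M0 with α = 1 on a single reference tetrahedron; by the partition of unity (1), its row sums must equal ∫T wi = vol(T)/4:

```python
import numpy as np

# M0 on a single tetrahedron, alpha = 1, via the classical exact formula
# int_T wi wj = vol(T) * (1 + (i == j)) / 20 for P1 barycentric functions.
vol = 1.0 / 6.0                        # reference tetrahedron
M0 = np.array([[vol * (1 + (i == j)) / 20 for j in range(4)]
               for i in range(4)])

# Row sums must equal int_T wi = vol/4, by the partition of unity (1):
print(np.allclose(M0.sum(axis=1), vol / 4))   # True
```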

BIBLIOGRAPHICAL COMMENTS

Whitney elements were rediscovered by numerical analysts starting from 1975 in relation with the search for "mixed" finite elements. In particular (cf. Exer. 3.3), the edge element (2) appeared in [Nd], where it was described in terms of its "shape functions", x → α(T) × x + β(T) in the tetrahedron T, where α(T) and β(T) are three-dimensional vectors. (There are thus 6 degrees of freedom per tetrahedron, in linear invertible correspondence with the 6 edge circulations.) Similarly [Nd], the face element's shape functions are x → γ(T) x + β(T), with γ(T) ∈ IR. The obvious advantage of a representation like (2) or (3) over shape functions is to explicitly provide a basis of the space W1 or W2. The presentation by shape functions, however, seems preferable for vector-valued elements of polynomial degree higher than one (cf. [Ne]), whose description in Whitney's style, with basis functions clearly related to geometrical elements of the simplicial complex, is still an open problem, just as, for that matter, the classification of the "tangentially continuous" vectorial elements proposed up to now by various authors [Cr, MH, vW, WC, ...]. Face elements were also independently rediscovered, in relation with diffraction problems [SW]. (The latter authors have the face flux density equal to 1, instead of the total flux.)



4.1.1 A "constrained linear system"


For this discussion, let us denote by U the space IRE of vectors u = {ue : e ∈ E} with real components. It is isomorphic to IRE (it's a copy of the space W1 of Chap. 3) and also to W1m by the correspondence u = ∑ e ∈ E ue we. The ues are the edge DoF of field u. We shall denote by (u, u') ≡ ∑ e ∈ E ue u'e the natural scalar product on U. All similar scalar products in finite dimension to be encountered in the sequel will similarly be denoted by parentheses. Let us first examine the constraints imposed on u by u belonging to UIm. The condition on the current flow through the cut Σ (cf. Fig. 4.1), that is ∫Σ n · j = I, generates an affine relation between the edge DoF. For, by Stokes' theorem, one must have ∫∂Σ τ · u = I, which becomes, when expressed in terms of edge DoF,

4 Conduction: vector potential approach

4.1 APPROXIMATION BY EDGE ELEMENTS

Let us now come back to the problem of Chap. 2 (Fig. 4.1), while keeping the same notations. In particular, U = {u ∈ IL2rot(D) : n · rot u = 0 on Sj} and UI = {u ∈ U : ∫∂Σ τ · u = I}. Since W1m, as we saw in the previous Chapter, is the natural approximation space for IL2rot, just as W0m was for L2grad, a discrete version of the vector potential formulation (16) and (17) can be derived from the scalar potential one by analogy: it's

find u ∈ UIm such that



where UIm = UI ∩ W1m and U0m = U0 ∩ W1m, or in an equivalent way,










UI = {u ∈ U : Lu = Ik}

the affine subspace of vectors of DoF which satisfy these conditions, and U0 the parallel vector subspace obtained by replacing I by 0. One will remark that the relation u' ∈ U0m translates as Lu' = 0, where u' is the DoF-vector of u', that is, as u' ∈ U0. We may now rewrite (1) and (2) in terms of degrees of freedom. Thanks to the "incidence" matrix R and the "mass" matrix M2 of Chap. 3, the problem (1) can be reformulated as find u ∈ UI such that




∑ e ∈ E i(e, f) ue = 0 ∀ f ∈ F(Sj),   (4)

where F(Sj) denotes the set of these Fj faces, Fj < F. So all in all, these conditions write Lu = Ik, where L is a matrix with E columns and Fj + 1 rows, and k a vector of dimension Fj + 1, all components of which but one (the one corresponding to (3)) are null. We shall denote by

find u ∈ UIm such that Q(u) ≤ Q(u') ∀ u' ∈ UIm.   (2)


∑ e ∈ E i(e, ∂Σ) ue = I,   (3)

where the i(e, ∂Σ), as in Chap. 3, are incidence numbers: + 1 or − 1, depending on relative orientation, when e belongs to ∂ Σ, and 0 otherwise. Likewise, the boundary condition on Sj, that is n · rot u = 0, generates a linear relation between the edge DoF for each face contained in Sj, hence

∫D σ−1 rot u · rot u' = 0 ∀ u' ∈ U0m,   (1)




FIGURE 4.1. Fig. 2.1 (right) of Chap. 2.

(M2(σ−1) Ru, Ru') = 0 ∀ u' ∈ U0,   (6)

and (2), with Q(u) = (M2(σ−1) Ru, Ru), as

Clearly, (1) is a linear system, with the circulations ue of u along the edges for unknowns, and matrix entries of the form ∫D σ−1 rot we · rot we'. But this time, we can no longer ignore the fact that these edge circulations are not independent unknowns.
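The non-independence announced here can be made concrete. In the toy computation below (ours: a single tetrahedron, with M2 replaced by the identity, standing in for the real mass matrix), the stiffness matrix Rt M2 R annihilates every discrete gradient Gϕ, since R G = 0 — so the system matrix is singular:

```python
import numpy as np
from itertools import combinations

# Incidence matrices of one tetrahedron (cf. Chap. 3); M2 = identity here.
edges = list(combinations(range(4), 2))
faces = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
G = np.zeros((6, 4)); R = np.zeros((4, 6))
for i, (n, m) in enumerate(edges):
    G[i, n], G[i, m] = -1, 1
for i, f in enumerate(faces):
    cyc = [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]
    for j, (n, m) in enumerate(edges):
        R[i, j] = 1 if (n, m) in cyc else (-1 if (m, n) in cyc else 0)

K = R.T @ R                          # stiffness Rt M2 R with M2 = I
phi = np.array([0., 1., 2., 3.])     # arbitrary nodal potentials
print(np.allclose(K @ (G @ phi), 0)) # True: gradients lie in ker K
```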


find u ∈ UI such that Q(u) ≤ Q(u') ∀ u' ∈ UI.   (7)

Problem (7) is a quadratic optimization problem with linear constraints. As for (6), one may rewrite it again, according to the principle of Lagrange

multipliers, under the following "enlarged" matrix form:

( Rt M2(σ−1) R   Lt ) ( u )   ( 0  )
( L              0  ) ( v ) = ( Ik ),   (8)

an alternative (whose practical interest remains to be assessed) to the classical assembly technique. ◊




where the unknown multiplier v is a vector of dimension Fj + 1. Indeed, (6) is equivalent to (Rt MRu, u') = 0 ∀ u' ∈ U0. But the orthogonal of U0 = ker( L) in U is cod(Lt), hence the existence of a vector v (not necessarily unique!) such that Rt MRu = − Ltv.
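A minimal numerical sketch of such an enlarged system (toy matrices of ours, not the problem's actual Rt M2(σ−1) R and L): minimize (Au, u)/2 under Lu = c by solving the saddle-point form with a least-squares solver, which tolerates the singular block:

```python
import numpy as np

# Toy "constrained linear system": A singular positive-semidefinite,
# one linear constraint; the data are illustrative only.
A = np.diag([0.0, 1.0, 4.0])          # singular: ker A = span(e0)
L = np.array([[0.0, 1.0, 1.0]])       # constraint  u1 + u2 = c
c = np.array([3.0])

# Enlarged (saddle-point) system [[A, Lt], [L, 0]] [u; v] = [0; c]:
kkt = np.block([[A, L.T], [L, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(3), c])
sol, *_ = np.linalg.lstsq(kkt, rhs, rcond=None)   # kkt may be singular
u, v = sol[:3], sol[3:]

print(np.allclose(L @ u, c))              # constraint satisfied: True
print(np.allclose(A @ u + L.T @ v, 0))    # stationarity: True
```

As in the text, u itself is not unique (its kernel component is arbitrary), but the "physical" quantities derived from it are.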

Let us set V0m = {u = ∑ e ∈ E ue we ∈ W1m : ue = 0 ∀ e ∈ E(Sj)} (note that V0m ⊂ U0m), and let us build a special field uI, belonging to UIm, as we now describe. We then shall set VIm = uI + V0m. Let us begin by introducing a "cut" in the mesh of the surface Sj, that is a path c homologous to the γ of Fig. 4.1, but drawn on Sj (Fig. 4.2), and made of edges of the mesh. This allows one to build a function ϕ, defined on Sj − c, continuous, piecewise linear on the triangulation of Sj induced by m, equal to 0 left of the cut and to 1 right of it: for this, just attribute an arbitrary DoF value to all nodes in N(Sj − c), and two such values to each node of c, 0 on the left and 1 on the right (Fig. 4.2). Let ϕn be these DoF values. Now, to each edge e = {m, n} of E(Sj − c), let us assign the value ue = (ϕn − ϕm) I, and to each edge of c the value 0. Let us finally assign to the other edges of E arbitrary degrees of freedom ue, and form uI = ∑ e ∈ E ue we. This is the vector field of UIm we wanted.

Exercise 4.1. What is the physical interpretation of the vfs?

"Constrained linear systems" such as (8), often called, according to a dubious terminology, "mixed systems", are common encounters in numerical analysis. One could even argue that modelling situations where phenomena are linear and stationary almost always leads to constrained linear systems. However, classical techniques of solution of linear systems (cf., e.g., [Ci], [GL]) concern regular systems, whereas in (8), if Ru is indeed unique (and this is what counts), neither u nor v are. How come this difficulty did not occur in Chap. 2, when we studied the analog system in ψ? Actually, pb. (10) or (11) of Chap. 2 does possess a matrix form similar to (8), to wit

( Gt M1(σ) G   Lt ) ( ψ )   ( 0  )
( L            0  ) ( v ) = ( Vk ),   (9)

where the equality Lψ = Vk expresses the constraints on nodal values ψn: these must be equal to some common value for all nodes belonging to each of the connected components of Se (uniform potential over each electrode) and the difference between these two potentials must be equal to V. But one immediately transforms (9) into a regular system by selecting as independent unknowns the variables ψn attributed to nodes of N − N(Se), while attributing to the other DoF the values 0 or V depending on whether n ∈ N(Se0) or n ∈ N(Se1).
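The elimination just described can be sketched on a stand-in problem (our own: a 1D chain of P1 elements with unit conductivity, the two end nodes playing the role of the electrodes at potentials 0 and V):

```python
import numpy as np

# Stand-in for (9): 1D chain of N nodes, P1 stiffness (sigma = 1),
# potential fixed to 0 at the first node and to V at the last one.
N, V = 6, 2.0
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[0, 0] = K[-1, -1] = 1.0              # ends of the chain

fixed = {0: 0.0, N - 1: V}             # electrode DoF, no longer unknowns
free = [n for n in range(N) if n not in fixed]

# Regular system on the free DoF: known columns move to the right-hand side.
rhs = -sum(K[np.ix_(free, [n])].ravel() * val for n, val in fixed.items())
psi_free = np.linalg.solve(K[np.ix_(free, free)], rhs)

psi = np.zeros(N)
psi[list(fixed)] = list(fixed.values())
psi[free] = psi_free
print(psi)   # linear ramp from 0 to V, as expected for uniform sigma
```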



Exercise 4.2. Describe the matrix L and the vector k of (9).

In other words, one was able, in the case of (9), to select a subset of independent DoF from which to express all others. This is a general fact, for all constrained linear systems. But having been able to do it without any difficulty, actually without even thinking about it, in the case of the ψ equation, is not what should generally be expected: as we shall see, the reduction of (8) to a regular linear system calls for care, and above all, for adequate finite elements.

Remark 4.1. The product Gt M1(σ) G, or Rt M2(σ−1) R, is what is called the "stiffness matrix" in the finite element method, and its computation is "assembly". The possibility of separately computing G, M1, etc., and multiplying them, offers



FIGURE 4.2. Construction of a field belonging to W1 ∩ UI from the induced mesh of the surface Sj and a "cut" of this surface.

Indeed, on the one hand this field is in W1. On the other hand, its tangential part uIS is equal to gradS ϕ, by construction, on Sj − c (please check this, which is not supposed to be obvious). So rotS uIS = 0 on Sj, which is one of the conditions imposed on elements of UI (cf. (4)). The other condition, that the circulation of uI along ∂Σ be equal to I, is satisfied by having arranged for the

jump16 of ϕ across c to be 1. This was the point of doubling the degrees of freedom along c as we did.

Let us thus set VIm = {uI + u' : u' ∈ V0m}, that is the set of fields of the form uI + u' where u' spans V0m. As one will easily check, rot VIm = rot UIm. Since Q in (2) depends on u only via its curl, problem (2) amounts to

4.2 WHY NOT CLASSICAL LAGRANGIAN ELEMENTS?

For, at first glance, edge elements for u don't seem mandatory: according to the variational principle (17) of Chap. 2, any approximation method consisting in finding u ∈ UIm such that Q(u) ≤ Q(u') ∀ u' ∈ UIm, whatever the approximation space UIm, as long as it is contained in UI, will yield an upper bound for the resistance. And the approximation method which seems a priori most natural consists in representing each Cartesian component of u as a linear combination of standard P1 finite elements18, whence

find u ∈ VIm such that Q(u) ≤ Q(u') ∀ u' ∈ VIm,


a quadratic optimization problem equivalent to the linear system,

find u ∈ VIm such that ∫D σ−1 rot u · rot u' = 0 ∀ u' ∈ V0m.


This time, the nature of the "genuine" unknowns in (11) is clear: they are edge DoF of all edges not belonging to the lateral surface S j, and they are indeed independent. In terms of these degrees of freedom, pb. (11) amounts to find v ∈ VI such that

where each uν is a vectorial degree of freedom, localized at node ν, that is in fact three scalar DoF. Let us call IP 1(D), or simply IP 1, the vector space of all u of the form (12). One then takes for U Im some affine subspace of IP 1, in which one looks for a minimizer of the function u → Q(u). We'll successively find three drawbacks, more or less serious, to this approach.

(M2(σ−1) R(v + uI), Rv') = 0 ∀ v' ∈ V,


u = ∑ ν ∈ N uν wν,

where V denotes the subspace of U characterized by ue = 0 ∀ e ∈ E(Sj), and uI the vector of edge DoF of uI.

Remark 4.2. The degree of arbitrariness in the construction of uI is just what is needed for two fields uI to be equal up to a gradient, if both are constructed according to this recipe. This is why rot VIm = rot UIm. This "just what is needed" is loose talk for something deeper, that we'll not examine in full generality: the way exactness properties of the Whitney sequence pass to the spaces of traces of these elements on the boundary. ◊

Remark 4.3. This system is of the form Av = f, where A is the principal submatrix17 of Rt M2(σ−1) R corresponding to edges of E − E(Sj), and this is of course a singular matrix. This does not prevent Rv from being unique, and is harmless from the numerical point of view (cf. later, § 4.2.3) as long as f belongs to the subspace {Av : v ∈ V} (the range, or codomain, of A). ◊

So, Chap. 2 being taken into account, we have obtained two finite element approximations of the conduction problem, complementary in the sense of providing bounds for the resistance from both sides. The first method is classical; the second one, in vector potential, makes use of edge elements. But are the latter necessary? Couldn't it be possible to reap the benefits of complementarity while solving for u with ordinary elements?

4.2.1 Boundary conditions are not easily enforced

First, some restrictions should be imposed on the DoF in (12) if we want n · rot u = 0 to hold on Sj, as well as ∫Σ n · rot u = I. But on a surface equipped with a normal field n, we have n · rot u = ∑ ν ∈ N n · rot(uν wν) = ∑ ν ∈ N n · ∇wν × uν ≡ ∑ ν ∈ N n × ∇wν · uν = ∑ ν ∈ N(S) n × ∇wν · uν, since n × ∇wν = 0 for a node ν not belonging to S (Fig. 4.3). The condition to be checked is thus

∑ ν ∈ N(S) n × ∇wν · uν = 0 on Sj,   (13)

that is, a linear relation between components of the uνs for each triangular face in Sj. The flux condition through Σ yields a similar relation. Can one transform these into simple relations, that would bear on one scalar DoF at a time, as was the case above, where we simply set ue = ... (some value) for each edge e of S j, hence as many linearly independent unknowns as edges contained in E − E( Sj)? The method used above consisted in introducing a space of fields with vanishing tangential part (the one denoted by V0m) and


16 The "jump", or discontinuity of the first kind, across a surface (or, if one deals with dimension 2 as is here the case, a line) is the difference between the limit value "from the left" and that "from the right", with respect to some agreed crossing direction (for instance, in the case of closed surfaces, from inside to outside).

18 Pk is the space of continuous functions whose restrictions to mesh simplices are polynomials of degree k or less.


17 Just as with Remark 4.1, this does not mean that A should be computed this way.


to construct a field u I the tangential part of which on S was locally a gradient. So let us look for an equivalent to this method in the present case.

(most of these products are identically zero, unless n = m, or n and m are joined by some edge). Thus ∇ϕ = ∑ n, m ∈ N αnm ∇(wn wm), and since the products wn wm are not differentiable, the normal component of this field undergoes a nonzero jump across each face (this jump is linear, as a function of coordinates). To demand that all these jumps be 0 is a very strong constraint on the αs (in practice, only the functions ϕ which are globally quadratic over the domain will satisfy it), so the vector space ker(rot ; IP1) is of very small dimension, and even, most often, of dimension 0 when boundary conditions are taken into consideration. So, except for ad hoc meshes, one must expect that



ker(rot ; IP1) = {0}.   (14)

This contrasts with the analogous property for edge elements, which is (cf. Chap. 3) ker(rot ; W1) ⊃ grad W0. To conclude this discussion, nodal vectorial elements, which generate IP1, being continuous in their three components, are too stiff at joints (faces) to adapt well to the representation of gradients. Later, we'll see other consequences of this defect.

To sum up, linear systems analogous to (1) are difficult to deal with when they come from IP1-discretizations, because of the impossibility of easily selecting a maximal set of independent degrees of freedom.

FIGURE 4.3. Unless node ν belongs to the surface bearing n, all the fields n × ∇wν vanish: either no face opposite ν belongs to S (in which case the support of ∇wν does not encounter S), or one such face is in S (in which case n and ∇wν are parallel).

First, as regards V0m, a field in this space should satisfy (13), but apart from the case where S is parallel to a coordinate plane, this does not translate easily into a set of linear relations between the DoF. For what to do with (13)? We may understand this relation as ∑ ν ∈ N(S) n(x) × uν · ∇wν(x) = 0 for all points x of Sj inside some face, and integrate the left-hand side over each face, hence one linear relation for each face19. We may also be satisfied with an approximate version of (13), under the form n × uν = 0 for all ν in Sj, but what is n, then? What is the normal at a node in the case of a polyhedral mesh? (To define it as some average of normals to neighboring faces is at best equivalent to the previous procedure, and may be much worse.) With both methods, anyway, the relations thus obtained would not consist in the vanishing of some DoF, so we still would have a constrained linear system on our hands.

Worse, all this hard work would be to no avail, because the field uI in IP1 the tangential part of which would be a gradient cannot be constructed: very few elements of IP1(S) (or of IP1(D) as well) are gradients (except for very special meshes, designed precisely to this effect). Indeed, let u ∈ IP1 be such that rot u = 0, which we denote as u ∈ ker(rot ; IP1). Then there exists, locally at least, a continuous function ϕ such that u = grad ϕ. Since u is piecewise linear on the mesh m and continuous (for all its scalar components), ϕ is piecewise quadratic and differentiable, two hardly compatible conditions. The space of piecewise quadratic continuous functions on m, denoted by P2, is generated by the products wn wm, where n and m span N

4.2.2 On the same mesh, accuracy is poorer

Let us nevertheless assume that some affine subspace of IP1 has been defined by taking boundary conditions into account, and let us call it UIm(IP1) to mark the distinction with the one based on edge elements, which will be UIm(W1). Then rot UIm(IP1) ⊂ rot UIm(W1), with in general a strict inclusion. Consequently, the minimization of Q, being done over a smaller space, yields a higher value in the case of nodal vectorial elements, and thus a looser bilateral estimate than by using edge elements on the same mesh. To verify this, the following observation is enough:

Proposition 4.1. Given a mesh m, any field u ∈ IP1 is the sum of a field of W1 and of the gradient of some function in P2, that is to say, IP1 ⊂ W1 + grad P2.

Proof. Let u ∈ IP1 be given, let ue = ∫e τ · u, with e ∈ E, be its edge circulations, and let v = ∑ e ∈ E ue we be the field contained in W1 which has these circulations as DoF. Both u and v being linear in x, the field rot(u − v) is


19 The merit of this method is that it generalizes to "curved" tetrahedra.


constant in each tetrahedron. As its fluxes through all faces vanish, by construction (cf. Fig. 3.3, Chap. 3), it must be null. So u = v + ∇ϕ, where ϕ is a function such that ∇ϕ is piecewise linear, which means ϕ ∈ P2. ◊

kernel of A. In that case, A = diag{0, ..., 0, λ min, ..., λ max}, with 0 < λ min < ... < λ max, the number of zeroes being d = dim(ker( A) ). Clearly, the first d components of un evolve in arithmetic progression, whereas the others behave like this:

Exercise 4.3. Check that IP 1 is not contained in W1. (Hint: compute the divergence of u = ∑ n ∈ N un wn. )

uni = (1 − ρλi)n u0i + [1 − (1 − ρλi)n] fi/λi.

This shows, after a classical computation ([Ci, GL, ...], and Fig. 4.4), that the fastest convergence is achieved when 1 − ρλmin = −1 + ρλmax, that is, ρ = 2/(λmin + λmax). A measure of the speed of convergence is then the number

Exercise 4.4. Show that W1 ⊂ IP1 + grad P2 doesn't hold. (Hint: compare dimensions.)

As an immediate corollary, we have rot IP1 ⊂ rot W1, whence the announced inclusion, inasmuch as boundary conditions on Sj have been satisfied from the onset by the definition of UIm(IP1). As a rule, this inclusion is strict, for the dimension of rot IP1 doesn't exceed 3N (three DoF per vertex), whereas that of rot W1 is (approximately) the dimension of the quotient space W1/grad(W0), about E − N, that is 5 to 6 N, depending on the mesh, as we shall see in a moment.

r(ρ, A) = (λmax − λmin)/(λmax + λmin) ≈ 1 − 2/κ(A), where κ(A) = λmax/λmin. So this is the ratio that governs the convergence rate. (As for the first d "floating" components, corresponding to the part of u in the kernel of A, which of course cannot be separated from the rest, their linear drift does no harm, provided some elementary precautions are taken against overflow. For the DoF of interest are not those of the vector potential, but those of the current density field, that is, the components of vector Ru, where R is the incidence matrix of edges on faces of Chap. 3. So never mind the behavior of the part of u belonging to the kernel of R, if Run does converge, and this convergence is indeed what is observed, not only with algorithm (15), but with more efficient ones based on the conjugate gradient approach [Ba].)
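The behavior described in this parenthesis is easy to reproduce (a toy computation of ours, with A diagonal as in the argument above):

```python
import numpy as np

# Successive approximations (15) on a diagonal model problem: A is
# singular, with nonzero spectrum {1, 4}; the data are illustrative.
lmin, lmax = 1.0, 4.0
A = np.diag([0.0, lmin, lmax])
f = np.array([0.0, 1.0, 2.0])            # f in the range of A
exact = np.array([0.0, f[1] / lmin, f[2] / lmax])

rho = 2.0 / (lmin + lmax)                # optimal step, rho = 0.4
u = np.array([1.0, 0.0, 0.0])            # u0 has a component in ker(A)
for _ in range(20):
    u = u - rho * (A @ u - f)            # error shrinks by 0.6 per step

# A u converges although the kernel component of u never moves here:
print(np.allclose(A @ u, A @ exact, atol=1e-3), u[0])   # True 1.0
```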

4.2.3 The condition number of the final matrix is larger

Both approximation methods (W1 and IP1) yield a linear system of the form Au = f, where u is the vector of independent DoF, with A = AW or AP according to the method. Matrix AW [resp. AP] is a principal submatrix of the one obtained by computing the integrals ∫D σ−1 rot we · rot we', e and e' ∈ E [resp. the integrals ∫D σ−1 rot(ei wn) · rot(ej wm), n and m spanning N, where the ei, i = 1, 2, 3, are the basis vectors of the underlying Cartesian frame]. The importance of the condition number of the matrix A, that is, the ratio of its extreme eigenvalues, as regards the resolution of Au = f, is a well-known fact. This ratio determines for a good part the number of iterations in the case of an iterative solution method, as well as the numerical accuracy in the case of a direct method. It happens that the condition number of the matrix constructed by the IP1 method is larger than that of the matrix associated with W1. Before seeing why this is so, let us parry the obvious objection that matrix A is singular in the case of the W1 method (non-uniqueness, already noticed, of u in (1)), which means its lowest eigenvalue is 0. Actually, the number which is meaningful20 as far as conditioning goes is the ratio of the highest eigenvalue to the lowest nonzero one. This can be seen by reasoning on the simplest of all iterative methods, successive approximations, where one creates the sequence of vectors

FIGURE 4.4. Conditioning and convergence speed.

Let us therefore return to the comparison between the condition numbers, as defined by the ratios of extreme nonzero eigenvalues. Both matrices AW and AP approximate the operator rot( σ−1 rot) (more precisely said, the operator associated with the corresponding boundary value problem). When the mesh is refined, the asymptotic behavior of their higher eigenvalues is the same. So we must compare the first nonzero eigenvalues, λ 1(AW) and λ 1(AP ). As we noticed earlier in (14), matrix AP is in general

un + 1 = un − ρ(Aun − f) ,



λ min

starting from some vector u0. To study (15), one may always select a basis in which A is diagonal, the first basis vectors being those that generate the 20

One may call it effective conditioning.


regular. But zero is an eigenvalue of rot(σ−1 rot), and it is a general fact that elements of the spectrum are approximated (when the mesh is refined) by those of the discrete spectrum, which does not contain 0. Therefore, when m → 0, λ1(AP) tends to 0. But as far as AW is concerned, the situation is totally different: this matrix is singular, because its kernel includes the vectors u = Gϕ (with arbitrary ϕn, except that ϕn = 0 for the n belonging to Sj), that is, those corresponding to gradients of functions contained in the space previously denoted by Ψ0 (Chap. 2). There is thus no need for the eigenvalue 0 to be approached "from the right", as was the case for AP. And actually, limm → 0 λ1(AW) > 0. This can be seen by noting that (according to Rayleigh quotient theory)

λ1(AW) = inf{(AW u, u) : |u| = 1, (u, v) = 0 ∀ v ∈ ker(AW)}.

But this requested orthogonality to the kernel translates, in terms of associated vector fields, to ∫D u · grad ϕ' = 0 ∀ ϕ' ∈ Ψ0 ∩ W0m. Let u1(m) be the field the DoF of which form the eigenvector u1 corresponding to λ1, and u1 its limit when m → 0. By the approximation property of W0m in L2grad, one has ∫D u1 · grad ϕ' = 0 ∀ ϕ' ∈ Ψ0. So u1 is a divergence-free field. Thus, limm → 0 λ1(AW) = inf{∫D σ−1 |rot u|2 : u ∈ U0, div u = 0, ∫D |u|2 = 1}, and this Rayleigh quotient is strictly positive indeed. We may then conclude that κ(AW)/κ(AP) tends to 0: effective conditioning is better with Whitney elements.

Remark 4.3. The phenomenon we just described about AP, to wit, the emergence for a given mesh of small positive eigenvalues "in the process of converging to 0", so to speak, whereas other eigenvalues already reached values close to their eventual limits, is called "spectral pollution" [GR]. There is a risk of such pollution each time one searches for the spectrum of an operator of which 0 is an eigenvalue, and it is necessary, in order to avoid it, that the kernel of this operator be "well approximated", from inside, by elements of the approximation space: ker(rot) should contain enough gradients. This rule (which seems to have been independently formulated by various authors) is given, more or less explicitly, in [Ha, HK, Ki, WC]. Its automatic enforcement, due to the inclusion grad W0 ⊂ W1, when one makes use of Whitney elements, is a decisive point in their favor. ◊

4.3 CONCLUSION

Now, is the case closed? Not before the Defendants have spoken their piece. Deteriorated conditioning? Yes, but asymptotically so, when m → 0, and who effectively goes to the limit in practice? One works with the finest mesh one can afford, and the difference in conditioning may well be of no noticeable consequence for this mesh. The convenience in setting up boundary conditions? All right, but what of the precious possibility of reusing existing software, written for nodal finite elements? And as for the loss of accuracy (which is not that systematic, cf. [BR]), is it not largely compensated by being able to use more DoF, with the same computer resources? After all, are not the 3N degrees of freedom (roughly counting) of the IP1 method much less numerous, for the same mesh, than the E degrees of freedom of the W1 method? Let us count. Suppose, for definiteness, T ≈ 5N. Then F ≈ 10N (there are four faces to a tetrahedron, and each face is shared by two of them), and the Euler-Poincaré formula, that is N − E + F − T = χ, then gives E ≈ 6N. Thus indeed, one may expect about twice as many DoF with edge elements (E ≈ 6N) as with the nodal method (≈ 3N). These figures, which ignore boundary conditions and even the very presence of a boundary, are very crude (see [Ko] for precise counting). The indicated ratios are only reached at the limit, and may be very different for small meshes. But this does not alter our conclusion: Whitney elements on tetrahedra generate more degrees of freedom than nodal ones. But now, does that really matter? The most significant figure, when it comes to storage and CPU-time requirements, is not the number of unknowns, but that of nonzero entries in matrix A. And this number eventually proves smaller, on the same mesh, for AW than for AP, contrary to what one might have feared.

Indeed, let us count the average number of nonzero terms on a row of AP: this is equal to the number of DoF that may "interact" with a given nodal DoF, that is, the number of pairs {m, j} such that ∫D σ−1 rot(ei wn) · rot(ej wm) ≠ 0, for a given {n, i} pair (recall the ei are the basis vectors in E3). As rot(ei wn) = − ei × grad wn, this term vanishes if the supports of wn and wm don't intersect, so two DoF interact (barring accidental cancellations) when they are borne by the same node, or by the two end-nodes of the same edge. Since each node connects to 12 neighbors on average, there are 3 × 13 − 1 = 38 extra-diagonal nonzero terms on a row of AP, on average. As for AW, the number of extra-diagonal nonzero terms on the row of edge e is the number of edges ε such that ∫D σ−1 rot we · rot wε ≠ 0, that is, of edges belonging to the same tetrahedron as e. Edge e is common to 5 tetrahedra, on average (because it belongs to 5 faces, since F ≈ 5E/3, each face having 3 edges). Its neighbors are thus 10 edges sharing a node with it and 5 opposite edges in their common tetrahedra (Fig. 4.5). Thus, there are 15 extra-diagonal nonzero terms in each row, on average, which makes a total of 15 × 6N = 90N terms of this kind in AW, versus 38 × 3N = 114N in AP, a neat advantage to edge elements.
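The bookkeeping above is simple enough to tabulate in a few lines; a quick sketch of the arithmetic (the count N and the asymptotic ratios T ≈ 5N, F ≈ 10N are the ones assumed in the text, with χ neglected):

```python
# Asymptotic counts on a large tetrahedral mesh with N nodes, as in the text:
# T ~ 5N tetrahedra, hence F ~ 10N faces (4 per tet, each shared by 2), and
# the Euler-Poincare formula N - E + F - T = chi (chi negligible) gives E.
N = 1_000_000                 # illustrative number of nodes
T = 5 * N
F = 10 * N
E = N + F - T                 # = 6N edges

nodal_dof = 3 * N             # IP1: three Cartesian components per node
edge_dof = E                  # W1: one circulation per edge

nodal_nnz = 38 * nodal_dof    # ~38 off-diagonal nonzeros per row of AP
edge_nnz = 15 * edge_dof      # ~15 off-diagonal nonzeros per row of AW

print(edge_dof / nodal_dof)   # 2.0 : twice as many unknowns with edges...
print(edge_nnz, nodal_nnz)    # 90000000 114000000 : ...yet a sparser matrix
```

So the edge-element matrix, despite carrying twice the unknowns, holds about 90N off-diagonal entries against 114N for the nodal one.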

FIGURE 4.5. If there are 5 tetrahedra around edge e, this edge interacts with 10 adjacent edges (like e') and 5 opposite edges (like e'').

Enough with theory. The first numerical investigations about all this were done by M. Barton for his PhD dissertation [Ba] (see a short account in [BC]), about magnetostatics, a problem which is formally the same as (10)-(11). His conclusions were as follows:

"When the novel use of tangentially continuous edge-elements for the representation of magnetic vector potential was first undertaken, there was reason to believe it would result in an interesting new way of computing magnetostatic field distributions. There was only hope that it would result in a significant improvement in the state-of-the-art for such computations. As it has turned out, however, the new algorithm has significantly out-performed the classical technique in every test posed. The use of elements possessing only tangential continuity of the magnetic vector potential allows a great many more degrees of freedom to be employed for a given mesh as compared to the classical formulation; and these degrees of freedom result in a global coefficient matrix no larger than that obtained from the smaller number of degrees of freedom of the other method. (...) It has been demonstrated that the conjugate gradient method for solving sets of linear equations is well-defined and convergent for symmetric but underdetermined sets of equations such as those generated by the new algorithm. As predicted by this conclusion, the linear equations have been successfully solved for all test problems, and the new method has required significantly fewer iterations to converge in almost all cases than the classical algorithm."

One may also quote this [PF], about scattering computations: "Experience with the 3D node-based code has been much more discouraging: many problems of moderate rank failed to converge within N iterations (...). The Whitney elements were the only formulation displaying robust convergence with diagonal PBCG iterative solution." (They refer to Jacobs' "preconditioned bi-conjugate gradient" algorithm [Ja].)

The problem addressed in Remark 4.3 is crucial, in practice, when there is a distributed source-term, as is the case in magnetostatics. If the discretization of the right-hand side is properly done, one obtains a system of the form RtM2(σ−1)R a = Rtk, where a is the vector of DoF of the vector potential and k a given vector; in this case the right-hand side Rtk is in the range of A ≡ RtM2(σ−1)R. But otherwise, the system has no solution, which the behavior of iterative algorithms tells vehemently (drift, as evoked in § 4.2.3, slowed convergence, if not divergence). This seems to be the reason for the difficulties encountered by some (cf. [CB] for a discussion of this point), which can easily be avoided [Rn].

Let us finally touch very briefly on the issue of singularity. Should the edge-element formulation be "gauged", that is, should the process of selecting independent variables be pushed further, to the point of having only nonredundant DoF? This can be done with spanning-tree techniques [AR, Fu, GT, P&]. I still maintain that such gauging is unnecessary, and perhaps detrimental. Recent experiments by Ren [Rn] seem to confirm this, which was already suggested by Barton's work.

BIBLIOGRAPHICAL COMMENTS

The edge elements vs classical elements debate winds its way and will probably go on for long, and it would be tedious to dwell on it. As one says, those who ignore history are bound to relive it. A lot remains to be done, however: research on higher-order edge elements (to the extent that [Ne] does not close the subject), error analysis [Mk, MY, Ts], edge elements on element shapes other than tetrahedra, like prisms, pyramids, etc. [D&].




[AR] R. Albanese, G. Rubinacci: "Magnetostatic field computations in terms of two-component vector potentials", Int. J. Numer. Meth. Engng., 29 (1990), pp. 515-32.
[BR] B. Bandelier, F. Rioux-Damidau: "Variables d'arêtes et variables nodales dans la modélisation des champs magnétiques", Revue Phys. Appl., 25 (1990), pp. 605-12.
[Ba] M.L. Barton: Tangentially Continuous Vector Finite Elements for Non-linear 3-D Magnetic Field Problems, Ph.D. Thesis, Carnegie-Mellon University (Pittsburgh), 1987.
[BC] M.L. Barton, Z.J. Cendes: "New vector finite elements for three-dimensional magnetic field computations", J. Appl. Phys., 61, 8 (1987), pp. 3919-21.
[B4] A. Bossavit: "Complementarity in Non-Linear Magnetostatics: Bilateral Bounds on the Flux-Current Characteristic", COMPEL, 11, 1 (1992), pp. 9-12.
[CB] A. Bossavit, P. Chaussecourte (eds): The TEAM Workshop in Aix-les-Bains, July 7-8 1994, EdF, Dpt MMN (1 Av. du Gal de Gaulle, 92141 Clamart), 1994 (available on request).
[Ci] P.G. Ciarlet: Introduction à l'analyse numérique matricielle et à la programmation, Masson (Paris), 1982.
[Cl] P.G. Ciarlet: The Finite Element Method for Elliptic Problems, North-Holland (Amsterdam), 1978.
[Cr] C. Crowley, P.P. Silvester, H. Hurwitz Jr.: "Covariant projection elements for 3D vector field problems", IEEE Trans., MAG-24, 1 (1988), pp. 397-400.
[Do] J. Dodziuk: "Finite-Difference Approach to the Hodge Theory of Harmonic Forms", Amer. J. Math., 98, 1 (1976), pp. 79-104.
[D&] P. Dular, J.-Y. Hody, A. Nicolet, A. Genon, W. Legros: "Mixed Finite Elements Associated with a Collection of Tetrahedra, Hexahedra and Prisms", IEEE Trans., MAG-30, 5 (1994), pp. 2980-3.
[Fu] K. Fujiwara: "3-D magnetic field computation using edge elements", in Proc. 5th Int. IGTE Symposium, IGTE (26 Kopernikusgasse, Graz, Austria), 1992, pp. 185-212.
[GT] N.A. Golias, T.D. Tsiboukis: "Magnetostatics with Edge Elements: A Numerical Investigation in the Choice of the Tree", IEEE Trans., MAG-30, 5 (1994), pp. 2877-80.
[GL] G.H. Golub, C.F. Van Loan: Matrix Computations, North Oxford Academic (Oxford) & Johns Hopkins U.P. (Baltimore), 1983.
[GR] R. Gruber, J. Rappaz: Finite Element Methods in Linear Ideal Magnetohydrodynamics, Springer-Verlag (Berlin), 1985.
[Ha] M. Hano: "Finite-element analysis of dielectric-loaded waveguides", IEEE Trans., MTT-32, 10 (1984), pp. 1275-79.
[HK] K. Hayata, M. Koshiba, M. Eguchi, M. Suzuki: "Vectorial Finite-Element Method Without any Spurious Solutions for Dielectric Waveguiding Problems Using Transverse Magnetic Field Component", IEEE Trans., MTT-34, 11 (1986), pp. 1120-24.
[Ja] D.A.H. Jacobs: "Generalization of the Conjugate Gradient Method for Solving Nonsymmetric and Complex Systems of Algebraic Equations", CEGB Report RD/L/N70/80 (1980).
[Ki] F. Kikuchi: "Mixed and penalty formulations for finite element analysis of an eigenvalue problem in electromagnetism", Comp. Meth. Appl. Mech. Engng., 64 (1987), pp. 509-21.
[Ko] P.R. Kotiuga: "Analysis of finite-element matrices arising from discretizations of helicity functionals", J. Appl. Phys., 67, 9 (1990), pp. 5815-7.
[Ma] J.C. Maxwell: A Treatise on Electricity and Magnetism, Clarendon Press (Oxford), 3d ed., 1891 (Dover edition, New York, 1954). (First edition: Oxford U.P., 1873.)
[Mk] P. Monk: "A finite element method for approximating the time-harmonic Maxwell equations", Numer. Math., 63 (1992), pp. 243-261.
[MY] P. Monk, E. Süli: "A convergence analysis of Yee's scheme on nonuniform grids", SIAM J. Numer. Anal., 31, 2 (1994), pp. 393-412.
[MH] G. Mur, A.T. de Hoop: "A Finite-Element Method for Computing Three-Dimensional Electromagnetic Fields in Inhomogeneous Media", IEEE Trans., MAG-21, 6 (1985), pp. 2188-91.
[Nd] J.C. Nedelec: "Mixed finite elements in IR3", Numer. Math., 35 (1980), pp. 315-41.
[Ne] J.C. Nedelec: "A new family of mixed finite elements in IR3", Numer. Math., 50 (1986), pp. 57-81.
[PF] J. Parker, R.D. Ferraro, P.C. Liewer: "Comparing 3D Finite Element Formulations Modeling Scattering from a Conducting Sphere", IEEE Trans., MAG-29, 2 (1993), pp. 1646-9.
[PS] W. Prager, J.L. Synge: "Approximations in elasticity based on the concept of function space", Quart. Appl. Math., V, 3 (1947), pp. 241-69.
[P&] K. Preis, I. Bardi, O. Biro, C. Magele, G. Vrisk, K.R. Richter: "Different Finite Element Formulations of 3D Magnetostatic Fields", IEEE Trans., MAG-28, 2 (1992), pp. 1056-9.
[Rn] Z. Ren: "Numerical experiences on non-gauged vector potential formulations", in [CB], pp. 67-70, and "Autogauging of vector potential by iterative solver—Numerical evidence", in 3d Int. Workshop on Electric and Magnetic Fields (Proc., Liège, 6-9 May 1996), A.I.M. (31 Rue St-Gilles, Liège), 1996, pp. 119-24.
[S&] D.H. Schaubert, D.R. Wilton, A.W. Glisson: "A Tetrahedral Modeling Method for Electromagnetic Scattering by Arbitrarily Shaped Inhomogeneous Dielectric Bodies", IEEE Trans., AP-32, 1 (1984), pp. 77-85.
[Ts] I.A. Tsukerman: "Node and Edge Element Approximation of Discontinuous Fields and Potentials", IEEE Trans., MAG-29, 6 (1993), pp. 2368-70.
[vW] J.S. Van Welij: "Calculation of Eddy Currents in Terms of H on Hexahedra", IEEE Trans., MAG-21, 6 (1985), pp. 2239-41.
[Wh] H. Whitney: Geometric Integration Theory, Princeton U.P. (Princeton), 1957.
[WC] S.H. Wong, Z.J. Cendes: "Combined Finite Element-Modal Solution of Three-Dimensional Eddy-Current Problems", IEEE Trans., MAG-24, 6 (1988), pp. 2685-7.