Spectral Transforms on Surfaces

Ramsay Dyer

June 10, 2005

Abstract

We present an introduction to spectral transformations on surfaces from a differential geometric perspective. We show that the Fourier series has an extension to smooth surfaces and that for triangulated surfaces, spectral transforms can be defined which approximate the Fourier coefficients of the underlying smooth manifold. This construction, which is based on a discrete approximation to the differential Laplacian operator, is compared to transforms which are based on the connectivity graph of the mesh alone.

Contents

1 Introduction

2 The Fourier Transform
  2.1 Fourier Series
  2.2 Discrete Fourier Transform
  2.3 Nonuniform Discrete Fourier Transform
  2.4 Some Useful Properties of the Fourier Transform
    2.4.1 Parseval's Relation
    2.4.2 Convolution
  2.5 The Fourier Transform in Two Dimensions
  2.6 The Fourier Basis as Eigenfunctions
    2.6.1 Defining a Spectral Transform
    2.6.2 The Laplacian and its Spectrum

3 Spectral Transforms on Surfaces
  3.1 Smooth Surfaces
  3.2 A Graph Theoretic Perspective
    3.2.1 The Class of Appropriate Operators
    3.2.2 A Note on Orthogonality
  3.3 A Menagerie of Laplacian Operators
    3.3.1 Kirchoff
    3.3.2 Tutte
    3.3.3 Normalised Graph Laplacian
    3.3.4 Symmetrised Tutte Laplacian
    3.3.5 Symmetric Quasi Laplacian
    3.3.6 Discrete Differential
    3.3.7 Unweighted Differential
    3.3.8 Fujiwara
    3.3.9 Chord Length
    3.3.10 Mean Value Operator
  3.4 Non-negative Definite
    3.4.1 The cot operator is non-negative definite

4 Examples of Spectral Processing in Computer Graphics

5 The Discrete Differential Laplacian
  5.1 A Derivation of the Meyer Mean Curvature Operator as a Diffusion Operator
    5.1.1 The Divergence of the Gradient
  5.2 Orthogonal Eigenfunctions of the Discrete Laplacian
    5.2.1 Some Observations
    5.2.2 Computing the Spectral Transform of the Discrete Differential Laplacian Operator

6 Discussion
  6.1 Nonuniform Sampling
  6.2 Multiplicities of the Eigenvalues and Symmetries of the Mesh

Bibliography

1 Introduction

The relatively new field of digital geometry processing has borrowed many techniques and algorithms from the image processing literature. However, the central tools of image processing are signal transforms, and a clear framework for the analogous machinery for geometric data has yet to be developed. The canonical example of a signal transform is the Fourier transform, an understanding of which is usually a prerequisite to a firm understanding of any signal transform. For geometric data defined by a mesh, a common way of creating a Fourier-like transform is to perform an eigendecomposition of a Laplacian operator associated with the mesh. In section 4 we present several examples of how such transforms have been used in the computer graphics literature. These references show an overwhelming bias towards a graph theoretic perspective. In this paper we try to emphasise the continuous domain and, wherever possible, we attempt to present the methods as a discretisation of the differential geometric theory. In section 2 we introduce the Fourier transform, and in section 2.6 we make the connection with spectral transforms. Section 3 gives an overview of the differential geometric and the graph theoretic perspectives and presents a menagerie of Laplacian operators. Finally, we discuss open questions in section 6.

2 The Fourier Transform

The study of spectral transforms on surfaces is motivated by the useful properties they share with conventional signal transforms. In this report we will focus on the relationship between the Fourier transform and the transforms we define on surfaces. Any signal transform that is used in a computational environment must necessarily apply to a discrete signal. However, for our purposes the discrete signal is considered to be sampled from its counterpart on a continuous domain. We will emphasise this perspective by introducing the discrete transforms in terms of the corresponding transforms on a continuous domain. In this section we define the Fourier series and the discrete Fourier transform (DFT) and introduce some of their properties.

When we consider surfaces in the subsequent sections we will confine ourselves to closed surfaces without boundary. These are compact domains. A one dimensional example is the circle, S^1 (see footnote 1). For any given T ∈ R, there is a one to one correspondence between the functions defined on the unit circle, S^1, and the periodic functions f : R → R with f(x + T) = f(x). Such functions can be represented by a Fourier series; their Fourier transform is discrete. Thus although we consider continuous spatial domains, our frequency domain will always be discrete.

Footnote 1: Actually, the circle is the only one dimensional compact manifold of interest. This is because a compact one dimensional manifold without boundary is a closed curve. Any closed curve is isometric to a circle, so, up to a constant scale factor, S^1 is the only compact one dimensional Riemannian manifold without boundary.


2.1 Fourier Series

Let Ω = [0, T[ ⊂ R and let F(Ω) be the space of continuous periodic complex valued functions f : R → C with period T that have a piecewise continuous derivative. F(Ω) is a linear space and we give it a scalar product

\langle \phi, \psi \rangle = \int_0^T \phi^* \psi \, dx.    (1)

A fundamental theorem from analysis states that the functions

e_k(x) = \frac{1}{\sqrt{T}} e^{\frac{i 2\pi k x}{T}}, \qquad k \in \mathbb{Z},

form a basis for F(Ω). This means that for any f ∈ F(Ω) there exist coefficients \hat f_k ∈ C such that

f(x) = \sum_{k \in \mathbb{Z}} \hat f_k e_k(x) = \frac{1}{\sqrt{T}} \sum_{k \in \mathbb{Z}} \hat f_k e^{\frac{i 2\pi k x}{T}} \qquad \forall x \in \mathbb{R}.    (2)

Furthermore, the sum in equation (2) converges uniformly to f. This means that we are able to integrate f by termwise integration of the sum on the right hand side of (2) (see [Clark, 1982] or any other analysis textbook). A demonstration of this theorem can be found in [Courant and Hilbert, 1989], for example. The right side of equation (2) is called the Fourier series for f.

Notice that the functions e_k(x) form an orthonormal basis: \langle e_k, e_j \rangle = 0 if k ≠ j, and 1 if k = j. Thus it follows from the uniform convergence of equation (2) that the coefficients are given by

\hat f_k = \langle e_k, f \rangle = \int_\Omega e_k^* f \, dx = \frac{1}{\sqrt{T}} \int_0^T e^{-\frac{i 2\pi k x}{T}} f(x) \, dx.    (3)

This is the Fourier transform of the periodic function f. The coefficients \hat f_k are the Fourier coefficients of f. The original function f can be recovered from its Fourier coefficients via the Fourier series. Thus equation (2) represents the inverse Fourier transform of f.

Remark: We considered complex valued functions because the complex exponentials serve as an elegant set of basis functions. We could restrict ourselves to real valued functions and define the Fourier series as

f(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty} \left[ a_k \cos\left(\frac{2\pi k x}{T}\right) + b_k \sin\left(\frac{2\pi k x}{T}\right) \right]

(cf. [Courant and Hilbert, 1989]), but the description of the trigonometric functions as an orthonormal basis becomes somewhat more clumsy. However, it should be noted that in subsequent sections we will be primarily interested in real valued functions.

2.2 Discrete Fourier Transform

We now consider the case where our functions will all be sampled on a uniform grid where N grid points lie in Ω with one of them at the origin. Let h = T/N be the distance between the grid points. Thus functions will be sampled on Ω at the points x_m = mh, 0 ≤ m < N. Since our functions are periodic, samples that lie outside of Ω are redundant, because if m = nN + p with n ∈ Z and 0 ≤ p < N then f(x_m) = f(nNh + ph) = f(nT + ph) = f(ph). Thus the sampled function can be represented by a vector Πf whose coefficients are f_m = f(x_m) = f(hm), 0 ≤ m < N. Let F(Ω_N) be the N-dimensional vector space of these sampled functions. Discretizing the integral in equation (1) on F(Ω) yields a summation on F(Ω_N):

\int_0^T \phi^* \psi \, dx \;\longrightarrow\; \sum_{m=0}^{N-1} \phi^*(x_m)\psi(x_m)\, h = h \sum_{m=0}^{N-1} \bar\phi_m \psi_m.    (4)

The sampling operation induces a linear mapping Π : F(Ω) → F(Ω_N) which maps the Fourier basis functions e_k ∈ F(Ω) onto an orthonormal basis of F(Ω_N). Indeed, consider the discretization of the functions e_k ∈ F(Ω). We have

e_k(x_m) = \frac{1}{\sqrt{T}} e^{\frac{i 2\pi k m h}{T}} = \frac{1}{\sqrt{Nh}} e^{\frac{i 2\pi k m}{N}}.

With respect to the scalar product (4) these are unit vectors. Also, if j ≡ k mod N then Πe_k = Πe_j, because if j − k = pN then

\langle \Pi e_k, \Pi e_j \rangle = \frac{1}{N}\sum_{m=0}^{N-1} e^{\frac{i 2\pi (j-k) m}{N}} = \frac{1}{N}\sum_{m=0}^{N-1} e^{i 2\pi p m} = 1,

and two unit vectors whose scalar product is unity must necessarily be identical.

Furthermore, if j ≢ k mod N then Πe_k and Πe_j are orthogonal. For then

\langle \Pi e_k, \Pi e_j \rangle = \frac{1}{N}\sum_{m=0}^{N-1} e^{\frac{i 2\pi (j-k) m}{N}} = \frac{1}{N}\sum_{m=0}^{N-1} \left( e^{\frac{i 2\pi (j-k)}{N}} \right)^m = \frac{1}{N}\, \frac{1 - e^{i 2\pi (j-k)}}{1 - e^{\frac{i 2\pi (j-k)}{N}}} = 0,

where the penultimate equality comes from the familiar closed expression for a truncated geometric series.

Thus we have N orthonormal basis vectors Πe_k, 0 ≤ k < N. Sometimes the index set is chosen so that −⌊N/2⌋ ≤ k ≤ ⌈N/2⌉ − 1, however it is clear from the preceding that this represents the same basis. The spacing between the grid points plays no significant role in the definition of F(Ω_N) or the basis functions Πe_k. In standard treatments of the DFT it is customary to set h = 1, which is equivalent to assuming that T = N. We now have a canonical isomorphism between F(Ω_N) and C^N with the usual scalar product. So given any f ∈ F(Ω_N) = C^N we have

f = \sum_{k=0}^{N-1} \hat f_k \Pi e_k,    (5)

which is the analogue of equation (2). Letting E be the N × N unitary matrix whose columns are the Πe_k, we can express equation (5) in matrix notation as

f = E \hat f.    (6)

This is the inverse DFT. The DFT itself is the inverse of equation (6):

\hat f = E^* f,    (7)

where E^* is the Hermitian conjugate of E. By examining the individual coefficients in equations (6) and (7) we get expressions which are perhaps more familiar. The IDFT becomes

f_j = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} \hat f_k e^{\frac{i 2\pi k j}{N}}    (8)

and the DFT is written

\hat f_k = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} f_j e^{-\frac{i 2\pi k j}{N}}.    (9)
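The matrix form of the transform pair is easy to verify numerically. The following sketch (in Python with NumPy, which is not otherwise used in this report; all names are ours) builds the unitary matrix E whose columns are the vectors Πe_k and checks equations (6)-(9).

    import numpy as np

    N = 8
    m = np.arange(N)
    # Columns of E are the sampled basis vectors (Pi e_k)_m = exp(i 2 pi k m / N) / sqrt(N).
    E = np.exp(2j * np.pi * np.outer(m, np.arange(N)) / N) / np.sqrt(N)

    f = np.random.randn(N) + 1j * np.random.randn(N)
    f_hat = E.conj().T @ f            # DFT, equations (7) and (9)
    f_back = E @ f_hat                # inverse DFT, equations (6) and (8)

    assert np.allclose(E.conj().T @ E, np.eye(N))   # E is unitary
    assert np.allclose(f_back, f)
    # NumPy's fft omits the 1/sqrt(N) factor, so np.fft.fft(f) equals sqrt(N) * f_hat.
    assert np.allclose(np.fft.fft(f), np.sqrt(N) * f_hat)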

Remarks:

a) Many expositions of the DFT omit the factor of 1/√N from the expression for the DFT and have a factor of 1/N instead of 1/√N in the expression for the IDFT. This is the way the DFT is implemented in Matlab, for example. We use the formulation (8) and (9) since it arises more naturally from the consideration of orthogonal basis vectors.

b) The linear mapping Π : F(Ω) → F(Ω_N) is many to one. In general, there is no obvious correspondence between the FT of a function f ∈ F(Ω) and the DFT of Πf ∈ F(Ω_N). However, if f ∈ F(Ω) has a finite Fourier series, that is \hat f_k = 0 if |k| > M for some integer M, then f = \sum_{|k| \le M} \hat f_k e_k and by linearity Πf = \sum_{|k| \le M} \hat f_k \Pi e_k. Since Πe_k = Πe_j whenever k ≡ j mod N, the j-th Fourier coefficient of Πf will be the sum of all the \hat f_k with k ≡ j mod N. A function f ∈ F(Ω) is bandlimited by B if \hat f_k = 0 for all |k| ≥ B. If f is bandlimited by N/2 then \Pi f = \sum_{|k| < N/2} \hat f_k \Pi e_k.

[. . . sections 2.3 through 2.6.1 are missing from this copy of the text . . .]

. . . so if λ_{|k|} > λ_{|q|} then e_k has a higher (absolute) frequency than e_q. Thus arranging the e_k so that their corresponding eigenvalues are in increasing order effectively orders them in increasing absolute frequency. Note also that to each eigenvalue λ_{|k|} with |k| > 0, there correspond two eigenfunctions, e_k and e_{-k} = e_k^*.

Consider now the discrete approximation to L_Ω given by the standard second order finite difference template on a uniform grid:

(L_{\Omega_N} f)_i = \frac{1}{h^2}(2 f_i - f_{i-1} - f_{i+1}),

Footnote 3: The shift operators can be viewed as the action of the group of translations on Ω. The translations act transitively on Ω (there is only one orbit) and so Ω is a homogeneous space. The Fourier transform can be extended to apply to any homogeneous space (this yields the spherical harmonic transform on a sphere, for example) [[FIXME – cite Helgason or Inui or something]]. Unfortunately most of the objects that are of interest to us do not fall into this category.

where f ∈ F(Ω_N) and h is the distance between consecutive grid nodes. The discrete Fourier basis vectors Πe_k are eigenvectors of L_{Ω_N}:

(L_{\Omega_N} \Pi e_k)_j = \frac{1}{h^2}\left(2 - e^{-\frac{i 2\pi k}{N}} - e^{\frac{i 2\pi k}{N}}\right)(\Pi e_k)_j = \frac{2}{h^2}\left(1 - \cos\frac{2\pi k}{N}\right)(\Pi e_k)_j = \frac{4}{h^2}\sin^2\!\left(\frac{\pi k}{N}\right)(\Pi e_k)_j.

Thus the eigenvalue associated with Πe_k is λ_{|k|} = \frac{4}{h^2}\sin^2(\pi k / N). Using 1/h^2 = N^2/T^2 and expanding in a series we find λ_{|k|} = \left(\frac{2\pi k}{T}\right)^2 + O(k^4/N^2). So the eigenvalues of this discrete Laplacian are a good approximation to those of the actual differential operator only when k^2 ≪ N.

For the case of functions in F(Ω^2), the differential Laplacian becomes L_{\Omega^2} = -\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) and the Fourier basis functions e_{k_x k_y} are its eigenfunctions with corresponding eigenvalues

\lambda = \left(\frac{2\pi k_x}{T_x}\right)^2 + \left(\frac{2\pi k_y}{T_y}\right)^2.

If we discretise F(Ω^2) with an N_x × N_y grid and use the stencil defined by

(L_{\Omega^2_N} f)_{ij} = \frac{1}{h_x^2}(2 f_{ij} - f_{i-1,j} - f_{i+1,j}) + \frac{1}{h_y^2}(2 f_{ij} - f_{i,j-1} - f_{i,j+1})    (31)

we find that the discrete Fourier basis vectors are eigenvectors with eigenvalues

\lambda = \frac{4}{h_x^2}\sin^2\!\left(\frac{\pi k_x}{N_x}\right) + \frac{4}{h_y^2}\sin^2\!\left(\frac{\pi k_y}{N_y}\right) = \left(\frac{2\pi k_x}{T_x}\right)^2 + \left(\frac{2\pi k_y}{T_y}\right)^2 + O\!\left(\frac{k_x^4}{N_x^2} + \frac{k_y^4}{N_y^2}\right).

Traditionally the Fourier coefficients of f ∈ F(Ω^2) are plotted in the "frequency domain", where the horizontal and vertical axes represent values of k_x and k_y respectively. Thus the coefficient \hat f_{k_x k_y} may be plotted as an intensity value at the point (k_x, k_y) in the frequency domain. Alternatively, the Fourier transform of f can be represented as a height field above the k_x k_y-plane. Figure 1 gives an example of these visualisations for a specific function (see footnote 4). This representation of the frequency domain is useful for gaining intuition into the 2D Fourier transform. Unfortunately, the representation exploits the separable structure of the basis functions e_{k_x k_y}, and this structure is generally not present for eigenfunctions of the Laplacian on other surfaces. Therefore, we arrange the coefficients of the spectral transform of f on a one dimensional line with the ordering defined by the size of the corresponding eigenvalue.

Footnote 4: This "function" is a texture, 161.jpg, downloaded from Jeremy Debonnet's web site: http://www.debonet.com/Research/TextureSynthesis/ in July 2002. We have converted it to greyscale and downsampled it to 25 × 25.

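The eigenvalue formula derived above is easy to check numerically; the sketch below (Python/NumPy, with made-up values of N and T) builds the periodic second-difference matrix and confirms that each sampled Fourier basis vector is an eigenvector with eigenvalue (4/h^2) sin^2(πk/N).

    import numpy as np

    N, T = 16, 2 * np.pi
    h = T / N
    I = np.eye(N)
    # Periodic second differences: (L f)_i = (2 f_i - f_{i-1} - f_{i+1}) / h^2.
    L = (2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)) / h**2

    for k in range(N):
        e_k = np.exp(2j * np.pi * k * np.arange(N) / N) / np.sqrt(N * h)
        lam = 4 * np.sin(np.pi * k / N) ** 2 / h**2
        assert np.allclose(L @ e_k, lam * e_k)
    # For k^2 << N the eigenvalue lam is close to the continuous value (2 pi k / T)^2.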

[Figure 1: panels a)-d): "The function f" plotted over the x-y plane and the "DFT of f" plotted over the k_x-k_y plane, shown both as height fields and as intensity images.]

Figure 1: Each of the two rows shows a different way of visualising the same function and its DFT. On the left is the spatial domain representation and on the right the Fourier coefficients are plotted in the frequency domain.

In figure 2 we show plots in the frequency domain of the eigenvalues of the continuous and discrete Laplacian operators. To each point (k_x, k_y) corresponds a basis function e_{k_x k_y}, and its eigenvalue appears as the height of the graph at that point. As we saw at the beginning of section 2.6, L_{Ω^2_N} can be described by a convolution with a kernel h. In fact this kernel is apparent from the form of equation (31). We have

h_{ij} = 2\left(\frac{1}{h_x^2} + \frac{1}{h_y^2}\right) if i = j = 0;  h_{ij} = -\frac{1}{h_x^2} if i = ±1 and j = 0;  h_{ij} = -\frac{1}{h_y^2} if j = ±1 and i = 0;  and h_{ij} = 0 otherwise.

The DFT of h yields the eigenvalues of L_{Ω^2_N}, so the graph in figure 2b can also be interpreted as the DFT of h. The convolution kernel for L_{Ω^2} does not exist as a function (see footnote 5) in F(Ω^2).

[Figure 2: two surface plots, "Eigenvalues of Differential Laplacian" and "Eigenvalues of Kirchoff Laplacian", over the (k_x, k_y) wave number plane.]

Figure 2: The frequency domain graphs of the eigenvalues of the continuous (a) and discrete (b) Laplacian operators. The eigenvalue of e_{k_x k_y} is the height of the surface above the point (k_x, k_y).

[Figure 3: two contour plots, "Contour Plot of Laplacian Eigenvalues in Frequency Domain" and "Contour Plot of Discrete Laplacian Eigenvalues in Frequency Domain", over the (k_x, k_y) plane.]

Figure 3: The frequency domain contour plots of the eigenvalues of the continuous (a) and discrete (b) Laplacian operators.

The difference between the spectra shown in figure 2 is perhaps better displayed by the contour plots of figure 3. The contours of the discrete Laplacian indicate the correspondence between the frequency domain representation and the representation used for general spectral transforms. When the discrete Fourier basis vectors are interpreted as eigenvectors of L_{Ω^2_N}, each will be given a single index 0 ≤ k < N_x N_y. The index of e_{k_x k_y} will be larger than that of e_{k_x' k_y'} if its corresponding eigenvalue is larger. In other words, if the eigenvalue contour that passes through (k_x, k_y) encloses the one that passes through (k_x', k_y'), then e_{k_x k_y} will have a larger index than e_{k_x' k_y'}, thus determining the ordering of the corresponding spectral coefficients of f. If both points lie on the same contour we can say nothing about their relative indices.

Footnote 5: A convolution kernel for L_{Ω^2} could be defined if we were to embrace distributions (generalised functions).


[Figure 4: two graphs, "Magnitude of Fourier Coefficients vs Eigenvalues of K 2:300" and "Magnitude of Spectral Coefficients of f from 2 to 300"; coefficient magnitude is plotted against coefficient number.]

Figure 4: The top graph shows the magnitudes of the coefficients of the DFT of the function shown in figure 1, arranged according to the magnitude of the associated eigenvalue of L_{Ω^2_N}. The lower graph shows the magnitudes of the spectral coefficients of the same function, where the basis functions are the output of a black box eigensolver applied to L_{Ω^2_N}. The coefficient corresponding to the zeroth eigenvalue is omitted in each case since it is orders of magnitude larger than the others (see the large spike in the centre of the lower left image of figure 1).

As a concrete example, figure 4 shows the magnitudes of the DFT coefficients of the function of figure 1 arranged in this way, together with the magnitudes of a spectral transform of the same function. The spectral transform was produced by performing an eigendecomposition of L_{Ω^2_N}. As expected, these graphs look similar but not identical. Since most of the eigenvalues of L_{Ω^2_N} have a multiplicity of four or eight, the ordering of the basis functions is not uniquely defined. More than that, there is probably no rearrangement of the indices that could make the graphs identical since, as we discussed in section 2.6.1, the basis functions produced by the eigensolver are not necessarily the discrete Fourier basis functions. Thus in order to have a meaningful comparison between the two sets of coefficients we use equation (30) to get the coefficients associated with each eigenspace. The resulting graphs are shown in figure 5, and as expected they are indistinguishable.

[Figure 5: two graphs, "Eigenspace Coefficients of DFT of f vs Eigenspace Number" and "Eigenspace Coefficients of Spectral Transform of f vs Eigenspace Number".]

Figure 5: These graphs correspond to the graphs of figure 4 but with the coefficients corresponding to the same eigenvalue gathered together according to equation (30). The top graph used the coefficients from the DFT and the bottom graph used those resulting from a black box eigensolver. Note that all eigenspaces are represented in the plots here, whereas in figure 4 only the eigenvectors with indices 1 to 299 are represented (there are 625 eigenvectors indexed from zero).

This example emphasises the importance of recognising multiple eigenvalues when comparing two spectral transforms in which the basis functions were not generated in an identical manner. The multiplicities of the eigenvalues are examined in another context in section 6.2 and we discuss them further in section 6.

Remark: The discrete Laplacian L_{Ω^2_N} is a real symmetric matrix and therefore not only are its eigenvalues all real, but it admits a basis of real valued eigenvectors. The eigenvectors returned by a typical eigensolver for such a matrix will be real, but the discrete Fourier basis vectors are complex valued. This is not a paradox, since if e_{k_x k_y} ∈ E_λ, then so is e_{-k_x,-k_y} = e^*_{k_x k_y}. We can always construct a pair of real orthonormal vectors from each such pair of Fourier vectors.
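Equation (30) itself lies in a part of the text not reproduced here, but the quantity plotted in figure 5 can be obtained with the following sketch (Python/NumPy). It assumes, consistent with the remark above, that the eigenspace coefficient is the Euclidean norm of all the coefficients sharing an eigenvalue, which is invariant under the eigensolver's arbitrary choice of basis within each eigenspace.

    import numpy as np

    def eigenspace_coefficients(eigvals, coeffs, tol=1e-8):
        """Group spectral coefficients whose eigenvalues agree (up to tol) and
        return one magnitude per eigenspace."""
        order = np.argsort(eigvals)
        eigvals, coeffs = np.asarray(eigvals)[order], np.asarray(coeffs)[order]
        magnitudes, start = [], 0
        for i in range(1, len(eigvals) + 1):
            if i == len(eigvals) or eigvals[i] - eigvals[start] > tol:
                magnitudes.append(np.linalg.norm(coeffs[start:i]))
                start = i
        return np.array(magnitudes)

Applied to the DFT coefficients and to the coefficients from a black box eigensolver of L_{Ω^2_N}, the grouped magnitudes agree, which is the comparison made in figure 5.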

3 Spectral Transforms on Surfaces

One of the principal obstacles to using signal processing methods for geometric data is the fact that the "signal" is not easily interpreted as a function. In image processing, for example, the signal is a two dimensional array of pixel values. Such information is naturally interpreted as a function of two variables, the x and y coordinates of the pixel. The function value at (x, y) is the value of the pixel at those coordinates. Interpolation theory then allows one to consider a corresponding function on the underlying continuous domain. On the other hand, geometric information describing a surface is usually given as a series of sample points in R^3, perhaps with associated connectivity information. In order to make use of signal processing methods, we must somehow interpret this data as characterising a function.

One approach would be to find a parameterisation of the underlying surface onto a planar domain U. The surface is then represented by a mapping ψ : U ⊂ R^2 → R^3. However, there are a number of disadvantages associated with this approach. First, there is no canonical parameterisation ψ. The relationship between the spectral properties of ψ and those of the surface data is a complicated one; a different parameterisation will have different spectral properties. Perhaps a more serious impediment to this approach is the fact that parameterising a surface generally involves making "cuts". For example, a closed surface cannot be mapped to the plane without introducing parametric discontinuities somewhere on the surface. Again, there is no canonical way to choose these cuts. Furthermore, these cuts end up on the boundary of U in two locations that must be matched when creating the boundary conditions for the signal transform. Thus the usual periodic boundary conditions on a rectangular domain that are used for the Fourier transform cannot be accommodated, except perhaps for surfaces with genus one.

The approach we take is to consider the underlying surface S itself as the domain. We then apply signal processing techniques to functions f defined on the surface. The three coordinate functions which to any point x ∈ S give the (x, y, z) coordinates of x ∈ R^3 describe the embedding of S ⊂ R^3 and are taken to be our geometric "signal" (see footnote 6). One obvious disadvantage of this approach is that it presupposes knowledge of S. There seems to be little hope of exploiting such an approach to develop an interpolation theory for extracting a surface from an unorganized point cloud, for example. The surfaces that we will consider will be closed surfaces (without boundaries) that are described by a manifold mesh.

Footnote 6: It is not clear that this is the best definition of the geometric signal. For example, [Tasdizen et al., 2002] suggested that the normal vector field on the surface would be a more appropriate definition of the signal. Another issue is the fact that these three coordinate functions are treated independently; it is not obvious that this is justified. However, we put these issues aside for now and follow the now standard practice of using the coordinate functions as the geometry signal.


3.1 Smooth Surfaces

The Laplacian operator can be defined on a smooth surface as the divergence of the gradient: L_S = −div∘∇. However, the definition of the divergence and the gradient on a surface requires a significant foray into the realm of differential geometry. For a detailed introduction to the Laplacian operator on surfaces see [Rosenberg, 1997].

Let S be a compact connected smooth (C^∞) surface and let A(S) be the space of square integrable real valued functions on S (this space is more conventionally referred to as L^2(S)). Then the eigenfunctions of L_S form an orthonormal basis of A(S). Also, L_S has a zero eigenvalue with multiplicity one, and all its other eigenvalues are positive and accumulate only at infinity (this means that the eigenspaces are finite dimensional). This is referred to as the Hodge theorem for functions in [Rosenberg, 1997]. Orthogonality in this context refers to the scalar product defined by integration on S. So if e_j and e_k are eigenfunctions of L_S,

\langle e_j, e_k \rangle = \int_S e_j e_k \, dv = \delta_{jk}.    (32)

This allows us to construct a spectral transform on S by projecting onto the eigenfunctions of L_S in the same manner as was described in sections 2.6.1 and 2.6.2. The Hodge theorem for functions assures us that we will have a discrete set of coefficients

\hat f_k = \langle e_k, f \rangle = \int_S e_k f \, dv,    (33)

and that any function f ∈ A(S) can be represented as

f = \sum_k \hat f_k e_k.    (34)

Equation 34 is the closest analogue to the Fourier series (2) that we have for A(S). Equation (33) defines a spectral transform on S. We saw in section 2.6.2 that using eigenvalues of the Laplacian to order the eigenfunctions yielded an ordering in which the norm of the frequency k = (kx , ky ) of the eigenfunctions was monotonically non-decreasing; the eigenvalues were proportional to the squared norm of the frequency of the corresponding eigenfunction. In the case of smooth surfaces, there are no Fourier basis functions and the concept of frequency is no longer clear. However we do have an intuitive notion of how the functions should be ordered. Functions which undulate slowly over S should be considered to have lower spectral power than functions which exhibit many oscillations over the surface. A theorem of Courant gives us some assurance that the eigenfunctions are indeed ordered correctly by eigenvalue (see [Craioveanu et al., 2001, §3.3]):

26

Theorem 3.1 (Courant) The eigenvalues of L_S are given by

\lambda_k = \inf_{\|f\|=1} \left\{ \int_S \|\nabla f\|^2 \, dv \;\middle|\; \langle e_j, f \rangle = 0 \text{ for } j < k \right\},

and if

\lambda_k = \frac{\int_S \|\nabla f\|^2 \, dv}{\|f\|^2}, \qquad \|f\| \neq 0,

then f is an eigenfunction of L_S with eigenvalue λ_k.

Theorem 3.1 characterises the k-th eigenfunction as the function that has the smallest mean square gradient and is orthogonal to all e_j, j < k. This seems like a good substitute for the concept of frequency. We expect high frequency signals to manifest large gradients.

3.2 A Graph Theoretic Perspective

Although a surface S can be described as a differentiable manifold without any reference to an ambient space, we will assume that S ⊂ R^3 so that every point p ∈ S is represented by a point x(p) ∈ R^3. In other words, S comes equipped with an intrinsic smooth embedding in R^3. [[XXX FIXME: replace with simplicial complex]]

For practical applications only a discrete approximation to the surface S is available to us. A mesh M = (G, X) is a graph G = (V, E) together with a mapping X : V → R^3 which defines the coordinate functions on the vertices: for v_i ∈ V, X(v_i) = x_i = (x_i, y_i, z_i)^T ∈ R^3. A mesh M approximates a surface S if all of the vertices of M lie on or near S. A mesh is a valid mesh if there is some smooth surface S for which X(v) ∈ S for all v ∈ V. Henceforth we will assume that all meshes are valid. [[FIXME – this is sloppy and not strong enough. We need to define manifold meshes, but I don't want to include faces in the definition of a mesh (i.e. we want graphs, not simplicial complexes)]]

Techniques of spectral analysis have been extensively studied by graph theorists as well as differential geometers. Much of the spectral processing work in the computer graphics literature has had a graph theoretic flavour; the operator whose eigenvectors are computed is defined on the underlying graph G of M, independent of the embedding X. X itself then comes into the picture as the signal upon which the spectral transform is applied (by considering each of the three coordinate functions independently). A Laplacian operator on a graph can be defined at each node as

(L f)_i = a_i \left( f_i - \sum_{j \in N(i)} w_{ij} f_j \right),    (35)

where N(i) is the set of neighbours of v_i ∈ V, and the w_{ij} are called the weights; they must sum to unity: \sum_{j \in N(i)} w_{ij} = 1. The factor a_i is restricted to being a nonzero real number. Thus the graph Laplacian of a function evaluates the difference between the value of the function at a node and a weighted average of the values at neighbouring nodes.

A justification for (35) bearing the same name as the differential operator is perhaps given by the mean value property of the latter. If p is a point on the plane and Γ_h is the circle of radius h centred at p, then

L_{\Omega^2} f(p) = \lim_{h \to 0} \frac{2}{h^2 \pi} \int_{\Gamma_h} \left( f(p) - f(\xi) \right) d\xi    (36)

(cf. [Courant and Hilbert, 1989, p. 276] [[FIXME: get a better reference and understand that constant]]). Equation (36) asserts that L_{\Omega^2} f(p) is the difference between f(p) and an average of its infinitesimal neighbours.

There are many different choices of the weights w_{ij} and the constants a_i in the definition (35). Many of them will produce a spectral transform that behaves much like the Fourier transform. However, the connection between these transforms and the one described by equation (33) is not always clear. On a mesh M that approximates S, we wish to construct discrete spectral transforms that can be shown quantitatively to approximate (33). We discuss the reasons why this might be useful in section 6. In order to attain such an approximation we expect that the operator that is used to construct the basis functions must encode the geometry of the mesh in addition to the topology of the underlying graph.

Nonetheless, the purely connectivity based Laplacian operators have played an important role in many previous geometric processing studies. Also, as has been observed in [Isenburg et al., 2001], and is discussed in section 6, there is evidence that for reasonably uniform sampling there is a surprising amount of geometric information implicit in the topology of G. Also, the discretisations of the Laplacian operator based on a differential geometric approach that we will study still have the form (35). If geometric information is encoded into the a_i or the weights w_{ij}, the theory may still be interpreted from a graph theoretic perspective. Thus in the following subsections we explore further the graph theoretic approach.
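For concreteness, here is a small sketch of how an operator of the form (35) acts on a function stored at the vertices (Python/NumPy; the adjacency structure, weights and factors a_i are placeholders to be supplied by a particular operator).

    import numpy as np

    def graph_laplacian_apply(f, neighbours, weights, a):
        """Apply (L f)_i = a_i * (f_i - sum_j w_ij f_j), equation (35).

        neighbours[i] lists the vertices adjacent to vertex i, and weights[i]
        holds the matching w_ij, assumed to sum to one for each i."""
        Lf = np.empty(len(f))
        for i, (nbrs, w) in enumerate(zip(neighbours, weights)):
            Lf[i] = a[i] * (f[i] - np.dot(w, f[np.asarray(nbrs)]))
        return Lf

Choosing w_ij = 1/d_i and a_i = 1 gives the Tutte Laplacian of section 3.3.2; the geometric operators studied later only change the weights and the factors a_i.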

3.2.1 The Class of Appropriate Operators

It was seen in section 2.6 that the shift invariant operators admit the Fourier basis functions as their eigenfunctions. For smooth surfaces, such a concise characterisation of appropriate operators is not forthcoming. Fortunately, the Laplacian operator was identified as having desirable properties and this was used to define the canonical transform for smooth surfaces. From the graph theoretic perspective the concept of the Laplacian itself, as described by equation (35), admits an entire class of operators.

Note that a graph operator is an N × N matrix where N = |V|. A function on the graph is defined on the nodes of the graph and is arranged as a vector according to the indices of V. Note that the ordering of the nodes is unimportant as long as it is consistent (see section 6.2).

In a recent paper, [Zhang, 2004], the family of generalised mesh Laplacians was introduced and characterised by two properties. An operator F is a generalised mesh Laplacian if

1. the kernel of F is the space of constant vectors, and
2. F has no negative eigenvalues.

By constant vector we mean one whose coefficients are all the same, in analogy with the constant functions. The first property is clearly satisfied by operators defined by equation (35). Note that when applied to operators on Ω or Ω^2, this property rules out many shift invariant operators. It is easy to check that the constant vectors will be eigenvectors of a shift invariant operator. However, shift invariant operators don't generally annihilate the constant vectors, and they are not restricted from annihilating other vectors (indeed, the operator which sends all vectors to zero is shift invariant).

The second property arises from the notion of smoothing. In analogy with an explicit time integration of the discretised heat equation (see [Desbrun et al., 1999] for details in this context), we expect H = (I − αF) to be a smoothing operator. As defined in [Zhang, 2004], this means Hv = v iff v is a constant vector, and if λ ≠ 1 is an eigenvalue of H then it is real and |λ| < 1. We see that H will satisfy this property whenever α < 2/λ_N, where λ_N is the largest eigenvalue of F. [[FIXME: check this with the CFL condition. Desbrun seems to be claiming that α needs to be smaller than one. Also, can we relate the definition of a generalised mesh Laplacian with equation (35)? Do all generalised mesh Laplacians have that form? No – the restriction to the nearest neighbours precludes that (U^T U is a counter example). Okay then, what restrictions, if any, do we need to put on the a_i and w_{ij} in order to ensure that we have a generalised mesh Laplacian?]]
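The smoothing behaviour that motivates the second property can be sketched directly (Python/NumPy; F stands for any generalised mesh Laplacian supplied by the caller, and alpha is assumed to satisfy the bound discussed above).

    import numpy as np

    def smooth(f, F, alpha, steps=10):
        """Explicit smoothing: repeatedly apply H = I - alpha * F to f.

        If F has eigenvalues 0 = lam_1 <= ... <= lam_N and alpha < 2 / lam_N,
        every eigenvalue of H other than 1 lies strictly inside (-1, 1), so the
        iteration preserves the constant component of f and damps the rest."""
        for _ in range(steps):
            f = f - alpha * (F @ f)
        return f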

3.2.2 A Note on Orthogonality

Although we considered Hermitian operators and complex vectors in section 2.6, this was primarily to accommodate the Fourier basis vectors. We now wish to confine ourselves to real valued vectors and matrices. If H is a real symmetric matrix then it has a complete set of (real) orthonormal eigenvectors and all of its eigenvalues are real. Both of these properties are desirable for our purposes. Real eigenvalues impose a natural ordering (up to multiplicity) on the eigenvectors. Also, if the eigenvectors are orthonormal, then the spectral coefficients of a vector f can be computed by a simple scalar product: \hat f_k = \langle e_k, f \rangle = e_k^T f.

If H possesses a complete set of real eigenvectors, but they are not orthogonal, we can still compute spectral coefficients, but it requires inverting an N × N matrix, where N is the number of vertices in the mesh. Thus if HE = EΛ, where E is a non-singular matrix whose columns are eigenvectors of H and Λ is the diagonal matrix of eigenvalues, then we may write f = E\hat f. The spectral transform is then given by

\hat f = E^{-1} f.    (37)

Only when the eigenvectors are orthogonal does (37) take the form of equation (6) and so allow a simple inner product computation of the individual \hat f_k. However, we are at liberty to define a scalar product for which the basis vectors are orthogonal. Indeed, if we define B = E^{-T} E^{-1} and

\langle h, f \rangle_B = h^T E^{-T} E^{-1} f,    (38)

then we have \langle e_j, e_k \rangle_B = \delta_{jk} and \langle e_k, f \rangle_B = \hat f_k. Of course, this does not gain us anything in general, since the inverse of E still needs to be computed in order to define the metric. Some of the operators we will be interested in do not have orthogonal eigenvectors, but they allow the matrix E^{-T} E^{-1} to be deduced from the special form of the operator. Specifically, we will encounter operators of the form

H = D S,    (39)

where D is a diagonal matrix and S is symmetric. It then follows that H is similar to a symmetric matrix:

O = D^{\frac12} S D^{\frac12} = D^{-\frac12} H D^{\frac12}.    (40)

The matrix O has a complete set of orthonormal eigenvectors with real eigenvalues. Furthermore, if Oξ = λξ, then H(D^{\frac12}ξ) = λ(D^{\frac12}ξ). Thus if X is the orthogonal matrix of eigenvectors of O, then E = D^{\frac12} X is the matrix of eigenvectors of H. It is easy to see that E^{-T} E^{-1} = D^{-1}, so a natural metric associated with the basis of eigenvectors of H is given by

\langle h, f \rangle_{D^{-1}} = h^T D^{-1} f.    (41)

For the discrete differential Laplacian described in section 5 we encounter this situation. We show in section 5.2 that the metric (41) arises naturally in another context: as the discrete approximation to the integral scalar product as in equation (4).
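The construction of equations (39)-(41) translates directly into a practical recipe: reduce the eigenproblem for H = DS to a symmetric one. A sketch in Python/NumPy (assuming the diagonal of D is strictly positive so that its square root exists):

    import numpy as np

    def ds_eigendecomposition(D_diag, S):
        """Eigen-decomposition of H = diag(D_diag) @ S via the similar symmetric
        matrix O = D^{1/2} S D^{1/2} = D^{-1/2} H D^{1/2} (equation (40))."""
        d_sqrt = np.sqrt(D_diag)
        O_sym = (S * d_sqrt) * d_sqrt[:, None]   # D^{1/2} S D^{1/2}
        lam, X = np.linalg.eigh(O_sym)           # orthonormal eigenvectors of O
        E = X * d_sqrt[:, None]                  # E = D^{1/2} X, eigenvectors of H
        return lam, E

The columns of E satisfy HE = E diag(lam), and one can check that E^T diag(1/D_diag) E is the identity, i.e. the eigenvectors are orthonormal in the metric of equation (41).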

3.3 A Menagerie of Laplacian Operators

We present here a brief description of some of the operators that have been used for defining spectral transforms, as well as some whose spectral properties have not yet been exploited in the graphics literature. A more detailed description of the properties of many of these operators can be found in [Zhang, 2004].

We will make use of the following notational conventions. The adjacency matrix A of a graph G is defined by [A]_{ij} = 1 if (i, j) ∈ E(G) and 0 otherwise. The valence of a vertex v_i is denoted by d_i and is the number of edges incident to v_i. Thus d_i = |N(i)|. R is the diagonal matrix whose diagonal entries are the d_i.

3.3.1 Kirchoff

The Kirchoff operator is defined by

K = R - A.    (42)

Locally it is described by equation (35) with a_i = d_i and w_{ij} = d_i^{-1}:

(K f)_i = \sum_{j \in N(i)} (f_i - f_j).    (43)

Properties
• It is a symmetric operator.
• The coefficients in any row sum to zero. (This will be referred to as the zero row sum property. It implies that constant vectors are in the kernel.)
• It has no negative weights.
• There are no particular requirements on the mesh; it does not have to be triangular or even manifold.
• It is a graph Laplacian operator and encodes no explicit information on the geometry of the mesh.

Note that this is the discrete differential operator for a uniform cartesian grid: L_{Ω^2_N} = K when h = 1.

3.3.2 Tutte

The Tutte Laplacian is also referred to as the uniform Laplacian. It is obtained by factoring out the a_i in the Kirchoff Laplacian. Thus

T = R^{-1} K = I - R^{-1} A.    (44)

This amounts to setting a_i = 1 and leaving w_{ij} = d_i^{-1} in equation (35):

(T f)_i = d_i^{-1} \sum_{j \in N(i)} (f_i - f_j).    (45)

Properties
• It is not a symmetric matrix; the eigenvectors are made orthogonal by means of the metric \langle h, f \rangle_R = h^T R f.
• It has the zero row sum property.
• It has unit diagonal entries.
• It has no negative weights.
• There are no particular constraints on the mesh; it need not be triangular nor manifold.
• It is a graph Laplacian operator. It does not explicitly encode geometric information.
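The two operators defined so far are easily assembled from an adjacency matrix; a sketch (Python/NumPy; the 4-cycle graph is only an example):

    import numpy as np

    # Adjacency matrix of an example graph (a cycle on four vertices).
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    d = A.sum(axis=1)                    # vertex valences
    R = np.diag(d)

    K = R - A                            # Kirchoff Laplacian, equation (42)
    T = np.diag(1.0 / d) @ K             # Tutte Laplacian, equation (44)

    ones = np.ones(len(d))
    assert np.allclose(K, K.T)           # K is symmetric
    assert np.allclose(K @ ones, 0.0)    # zero row sum
    assert np.allclose(T @ ones, 0.0)    # zero row sum, although T is not symmetric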


3.3.3 Normalised Graph Laplacian

The Tutte Laplacian has the form (39), with R^{-1} playing the role of D and K playing the role of S. In that context, the normalised graph Laplacian plays the role of the symmetric matrix O. It is defined by

G = R^{\frac12} T R^{-\frac12} = R^{-\frac12} K R^{-\frac12} = I - R^{-\frac12} A R^{-\frac12}.    (46)

The local form of this equation can be described by taking a_i = 1 and putting w_{ij} = \frac{1}{\sqrt{d_i d_j}} in equation (35), except that this no longer conforms to the property \sum_{j \in N(i)} w_{ij} = 1 required by equation (35). We write the local form as

(G f)_i = f_i - \sum_{j \in N(i)} \frac{1}{\sqrt{d_i d_j}} f_j.    (47)

Properties
• It is symmetric.
• It does NOT have a zero row sum. Thus it does not necessarily annihilate constant vectors.
• It has no negative weights.
• It does not require any special properties of the mesh.
• It does not contain explicit geometric information.

Note that the fact that the weights do not sum to unity in equation (47) means that it is not a Laplacian operator as dictated by equation (35). This also implies that the kernel of G cannot be identified with the constant vectors. Therefore, G is not a generalised mesh Laplacian as defined in section 3.2.1 either. These facts mean that the basis functions of G do not directly give rise to a transform with useful properties. However, G is useful as a means of computing the eigenfunctions of T. Since eigensolvers are able to take advantage of a symmetric matrix, one can compute the eigenvectors of G and thus obtain the eigenvectors of T as described in section 3.2.2. The fact that G has the same spectrum (the same set of eigenvalues) as T probably accounts for its popularity in graph theory.

3.3.4 Symmetrised Tutte Laplacian

The symmetrised Tutte Laplacian (or symmetrised uniform Laplacian) is defined simply by

U = T^T T.    (48)

Properties
• It is symmetric.
• It has the zero row sum property.
• It has no negative weights.
• It is a second order Laplacian; its local expression contains nonzero weights for values in the second ring of neighbours.
• It does not require any special properties of the mesh.
• It does not contain explicit geometric information.

[Figure 6: labelled triangle diagrams with vertices i, j and opposite vertices k1, k2, and the angles used for the mean value operator.]

Figure 6: Labelling the vertices for operator definitions. The weights for the SQL operator are indexed as in a) and c) labels the angles used in defining Floater's mean value operator.

3.3.5 Symmetric Quasi Laplacian

This operator was introduced in [Zhang, 2004] as a symmetric alternative to the Tutte Laplacian. It can be expressed in matrix form as

Z = I - W.    (49)

The matrix of weights W is defined by

w_{ij} = d_i^{-1} + d_j^{-1} - \frac{1}{2}\left(d_{k_1}^{-1} + d_{k_2}^{-1}\right),

where k_1 and k_2 refer to the vertices that subtend the edge [i, j] (see figure 6a). The local form is then given directly from equation (49):

(Z f)_i = f_i - \sum_{j \in N(i)} w_{ij} f_j.    (50)


Properties
• It is symmetric.
• It has the zero row sum property.
• It has unit diagonal entries.
• It may have negative weights.
• It is well defined only on a manifold triangular mesh.
• It does not contain explicit geometric information.

Although the SQL is a purely combinatorial operator, it does require a manifold triangular mesh for its definition. Also, it may have negative weights, for example if the vertices at k_1 and k_2 have low valence. These properties resemble those of the discrete differential Laplacian operator that we will introduce next. That operator can also have negative weights if the angles at k_1 and k_2 are obtuse. The lower the valence of a vertex, the higher the likelihood that it will harbour an obtuse angle.

In [Zhang, 2004] the SQL operator was also extended to manifold meshes with boundaries. The weights were modified for boundary edges and the diagonal entries would not generally be one for boundary nodes. Indeed, the diagonal entries could even be negative at boundary nodes. However, the operator maintains its symmetry under these conditions.
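A sketch of the SQL weight for a single interior edge (plain Python; the valences below are made-up numbers chosen to show that the weight can indeed become negative when the opposite vertices have low valence):

    def sql_weight(d_i, d_j, d_k1, d_k2):
        """Weight w_ij of the symmetric quasi Laplacian for the edge [i, j];
        k1 and k2 are the two vertices opposite the edge (figure 6a)."""
        return 1.0 / d_i + 1.0 / d_j - 0.5 * (1.0 / d_k1 + 1.0 / d_k2)

    print(sql_weight(6, 6, 6, 6))   # 1/6: the regular-valence case
    print(sql_weight(8, 8, 3, 3))   # -1/12: negative for low-valence opposite vertices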

3.3.6 Discrete Differential

This discrete approximation to the differential Laplacian operator is given a detailed construction and description in [Meyer et al., 2003], where it is introduced as a mean curvature normal operator. We also present the same construction from a different perspective in section 5.1. In fact, all of section 5 is devoted to a study of this operator. In matrix form, the operator can be written as

L_M = A^{-1} Q,    (51)

where A is a diagonal matrix whose diagonal entries are the areas of influence of the corresponding vertices. Loosely speaking, the area of influence can be thought of as the area of the Voronoi cell of the vertex, although ...

3.3.7 Unweighted Differential

3.3.8 Fujiwara

3.3.9 Chord Length

3.3.10 Mean Value Operator

3.4 Non-negative Definite

Note: this should be merged with section 3.2.2 – well, probably rework the whole section.

The properties defining a general Laplacian (Prop 1 and Prop 2) probably need refining: we need to be assured that the operator is diagonalisable. Thus Prop 2 should probably be replaced with something like the following: there exists an inner product \langle \cdot, \cdot \rangle_B for which L is self-adjoint and non-negative definite.

The adjoint of an operator L is an operator L^* defined by

\langle f, L^* g \rangle_B = \langle L f, g \rangle_B.    (52)

There is a positive definite symmetric matrix B such that the inner product can be described by \langle f, g \rangle_B = f^T B g. Thus, in terms of matrices, we get from equation (52)

f^T B L^* g = f^T L^T B g \quad \forall f, g \in \mathbb{R}^N \quad \Rightarrow \quad B L^* = L^T B \quad \Rightarrow \quad L^* = B^{-1} L^T B.

The requirement that L be self-adjoint then implies that L = B^{-1} L^T B, or B L = L^T B = (B L)^T. In other words, B L = Q with Q symmetric. Thus any self-adjoint operator L must have the matrix form

L = B^{-1} Q    (53)

with B^{-1} a positive definite matrix and Q symmetric. Section 3.2.2 then shows that the eigenvectors of L are indeed orthogonal with respect to the inner product.

We say that L is non-negative definite with respect to \langle \cdot, \cdot \rangle_B if

\langle f, L f \rangle_B \ge 0 \quad \forall f \in \mathbb{R}^N.

Since

\langle f, L f \rangle_B = f^T B B^{-1} Q f = f^T Q f,

it follows that showing that L is non-negative definite with respect to \langle \cdot, \cdot \rangle_B is equivalent to showing that the matrix Q is non-negative definite with respect to the canonical inner product on R^N. Assume that Q is described by

[Q f]_i = \sum_j w_{ij} (f_i - f_j), \qquad w_{ij} = w_{ji}.    (54)

Then

f^T Q f = \sum_i f_i \sum_j w_{ij}(f_i - f_j) = \sum_{ij} w_{ij}(f_i^2 - f_i f_j) = \frac{1}{2}\sum_{ij} w_{ij}(f_i^2 - 2 f_i f_j + f_j^2) = \frac{1}{2}\sum_{ij} w_{ij}(f_i - f_j)^2.

If all the weights are positive we have

f^T Q f = \frac{1}{2}\sum_{ij} w_{ij}(f_i - f_j)^2 = \sum_{[i,j]\in E} w_{ij}(f_i - f_j)^2 \ge 0,    (55)

so Q is non-negative definite.
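A quick numerical illustration of identity (55) (Python/NumPy; the small symmetric weight matrix is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    W = rng.random((n, n))
    W = 0.5 * (W + W.T)                  # symmetric positive weights
    np.fill_diagonal(W, 0.0)
    Q = np.diag(W.sum(axis=1)) - W       # [Q f]_i = sum_j w_ij (f_i - f_j)
    f = rng.standard_normal(n)

    lhs = f @ Q @ f
    rhs = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
    assert np.isclose(lhs, rhs) and lhs >= 0.0   # equation (55)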

3.4.1 The cot operator is non-negative definite

This section belongs somewhere in section 5. Laplacians that are based on the cot operator (51) discussed in section 5 have a matrix Q with the form shown in equation (54); however, the weights w_{ij} = \frac{1}{2}(\cot\alpha_{ij} + \cot\beta_{ij}) are not guaranteed to be positive.

On a differentiable surface S, the Laplace-Beltrami operator enjoys the following important property:

\int_S f \, L f \, da = \int_S \|\nabla f\|^2 \, da.    (56)

We will show that the analogous property holds for triangulated surfaces when discrete functions on the surface are supposed to be piecewise linear. Thus we will find that for a triangular mesh M,

f^T Q f = \int_M \|\nabla f\|^2 \, da.    (57)

This means that for any operator L = B^{-1} Q that is based on the cot operator, we have

\langle f, L f \rangle_B = \int_M \|\nabla f\|^2 \, da.    (58)

Comparing equations (58) and (56) encourages us to interpret the scalar product \langle \cdot, \cdot \rangle_B as an approximation to the integral.

In order to demonstrate equation (57), we consider the right hand side and note that for a piecewise linear function f the gradient is constant on each face, so

\int_M \|\nabla f\|^2 \, da = \sum_{T \in F} \|\nabla f\|^2 a_T,

where a_T is the area of triangle T. Focusing on one triangle T with vertex indices i, j, k, the restriction f|_T is described in terms of the linear basis elements introduced in section 5.1: f = f_i \varphi_i + f_j \varphi_j + f_k \varphi_k. It was shown that

\nabla\varphi_j = \frac{(x_i - x_k)^\perp}{2 a_T},

with similar expressions for \nabla\varphi_i and \nabla\varphi_k. Also, it follows that

-\nabla\varphi_k \cdot \nabla\varphi_j = \frac{1}{2 a_T} \cot\angle x_i,

again with similar expressions for other permutations of the indices. Since the sum of the three linear basis functions is unity (constant) over T, their combined gradients vanish and we can write the gradient of any one in terms of the other two. In the following we exploit all three possible expressions for \nabla f:

\|\nabla f\|^2 = \nabla f \cdot \nabla f
  = [(f_j - f_i)\nabla\varphi_j + (f_k - f_i)\nabla\varphi_k] \cdot \nabla f
  = (f_j - f_i)\nabla\varphi_j \cdot [(f_i - f_j)\nabla\varphi_i + (f_k - f_j)\nabla\varphi_k] + (f_k - f_i)\nabla\varphi_k \cdot [(f_i - f_k)\nabla\varphi_i + (f_j - f_k)\nabla\varphi_j]
  = (f_i - f_j)^2 \frac{1}{2 a_T}\cot\angle x_k + (f_i - f_k)^2 \frac{1}{2 a_T}\cot\angle x_j + \frac{1}{2 a_T}\cot\angle x_i \,[(f_k - f_i)(f_k - f_j) + (f_j - f_i)(f_j - f_k)]
  = (f_i - f_j)^2 \frac{1}{2 a_T}\cot\angle x_k + (f_i - f_k)^2 \frac{1}{2 a_T}\cot\angle x_j + (f_j - f_k)^2 \frac{1}{2 a_T}\cot\angle x_i.

Thus, multiplying by the area a_T and summing over all triangles we get

\sum_{T \in F} \|\nabla f\|^2 a_T = \sum_{[i,j]\in E} \frac{1}{2}(\cot\alpha_{ij} + \cot\beta_{ij})(f_i - f_j)^2,

which is also the expression for f^T Q f as demonstrated in the derivation of equation (55). Since \|\nabla f\|^2 is never negative, it follows that Q is non-negative definite.
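The identity (57) can also be checked numerically. The sketch below (Python/NumPy) assembles Q from the cotangent weights on a small, hypothetical flat two-triangle mesh and compares f^T Q f with the summed gradient energy; the mesh and the function values are made up for illustration, and the triangles are listed counterclockwise so that the rotated-edge gradient formula applies directly.

    import numpy as np

    def cross2(u, v):
        return u[0] * v[1] - u[1] * v[0]

    def rot90(v):
        # rotate a 2D vector by pi/2 counterclockwise
        return np.array([-v[1], v[0]])

    # A hypothetical flat mesh: two counterclockwise triangles sharing the edge [0, 1].
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8], [0.7, -0.9]])
    faces = [(0, 1, 2), (1, 0, 3)]
    f = np.array([0.2, -1.0, 0.5, 1.3])     # a piecewise linear function on the mesh

    n = len(verts)
    Q = np.zeros((n, n))
    energy = 0.0
    for (i, j, k) in faces:
        xi, xj, xk = verts[i], verts[j], verts[k]
        area = 0.5 * abs(cross2(xj - xi, xk - xi))
        # each angle contributes (1/2) cot(angle) to the weight of the opposite edge
        for (a, b, c) in ((i, j, k), (j, k, i), (k, i, j)):
            u, v = verts[a] - verts[c], verts[b] - verts[c]
            w = 0.5 * np.dot(u, v) / abs(cross2(u, v))
            Q[a, a] += w; Q[b, b] += w; Q[a, b] -= w; Q[b, a] -= w
        # gradient of the linear interpolant of f on this face (constant per face)
        grad = ((f[j] - f[i]) * rot90(xi - xk) + (f[k] - f[i]) * rot90(xj - xi)) / (2 * area)
        energy += np.dot(grad, grad) * area

    assert np.isclose(f @ Q @ f, energy)     # equation (57)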

4 Examples of Spectral Processing in Computer Graphics

In this section we attempt to assemble the computer graphics related literature that exploits eigendecompositions to manipulate or analyse surface data. Note that the relevant mathematical machinery has been extensively studied in diverse branches of mathematics, including those concerned with partial differential equations, numerical analysis, differential geometry, graph theory, and of course operator theory and linear algebra. Here we confine our attention to works that use spectral techniques to manipulate or analyse geometric information in the context of computer graphics or image analysis.

The eigenvalue decomposition of a Laplacian operator was introduced to the graphics community by Taubin [Taubin, 1995] as a means of broaching signal processing questions related to geometric data. Taubin was tackling the problem of surface fairing. The spectral techniques were used as a tool for understanding the design of smoothing filters; there were no eigendecompositions explicitly in the final surface fairing algorithm. [[FIXME: ...smoothing, watermarking, compression]]

5 The Discrete Differential Laplacian

[[FIXME – this intro doesn't fit anymore]]

Many surfaces encountered in computer graphics can be thought of as piecewise linear approximations of a smooth surface. Even if the surface being modeled is not smooth, it is usually piecewise smooth and thus may be approximated to arbitrary precision by a surface that is differentiable. In section 2.6 we introduced the Laplacian operator as a discrete approximation to the differential operator on the underlying continuous domain. However, most of the previous work on spectral processing that we encountered in section 4 made use of Laplacian operators that were defined only on the connectivity graph of the mesh. There is no clear relationship between these operators and the differential operator on the underlying smooth surface; these operators do not contain explicit information on the geometry of the surface. We will see that this sometimes creates undesirable limitations on the resulting transforms. In this section we introduce a well known discrete approximation to the Laplace-Beltrami operator of differential geometry.

5.1 A Derivation of the Meyer Mean Curvature Operator as a Diffusion Operator

In the work of Meyer et al. [Meyer et al., 2003] a discrete mean curvature normal operator is derived and it is pointed out that this operator is the (discrete) Laplacian operator applied to the coordinate functions. In an earlier work, Turk [Turk, 1991] described a diffusion operator on a polygonal surface. However, the construction of the diffusion operator in [Turk, 1991] did not account for the variable sizes of the Voronoi cells or the non-orthogonality of the directions to the neighbouring cells. Here we show that if the construction outlined by [Turk, 1991] is supplemented by these considerations we obtain a different perspective on the construction described in [Meyer et al., 2003].

Suppose we have a mobile substance distributed over a smooth surface S. The concentration of the substance can be described by a function f : S × R^+ → R, which depends on position in S as well as time. The evolution of f over time will be described by the heat equation

\frac{\partial f}{\partial t} = -L_S f,    (59)

where L_S is the Laplacian operator on S. The minus sign in equation (59) is motivated by our desire for a diffusion operator that has non-negative eigenvalues; this is a tradition in differential geometry.

Now we wish to construct L_M, a discrete approximation to L_S that is defined on M, a triangulation of S. We will assume that f is piecewise linear and defined by its values at the vertices of M. A discretized version of equation (59) would then be

\frac{f^{n+1} - f^n}{\Delta t} = -L_M f^n,    (60)

where the superscripts refer to a timestep. In order to describe L_M, we partition M into cells such that each cell contains one and only one vertex.


Figure 7: The same triangulation is shown together with sketches of the corresponding Voronoi cells in red (a) and barycentric cells in blue (b). Edges that are subtended by an obtuse angle may not be bisected by the boundaries of the Voronoi cell, as seen in the region within the blue circle in (a). However, the barycentric cells cut each face edge in half by construction; the cells are defined by straight lines between the midpoints of the face edges and the barycentres of the corresponding faces. As an alternative to the barycentric cells, Meyer et al. [Meyer et al., 2003] propose an adjustment to the Voronoi cells in the vicinity of obtuse angles. The resulting cells partition the surface so that all face edges are bisected.

The Voronoi cells of the vertices would provide a possible partitioning; however, for our purposes we want the cell boundaries to cross each incident mesh edge at its midpoint. The Voronoi cells fail to meet this criterion in the presence of obtuse angles. This problem can be avoided by using cells which have a piecewise linear boundary that passes through the midpoint of each incident edge and the barycentre of each incident triangle. Such cells will be referred to as barycentric cells. Figure 7 shows Voronoi and barycentric cells. Note that since they cut each triangular face into three pieces of equal size, the barycentric cells have an area equal to 1/3 of the area of the polygon defined by the one ring neighbours of x_i. An alternative to barycentric cells is to alter the Voronoi cells in the vicinity of obtuse angles, as described in [Meyer et al., 2003]. We will see that the only assumption on the shape of the cell that is used in the derivation of the diffusion operator is that the boundary bisects the edges incident to the vertex contained in the cell.
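Since each triangle contributes a third of its area to each of its vertices, the barycentric cell areas can be accumulated without constructing the cell boundaries at all. A sketch (Python/NumPy, for vertices given as points in R^3; the function name is ours):

    import numpy as np

    def barycentric_cell_areas(verts, faces):
        """|Omega_i| for the barycentric cells: every triangle gives one third
        of its area to each of its three vertices."""
        areas = np.zeros(len(verts))
        for (i, j, k) in faces:
            a = 0.5 * np.linalg.norm(np.cross(verts[j] - verts[i], verts[k] - verts[i]))
            areas[[i, j, k]] += a / 3.0
        return areas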

From equation (60), we see that for a given cell Ω_i, the diffusion operator will describe the amount of representative substance density that diffuses out of the cell in a unit time interval. We say "representative" substance density because the discrete version of f has explicit values only at the vertices of the mesh. If f_i^n is the value of f at the i-th vertex, located at x_i ∈ Ω_i, and Δf_i is the amount of substance density that enters Ω_i during the interval Δt = t^{n+1} − t^n, then

f_i^{n+1} - f_i^n = \frac{\Delta f_i}{|\Omega_i|},    (61)

where |Ω_i| denotes the area of Ω_i. In other words, we assume that the total amount of substance density that enters the cell is distributed uniformly over all points in the cell.

The negation of the gradient of f describes the diffusion flow. The substance density transferred into the cell in a unit time interval is given by the negative of the integral of the component of (−∇f) along the outward normal to the boundary, so we have

\frac{\Delta f_i}{\Delta t} = \int_{\partial\Omega_i} \nabla f \cdot \hat n \, ds.    (62)

In order to evaluate the right side of equation (62) we examine the triangle T = [x_i, x_j, x_k] between the i-th vertex and two of its neighbours and consider the integral

\int_{\partial\Omega_i \cap T} \nabla f \cdot \hat n \, ds.    (63)

[Figure 8: Defining some notation. (a) Because the integral is path independent, we choose a straight line between the endpoints a and b; this line is parallel to [x_j, x_k] and half its length. (b) Definition of the angles α_ij and β_ij.]

A standard theorem from vector calculus states that if \hat t is the tangent to a curve C between points a and b, then

\int_C \nabla f \cdot \hat t \, ds = f(b) - f(a).

Thus the integral depends only on the endpoints of C and not on the curve itself. Now since f is linear on T, its gradient is constant and we can find another linear function g whose gradient is equal to a counterclockwise rotation of ∇f by π/2. Then

\int_{\partial\Omega_i \cap T} \nabla f \cdot \hat n \, ds = \int_{\partial\Omega_i \cap T} \nabla g \cdot \hat t \, ds,

and we see that the integral in equation (63) is also path independent. Thus we may choose a straight line path between the endpoints of ∂Ω_i ∩ T as indicated in figure 8a. The outward normal to this line is given by

\hat n = \frac{(x_j - x_k)^\perp}{\|x_j - x_k\|},

where v^⊥ is v rotated by π/2 counterclockwise. The integral (63) now becomes

\int_{\partial\Omega_i \cap T} \nabla f \cdot \hat n \, ds = (\nabla f \cdot \hat n)\,\|b - a\| = \frac12 (\nabla f \cdot \hat n)\,\|x_j - x_k\| = \frac12 \nabla f \cdot (x_j - x_k)^\perp.    (64)

To compute the gradient of f we make use of the barycentric coordinate functions φi, φj, φk. These are linear functions defined by their values at the vertices of T:
$$\phi_i(x_m) = \begin{cases} 1 & m = i, \\ 0 & m \neq i. \end{cases}$$
Thus
$$\phi_i + \phi_j + \phi_k = 1 \qquad (65)$$
on T and we have
$$f = f_i\phi_i + f_j\phi_j + f_k\phi_k. \qquad (66)$$
From (65) we have
$$\nabla\phi_i = -(\nabla\phi_j + \nabla\phi_k)$$
so taking the gradient of equation (66) yields
$$\nabla f = (f_j - f_i)\nabla\phi_j + (f_k - f_i)\nabla\phi_k. \qquad (67)$$
Note that
$$\nabla\phi_j = \alpha\,\frac{(x_i - x_k)^\perp}{\|x_i - x_k\|}$$
since φj vanishes on the edge [xk, xi]. The scalar α is the slope of the plane that is the graph of φj. Since φj(xj) = 1, we have $\alpha = h^{-1}$ where h is the height of T on the base [xk, xi] (see diagram). Eliminating h in favour of $a_T = \tfrac{1}{2}\|x_i - x_k\|\, h$, the area of T, we have
$$\nabla\phi_j = \frac{(x_i - x_k)^\perp}{2a_T}$$
and similarly
$$\nabla\phi_k = \frac{(x_j - x_i)^\perp}{2a_T}.$$

Using these expressions in equation (67), and substituting into equation (64), we get
$$\frac{1}{2}\,\nabla f \cdot (x_j - x_k)^\perp = \frac{1}{2}\left[(f_j - f_i)\frac{(x_i - x_k)^\perp}{2a_T} + (f_k - f_i)\frac{(x_j - x_i)^\perp}{2a_T}\right]\cdot(x_j - x_k)^\perp$$
$$= \frac{1}{2}\left[(f_j - f_i)\frac{(x_i - x_k)^\perp\cdot(x_j - x_k)^\perp}{\|(x_i - x_k)\times(x_j - x_k)\|} + (f_k - f_i)\frac{(x_j - x_i)^\perp\cdot(x_j - x_k)^\perp}{\|(x_k - x_j)\times(x_i - x_j)\|}\right]$$
$$= \frac{1}{2}\left[\cot\angle x_k\,(f_j - f_i) + \cot\angle x_j\,(f_k - f_i)\right].$$
The integral in equation (62) over the entire boundary of Ωi is given by the sum of the contributions from each of the triangles incident to xi. After rearranging the terms, equation (62) becomes
$$\frac{\Delta f_i}{\Delta t} = -\frac{1}{2}\sum_{j\in N(i)}(\cot\alpha_{ij} + \cot\beta_{ij})(f_i - f_j),$$
where N(i) is the set of one-ring neighbours of the ith vertex and, following Meyer et al. [Meyer et al., 2003], αij and βij denote the angles subtending the edge [xi, xj] as shown in figure 8b. Finally, when we use equation (61) to account for the area of the cell, we can write the fully discretized form of equation (60):
$$\frac{f_i^{n+1} - f_i^n}{\Delta t} = -\frac{1}{2|\Omega_i|}\sum_{j\in N(i)}(\cot\alpha_{ij} + \cot\beta_{ij})(f_i - f_j) \qquad (68)$$
$$= -\left[L_M(f^n)\right]_i.$$
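The operator applied to f on the right-hand side of (68) lends itself to a direct matrix implementation. The following is an illustrative sketch only, not the authors' code: it assumes an indexed triangle mesh (`verts`, `faces`) and a vector `cell_areas` of the areas |Ωi| (for instance the barycentric cell areas sketched earlier), and assembles the dense matrix whose ith row realises the sum in (68).

```python
import numpy as np

def cotan_laplacian(verts, faces, cell_areas):
    """Assemble L with [L f]_i = (1 / 2|Omega_i|) * sum_j (cot a_ij + cot b_ij)(f_i - f_j)."""
    n = len(verts)
    W = np.zeros((n, n))  # symmetric cotangent weights; on a closed mesh each
                          # interior edge receives two contributions (cot a + cot b)
    for face in faces:
        for l in range(3):
            i, j, k = face[l], face[(l + 1) % 3], face[(l + 2) % 3]
            # Cotangent of the angle at vertex k, which subtends the edge [x_i, x_j].
            u, v = verts[i] - verts[k], verts[j] - verts[k]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            W[i, j] += cot
            W[j, i] += cot
    L = np.diag(W.sum(axis=1)) - W          # zero row sums; off-diagonals are -(cot a + cot b)
    return L / (2.0 * cell_areas[:, None])  # scale row i by 1 / (2 |Omega_i|)
```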

5.1.1 The Divergence of the Gradient

The derivation presented above is the same as that of [Meyer et al., 2003]; only the perspective has changed and some details have been added. Whereas [Meyer et al., 2003] started from the area-minimizing property of mean curvature flow, we used a physical argument based on diffusion. Here we observe that the same derivation also results from defining the operator in terms of spatial averages of the traditional mathematical definition of the Laplacian on a surface. The Laplacian operator on a smooth surface S is usually defined as the negative of the divergence of the gradient: $L_S = -\operatorname{div}\circ\nabla$ (see [Rosenberg, 1997], for example, for the definitions of the divergence and the gradient on a surface). The theorems of vector calculus carry over nicely to smooth surfaces, so if we define the Laplacian of f at xi to be the average of div ◦∇f over Ωi,
$$[L_M(f)]_i = -\frac{1}{|\Omega_i|}\int_{\Omega_i} \operatorname{div}\circ\nabla f\, dv,$$

we can apply the divergence theorem to get
$$[L_M(f)]_i = -\frac{1}{|\Omega_i|}\int_{\partial\Omega_i}\nabla f \cdot \hat{n}\, ds.$$

We recognize again the same integral as in equation (62) and we arrive at equation (68) by evaluating this integral as we did before.

5.2 Orthogonal Eigenfunctions of the Discrete Laplacian

On a compact, connected, oriented Riemannian manifold, the Laplacian operator admits a complete orthonormal basis of eigenfunctions. This is referred to as the Hodge Theorem for Functions in [Rosenberg, 1997]. This means that if M is such a manifold and L is the Laplacian operator on M, then there exists a set of functions {φi} on M such that
$$\langle\phi_i, \phi_j\rangle = \int_M \phi_i\phi_j\, dv = \delta_{ij} \qquad (69)$$

and for any f : M → R, we may write
$$f(x) = \sum_{i=0}^{\infty} c_i\,\phi_i(x)$$
where ci = ⟨φi, f⟩. Our goal here is to show how these properties carry over to the discrete case. In the discrete setting we have a compact piecewise linear (triangulated) surface S that is an approximation to some smooth surface M. We will consider the discrete approximation to the Laplacian operator as defined in [Meyer et al., 2003]. Functions on S are defined by their values on the vertices of the triangulation. Thus if S consists of n vertices, a function f on S may be represented by an n-vector. The action of the discrete Laplacian, L, on f is defined at each vertex by
$$[L(f)]_i = \frac{1}{2a_i}\sum_{j\in N(i)}(\cot\alpha_{ij} + \cot\beta_{ij})(f_i - f_j), \qquad (70)$$

where ai is the “area of influence” of vertex i (usually the Voronoi cell of that vertex), N(i) is the set of vertices that are neighbours of vertex i, and αij and βij are the angles that subtend the edge {i, j}. The operator L is represented by an n × n matrix. We note that the presence of the factor $\frac{1}{a_i}$ means that the matrix is not symmetric in general. However, we may write
$$L = DQ, \qquad (71)$$
where D is a diagonal matrix,
$$[D]_{ii} = \frac{1}{a_i},$$
and Q is symmetric:
$$[Q]_{ij} = -\tfrac{1}{2}(\cot\alpha_{ij} + \cot\beta_{ij}) = -\tfrac{1}{2}(\cot\beta_{ji} + \cot\alpha_{ji}) = [Q]_{ji}, \quad i \neq j.$$

Equation (71) implies that L is similar to the symmetric matrix $O = D^{1/2} Q D^{1/2}$. Therefore, if ξ is an eigenvector of O with eigenvalue λ, then $\phi = D^{1/2}\xi$ is an eigenvector of L with the same eigenvalue. Indeed,
$$L(D^{1/2}\xi) = D^{1/2} O D^{-1/2}(D^{1/2}\xi) = D^{1/2} O \xi = \lambda(D^{1/2}\xi).$$
In the continuous case, the scalar product was given by integration, as in equation (32). In the discrete setting we approximate integration by a summation, so
$$\int_M f\, dv$$
becomes
$$\sum_{i=0}^{n-1} f_i a_i, \qquad (72)$$

where ai is the area of influence of vertex i, as in (70). This approximation to the integral induces a metric on our vector space of functions on S, so that
$$\langle f, g\rangle = \sum_{i=0}^{n-1} f_i g_i a_i = f^T A\, g, \qquad (73)$$
where $A = D^{-1}$ is the diagonal matrix whose diagonal entries are the area elements of the corresponding vertices. We verify immediately that if $\{\xi_i\}_{0\le i\le n-1}$ is an orthonormal basis of eigenvectors of O with respect to the usual Euclidean metric, then the $\phi_i = D^{1/2}\xi_i$ form a basis of eigenvectors of L that is orthonormal with respect to the metric introduced in (73):
$$\langle\phi_i, \phi_j\rangle_A = \xi_i^T D^{1/2} A D^{1/2}\xi_j = \xi_i^T\xi_j = \delta_{ij}.$$

5.2.1 Some Observations

By projecting a function on S onto the basis of eigenvectors of L, we obtain the analogue of the discrete Fourier transform. The coefficients are obtained using the metric defined in equation (73). Thus a function f on S may be represented as
$$f = \sum_{i=0}^{n-1} c_i\,\phi_i$$
where $c_i = \phi_i^T A f$. Using this metric means that the influence of each vertex is weighted by the size of the cell surrounding that vertex. Such a weighting accounts for nonuniformity in the sampling density. For example, the zero row sum property of the discrete Laplacian means that the first eigenvector, the one corresponding to the unique zero eigenvalue, has constant coefficients. It is a discrete constant function. The projection onto this eigenvector corresponds to the DC component in the traditional Fourier transform.

Since φ0 is normalized with respect to the metric (73), we have
$$\phi_0 = \frac{1}{\sqrt{A_S}}(1, 1, \cdots, 1)^T,$$
where $A_S = \operatorname{Tr} A$ is the area of S. Therefore
$$c_0 = \phi_0^T A f = \frac{1}{\sqrt{A_S}}\sum_{i=0}^{n-1} f_i a_i$$
and so
$$c_0\phi_0 = \left(\frac{1}{A_S}\sum_{i=0}^{n-1} f_i a_i\right)(1, 1, \cdots, 1)^T.$$
Each component takes the value of the area-weighted average of f. We can contrast this with the graph Laplacian operators, which also have a constant eigenvector corresponding to the zero eigenvalue. In this case, however, the coefficients of $c_0\phi_0$ will be an unweighted average of the values of f at each vertex. If the sampling is nonuniform, this is probably not what we want. For a geometric example we can consider the three coordinate functions representing the positions of the vertices. In this case the $c_i$ will be three-dimensional vectors. For the geometrically defined Laplacian, the coefficients of $c_0\phi_0$ will be the centre of mass of S, whereas for the graph Laplacians it will be the centroid of the vertices. If we consider, for example, a sphere that is densely sampled on the left hemisphere and sparsely sampled on the right, the coefficients of $c_0\phi_0$ will correspond roughly to the centre of the sphere if we are using the Laplacian of [Meyer et al., 2003], whereas it will correspond to a point somewhere in the left side of the volume enclosed by the sphere if we are using a graph Laplacian operator. The position of the latter point is an artifact of our sampling and is not geometrically meaningful.

5.2.2 Computing the Spectral Transform of the Discrete Differential Laplacian Operator

We have seen that L = DQ is similar to the symmetric operator $O = D^{1/2} Q D^{1/2}$ and that the orthonormal eigenvectors {ξi} of O give rise to a set $\{D^{1/2}\xi_i\}$ of eigenvectors of L with the same eigenvalues. Further, the $\{D^{1/2}\xi_i\}$ are orthonormal with respect to the metric induced by the discrete integral. We exploit these observations to avoid doing an eigendecomposition directly on the nonsymmetric matrix. Thus we perform an eigendecomposition of O and we retain the diagonal matrix $D^{1/2}$. When performing the spectral transform of a function f, the ith coefficient is then computed using ξi as
$$\hat{f}_i = \langle D^{1/2}\xi_i, f\rangle_A = \xi_i^T D^{1/2} A f = \xi_i^T D^{-1/2} f.$$
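A sketch of this procedure follows (illustrative only; `spectral_transform`, `Q` and `a` are assumed names): eigendecompose $O = D^{1/2} Q D^{1/2}$ once, apply the coefficient formula above, and recover the function via $f = \sum_i \hat{f}_i\, D^{1/2}\xi_i$.

```python
import numpy as np

def spectral_transform(f, Q, a):
    """Forward and inverse spectral transform for L = DQ, with D = diag(1/a_i)."""
    d_half = 1.0 / np.sqrt(a)                     # diagonal of D^{1/2}
    O = (d_half[:, None] * Q) * d_half[None, :]   # O = D^{1/2} Q D^{1/2}, symmetric
    eigvals, Xi = np.linalg.eigh(O)               # eigenvalues in ascending order
    f_hat = Xi.T @ (np.sqrt(a) * f)               # f_hat_i = xi_i^T D^{-1/2} f
    f_rec = d_half * (Xi @ f_hat)                 # f = sum_i f_hat_i D^{1/2} xi_i
    return eigvals, f_hat, f_rec
```

Because the eigendecomposition is performed on the symmetric matrix O, standard symmetric solvers can be used, and the recovered `f_rec` should agree with `f` up to numerical precision when all coefficients are retained.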

6 Discussion

This section is kept more informal to allow us to discuss the questions that face us now, knowing full well that only in hindsight will we be able to distinguish those that are embarrassingly naïve from those that are truly deep.

We have attempted to present spectral transforms on a mesh M as discretisations of the transform induced by the eigenvectors of the differential Laplacian operator on the underlying smooth manifold. This perspective naturally leads us to consider operators such as the one described in section 5, which are explicitly derived as discrete differential operators. Such operators will invariably contain geometric information in their definition. However, much of the work that has exploited spectral methods for digital geometry processing has been based on operators that are determined completely by the graph associated with M. What is to be gained from the differential geometry approach and what are the disadvantages?

One notable application in which a spectral transform is used explicitly is spectral compression of mesh geometry [Karni and Gotsman, 2000]. The idea is to compute the spectral transformation of each of the coordinate functions and discard the high-index coefficients, which are expected to be small for smooth objects. The remaining coefficients, together with the underlying graph of the mesh (which may be losslessly compressed by a different algorithm), constitute the compressed representation of the mesh. Reconstruction of the mesh requires computing the eigenfunctions of the Laplacian on the graph and then expanding the coordinate functions via the inverse spectral transform (29). In this algorithm the decoder relies fundamentally on its ability to reconstruct the eigenfunctions based on the graph connectivity information alone. Thus a geometrically defined operator cannot be used for this form of geometry compression.

In fact, a major impediment to the practical use of any spectral transform is that it requires an eigendecomposition of a square matrix with as many columns as there are nodes in the mesh. Although these matrices are generally sparse and highly structured, there are currently no algorithms that can compute these eigenfunctions in a reasonable time for the large meshes that are commonly encountered today.

Our interest in the discrete differential Laplacian operators stems from the hope that we may be able to quantify the relationship between the spectral transforms that they define and the canonical transform (33) of the underlying smooth surface. Specifically, the hope is that we will be able to decimate a mesh to a size where the spectral transform is computationally feasible and obtain spectral coefficients which represent meaningful information about the surface described by the original mesh. We are in the process of conducting experiments to test this hypothesis. As a potential application we hope to develop a means of characterising the shape of a surface by its spectral coefficients in much the same way as Fourier descriptors [Zahn and Roskies, 1972] are used to characterise planar curves. This would permit the construction of mesh databases that could be effectively queried on the basis of the represented shapes.

The remainder of this section is devoted to a discussion of some of the questions and issues that have arisen from our investigations.

It would be interesting to see if the combinatorial (graph based) Laplacians can be given a geometric interpretation that would give us insight into the relationship between these operators and the discrete differential operators. Specifically, if LM is a discrete differential operator on M = (G, X) and KG is the Kirchoff operator on the same mesh, can we find another mesh M′ = (G, X′) with the same underlying graph and such that KG = LM′? This would allow us to interpret the Kirchoff basis functions as being those of LM′ mapped onto M. In other words, the common graph on M and M′ induces a (non-isometric) one-to-one mapping Ψ : M → M′, and if ek is an eigenfunction of LM′, then ek ◦ Ψ is an eigenfunction of KG on M. The point is that the eigenfunctions of KG are functions G → R, and using them to define functions on the mesh necessarily introduces some distortion, except perhaps for a particular mesh M′.

The idea of a mesh M′ that somehow expresses the geometry of the graph G brings to mind the connectivity shapes of [Isenburg et al., 2001]. In that paper it is shown that an embedding of the graph in R3 which attempts to keep all edge lengths the same and maximise the volume often results in a mesh that has considerable resemblance to the original. The reason for this is that the meshes are usually produced by algorithms or modeling packages that apply a more or less uniform sampling density. Note that an embedding of the graph in R3 which strictly enforces uniform edge lengths is not possible in general. The algorithm of [Isenburg et al., 2001] produces a compromise in this case. To see that such an embedding may not be possible, consider a torus T2 ⊂ R3 with a regular triangular mesh (all vertices have valence six). If all of the triangle edges were to have equal length, they would be equilateral triangles and the object would have zero curvature everywhere. This is the flat torus, and it is known that it cannot be embedded in R3 (see [Singer and Thorpe, 1967] for example). In any event, the search for the mesh M′ is unlikely to have any practical value. Rather, the exercise serves as an interesting thought experiment that indicates one possible way to interpret the combinatorial Laplacian operators in the differential geometry perspective.

Another point of interest is the uniqueness of the spectrum of LS with respect to the surface S. It is known that there are nonisometric and even nonhomeomorphic manifolds that have the same spectrum, but the examples all seem to involve manifolds with dimension higher than two. For smooth surfaces in R3, is the spectrum of the Laplacian unique to the surface? If so, then perhaps the eigenvalues themselves could be effective shape descriptors.

When we plot spectral coefficients we generally have the coefficient number on the horizontal axis and the coefficient value on the vertical. If we were instead to plot the coefficient value versus the corresponding eigenvalue, the graph would look a bit different, since the spacing between the data points would no longer be uniform on the horizontal axis. Is there any advantage to the latter interpretation? How do we interpret the spacing between the eigenvalues? This becomes an important issue when eigenvalues become close together. As described in section 2.6.1, we want to gather together coefficients belonging to the same eigenspace by means of equation (30). Thus in an implementation based on discrete numerical approximation, it becomes crucial that we are able to decide in a consistent manner when two eigenvalues are in fact the same. We will revisit this problem in section 6.2.
Aside from this last question, the issues that we have raised so far are almost peripheral to the main focus of our current research. We now turn to the principal problems with which we are currently grappling.

6.1 Nonuniform Sampling

6.2 Multiplicities of the Eigenvalues and Symmetries of the Mesh

We represent a function on a mesh M as a vector of values at the vertices. The order in which we choose to list the vertices is arbitrary; however, the spectral decomposition of a linear operator is not affected by this choice. Indeed, for any permutation of the n indices there is an associated permutation matrix P which is obtained by performing the same permutation on the columns of the n × n identity matrix. Note that P is an orthogonal matrix. If f is a function represented with the original ordering of the vertices, then $\tilde{f} = P^T f$ will be its representation in the new ordering. A linear operator L is represented by a matrix and it transforms as
$$\tilde{L} = P^T L P.$$
Now if Lξ = λξ, then
$$\tilde{L}\tilde{\xi} = (P^T L P)(P^T\xi) = P^T L\xi = P^T\lambda\xi = \lambda\tilde{\xi}. \qquad (74)$$

We see that the transformed eigenvector becomes an eigenvector of the transformed operator and the eigenvalue remains the same. Equation (74) has interesting implications with respect to symmetries of M. If P leaves L invariant, $P^T L P = L$, we say that P represents a symmetry of M with respect to L. In this case equation (74) implies that ξ and $\tilde{\xi}$ are both eigenvectors of L with the same eigenvalue. If the corresponding eigenvalue has multiplicity one, then $P^T\xi = \pm\xi$, since the eigenspace is one-dimensional and P preserves norms. Thus, up to sign, ξ captures the symmetry expressed by P. When we create a spherical mesh by repeatedly subdividing a regular polyhedron, it will have a large symmetry group associated with it. Every eigenvector associated with an eigenvalue of multiplicity one will be invariant under the entire symmetry group. [PICTURES] show examples of such eigenvectors.
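The invariance of the spectrum under relabelling is easy to check numerically. The sketch below is hypothetical (not from the text): it conjugates an arbitrary operator matrix by a random permutation and compares the sorted eigenvalues.

```python
import numpy as np

def relabel_check(L, seed=0):
    """Verify that relabelling the vertices leaves the eigenvalues of L unchanged."""
    n = L.shape[0]
    perm = np.random.default_rng(seed).permutation(n)
    P = np.eye(n)[:, perm]                 # permutation matrix: columns of I permuted
    L_tilde = P.T @ L @ P                  # the operator in the new vertex ordering
    ev = np.sort(np.linalg.eigvals(L).real)         # eigenvalues are real, since L
    ev_tilde = np.sort(np.linalg.eigvals(L_tilde).real)  # is similar to a symmetric matrix
    return np.allclose(ev, ev_tilde)
```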

References

[Bagchi and Mitra, 1999] Bagchi, S. and Mitra, S. K. (1999). The Nonuniform Discrete Fourier Transform and its Applications in Signal Processing. Kluwer Academic Publishers.

[Clark, 1982] Clark, C. W. (1982). Elementary Mathematical Analysis. Wadsworth, second edition.

[Courant and Hilbert, 1989] Courant, R. and Hilbert, D. (1989). Methods of Mathematical Physics, volume 1. Wiley Interscience, New York, first English edition. German edition copyright 1937.

[Craioveanu et al., 2001] Craioveanu, M., Puta, M., and Rassias, T. M. (2001). Old and New Aspects in Spectral Geometry. Kluwer Academic Publishers.

[Desbrun et al., 1999] Desbrun, M., Meyer, M., Schröder, P., and Barr, A. H. (1999). Implicit fairing of irregular meshes using diffusion and curvature flow. In ACM SIGGRAPH, pages 317–324.

[Edwards, 1976] Edwards, R. E. (1976). Fourier Series I, volume 64 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition.

[Isenburg et al., 2001] Isenburg, M., Gumhold, S., and Gotsman, C. (2001). Connectivity shapes. In IEEE Visualization.

[Karni and Gotsman, 2000] Karni, Z. and Gotsman, C. (2000). Spectral compression of mesh geometry. In ACM SIGGRAPH, pages 279–286.

[Meyer et al., 2003] Meyer, M., Desbrun, M., Schröder, P., and Barr, A. H. (2003). Discrete differential-geometry operators for triangulated 2-manifolds. In Hege, H.-C. and Polthier, K., editors, Visualization and Mathematics III, pages 35–57. Springer-Verlag, Heidelberg.

[Rosenberg, 1997] Rosenberg, S. (1997). The Laplacian on a Riemannian Manifold. Cambridge University Press.

[Singer and Thorpe, 1967] Singer, I. M. and Thorpe, J. A. (1967). Lecture Notes on Elementary Topology and Geometry. Springer-Verlag.

[Stark, 2002] Stark, P. (2002). Fourier volume rendering of irregular data sets. Master's thesis, Simon Fraser University, Burnaby, BC, Canada.

[Tasdizen et al., 2002] Tasdizen, T., Whitaker, R., Burchard, P., and Osher, S. (2002). Geometric surface smoothing via anisotropic diffusion of normals. In IEEE Visualization.

[Taubin, 1995] Taubin, G. (1995). A signal processing approach to fair surface design. In ACM SIGGRAPH, pages 351–358.

[Turk, 1991] Turk, G. (1991). Generating textures on arbitrary surfaces using reaction-diffusion. In ACM SIGGRAPH, pages 289–298.

[Zahn and Roskies, 1972] Zahn, C. T. and Roskies, R. Z. (1972). Fourier descriptors for plane closed curves. IEEE Transactions on Computers, C-21(3):269–281.

[Zhang, 2004] Zhang, H. (2004). Discrete combinatorial Laplacian operators for digital geometry processing. In Proceedings of SIAM Conference on Geometric Design and Computing. Nashboro Press. To appear.
