Visual Comput (2007) 23: 381–395 DOI 10.1007/s00371-006-0094-3

Waqar Saleem · Oliver Schall · Giuseppe Patanè · Alexander Belyaev · Hans-Peter Seidel

Published online: 8 February 2007 © Springer-Verlag 2007

W. Saleem (✉) · O. Schall · A. Belyaev · H.-P. Seidel, Max-Planck-Institut Informatik (MPII), Saarbrücken, Germany, {wsaleem, schall, belyaev, hpseidel}@mpi-inf.mpg.de; G. Patanè, IMATI-GE CNR, Genova, Italy, [email protected]

ORIGINAL ARTICLE

On stochastic methods for surface reconstruction

Abstract In this article, we present and discuss three statistical methods for surface reconstruction. A typical input to a surface reconstruction technique consists of a large set of points that has been sampled from a smooth surface and contains uncertain data in the form of noise and outliers. We first present a method that filters out uncertain and redundant information, yielding a more accurate and economical surface representation. Then we present two methods, each of which converts the input point data to a standard shape representation; the first produces an implicit representation while the second yields a triangle mesh.

Keywords Surface reconstruction · Point cloud denoising · Sparse implicits · Statistical learning

1 Introduction

Stochastic methods and concepts are increasingly being found to model natural phenomena better than the hitherto used strictly logical methods, and a "sea change in our perspective" is envisioned when stochastic methods eventually overshadow traditional methods in use and application [45]. Parallels between statistical learning [27] and the workings of the human brain lead mathematicians to believe that such methods could one day help us understand the nature of intelligence itself [55]. The promise and efficacy of these methods is now also being exploited by the Geometric Modeling community for applications including 3D sculpting [75], shape matching [42, 66], best view computation [72], reconstruction of missing data [9], surface fitting [74] and point likelihood estimation [53].

In this paper, we present three methods that apply statistical ideas in the surface reconstruction domain. Although such methods are usually slower than their traditional, non-stochastic counterparts, their superior handling of noisy, incomplete and uncertain data makes them especially attractive for surface reconstruction. Our first method, presented in Sect. 2, is a kernel-based approach to scattered data denoising which yields a more accurate and economical representation of the sampled surface. Next, we present methods to reconstruct a surface from the point set P in two standard shape representations, implicit and triangle mesh. In Sect. 3, we discuss a general framework for the "optimal" selection of the set of centers for implicit basis functions to interpolate P. The method presented in Sect. 4 trains a neural network to learn the surface represented by P as a triangle mesh.

2 Probabilistic point cloud denoising

Point datasets routinely generated by optical and photometric range finders usually contain a small fraction of points with a large error (outliers) and are corrupted by noise. In order to remove these deficiencies from scanned point clouds, a large variety of denoising approaches based on low-pass filtering [39], MLS fitting [1, 20, 41] and partial differential equations (PDEs) [37] has been proposed. These works are motivated by many applications in modeling and rendering [1, 11, 52, 54, 57, 79] which rely on clean data and have become increasingly popular because of the continuous improvement of graphics hardware and technologies for the acquisition of point geometry. While the mentioned denoising approaches remove small-amplitude noise well, they remain sensitive to outliers.

In this paper, we develop a technique based on non-parametric kernel density estimation [50, 56] to robustly filter a noisy point set which is scattered over a surface and contains outliers. Given 3D scattered points P = {p_1, . . . , p_N}, we want to estimate an unknown density function f(x) of the data. A simple density estimate f̂(x) of f(x) is, for example, given by

\hat{f}(x) = \frac{1}{N h^3} \sum_{i=1}^{N} \Phi\!\left(\frac{x - p_i}{h}\right).

The smoothing parameter h is called the kernel size and Φ is the kernel function, which is usually chosen to be Gaussian. Figure 1 illustrates the kernel-based density estimation approach. Local maxima of the density estimate f̂(x) naturally define centers of clusters in the scattered data P. The main idea behind our filtering approach consists of defining an appropriate density estimate f̂(x) to determine those cluster centers which deliver an accurate and smooth approximation of the sampled surface. To detect the local maxima of the constructed density estimate f̂, the mean shift technique [15, 17, 23] is used.
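To make the estimator concrete, the following minimal sketch (our illustration, not code from the paper) evaluates f̂ at a query point with an isotropic Gaussian kernel; the toy data set and the kernel size h = 0.1 are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(u):
    """Isotropic trivariate Gaussian kernel Phi(u), u of shape (..., 3)."""
    return np.exp(-0.5 * np.sum(u * u, axis=-1)) / (2.0 * np.pi) ** 1.5

def density_estimate(x, points, h):
    """Kernel density estimate f_hat(x) = 1/(N h^3) * sum_i Phi((x - p_i) / h)."""
    u = (x[None, :] - points) / h          # shape (N, 3)
    return gaussian_kernel(u).sum() / (len(points) * h ** 3)

# toy usage: samples scattered around the unit sphere plus a few outliers
rng = np.random.default_rng(0)
surface = rng.normal(size=(1000, 3))
surface /= np.linalg.norm(surface, axis=1, keepdims=True)
outliers = rng.uniform(-2.0, 2.0, size=(20, 3))
P = np.vstack([surface + 0.02 * rng.normal(size=surface.shape), outliers])

print(density_estimate(np.array([1.0, 0.0, 0.0]), P, h=0.1))  # near the surface: high
print(density_estimate(np.array([0.0, 0.0, 0.0]), P, h=0.1))  # far from it: low
```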

Clusters corresponding to outliers are then easily detected and can be removed using a simple thresholding scheme. Recently, robust statistics and statistical learning techniques have gained popularity in Computer Graphics and have been successfully applied to other applications such as data analysis [53] and surface reconstruction [21, 30, 63, 65]. We demonstrate that our robust filtering method [60] works nicely on different types of scanned data containing outliers and in combination with different surface reconstruction techniques such as Power Crust [2] and Tight Cocone [19].

2.1 Kernel definition, convergence and adaptivity

In this section, we first address the problem of choosing appropriate kernel functions. Our goal is for the resulting density estimate to have local maxima close to the sampled smooth surface. In other words, the density estimate can be interpreted as a likelihood function which reflects the probability that a position in 3D space is located on the sampled smooth surface. In order to define a likelihood function L, we accumulate local likelihood functions L_i defined for every sample point p_i ∈ P. We measure the likelihood L_i(x) for a certain position x considering the squared distance of x to the least-squares plane fitted to a spatial neighborhood of p_i. More specifically, we determine the fitting plane by computing the weighted covariance matrix

C_i = \sum_{j=1}^{N} (p_j - c_i)(p_j - c_i)^T \, \chi\!\left(\frac{\|p_j - p_i\|}{h}\right),   (1)

where h is the kernel size, χ is a monotonically decreasing weight function and c_i is the weighted average of all samples inside the kernel. Since C_i is symmetric and positive semi-definite, its eigenvalues λ_i^l, l = 1, 2, 3, are real-valued and non-negative: 0 ≤ λ_i^3 ≤ λ_i^2 ≤ λ_i^1. Furthermore, the corresponding eigenvectors v_i^l form an orthonormal basis. Thus the covariance matrix of Eq. 1 defines the ellipsoid

E_i(x) = \{x : (x - c_i)^T C_i^{-1} (x - c_i) \le 1\},

where the least-squares fitting plane is spanned by the two main principal axes v_i^1 and v_i^2 of E_i and has the normal v_i^3 = n_i. A 2D example is illustrated in Fig. 2. If normals are provided by the scanning device, we use them instead of the estimated normals. Using the squared distance of x to the least-squares plane, we measure the likelihood L_i(x) as

L_i(x) = \Phi_i(x - c_i) \left( h^2 - [(x - c_i) \cdot n_i]^2 \right).
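The plane fit of Eq. 1 and the local likelihood L_i can be sketched as follows (our illustration; for simplicity the weight function χ and the kernel Φ_i are isotropic Gaussians, whereas the method adapts Φ_i to the ellipsoid E_i, and the accumulation with confidence weights w_i anticipates the definition of L given below).

```python
import numpy as np

def local_plane(points, i, h):
    """Weighted least-squares plane around p_i: returns centroid c_i and normal n_i (cf. Eq. 1)."""
    d = np.linalg.norm(points - points[i], axis=1)
    chi = np.exp(-(d / h) ** 2)                        # monotonically decreasing weight
    c = (chi[:, None] * points).sum(0) / chi.sum()     # weighted average c_i
    diff = points - c
    C = (chi[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(0)
    eigval, eigvec = np.linalg.eigh(C)                 # eigenvalues in ascending order
    return c, eigvec[:, 0]                             # normal = smallest-variance direction

def local_likelihood(x, c, n, h):
    """L_i(x) = Phi_i(x - c_i) * (h^2 - ((x - c_i) . n_i)^2), with an isotropic Phi_i."""
    diff = x - c
    phi = np.exp(-np.dot(diff, diff) / (2.0 * h ** 2))
    return phi * (h ** 2 - np.dot(diff, n) ** 2)

def likelihood(x, centers, normals, weights, h):
    """L(x) = sum_i w_i L_i(x); w_i are per-point scanning confidences (1 if unavailable)."""
    return sum(w * local_likelihood(x, c, n, h)
               for c, n, w in zip(centers, normals, weights))
```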

Fig. 1. Non-parametric kernel density estimation for 1D scattered point data. Local maxima of the density estimate f̂ define cluster centers of the original data

Thus, positions x closer to the least-squares plane will be assigned a higher probability than positions that are more distant.


Fig. 2. A 2D example of the weighted least-squares fitting plane and ellipsoid kernel computation

Additionally, we assume that the influence of a point p_i on the likelihood of a position x diminishes with increasing distance. To account for this behavior, we use monotonically decreasing weighting functions Φ_i to reduce the influence of each L_i. In contrast to the radial functions in [49, 53], we use a trivariate anisotropic Gaussian function Φ_i which is adapted to the shape of the ellipsoid E_i. This has the advantage that the weighting function is also adapted to the point distribution in a spatial neighborhood of p_i. To define the likelihood function L modeling the probability that a certain point x lies on the sampled surface S, we accumulate the local likelihoods L_i(x) contributed by all points p_i:

L(x) = \sum_{i=1}^{N} w_i L_i(x).

Note that we can easily incorporate scanning confidence measures w_i ∈ [0, 1] associated with each point p_i by scaling the amplitudes of the likelihood functions. If no scanning confidences are provided, we use w_i = 1. Figure 3 shows an example of a slice of the likelihood function L.

After determining the likelihood function L, we use it to smooth the point cloud by moving all samples to positions of high probability. This means we move the samples to positions which are most likely locations on the sampled surface. To find the local maxima of L, we use a procedure similar to a gradient-ascent maximization. We freeze the weighting functions Φ_j, since they change slowly, and approximate ∇L(x) by

-2 \sum_{j=1}^{N} w_j \Phi_j(x - c_j) \, [(x - c_j) \cdot n_j] \, n_j.   (2)


Fig. 3. A slice of the likelihood function L of the noisy Buddha model (left) and zooms of the framed regions (right). The function values are represented by colors increasing from blue to purple. Note that L is a smooth function

To allow a fast convergence of the samples to probability maxima, we choose an adaptive step size

\tau = \frac{1}{2 \sum_{j=1}^{N} w_j \Phi_j(x - c_j)}.   (3)

This means that the step size is small near the probability maximum and increases towards the border of each kernel. This provides a fast and stable convergence of all sample points. Combining Eqs. 2 and 3, we get the resulting iterative scheme

p_i^0 = p_i, \qquad p_i^{k+1} = p_i^k - m_i^k,

m_i^k = \frac{\sum_{j=1}^{N} w_j \Phi_j(p_i^k - c_j) \, [(p_i^k - c_j) \cdot n_j] \, n_j}{\sum_{j=1}^{N} w_j \Phi_j(p_i^k - c_j)}.

In order to filter the point cloud P, we apply the iterative scheme individually to every sample. We stop the iterative process when

\|p_i^{k+1} - p_i^k\| < 10^{-4} h.

Each sample usually converges in less than 10 iterations. A feature of our filtering method is the inherent clustering property. As the number of kernels is larger than the number of maxima in the likelihood function L (see Fig. 1), several sample points converge to the same probability maximum. We cluster those samples and place one representative point at the local maximum of L. See Table 1 for details on the point reduction rate.
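A sketch of the resulting fixed-point iteration for a single sample (our illustration; the per-point centers c_j, normals n_j and confidences w_j are assumed to be precomputed as above, and Φ_j is again simplified to an isotropic Gaussian).

```python
import numpy as np

def filter_sample(p, centers, normals, weights, h, max_iter=50, tol_factor=1e-4):
    """Move one sample to a local maximum of L via p^{k+1} = p^k - m^k."""
    p = p.copy()
    for _ in range(max_iter):
        diff = p - centers                                        # (N, 3)
        phi = weights * np.exp(-np.sum(diff * diff, axis=1) / (2.0 * h ** 2))
        proj = np.sum(diff * normals, axis=1)                     # (p - c_j) . n_j
        m = (phi * proj) @ normals / phi.sum()                    # m^k, a 3-vector
        p_new = p - m
        if np.linalg.norm(p_new - p) < tol_factor * h:            # stop: ||p^{k+1} - p^k|| < 1e-4 h
            return p_new
        p = p_new
    return p
```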


Table 1. Timings for the ellipsoid kernel computation and the filtering for the models presented in this paper. The kernel size h is chosen in the interval of one to ten times the average sampling density of the input data. The character N denotes the number of input samples and M the number of filtered points. The parameter k indicates the number of nearest neighbors used for the adaptive kernel computation. All results were computed on a 2.66 GHz Pentium 4 with 1.5 GB of RAM

Dataset        N             M        Kernels    Filtering    h
Face           180 K         114 K    1.38 s     18.45 s      0.8
Bunny          362 K + 25 K  324 K    3.2 s      52 s         0.001
Bimba          1.9 M         1.2 M    16 s       80 s         1
Dragon head    485 K         170 K    23.22 s    10 m 53 s    0.0015
Dragon         2.1 M         796 K    1 m 43 s   36 m 26 s    0.0015
Dragon         2.1 M         795 K    6 m 40 s   38 m 05 s    k = 250

So far we have only used a fixed radius h to compute the local neighborhoods for the ellipsoidal weight function and the least-squares fitting plane computation. However, invariant kernels might not be suitable for datasets with varying sampling density. To overcome this problem, we use the k-neighborhood of each sample p_i for the PCA analysis to compute the ellipsoidal kernel E_i. In this manner, we not only adapt the kernel shape to the point sample distribution in a neighborhood of p_i but also the kernel size to the spatial sampling density. The motivation behind this choice can be observed in Fig. 4. If a fixed radius h is used, local maxima of L are created distant from the most likely surface in regions of the point cloud with large-amplitude noise. Those maxima also attract points during the iterative filtering process, creating a second layer of points around the most likely surface (left image).
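A minimal sketch of the k-neighborhood-based adaptation (our illustration, using SciPy's k-d tree; the method performs the full PCA of Eq. 1 on the k neighbors, whereas this fragment only derives the per-point kernel radius).

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_kernel_radii(points, k=250):
    """Per-point kernel size: distance to the k-th nearest neighbor, so sparse
    regions (e.g. large-amplitude noise) automatically get larger kernels."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=k + 1)   # first neighbor is the point itself
    return dist[:, -1]
```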

Fig. 4. The effect of adaptive kernels. Left: The Dragon dataset is smoothed using a fixed kernel size. Large-amplitude noise at the right foot of the Dragon cannot be filtered due to maxima of L distant from the most likely surface. Right: Filtering result of the same dataset using adaptive kernels. Outlying maxima are well dampened. Aside from very few points, the noisy samples in the rectangular region are filtered completely

The use of adaptive kernels leads to larger kernel sizes in these regions due to the lower sampling density of large-amplitude noise. Therefore, kernels of both layers intersect, which dampens the effect of local maxima. This results in an improved filtering of large-scale noise (right image).

2.2 Discussion

Results of our denoising approach on structured light scans and laser scanned data show that it performs well on different types of scanned data. Experiments illustrate the strength of our method in removing outliers, especially 3D "salt and pepper" outliers. Finally, we see that results of well-known surface reconstruction methods can be improved in conjunction with our filtering method. Table 1 summarizes timings and the parameters used to generate the results.

In Fig. 5, we show a point cloud face dataset acquired by a structured light scanner before and after filtering using our method. The raw point cloud suffers from several outliers and ridges which are typical artifacts caused by the structured light. We show this comparison to illustrate the effectiveness of our method in removing outliers and for smoothing difficult datasets.

Fig. 5. Filtering of a face scan acquired using a structured light scanner. Initial scattered point data contains scanning artifacts and outliers (left). Our method automatically removes the outliers and nicely suppresses the defects (right)


Fig. 6. Left: Raw registered range data of the Bimba model obtained using a laser scanner. The data is corrupted by dense small amplitude noise and scanning artifacts close to the mouth and the right eye of the model. Right: The artifacts are well dampened and noise is removed after filtering with our method

Due to the clustering property of our method, groups of outliers usually converge to a set of single points sparsely distributed around the surface samples. These points can be characterized by a very low spatial sampling density compared to the surface samples. We use this criterion to detect outliers and remove them using simple thresholding.

Figure 7 shows an additional example with a large number of randomly generated points which can be interpreted as 3D "salt and pepper" outliers. In the case of images, "salt and pepper" noise corrupts random image pixels with intensity spikes. This means that a number of pixels in the image have a very large intensity difference to neighboring pixels. For point clouds, we can model this kind of noise by displacing points of the dataset far from the smooth surface. In our example, we move points inside the bounding box of the dataset. Additionally, we add noise to the normals by perturbing them with random angles.

Fig. 7. Left: Raw registered range scans of the Stanford Bunny dataset expanded by 25 K random “salt and pepper” outliers. Right: Our method accurately denoises the given point set and removes the dense cloud of outliers properly


Although the outlier density is high, as shown in Fig. 7, our algorithm is able to remove the noise and the outliers properly. In Fig. 6, we demonstrate the filtering efficiency of our algorithm on laser scanned data. We show this comparison as laser scans are usually affected by different types of noise compared to structured light scans. Due to the different acquisition technique, laser scans are usually not corrupted by ridges and pits caused by structured light. Instead, they are affected by dense small-amplitude noise. Figure 6 illustrates that high-frequency noise is removed by our method while lower frequency details like the hair, mouth and eyes of the Bimba model are preserved. As noted previously, our method uses adaptive kernels to handle large-scale noise. Figure 4 shows that while the Dragon scan cannot be filtered accurately using a fixed kernel size, adaptive kernels provide a proper filtering of large-amplitude noise.

An interesting application of our denoising method is to preprocess noisy data before a surface is reconstructed. Usually a surface reconstruction algorithm is applied directly to noisy datasets, which reduces the efficiency of the technique used. We show that results can be significantly improved by applying these algorithms to data which has been preprocessed using our denoising approach. For surface reconstruction, we use two well-known Delaunay-based algorithms, namely Power Crust [2] and Tight Cocone [19], which are available for scientific purposes. In Fig. 8, we show reconstructions generated using both algorithms from the raw dataset as well as from the preprocessed data. The results of both algorithms are significantly improved when the filtered data is used as input. See the caption of Fig. 8 for more details.

3 Surface reconstruction with sparse implicits

In implicit modeling [10], a 3D data set P := {p_i : i = 1, . . . , N} is approximated by an implicit surface Σ := {x ∈ R³ : f(x) = 0}, where the function

f(x) := \sum_{i=1}^{N} \alpha_i \varphi_i(x)

is a linear combination of the basis elements B := {φ_i(x)}_{i=1,...,N}. The underlying mathematical framework builds on numerical linear algebra, and the degrees of freedom in the choice of B (e.g., globally [13, 70] and compactly [43, 47] supported RBFs, Partition of Unity [46, 76], Moving-Least-Squares methods [1, 22, 35, 38, 52, 64]) make it possible to adapt the model parameters to specific problem constraints such as huge data sets with attributes, local accuracy, and degree of smoothness. Furthermore, multi-resolution techniques have recently been proposed [68] and a wide range of applications, including deformation, fast rendering, and collision detection [3, 48, 69], has been targeted by several authors.
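For illustration, evaluating such an implicit function at a query point is a weighted sum of basis functions centered at the samples; the following sketch assumes Gaussian RBFs and that the coefficients α_i have already been computed by an interpolation or approximation solve.

```python
import numpy as np

def implicit_value(x, centers, alpha, sigma=0.1):
    """f(x) = sum_i alpha_i * phi_i(x), with Gaussian basis phi_i(x) = exp(-||x - p_i||^2 / (2 sigma^2)).
    The surface Sigma is the zero level set {x : f(x) = 0}."""
    r2 = np.sum((x - centers) ** 2, axis=1)
    return np.dot(alpha, np.exp(-r2 / (2.0 * sigma ** 2)))
```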


Fig. 8a–h. Parts a and b present the head of the Dragon scans from the Stanford Scanning Repository before and after our filtering procedure. Parts c and d show zooms of the images a and b close to the tongue region. Notice that noise is removed and that the filtered samples indicate a surface. Parts e and g illustrate Power Crust and Tight Cocone reconstructions from the noisy samples shown in a. Parts f and h show reconstruction results from the filtered data shown in b. While the Power Crust algorithm shows noticeably improved results with small defects f, the Tight Cocone algorithm reconstructs a smooth mesh h


In our formulation, we consider a Reproducing Kernel Hilbert Space H (RKHS) [4] with kernel Φ(x, y)¹; in this case, each basis function φ_i(x) := Φ(x, p_i), i = 1, . . . , N, is centered at a point of the input data set. Among the properties of an RKHS, we recall the reproduction property

h(x) = \langle h(y), \Phi(x, y) \rangle_H, \quad \forall h \in H, \ \forall x, y \in \mathbb{R}^d,   (4)

that will be used in the following discussion. A sparse approximation method searches, among all the possible approximations of f with the same error, for the one f′ that involves the smallest number of basis functions. In terms of the corresponding iso-surfaces, this is equivalent to approximating Σ with Σ′ := {x ∈ R³ : f′(x) = 0}.

Previous work on surface sparsification can be subdivided into the following groups: local, global, and clustering techniques. Local methods build a smooth surface through an iterative and multi-scale procedure based on a local polynomial approximation. In this case, the centers and radii are determined by a posteriori updates of the model and guided by the local approximation error [13, 14, 34, 48, 64]. Global methods find a sparse representation by minimizing a constrained convex quadratic optimization problem [24, 51, 65, 74]; a detailed discussion of them will be given shortly in this section. Since each φ_i is centered at a point p_i of P, clustering techniques can also be used to select the centers of the sparse representation. The idea is to group those points which satisfy a common "property" and to center a basis function at a representative point of each cluster. Planarity and closeness, measured in the Euclidean space using k-means clustering [40], Principal Components Analysis (PCA) [33], and Voronoi diagrams [59], are possible criteria (see Fig. 9). These methods are quite stable with respect to outliers and noise, but they do not take into account the kernel function used to construct the implicit surface. To overcome the limitations of Euclidean-based clustering, kernel methods [18] evaluate the correlation among points with respect to the scalar product induced by a positive definite kernel. In this case, the PCA and the k-means algorithm lead to efficient clustering techniques such as KPCA [61, Chap. 1] and the Voronoi tessellation of the feature space [73].

From the previous discussion, it follows that any sparsification scheme combines two conflicting criteria: achieving a high approximation accuracy and obtaining an economical surface representation. The approach we present builds on a global approximation method which does not require the use of heuristics. We employ compactly supported radial basis functions and use Tikhonov regularization to achieve a near-optimal selection of their centers; an iterative approach, which defines a multi-level approximation, is used to cope with the arising constrained optimization problems.

¹ Common choices are the Gaussian kernel Φ(x, y) := e^{−‖x−y‖²/(2σ)} or the polynomial kernel of degree d, Φ(x, y) := (1 − ⟨x, y⟩)^d.

Fig. 9. a Input points on a Bernoulli lemniscate and initial set of centers (yellow circles). b–c First and last iteration of the k-means clustering [40]. d Reconstructed curve and iso-contours of the associated scalar field

Following [24], the quality of the approximation of f with f′ is measured by the quadratic misfit error ‖f − f′‖²_H, and the selection of the basis functions which contribute to f′(x) := Σ_{i=1}^{N} a_i φ_i(x) is given by the coefficients a_i ≠ 0, i = 1, . . . , N. The sparsification value, i.e., the number of basis functions used in f′, is quantified by the l1-norm ‖a‖_{l1} := Σ_{i=1}^{N} |a_i| of the vector a := (a_i)_{i=1}^{N}. Then, we consider a compromise between these two terms and minimize the functional

F(a) := \frac{1}{2} \left\| f - \sum_{i=1}^{N} a_i \Phi(x, p_i) \right\|_H^2 + \varepsilon \|a\|_{l_1},   (5)

where ε (> 0) is the tradeoff between the misfit measure and sparsity. If ε = 0, we get the standard least-squares approximation scheme, while by increasing ε we neglect a greater number of basis functions and accept a lower approximation accuracy.
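By the reproduction property Eq. 4, the misfit term can be expanded in the Gram matrix L_ij = Φ(p_i, p_j) and the sample values y_i = f(p_i), as in the expansion of G(a) given below; the following sketch of the resulting objective, up to the constant ½‖f‖²_H, is our own illustration.

```python
import numpy as np

def sparse_objective(a, L, y, eps):
    """F(a) = 0.5 a^T L a - y^T a + eps * ||a||_1  (Eq. 5 expanded in the RKHS, constant term dropped)."""
    return 0.5 * a @ L @ a - y @ a + eps * np.abs(a).sum()
```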


As shown in [24], replacing each unknown a_i of Eq. 5 with a pair of positive variables (a_i^+, a_i^−), such that a_i = a_i^+ − a_i^−, results in the following constrained convex quadratic optimization problem (i.e., a Support Vector Machine) [24]:

\min_{a_i^+, a_i^- \in S} \{ F(a^+, a^-) \}   (6)

with

F(a^+, a^-) := \frac{1}{2} \sum_{i,j=1}^{N} \Phi(p_i, p_j)\,(a_i^+ - a_i^-)(a_j^+ - a_j^-) - \sum_{i=1}^{N} y_i\,(a_i^+ - a_i^-) + \varepsilon \sum_{i=1}^{N} (a_i^+ + a_i^-),

f(p_i) = y_i, i = 1, . . . , N, and feasible set

S := \left\{ a^+ \ge 0, \ a^- \ge 0, \ \sum_{i=1}^{N} (a_i^+ - a_i^-) = 0, \ a_i^+ a_i^- = 0, \ i = 1, . . . , N \right\}.

This last formulation is equivalent to Eq. 5 and facilitates its numerical optimization by removing the absolute values. If P is a large data set, defining a sparse representation of f as a solution of the quadratic minimization problem Eq. 6 with 2N unknowns is generally unfeasible due to the amount of input data, the possible ill-conditioning of the coefficient matrix L := (L_{ij})_{i,j=1,...,N}, L_{ij} := Φ(p_i, p_j), and the unbounded feasible set S. All these factors badly affect the stability and convergence of iterative methods [16, 25]. In [63, 65], Eq. 6 is solved by applying a decomposition method and a heuristic coordinate descent optimization scheme, respectively. In the latter case, at each iteration (2N − 1) variables are fixed and the objective function is minimized with respect to the remaining free parameter.

To avoid the formulation in Eq. 6, which is a consequence of the C⁰-regularity of the l1-norm in Eq. 5, we use the smooth approximation

\|a\|_{l_1} \approx \sum_{i=1}^{N} (a_i^2 + \eta)^{1/2}, \quad \eta \to 0,

and we replace the functional Eq. 5 with

G(a) := \frac{1}{2} \left\| f - \sum_{i=1}^{N} a_i \Phi(x, p_i) \right\|_H^2 + \varepsilon \sum_{i=1}^{N} (a_i^2 + \eta)^{1/2}.

In the following, we prove that minimizing G is equivalent to solving a system of non-linear equations. Using the reproduction property Eq. 4, we can rewrite G as

G(a) \equiv \frac{1}{2} \sum_{i,j=1}^{N} \Phi(p_i, p_j)\, a_i a_j - \sum_{i=1}^{N} y_i a_i + \varepsilon \sum_{i=1}^{N} (a_i^2 + \eta)^{1/2} + \frac{1}{2} \|f\|_H^2,

where the term ½‖f‖²_H is constant. Then, the critical points of G are the solutions of the following system of non-linear equations

\nabla G = 0 \ \longleftrightarrow \ [L + \varepsilon \Delta(a)] \, a = y   (7)

with

\Delta(a) := \mathrm{diag}\big( (a_1^2 + \eta)^{-1/2}, \ \ldots, \ (a_N^2 + \eta)^{-1/2} \big)

and y := (y_i)_{i=1}^{N}. The matrix formulation shows the two factors of the sparsification scheme: the matrix L, which is associated with the kernel function, and the diagonal matrix ∆(a), related to the sparsification term. If we consider a Gaussian or compactly supported kernel, which corresponds to a positive definite or positive semi-definite matrix L, the Hessian matrix [L + εη ∆(a)³] of G is positive definite and the convex functional G admits a unique minimum. The solution of Eq. 7 is evaluated by running the iterative scheme

[L + \varepsilon \Delta(a^{(n)})]\, a^{(n+1)} = y \ \longleftrightarrow \ a^{(n+1)} = [L + \varepsilon \Delta(a^{(n)})]^{-1} y,   (8)

with a^{(0)} an initial guess and n ≥ 1. The term a^{(n+1)} is obtained from a^{(n)} by solving a linear system with direct or iterative solvers, e.g., the Gauss–Seidel or conjugate gradient method [25]. The iterative procedure stops when the solution becomes stationary, i.e., when we no longer improve the number of null coefficients and/or the residual error between two consecutive iterations is below a given threshold.

3.1 Discussion

As done in [70], the initial set of centers is given by P plus 2N additional centers {p_i ± δ n_i}_{i=1}^{N} placed at a small distance δ from p_i, where n_i is an approximation of the normal at p_i. Recently [74], the surface normal vectors have been incorporated in the regularization framework, thus avoiding off-set conditions to guarantee a non-null solution. This approach uses curvature and grid-based clustering on point clouds to guide the selection of the basis function centers and radii, and to achieve high-quality approximations. Figures 10 and 11 show the curve reconstruction with several sparsification percentages (Φ is the Gaussian kernel); we note that the local noise and irregular sampling, which affect the approximation in Fig. 11a, are attenuated in Fig. 11b, where we use a smaller number of basis functions. This property is due to the misfit error in Eq. 5 and to a lower condition number of the coefficient matrix related to the least-squares approximation. Furthermore, a smooth kernel function Φ and the induced norm ‖·‖_H provide a smooth solution, thus reducing the influence of noise and outliers of P on the reconstructed surface. Finally, Fig. 11c–d shows the center selection on a curve with non-uniform sampling.
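Referring back to the iteration of Eq. 8, the following compact sketch is our own illustration: it builds a dense Gaussian Gram matrix and uses NumPy's direct solver, whereas the paper employs compactly supported kernels, sparse matrices and Gauss–Seidel or conjugate-gradient solvers; the threshold 1e-5 for declaring a coefficient null is an assumption.

```python
import numpy as np

def sparsify(P, y, eps=0.1, eta=1e-6, sigma=0.2, n_iter=50, tol=1e-8):
    """Iterate [L + eps * Delta(a^n)] a^{n+1} = y (Eq. 8) and report the surviving centers."""
    d2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    L = np.exp(-d2 / (2.0 * sigma ** 2))                 # Gram matrix L_ij = Phi(p_i, p_j)
    a = np.linalg.solve(L + eps * np.eye(len(P)), y)     # initial guess a^(0)
    for _ in range(n_iter):
        delta = 1.0 / np.sqrt(a ** 2 + eta)              # diagonal of Delta(a^n)
        a_new = np.linalg.solve(L + eps * np.diag(delta), y)
        if np.linalg.norm(a_new - a) < tol:              # solution became stationary
            a = a_new
            break
        a = a_new
    return a, np.abs(a) > 1e-5                           # coefficients and mask of kept basis functions
```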


Fig. 10. a Input Bernoulli lemniscate with N centers depicted by black dots. b–d Curve approximations with different percentages of selected centers: n is the number of iterations in Eq. 8

Fig. 11a–d. Reconstructed curves using input centers (black dots). a Affected by noise and c irregularly distributed. b–d Selected centers (black dots) and reconstructed curves

Using a k-neighborhood of each center [5] requires a storage overhead of (3k + 1)N non-null entries for the matrix L. Each iteration of Eq. 8 then updates only the principal diagonal of L, preserves its sparsity and positive definiteness, and requires O(N) time to solve the linear system. The proposed sparsification scheme is equivalent to a Support Vector Machine, but it avoids the constrained convex quadratic minimization and the use of heuristics involved in SVMs. Since the sparsification starts from the full resolution with 3N basis functions, we get a fine-to-coarse approach, and the iterative procedure Eq. 8 builds a multi-level approximation scheme based on a sequence of nested spaces; for more details, we refer the reader to [51]. If we reach a complete sparsification, i.e., the iterative solver of the system of non-linear equations converges to the null solution, each approximation is obtained from the intermediate iterations (see Fig. 12).

Fig. 12. a Sparsification function and reconstructed surface with 80 K input centers: the x-axis shows the number of iterations and the y-axis the corresponding number of null coefficients, that is, the number of neglected basis functions. b–f Surface reconstruction with a different percentage of selected centers and L ∞ error


As the iterations in Eq. 8 proceed, the L∞ error between the current approximation f^{(n)}(x) = Σ_{i=1}^{N} a_i^{(n)} φ_i(x) and f, measured by

L_\infty(f, f^{(n)}) := \max_{i=1,\ldots,N} \left| f(p_i) - f^{(n)}(p_i) \right|,

increases due to the smaller number of basis functions. Unlike local approximation techniques, which are capable of adapting the center selection to local accuracy through an extensive use of function evaluations, we cannot use this error as a stopping criterion due to the global formulation of our problem. The fine-to-coarse structure requires a storage overhead and computational cost greater than local approximation [1, 47, 48]. For instance, if k = 20 we are able to handle at least 500 K centers on a Pentium IV 2.80 GHz with 1 GB RAM; in this case, the evaluation of L and the sparsification scheme require approximately 90–180 seconds. Our method always converges to a global minimum and it can be used for those applications where center selection is mandatory to speed up surface sampling, interactive modeling and queries [12, 65]. However, it cannot deal with huge data sets, which can be handled efficiently by [46, 48, 74].

Setting the support of the basis functions is a delicate part of the approximation and sparsification scheme; in fact, its choice affects the size of the details that will be recovered as well as the maximum sparsification percentage which avoids artifacts in the reconstructed model. In our framework, the support associated with the basis function φ_i is set equal to the minimum radius of the sphere centered at p_i that contains the k nearest neighbors of p_i, and k varies from 10 to 20 depending on the number of input points. Our tests have shown that these values lead to a good compromise between sparsification rate and approximation accuracy (see Fig. 13). Finally, in the examples of Figs. 12 and 13 we used

\Phi(r) := (1 - r)_+^4 \, (4 + 16r + 12r^2 + 3r^3)

[43, 62] as the sparse kernel, where r = ‖x − y‖/σ and σ is its compact support.
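A sketch of the compactly supported kernel quoted above and of the per-center support selection (our illustration; σ_i is taken as the distance from p_i to its k-th nearest neighbor, i.e., the radius of the smallest sphere containing the k nearest neighbors).

```python
import numpy as np
from scipy.spatial import cKDTree

def compact_rbf(r):
    """Phi(r) = (1 - r)_+^4 * (4 + 16 r + 12 r^2 + 3 r^3), with r = ||x - y|| / sigma."""
    return np.where(r < 1.0,
                    (1.0 - r) ** 4 * (4 + 16 * r + 12 * r ** 2 + 3 * r ** 3),
                    0.0)

def support_radii(centers, k=20):
    """sigma_i = distance from p_i to its k-th nearest neighbor."""
    dist, _ = cKDTree(centers).query(centers, k=k + 1)   # first hit is the point itself
    return dist[:, -1]
```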

4 Shape learning from point clouds

In this section, we present neural meshes, a technique to learn the shape of a 3D data set P := {p_i = (x_i, y_i, z_i) : i = 1, . . . , N} by training a neural network [8]. Previously, neural networks have been trained to reconstruct parametric and freeform surfaces [26, 44, 77] representing P, whereby the neural network learns a function f(x, y) such that |f(x_i, y_i) − z_i| < ε, ∀i = 1, . . . , N, for some acceptable error level ε. An extension of neural networks, functional networks, has been used to reconstruct P with B-spline and Bézier surfaces [29] (see also references therein). The above methods fall under the supervised learning category; they assume a relationship between the input variables. Unsupervised learning methods make no such assumption. The Neural Mesh technique also belongs to this class of methods. In this class, the neural network is not trained to compute a surface; instead, the trained neural network is the desired surface. The methods described in [6, 7, 28, 71] learn control grids for reconstruction of P and parametric grids for subsequent parameterization of P. Neural networks can also be trained to directly interpolate or approximate P [36, 78].

In the unsupervised learning methods described above, the topology and number of vertices of the learnt surface remain unchanged after initialization. As they initialize the surface with a 2D grid, they can accurately represent P only if P represents a surface patch. Also, the learnt surface may under-represent detailed features in P.

Fig. 13a–d. Reconstruction with compactly supported radial basis functions on a model with 240 K input centers and different percentages of selected centers. In c–d, the support of the RBFs is twice that used in a–b; a greater support achieves a smoother reconstruction and a higher approximation accuracy with a smaller number of basis functions


As a solution to the latter problem, subdivision is suggested in [78]. In [6, 71], where the surface has the topology of a quad grid, the authors suggest tracking the activity of each vertex with an associated counter, which increases each time the vertex participates in learning. They can then spot active vertices by their high counter values, and add entire rows/columns of vertices in their neighborhoods. Both these solutions are global in nature and end up adding new vertices in unwanted regions of the surface as well.

A Neural Mesh is initialized as a closed triangle mesh, M. Each vertex in M stores a counter value, τ, and a winning sample number, S_w. The mesh M learns from each training sample, s, from P by moving the corresponding winner vertex, v_w, towards the sample and applying smoothing to its 1-ring neighbors. An illustration for the 1D case is given in Fig. 14a. The winner's new position is given as

v_w ← v_w + α_w · F(d) · d,

where d is the vector from v_w to s and α_w is a parameter between 0 and 1. The function F(d) filters out the effects of outliers in P. A moving average, µ_d, and standard deviation, σ_d, of |d| over the past 1000 training samples are maintained. An outlier threshold is then calculated as d̄ = µ_d + α_d σ_d, using an input tolerance α_d, and F(d) is defined as

F(d) = 1 if |d| ≤ d̄,  and  F(d) = d̄ / |d| if |d| > d̄.

Smoothing the winner's neighborhood avoids foldovers and local minima in M, illustrated for the 1D case in Fig. 15. For each vertex, v_i, in the 1-ring of v_w, its Laplacian [67] is calculated,

L(v_i) = \frac{1}{n(v_i)} \sum_{v_k} (v_k - v_i),

followed by the tangential displacement,

L_s(v_i) = L(v_i) − (L(v_i) · n_i) n_i,

where n(v_i) is v_i's valence, the v_k are its 1-ring neighbors and n_i is its approximated normal. The position of v_i is then updated as

v_i ← v_i + α_s L_s(v_i),

where α_s is a smoothing parameter between 0 and 1.
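A sketch of the per-sample learning step (our illustration; the mesh arrays, 1-ring connectivity, vertex normals and the running statistics µ_d, σ_d of |d| are assumed to be maintained elsewhere).

```python
import numpy as np

def outlier_filter(d_len, mu_d, sigma_d, alpha_d):
    """F(d): pass short moves through, clamp moves beyond d_bar = mu_d + alpha_d * sigma_d."""
    d_bar = mu_d + alpha_d * sigma_d
    return 1.0 if d_len <= d_bar else d_bar / d_len

def learn_sample(verts, normals, one_ring, w, s, alpha_w=0.1, alpha_s=0.5,
                 mu_d=0.0, sigma_d=1.0, alpha_d=3.0):
    """Move winner vertex w towards sample s, then tangentially smooth its 1-ring.
    one_ring[i] is an integer array with the 1-ring neighbor indices of vertex i."""
    d = s - verts[w]                                       # d = vector from v_w to s
    verts[w] += alpha_w * outlier_filter(np.linalg.norm(d), mu_d, sigma_d, alpha_d) * d
    for i in one_ring[w]:
        ring = verts[one_ring[i]]
        lap = ring.mean(axis=0) - verts[i]                 # L(v_i) = 1/n(v_i) sum_k (v_k - v_i)
        lap -= np.dot(lap, normals[i]) * normals[i]        # tangential part L_s(v_i)
        verts[i] += alpha_s * lap
    return verts
```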

Fig. 14. a Positions of the winner and its neighbors are updated for each training sample. b Vertices are added/removed using complementary vertex split (left to right) and half-edge collapse (right to left) operations


The winning vertex's counter, τ(v_w), is then updated using v_w's winning sample number, S_w(v_w), and the current sample number, S_c. First, the number of samples since the current winner's last win is computed, x = S_c − S_w(v_w) − 1. Then the updates are made,

τ(v_w) ← α_ctr (α_ctr^x τ(v_w) + 1),  and  S_w(v_w) ← S_c.

α_ctr is calculated as α_ctr = (1/2)^{1/(λN)}, where N is the current number of vertices in M and λ is an input parameter such that a vertex loses half its counter value if it is not the winner for λN samples. As intended, active vertices are now identified by their high counter values.

Every C_add samples, all counter values are synchronized, where C_add is an input parameter. For each vertex, v_i, in M, the number of previous non-winning samples is calculated, x = S_c − S_w(v_i), and the updates are made,

τ(v_i) ← α_ctr^x τ(v_i),  and  S_w(v_i) ← S_c.

Then a vertex split operation is performed on the vertex with the highest counter value. Counter synchronization is also performed after every C_rem samples, where C_rem is input by the user. This time, vertices with counter values lower than a certain threshold are removed from M using half-edge collapse operations. Addition/removal of vertices is illustrated in Fig. 14b, which also shows the vertices whose valences are affected by the operations. Half-edge collapse operations that would cause M to become non-manifold (Fig. 16a,b) are not performed. For more details, we refer the reader to [31, 58].
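A sketch of the counter bookkeeping (our illustration): the decay factor α_ctr halves a counter over λN non-winning samples, and synchronization applies the pending decay to every vertex before counters are compared.

```python
def counter_decay(n_vertices, lam):
    """alpha_ctr = (1/2)^(1 / (lam * N)): a counter halves after lam * N non-winning samples."""
    return 0.5 ** (1.0 / (lam * n_vertices))

def update_winner_counter(tau, s_win, vw, s_cur, alpha_ctr):
    """tau(v_w) <- alpha_ctr * (alpha_ctr^x * tau(v_w) + 1), x = samples since the last win."""
    x = s_cur - s_win[vw] - 1
    tau[vw] = alpha_ctr * (alpha_ctr ** x * tau[vw] + 1.0)
    s_win[vw] = s_cur

def synchronize_counters(tau, s_win, s_cur, alpha_ctr):
    """Apply the pending decay to every vertex: tau(v_i) <- alpha_ctr^x * tau(v_i), x = S_c - S_w(v_i)."""
    for v in range(len(tau)):
        x = s_cur - s_win[v]
        tau[v] *= alpha_ctr ** x
        s_win[v] = s_cur
```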

Fig. 16a–c. Preserving manifoldness (boundaries are shown in black). Collapsing edge AB (a, b) or removing triangle ABC (c) results in a non-manifold mesh. The half-edge collapses (a, b) are not performed, and the triangle removal is corrected by removing neighboring triangles

Operations affecting the topology of M are invoked after every N·C_top samples, where C_top is chosen by the user and N is the current number of vertices in M. The average triangle area, A, in M is used to calculate a triangle-removal threshold and a boundary-merging threshold. Triangles with area greater than the triangle-removal threshold are removed, and boundaries whose Hausdorff distance to each other is less than the boundary-merging threshold are merged. If the removal of a triangle causes M to no longer be manifold, neighboring triangles are removed to restore manifoldness (Fig. 16c). Figure 17 shows the effect of these steps.

4.1 Discussion

Fig. 15. Unwanted artifacts degrade mesh quality. Vertices represented by unfilled circles will never be selected as winners

Neural meshes effectively solve the problems unaddressed by previous surface learning methods.


Fig. 17. Learning topology. The hole is learnt (left) as large self-intersecting triangles, which are removed to form boundaries (center). With continued training, the boundaries grow close to each other and are merged to form the handle (right)

They can represent entire surfaces, not just patches. Learning is adaptive; it starts with a small, simple initial surface to which vertices are added only where needed, i.e., in the 1-ring neighborhood of active vertices. Vertices that become misplaced during training and over-represent P are removed. Also, neural meshes possess the ability to learn topology. Notice that the only step of the algorithm where P is required is in picking training samples, thus making the running time independent of the size of P. This is in direct contrast to methods that need to process all input points in order to output a surface. Independence from the input point cloud also allows out-of-core processing of large data sets.

The coarse-to-fine way of shape learning in neural meshes offers a deeper insight into the shape. Vertices of M in high-curvature areas display large variations in their normals during training compared to those representing flatter areas. This information can be tracked [32] by the vertices' counters, leading to higher counter values, and thus a higher vertex population, in high-curvature areas. Notice that in comparison, the default neural meshes mimic the density of P. Some reconstructions are shown in Fig. 18.

Like most learning algorithms, neural meshes suffer from the need for user parameters. While a default set of values can be set, best results will be obtained by tuning the parameters in accordance with the input set. This could be seen as an advantage for the expert user. For an extensive treatment of this issue, we refer the reader to [58].

Neural meshes also inherit long running times from learning methods. The complexity of the algorithm described above is O(N²), based mainly on sorting vertices according to their counter values to find the ones with the highest and lowest values. In an alternate implementation of the algorithm, the vertices are copied to a priority queue data structure, where a vertex's counter value is replaced by its position in the priority queue. It has been shown [58] that implementing the priority queue as a self-balancing binary tree reduces the complexity of the neural mesh algorithm to O(N log N) with no significant difference in output mesh quality.

Fig. 18. Neural mesh reconstructions of the cube (top) and Bimba (bottom) models according to sampling density of P (left) and surface curvature (right). Reconstructions of the same size are compared

Despite the speedup offered by the priority queue implementation, the method is slow and non-competitive with contemporary geometry-based surface reconstruction methods. The majority of the running time is spent in geometry learning. We expect that a shrink-wrapping approach, with the neural mesh initialized as an inflated bounding sphere whose number of vertices is close to the final number, could offer a solution to this problem.

5 Conclusion

In this paper, we presented three stochastic methods for surface reconstruction. Despite the fact that some of these methods are relatively slow, we believe them to be important as they represent the entry of stochasticity into Geometric Modeling. We believe that in their natural ability to reliably deal with uncertain and fuzzy data, these methods hold the key to many problems in Geometric Modeling where one typically works with measurements exhibiting exactly these defects.

Acknowledgements We would like to thank Tamal Dey and Nina Amenta for making their surface reconstruction software available. The Dancer and Bimba models are courtesy of the AIM@SHAPE Shape Repository and the Buddha, Dragon, and Bunny datasets are courtesy of the Stanford 3D Scanning Repository. This research is partially supported by the European FP6 NoE grant 506766, AIM@SHAPE.


References 1. Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Silva, C.T.: Point set surfaces. IEEE Visualization 2001 pp. 21–28 (2001) 2. Amenta, N., Choi, S., Kolluri, R.: The power crust. In: Proceedings of 6th ACM Symposium on Solid Modeling, pp. 249–260 (2001) 3. Angelidis, A., Cani, M.P.: Adaptive implicit modeling using subdivision curves and surfaces as skeletons. In: SMA ’02: Proceedings of the ACM Symposium on Solid Modeling and Applications, pp. 45–52. ACM, Boston (2002) 4. Aronszajn, N.: Theory of reproducing kernels. Trans. Amer. Math. Soc. 68, 337–404 (1950) 5. Arya, S., Mount, D., Netanyahu, N., Silverman, R., Wu, A.: An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. J. ACM 45(6), 891–923 (1998) 6. Barhak, J., Fischer, A.: Adaptive reconstruction of freeform objects with 3D SOM neural network grids. In: PG ’01: Proceedings of the 9th Pacific Conference on Computer Graphics and Applications, p. 97 (2001) 7. Barhak, J., Fischer, A.: Parameterization and reconstruction from 3D scattered points based on neural network and PDE techniques. IEEE Trans. Visual. Comput. Graph. 7(1), 1–16 (2001) 8. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, New York (1995) 9. Blanz, V., Mehl, A., Vetter, T., Seidel, H.P.: A statistical method for robust 3D surface reconstruction from sparse data. In: Y. Aloimonos, G. Taubin (eds.) 2nd International Symposium on 3D Data Processing, Visualization, and Transmission, 3DPVT 2004, pp. 293–300. IEEE (2004) 10. Bloomenthal, J., Wyvill, B. (eds.): Introduction to Implicit Surfaces. Kaufmann, San Francisco (1997) 11. Botsch, M., Kobbelt, L.: High-quality point-based rendering on modern GPUs. In: PG ’03: Proceedings of the 11th Pacific Conference on Computer Graphics and Applications, pp. 335–343. IEEE Computer Society, Washington, DC (2003) 12. Botsch, M., Kobbelt, L.: Real-time shape editing using radial basis functions. Comput. Graph. Forum 24(3), 611–621 (2005) 13. Carr, J.C., Beatson, R.K., Cherrie, J.B., Mitchell, T.J., Fright, W.R., McCallum, B.C., Evans, T.R.: Reconstruction and representation of 3D objects with radial basis functions. In: SIGGRAPH ’01, pp. 67–76. ACM, Boston (2001) 14. Chen, S., Wigger, J.: Fast orthogonal least squares algorithm for efficient subset model selection. IEEE Trans. Signal Process. 43(7), 1713–1715 (1995)

15. Cheng, Y.: Mean shift, mode seeking, and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 17, 790–799 (1995) 16. Coleman, T.F., Li, Y.: An interior trust region approach for nonlinear minimization subject to bounds. SIAM J. Optimiz. 6, 418–445 (1996) 17. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603–619 (2002) 18. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995) 19. Dey, T.K., Goswami, S.: Tight Cocone: A water-tight surface reconstructor. In: Proceedings of 8th ACM Symposium Solid Modeling Applications, pp. 127–134 (2003) 20. Dey, T.K., Sun, J.: Adaptive MLS surfaces for reconstruction with guarantees. In: Eurographics Symposium on Geometry Processing 2005, pp. 43–52 (2005) 21. Fenn, M., Steidl, G.: Robust local approximation of scattered data. In: R. Klette, R. Kozera, L. Noakes, J. Weickert (eds.) Geometric Properties from Incomplete Data, pp. 317–334. Springer, Berlin Heidelberg New York (2005) 22. Fleishman, S., Cohen-Or, D., Silva, C.T.: Robust moving least-squares fitting with sharp features. ACM Trans. Graph. 24(3), 544–552 (2005) 23. Fukunaga, K., Hostetler, L.D.: The estimation of the gradient of a density function with applications in pattern recognition. IEEE Trans. Inform. Theory 21, 32–40 (1975) 24. Girosi, F.: An equivalence between sparse approximation and support vector machines. Neural Comput. 10(6), 1455–1480 (1998) 25. Golub, G., VanLoan, G.: Matrix Computations, 2nd edn. John Hopkins University Press, Baltimore, MD (1989) 26. Gu, P., Yan, X.: Neural network approach to the reconstruction of freeform surfaces for reverse engineering. Comput. Aided Des. 27(1), 59–64 (1995) 27. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, Berlin Heidelberg New York (2001) 28. Hoffmann, M., V´arady, L.: Free-form surfaces for scattered data by neural networks. J. Geom. Graph. 2, 1–6 (1998) 29. Iglesias, A., Echevarr´ia, G., G´alvez, A.: Functional networks for B-spline surface reconstruction. Future Gener. Comput. Syst. 20(8), 1337–1353 (2004) 30. Ivrissimtzis, I., Lee, Y., Lee, S., Jeong, W.K., Seidel, H.P.: Neural mesh ensembles. In: Y. Aloimonos, G. Taubin (eds.) 2nd International Symposium on 3D Data Processing, Visualization, and

Transmission, 3DPVT 2004, pp. 308–315. IEEE (2004) 31. Ivrissimtzis, I.P., Jeong, W.K., Seidel, H.P.: Using growing cell structures for surface reconstruction. In: Shape Modeling International, pp. 78–88, 288. IEEE Computer Society, Washington, DC (2003) 32. Jeong, W.K., Ivrissimtzis, I.P., Seidel, H.P.: Neural meshes: Statistical learning based on normals. In: Pacific Conference on Computer Graphics and Applications, pp. 404–408. IEEE Computer Society, Washington, DC (2003) 33. Jolliffe, I.T.: Principal Component Analysis. Springer, Berlin Heidelberg New York (1986) 34. Kanai, T., Ohtake, Y., Kase, K.: Hierarchical error-driven approximation of implicit surfaces from polygonal meshes. In: Proceedings of Geometry Processing, pp. 21–30 (2006) 35. Kazhdan, M.M.: Reconstruction of solid models from oriented point sets. In: Symposium on Geometry Processing, pp. 73–82 (2005) 36. Knopf, G.K., Sangole, A.: Interpolating scattered data using 2D self-organizing feature maps. Graph. Models 66(1), 50–69 (2004) 37. Lange, C., Polthier, K.: Anisotropic fairing of point sets. Comput. Aided Geom. Des. 22(7), 680–692 (2005) 38. Levin, D.: The approximation power of moving least-squares. Math. Comput. 67(224), 1517–1531 (1998) 39. Linsen, L.: Point cloud representation. Tech. Rep. 2001-3, Fakultät für Informatik, Universität Karlsruhe (2001) 40. Lloyd, S.: An algorithm for vector quantizer design. IEEE Trans. Commun. 28(7), 84–95 (1982) 41. Mederos, B., Velho, L., de Figueiredo, L.H.: Smooth surface reconstruction from noisy clouds. J. Brazilian Comput. Soc. (2004) 42. Mitra, N.J., Guibas, L., Giesen, J., Pauly, M.: Probabilistic fingerprints for shapes. In: Symposium on Geometry Processing, pp. 121–130 (2006) 43. Morse, B.S., Yoo, T.S., Chen, D.T., Rheingans, P., Subramanian, K.R.: Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions. In: Proc. of SMI, pp. 89–98. IEEE Computer Society, Washington, DC (2001) 44. Mostafa, M.G.H., Yamany, S.M., Farag, A.A.: Integrating shape from shading and range data using neural networks. CVPR 02, 2015–2020 (1999) 45. Mumford, D.: The dawning of the age of stochasticity. In: V.I. Arnold, M. Atiyah, P. Lax, B. Mazur (eds.) Mathematics: Frontiers and Perspectives 2000, pp. 197–218. American Mathematical Society, Providence, RI (1999)


46. Ohtake, Y., Belyaev, A., Alexa, M., Turk, G., Seidel, H.P.: Multi-level partition of unity implicits. ACM Trans. Graph. 22(3), 463–470 (2003) 47. Ohtake, Y., Belyaev, A., Seidel, H.P.: 3D scattered data interpolation and approximation with multilevel compactly supported RBFs. Graph. Models 67(3), 150–165 (2005) 48. Ohtake, Y., Belyaev, A.G., Alexa, M.: Sparse low-degree implicits with applications to high quality rendering, feature extraction, and smoothing. In: Symposium on Geometry Processing, pp. 149–158 (2005) 49. Ohtake, Y., Belyaev, A.G., Seidel, H.P.: An integrating approach to meshing scattered point data. In: ACM Symposium on Solid and Physical Modeling (2005) 50. Parzen, E.: On the estimation of a probability density function and the mode. Ann. Math. Stat. 33, 1065–1076 (1962) 51. Patan´e, G.: SIMS: a multi-level approach to surface reconstruction with sparse implicits. In: IEEE International Conference on Shape Modeling and Applications, pp. 222–233 (2006) 52. Pauly, M., Keiser, R., Kobbelt, L.P., Gross, M.: Shape modeling with point-sampled geometry. In: Proceedings of SIGGRAPH 2003 22, 641–650 (2003) 53. Pauly, M., Mitra, N.J., Guibas, L.J.: Uncertainty and variability in point cloud surface data. In: Eurographics Symposium on Point-Based Graphics, pp. 77–84. Zurich, Switzerland (2004) 54. Pfister, H., Zwicker, M., Baar, J.V., Gross, M.: Surfels: Surface elements as rendering primitives. In: Proceedings of ACM SIGGRAPH 2000, pp. 335–342 (2000) 55. Poggio, T., Smale, S.: The mathematics of learning: dealing with data. Amer. Math. Soc. Notice 50(5), 537–544 (2003) 56. Rosenblatt, M.: Remarks on some non-parametric estimates of a density function. Ann. Math. Stat. 27, 832–837 (1956) 57. Rusinkiewicz, S., Levoy, M.: QSplat: a multiresolution point rendering system for large meshes. In: Proceedings of ACM SIGGRAPH 2000, pp. 343–352 (2000)

58. Saleem, W.: A flexible framework for learning-based surface reconstruction. Dissertation, Computer Science Department, University of Saarland, Saarbrücken (2004) 59. Samozino, M., Alexa, M., Alliez, P., Yvinec, M.: Reconstruction with voronoi central radial basis functions. In: Proceedings of Eurographics, pp. 51–60 (2006, in press) 60. Schall, O., Belyaev, A.G., Seidel, H.P.: Robust filtering of noisy scattered point data. In: M. Pauly, M. Zwicker (eds.) Eurographics Symposium on Point-Based Graphics 2005, pp. 71–77. Stony Brook, NY (2005) 61. Schölkopf, B., Smola, A.J.: Learning with Kernels. MIT Press, Cambridge, MA (2002) 62. Schölkopf, B., Steinke, F., Blanz, V.: Object correspondence as a machine learning problem. In: ICML ’05: Proceedings of the 22nd International Conference on Machine Learning, pp. 776–783. ACM, Boston (2005) 63. Schölkopf, B., Giesen, J., Spalinger, S.: Kernel methods for implicit surface modeling. In: Advances in Neural Information Processing Systems 17, pp. 1193–1200. MIT Press, Cambridge, MA (2005) 64. Shen, C., O’Brien, J.F., Shewchuk, J.R.: Interpolating and approximating implicit surfaces from polygon soup. ACM Trans. Graph 23(3), 896–904 (2004) 65. Steinke, F., Schölkopf, B., Blanz, V.: Support vector machines for 3D shape processing. Comput. Graph. Forum 24(3), 285–294 (2005) 66. Super, B.J.: Learning chance probability functions for shape retrieval or classification. In: CVPRW ’04: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW’04) vol. 6, p. 93. IEEE Computer Society, Washington, DC (2004) 67. Taubin, G.: A signal processing approach to fair surface design. In: SIGGRAPH ’95: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 351–358 (1995)

68. Tobor, I., Reuter, P., Schlick, C.: Multi-scale reconstruction of implicit surfaces with attributes from large unorganized point sets. In: Proceedings of SMI, pp. 19–30 (2004) 69. Turk, G., O’Brien, J.F.: Shape transformation using variational implicit functions. In: Proceedings of SIGGRAPH ’99, pp. 335–342. ACM/Addison-Wesley, Boston (1999) 70. Turk, G., O’Brien, J.F.: Modelling with implicit surfaces that interpolate. ACM Trans. Graph. 21(4), 855–873 (2002) 71. V´arady, L., Hoffmann, M., Kov´acs, E.: Improved free-form modeling of scattered data by dynamic neural networks. J. Geom. Graph. 3, 177–181 (1999) 72. V´azquez, P.P., Feixas, M., Sbert, M., Heidrich, W.: Automatic view selection using viewpoint entropy and its applications to image-based modelling. Comput. Graph. Forum 22(4), 689–700 (2003) 73. Verri, A., Camastra, F.: A novel kernel method for clustering. IEEE Trans. Pattern Anal. Mach. Intell. 27(5), 801–804 (2005) 74. Walder, C., Schölkopf, B., Chapelle, O.: Implicit surface modelling with a globally regularised basis of compact support. In: Computer Graphics Forum (Proc. Eurographics). Blackwell, Oxford (2006) 75. Willis, A., Speicher, J., Cooper, D.B.: Surface sculpting with stochastic deformable 3D surfaces. ICPR 2, 249–252 (2004) 76. Xie, H., McDonnell, K.T., Qin, H.: Surface reconstruction of noisy and defective data sets. In: IEEE Visualization, pp. 259–266 (2004) 77. Yang, M., Lee, E.: Improved neural network model for reverse engineering. Int. J. Prod. Res. 38(9), 2067–2078 (2000) 78. Yu, Y.: Surface reconstruction from unorganized points using self-organizing neural networks. In: IEEE Visualization, Conference Proceedings, pp. 61–64 (1999) 79. Zwicker, M., Pfister, H., van Baar, J., Gross, M.: Surface splatting. In: Proceedings of SIGGRAPH 2001, pp. 371–378. ACM, New York (2001)


WAQAR SALEEM is a Ph.D. student at the Max-Planck-Institut (MPI) Informatik, where he works on statistical methods for shape applications, automatic view generation of 3D models and shape similarity. He is also involved in maintenance and development of the AIM@SHAPE Shape Repository. He completed his M.Sc. (2004) and B.Sc. (2002) degrees in Computer Science, respectively, from Saarland University and Mohammad Ali Jinnah University, Karachi.

OLIVER SCHALL received a Diploma in Computer Science from RWTH Aachen University in 2003 and is now a Ph.D. student at the Max-Planck-Institut (MPI) Informatik in Saarbrücken, Germany. His current research interests include denoising of static and time-varying geometric data and surface reconstruction.

GIUSEPPE PATANÈ received a Ph.D. in Mathematics and Applications from the University of Genova (2005) and a Post-Laurea Degree Master in Application of Mathematics to Industry from the F. Severi National Institute for Advanced Mathematics - University of Milano (2000). Currently, he is a Research Fellow in the Shape Modelling Group at IMATI-CNR, Genova, Italy. His research interests include numerical linear algebra, surface approximation, parameterization, and analysis.

ALEXANDER BELYAEV is currently a Senior Researcher at the Computer Graphics Department of the Max-Planck-Institut (MPI) Informatik, Saarbrücken, Germany. His research interests include geometric modeling and processing and homogenization of PDE operators. Belyaev received his M.S. and Ph.D. degrees in Mathematics from Moscow State University in 1986 and 1990, respectively.

HANS-PETER SEIDEL is the scientific director and chair of the computer graphics group at the Max-Planck-Institut (MPI) Informatik and a professor of computer science at the University of Saarbrücken, Germany. Seidel has published some 200 technical papers in the field and has lectured widely on these topics. He has received grants from a wide range of organizations, including the German National Science Foundation (DFG), the German Federal Government (BMBF), the European Community (EU), NATO, and the German-Israel Foundation (GIF). In 2003 Seidel was awarded the Leibniz Preis, the most prestigious German research award, from the German Research Foundation (DFG). Seidel is the first computer graphics researcher to receive this award. In 2004 he was selected as founding chair of the Eurographics Awards Programme.