2D Parametrization of 3D Meshes

HELSINKI UNIVERSITY OF TECHNOLOGY Telecommunications Software and Multimedia Laboratory T-111.500 Seminar on computer graphics Spring 2003

2D Parametrization of 3D Meshes

Markku Rontu 49574D

May 5th 2003

Markku Rontu, HUT, www.hut.fi/u/mrontu, [email protected]

Abstract In computer graphics there is a need to describe surface characteristics, such as color and surface normal, of three-dimensional objects. This can be achieved by parametrizing the 3-dimensional surface of the mesh onto a flat 2-dimensional surface, easily representable as a simple rectangular map on which the surface characteristics are conveniently stored. The texture map is then used in the rendering process. Parametrizing arbitrary meshes is by no means simple. For a long time graphics artists had to hand-craft suitable mappings from the surface of polygonized objects to rectangular texture maps, which is a lot of work even for simple models. Several algorithms for obtaining suitable mappings automatically have been independently devised, and these make it possible to build tools that fully automate the parametrization. This work presents the basic ideas underlying the parametrization process and the solutions from two recent papers.

1 Introduction

Creating a visually compelling three-dimensional model is an artistic problem, but using the artist's creation in an application with real-time or non-real-time graphics is a problem for the programmers. How are the models defined? What kind of information do they contain? The best representation of the data varies from one application to another. In scientific visualization, where volumetric data is visualized, a natural approach is voxels, basically pixels in 3-dimensional space with color. The most common way to represent models is through polygonal (usually triangular) meshes defining the shape, with texture maps representing the surface characteristics. Sometimes, when the subdivision is fine enough, it suffices to provide surface characteristics at the vertices only, with linear interpolation in between. For most applications though, especially with modern hardware-accelerated graphics, the cost of defining the many more vertices needed for reasonable resolution is higher than using an equivalent texture map. This is natural considering that a 3-dimensional point needs more information than a point in the 2-dimensional parametrization. Besides computer graphics, the problem also occurs in cartography, where optimal mappings of the Earth's surface have long been researched.

1.1 Applications Representing color is a simple application of texture maps, but it is by no means the only one. One of the latest uses in real-time rendering is to simplify an original high-detail mesh until only a fraction of the polygons is left, and then use the original mesh to create a bump map or normal map for the low-detail mesh for better shading. One application of uniquely mapped models is a 3D paint system. There the artist uses a modeling program as usual but paints the textures straight onto the surface of the model, bypassing the need to open a regular painting application and paint on the often complex texture map. There are still other uses for 2-dimensional parametrizations. It is faster to remesh an arbitrary mesh if the surface is first flattened; this has been done in Desbrun et al., 2003.

1.2 Parametrization What is needed, then, is a mapping from the surface of the 3-dimensional mesh of the model to an isomorphic 2-dimensional flat surface. This is called parametrization. The surface can then be further mapped to the actual texture map and texels. Isomorphism roughly means that there is the same number of vertices with the same connections (edges) in the 3-dimensional surface and the 2-dimensional flat surface, i.e. we neither add nor remove vertices, nor do we change the edges; we just move the vertices around. With polygonal meshes we are interested in mappings that are piecewise-linear, so that if we provide the parametrization at the mesh vertices as texture coordinates, we obtain the intermediate parameter values with simple linear interpolation. This limitation means that in general a "visually perfect mapping" is not possible and there is a certain amount of error involved in the process. Therefore the real problem is minimizing that error so that the mapping is at least visually pleasing. What is considered a perfect or best possible mapping depends on the paper and is presented in sections 2 and 3. The next section also presents some desirable properties. Besides polygonal meshes, it is also possible to use higher-order parametric surfaces such as various splines. These kinds of surfaces have natural parametrizations that can be used trivially. Depending on the application it is possible to use them, but there are certain tradeoffs. With real-time graphics and modern hardware-accelerated graphics cards, sending geometry as parametric surfaces that are subsequently tessellated by the graphics processor saves bandwidth. Triangular models are however popular, and parametric surfaces have their problems in many uses. Higher-order surfaces and geometric primitives such as spheres are good for other applications, such as raytracing, which are out of the scope of this paper.
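The piecewise-linear interpolation mentioned above can be sketched concretely: given the parameter values (texture coordinates) at the three vertices of a triangle, the parameters at any interior point follow from its barycentric coordinates. A minimal illustration (the function names are ours, not from either paper):

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def interpolate_uv(p, tri_xy, tri_uv):
    """Linearly interpolate per-vertex (u, v) values at point p in a triangle."""
    weights = barycentric_coords(p, *tri_xy)
    return weights @ tri_uv
```

At a vertex the result is that vertex's texture coordinate; at the centroid it is the average of the three, exactly the behavior graphics hardware implements when rasterizing textured triangles.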

1.3 Desirable Properties Not just any mapping will do; there are in fact several requirements that define useful parametrizations. The first natural property is that the orientation or position of the model in the artist's modeling program should not matter, so the measure for the error must be rotation and translation invariant. Another natural requirement is that the mapping should work for a mesh of any resolution: it should not matter whether we use a low-polygon approximation or subdivide the mesh when we can afford to. Therefore we require independence of resolution. There are also other sources for models than artists, such as various 3-dimensional object scanners. The data often contains errors or is somewhat incomplete, for example where some part is blocked from the scanner. The sampling that defines the mesh might not be uniform across the surface, especially if the mesh is optimized and larger polygons are used wherever possible. It would be useful if the parametrization did not depend on uniform sampling but handled non-uniform situations gracefully. Therefore we also require independence of sampling. The error measure should also be continuous, so that whenever the triangulation changes a little, the measure also changes only a little; going to finer and finer triangulations should mean the measure approaches some continuous measure. The


Figure 1: Independence to resolution and sampling (Lévy et al., 2002)

measure should be additive, so that whenever two meshes are joined, the measure of the resulting mesh is the sum of the measures of the separate meshes minus that of any shared part. Preserving orientation means that triangles keep their orientation through the mapping. Without it, at a border where a so-called triangle flip occurs, there would be distortions because the mapping would change direction between the triangles.
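The orientation property is easy to check on a computed parametrization: a triangle flip shows up as a negative signed area in parameter space. A small sketch (helper names are ours):

```python
def signed_area_2d(a, b, c):
    """Twice the signed area of 2D triangle (a, b, c); the sign encodes orientation."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def count_flipped(uv, triangles):
    """Count triangles whose image in (u, v) parameter space is flipped."""
    return sum(1 for i, j, k in triangles
               if signed_area_2d(uv[i], uv[j], uv[k]) < 0)
```

A parametrization that preserves orientation everywhere gives `count_flipped(...) == 0`.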

1.4 Terminology

A short note on terminology.

Texture atlas means the same thing as a texture map: usually a regularly subdivided rectangular grid storing some characteristic of a surface.

Isomorphic A mapping is isomorphic if it preserves structure. In this case there is the same number of vertices with the same connections (edges) in the 3-dimensional surface and the 2-dimensional flat surface, i.e. we neither add nor remove vertices or edges; we just move the vertices around.

Homeomorphic Two objects are homeomorphic if they can be deformed into each other by a continuous, invertible mapping. The mapping is one-to-one and onto, and its inverse is continuous.

Conformal A mapping is conformal if it preserves angles (see Figure 2).

Figure 2: In a conformal map, the tangent vectors to the iso-u and to the iso-v curves are orthogonal and have the same length (Lévy et al., 2002).

Authalic A mapping is authalic if it preserves areas.

1-ring is a vertex of a mesh together with its immediate neighbors.

Euler characteristic is a topological invariant of a surface that can be computed in many ways, one being χ = V − E + F, where V is the number of vertices, E the number of edges and F the number of faces.

Chi energy is an energy measure for parametrizations derived from the Euler characteristic.
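The Euler characteristic definition above translates directly into code for a triangle mesh; a quick sketch:

```python
def euler_characteristic(vertices, triangles):
    """Compute χ = V − E + F for a triangle mesh, counting each edge once."""
    edges = set()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edges.add((min(a, b), max(a, b)))   # undirected edge
    return len(vertices) - len(edges) + len(triangles)
```

For any mesh homeomorphic to a sphere (e.g. a tetrahedron) this returns 2, and for a disk-like patch it returns 1, which is one way to verify that a segmented chart is indeed disk-like.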

1.5 Process

For now we have talked only about the parametrization itself, but the actual parametrization process involves three steps. First the model is segmented into pieces homeomorphic to disks. Then the parametrization is done for each piece separately and a 2-dimensional surface (chart) is obtained for each piece. Finally the flattened charts are packed as optimally as is reasonable into the texture atlas. The model is then ready for use.

1.6 Structure The next two sections describe two methods for obtaining the parametrization. After that, the solutions are compared in section 4. Then the two other significant steps in the process, segmentation and packing, are described in sections 5 and 6 respectively. Finally, section 7 offers some words about the future.

2 Least Squares Conformal Maps (LSCM)

Lévy et al., 2002 present a solution for obtaining visually satisfying mappings that they derive from conformal maps using complex numbers. Riemann's theorem states that for any surface homeomorphic to a disc, it is possible to find a conformal parametrization. Within the constraint of a piecewise-linear mapping of a triangulation this is in general impossible, so the error must be minimized, which they do in the least squares sense. We can express the conformality condition seen in Figure 2 as

    N(u, v) × ∂X/∂u (u, v) = ∂X/∂v (u, v)    (1)

If we now represent each triangle in its local basis as pairs (x1, y1), (x2, y2) and (x3, y3), this becomes

    ∂X/∂u (u, v) − i ∂X/∂v (u, v) = 0    (2)

using complex numbers X = x + iy. Considering the derivatives of the inverse function we get

    ∂U/∂x (x, y) + i ∂U/∂y (x, y) = 0    (3)

where U = u + iv. Finally we enforce this condition over the entire triangle and get

    C(T) = ∫_T |∂U/∂x + i ∂U/∂y|² dA = |∂U/∂x + i ∂U/∂y|² A_T    (4)

where C is called the criterion (to be minimized) for a triangle T; for a triangulation, sum over all triangles. A_T is the area of the triangle T. The problem is now to find such complex numbers (parameter values) for each vertex that the criterion is indeed minimized. This involves defining the gradient, writing the equation in matrix form and then solving the resulting system. For the triangle with vertices (x1, y1), (x2, y2) and (x3, y3) and scalars u1, u2 and u3 we have the gradient

    (∂u/∂x, ∂u/∂y)ᵀ = (1/d_T) [ y2−y3  y3−y1  y1−y2 ; x3−x2  x1−x3  x2−x1 ] (u1, u2, u3)ᵀ    (5)

where d_T = (x1·y2 − y1·x2) + (x2·y3 − y2·x3) + (x3·y1 − y3·x1) is twice the area of the triangle. Now if we use the complex number representation we get

    ∂u/∂x + i ∂u/∂y = (i/d_T) (W1 W2 W3) (u1, u2, u3)ᵀ    (6)

where

    W1 = (x3 − x2) + i(y3 − y2)
    W2 = (x1 − x3) + i(y1 − y3)    (7)
    W3 = (x2 − x1) + i(y2 − y1)

We can now write Equation 3 as

    ∂U/∂x + i ∂U/∂y = (i/d_T) (W1 W2 W3) (U1, U2, U3)ᵀ = 0    (8)

where Uj = uj + i·vj. Thus we can write the criterion for the whole triangulation that we want to minimize as

    C(U1, …, Un) = Σ_{T ∈ triangulation} C(T)    (9)

with

    C(T) = (1/d_T) |(W_{j1,T} W_{j2,T} W_{j3,T}) (U_{j1}, U_{j2}, U_{j3})ᵀ|²    (10)

where each triangle T has vertices with indices j1, j2 and j3.

Lévy et al., 2002 further write this as real matrices and vectors that can then be solved numerically. To get a non-trivial solution to the optimization problem some boundary points must be fixed, so some of the Uj are pinned. They propose to take the two vertices furthest apart and pin them in the texture atlas.
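As a rough illustration of the pipeline above — local frames, the Wj coefficients of Equation 7, per-triangle weighting by 1/√d_T so the summed squared residuals match Equation 9, and two pinned vertices — here is a dense least-squares sketch. It is our simplification (complex-valued, dense `lstsq`) of the paper's sparse real-valued formulation, suitable only for toy meshes:

```python
import numpy as np

def lscm(points, triangles, pinned, pin_uv):
    """Least squares conformal map sketch (dense; small meshes only).

    points: (n, 3) vertex positions; triangles: index triples;
    pinned: two vertex indices to fix; pin_uv: their fixed (u, v) values.
    Returns an (n, 2) array of parameter values.
    """
    n = len(points)
    rows = []
    for tri in triangles:
        p = [points[i] for i in tri]
        # Local orthonormal 2D frame in the triangle's plane.
        e1 = (p[1] - p[0]) / np.linalg.norm(p[1] - p[0])
        nrm = np.cross(p[1] - p[0], p[2] - p[0])
        e2 = np.cross(nrm / np.linalg.norm(nrm), e1)
        xy = [((q - p[0]) @ e1, (q - p[0]) @ e2) for q in p]
        (x1, y1), (x2, y2), (x3, y3) = xy
        dT = (x1*y2 - y1*x2) + (x2*y3 - y2*x3) + (x3*y1 - y3*x1)
        W = [complex(x3 - x2, y3 - y2),        # Equation 7
             complex(x1 - x3, y1 - y3),
             complex(x2 - x1, y2 - y1)]
        row = np.zeros(n, dtype=complex)
        for idx, w in zip(tri, W):
            row[idx] = w / np.sqrt(abs(dT))    # so |row · U|^2 = C(T)
        rows.append(row)
    A = np.array(rows)
    free = [i for i in range(n) if i not in pinned]
    Upin = np.array([complex(u, v) for u, v in pin_uv])
    b = -A[:, pinned] @ Upin                   # move pinned columns to the RHS
    Ufree, *_ = np.linalg.lstsq(A[:, free], b, rcond=None)
    U = np.zeros(n, dtype=complex)
    U[pinned] = Upin
    U[free] = Ufree
    return np.stack([U.real, U.imag], axis=1)
```

For a mesh that is already planar the residual is zero, and the map reproduces the plane up to the similarity transform fixed by the two pins.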

3 Intrinsic Parametrizations

Desbrun et al., 2002 define their parametrization by examining desirable intrinsic properties of meshes. The measure or score of a mesh must satisfy the basic properties already mentioned in section 1.3. They show that the only measures satisfying these properties for a triangulation are three so-called Minkowski functionals: area,

Figure 3: Mapping from 3D 1-ring to 2D (Desbrun et al., 2002)

Euler characteristic and perimeter. These three functionals can be thought of as forming a basis of a 3-dimensional space, so that all admissible functionals are a linear combination of the three. Furthermore, the simplest relevant distortion measures between two 1-rings (see Figure 3) form a 2-dimensional space. An optimal parametrization tries to minimize the distortion measure in the mapping from a given 3D 1-ring to an isomorphic 2D 1-ring (see Figure 3) for a fixed, given boundary mapping (so actually only the center vertex moves). The minimum of the distortion measure is obtained where the partial derivatives of the measure are zero.

The first optimal mapping derived is the Discrete Conformal Mapping, which is based on the notion of Dirichlet energy borrowed from differential geometry:

    Area = (1/2) ∫_M |f_u × f_v| du dv ≤ (1/2) ∫_M |f_u| |f_v| du dv ≤ (1/4) ∫_M (|f_u|² + |f_v|²) du dv = Dirichlet energy    (11)

where M is a surface patch. The minimum of this turns out to be a conformal mapping. But we are interested in what this becomes in a triangulation. The energy for a 1-ring in a triangulation is

    E_A = Σ_{oriented edges (i,j)} cot(α_ij) |u_i − u_j|²    (12)

See Figure 3 for the angles and vertices. Since the distortion measure is quadratic, differentiation results in a simple linear system which has a unique (non-trivial) solution once the boundaries are fixed. The minimum is defined to be the discrete conformal map:

    ∂E_A/∂u_i = Σ_{j ∈ N(i)} (cot α_ij + cot β_ij)(u_i − u_j) = 0    (13)

N(i) denotes the neighborhood of vertex i.
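Equation 13 with a fixed boundary is just a linear system in the free vertices. A dense sketch of assembling the cotangent weights and solving (helper names are ours; real implementations use sparse matrices):

```python
import numpy as np

def cot_at(a, b, c):
    """Cotangent of the angle at vertex a in the 3D triangle (a, b, c)."""
    u, v = b - a, c - a
    return (u @ v) / np.linalg.norm(np.cross(u, v))

def discrete_conformal(points, triangles, boundary_uv):
    """Discrete conformal parametrization with fixed boundary (Equation 13).

    boundary_uv: dict {vertex index: (u, v)} of pinned boundary vertices.
    """
    n = len(points)
    W = np.zeros((n, n))
    for i, j, k in triangles:
        # For each edge, add the cotangent of the angle opposite to it.
        for (a, b), opp in (((i, j), k), ((j, k), i), ((k, i), j)):
            w = cot_at(points[opp], points[a], points[b])
            W[a, b] += w
            W[b, a] += w
    L = np.diag(W.sum(axis=1)) - W                 # cotangent Laplacian
    free = [i for i in range(n) if i not in boundary_uv]
    fixed = sorted(boundary_uv)
    rhs = -L[np.ix_(free, fixed)] @ np.array([boundary_uv[i] for i in fixed])
    uv = np.zeros((n, 2))
    for i in fixed:
        uv[i] = boundary_uv[i]
    uv[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)
    return uv
```

On a symmetric flat 1-ring (four corner vertices pinned around a center vertex) the solve places the center at the centroid, as Equation 13 demands.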

The second optimal mapping presented in Desbrun et al., 2002 is the novel Discrete Authalic Mapping. It is based on the Euler characteristic, from which they derive the so-called chi energy using the fact that the Euler characteristic is also the integral of Gaussian curvature, which at a vertex equals 2π − Σ_j θ_j, where the θ_j are the tip angles around the vertex. The chi energy is then

    E_χ = Σ_{j ∈ N(i)} (cot γ_ij + cot δ_ij) / |x_i − x_j|² · (u_i − u_j)²    (14)

The corresponding optimization condition becomes

    ∂E_χ/∂u_i = Σ_{j ∈ N(i)} (cot γ_ij + cot δ_ij) / |x_i − x_j|² · (u_i − u_j) = 0    (15)

which is known as the discrete authalic map. We can now gather the conditions for all the vertices into matrices and vectors and solve the whole system for its minimum simultaneously (as with the previous method).
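Both discrete maps therefore reduce to the same kind of solve with different edge weights; given the angles of Figure 3 measured on the mesh, the weights in Equations 13 and 15 are simply (a tiny sketch, angle placement as in the paper's figure):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def conformal_weight(alpha, beta):
    """Edge weight of the discrete conformal map (Equation 13)."""
    return cot(alpha) + cot(beta)

def authalic_weight(gamma, delta, edge_len):
    """Edge weight of the discrete authalic map (Equation 15)."""
    return (cot(gamma) + cot(delta)) / edge_len ** 2
```

Swapping one weight function for the other in the linear-system assembly is all that separates the two parametrizations.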

4 Comparison

Desbrun & Cohen-Steiner, 2002 show that least squares conformal maps and the discrete natural conformal parametrization are in fact one and the same. This means whatever applies to one applies to the other, and the results are the same. Lévy, 2002 however notes that in the case of a fixed boundary this is trivially true, but when the boundary is free more care must be taken in the proof. They conclude that the equivalence has been shown, but in practice the two methods gave different results, and there may be numerical conditioning problems.

The definitions of the energy minima in all the presented methods lead to sparse linear systems when written as matrices and vectors. These are efficiently solved using iterative methods such as the conjugate gradient method. As noted in the papers, more complex distortion measures lead to non-linear energies and equations. Some of the existing alternatives do have non-linear equations that provide pleasing parametrizations, but they are notably slow and can get caught in a local minimum (instead of finding the global minimum).

Lévy et al., 2002 constrain two points from the boundary, while Desbrun et al., 2002 constrain the entire boundary and then provide an algorithm for adjusting the boundary to further minimize the distortion measure. Desbrun et al., 2002 get equivalent natural

Figure 4: Example with LSCM (Lévy et al., 2002)

borders (Neumann boundaries) and better mappings in theory. These are known as Natural Conformal Maps. Figure 4 shows a bunny parametrized with LSCM. Note that there are additional texels around the edges of the charts to help reduce mipmapping artifacts. Figure 5 shows how the discrete authalic parametrization (middle) and the discrete conformal parametrization (right) behave in the case of highly non-uniform sampling. Both provide a smooth, natural parametrization, unlike many earlier methods. The segmentation and packing algorithms in the following sections can be applied to the approaches from both papers. Both methods can also use post-processing to further optimize stretch in the maps, as developed by Sander et al., 2001.
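The sparse systems mentioned above can be handed to any iterative solver; one possible setup uses SciPy's conjugate gradient on a small SPD matrix standing in for the free-vertex block of a cotangent Laplacian (the library choice is ours, not the papers'):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# A tiny SPD system; real meshes yield the same structure at scale,
# with one row per free vertex and a handful of nonzeros per row.
A = csr_matrix(np.array([[ 4.0, -1.0,  0.0],
                         [-1.0,  4.0, -1.0],
                         [ 0.0, -1.0,  4.0]]))
b = np.array([1.0, 2.0, 3.0])

x, info = cg(A, b)   # info == 0 signals convergence
```

Because only a few entries per row are nonzero, each CG iteration is linear in the number of edges, which is what makes these methods fast on large meshes.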

5 Segmentation

Since 3D models are often closed surfaces and thus not homeomorphic to disks, they must be segmented into smaller pieces (charts) before the aforementioned parametrization algorithms can be applied. Just like with the parametrization itself, the usual solution has been to force the artist to divide the model into suitable pieces by hand. An automatic solution is desirable for the same reasons. The algorithm should divide the model into as logical pieces as possible, and errors and deformation due to the segmentation should be minimized where possible.

Figure 5: Example with DCP (Desbrun et al., 2002)


Figure 6: Segmentation (Lévy et al., 2002)

Lévy et al., 2002 mention that natural boundaries are zones of high curvature. This is because segment boundaries create small discontinuities anyway, and if the model is shaded using surface normals, as is common, the variations in the shading (i.e. light and dark) will dominate the discontinuity, making it hard to notice. A second point mentioned is that cylindrical shapes are useful to detect, because cutting a cylinder and rolling its surface open creates an easy parametrization.

The segmentation algorithm presented in Lévy et al., 2002 works as follows. Classify the edges of the mesh so that some portion (e.g. 5%) of the sharpest edges (e.g. based on the normals of the adjacent triangles) are marked as so-called features, which make natural boundaries because they are sharp edges. These feature curves are then further processed to reduce the number of small features in zones of high curvature, and small features caused by noise are filtered out. Once the features have been detected, the charts are created using a modified s-source Dijkstra algorithm. First the distances of all faces to the features are calculated, and the local maxima of the distance function (faces that are as far away as possible from features) are set as seeds. The algorithm then grows charts away from these seeds, using the distance to features as the priority and assigning neighboring faces to the chart that began at the source seed. The end result is that the chart boundaries lie close to the original feature lines and every face belongs to some chart.

Figure 6 demonstrates the segmentation. Leftmost in A are the feature lines. Next, B visualizes direct distance from the seeds, which is not as useful as the distance from features shown in C, as it creates more patches which are also less natural. Finally, rightmost in D are the created segments, colored. Figure 4 also shows the segmented bunny; we can see that the segments are in fact quite naturally placed.
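The chart-growing step can be sketched as a priority-queue flood fill over faces, with distance to the nearest feature as the priority (our simplified reading of the paper's modified Dijkstra; the data structures are assumptions):

```python
import heapq

def grow_charts(adjacency, dist_to_feature, seeds):
    """Grow charts outward from seed faces, farthest-from-feature faces first.

    adjacency: dict {face: [neighbor faces]}
    dist_to_feature: dict {face: distance to the nearest feature}
    seeds: seed faces (local maxima of dist_to_feature)
    Returns {face: chart id}.
    """
    chart = {}
    heap = []
    for cid, s in enumerate(seeds):
        heapq.heappush(heap, (-dist_to_feature[s], s, cid))
    while heap:
        _, f, cid = heapq.heappop(heap)
        if f in chart:
            continue                       # already claimed by some chart
        chart[f] = cid
        for g in adjacency[f]:
            if g not in chart:
                heapq.heappush(heap, (-dist_to_feature[g], g, cid))
    return chart
```

Since faces far from features are expanded first, the fronts of competing charts meet near the feature lines, which is exactly where the boundaries should end up.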


Figure 7: Packing: The current chart is shown in yellow and the horizon in blue. The new chart is green; its bottom horizon is pink and top horizon red. Black denotes newly wasted space. (Lévy et al., 2002)

6 Packing

Once a mesh has been segmented into charts, the charts must be packed into the texture map (texture atlas). The fit should be as tight as possible to conserve texture memory. Packing has been proved NP-complete (it is a variant of bin packing), so an optimal packing is generally not attainable, and the solutions in use employ various heuristics to keep the process fast. For example, it is possible to pack the smallest bounding rectangles of the segments, but this wastes space when the boundaries have arbitrary shapes (concave, irregular), as is the case with the presented segmentation algorithm.

Lévy et al., 2002 present a method where each chart is first scaled so that its area in (u, v) parameter space is the same as in (x, y, z) space (or whatever the desired parameter-space size of the chart is). Then the maximum diameter of each chart is aligned vertically and the charts are sorted in decreasing order of size. Starting with the largest chart, the charts are inserted into the atlas at the best possible place. The insertion method keeps track of the horizon and minimizes the wasted space left between the bottom horizon of the added chart and the existing horizon, as displayed in Figure 7. The current horizon is searched for the right place based on the bottom horizon of the newly inserted chart; then the horizon of the selected place is updated with the top horizon of the new chart.
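A much-simplified version of the horizon idea, with charts approximated by rectangles (the paper tracks per-column bottom and top horizons of the actual chart shapes), might look like this:

```python
def pack_rectangles(rects, atlas_width):
    """Horizon packing sketch: place (width, height) rectangles largest-first,
    at the x position that wastes the least space under the horizon."""
    horizon = [0] * atlas_width
    positions = [None] * len(rects)
    order = sorted(range(len(rects)), key=lambda i: rects[i][0] * rects[i][1],
                   reverse=True)                      # largest charts first
    for i in order:
        w, h = rects[i]
        best = None                                   # (waste, base, x)
        for x in range(atlas_width - w + 1):
            base = max(horizon[x:x + w])              # resting height here
            waste = sum(base - horizon[c] for c in range(x, x + w))
            if best is None or (waste, base) < (best[0], best[1]):
                best = (waste, base, x)
        _, base, x = best
        positions[i] = (x, base)
        for c in range(x, x + w):                     # raise the horizon
            horizon[c] = base + h
    return positions, horizon
```

Replacing the flat rectangle bottoms and tops with the charts' actual per-column horizons recovers the paper's method, which explains why its packing cost stays so low: each insertion is a single sweep along the horizon.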


Figure 8: Results (Lévy et al., 2002)

7 Final Words

As the capabilities of modern graphics cards have grown, so has the complexity of the meshes. The speed has also made it possible to use e.g. texture and normal maps for better-looking meshes. Real-time graphics has rapidly gone from sprites to hundreds and now thousands of faces per model. For example, Sweeney, 2003 describes the next-generation Unreal technology that uses 2,000,000-polygon original meshes which then undergo heavy processing (and presumably are simplified before use). Automated, robust tools are a necessity for such 3D content creation and processing.

The presented ideas from Lévy et al., 2002 and Desbrun et al., 2002 enable the creation of such tools that handle the heavy parametrization process. They are not the only methods, but they are the first to provide robust solutions efficiently. Better parametrizations are possible with non-linear equations, but solving those is neither as easy nor as fast. Lévy et al., 2002 report times on the order of ten seconds to a couple of minutes on a modest personal computer for the parametrization, depending on model complexity (see Figure 8). The segmentation takes from a few seconds to about a minute, and packing times are said to be insignificant, less than a second for all tested data sets. The authors of both methods however identify the need for further research to optimize the solvers, making extremely large models feasible. It should be noted that the meshes used are smaller than what Sweeney, 2003 mentions, only ten to a hundred thousand vertices. Perhaps, since the packing cost is indeed so small, the packing could be improved: the quoted packing efficiencies are on the order of 50-60%, which is quite a lot of wasted space that could be utilized.

Errata: Lévy, 2002 corrects the original LSCM paper, admitting that triangle flips can occur even though the authors have not seen them in practice. They are investigating the problem and present three workarounds.
Desbrun & Cohen-Steiner, 2002 likewise provide hindsight into the intrinsic parametrizations.

It should be noted that it is quite impossible to paint, for example, the desired colors onto an automatically obtained 2-dimensional texture atlas (how would you paint the map in Figure 4?). As a painting surface it is too irregular, and the borders between the separate charts are far from obvious. With artist-created low-polygon meshes this has been possible. There will be a need for something like 3-dimensional paint systems, or at least procedural textures, to properly define the desired characteristics.

REFERENCES

Desbrun, Mathieu, & Cohen-Steiner, David. 2002. Hindsight: LSCM and DNCP are one and the same.

Desbrun, Mathieu, Meyer, Mark, & Alliez, Pierre. 2002. Intrinsic Parametrizations of Surface Meshes. In: EUROGRAPHICS.

Desbrun, Mathieu, Cohen-Steiner, David, Alliez, Pierre, Devillers, Olivier, & Lévy, Bruno. 2003. Anisotropic Polygonal Remeshing.

Lévy, Bruno. 2002. Least Squares Conformal Maps - erratum and discussions.

Lévy, Bruno, Petitjean, Sylvain, Ray, Nicolas, & Maillot, Jérome. 2002 (July). Least Squares Conformal Maps for Automatic Texture Atlas Generation. In: ACM SIGGRAPH Proceedings.

Sander, Pedro V., Snyder, John, Gortler, Steven J., & Hoppe, Hugues. 2001. Texture Mapping Progressive Meshes. Pages 409–416 of: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press.

Sweeney, Tim. 2003. Tim Sweeney 64-bit Interview.
