Fast Collision Detection Methods for Joint Surfaces

Elsevier Editorial System(tm) for Journal of Biomechanics Manuscript Draft

Manuscript Number:
Title: Fast Collision Detection Methods for Joint Surfaces
Article Type: Full Length Article (max 3000 words)
Section/Category:
Keywords: Joint modeling; Graphical simulation; Collision detection; Penetration depth
Corresponding Author: Mr. Ehsan Arbabi
Corresponding Author's Institution:
First Author: Ehsan Arbabi, PhD Student
Order of Authors: Ehsan Arbabi, PhD Student; Ronan Boulic, PhD; Daniel Thalmann, PhD, Full Professor
Manuscript Region of Origin:
Abstract:

Cover Letter

Type: Original Article
Title: Fast Collision Detection Methods for Joint Surfaces
Authors: Ehsan Arbabi a, Ronan Boulic b and Daniel Thalmann c
Affiliations: a, b, c Virtual Reality Lab., Swiss Federal Institute of Technology in Lausanne (EPFL), Station 14, 1015 Lausanne, Switzerland
{a ehsan.arbabi, b ronan.boulic, c daniel.thalmann}@epfl.ch
Corresponding Author: Ehsan Arbabi
Contact address: EPFL-IC-ISIM-VRLAB, INJ 133 (Batiment INJ), Station 14, 1015 Lausanne, Switzerland
Telephone numbers: 0041-21 69 35244 / 0041-21 69 35248
Fax number: 0041-21 69 35328
E-mail address: [email protected]
Keywords: Joint modeling; Graphical simulation; Collision detection; Penetration depth
Word Count (Introduction to Conclusion):

~2800

All authors were fully involved in the study and preparation of the manuscript and the material within has not been and will not be submitted for publication elsewhere.

Conflict of Interest Statement

There is no conflict of interest to be stated.

Referee Suggestions

Dr. Franck Multon
Associate professor, University Rennes 2
Associate researcher, IRISA
[email protected]
Projet Bunraku - Irisa/Inria de Rennes, Campus de Beaulieu, 35042 Rennes Cedex
Telephone: +33 299 847 100
Fax: +33 299 847 171

Manuscript

Fast Collision Detection Methods for Joint Surfaces

Ehsan Arbabi a, Ronan Boulic b and Daniel Thalmann c

a, b, c Virtual Reality Lab., Swiss Federal Institute of Technology in Lausanne (EPFL), Station 14, 1015 Lausanne, Switzerland
{a ehsan.arbabi, b ronan.boulic, c daniel.thalmann}@epfl.ch

Abstract

In recent years, medical diagnosis and surgery planning have often required the precise evaluation of joint movements. This has led to the exploitation of reconstructed three-dimensional models of the joint tissues (bones, cartilages, etc.) obtained from CT or MR images. In this context, efficiently and precisely detecting collisions among the virtual tissues is critical for guaranteeing the quality of any further analysis. Common collision detection methods are usually designed for general-purpose applications in computer graphics or CAD-CAM. Hence, they face worst-case scenarios when handling the quasi-perfect concavity-convexity matching of the articular surfaces. In this paper, we present two fast collision detection methods that take advantage of the relative proximity and the nature of the movement to discard unnecessary calculations. The proposed approaches also provide the penetration depth. They are compared with other collision detection methods and tested in different biomedical scenarios related to the human hip joint.

Keywords: Joint modeling; Graphical simulation; Collision detection; Penetration depth

1 Introduction

The quality of medical diagnosis related to joint pathologies has improved significantly through computer-aided intervention and biomedical simulations (Chegini et al., 2006; Gilles et al., 2006; Kang et al., 2003; Armand et al., 2004; France et al., 2005; Sfantos and Aliabadi, 2007; Arbabi et al., 2007a). Investigating joint behavior in normal and pathological cases (Martin, 2005) can help medical doctors reach a more accurate diagnosis and surgery plan. Human joint simulations usually start by reconstructing three-dimensional meshes of the joint tissues (bones, cartilages, etc.) from CT or MR images (Gilles et al., 2006) and estimating the center of rotation, such as for the hip (Kang et al., 2003; Camomilla et al., 2006). Then, a critical task to handle is the precise detection of collisions between virtual tissues, so that the amount of stress in the colliding areas is faithfully evaluated (Chegini et al., 2006), or the range of motion in a specific orientation is correctly estimated (Kang et al., 2003; Armand et al., 2004; Arbabi et al., 2007a). When simulating the deformations occurring within the joint capsule, the applied collision detection technique plays a key role, at least in terms of performance, and most of the time in terms of the validity of the simulation output (e.g., stress distribution). In this paper, we present two fast collision detection methods that are highly adapted to handling joint behavior (i.e., rotating and/or sliding movements). They not only increase the speed of collision detection (a critical issue for real-time applications), but also provide the necessary information for soft-tissue deformation simulation (i.e., colliding pairs and penetration depths in the appropriate directions). The proposed methods can be used either independently or in combination with any tissue mechanical model for further mechanical calculation.

2 Previous works

Many methods for collision detection are based on "bounding volume hierarchies". Different kinds of bounding volumes, such as AABBs, DOPs, OBBs, spheres, and spherical shells, have been proposed (Moore and Wilhelms, 1988; Krishnan et al., 1998; Larsson and Akenine-Möller, 2001; Zachmann and Langetepe, 2003; Teschner et al., 2005; Volino and Magnenat-Thalmann, 1994; Volino et al., 2005). In other cases, the space is divided into cells to which potentially colliding elements are associated; the collision detection is then limited to the subset of elements belonging to each cell (Teschner et al., 2003). Methods based on distance fields, which specify the minimum distance to a closed surface, are described in Teschner et al. (2005). Image-space techniques have also been proposed for collision detection. These approaches commonly process projections of the objects to accelerate collision queries (Baciu et al., 1999; Heidelberger et al., 2003). Most of the mentioned methods are designed for general or semi-general purposes. Therefore, their algorithms must account for any kind of possible collision among virtual objects. Other methods were designed with a specific application in mind; in these methods, the processing time is reduced by taking advantage of domain-specific limitations. Recently, such a collision detection method has been proposed for dealing with situations in which soft structures are in constant but dynamic contact, by performing a spherical sampling of one mesh (Maciel et al., 2007). The approximation resulting from the sampling allows this approach to be exploited for real-time interactions. We propose two high-accuracy collision detection methods that are especially appropriate for handling the quasi-perfect concavity-convexity matching of the articular surfaces. This is achieved by taking advantage of the movement being limited to rotation. Both cylindrical and radial segmentations of the space are exploited for organizing the model data and recovering the penetration depth at low cost, as detailed in the following sections.

3 Collision in rotating objects

When two objects collide with each other during rotation, two kinds of penetration may occur: 1) tangential or 2) radial. The tangential penetration happens in the angular direction, which is tangential to the rotational trajectory (see region 1 in Figure 1). On the other hand, the radial penetration usually happens between surfaces that are quasi-sliding on each other during rotation; the penetration therefore happens in the radial direction (see region 2 in Figure 1). In this paper, both tangential and radial penetrations are considered, and two specific methods are described in Section 4 and Section 5, respectively.


4 Tangential collision detection during rotation

The method relies on a cylindrical segmentation of space around a given rotation axis. This approach is an extension of the spatial discretization used by Arbabi et al. (2007a). The algorithm returns the penetrating mobile vertices and the corresponding penetrating fixed triangles in the direction defined by the rotation axis. Such information consequently provides the angular penetration depths without any additional computation. For clarity, we consider one of the objects as fixed and call it the fixed object (the other one is called the mobile object). The two main steps of the algorithm for a constant axis of rotation are: (1) cylindrical segmentation of the fixed object, by storing polygon indices in the corresponding table cell(s); (2) cylindrical collision detection of the mobile object vertices, by determining the table cell (i.e. ring) they belong to and checking potential collisions along a circular trajectory with the fixed polygons stored in the cell.

4.1 Axis-aligned coordinate system

We first transform both objects into a new coordinate system whose z-axis is aligned with the rotation axis considered for the collision detection. The rotation axis is either the joint's unique anatomical degree of mobility or the instantaneous rotation axis, in case the joint possesses multiple degrees of mobility. This transformation can be repeated if the rotation axis changes.
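For illustration, a minimal sketch of this change of frame is given below; the helper name and the NumPy-based representation of the vertices are assumptions, not part of the manuscript.

```python
import numpy as np

def align_z_with_axis(vertices, axis):
    """Rotate vertices so that the given rotation axis maps onto the z-axis.

    vertices: (N, 3) array of Cartesian vertex positions.
    axis: 3-vector, the (instantaneous) rotation axis of the joint.
    """
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(a, z)                      # axis of the aligning rotation
    c = float(np.dot(a, z))                 # cosine of the angle between a and z
    if np.linalg.norm(v) < 1e-12:           # a is already (anti-)parallel to z
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])               # cross-product matrix of v
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))  # Rodrigues' formula
    return vertices @ R.T
```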

4.2 Cylindrical segmentation of the space around the objects and filling the table

Each vertex of both the mobile and the fixed object is converted from Cartesian coordinates (X, Y, Z) to cylindrical coordinates (r, θ, z). The vertices' cylindrical coordinates are first used to detect and prune the non-colliding parts from further calculations.
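A minimal sketch of this conversion, assuming the axis-aligned frame of Section 4.1 and a NumPy array of vertices (the function name is illustrative):

```python
import numpy as np

def cartesian_to_cylindrical(vertices):
    """Convert axis-aligned Cartesian vertices (X, Y, Z) to cylindrical (r, theta, z)."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    r = np.hypot(x, y)            # distance to the rotation (z) axis
    theta = np.arctan2(y, x)      # angle around the rotation axis, in (-pi, pi]
    return np.column_stack((r, theta, z))
```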


The space is then cylindrically partitioned into ring cells, indexed by 'z' and 'r' (Figure 2: Left). A table gathering these ring cells stores the information of the fixed object (Figure 2: Middle and Right). For each fixed polygon, its index is stored in all the ring cells intersecting the polygon.
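A minimal sketch of how such a table could be filled is shown below. All names are illustrative, and the cell/polygon intersection test is simplified to the (z, r) range spanned by the triangle's vertices, which an actual implementation would refine:

```python
from collections import defaultdict
import math

def build_ring_table(fixed_vertices_cyl, fixed_triangles, dz, dr):
    """Register each fixed triangle index in the (z, r) ring cells it may intersect.

    fixed_vertices_cyl: list of (r, theta, z) tuples of the fixed object.
    fixed_triangles: list of (i0, i1, i2) vertex-index triples.
    dz, dr: cell sizes along z and r.
    """
    table = defaultdict(list)  # (z_index, r_index) -> list of triangle indices
    for tri_idx, tri in enumerate(fixed_triangles):
        rs = [fixed_vertices_cyl[i][0] for i in tri]
        zs = [fixed_vertices_cyl[i][2] for i in tri]
        # Approximation: store the triangle in every cell covered by the
        # (z, r) range of its vertices.
        for zi in range(math.floor(min(zs) / dz), math.floor(max(zs) / dz) + 1):
            for ri in range(math.floor(min(rs) / dr), math.floor(max(rs) / dr) + 1):
                table[(zi, ri)].append(tri_idx)
    return table
```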

4.3 Detecting the collision

First, the algorithm searches for the ring cell containing each mobile vertex. Since the mobile object is rotating around the z-axis, a mobile vertex can only collide with the fixed polygons stored in the unique associated ring. Therefore, we just need to check the angular distance between the mobile vertex and the fixed polygons indexed in the table cell. However, a fixed polygon may not collide with the mobile vertex even if they belong to the same ring. So, the algorithm checks whether the circular arc of the mobile vertex intersects the polygon or not. Then, among all the qualified polygons, we select the one with the smallest arc angle to the mobile vertex as the colliding polygon. We determine whether the mobile vertex V is inside the fixed object or not by using the found polygon's normal vector n and the tangent vector t to the circular projection of the mobile vertex on the polygon (noted P in Figure 3, right):

if (t · n < 0) (i.e. angle between t and n > π/2) then t = -t
a = V - P (i.e. the vector connecting P to V)
if (t · a ≤ 0) (i.e. angle between t and a ≥ π/2) then V is penetrating the fixed object.

This process is repeated for all the mobile vertices; finally, all the penetrating mobile vertices and their corresponding fixed polygons are found.
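A minimal sketch of the per-vertex penetration test described above, assuming the candidate fixed polygon (with outward normal n), the circular projection point P of the mobile vertex V, and the tangent t of V's circular trajectory at P have already been computed (all names are illustrative):

```python
import numpy as np

def is_penetrating(V, P, t, n):
    """Return True if mobile vertex V is inside the fixed object.

    V: mobile vertex position.
    P: circular projection of V onto the candidate fixed polygon.
    t: tangent to the circular trajectory at P.
    n: normal of the candidate fixed polygon.
    """
    V, P, t, n = (np.asarray(w, dtype=float) for w in (V, P, t, n))
    if np.dot(t, n) < 0:        # angle between t and n larger than pi/2
        t = -t                  # flip t so it has a non-negative component along n
    a = V - P                   # vector connecting P to V
    return np.dot(t, a) <= 0    # angle between t and a at least pi/2 -> penetration
```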

5 Radial collision detection during rotation


The method is based on a radial segmentation of the objects' spatial occupancy, instead of a ray-based sampling of the fixed object as done by Maciel et al. (2007). Compared to Maciel et al. (2007), the method not only returns exact collision answers (penetrating vertices), but also has a significantly faster table-update stage. The two main steps of the collision detection algorithm are: (1) radial segmentation of the fixed object, by storing polygon indices in the corresponding table cell(s); (2) collision detection of the mobile object vertices, by determining the table cell they belong to and checking potential collisions with the fixed polygons stored in the cell.

5.1 Radial segmentation

Radial segmentation should be done in such a way that all the points having the same orientation (but different distances) are classified in the same cell. A separate investigation showed that the simplicity of the mapping function affects the total computational speed more significantly than the uniformity of the radial cells (Arbabi et al., 2007b), because the mapping function is called a large number of times during the process. Thus, we used a normalized Cartesian mapping function (see Figure 4):

(a1, a2, a3, R) = MappingFunction(x, y, z)
R = sqrt(x² + y² + z²), a1 = x/R, a2 = y/R, a3 = 1 {if z ≥ 0}, a3 = 0 {if z < 0}

(Fragment of the collision detection pseudocode:)
… IF (PolygonIntersection(i, j)) THEN CollidingPairs[k] = (i, j); PenetrationDepth[k] = Diff; k = k + 1 ENDIF ENDIF ENDFOR ENDFOR
Return(k 'number of collision pairs are found')
END
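A minimal sketch of the normalized Cartesian mapping function given above (the function name and return convention are illustrative):

```python
import math

def mapping_function(x, y, z):
    """Map a Cartesian point to its normalized orientation (a1, a2, a3) and radius R."""
    R = math.sqrt(x * x + y * y + z * z)   # assumes the point is not at the origin
    a1 = x / R
    a2 = y / R
    a3 = 1 if z >= 0 else 0                # hemisphere flag
    return a1, a2, a3, R
```

Since (a1, a2, a3) depends only on the orientation of the point and not on R, all points sharing an orientation fall into the same radial cell once a1 and a2 are discretized.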


Table 1: Different resolutions of the objects in the first group of scenarios.

                                               Scen.1    Scen.2     Scen.3     Scen.4
Number of vertices in the femur's mesh            773     3 086     12 338     49 346
Number of vertices in the pelvis's mesh         1 221     4 884     19 536     78 144
Total number of vertices (mesh)                 1 994     7 970     31 874    127 490
Number of triangles in the femur's mesh         1 542     6 168     24 672     98 688
Number of triangles in the pelvis's mesh        2 442     9 768     39 072    156 288
Total number of triangles (mesh)                3 984    15 936     63 744    254 976
Number of tetrahedra in the femur's volume      8 232    28 257    104 839    368 839
Number of tetrahedra in the pelvis's volume     9 954    31 410    100 550    344 486
Total number of tetrahedra (volume)            18 186    59 667    205 389    713 325

Table 2: Different resolutions of the objects in the second group of scenarios.

                                                                   Scen.5    Scen.6     Scen.7     Scen.8
Number of vertices in the femur cartilage's mesh                    1 414     2 171      3 869      8 649
Number of vertices in the pelvis cartilage and labrum's mesh        1 438     2 111      3 935      7 412
Total number of vertices (mesh)                                     2 852     4 282      7 804     16 061
Number of triangles in the femur cartilage's mesh                   2 824     4 338      7 734     17 294
Number of triangles in the pelvis cartilage and labrum's mesh       2 876     4 222      7 870     14 824
Total number of triangles (mesh)                                    5 700     8 560     15 604     32 118
Number of tetrahedra in the femur cartilage's volume               35 269    53 824     78 895    147 094
Number of tetrahedra in the pelvis cartilage and labrum's volume   36 291    42 420     84 855    109 272
Total number of tetrahedra (volume)                                71 560    96 244    163 750    256 366