
Reconstruction of quadric surfaces from disparity measurements Olli Jokinen Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing Otakaari 1, FIN-02150 Espoo, Finland In: Applications of Digital Image Processing XVII, A. G. Tescher, Editor, Proc. SPIE 2298, pp. 593-604, 1994.

ABSTRACT

Techniques for the reconstruction of quadric surfaces (cones and cylinders) from disparity measurements obtained by a stereo vision system are presented. For the initial estimation of the geometrical parameters of cones, a regression based approach is adopted from [Newman, Flynn, and Jain, CVGIP: Image Understanding, Vol. 58, pp. 235-249]. A new method based on surface normal estimation is developed for the cylindrical case. For the refinement of the parameter estimates, the Levenberg-Marquardt nonlinear least squares algorithm is used as in [Flynn and Jain, Proc. 1988 IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, pp. 261-267] but two mistaken formulas (one for a cone and the same for a cylinder) are corrected. The prior approaches for quadrics in stereo vision have been more or less approximative. Our contribution is to fit the exact quadric shape to disparity data using the existing methods for range data. The disparity space offers a convenient frame to analyze the differences between the measurements and the reconstructed model.

Keywords: Stereo vision, 3-D modeling, quadric surfaces

1. INTRODUCTION

In stereo vision, much attention has been paid to the determination of binocular disparity from two images taken simultaneously from different viewpoints. The final step in the process is to build a model for a 3-D object surface from the disparity measurements. In this paper, the reconstruction of quadric surfaces is considered. The prior approaches for quadrics in stereo vision have been more or less approximative. The paper fills this gap by fitting the exact quadric shape to disparity data. We first introduce the concept of disparity and describe the measuring system used for disparity measurements. Related work in stereo vision and in 3-D computer vision in general is then reviewed. Our approach for quadrics applies existing techniques for range data to the special case of disparity data. The method for quadric surfaces is presented in Section 2. This section is divided into subsections according to the different stages of the procedure, starting from rough data segmentation and formulation of the problem, followed by initial parameter estimation and refinement, and ending with model building and analysis. Section 3 contains the conclusions and Section 4 the acknowledgments. The paper should be considered as a part of a larger work where the disparity map is used as a basic tool during the modeling process. In the course of our presentation, we consequently perform some stages of the procedure in the disparity frame although they could be performed in a Cartesian $(X, Y, Z)$ frame as well. In the future, we hope to investigate multiview integration in the disparity frame.

1.1 Disparity

Consider the normal case of stereography, where two identical cameras have parallel optical axes and a coincident image plane (see Fig. 1). The origin of a right-handed $(X, Y, Z)$ coordinate system is located at the projection centre of the left camera. The $X$-axis is parallel to the stereo baseline (the line through the projection centres of the two cameras) and the negative $Z$-axis is directed along the line of sight. Consequently, the projection centre of the right camera is located at $(B, 0, 0)$, where $B > 0$ is the distance between the optical axes. The image plane is normal to the $Z$-axis and intercepts it at the point $(0, 0, -H)$, where $H > 0$ equals the focal length of the cameras in the normal case of stereography. Notice that the image plane is considered to be located between the projection centres and the object. Let the origin of the left image coordinate system be located at $(0, 0, -H)$ and the origin of the right image coordinate system at $(B, 0, -H)$. Denote the image coordinates of the left and right image by $(x_l, y_l)$ and $(x_r, y_r)$, respectively. Under perspective projection, a point $(X, Y, Z)$ on the object surface projects into the left and right images, respectively, at

$$(x_l, y_l) = \left( \frac{-HX}{Z}, \frac{-HY}{Z} \right), \qquad (x_r, y_r) = \left( \frac{-H(X - B)}{Z}, \frac{-HY}{Z} \right). \tag{1.1}$$

Horizontal disparity $p_x$ is defined as

$$p_x = x_l - x_r = -\frac{HB}{Z}. \tag{1.2}$$

Vertical disparity $p_y = y_l - y_r$ equals zero. This means that the corresponding features in the left and right image lie on the same horizontal scan line. The left camera centered $(X, Y, Z)$ coordinates are obtained from the left image coordinates by

$$X = \frac{B x_l}{p_x}, \qquad Y = \frac{B y_l}{p_x}, \qquad Z = -\frac{BH}{p_x}. \tag{1.3}$$

In the sequel, we drop the subscripts in $x_l$, $y_l$, and $p_x$, i.e., we denote $x \equiv x_l$, $y \equiv y_l$, and $p \equiv p_x$. The right-handed rectangular coordinate system with coordinates $(x, y, p)$ is referred to as a disparity coordinate system or shortly as a disparity frame.

Fig. 1. Normal case of stereography.
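The mappings (1.1)-(1.3) between the camera frame and the disparity frame are simple enough to sketch directly. In the following Python sketch, the constants are the illustrative values used later for the synthetic tests ($B = 53.5$ cm, $H = 950$ pixels); the function names are ours:

```python
import numpy as np

# Illustrative camera constants (the synthetic tests in Section 2.3
# use B = 53.5 cm and H = 950 pixels).
B = 53.5   # baseline length
H = 950.0  # focal length in pixels

def to_disparity_frame(X, Y, Z):
    """Map a point from the left camera (X, Y, Z) frame to the disparity
    frame (x, y, p); Z must be negative (object in front of the cameras)."""
    x = -H * X / Z          # left image coordinates, Eq. (1.1)
    y = -H * Y / Z
    p = -H * B / Z          # horizontal disparity, Eq. (1.2)
    return x, y, p

def to_camera_frame(x, y, p):
    """Inverse map, Eq. (1.3)."""
    return B * x / p, B * y / p, -B * H / p

# Round trip for a point 2 m in front of the cameras.
pt = (10.0, 5.0, -200.0)
assert np.allclose(to_camera_frame(*to_disparity_frame(*pt)), pt)
```

Note that disparity $p$ is positive for $Z < 0$ and grows as the object approaches the cameras.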

1.2 Measuring procedure

For disparity measurements, a suitable measuring system is needed. Our system consists of a stereo head connected to a photogrammetric station and supported by a feature projector [9]. The system is first calibrated in order to know the relative orientation of the two cameras. The absolute orientation with respect to an external laboratory coordinate system is not needed. The calibration procedure is performed after every change in the orientation of the cameras [8]. The disparity measurements are recorded by projecting light rays onto the object surface. In order to avoid the correspondence problem, the number of projected lines is increased as a power of two (1, 2, 4, 8, 16, ...). In practice, it is difficult to set up two cameras so that their optical axes are parallel and vertical disparity equals zero. Therefore, the cameras are turned slightly towards each other and after the initial recordings, the images are rectified and resampled to the normal case of stereography [15]. The corresponding features are then found on the same horizontal line in the rectified images. As a result of the matching process, measured disparities are obtained in the left image pixel coordinates for a 512 x M raster, where M is the number of projected rays. For the intermediate pixels, disparity values are linearly interpolated from the neighboring pixels on the same horizontal line. Outside the measurement coverage, disparity is set to some reference value (e.g. to zero) so that every pixel location has some disparity value, resulting in a full 512 x 512 disparity map $P$. Often it is, however, appropriate to consider a sparse $N \times N$ disparity map $P_N$, which is obtained from $P$ by taking every $(512/N)$th disparity value in the horizontal and vertical directions. Here $N$ is assumed to be a power of two. More precisely, $P_N$ is a matrix whose elements are defined by $P_N(j, i) = p(x(i), y(j))$, $j = 1, \ldots, N$, $i = 1, \ldots, N$, where $x(i) = \bar{x} + 512 s (i - 1)/N$ and $y(j) = \bar{y} + 512 s (N - j)/N$. Here $\bar{x}$, $\bar{y}$, and the scale factor $s$ are numerical constants which depend on the actual camera setup and the image rectification process. Future development of the system includes the installation of an LCD projector so that detailed time-varying texture can be projected onto the object surface. As a consequence, we believe that the effect of systematic errors, which may be present if the projected texture is the same all the time, can be diminished. Moreover, the correspondence matching can then be performed at every pixel location, leading to a fully measured disparity map $P$. The actual computer programs concerning disparity measurements are written in C and executed on a Mach 3.0 workstation. The MATLAB software has been used in disparity map modeling in order to easily compute matrix inverses and to display pictures. The modeling stages are executed on a HP 9000/705 workstation.
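The subsampling from $P$ to $P_N$ can be sketched as follows. For simplicity the sketch ignores the offsets and the vertical flip of the paper's index convention (i.e., it assumes $\bar{x} = \bar{y} = 0$ and $s = 1$) and simply takes every $(512/N)$th value:

```python
import numpy as np

def sparse_map(P, N):
    """Subsample a full 512 x 512 disparity map P into an N x N map P_N by
    taking every (512/N)-th value in both directions (N a power of two).
    This is a simplified version of the paper's P_N(j, i) = p(x(i), y(j))
    that ignores the offsets and the row reversal of the index convention."""
    step = 512 // N
    assert step * N == 512, "N must divide 512"
    return P[::step, ::step]

P = np.arange(512 * 512, dtype=float).reshape(512, 512)  # stand-in disparity map
P64 = sparse_map(P, 64)
assert P64.shape == (64, 64)
```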

1.3 Related work

A vision-based coordinate measuring system similar to ours has been developed at the National Research Council Canada and is described in [6]. The main contribution of [6] lies in error analysis. They estimate the accuracy of the system, possible sources of error, and the conditions under which the accuracy values can be achieved. Techniques for calibration and an edge-following algorithm for model-building purposes are also presented. For a review of the reconstruction problem in stereo vision, consider first a planar model as in [13]. A plane in the left camera centered $(X, Y, Z)$ coordinate system given by $Z = \beta_0 + \beta_1 X + \beta_2 Y$ is transformed into

$$p(x, y) = -\frac{B (H + \beta_1 x + \beta_2 y)}{\beta_0} \tag{1.4}$$

when using (1.3) and solving for $p$. Consequently, disparity is a linear function of the image coordinates, and linear least squares techniques can be used for plane fitting in the disparity frame. For quadric surfaces the situation is more complicated. An implicit quadric surface can be expressed as

$$Q(X, Y, Z) = a_1 X^2 + a_2 Y^2 + a_3 Z^2 + a_4 XY + a_5 XZ + a_6 YZ + a_7 X + a_8 Y + a_9 Z + a_{10} = 0. \tag{1.5}$$

Using (1.3) and multiplying by $p^2$ yields another quadric surface $q(x, y, p) = 0$ in the image coordinates and disparity. Consequently, the usual linear least squares technique is not applicable this time. In [13], a model where disparity is a quadratic function of the image coordinates is proposed. This yields an approximation for a variety of curved surfaces, thus satisfying the purpose of modeling in [13], i.e., to make the disparity map denser and to correct it for possible measuring errors. Another approach where disparity is approximated by a quadratic function of the image coordinates is presented in [5]. The quadratic model serves as a local approximation of disparity in a small neighborhood where the depth $Z(X, Y)$ in the $X$ and $Y$ coordinates, and consequently the disparity $p(x, y)$ in the corresponding image coordinates, is analytic. In this neighborhood, both depth and disparity can be expanded in a Taylor series. Comparing the coefficients of these series gives the means for the reconstruction of the underlying surface patch in terms of disparity. In order to find the partition of the image into regions in each of which disparity is analytic, the models fitted to adjacent neighborhoods are compared. If the models differ by more than a certain threshold in the area of overlap, a discontinuity in disparity or in the gradient of disparity may occur. Disparity models are also used in the early stage of correspondence matching. Among a variety of possible matches, those are selected which give the best locally smooth surface reconstruction in the sense that the residual between detected and modeled disparity is as small as possible.
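Because (1.4) is linear in the image coordinates, fitting a plane in the disparity frame is a plain linear least squares problem. The sketch below generates exact disparity samples on a plane (with illustrative camera constants and plane coefficients of our choosing), fits the disparity plane, and recovers the $\beta$ coefficients by inverting (1.4):

```python
import numpy as np

B, H = 53.5, 950.0                     # illustrative camera constants
beta = np.array([-300.0, 0.1, -0.2])   # plane Z = b0 + b1*X + b2*Y, b0 < 0

# Exact disparity on the plane: p(x, y) = -B*(H + b1*x + b2*y)/b0, Eq. (1.4).
rng = np.random.default_rng(0)
x = rng.uniform(-200, 200, 100)
y = rng.uniform(-200, 200, 100)
p = -B * (H + beta[1] * x + beta[2] * y) / beta[0]

# Disparity is linear in (x, y): ordinary linear least squares fit.
A = np.column_stack([np.ones_like(x), x, y])
c, *_ = np.linalg.lstsq(A, p, rcond=None)

# Invert (1.4): c0 = -B*H/b0, c1 = -B*b1/b0, c2 = -B*b2/b0.
b0 = -B * H / c[0]
b1 = -b0 * c[1] / B
b2 = -b0 * c[2] / B
assert np.allclose([b0, b1, b2], beta)
```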

Yet another approach for surface modeling, with special emphasis on the integration of feature matching, contour detection, and surface interpolation, is presented in [10]. At the first stage, a planar model is used as a local approximation of disparity. Then a quadratic model is fitted to larger patches which are formed by comparing the mutual consistency of the neighboring planar patches. Ridge and occluding contours are detected by fitting bipartite circular planar patches to adjacent patches and by comparing the depth and orientation of the two halves of a bipartite patch. A model for the entire scene is built by interpolating the quadratic patches within regions enclosed by the detected contours. A slightly different disparity analysis based on differential imaging is presented in [19]. Disparity is interpreted in terms of surface orientations and discontinuities, allowing a direct recovery of first-order 3-D surface geometry from disparity. A stability analysis is also presented showing that the method can be expected to be robust. Consider next surface reconstruction from range data. The term 'range data' refers here to $(X, Y, Z)$ coordinate measurements from an object surface without any assumption on the measuring system. Usually the measurements are obtained by a laser range-finder, but our video camera system also provides range data. The term 'disparity data' includes the assumption that the measuring system consists of two video cameras. In any case, it seems that the surface reconstruction problem has received much more attention for range data than for disparity data, reflecting the fact that digital close-range photogrammetry is a relatively new technique for industrial 3-D measurements. For an overview of three-dimensional surface reconstruction methods for range data, we refer to [3] and the references therein, especially [1], [2], [4], [17], and [18]. Other references include [2] and [12].

We found that the techniques developed in [7] and [14] should be applied to disparity data. In [7], quadrics are first classified into spheres, cylinders, and cones with the help of curvature estimates. Curvature estimates are then used to estimate the geometrical parameters of the classified quadric surfaces. An iterative procedure is used for the refinement of the parameter estimates. The experimental results reported in [7] show that the curvature estimates are very sensitive to noise, which affects especially the case of a conical surface. In [14], curvature estimates are not used any more. Parameter estimation is instead based on a Hough transform in the spherical and cylindrical cases. A regression based approach has been developed for the conical case. Moreover, some of the parameters are assumed to be partially known a priori in the sense that they can take only certain possible values, and the task is to determine which value gives the best fit with the data. The method is thus intended mainly for recognition purposes. For cones, data-driven experiments without a priori knowledge of the parameters are also presented. Our approach for quadrics has been inspired by [7] and [14]. For the initial estimation of the geometrical parameters, we adopt the data-driven procedure in [14] for cones and develop a similar method for cylinders. The iterative procedure in [7] is adopted for the refinement and assurance of the parameter estimates. The method is described in detail in the next section.

2. MODELING DISPARITY MAPS OF QUADRIC SURFACES

In this section, we present a procedure to model disparity maps of quadric surfaces, with special emphasis on techniques for the estimation of the geometrical parameters of cones and cylinders. As illustrative examples, we use real data from a right circular cone and from a right circular cylinder. Sparse disparity maps $P_{64}$ over the conical and cylindrical surfaces are shown in Figs. 2 and 3, respectively. Around the quadrics, measurements from a planar surface are also present in the data.

2.1 Rough segmentation

The first step in the modeling of a disparity map is segmentation. In this paper, we perform only a rough segmentation, which is sufficient for the illustrative cases of Figs. 2 and 3. In the conical case, a set of points which clearly belong to the planar surface is selected, and a plane is fitted to these points using linear least squares techniques. The conical region is determined by those pixels whose disparity value is higher than the fitted planar disparity plus a given tolerance value. In the cylindrical case of Fig. 3, the intersection of the cylinder with the surrounding plane is a priori known to consist of two straight lines. These are determined directly from the figure by selecting two points from each line (as seen by a human eye). The cylindrical points lie between the computed straight lines.
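The plane-plus-tolerance segmentation of the conical case can be sketched as follows on a toy disparity map; the helper name, the toy data, and the tolerance value are ours:

```python
import numpy as np

def segment_above_plane(i, j, p, plane_idx, tol):
    """Rough segmentation: fit a disparity plane by linear least squares to
    pixels known to lie on the background plane (boolean mask plane_idx),
    then keep the pixels whose disparity exceeds the fit by more than tol."""
    A = np.column_stack([np.ones_like(i), i, j])
    coef, *_ = np.linalg.lstsq(A[plane_idx], p[plane_idx], rcond=None)
    return p - A @ coef > tol

# Toy example: a tilted planar background with a raised square region.
jj, ii = np.mgrid[0:32, 0:32]
i, j = ii.ravel().astype(float), jj.ravel().astype(float)
p = 100.0 + 0.05 * i - 0.02 * j                     # planar disparity
blob = (ii.ravel() > 10) & (ii.ravel() < 20) & \
       (jj.ravel() > 10) & (jj.ravel() < 20)
p[blob] += 5.0                                      # raised object
mask = segment_above_plane(i, j, p, ~blob, tol=1.0)
assert np.array_equal(mask, blob)
```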

2.2 Quadric fitting as a minimization problem

The next step in the modeling of quadric surfaces is to write the equation of a quadric as a function of suitable variables and parameters. We consider first the conical case and use the same parameters as in [7]. In the triple-primed coordinate system, the origin of which is located at the vertex of the cone and in which the $Z'''$-axis is parallel to the axis of the cone, the equation of the cone is given by $(X''')^2 + (Y''')^2 - \frac{1}{d^2}(Z''')^2 = 0$, where $d$ is the unit-radius distance (the radius of the cone is 1 at $Z''' = d$). The coordinate transformation from the left camera centered $(X, Y, Z)$ system to the object centered $(X''', Y''', Z''')$ system is given by first making a translation of the origin to $(X_0, Y_0, Z_0)$ and then rotations of $\theta$ and $\varphi$ about the $X'$ and $Y''$ axes, respectively. The rotations are more specifically defined by

$$\begin{pmatrix} X'' \\ Y'' \\ Z'' \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} X' \\ Y' \\ Z' \end{pmatrix}, \qquad \begin{pmatrix} X''' \\ Y''' \\ Z''' \end{pmatrix} = \begin{pmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{pmatrix} \begin{pmatrix} X'' \\ Y'' \\ Z'' \end{pmatrix}. \tag{2.1}$$

Note that a cone is symmetric with respect to the plane perpendicular to the axis of the cone and passing through the vertex. Consequently, we can assume that $|\theta| \le \frac{\pi}{2}$ and $|\varphi| \le \frac{\pi}{2}$. As a result of the transformations, the equation of the cone is given by

$$\begin{aligned} F_1(X, Y, Z; d, \theta, \varphi, X_0, Y_0, Z_0) ={}& [(X - X_0)\cos\varphi + \sin\varphi((Y - Y_0)\sin\theta - (Z - Z_0)\cos\theta)]^2 \\ &+ [(Y - Y_0)\cos\theta + (Z - Z_0)\sin\theta]^2 \\ &- \frac{1}{d^2} [(X - X_0)\sin\varphi - \cos\varphi((Y - Y_0)\sin\theta - (Z - Z_0)\cos\theta)]^2 = 0. \end{aligned} \tag{2.2}$$

This equation differs from the corresponding equation in [7, p. 264]. The mistaken formula in [7] may have caused a part of the problems mentioned there concerning cones. To proceed a bit further from Eq. (2.2), substitute (1.3) in (2.2) and multiply by $(p/B)^2$ to get

$$\begin{aligned} F(x, y, p; d, \theta, \varphi, X_0, Y_0, Z_0) ={}& [(x - pX_0/B)\cos\varphi + \sin\varphi((y - pY_0/B)\sin\theta + (H + pZ_0/B)\cos\theta)]^2 \\ &+ [(y - pY_0/B)\cos\theta - (H + pZ_0/B)\sin\theta]^2 \\ &- \frac{1}{d^2} [(x - pX_0/B)\sin\varphi - \cos\varphi((y - pY_0/B)\sin\theta + (H + pZ_0/B)\cos\theta)]^2 = 0. \end{aligned} \tag{2.3}$$

As in [7], we use as parameters of a cylinder the axis direction given by two angles ($\theta$ and $\varphi$), a point $(X_0, Y_0, Z_0)$ on the axis fixing the location of the cylinder, and the radius $R$ of the cylinder. In the cylinder centered coordinate system (with the $Z'''$-axis parallel to the axis of the cylinder), the equation of the cylinder is given by $(X''')^2 + (Y''')^2 - R^2 = 0$. The coordinate transformation from the left camera centered system to the object centered system is obtained by first making a translation of the origin to $(X_0, Y_0, Z_0)$ and then performing rotations of $\theta$ and $\varphi$ according to (2.1). Using (1.3), we obtain for the equation of the cylinder in the disparity frame

$$\begin{aligned} F(x, y, p; R, \theta, \varphi, X_0, Y_0, Z_0) ={}& [(x - pX_0/B)\cos\varphi + \sin\varphi((y - pY_0/B)\sin\theta + (H + pZ_0/B)\cos\theta)]^2 \\ &+ [(y - pY_0/B)\cos\theta - (H + pZ_0/B)\sin\theta]^2 - (pR/B)^2 = 0. \end{aligned} \tag{2.4}$$

As in the conical case, there is the same discrepancy between (2.4) and the corresponding formula (in the $(X, Y, Z)$ frame) in [7, p. 264].

In order to find the parameter values which give the best fit with the data, we proceed as in [7]. Assume that we have a set $\{(i_k, j_k, p_k) \mid k = 1, \ldots, K\}$ of observations from a quadric surface. In the conical case, we minimize the squared sum

$$\chi^2 = \sum_{k=1}^{K} [F(i_k, j_k, p_k; d, \theta, \varphi, X_0, Y_0, Z_0)]^2, \tag{2.5}$$

where the function $F$ is given in (2.3). In the cylindrical case, the function $F$ depends on the cylinder parameters according to (2.4). If the data were perfect and the parameters correct, then $\chi^2$ would be equal to zero. Thus, it is natural to search for parameter values which minimize $\chi^2$. Since the function $F$ in (2.3) and (2.4) depends nonlinearly on the parameters, an iterative procedure is used for the minimization of $\chi^2$. In the following, techniques for the initial estimation of the parameters are presented.
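The residuals (2.3) and (2.4) and the objective (2.5) translate directly into code. The sketch below uses the same illustrative camera constants as the synthetic tests; the sanity check builds exact cone points by inverting the transformation (2.1), so the residual should vanish up to rounding:

```python
import numpy as np

B, H = 53.5, 950.0  # illustrative camera constants

def cone_F(x, y, p, d, theta, phi, X0, Y0, Z0):
    """Cone residual in the disparity frame, Eq. (2.3)."""
    u = x - p * X0 / B
    v = (y - p * Y0 / B) * np.sin(theta) + (H + p * Z0 / B) * np.cos(theta)
    w = (y - p * Y0 / B) * np.cos(theta) - (H + p * Z0 / B) * np.sin(theta)
    return ((u * np.cos(phi) + v * np.sin(phi)) ** 2 + w ** 2
            - (u * np.sin(phi) - v * np.cos(phi)) ** 2 / d ** 2)

def cylinder_F(x, y, p, R, theta, phi, X0, Y0, Z0):
    """Cylinder residual in the disparity frame, Eq. (2.4)."""
    u = x - p * X0 / B
    v = (y - p * Y0 / B) * np.sin(theta) + (H + p * Z0 / B) * np.cos(theta)
    w = (y - p * Y0 / B) * np.cos(theta) - (H + p * Z0 / B) * np.sin(theta)
    return (u * np.cos(phi) + v * np.sin(phi)) ** 2 + w ** 2 - (p * R / B) ** 2

def chi2(F, x, y, p, *params):
    """Squared residual sum, Eq. (2.5)."""
    return np.sum(F(x, y, p, *params) ** 2)

# Sanity check: points generated exactly on a cone give chi^2 ~ 0.
d, th, ph = 1.9, 0.2, 0.1
v0 = np.array([2.0, 1.0, -150.0])           # vertex, Z < 0 (in front)
Rx = np.array([[1, 0, 0],
               [0, np.cos(th), np.sin(th)],
               [0, -np.sin(th), np.cos(th)]])
Ry = np.array([[np.cos(ph), 0, -np.sin(ph)],
               [0, 1, 0],
               [np.sin(ph), 0, np.cos(ph)]])
t = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
r = 3.0
obj = np.stack([r * np.cos(t), r * np.sin(t), d * r * np.ones_like(t)])
XYZ = Rx.T @ (Ry.T @ obj) + v0[:, None]     # invert (2.1), then translate
x, y = -H * XYZ[0] / XYZ[2], -H * XYZ[1] / XYZ[2]
p = -H * B / XYZ[2]
assert chi2(cone_F, x, y, p, d, th, ph, *v0) < 1e-6
```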

2.3 Initial estimation of the cone parameters

Initial estimation of the cone parameters is based on the ideas developed in [14]. As we try to remain in the disparity frame as long as possible, we first show that the disparity map of a cone is also a conical surface in the disparity frame. Let $D : \mathbb{R}^3 \to \mathbb{R}^3$ denote the coordinate transformation from the left camera centered $(X, Y, Z)$ coordinate system to the disparity coordinate system $(x, y, p)$ given by

$$(x, y, p) = D(X, Y, Z) = (-HX/Z, -HY/Z, -HB/Z). \tag{2.6}$$

According to (1.4), we know that $D$ maps planes to planes. The same holds for straight lines, since a straight line is determined by two intersecting planes, which intersect also in the disparity frame because $D$ is one-to-one. A cone is formed by moving a line segment of flexible length so that one end point is fixed at the vertex and the other end point traces a closed curve in a plane determining the bottom of the cone. Consequently, the image of a cone under $D$ is a surface which can be formed by moving a line segment in the disparity frame. Since $D$ is one-to-one, the vertex is mapped by $D$ to a point to which one end point of the line segment in the disparity frame has to be fixed. The bottom plane of the cone is mapped to a plane on which the other end point of the line segment in the disparity frame must lie and trace a closed curve. Thus, the map $D$ projects a cone in the $(X, Y, Z)$ frame to a cone in the disparity frame. However, the shape of the cone is usually deformed by $D$, since the curve in the bottom plane does not necessarily have the same shape in the disparity frame as it has in the $(X, Y, Z)$ frame. Thus, the image of a right circular cone under $D$ need not be a right circular cone. The vertex of a cone is estimated in the disparity frame. First, the unit surface normals $n_k$ are estimated at every measurement point $(i_k, j_k, p_k)$ of the conical surface by using a technique called principal component analysis [11]. In [7], principal component analysis has been used for plane fitting and in [14], also for the estimation of the cylinder's axis direction. The surface normals (in the $(X, Y, Z)$ frame) are, however, assumed to be a priori known in [14]. In order to estimate the surface normal at a measurement point, select a local neighborhood consisting of the surrounding eight pixels in addition to the pixel location under consideration. For the boundary pixels, only those surrounding locations which belong to the segmented conical region are included.
Let $\bar{i}_k$, $\bar{j}_k$, and $\bar{p}_k$ denote the sets consisting of the $i$, $j$, and $p$ values, respectively, of the selected local neighborhood. The sample covariance matrix of the neighborhood set is given by

$$S_k = \begin{pmatrix} \operatorname{Var} \bar{i}_k & \operatorname{Cov}(\bar{i}_k, \bar{j}_k) & \operatorname{Cov}(\bar{i}_k, \bar{p}_k) \\ \operatorname{Cov}(\bar{i}_k, \bar{j}_k) & \operatorname{Var} \bar{j}_k & \operatorname{Cov}(\bar{j}_k, \bar{p}_k) \\ \operatorname{Cov}(\bar{i}_k, \bar{p}_k) & \operatorname{Cov}(\bar{j}_k, \bar{p}_k) & \operatorname{Var} \bar{p}_k \end{pmatrix}. \tag{2.7}$$

Compute the eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ and the corresponding eigenvectors $v_1$, $v_2$, and $v_3$ of $S_k$. The eigenvector $v_1$ points in the direction of largest sample variance, $v_2$ is orthogonal to $v_1$ with the largest variance, and $v_3$ is orthogonal to both $v_1$ and $v_2$. Consequently, $v_3$ can be used as an estimate of a unit surface normal, since the variance is expected to be minimal in that direction. Since the cone does not have a normal at the vertex, the surface normal

is not estimated at the pixels neighboring an approximate vertex location, which can be obtained as the point of highest disparity in the case of Fig. 2. In general, disparity has a local maximum or minimum at the vertex, and its approximate location can be seen from the measurements if we have measured around the vertex. If there is much noise present in the data, the normal estimates may be poor. Good results were, however, obtained in the real data case of Fig. 2. Having estimated the normals, we proceed as in [14]. All planes that are tangent to the conical surface intersect at the vertex. Thus, the vertex can be estimated by substituting the vertex coordinates $(i_0, j_0, p_0)$ in the equations of the tangent planes obtained by using the above estimated normals $\hat{n}_k$ (the hat stands for an estimated value). This leads to the equation system $\hat{n}_k \cdot r_0 = \hat{n}_k \cdot r_k$, $k = 1, \ldots, K$, where $r_0$ is the vertex position vector and $r_k$ is the position vector of the $k$th measurement point. The vertex coordinates are estimated from the equation system using linear least squares techniques. Since the shape of a cone is usually deformed by $D$, we estimate the cone axis direction $A$ (a unit vector) and the vertex angle $\alpha = \arctan(1/d)$ in the $(X, Y, Z)$ coordinate frame. Let the upper case letters $\hat{R}_0$ and $R_k$ denote the estimated vertex location and the $k$th measurement point in the left camera centered $(X, Y, Z)$ coordinate system, respectively. The linear least squares solution of the equation system $(R_k - \hat{R}_0) \cdot A / \cos\alpha = |R_k - \hat{R}_0|$, $k = 1, \ldots, K$, yields estimates for the axis vector coordinates divided by $\cos\alpha$. Using the fact that $A$ is a unit vector, an estimate for the vertex angle is obtained. As mentioned in [14], an error made in the vertex estimation propagates to the axis estimation. In [14], results are also presented where the data-driven procedure has given a vertex angle slightly different from the true value.

In fact, the model-driven approach in [14] assumes that the vertex angle is obtained from a pre-specified set of possible values. In order to ensure that the parameter values give the best fit with the data, we perform (in Section 2.5) an iterative procedure for the refinement of the parameter estimates. It should, however, be mentioned that in the case of Fig. 2, the initial estimates already seem to be quite accurate and the subsequent iteration only makes the estimates slightly better. To see how the techniques for the initial estimation of the cone parameters work for disparity data with different noise levels, we performed some preliminary tests with synthetic data. The parameters $B$ and $H$ were chosen as $B = 53.5$ cm and $H = 950$ pixels. The relative orientation of the cone with respect to the camera system was fixed with parameters $\theta = 0.2$, $\varphi = 0.1$, $X_0 = 2$, $Y_0 = 1$, and $Z_0 = -1.5$. The unit-radius distance was given the value $d = 1.9$. A synthetic disparity map over the cone was computed with an image resolution of 64 x 64 pixels. About half of the image coverage was hit by the cone. On the conical surface, the data points covered a relatively small patch over a truncated cone seen from a single side view. Normally distributed noise with zero mean and standard deviation $\sigma$ (in pixels) was added to the disparity values. The results shown in Table 1 indicate that the cone parameters were quite accurately recovered for noise levels $\sigma \le 0.1$ pixels, but for higher levels the estimates became unreliable. Some errors were present even for noise-free data. The figures of Table 1 are based on single experiments; no Monte Carlo simulations were performed.

$\sigma$    $\hat{d}$    $\hat{\theta}$    $\hat{\varphi}$    $\hat{X}_0$    $\hat{Y}_0$    $\hat{Z}_0$
0.0         1.8842       0.2054            0.1002             2.0131         0.9969         -1.4877
0.01        1.8826       0.2058            0.1001             2.0121         0.9959         -1.4907
0.1         1.8243       0.2204            0.0940             1.9581         0.9552         -1.6260
1.0         0.5014       0.8998            -0.3427            1.4936         0.6397         -2.7389

Table 1. Initial parameter estimates of a synthetic cone for different noise levels $\sigma$. True values are $d = 1.9$, $\theta = 0.2$, $\varphi = 0.1$, $X_0 = 2$, $Y_0 = 1$, and $Z_0 = -1.5$.
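The two linear steps of this section (PCA surface normals and the tangent-plane vertex system $\hat{n}_k \cdot r_0 = \hat{n}_k \cdot r_k$) can be sketched as follows. For brevity the sketch uses exact analytic normals of a synthetic cone instead of 3 x 3 pixel neighborhoods; the function names are ours:

```python
import numpy as np

def pca_normal(pts):
    """Unit surface normal of a local neighborhood (rows of pts are points):
    the eigenvector of the sample covariance matrix, Eq. (2.7), with the
    smallest eigenvalue."""
    w, v = np.linalg.eigh(np.cov(pts.T))  # eigenvalues in ascending order
    return v[:, 0]

def estimate_vertex(normals, points):
    """Least squares cone vertex from the tangent-plane system
    n_k . r0 = n_k . r_k (every tangent plane passes through the vertex)."""
    b = np.einsum('ij,ij->i', normals, points)
    r0, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return r0

# Synthetic cone x^2 + y^2 - z^2/d^2 = 0 translated so its vertex is at v0.
v0 = np.array([2.0, 1.0, -1.5])
d = 1.9
t = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
r = np.linspace(0.5, 2.0, 60)
pts = v0 + np.stack([r * np.cos(t), r * np.sin(t), d * r], axis=1)
# Exact unit normals from the gradient of the implicit cone equation.
grad = np.stack([pts[:, 0] - v0[0], pts[:, 1] - v0[1],
                 -(pts[:, 2] - v0[2]) / d ** 2], axis=1)
normals = grad / np.linalg.norm(grad, axis=1, keepdims=True)
assert np.allclose(estimate_vertex(normals, pts), v0)
```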

2.4 Initial estimation of the cylinder parameters

The ideas in [14] for cone fitting inspired us to develop a corresponding method for the initial estimation of the parameters of a right circular cylinder. Based on the fact that the map $D$ defined in (2.6) is one-to-one, one could show that the disparity map of a cylinder is also a cylindrical surface in the disparity frame. The surface normals could be estimated in the disparity frame and used to estimate the cylinder's axis direction. However, since the shape of a cylinder is usually deformed by $D$, we need the surface normals also in the $(X, Y, Z)$ frame for the estimation of the cylinder's axis location. Thus, it is better to estimate the surface normals directly in the $(X, Y, Z)$ frame. The principle of surface normal estimation is the same as for cones, i.e., we first select a local neighborhood in the left image around the pixel location which corresponds to the measurement point $R_k$ under consideration. Then the eigenvectors and eigenvalues of the covariance matrix of the neighborhood data $(\bar{X}_k, \bar{Y}_k, \bar{Z}_k)$ are computed. The eigenvector corresponding to the smallest eigenvalue yields an estimate for the unit surface normal $N_k$ at $R_k$. Estimation of the cylinder's axis direction is based on the fact that for noise-free data, the cross product of two nonparallel surface normals is parallel to the cylinder's axis. We thus compute a set of cross products of estimated normals selected by avoiding parallelism. The mean value of this set gives an accurate estimate for the axis direction vector $A$. In the illustrative case of Fig. 3, the pixels in the cylindrical region were scanned into a vector proceeding first in the horizontal $i$-direction and then in the vertical $j$-direction. A set of cross products was computed according to $A_k = \hat{N}_k \times \hat{N}_{k+10}$, $k = 1, \ldots, K - 10$, since we detected that taking the two normals for the cross product from different sides of the cylinder yields better estimates than taking normals at nearby points. The axis direction was estimated as

$$\hat{A} = \sum_{k=1}^{K-10} \operatorname{sgn}(A_{k2}) A_k / (K - 10), \tag{2.8}$$

where the effect of $\operatorname{sgn}(A_{k2})$, the sign of the second component of $A_k$, is to switch all the terms in the sum into (approximately) the same direction. In order to fix the location of the cylinder's axis, note that the axis lies in the plane determined by a point on the cylindrical surface, the surface normal at that point, and the axis direction vector drawn to that point. Using the estimated normals and the estimated axis direction vector, we determine a plane for every measurement point. The intersection of these planes gives the whole axis line, and a unique point on the axis is obtained by introducing a constraint, say $Y = Y_0$. To put it more precisely, we have the equation system

$$Y = Y_0, \qquad \hat{N}_k \times \hat{A} \cdot (R_k - R_0) = 0, \quad k = 1, \ldots, K, \tag{2.9}$$

where $R_0$ is the point on the axis to be estimated. Let $W_k = \hat{N}_k \times \hat{A}$ and denote the components of $W_k$ by $W_{k1}$, $W_{k2}$, and $W_{k3}$. Then we have

$$W_{k1} X_0 + W_{k3} Z_0 = W_k \cdot R_k - W_{k2} Y_0, \quad k = 1, \ldots, K, \tag{2.10}$$

from which estimates for the parameters $X_0$ and $Z_0$ can be solved by linear least squares techniques. The radius of the cylinder is estimated as the mean distance of the measurement points $R_k$ from the recovered axis line. The described method is based on surface normal estimation. A possible error made in normal estimation (e.g. in the case of noisy data) propagates into the axis direction estimation and further into the axis location and radius estimation. On the other hand, the method is based on regression, so normally distributed errors should cancel out. In the case of Fig. 3, excellent results were obtained for the axis direction and location estimation. The performance of the techniques presented in this section was tested with synthetic data covering a quarter of the cylinder's circumference. The sample size was 41 x 41 = 1681 points equally spaced on the surface patch. The data was corrupted by adding to each coordinate $X$, $Y$, and $Z$ normally distributed noise with zero mean and standard deviation $\sigma$. Note that the data generation differs from the one used to generate the synthetic conical data in the previous section. The results shown in Table 2 are based on single experiments; no Monte Carlo simulations were performed. The initial parameter estimates seem to be quite accurate for noise levels $\sigma \le 0.01 R$ but become unreliable for higher levels. Even for noise-free data, the method gives a slightly too distant axis location estimate, resulting in a slightly too large radius estimate. Additional tests indicate that this phenomenon disappears when the data points cover the whole cylindrical circumference. The method, however, works in single view cases even if a relatively small patch of the cylinder is visible, and for noise-free data, the cylinder parameters can be recovered even from tiny patches. The computational time depends greatly on the data sample size. For the size 41 x 41, our algorithm, implemented using the MATLAB software on a HP 9000/705 workstation, executed the surface normal estimation in 40 s, the axis direction estimation in 10 s, and the axis location estimation in 6 s. Tentative tests suggest that the method is rather insensitive to single outliers.

σ/R      0.0        0.001      0.01       0.1
R^       0.1003     0.1003     0.0958     0.0404
θ^      -1.4000    -1.3997    -1.4128    -1.5585
φ^       0.4000     0.4002     0.3408     0.0849
X^0      0.3000     0.3000     0.3031     0.3231
Z^0     -2.0004    -2.0004    -1.9946    -1.9062

Table 2. Initial parameter estimates of a synthetic cylinder for different noise levels σ. True values are R = 0.1, θ = -1.4, φ = 0.4, X0 = 0.3, and Z0 = -2.0.

2.5 Parameter refinement
An iteration is performed in order to verify and refine the parameter estimates of cones and cylinders in the sense that χ² given in (2.5) is minimized. The iterative procedure follows the standard Levenberg-Marquardt nonlinear least squares method described in [16, pp. 521-525]. The method combines the inverse-Hessian method and the steepest descent method. The latter is used far from the minimum, switching continuously to the former as the minimum is approached. In the inverse-Hessian method, χ² is expected to be well approximated by a quadratic function of the parameters, and the increments in the parameters are obtained as the product of the inverse of the Hessian matrix and the negative gradient vector of χ². In the steepest descent method, the increments in the parameters are taken in the direction of the negative gradient of χ². The iteration is stopped when χ² essentially stops decreasing. In the real data cases of Figs. 2 and 3, fewer than 10 iteration loops were needed to reach the minimum. The performance of the iterative procedure was tested on the refinement of the initial parameter estimates given in Tables 1 and 2. Recall that the synthetic data was generated quite differently in the two cases: in the conical case, the noise was added to the disparity values, while in the cylindrical case, the noise was added to the object coordinates. The results shown in Tables 3 and 4 indicate an improvement in the parameter estimates at all noise levels. Although the initial estimates in the conical case with σ = 1 were poor, the iterative procedure was able to refine the estimates surprisingly well. For noise-free data, the errors present in Table 1 have disappeared in Table 3. The same holds for the noise-free cylindrical case concerning the minor errors in the axis location and radius estimates in Table 2.
Ten iteration loops were executed in all other cases except in the cylindrical case with σ = 0.1R, where the iteration was stopped after four loops (after that, the iteration started to diverge). The computational time was 5 s to execute ten iterations for the cylindrical data set of size 41 x 41. As a least squares method, the Levenberg-Marquardt method should be sensitive to outliers; in fact, we found that a single outlier can cause the iteration to fail.

σ        d^        θ^        φ^        X^0       Y^0       Z^0
0.0      1.9000    0.2000    0.1000    2.0000    1.0000   -1.5000
0.01     1.9000    0.2000    0.1000    2.0000    1.0000   -1.4999
0.1      1.8996    0.2000    0.1000    1.9996    0.9998   -1.5008
1.0      1.9016    0.1995    0.0977    1.9856    0.9952   -1.5189

Table 3. Refined parameter estimates of a synthetic cone for different noise levels σ. True values are d = 1.9, θ = 0.2, φ = 0.1, X0 = 2, Y0 = 1, and Z0 = -1.5.

σ/R      0.0        0.001      0.01       0.1
R^       0.1000     0.1000     0.1012     0.1151
θ^      -1.4000    -1.4002    -1.4009    -1.4109
φ^       0.4000     0.3999     0.4023     0.3751
X^0      0.3000     0.3000     0.2998     0.3010
Z^0     -2.0000    -2.0000    -2.0013    -2.0152

Table 4. Refined parameter estimates of a synthetic cylinder for different noise levels σ. True values are R = 0.1, θ = -1.4, φ = 0.4, X0 = 0.3, and Z0 = -2.0.
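The refinement step can be sketched as a generic damped least squares loop. This is a minimal illustration of the Levenberg-Marquardt blending of the inverse-Hessian and steepest descent steps described above, not the authors' implementation; the damping schedule and function names are our own assumptions.

```python
import numpy as np

def levenberg_marquardt(residuals, jac, p0, n_iter=10, lam=1e-3):
    """Minimal Levenberg-Marquardt sketch of the refinement step.

    The damping factor lam blends the inverse-Hessian (Gauss-Newton)
    step and the steepest descent step: a large lam pushes the update
    toward the negative gradient, a small lam toward the full
    Gauss-Newton step, as described in the text.
    """
    p = np.asarray(p0, dtype=float)
    chi2 = np.sum(residuals(p) ** 2)
    for _ in range(n_iter):
        r = residuals(p)
        J = jac(p)
        g = J.T @ r                      # gradient of chi^2 / 2
        H = J.T @ J                      # Gauss-Newton Hessian approximation
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        chi2_new = np.sum(residuals(p + step) ** 2)
        if chi2_new < chi2:              # accept: move toward Gauss-Newton
            p, chi2, lam = p + step, chi2_new, lam * 0.1
        else:                            # reject: move toward steepest descent
            lam *= 10.0
    return p
```

With residuals r_i = (distance of point i from the axis) - R, the same loop refines, for example, the cylinder cross-section parameters (X0, Z0, R); ten iterations, as in the experiments above, typically suffice for well-initialized data.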

2.6 Model building and analysis
The final step in the modeling process begins with the computation of a modeled disparity map. This requires the determination of a new disparity value for every pixel location in the measurement coverage. Consequently, known object geometries can be used to determine ridge edges between adjacent patches in the disparity map. In the conical case of Fig. 2, the new disparity value at a pixel location is chosen to be the larger of the modeled conical and planar disparity values computed at that pixel location. The same maximum principle holds for the cylindrical case of Fig. 3. The detection and modeling of occluding contours is a subject of further investigation. The modeled disparity maps for the conical and cylindrical cases of Figs. 2 and 3 are shown in Figs. 4 and 5, respectively. An error analysis is performed by subtracting the modeled disparity map from the measured one. This gives a pointwise error estimate for the disparity at every pixel location. The differences between the measured and modeled left camera centered (X, Y, Z) coordinates can be computed for each coordinate and represented as functions of the pixel coordinates as well. For the illustrative quadric cases, the differences in disparity are shown in Figs. 6 and 7. A systematic error in disparity can be detected, especially in the cylindrical case, where the surrounding plane appears to be curved. Unfortunately, this is a consequence of an unknown error in the measurement process, since the curvature can already be seen in the measured disparity map and the measured plane is known to be highly planar. The same systematic error has also modified the results for the conical and cylindrical regions, causing the main differences between the measurements and the model. Improved segmentation might lead to more accurate results in the boundary pixels, where the differences seem to be larger than in the interior pixels.
The performance of the modeling techniques themselves has been well established with synthetic data, as was shown for different noise levels in Tables 1 through 4. The modeling techniques can be applied in quality control problems where one wants to check how well the measured object fits a right circular cone or cylinder. The error figures show how the object should be machine-tooled in order to obtain the demanded quadric shape. In real applications, the measurement system should be linked to a CAD environment which is connected to a machine tool. This would make it possible to perform the model analysis in the same environment where the product was designed. Other application areas lie in reverse engineering, where the objective is to build a model of an unknown object. The techniques in this paper cover, of course, only right circular cones and cylinders as possible object models.
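The model building and error analysis described in this section amount to a pixelwise maximum over the modeled surfaces followed by a subtraction from the measurement. A minimal sketch, assuming the disparity maps are stored as equally sized rasters (the array and function names are our own):

```python
import numpy as np

def build_model_and_error(d_quadric, d_plane, d_measured):
    """Combine modeled quadric and planar disparity maps with the
    maximum principle described in the text and return the modeled
    map together with the pointwise disparity error map."""
    # Larger disparity corresponds to the closer surface, so the
    # maximum selects the visible patch and forms the ridge edge.
    d_model = np.maximum(d_quadric, d_plane)
    # Pointwise error estimate: measured minus modeled disparity.
    error = d_measured - d_model
    return d_model, error
```

The ridge edge between the quadric and planar patches falls exactly where the two modeled maps cross, so no separate edge detection is needed for this surface pair.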

3. CONCLUSIONS
In this paper, techniques for the reconstruction of quadric surfaces from disparity measurements were presented. The different stages of a disparity map modeling procedure were described, with special contribution to the parameter estimation of cones and cylinders. The techniques in [14] for data-driven parameter estimation were used in the initial estimation of the cone parameters. For the initial estimation of the cylinder parameters, a new technique based on surface normal estimation, cross products of normals, and a fan of planes was developed. The parameter estimates were verified and refined by the Levenberg-Marquardt nonlinear least squares algorithm as in [7], with two formulas corrected (one for a cone and the same for a cylinder). The method was shown to give good results in two illustrative real data cases and in several tests with synthetic data. The paper is part of a larger work in which a stereo vision system is being developed for industrial 3-D quality control and reverse engineering applications. Consequently, the benefits of disparity data over range data were investigated.

In the course of this paper, the main advantage of disparity data was its raster format, which provided an easy way to handle and analyze data and models. In the determination of the geometrical parameters of cones and cylinders, no special advantages of disparity data were found, and we ended up applying existing techniques for range data. For the other stages of the modeling procedure, including multiview integration, we hope to make some contribution in the future.

4. ACKNOWLEDGMENTS
The author wishes to thank Prof. Henrik Haggren for continuous support and valuable comments during the research. Petteri Pontinen is acknowledged for the original disparity measurements and the Foundation of Technology in Finland for financing this research.

5. REFERENCES
[1] G. J. Agin and T. O. Binford, "Computer description of curved objects", Proc. 3rd Int. Joint Conf. Artificial Intell., pp. 629-640, August 1976.
[2] R. M. Bolle and D. B. Cooper, "On optimally combining pieces of information, with application to estimating 3-D complex-object position from range data", IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-8, pp. 619-638, September 1986.
[3] R. M. Bolle and B. C. Vemuri, "On three-dimensional surface reconstruction methods", IEEE Trans. Pattern Anal. Machine Intell., Vol. 13, pp. 1-13, January 1991.
[4] B. Cernuschi-Frias, "Orientation and location parameter estimation of quadric surfaces in 3-D from a sequence of images", Ph.D. dissertation, Brown Univ., Div. of Engineering, Providence, RI, May 1984.
[5] R. D. Eastman and A. M. Waxman, "Using disparity functionals for stereo correspondence and surface reconstruction", Comput. Vision, Graphics, and Image Processing, Vol. 39, pp. 73-101, 1987.
[6] S. F. El-Hakim and N. J. Pizzi, "Multicamera vision-based approach to flexible feature measurement for inspection and reverse engineering", Optical Engineering, Vol. 32, pp. 2201-2215, September 1993.
[7] P. J. Flynn and A. K. Jain, "Surface classification: hypothesis testing and parameter estimation", Proc. 1988 IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition, pp. 261-267, June 1988.
[8] H. Haggren, "On system development of photogrammetric stations for on-line manufacturing control", Acta Polyt. Scand., Civil Eng. and Building Construction Series, No. 97, 1992.
[9] H. Haggren, O. Jokinen, I. Niini, and P. Pontinen, "3-D digitizing of objects using stereo videography", Optical 3-D Measurement Techniques II, Eds. Gruen/Kahmen, pp. 91-97, Herbert Wichmann Verlag, Karlsruhe, 1993.
[10] W. Hoff and N. Ahuja, "Surfaces from stereo: integrating feature matching, disparity estimation, and contour detection", IEEE Trans. Pattern Anal. Machine Intell., Vol. 11, pp. 121-136, February 1989.
[11] I. Jolliffe, Principal Component Analysis, Springer-Verlag, New York, 1986.
[12] V. Koivunen and R. Bajcsy, "Geometric methods for building CAD models from range data", Geometric Methods in Computer Vision II, Proc. SPIE, Vol. 2031, pp. 205-216, 1993.
[13] H. Maitre and W. Luo, "Using models to improve stereo reconstruction", IEEE Trans. Pattern Anal. Machine Intell., Vol. 14, pp. 269-277, February 1992.
[14] T. S. Newman, P. J. Flynn, and A. K. Jain, "Model-based classification of quadric surfaces", CVGIP: Image Understanding, Vol. 58, pp. 235-249, September 1993.
[15] I. Niini, "Kuvien keskinäinen orientointi kaksiulotteisia projektiivisia muunnoksia käyttäen" (Relative orientation of two images using projective transformations), Master's thesis, Helsinki University of Technology, Espoo, 1990 (in Finnish).
[16] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes: The Art of Scientific Computing, Cambridge, 1986.
[17] K. Rao and R. Nevatia, "Generalized cone descriptions from sparse 3-D data", Proc. IEEE Conf. Comp. Vision and Pattern Recognition, pp. 256-263, June 1986.
[18] G. Taubin, "Algebraic nonplanar curve and surface estimation in 3-space with application to position estimation", Brown Univ., Div. of Engin., LEMS-43, Providence, RI.
[19] R. P. Wildes, "Direct recovery of three-dimensional scene geometry from binocular stereo disparity", IEEE Trans. Pattern Anal. Machine Intell., Vol. 13, pp. 761-774, August 1991.

Fig. 2. Measured disparity map over a surface consisting of conical and planar regions.

Fig. 3. Measured disparity map over a surface consisting of cylindrical and planar regions.

Fig. 4. Modeled disparity map of data in Fig. 2.

Fig. 5. Modeled disparity map of data in Fig. 3.

Fig. 6. Difference between the measured disparity map in Fig. 2 and the modeled one in Fig. 4.

Fig. 7. Difference between the measured disparity map in Fig. 3 and the modeled one in Fig. 5.
