A New Method for Uncalibrated Visual Servoing

M. Bonković*, A. Hace** and K. Jezernik**

* Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split, Croatia
** Faculty of Electrical Engineering and Computer Science, University of Maribor, Slovenia
[email protected], [email protected], [email protected]

Abstract—The paper introduces a novel method for visual servoing based on a generalization of quasi-Newton methods for nonlinear optimization. The method calibrates a linear model on several previous iterates. The difference from existing approaches is that we do not require the linear model to interpolate the function; instead, we identify the linear model that is as close as possible to the nonlinear function in the least-squares sense. The new system is less sensitive to noise and exhibits faster convergence than conventional quasi-Newton methods. The theoretical results are verified experimentally.

I. INTRODUCTION

Information acquired from cameras for guiding a robot system has been one of the main topics of robotic research for decades [1]. Demonstrated applications of vision-guided control span a broad range of human activity and include well-founded theoretical [2]-[6] as well as practical [7]-[8] knowledge of visual servoing. Nevertheless, owing to the considerable improvement in hardware performance, the research is still active and new applications appear frequently [9]. As a result of our intention to realize a vision-guided system, we introduce a novel approach based on a generalization of secant methods for solving nonlinear systems of equations [12]. The new method is less sensitive to noise and exhibits faster convergence than conventional quasi-Newton methods. The theoretical results have been verified experimentally and show a performance improvement over previous methods.

Vision-based robot control has been introduced to increase the flexibility and the accuracy of robot systems [1]. Using vision, a robot can manipulate its environment. A formal classification [2] differentiates such systems according to the domain of the error signal. If the error is defined in 3D (task-space) coordinates, the system is called position-based (PBVS). If it is defined directly in terms of image features, the system is called image-based (IBVS). The specification of an image-based visual servo task involves determining an appropriate error function f, such that when the task is achieved, f = 0 [1]. As the robot control input is usually defined in joint or task-space coordinates, it is necessary to relate the changes in visual appearance (Δf) to the changes in joint or task space (Δq) through the image Jacobian matrix J(q):

Δf = J(q) Δq    (1)


Identifying the Jacobian, for example in the typical problem of pixel-to-pixel correspondences, depends on the camera calibration parameters, on depth estimation, and on the number of feature parameters relative to the number of robot degrees of freedom to be controlled. The problem can also be solved with a model-independent approach, which covers visual servoing algorithms that are independent of the hardware (robot and camera) configuration. One of the first and still very popular solutions was offered by Jägersand [10], who formulated visual servoing as a nonlinear least-squares problem solved by a quasi-Newton method using Broyden Jacobian estimation.

In visual servoing we are interested in determining the manipulator velocity required to achieve some desired value of the image features. This requires solving the system given by (1) [1]. If the number of image features differs from the number of task degrees of freedom, a least-squares solution can be computed for the Jacobian parameter estimation [3]. One of the main drawbacks of under-constrained IBVS systems (systems in which the number of feature parameters is smaller than the number of robot degrees of freedom) lies in the fact that some components of the object velocity are unobservable (for example, those belonging to a pure rotation around the optical axis). This means that in a real situation it is sometimes simply not possible to fulfill the task under the minimum-energy condition or along the shortest path in image space, which is a straight line. Numerous hybrid methods are described in [4], [5], which eliminate this drawback with varying degrees of success. If the target is moving, the model error is a function not only of the robot pose but also of the pose of the moving object [11].

In this paper we propose a method for IBVS based on a generalization of secant methods for solving nonlinear systems of equations [12], which controls the robot joints so as to position the end effector at a static point or to track a moving target along an unknown trajectory. The main contributions of the paper are as follows:

• A novel nonlinear optimization technique is introduced for the visual servoing control problem. It combines the classic Broyden estimation technique with improvements that exploit the information acquired from past iterates to calibrate the model of the nonlinear function as well as possible. The technique is useful for both static and moving target tracking and is more robust in the presence of noise.
• The developed algorithm is experimentally verified for uncalibrated vision-guided robotic control.


We also assume that the robot kinematic model and the camera calibration parameters are unknown. The system is described and modeled exclusively by a linear model, for which we maintain a "population" of iterates that "covers" the system and moving-object dynamics. In the remainder of the paper, the control algorithm is explained in Section II, Section III gives an overview of the experimental setup and the achieved results, and Section IV concludes the paper.

II. VISUAL SERVOING CONTROLLER

A. The Estimation Algorithm

The visual servoing problem has been formulated as a nonlinear least-squares problem [9], [10], and it can be solved using quasi-Newton methods, which consider at each iteration the linear model

L_k(x; B_k) = F(x_k) + B_k (x − x_k)    (2)

The model approximates F(x) in the neighborhood of x_k and computes x_{k+1} as a solution of the linear system L_k(x; B_k) = 0. Quasi-Newton methods can be summarized as methods based on the equation

x_{k+1} = x_k − B_k^{-1} F(x_k)    (3)

followed by the computation of B_{k+1}. For the pure Newton method, B_k is the Jacobian of F evaluated at x_k, that is, an n×m matrix whose entry (i, j) is ∂F_i/∂x_j [12]:

B_k = J(x_k) = ∇F(x_k)^T    (4)
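To fix ideas, the quasi-Newton framework of (2)-(4) amounts to the short loop below. This is only a generic NumPy sketch, not the authors' code; update_B is a placeholder for whichever rule supplies B_{k+1}.

```python
import numpy as np

def quasi_newton(F, x0, B0, update_B, tol=1e-8, max_iter=100):
    """Generic quasi-Newton iteration (3), with B_{k+1} supplied by update_B(B, s, y)."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:                 # converged: F(x) ~ 0
            break
        x_new = x - np.linalg.solve(B, Fx)           # step (3) without forming B_k^{-1}
        B = update_B(B, x_new - x, F(x_new) - Fx)    # secant data s_k, y_k
        x = x_new
    return x
```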

Broyden [14] proposed a class of quasi-Newton methods based on the secant equations, imposing that the linear model L_{k+1} exactly match the nonlinear function at the iterates x_k and x_{k+1}, that is

L_{k+1}(x_k) = F(x_k),   L_{k+1}(x_{k+1}) = F(x_{k+1})    (5)

Subtracting these two equations and defining y_k = F(x_{k+1}) − F(x_k) and s_k = x_{k+1} − x_k, we obtain the classical secant equation

B_{k+1} s_k = y_k    (6)

If the dimension n is strictly greater than 1, there is an infinite number of matrices B_{k+1} satisfying (6). Applying the "least-change secant update" proposed by Broyden leads to the following update formula:

B_{k+1} = B_k + (y_k − B_k s_k) s_k^T / (s_k^T s_k)    (7)

This method has proved successful: Jägersand [10] demonstrates the robust properties of this type of control, and Piepmeier [11] develops a dynamic Broyden Jacobian estimation for moving-target tracking.
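The rank-one update (7) is a one-liner in practice. The following NumPy sketch (variable names are ours) could serve as the update_B argument of the loop sketched above.

```python
import numpy as np

def broyden_update(B, s, y):
    """Least-change secant (Broyden) update (7): B_{k+1} from B_k, s_k and y_k."""
    s = s.reshape(-1, 1)                 # step s_k as a column vector
    y = y.reshape(-1, 1)                 # residual change y_k as a column vector
    denom = float(s.T @ s)
    if denom < 1e-12:                    # negligible step: keep the old estimate
        return B
    return B + (y - B @ s) @ s.T / denom
```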

However, as argued in [12], the philosophy of interpolating the linear model at only two iterates and forgetting all the information given by the previous iterates may be too restrictive, so that the arbitrariness introduced by the method plays an overly important role. For that reason, a so-called "population-based" generalization has been introduced, which calibrates the linear model on several previous iterates. In contrast to existing approaches, in which the linear model is forced to interpolate the function, the population-based generalization identifies the linear model that is as close as possible to the nonlinear function in the least-squares sense. At each iteration, a finite population of iterates x_0, ..., x_{k+1} is maintained. The method also belongs to the quasi-Newton framework, where B_{k+1} is computed as

B_{k+1} = arg min_J ( Σ_{i=0}^{k} ‖ ω_i^{k+1} F(x_i) − ω_i^{k+1} L_{k+1}(x_i; J) ‖_2^2 + ‖ Γ J − Γ B_{k+1}^0 ‖_F^2 )    (8)

where L_{k+1} is defined by (2) and B_{k+1}^0 ∈ ℝ^{n×n} is an a priori approximation of B_{k+1}. The role of the second term is to overcome the under-determination of the least-squares problem based on the first term and also to control the numerical stability of the method. The matrix Γ contains weights associated with the arbitrary term B_{k+1}^0, and the weights ω_i^{k+1} ∈ ℝ are associated with the previous iterates. Equation (8) can be rewritten in matrix form as follows:

B_{k+1} = arg min_J ‖ J [S_{k+1}  I_{n×n}] diag(Ω, Γ) − [Y_{k+1}  B_{k+1}^0] diag(Ω, Γ) ‖_F^2    (9)

where Ω ∈ ℝ^{(k+1)×(k+1)} is a diagonal matrix with the weights ω_i^{k+1}, i = 0, ..., k, on its diagonal. The normal equations of the least-squares problem lead to the following formula:

B_{k+1} = B_{k+1}^0 + (Y_{k+1} − B_{k+1}^0 S_{k+1}) Ω^2 S_{k+1}^T ( Γ^2 + S_{k+1} Ω^2 S_{k+1}^T )^{-1}    (10)


where Y_{k+1} = (y_k, y_{k−1}, ..., y_0) and S_{k+1} = (s_k, s_{k−1}, ..., s_0). The role of the a priori matrix B_{k+1}^0 is to overcome the possible under-determination of problem (8). The weights ω_i^{k+1} capture the relative importance of each iterate in the population, and the matrix Γ captures the importance of the arbitrary term defined by B_{k+1}^0 for the identification of the linear model. The weights ω_i^{k+1} ∈ ℝ have to be finite and such that Γ^2 + S_{k+1} Ω^2 S_{k+1}^T is safely positive definite. To ensure this property we need a technique that simultaneously overcomes the under-determination and guarantees numerical stability. Such a problem can be solved by a method based on the modified Cholesky factorization [13] of the square matrix S_{k+1} Ω^2 S_{k+1}^T; after the factorization, the matrix Γ can be constructed such that Γ^2 + S_{k+1} Ω^2 S_{k+1}^T is a safely positive definite matrix. An extensive convergence analysis of the method can be found in [12], where it is shown that, for most of the numerical tests performed by the authors, this class of algorithms exhibits faster convergence and greater robustness than quasi-Newton methods.
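A minimal NumPy sketch of the closed-form update (10) is given below. For brevity it builds Γ directly as γI rather than through the modified Cholesky factorization of [13], so it should be read as an illustration under that simplifying assumption; the names are ours.

```python
import numpy as np

def population_update(B0, S, Y, omega, gamma):
    """Population-based update (10).

    B0    : a priori approximation of B_{k+1} (n x m)
    S     : m x p matrix whose columns are the past steps s_k, s_{k-1}, ..., s_0
    Y     : n x p matrix whose columns are the residual changes y_k, ..., y_0
    omega : length-p weight vector (the w_i^{k+1})
    gamma : scalar, Gamma = gamma * I (simplified safeguard instead of [13])
    """
    Omega2 = np.diag(np.asarray(omega, dtype=float) ** 2)
    Gamma2 = (gamma ** 2) * np.eye(S.shape[0])
    M = Gamma2 + S @ Omega2 @ S.T               # must be safely positive definite
    return B0 + (Y - B0 @ S) @ Omega2 @ S.T @ np.linalg.inv(M)
```

In the visual servoing controller below, B0 is the previous Jacobian estimate and the columns of S and Y hold the stored Δq and Δf pairs.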

Figure 1. Visual servoing block diagram

B. The control scheme

In this paper we are interested in robot visual control in a fixed-camera configuration. Fig. 1 shows the structure of the visual servo system used in this paper. Here, so-called image-based visual servoing is considered, in which the error signal, measured directly in the image, is mapped to the robot actuators' command input. The visual controller is constructed so as to determine the joint velocities q̇ as

q̇ = J⁺ K e    (11)

where J⁺, K, and e = f_d − f are the pseudoinverse of the Jacobian matrix J that relates joint coordinates to image features, the control gain, and the error signal obtained by comparing the desired and current image feature parameters, respectively. The robot Jacobian gives the relation between the robot joint velocities and the velocity of the end-effector in Cartesian space:

ẋ = J_R q̇    (12)

In order to implement the visual servo controller, an image Jacobian is also required. The image Jacobian describes the differential relation between the image features and the position and orientation of the robot end effector [1], as in (13):

ḟ = J_I ẋ    (13)

The Jacobian matrix J is the compound of the robot and image Jacobians,

J = J_I J_R    (14)

Thus, the relation between joint coordinates and image features is given by (15):

ḟ = J q̇    (15)

If expression (11) is multiplied by J, we get

J q̇ = J J⁺ K e    (16)

which, after rearrangement, finally yields the decoupled closed-loop dynamics of first order (17):

ḟ + K f = K f_d    (17)

However, the compound Jacobian (14) depends on the system calibration parameters, which are hard to obtain accurately in practical applications. In the proposed visual servoing scheme, the Jacobian J is therefore obtained by an estimation process. The Broyden algorithm can be used for on-line estimation of the Jacobian matrix; the update equation of its estimate Ĵ is given by (18),

Ĵ_{k+1} = Ĵ_k + η (Δf − Ĵ_k Δq) Δq^T / (Δq^T Δq)    (18)

where the adaptation constant η is introduced in order to overcome the noise problems of the Broyden method [11]. In this paper, we propose to use instead the estimation algorithm presented in Section II.A. Equation (10) can be rewritten as (19),

Ĵ_{k+1} = Ĵ_k + (ΔF − Ĵ_k ΔQ) Ω^2 ΔQ^T ( Γ^2 + ΔQ Ω^2 ΔQ^T )^{-1}    (19)

where ΔF and ΔQ contain the current and past values of Δf and Δq, respectively, and the Jacobian estimate from the previous iterate is used as the a priori approximation.
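The complete loop of Fig. 1 then consists of the control law (11) plus one of the two estimators, (18) or (19). The sketch below shows one servo cycle with the Broyden update (18); swapping the last lines for the population_update sketch given earlier yields the proposed controller. The I/O helpers (get_features, send_joint_velocity) are hypothetical placeholders, not interfaces from the paper.

```python
import numpy as np

def servo_cycle(J_hat, f, q, f_des, K, eta, dt, get_features, send_joint_velocity):
    """One image-based visual servoing cycle: control law (11) + Broyden update (18)."""
    e = f_des - f                                   # image-space error e = f_d - f
    q_dot = K * np.linalg.pinv(J_hat) @ e           # joint velocity command (11)
    send_joint_velocity(q_dot)                      # handed to the joint servo loop

    q_new = q + q_dot * dt                          # joint angles after one visual sample
    f_new = get_features()                          # features measured in the next frame
    df, dq = f_new - f, q_new - q
    denom = float(dq @ dq)
    if denom > 1e-9:                                # Broyden Jacobian update (18)
        J_hat = J_hat + eta * np.outer(df - J_hat @ dq, dq) / denom
    return J_hat, f_new, q_new
```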

Figure 2. Experimental system

Figure 3. Planar 2DOF parallel manipulator

III. EXPERIMENTAL RESULTS

A. Experimental system

In order to verify the proposed method, extensive experiments were carried out at the Institute of Robotics, University of Maribor. The visual control scheme depicted in Fig. 1 was applied to the experimental system shown in Fig. 2. The system consists of an in-house-built 2-DOF planar parallel manipulator with four revolute joints and a camera that provides position information of the robot tip and the target in the robot workspace. The robot direct kinematics is given by

x = L [ cos(q_1) + cos(q_2) ; sin(q_1) + sin(q_2) ]    (20)

where q_1, q_2 are the robot joint angles and x is the vector of robot tip coordinates in the Cartesian world coordinate frame (Fig. 3). L = 0.4 m is the length of a single robot link. The translation and rotation of the camera frame with respect to the robot world coordinate base frame is given by the RPY homogeneous transformation matrix R_C in (21), i.e., the camera frame is rotated around the x-axis by 135° and translated by 1.2 m in both the y- and z-directions:

R_C = [ 1     0       0      0   ;
        0   −1/√2   −1/√2   1.2  ;
        0    1/√2   −1/√2   1.2  ;
        0     0       0      1   ]    (21)
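For completeness, (20) and (21) can be evaluated directly. The sketch below only restates the published geometry (L = 0.4 m, 135° rotation about x, 1.2 m translation along y and z) with helper names of our own; it assumes R_C maps camera coordinates to world coordinates, so its inverse expresses a world point in the camera frame.

```python
import numpy as np

L = 0.4                                    # single link length [m]

def tip_position(q1, q2):
    """Direct kinematics (20): robot tip position in the world frame (planar, z = 0)."""
    return L * np.array([np.cos(q1) + np.cos(q2),
                         np.sin(q1) + np.sin(q2),
                         0.0])

c = np.sqrt(2.0) / 2.0                     # cos/sin magnitude of the 135 deg rotation
R_C = np.array([[1.0, 0.0, 0.0, 0.0],      # homogeneous transform (21): camera w.r.t. base
                [0.0,  -c,  -c, 1.2],
                [0.0,   c,  -c, 1.2],
                [0.0, 0.0, 0.0, 1.0]])

def tip_in_camera_frame(q1, q2):
    """World tip position expressed in camera coordinates (assuming p_cam = R_C^{-1} p_world)."""
    p_world = np.append(tip_position(q1, q2), 1.0)
    return (np.linalg.inv(R_C) @ p_world)[:3]
```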

The control system for our robot visual servoing experiments consists of two personal computers. The image processing node and the robot controller are interconnected by 100 Mbit/s Ethernet, and TCP/IP is used for data exchange between the two PC control nodes. The robot controller is implemented on a dSPACE DS1102 motion control board, which executes the joint servo control algorithms with a 1 ms period. The image processing node grabs the image from the camera, extracts the image features, and executes the visual control algorithms. We employed a low-cost CCD camera (Hitachi KP-D50) and a low-cost frame grabber (The Imaging Source DFG-LC1) that provided a digital image of 640×480 pixel resolution refreshed at a rate of f_cam = 30 Hz.

Figure 4. Target movement

B. Task description

The image processing node generates the target point used in the visual task definition within the image. When the robot tip reached the target, the target point was moved to another position in order to make the robot tip travel through the whole robot workplane. The target point positions corresponded to the corners of a square in the image plane. The projection of the target positions onto the robot workplane is depicted in Fig. 4. The initial robot tip position is marked "0", and the corresponding robot joint angles are q_1 = 30°, q_2 = 150°. The initial target point position is marked "1", and the subsequent positions are marked "2", "3", and "4". The target point positions were generated in the order "1"-"2"-"3"-"4"-"1", and finally the robot was commanded to return to the initial position marked "0". Besides generating the target positions, the image processing node also implements a linear interpolator that generates the reference trajectory within the image; the target positions are the input setpoints of the image interpolator. In the experiments presented in this paper, the interpolation speed was set to 2 pixel × f_cam = 60 pixel/s.
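As an illustration only (not the code of the image processing node), the reference interpolator described above can be sketched as a generator that walks a straight image-plane segment at the stated speed of 2 pixel × f_cam = 60 pixel/s:

```python
import numpy as np

F_CAM = 30.0                       # camera frame rate [Hz]
SPEED = 2.0 * F_CAM                # interpolation speed: 2 pixels per frame = 60 pixel/s

def interpolate_reference(start, goal):
    """Yield one image-plane reference point per camera frame along the straight
    segment from `start` to `goal` (pixel coordinates)."""
    start = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    step = SPEED / F_CAM                            # pixels advanced per frame
    n_frames = max(1, int(np.ceil(np.linalg.norm(goal - start) / step)))
    for i in range(1, n_frames + 1):
        yield start + (goal - start) * i / n_frames
```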

marked with "1", and the consequent positions are marked "2", "3", and "4". The target point positions was generated in the following order "1"-"2"-"3"-"4"-"1", and finally, the robot was commanded to return to the initial position that is marked by "0". Beside the target position generation, the image processing node also implements a linear interpolator in order to generate reference trajectory within the image. Target positions represent input setpoints in the image intepolator. In the experiments presented in this paper, the interpolation speed was set to the following value 2pixel*fcam=60pixel/s. C. Results Extensive experiments which we carried out showed that the Broyden method is very sensitive to the noise which is present in the real visual servoing application. The application of the adaptation constant Ș improved robustness of the method, however, repeatability of the experiments retained rather low. The proposed method

628

performed much more robust then the Broyden method. Noise sensitivity was quite low. Furthermore, repeatability was relatively high in comparison with the Broyden method. Fig.5 shows the experimental results obtained by the Broyden method (a, b) and the proposed method (c,d). In the figure, both the contour of the robot tip in the u-v image plane (a, c) and u and v feature tracking (b,d) are presented. In the presented experimental results, both methods apply same value of the visual servo controller gain K=0.05. The Broyden adaptation constant was set as Ș=0.2 that allowed for smooth motion of the robot. The history length of the proposed population-based method was set to 20. We considered the newest data most important, and the oldest data least important, i.e. the values of the diagonal elements of the matrix ȍ were determined by such that Ȧ1=1, …, Ȧ10=0.5,…, Ȧ20=0 (the weight values linearly decreased from 1 to 0). The matrix ī was determined arbitrary such that ī=ȖI, where Ȗ=0.01.
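Written out, the parameter choices above amount to the following few lines; this only restates the reported settings in the notation of the earlier population_update sketch, where ΔQ and ΔF would hold the stored Δq and Δf columns, newest first.

```python
import numpy as np

K_GAIN  = 0.05                      # visual servo controller gain K
ETA     = 0.2                       # Broyden adaptation constant (comparison runs)
HISTORY = 20                        # population (history) length
GAMMA   = 0.01                      # Gamma = gamma * I

# Diagonal of Omega: weights decreasing linearly from 1 (newest pair) to 0 (oldest pair)
omega = np.linspace(1.0, 0.0, HISTORY)

# J_hat = population_update(J_hat, dQ, dF, omega, GAMMA)
```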

The improvement in task performance obtained with the proposed method over the Broyden method is obvious. The contour error in the image plane that appeared in the lower-right corner with the Broyden method, due to the slow convergence of the Jacobian matrix estimate, evidently disappeared when the proposed estimation method was applied in the visual controller. However, it is worth noticing that the proposed method involves considerably more parameters, which are hard to select.

IV. CONCLUSION

The problem of visual servoing in unstructured environments, without using any a priori camera or kinematic models, has proven hard. Unfortunately, there are many such environments where robots would be useful. In this paper we proposed a new method for solving this type of problem, more specifically, a novel Jacobian estimation method for IBVS robotic systems. It has been shown that the classical Broyden Jacobian estimation, being a secant approximation of a nonlinear model, is sensitive to noise and sometimes numerically unstable due to the arbitrariness introduced by the method itself. Therefore, a new technique has been applied, which uses the history of iterates to prevent this drawback of the classic quasi-Newton methods. The objective function has an additional term, which serves to overcome the under-determination of the least-squares problem based on the first term and also to control the numerical stability of the method. The method has been experimentally evaluated and compared with the Broyden method. We have carried out extensive experiments and found that, for a four-bar planar manipulator controlled in a fixed-camera configuration, the new visual servo controller is not sensitive to noise. The experimental results have demonstrated improved tracking and stability and confirmed the effectiveness of the proposed method. Although the problem of a moving target has not been explicitly addressed, the system performs well both for static-target approaching and for moving-target tracking. Moreover, the improvement in performance over the popular Broyden method is evident.

REFERENCES

[1] S. Hutchinson, G. D. Hager, and P. Corke, "A Tutorial on Visual Servo Control", IEEE Trans. on Robotics and Automation, Vol. 12, No. 5, Oct. 1996.
[2] A. C. Sanderson and L. E. Weiss, "Image-based visual servo control using relational graph error signals", Proc. IEEE, pp. 1074-1077, 1980.
[3] K. Hosoda and M. Asada, "Versatile visual servoing without knowledge of true Jacobian", in Proc. IEEE/RSJ/GI Int. Conf. Intelligent Robots and Systems, Munich, Germany, Sept. 1994, pp. 186-193.
[4] A. Castano and S. A. Hutchinson, "Visual Compliance: Task-directed visual servo control", IEEE Trans. Robot. Automat., Vol. 10, pp. 334-342, June 1994.
[5] N. Cowan and J. D. Weingarten, "Visual Servoing via Navigation Functions", IEEE Trans. Robot. Automat., Vol. 18, pp. 521-533, Aug. 2002.
[6] E. Malis, F. Chaumette, and S. Boudet, "2-1/2-D Visual Servoing", IEEE Trans. Robot. Automat., Vol. 15, Apr. 1999.
[7] J. A. Gangloff and M. F. Mathelin, "Visual Servoing of a 6-DOF Manipulator for Unknown 3-D Profile Following", IEEE Trans. on Robotics and Automation, Vol. 18, No. 4, pp. 511-520, Aug. 2002.
[8] J. Stavnitzky and D. Capson, "Multiple Camera Model-Based Visual Servo", IEEE Trans. on Robotics and Automation, Vol. 16, No. 6, pp. 732-739, Dec. 2000.
[9] D. C. Schuurman and D. W. Capson, "Robust Direct Visual Servo Using Network-Synchronized Cameras", IEEE Trans. on Robotics and Automation, Vol. 20, No. 2, pp. 319-334, Apr. 2004.
[10] M. Jägersand and R. Nelson, "On-line Estimation of Visual-Motor Models using Active Vision", Proc. ARPA Image Understanding Workshop, 1996.
[11] J. A. Piepmeier, G. V. McMurray, and H. Lipkin, "Uncalibrated Dynamic Visual Servoing", IEEE Trans. on Robotics and Automation, Vol. 20, No. 1, pp. 143-147, Feb. 2004.
[12] F. Crittin and M. Bierlaire, "A generalization of secant methods for solving nonlinear systems of equations", in Proc. 3rd Swiss Transport Research Conference, March 19-21, 2003.
[13] R. B. Schnabel and E. Eskow, "A new modified Cholesky factorization", SIAM Journal on Scientific and Statistical Computing, Vol. 11, pp. 1136-1158, 1990.
[14] C. G. Broyden, "A class of methods for solving nonlinear simultaneous equations", Mathematics of Computation, Vol. 19, pp. 577-593, 1965.