Recurrent Neural Networks for Minimum Infinity-Norm Kinematic Control of Redundant Manipulators

Han Ding, Member, IEEE, and Jun Wang, Senior Member, IEEE

Abstract—This paper presents two neural network approaches to the minimum infinity-norm solution of the velocity inverse kinematics problem for redundant robots. Three recurrent neural networks are applied for determining a joint velocity vector whose maximum absolute component is minimal among all joint velocity vectors corresponding to the desired end-effector velocity. In each proposed approach, two cooperating recurrent neural networks are used. The first approach employs two Tank–Hopfield networks for linear programming. The second approach employs two two-layer recurrent neural networks for quadratic programming and linear programming, respectively. Both the minimum 2-norm and the minimum infinity-norm joint velocity vectors can be obtained from the outputs of the recurrent neural networks. Simulation results demonstrate that the proposed approaches are effective, with the second approach being better in terms of accuracy and optimality.

Index Terms—Inverse kinematics, minimum infinity-norm, recurrent neural networks, redundant manipulators.

Manuscript received October 29, 1997; revised July 15, 1998 and November 8, 1998. This work was supported in part by the National Distinguished Youth Scientific Fund of China under Grant 59725514 and by the Hong Kong Grants Council under Grant CUHK4165/98E. H. Ding is with the School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China. He is also with the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Kowloon, Hong Kong. J. Wang is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. Publisher Item Identifier S 1083-4427(99)03334-2.

I. INTRODUCTION

THE USE of kinematically redundant robots is expected to increase dramatically in the future because of their ability to avoid internal singular configurations [1] and obstacles [2] and to optimize dynamic performance [3]. Solving the inverse kinematics problem of redundant manipulators is of vital importance in robotics, and much effort has been devoted to this area. The Euclidean norm (or 2-norm) is widely used because of its analytical tractability. One common requirement is to obtain the lowest possible magnitudes of the joint velocities that will perform a given task. The minimum 2-norm solution is not an ideal choice for satisfying this requirement, because minimizing the 2-norm (i.e., the sum of squares) of the joint velocities does not necessarily minimize the magnitudes of the individual joint velocities. This is undesirable in situations where the individual joint velocities are of primary interest. Nevertheless, there has been very little attention to the minimization of the infinity-norm solution. Deo and Walker [4], [5] investigated the use of the infinity norm in formulating optimization measures for computing the inverse kinematics of redundant manipulators. Their simulation results show that the infinity norm is a better measure of performance than the 2-norm when a solution with low individual components is desired. However, the minimization of the infinity norm is very time-consuming.

In recent years, new interest in neural network research has been generated by the need to reduce the computational complexity of motion planning and control for manipulators [6]. Neural networks have been studied to model the forward and inverse kinematic mappings of robot manipulators [7]. Guo and Cherkassky [8] proposed a closed-loop dynamic system for the control of redundant manipulators, in which a local minimum of a quadratic energy function is reached by a process of re-estimation of the neuron weights; this method was experimentally shown to converge more effectively than the error-backpropagation solution. Simon [9] discussed the optimization neural network and its application to robot path planning. Lee and Kil [10] proposed a method for generating an inverse kinematic solution of a redundant arm based on the iterative update of joint vectors; their neural network consists of a feedforward network and a feedback network forming a recurrent loop. Dermatas et al. [11] presented an error-backpropagation algorithm to solve the inverse kinematics problem of redundant manipulators. Mao and Hsia [12] investigated a neural network approach to solving the inverse kinematics problem of redundant manipulators in an environment with obstacles. These studies have demonstrated the effectiveness of neural networks for the trajectory planning and control of robots. Wang et al. [13] developed a two-layer recurrent neural network for the pseudo-inverse control of redundant manipulators that can deal with the case in which the Jacobian is at or near a singular configuration. Ding and Tso [14] presented the Tank–Hopfield (TH) network for redundancy resolution of manipulators.

In this paper, our attention is focused on computing the minimum infinity-norm joint velocity vector corresponding to a given end-effector velocity (i.e., a joint velocity vector whose largest component has minimum absolute value) by using recurrent neural networks. This enables more direct monitoring and control of the magnitudes of the individual joint velocities than does the minimization of the sum of squares of the components.

First, we map the minimum infinity-norm solution into two recurrent neural networks. The proposed recurrent neural networks provide parallel and distributed computational models for minimizing the infinity norm in real time.

This paper is organized in six sections. The next section discusses the mathematical framework for redundant manipulators. In Section III, the infinity-norm optimization problem is formulated. Section IV presents the recurrent neural network models. In Section V, simulation results obtained using the proposed neural networks are presented, and finally Section VI provides the conclusions.

II. BACKGROUND

The end-effector velocity vector $\dot{x} \in R^m$ and the joint velocity vector $\dot{\theta} \in R^n$ of a robotic manipulator are related through the Jacobian matrix of the direct kinematic equation as

    \dot{x} = J(\theta)\,\dot{\theta}                                        (1)

where $J(\theta) \in R^{m \times n}$ is the end-effector Jacobian of the manipulator. The inverse kinematics of a redundant manipulator poses a challenging problem because in that case $m < n$ and hence, for any given end-effector velocity, there exists an infinite number of solutions. The Moore–Penrose pseudo-inverse $J^+$ of the Jacobian matrix is commonly employed to compute the joint velocities of a redundant manipulator:

    \dot{\theta} = J^+ \dot{x} + (I - J^+ J) z                               (2)

where $J^+ \dot{x}$ gives the minimum 2-norm solution and $(I - J^+ J) z$ is the null-space joint velocity corresponding to the instantaneous self-motion of the manipulator: it does not cause any end-effector motion, since $(I - J^+ J) z$ lies in the null space of $J$, i.e.,

    J (I - J^+ J) z = 0                                                      (3)

where $0$ is the zero vector. If the matrix $J$ has full rank [i.e., $\mathrm{rank}(J) = \min\{m, n\}$], then $J J^T$ or $J^T J$ is nonsingular. It is well known that $J^+$ can then be determined as

    J^+ = J^T (J J^T)^{-1},     if m < n
    J^+ = J^{-1},               if m = n                                     (4)
    J^+ = (J^T J)^{-1} J^T,     if m > n.

In our previous work [14], the Tank–Hopfield network [15] is adopted for computing $J^+$ in the full-rank case. If the matrix $J$ is rank-deficient, one closed-form solution for $J^+$ is [16]

    J^+ = \lim_{\epsilon \to 0^+} J^T (J J^T + \epsilon I)^{-1},    if m \le n    (5)
    J^+ = \lim_{\epsilon \to 0^+} (J^T J + \epsilon I)^{-1} J^T,    if m \ge n.

For redundant manipulators, there is an infinite number of joint velocities for any given end-effector velocity. This redundancy can be utilized to determine the best joint velocity with respect to a specified optimality criterion. The manipulator velocity ratio (MVR) can be defined as the ratio of the norm of the end-effector velocity to the norm of the joint velocity [1], i.e.,

    \mathrm{MVR} = \|\dot{x}\| / \|\dot{\theta}\|                            (6)

where $\|\dot{x}\|^2 = \dot{\theta}^T J^T J \dot{\theta}$ and $J^T J \in R^{n \times n}$ is a symmetric positive semidefinite matrix having nonnegative eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m > 0$ and $n - m$ zero eigenvalues. Thus, the MVR is bounded according to

    0 \le \mathrm{MVR} \le \sqrt{\lambda_1}.                                 (7)

The inequality can be written in terms of the largest singular value $\sigma_1 = \sqrt{\lambda_1}$ of the end-effector Jacobian as

    \mathrm{MVR} \le \sigma_1.                                               (8)

The MVR, bounded by $\sigma_1$, can be maximized so that the joint motions are mapped into the end-effector motion most efficiently. We define a performance criterion $p(\theta)$ to be the square of the MVR. The null space of the Jacobian is utilized to increase $p(\theta)$ using the gradient projection method [1]:

    \dot{\theta} = J^+ \dot{x} + k (I - J^+ J) \nabla p(\theta)              (9)

where the coefficient $k$ is a real scalar constant that determines the convergence rate of self-motion to a locally optimal configuration, and $\nabla p(\theta)$ is the gradient vector of the performance function $p(\theta)$. Let

    \dot{\theta}_2 = J^+ \dot{x}                                             (10)
    \dot{\theta}_N = k (I - J^+ J) \nabla p(\theta).                         (11)

Then (9) becomes

    \dot{\theta} = \dot{\theta}_2 + \dot{\theta}_N.                          (12)

The first term of (12) represents the minimum-norm joint velocity that minimizes $\|\dot{\theta}\|_2$. The second term is used to accomplish a subtask by optimizing the performance function $p(\theta)$, and it does not contribute to the end-effector velocity $\dot{x}$.
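As a concrete reference point for (2), (4), (5), and (12), the following Python sketch computes the minimum 2-norm joint velocity and an optional gradient-projected self-motion term with NumPy. The function name and arguments are illustrative choices, not from the paper.

    import numpy as np

    def pseudo_inverse_solution(J, xdot, grad_p=None, k=0.0):
        """Joint velocity from (2): qdot = J^+ xdot + k (I - J^+ J) grad_p.

        J: m x n Jacobian; xdot: desired end-effector velocity;
        grad_p: gradient of a performance function (optional self-motion term).
        """
        n = J.shape[1]
        J_pinv = np.linalg.pinv(J)          # Moore-Penrose pseudo-inverse, cf. (4)-(5)
        qdot = J_pinv @ xdot                # minimum 2-norm solution, cf. (10)
        if grad_p is not None:
            P = np.eye(n) - J_pinv @ J      # null-space projector (I - J^+ J)
            qdot = qdot + k * (P @ grad_p)  # gradient projection term, cf. (9)/(12)
        return qdot

Note that numpy.linalg.pinv handles the rank-deficient case internally, in the same spirit as the limit formulas in (5).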

III. INFINITY-NORM OPTIMIZATION PROBLEM

For any vector $v = (v_1, v_2, \ldots, v_n)^T \in R^n$, the infinity-norm is defined as

    \|v\|_\infty = \max_{1 \le i \le n} |v_i|.                               (13)

In order to find a minimum infinity-norm inverse kinematics solution, we should solve the problem

    minimize    \max_{1 \le i \le n} |e_i^T \dot{\theta}|
    subject to  J \dot{\theta} = \dot{x}                                     (14)

where $e_i$ is the $i$th column of the $n \times n$ identity matrix. In view of the decomposition (12), in which the null-space component $\dot{\theta}_N$ can be any vector satisfying $J \dot{\theta}_N = 0$ [cf. (3)], (14) becomes

    minimize    \max_{1 \le i \le n} |e_i^T (\dot{\theta}_2 + \dot{\theta}_N)|
    subject to  J \dot{\theta}_N = 0.                                        (15)

Let

    s = \max_{1 \le i \le n} |e_i^T (\dot{\theta}_2 + \dot{\theta}_N)| = \|\dot{\theta}_2 + \dot{\theta}_N\|_\infty.    (16)


Equation (15) can be written as

    minimize    s                                                            (17)
    subject to  -s 1 \le \dot{\theta}_2 + \dot{\theta}_N \le s 1,   J \dot{\theta}_N = 0    (18)

where $1 = (1, 1, \ldots, 1)^T \in R^n$. Equations (17) and (18) can be summarized in a matrix form

    minimize    s                                                            (19)
    subject to  [ 1   I ] ( s            )    ( -\dot{\theta}_2 )
                [ 1  -I ] ( \dot{\theta}_N ) \ge (  \dot{\theta}_2 )         (20)
                [ 0   J ] ( s; \dot{\theta}_N ) = 0                          (21)

where $I$ is the $n \times n$ identity matrix. Let $y = (s, \dot{\theta}_N^T)^T \in R^{n+1}$, where $s \in R$ and $\dot{\theta}_N \in R^n$; then a final form of the problem can be derived as

    minimize    c^T y                                                        (22)
    subject to  A y \ge b                                                    (23)
                E y = 0                                                      (24)
                s \ge 0                                                      (25)

where $c = (1, 0, \ldots, 0)^T \in R^{n+1}$, $A = [1, I; 1, -I] \in R^{2n \times (n+1)}$, $b = (-\dot{\theta}_2^T, \dot{\theta}_2^T)^T \in R^{2n}$, and $E = [0, J] \in R^{m \times (n+1)}$.
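The program (22)-(25) can be prototyped off-line with an ordinary LP solver before turning to neural networks. The following sketch uses scipy.optimize.linprog and formulates the epigraph problem directly in terms of the joint velocity rather than the $(s, \dot{\theta}_N)$ partition; both formulations have the same minimizer. The function name is illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def min_inf_norm_velocity(J, xdot):
        """Minimum infinity-norm joint velocity: min ||qdot||_inf s.t. J qdot = xdot.

        Epigraph LP as in (17)-(18): minimize s subject to
        -s <= qdot_i <= s for all i, and J qdot = xdot.
        """
        m, n = J.shape
        c = np.zeros(n + 1)
        c[-1] = 1.0                                 # minimize the bound s
        # inequality rows:  qdot_i - s <= 0  and  -qdot_i - s <= 0
        A_ub = np.block([[np.eye(n), -np.ones((n, 1))],
                         [-np.eye(n), -np.ones((n, 1))]])
        b_ub = np.zeros(2 * n)
        A_eq = np.hstack([J, np.zeros((m, 1))])     # J qdot = xdot
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=xdot,
                      bounds=[(None, None)] * n + [(0, None)], method="highs")
        return res.x[:n], res.x[-1]                 # joint velocity and its inf-norm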

The dual problem of the above linear program is defined as follows:

    maximize    b^T v                                                        (26)
    subject to  (A^T v + E^T w)_1 \le 1                                      (27)
                (A^T v + E^T w)_i = 0,   i = 2, \ldots, n+1                  (28)
                v \ge 0                                                      (29)

where $v \in R^{2n}$ is the dual decision variable associated with (23) and $w \in R^m$ is the (unrestricted) dual variable associated with (24). Let $y^* = (s^*, \dot{\theta}_N^{*T})^T$ be the solution to (22)-(25); then

    \dot{\theta}_N^* = [0_{n \times 1}  I_n] y^*                             (30)

i.e., $\dot{\theta}_N^*$ consists of the last $n$ components of $y^*$. Equations (22)-(25) show that the optimal null-space velocity $\dot{\theta}_N^*$ can be calculated by using linear programming. When $\dot{\theta}_N^*$ is found, the joint velocity can be determined by (2). Substituting (30) into (2), we obtain

    \dot{\theta}^* = \dot{\theta}_2 + \dot{\theta}_N^*                       (31)

where $\dot{\theta}_2$ is the least 2-norm joint velocity vector. The above problem formulation enables the maximum joint velocity to be minimized over the entire trajectory of the manipulator by choosing the null-space velocity vector. Minimizing the infinity-norm of the joint velocity vector offers a convenient way of directly monitoring the magnitude of each individual joint velocity. Suppose the physical specifications of a manipulator require that the actuator velocity for the $i$th joint be less than $\gamma_i$ for $i = 1, 2, \ldots, n$. Then, in the above algorithm, we can define the problem of (14) as

    minimize    \max_{1 \le i \le n} |e_i^T \dot{\theta}| / \gamma_i
    subject to  J \dot{\theta} = \dot{x}.                                    (32)

All the joint velocities will be within their specified limits if the solution obtained from the above formulation has an objective value no greater than one.

IV. RECURRENT NEURAL NETWORK MODELS

The linear program in the previous section provides a complete mathematical formulation. Nevertheless, minimizing the infinity norm in real time is still a computational bottleneck. For real-time robot control, the existing sequential algorithms are often not competent, and parallel methods are desirable. In this section, three recurrent neural networks are applied for the calculation of $\dot{\theta}_2$ and $\dot{\theta}_N^*$. First, two Tank–Hopfield networks for linear programming [15] are applied for determining both $\dot{\theta}_2$ and $\dot{\theta}_N^*$. In view of the fact that the Tank–Hopfield network may not provide an accurate or optimal solution, two other recurrent neural networks are then applied to determine $\dot{\theta}_2$ and $\dot{\theta}_N^*$, respectively, which can guarantee accuracy and optimality. Fig. 1 illustrates the proposed neural network approaches to minimum infinity-norm kinematic control of redundant manipulators. As shown in Fig. 1, two neural networks are used in each approach: the first one (NN1) to compute $\dot{\theta}_2$ and the second one (NN2) for $\dot{\theta}_N^*$.

Fig. 1. Block diagram of the proposed neural network approaches.

A. Minimal Infinity-Norm Solution Using the Tank–Hopfield Network

In their seminal work, Tank and Hopfield [15] developed a recurrent neural network for linear programming. The Tank–Hopfield network for linear programming is composed of two layers of interconnected neurons. The dynamic equation of the Tank–Hopfield network can be described as follows:

    C du/dt = -u/R - c - W^T f(W v - b)                                      (33)
    v = g(u)                                                                 (34)

where
    u       net input vector to the neurons;
    v       activation state vector of the neurons;
    C       positive diagonal matrix;
    R       scalar resistive parameter;
    W       connection weight matrix;
    b, c    external input vectors;
    f, g    monotone increasing vector-valued functions.
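A minimal discrete-time sketch of dynamics in the form of (33)-(34) is given below, assuming unit capacitances, a linear output function g(u) = u, and a one-sided penalty activation f; the function names, the gain beta, and the step size are illustrative choices, not from the paper.

    import numpy as np

    def f_penalty(z, beta=100.0):
        """One-sided penalty activation: nonzero only when a constraint
        A v >= b is violated (i.e., where z = A v - b < 0)."""
        return beta * np.minimum(z, 0.0)

    def th_lp_simulate(A, b, c, R=1e3, dt=1e-4, steps=20000):
        """Euler simulation of Tank-Hopfield-style LP dynamics:
        approximately minimizes c^T v subject to A v >= b."""
        u = np.zeros(A.shape[1])
        for _ in range(steps):
            v = u                                            # (34) with g(u) = u
            dudt = -u / R - c - A.T @ f_penalty(A @ v - b)   # (33)
            u = u + dt * dudt
        return u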


Let an energy function be defined as follows:

    E(v) = c^T v + \sum_j F(W_j v - b_j) + (1/R) \sum_i \int_0^{v_i} g^{-1}(\xi) d\xi    (35)

where $F$ is a convex penalty function with $F' = f$ that is zero when the corresponding constraint is satisfied and positive otherwise. Note that, along the trajectories of (33) and (34),

    dE/dt = \nabla E(v)^T dv/dt = -\sum_i C_i (du_i/dt)^2 g'(u_i) \le 0

since $g$ is monotone increasing, and $dE/dt = 0$ if and only if $du/dt = 0$. Therefore, the Tank–Hopfield network is asymptotically stable. It is shown in [15] that such a network can provide good approximate solutions to linear programming problems. Fig. 2 depicts an analog circuit realization of the Tank–Hopfield network.

Fig. 2. The circuit schematic of the Tank–Hopfield network for linear programming.

In the following, the Tank–Hopfield network for linear programming [15] is applied to determine the pseudo-inverse solution $\dot{\theta}_2$ by solving (1) and to determine the null-space velocity $\dot{\theta}_N^*$ by solving (22)-(25). For determining the pseudo-inverse solution $\dot{\theta}_2$, let $W = J$, $b = \dot{x}$, $c = 0$, and let $f$ and $g$ be linear mappings: $f(z) = sz$ and $g(u) = u$. In this context

    C du/dt = -u/R - J^T f(w)                                                (36)
    w = J u - \dot{x}                                                        (37)

where $w$ is the state vector corresponding to the estimated residual of (1). Substituting (37) into (36), we have

    C du/dt = -u/R - s J^T (J u - \dot{x}).                                  (38)

Since $J^T J$ is at least positive semidefinite, $J^T J + (1/(sR)) I$ is positive definite, and therefore the Tank–Hopfield network is asymptotically stable. At the limiting state, $du/dt = 0$ and $u = \hat{\dot{\theta}}_2$; then

    \hat{\dot{\theta}}_2 / R + s J^T (J \hat{\dot{\theta}}_2 - \dot{x}) = 0   (39)

which yields

    \hat{\dot{\theta}}_2 = (J^T J + \epsilon I)^{-1} J^T \dot{x},   \epsilon = 1/(sR).    (40)

Equation (40) can be viewed as the solution of (1), and it can be made arbitrarily close to the exact solution by choosing a sufficiently large $s$. The error between $\hat{\dot{\theta}}_2$ and $\dot{\theta}_2$ is then given by

    \dot{\theta}_2 - \hat{\dot{\theta}}_2 = \epsilon (J^T J + \epsilon I)^{-1} \dot{\theta}_2.    (41)

As $s \to \infty$, the solution from the Tank–Hopfield network approaches the exact solution. It is worth pointing out that (40) is essentially the same as the second part of (5) when $\epsilon = 1/(sR) \to 0^+$. The neural network has a stable state which provides the best approximate solution of (1) for a given finite $s$ in the sense of the Euclidean norm. The convergence time is generally on the order of the time constant $RC$.

For determining the optimal null-space velocity vector $\dot{\theta}_N^*$, the following Tank–Hopfield network for the linear program (17)-(18) is used:

    C_s du_s/dt = -u_s/R - 1 + 1^T (p + q)                                   (42)
    C_N du_N/dt = -u_N/R + p - q - J^T r                                     (43)
    p = f(-(s 1 + \dot{\theta}_N + \dot{\theta}_2))                          (44)
    q = f(-(s 1 - \dot{\theta}_N - \dot{\theta}_2))                          (45)
    r = f_e(J \dot{\theta}_N)                                                (46)
    s = g(u_s),   \dot{\theta}_N = g(u_N)                                    (47)

where $u_s$ and $u_N$ are the state variable and state vector representing, respectively, the estimated $s$ and $\dot{\theta}_N$; $p$ and $q$ are the outputs of the constraint neurons for the two inequality blocks in (18); $r$ is the output of the constraint neurons for the equality constraint, with $f_e$ a linear mapping; and $C_s$, $C_N$, and $R$ are constant matrices and scalars of capacitive and resistive parameters.
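Returning to (40) and (41), the steady state and its error can be checked numerically. The sketch below, with illustrative matrix values, compares the regularized solution against numpy.linalg.pinv; it assumes R = 1, so that epsilon = 1/s.

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.standard_normal((2, 4))              # illustrative 2x4 redundant Jacobian
    xdot = np.array([0.5, 0.5])
    exact = np.linalg.pinv(J) @ xdot             # exact minimum 2-norm solution
    for s in (1e1, 1e3, 1e5):
        eps = 1.0 / s                            # eps = 1/(sR) with R = 1, cf. (40)
        approx = np.linalg.solve(J.T @ J + eps * np.eye(4), J.T @ xdot)
        # error shrinks as s grows, in accordance with (41)
        print(s, np.linalg.norm(exact - approx))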

B. Minimal Infinity-Norm Solution Using Two Recurrent Neural Networks

It is known that the Tank–Hopfield network can provide only approximate or near-optimal solutions with finite design parameters. In addition, the Tank–Hopfield network cannot deal with singular configurations where $J$ is not of full row rank. In view of these limitations, two other recurrent neural networks are applied for solving the infinity-norm minimization problem. In the recent literature, many results have been published on using neural networks to solve a wide variety of matrix algebra and optimization problems. Specifically, several recurrent networks have been developed for the inversion of full-rank matrices [17], [18] and for the pseudo-inverse of rank-deficient matrices [16]. In addition, many recurrent neural networks have been developed to solve linear programming and convex programming problems, e.g., [19]-[21].


Fig. 3. Architecture of the recurrent neural network for pseudo-inverse solutions.

In the following, we will show that the pseudo-inverse solution with a rank-deficient Jacobian can be determined by using the two-layer recurrent neural network presented in [13], as shown in Fig. 3:

    \epsilon_1 dw/dt = -w + J v - \dot{x}                                    (48)
    \epsilon_2 dv/dt = -J^T w                                                (49)

where $w$ is the state vector of the hidden layer (the estimated residual of (1)), $v$ is the state vector of the output layer (the estimated $\dot{\theta}_2$), and $\epsilon_1, \epsilon_2 > 0$ are time constants. It is shown in [13] and [22] that the above two-layer recurrent neural network is asymptotically stable and capable of generating the pseudo-inverse solution as long as $\mathrm{rank}(J) = m$ at any moment.

Besides the Tank–Hopfield network, many recurrent neural networks were presented in the literature for solving linear programming problems, e.g., [19]-[21]. In particular, Xia [21] presented a two-layer recurrent neural network for linear programming by means of minimizing the duality gap between the original linear program and its dual. This neural network is proven to be globally stable and capable of providing optimal solutions to linear programs [21]. In this study, the recurrent neural network developed by Xia is used to solve the linear program formulated in (22)-(25). In the context of minimum infinity-norm robot control, the dynamic equations of the recurrent neural network for null-space solutions are presented as follows:

    ds/dt = -\mu_p \, \partial E / \partial s                                (50)
    d\dot{\theta}_N/dt = -\mu_p \, \partial E / \partial \dot{\theta}_N      (51)
    dv/dt = -\mu_d \, \partial E / \partial v                                (52)
    dw/dt = -\mu_d \, \partial E / \partial w                                (53)

where $s$, $\dot{\theta}_N$, $v$, and $w$ are the state variables corresponding, respectively, to the primal variables of (22)-(25) and the dual variables of (26)-(29); $E$ is the energy function defined in (60) below; and $\mu_p$ and $\mu_d$ are positive diagonal matrices. Fig. 4 illustrates the architecture of this recurrent neural network.

Fig. 4. Architecture of the recurrent neural network for null-space solutions.

Using the same notation as in the above Tank–Hopfield network, the dynamic equations can also be expressed in the following detailed form:

    ds/dt = -\mu_p [ (c^T y - b^T v) + s^- + 1^T (A y - b)^- ]               (54)
    d\dot{\theta}_N/dt = -\mu_p [ A_N^T (A y - b)^- + J^T J \dot{\theta}_N ] (55)
    dv/dt = -\mu_d [ -(c^T y - b^T v) b + v^- + A \tilde{\delta} ]           (56)
    dw/dt = -\mu_d J \tilde{\delta}_N                                        (57)
    \delta = A^T v + E^T w - c                                               (58)
    \tilde{\delta} = (\max\{\delta_1, 0\}, \delta_2, \ldots, \delta_{n+1})^T (59)

where $z^- = \min\{z, 0\}$ componentwise, $A_N = [I; -I]$ denotes the columns of $A$ associated with $\dot{\theta}_N$, and $\tilde{\delta}_N = (\delta_2, \ldots, \delta_{n+1})^T$. Let an energy function for the above neural network be defined as

    E(y, v, w) = (1/2)(c^T y - b^T v)^2 + (1/2)(s^-)^2 + (1/2)\|v^-\|^2
               + (1/2)\|J \dot{\theta}_N\|^2 + (1/2)\|\tilde{\delta}_N\|^2
               + (1/2)\|(A y - b)^-\|^2 + (1/2)(\max\{\delta_1, 0\})^2.      (60)


In the above energy function, the first term is the duality gap between the primal problem defined in (22)-(25) and its dual in (26)-(29); the second and third terms are for the nonnegativity constraints and are zero if $s \ge 0$ and $v \ge 0$; the fourth and fifth terms are for the equality constraints; and the remaining two terms are for the inequality constraints of the primal and dual problems. It can be shown that the dynamic equations defined in (54)-(59) constitute the negative gradient system of $E$. This recurrent neural network is thus asymptotically stable. In addition, since $E$ is convex and nonnegative, the equilibrium state is a global minimum of $E$ and yields an optimal solution of the original minimization problem in (22)-(25).
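The negative-gradient idea behind (50)-(60) can be illustrated on a simplified linear program in pure inequality form (min c^T y s.t. A y >= b, y >= 0, with dual max b^T v s.t. A^T v <= c, v >= 0). The following Euler-descent sketch minimizes the duality gap plus feasibility violations; it is a simplified stand-in for illustration, not Xia's exact network [21].

    import numpy as np

    def duality_gap_descent(c, A, b, steps=20000, eta=1e-3):
        """Euler descent on an energy of the form (60):
        gap^2 plus squared nonnegativity and constraint violations."""
        y, v = np.zeros(len(c)), np.zeros(len(b))
        for _ in range(steps):
            gap = c @ y - b @ v
            gy = gap * c + np.minimum(y, 0) + A.T @ np.minimum(A @ y - b, 0)
            gv = -gap * b + np.minimum(v, 0) + A @ np.maximum(A.T @ v - c, 0)
            y, v = y - eta * gy, v - eta * gv
        return y, v

At an optimal primal-dual pair, the gap and all violation terms vanish, so the state is an equilibrium of the descent dynamics, mirroring the global-optimality property quoted above.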

V. SIMULATION RESULTS

A. Network Simulations

One example is given to illustrate how to use the network NN1 to calculate $\dot{\theta}_2$. On the basis of (48) and (49), the joint velocity $\dot{\theta}_2$ can be obtained by the two-layer recurrent neural network. Here $\dot{\theta}_2^*$ denotes the theoretical solution obtained by using MATLAB, and $\dot{\theta}_2$ is the steady-state output provided by the recurrent neural network. Fig. 5 gives the transient behavior of the network.

Fig. 5. Transient behavior of the two-layer neural network NN1.

Another example is given to illustrate how the second two-layer recurrent neural network (NN2) is used to determine the optimal null-space velocity vector $\dot{\theta}_N^*$. Simulations are performed according to (54)-(59). Fig. 6 shows the network output, from which the optimal null-space velocity vector $\dot{\theta}_N^*$ can be obtained.

Fig. 6. Transient behavior of the two-layer neural network NN2.

The resulting joint velocity $\dot{\theta}^* = \dot{\theta}_2 + \dot{\theta}_N^*$ satisfies the following conditions:
1) $J \dot{\theta}^* = \dot{x}$;
2) $\dot{\theta}^*$ achieves the minimum infinity-norm, i.e., $\|\dot{\theta}^*\|_\infty \le \|\dot{\theta}\|_\infty$ for any $\dot{\theta}$ satisfying $J \dot{\theta} = \dot{x}$.
So, the output given by the network is correct. This simulation result demonstrates the effectiveness of the proposed network (NN2) in calculating $\dot{\theta}_N^*$.

B. Redundant Manipulators

A planar robot used for the simulation has four links with unit link lengths. The robot starts operating from the configuration $\theta = [90°, 45°, 90°, 45°]^T$, and the desired end-effector velocity is $\dot{x} = [0.5, 0.5]^T$. Suppose that the magnitude of each joint velocity is required to be less than 0.35 rad/s, i.e., $\gamma_i = 0.35$ rad/s, $i = 1, \ldots, 4$. The recurrent neural networks are adopted to obtain the minimal infinity-norm joint velocity.
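Assuming the standard kinematics of a planar revolute arm, the setup of this example can be reproduced as follows. The helper names are illustrative, and the last lines reuse the min_inf_norm_velocity routine sketched in Section III.

    import numpy as np

    def planar_jacobian(theta, lengths):
        """2 x n end-effector Jacobian of a planar revolute arm."""
        n = len(theta)
        cum = np.cumsum(theta)                        # absolute link angles
        J = np.zeros((2, n))
        for i in range(n):
            # joint i moves all links i..n-1
            J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
            J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
        return J

    theta0 = np.deg2rad([90.0, 45.0, 90.0, 45.0])     # configuration from the paper
    J = planar_jacobian(theta0, np.ones(4))           # unit link lengths
    xdot = np.array([0.5, 0.5])
    qdot2 = np.linalg.pinv(J) @ xdot                  # minimum 2-norm solution
    qdot_inf, s = min_inf_norm_velocity(J, xdot)      # minimum inf-norm solution
    print(np.abs(qdot2).max(), np.abs(qdot_inf).max())

Comparing the two printed maxima shows directly how the infinity-norm solution trades a larger sum of squares for a smaller worst-case joint speed.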


Fig. 7. Minimum 2-norm solution.

Fig. 8. Minimum infinity-norm solution.

The software simulator written to simulate the behavior of the recurrent neural networks uses the fourth-order Runge–Kutta integration technique to solve the differential equations (38) and (42)-(47), or (48), (49) and (54)-(59). When the networks reach the stable state, the output of the recurrent neural networks is the minimum infinity-norm joint velocity $\dot{\theta}^*$. Figs. 7 and 8 show the joint velocities corresponding to the pseudo-inverse solution and the minimum infinity-norm solution, respectively. In Fig. 7, the maximum joint velocity exceeds the specified limit of 0.35 rad/s. The maximum joint velocity in Fig. 8 is 22% less than the one in Fig. 7, which means that the maximum magnitude of the individual components of the joint velocity is significantly decreased. Therefore, the minimum infinity-norm solution has lower individual joint velocities. The comparison between the infinity norm of the pseudo-inverse solution and the minimum infinity-norm solution is plotted in Fig. 9. It can be seen that the proposed method achieves the minimum infinity-norm of the joint velocities during the entire motion.
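A classical fourth-order Runge–Kutta stepper of the kind the simulator describes can be sketched as follows. The network equation integrated here is the regularized pseudo-inverse dynamics (38); the Jacobian values and parameters are illustrative, not the paper's.

    import numpy as np

    def rk4_step(f, y, t, dt):
        """One classical fourth-order Runge-Kutta step."""
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate (38), du/dt = (-u/R - s J^T (J u - xdot)) / C, to steady state (40).
    J = np.array([[1.0, 0.5, 0.2, 0.1], [0.0, 1.0, 0.5, 0.2]])
    xdot = np.array([0.5, 0.5])
    C, R, s = 1.0, 1e3, 1e3
    f = lambda t, u: (-u / R - s * (J.T @ (J @ u - xdot))) / C
    u = np.zeros(4)
    for _ in range(2000):
        u = rk4_step(f, u, 0.0, 1e-4)
    print(u, np.linalg.pinv(J) @ xdot)   # steady state is near the minimum 2-norm solution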


Fig. 9. Joint velocity norms.

VI. CONCLUSIONS

The minimum infinity-norm inverse kinematic solution is an attractive alternative to the pseudo-inverse solution when individual joint velocity magnitudes are of concern. In this paper, an algorithm for efficiently producing minimum infinity-norm joint-velocity trajectories of redundant manipulators is presented. The proposed algorithm is realized by two cooperating recurrent neural networks, and the joint velocity obtained by the proposed method minimizes the maximum joint velocity. Conventionally, the 2-norm of the joint velocity vector is employed in the calculation of the inverse kinematics. However, an individual joint velocity can in general have a very high magnitude even though the sum of squares of the joint velocities is minimized. Compared with previous schemes, the proposed method enables more direct monitoring and control of the magnitudes of the individual joint velocities than does the minimization of the sum of squares of the components. Thus, it can provide an indication of whether any individual joint velocity has exceeded its feasible limit. Simulation results demonstrate that the proposed scheme is effective for real-time generation of minimum infinity-norm joint-velocity trajectories for redundant manipulators.

Other important issues that need to be discussed are the uniqueness and continuity of the solutions obtained by minimizing the infinity-norm and their behavior in the neighborhood of singularities [5]. Our discussion in this paper is restricted to situations in which the robot is not located in the neighborhood of singularities, where no "chattering" or discontinuities can occur. If the manipulator is located in the neighborhood of singularities, the uniqueness and continuity of the solutions obtained by minimizing the infinity-norm remain open questions, on which we are currently working. The behavior and characterization of the minimum infinity-norm solution at and near singular configurations is the subject of future research.

ACKNOWLEDGMENT

The authors would like to thank Y. Xia for his help in interpreting the primal-dual network.

REFERENCES

[1] R. Dubey and J. Y. S. Luh, "Redundant robot control using task based performance measures," J. Robot. Syst., vol. 5, no. 5, pp. 409–432, 1988.
[2] H. Ding and S. P. Chan, "A real-time planning algorithm for obstacle avoidance of redundant robots," J. Intell. Robot. Syst., vol. 16, pp. 229–243, 1996.


[3] D. Li, A. A. Goldenberg, and J. W. Zu, "A new method of peak torque reduction with redundant manipulators," IEEE Trans. Robot. Automat., vol. 13, no. 6, pp. 845–853, 1997.
[4] A. S. Deo and I. D. Walker, "Minimum infinity-norm inverse kinematic solution for redundant manipulators," in Proc. IEEE Int. Conf. Robotics and Automation, 1993, pp. 388–394.
[5] A. S. Deo and I. D. Walker, "Minimum effort inverse kinematics for redundant manipulators," IEEE Trans. Robot. Automat., vol. 13, no. 5, pp. 767–775, 1997.
[6] S. Y. Kung and J. N. Hwang, "Neural network architectures for robotic applications," IEEE Trans. Robot. Automat., vol. 5, pp. 641–657, 1989.
[7] J. F. Gardner et al., "Applications of neural networks for coordinate transformations in robotics," J. Intell. Robot. Syst., vol. 8, pp. 361–373, 1993.
[8] J. Guo and V. Cherkassky, "A solution to the inverse kinematic problem in robotics using neural network processing," in Proc. IEEE Int. Joint Conf. Neural Networks, Washington, DC, 1989, vol. 2, pp. 299–304.
[9] D. Simon, "The application of neural networks to optimal robot trajectory planning," Robot. Auton. Syst., vol. 11, pp. 23–34, 1993.
[10] S. Lee and R. M. Kil, "Redundant arm kinematic control with recurrent loop," Neural Networks, vol. 7, pp. 643–659, 1994.
[11] E. Dermatas, "Error-back-propagation solution to the inverse kinematic problem of redundant manipulators," Robot. Comput.-Integr. Manufact., vol. 12, pp. 303–310, 1996.
[12] Z. Mao and T. C. Hsia, "Obstacle avoidance inverse kinematics solution of redundant robots by neural networks," Robotica, vol. 15, pp. 3–10, 1997.
[13] J. Wang, Q. Hu, and D. Jiang, "A two-layer recurrent neural network for pseudoinverse control of kinematically redundant manipulators," in Proc. IEEE Conf. Decision and Control, San Diego, CA, 1997, pp. 2507–2512.
[14] H. Ding and S. K. Tso, "Redundancy resolution of robotic manipulators with neural computation," IEEE Trans. Ind. Electron., vol. 46, pp. 199–202, Feb. 1999.
[15] D. W. Tank and J. J. Hopfield, "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. 33, pp. 533–541, 1986.
[16] J. Wang, "Recurrent neural networks for computing pseudoinverses of rank-deficient matrices," SIAM J. Sci. Comput., vol. 18, no. 5, pp. 1479–1493, 1997.
[17] F. L. Luo and B. Zheng, "Neural network approach to computing matrix inversion," Appl. Math. Comput., vol. 47, pp. 109–120, 1992.
[18] J. Wang, "A recurrent neural network for real-time matrix inversion," Appl. Math. Comput., vol. 26, pp. 23–34, 1993.
[19] J. Wang, "Analysis and design of a recurrent neural network for linear programming," IEEE Trans. Circuits Syst. I, vol. 40, pp. 613–618, Sept. 1993.
[20] J. Wang, "A deterministic annealing neural network for convex programming," Neural Networks, vol. 7, no. 4, pp. 629–641, 1994.
[21] Y. Xia, "A new neural network for solving linear programming problems and its application," IEEE Trans. Neural Networks, vol. 7, pp. 525–529, Mar. 1996.
[22] J. Wang and H. Li, "Solving simultaneous linear equations using recurrent neural networks," Inf. Sci., vol. 76, pp. 255–277, 1994.

Han Ding (M’97) received the Ph.D. degree from Huazhong University of Science and Technology (HUST), Wuhan, China, in 1989. He was with the Institute B of Mechanics, University of Stuttgart, Germany, from 1993 to 1994. From 1994 to 1996, he was with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. He has published more than 40 papers in international journals and conference proceedings. He is a Professor at HUST and is currently on the academic staff of the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong. His research interests include redundant manipulators, collision-free motion planning, neural networks, dynamic optimization, off-line programming, and advanced manufacturing technology. Dr. Ding was the recipient of the National Distinguished Youth Scientific Fund of China Award (formerly the Premier Fund) in 1997.

Jun Wang (S’89–M’90–SM’93) received the B.S. degree in electrical engineering and the M.S. degree in systems engineering from the Dalian Institute of Technology (now Dalian University of Technology), Dalian, China. He received the Ph.D. degree in systems engineering from Case Western Reserve University, Cleveland, OH. He is an Associate Professor in the Department of Mechanical and Automation Engineering, Chinese University of Hong Kong (CUHK). Prior to joining CUHK, he was an Associate Professor at the University of North Dakota, Grand Forks. He has authored or coauthored numerous papers in more than 20 journals, several books, and many conference proceedings. Dr. Wang is a senior member of the Institute of Industrial Engineers (IIE) and a member of the International Neural Network Society (INNS). He is an Associate Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS.