Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems September 28 - October 2, 2004, Sendai, Japan

An Image-Based Control Scheme for an Active Stereo Vision System

Do Hyoung Kim, Do-Yoon Kim, Hyun Seok Hong and Myung Jin Chung
Dept. of Electrical Engineering and Computer Science
Korea Advanced Institute of Science and Technology
Daejeon, Republic of Korea
e-mail: {bborog, nice, wiser}@cheonji.kaist.ac.kr, [email protected]

Abstract—An active vision system can control the gaze direction of its mounted vision sensors and can overcome many shortcomings of passive vision systems, which have fixed vision sensors. Furthermore, most visual servoing approaches are concerned with a single vision sensor or a passive stereo vision system with a fixed relative configuration between the two cameras. In this paper, the problem of controlling motion by visual servoing with an active stereo vision system is addressed. The main goal is to track an object with our active vision system. Firstly, we discuss how to stack image Jacobians in an active stereo vision system and how to control the active stereo cameras based on images. Next, we introduce our active vision system, which has a fast and simple system architecture that uses a high-speed serial communication bus (IEEE 1394a). It is manipulated by a distributed, jerk-bounded control algorithm to ensure smooth system motion. Finally, simulation and experimental results show both the validity of the proposed definition of the stacked Jacobian for controlling the system and the feasibility of the proposed control method.

Keywords—active vision; visual servoing; jerk-bounded control; stereo camera

I. INTRODUCTION

In many intelligent systems, a vision sensor is necessary to provide rich information about the system’s surroundings. Fixed vision sensors cause many difficulties when intelligent systems analyze the obtained images, e.g. irregularity, nonlinearity, and instability; if the vision sensors could move, many of the complex problems that occur during image analysis with fixed sensors could be solved. Therefore, research on active vision systems has been increasing over the last decade [7, 8]. One of the main advantages of an active vision system is the ease of tracking moving objects. Passive vision systems have a limited field of view, so it is quite difficult for such systems to track moving objects using fixed vision sensors: the target to be tracked must remain within the fixed sensor’s field of view. Active vision systems, however, can move their mounted sensors, which is a clear advantage over passive vision systems. That is why active vision systems are used in many applications, e.g. human-computer interfaces, monitoring systems, and mobile robots. Another area of research on active vision systems is emotion expression for interacting with humans. To interact socially with humans, a robot must be able to do more than



simply gather information about its surroundings: it must be able to express its state or “emotion,” so that humans will believe that the robot has beliefs, desires, and intentions of its own [1]. Currently, there are many active vision systems that can express their status, such as Kismet and Leonardo at MIT, the WE-4 at Waseda University, Saya at the University of Tokyo, and Pearl at Carnegie Mellon University.

In this paper, stereo visual servoing with an active vision system is presented. Many works on stereo visual servoing already exist [12, 13]. Malis et al. [12] formalized a multi-camera visual servoing approach. They consider a system with n cameras which delivers a set of sensor signals $s = [s_1^T, s_2^T, \ldots, s_n^T]^T$. In our work, this scheme is applied to an active stereo vision system, but active stereo vision systems generally have unique mechanical characteristics compared with other stereo rigs. For example, the left camera and the right camera of our active vision system each have four degrees of freedom (DOF), two of which are common to both cameras and two of which are independent. Our visual servoing is focused on how to control the joints so that the two cameras simultaneously track an object when some joints are shared. In addition, we will introduce our robot, called “Ulkni,” which, like Kismet, exploits human social tendencies to convey intentionality through motor actions. The robot is built around a real-time, jerk-bounded control scheme and a simple system architecture using a high-speed serial communication bus, IEEE 1394a. A block diagram of our system is shown in Fig. 1. In addition, a simple bell-shaped velocity profile method is presented to limit the jerk of the actuators and to control 12 actuators simultaneously in real time.

Figure 1. Visual servo of an active stereo vision system

In Section 2, the image Jacobian and the control law are presented. In Section 3, we introduce the system architecture and the motor control scheme of Ulkni. In Section 4, we present simulations and experiments to show the validity of the proposed system. Finally, some conclusions and further works are presented.

II. IMAGE JACOBIAN AND CONTROL LAW

A. Image Jacobian and Robot Jacobian

In an active stereo vision system, we suppose that the joint angles, θ ∈ R^{n×1}, are divided into those for the right camera, θ_r ∈ R^{n1×1}, and those for the left camera, θ_l ∈ R^{n2×1}, where n1 ≤ n and n2 ≤ n. For instance, if the right camera is mounted on an end-effector moved by joints 1, 2, 3, and 4, and the left camera is mounted on another end-effector moved by joints 1, 2, 5, and 6, the duplicated joints are joints 1 and 2, and some joint angles in θ_r and θ_l are duplicated. Our active stereo visual servoing is focused on controlling each end-effector when there are such duplicated joints. In this case, once we define the masking matrices, denoted by M_r and M_l respectively, we can find the relationship between the masking matrices and the joint angles as (1):

$$\theta_r = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \\ \theta_5 \\ \theta_6 \end{pmatrix} \triangleq M_r \theta, \qquad \theta_l = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \\ \theta_4 \\ \theta_5 \\ \theta_6 \end{pmatrix} \triangleq M_l \theta$$  (1)

Figure 2. The whole system architecture. The image processing runs on a personal computer (PC) with a commercial microprocessor. In addition, the motion controller operates in a fixed-point digital signal processor (DSP). An interface board with a floating-point DSP decodes motor position commands and transfers camera images to the PC.

The robot Jacobian describes the velocity relation between the joint angles and the end-effectors. Ulkni has two end-effectors, each equipped with a camera. Therefore the robot Jacobian relations for the two end-effectors are described as $v_{re} = J_r \dot{\theta}_r$ and $v_{le} = J_l \dot{\theta}_l$, where $v_{re}$ and $v_{le}$ are the right and left end-effector velocities, respectively. The image Jacobian relations are $\dot{s}_r = L_r v_{rc}$ and $\dot{s}_l = L_l v_{lc}$ for the right and left cameras, respectively, where $\dot{s}_r$ and $\dot{s}_l$ are the velocity vectors of the image features, $L_r$ and $L_l$ are the image Jacobians, and $v_{rc}$ and $v_{lc}$ are the right and left camera velocities. The image Jacobian is composed by stacking (2) for each feature point in the right and the left images. In (2), Z is a rough depth value of the feature and (x, y) is the feature position in the normalized image plane. Equation (3) shows the relationship between the quantities x and y in units of length and u and v in pixel units, using the intrinsic parameters: the scale parameters α_u and α_v, the skew angle θ, and the translation parameters u_0 and v_0.

$$L_i = \begin{pmatrix} -1/Z & 0 & x/Z & xy & -(1+x^2) & y \\ 0 & -1/Z & y/Z & 1+y^2 & -xy & -x \end{pmatrix}, \quad \text{where } i = r_i, l_i$$  (2)

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} \alpha_u & -\alpha_u \cot\theta & u_0 \\ 0 & \alpha_v/\sin\theta & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$  (3)
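To make the construction concrete, the following is a minimal numpy sketch of (1)-(3) under the six-joint example given above (joints 1-4 for the right camera, joints 1, 2, 5, 6 for the left camera); the function names and the depth value Z are illustrative and not part of the original implementation.

```python
import numpy as np

# Masking matrices of (1) for the 6-joint example: theta_r = M_r @ theta, theta_l = M_l @ theta.
M_r = np.zeros((4, 6)); M_r[[0, 1, 2, 3], [0, 1, 2, 3]] = 1.0
M_l = np.zeros((4, 6)); M_l[[0, 1, 2, 3], [0, 1, 4, 5]] = 1.0

def point_interaction_matrix(x, y, Z):
    """2x6 image Jacobian L_i of (2) for one normalized feature (x, y) at rough depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def normalized_to_pixel(x, y, alpha_u, alpha_v, skew, u0, v0):
    """Pixel coordinates (u, v) from normalized coordinates via the intrinsic matrix of (3)."""
    K = np.array([
        [alpha_u, -alpha_u / np.tan(skew), u0],
        [0.0, alpha_v / np.sin(skew), v0],
        [0.0, 0.0, 1.0],
    ])
    u, v, _ = K @ np.array([x, y, 1.0])
    return u, v
```

Stacking point_interaction_matrix over all tracked features of one image yields the per-camera image Jacobian L_r or L_l used below.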

The velocity of a robot end-effector frame, denoted by $v_e$ (e = re or le), is related to the velocity of the corresponding camera frame, denoted by $v_c$ (c = rc or lc). When the homogeneous transformation matrix between the c-th camera frame and its end-effector frame is given as

$${}^{c}H_e = \begin{pmatrix} {}^{c}R_e & {}^{c}t_e \\ 0^T & 1 \end{pmatrix}$$  (4)

the relationship between the screws is described by

$$v_c = {}^{c}T_e\, v_e$$  (5)

where the transformation matrix ${}^{c}T_e$ is

$${}^{c}T_e = \begin{pmatrix} {}^{c}R_e & [{}^{c}t_e]_\times\, {}^{c}R_e \\ 0 & {}^{c}R_e \end{pmatrix}$$  (6)

According to the multi-cameras visual servoing formulated in [12], the relationship between the time derivative of the feature vector and the robot joint angles is obtained as

Figure 3. The outline of a modified visual task function. To move the current feature point near the line segment connecting the initial feature point to the goal feature point, a virtual goal feature is introduced.

$$\dot{s} = \begin{pmatrix} \dot{s}_r \\ \dot{s}_l \end{pmatrix} = \begin{pmatrix} L_r & 0 \\ 0 & L_l \end{pmatrix}\begin{pmatrix} v_{rc} \\ v_{lc} \end{pmatrix} \triangleq L v_c = \begin{pmatrix} L_r & 0 \\ 0 & L_l \end{pmatrix}\begin{pmatrix} {}^{rc}T_{re} & 0 \\ 0 & {}^{lc}T_{le} \end{pmatrix}\begin{pmatrix} v_{re} \\ v_{le} \end{pmatrix} \triangleq L T v_e = \begin{pmatrix} L_r & 0 \\ 0 & L_l \end{pmatrix}\begin{pmatrix} {}^{rc}T_{re} & 0 \\ 0 & {}^{lc}T_{le} \end{pmatrix}\begin{pmatrix} J_r & 0 \\ 0 & J_l \end{pmatrix}\begin{pmatrix} M_r \\ M_l \end{pmatrix}\dot{\theta} \triangleq L T J M \dot{\theta}$$  (7)

$$\dot{s} \triangleq L_{total}\,\dot{\theta}, \quad \text{where } L_{total} = LTJM$$  (8)
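The block structure of (7) and (8) lends itself to a direct implementation. The sketch below assembles L_total from per-camera blocks; it assumes the image Jacobians, screw transformations, robot Jacobians, and masking matrices are already available as numpy arrays, and the helper names are ours rather than the paper's.

```python
import numpy as np
from scipy.linalg import block_diag

def screw_transform(R_ce, t_ce):
    """6x6 twist transformation cT_e of (6), built from the rotation and translation of (4)."""
    tx = np.array([[0.0, -t_ce[2], t_ce[1]],
                   [t_ce[2], 0.0, -t_ce[0]],
                   [-t_ce[1], t_ce[0], 0.0]])      # [t]_x, the skew-symmetric matrix
    return np.block([[R_ce, tx @ R_ce],
                     [np.zeros((3, 3)), R_ce]])

def stacked_jacobian(L_r, L_l, T_r, T_l, J_r, J_l, M_r, M_l):
    """L_total = L T J M of (7)-(8); maps joint velocities to stacked feature velocities."""
    L = block_diag(L_r, L_l)       # (2k_r + 2k_l) x 12
    T = block_diag(T_r, T_l)       # 12 x 12
    J = block_diag(J_r, J_l)       # 12 x (n1 + n2)
    M = np.vstack([M_r, M_l])      # (n1 + n2) x n
    return L @ T @ J @ M           # (2k_r + 2k_l) x n
```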

B. Control Law

With k_1 and k_2 visual features from the right and left camera images, respectively, and n DOF of the active vision system, we control the system using the modified visual task function [4, 11, 12]:

$$e = \sum_i W_i e_i = W_1 e_1 + W_2 e_2 = W_1 C\left(s(r(t)) - s^*\right) + W_2 C\left(s(r(t)) - s'\right), \quad \text{where } C \triangleq \hat{L}^{+}$$  (9)

For the image features to remain within the field of view during robot tracking, it is best for the trajectory of the image features to stay near the line connecting the initial feature points to the goal feature points in the image plane. In (9), we build an i-th virtual goal feature from the sum of $s_i(r(t)) - s_i^*$ and $s_i(r(t)) - s_i'$ (see Fig. 3), and the point $s_i'$ is obtained from two orthogonal line segments as

$$s_i' = s_i + A_i\left(s_i^{init} - s_i^*\right), \quad \text{where } A_i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$  (10)

Figure 4. Ulkni’s mechanisms. The schematic below the photograph shows the configuration of the system with 12 degrees of freedom. The eyes and the neck can pan and tilt independently. The eyelids also have two degrees of freedom to roll and to blink. The lips can tilt independently.

Assuming W_1 = W_2 = I to simplify the equation, we obtain (11) by differentiating (9):

$$\dot{e} = \frac{\partial e}{\partial r} v_c + \frac{\partial e}{\partial t}, \quad \text{with } \frac{\partial e}{\partial r} = CL$$  (11)

To obtain ideally an exponential decrease of the error, $\dot{e} = -\lambda e$, we select the control velocity of the camera frames as

$$v_{c\_desired} = \left(\frac{\partial e}{\partial r}\right)^{-1}\left(-\lambda e - \frac{\partial e}{\partial t}\right) = (CL)^{-1}\left(-\lambda e - \frac{\partial e}{\partial t}\right)$$  (12)

According to the velocity relation between the cameras and the end-effectors of the active stereo vision system, we can get

$$v_{e\_desired} = T^{+} v_{c\_desired} = (CLT)^{+}\left(-\lambda e - \frac{\partial e}{\partial t}\right)$$  (13)

Finally, the joint velocity vector to be controlled is written as

$$\dot{\theta}_{desired} = (JM)^{+} v_{e\_desired} = (CLTJM)^{+}\left(-\lambda e - \frac{\partial e}{\partial t}\right) = (CL_{total})^{+}\left(-\lambda e - \frac{\partial e}{\partial t}\right)$$  (14)

Equations (12) and (13) show the control laws for the camera frames and the end-effector frames, respectively. Equation (14) gives the robot Jacobian relation between the end-effector velocities and the joint velocities; furthermore, (14) directly describes the relation between the error defined in (9) and the joint velocities.
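For illustration, the control laws (9)-(14) reduce to a few lines once L_total is available. The sketch below assumes W_1 = W_2 = I, a static target (the ∂e/∂t feed-forward term is dropped), and takes the combination matrix C as the pseudo-inverse of an estimated stacked Jacobian; the names and the gain value are illustrative assumptions.

```python
import numpy as np

def virtual_goal(s_i, s_init_i, s_star_i):
    """Virtual goal point s'_i of (10): the initial-to-goal segment rotated by 90 degrees."""
    A = np.array([[0.0, -1.0], [1.0, 0.0]])
    return s_i + A @ (s_init_i - s_star_i)

def joint_velocity_command(s, s_star, s_prime, L_total_hat, lam=0.5):
    """One iteration of (9) and (14), neglecting the d e / d t term."""
    C = np.linalg.pinv(L_total_hat)                      # C = L_hat^+ as in (9)
    e = C @ (s - s_star) + C @ (s - s_prime)             # task function with W1 = W2 = I
    return np.linalg.pinv(C @ L_total_hat) @ (-lam * e)  # desired joint velocities, (14)
```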

III. SYSTEM DESCRIPTION

A. System Architecture

We have designed our system architecture to meet the challenges of real-time visual-signal processing (about 30 Hz) and real-time position control of all actuators (1 kHz) with minimal latencies. Ulkni’s vision system is built around a 2.6 GHz commercial PC. Ulkni’s motivational and behavioral systems run on a TMS320LF2407A processor and the 12 internal position controllers of commercial RC servos. The cameras in the eyes are connected to the PC by the IEEE 1394a interface, and all position commands for the actuators are also sent over the IEEE 1394a serial bus (see Fig. 2). Ulkni has 12 DOF: two DOF for its neck and four DOF for its eyes to control its gaze direction, and six DOF for other expressive facial components, the eyelids and lips (see Fig. 4). According to [1], the positions of the system’s eyes and neck are important for expressive posture, as well as for gazing toward an object of its attention.

B. Motor Control Scheme

Our control scheme is based on a distributed control method owing to the RC servos. A commercial RC servo generally has an internal controller; therefore, the position of an RC servo is easily controlled by feeding it a signal whose pulse width indicates the desired position and letting the internal controller operate until the servo reaches that position. If the target position of an actuator is changed drastically, severe jerking of the actuator will frequently occur. Jerking causes electric noise in the power source of the system, worsens the characteristics of the system controller, and causes mechanical components to break down; therefore, our goal is to reduce jerk as much as possible. There have been some previous suggestions on how to solve the problems caused by jerk. Lloyd proposed a trajectory generation method which uses blend functions [9]. Bazaz [2] proposed a trajectory generator based on a low-order spline method. Macfarlane proposed a trajectory generation method using an s-curve acceleration function. Nevertheless, these previous methods spend a large amount of computation time on calculating an actuator’s trajectory, so none of them would enable real-time control of Ulkni, which has 12 actuators. The two objectives of our research are to reduce jerk and to determine the trajectories of twelve actuators in real time; therefore, we propose a position trajectory generator using a high-speed, bell-shaped velocity profile to reduce jerk and to control twelve actuators in real time. In this section, we describe the method used to achieve these objectives.

1) A Simple Bell-Shaped Velocity Profile: The bigger the magnitude of the jerk, the bigger the variation of the acceleration [6, 10, 14], so we can say that reducing jerk amounts to making the acceleration function differentiable over the whole time interval.


Figure 5. The proposed bell-shaped velocity profile using a sinusoidal function. First, we make a bell-shaped velocity profile using a cosine function within the given period of time, 0 ≤ t ≤ 1. Then, the position function is obtained by integrating the velocity function, the acceleration function by differentiating the velocity function, and the jerk function by differentiating the acceleration function.

Presently, nearly all position control methods use a trapezoidal velocity profile to generate position trajectories. Such methods are based on the assumption of uniform acceleration, which makes the magnitude of the jerk quite large: if the acceleration function is not differentiable at some instant, the magnitude of the jerk becomes very large. Therefore, we should generate a velocity profile with a differentiable acceleration for each actuator.

Currently, researchers working with intelligent systems are trying to construct an adequate analysis of human motion. Human beings can reach for and grab objects naturally and smoothly; specifically, humans can track and grab an object smoothly even if the object is moving fast. Some researchers working on the analysis of human motion have begun to model certain kinds of human motions [3, 5, 6]. In the course of such research, it has been discovered that the tips of a human’s fingers move with a bell-shaped velocity profile when a human is trying to grab a moving object [5]. A bell-shaped velocity profile is differentiable, so the magnitude of the jerk is not large and the position of an actuator changes smoothly.

The basic idea of our jerk-reducing control method is therefore to use a bell-shaped velocity profile. The proposed algorithm must also meet the computation time necessary to control twelve actuators in real time; therefore, we adopt a sinusoidal function, which is infinitely differentiable, and build the bell-shaped velocity profile from a cosine function. We denote the time by t, the position of an actuator at time t by p(t), its velocity by v(t), its acceleration by a(t), and the goal position by p_T. Equation (15) shows the proposed bell-shaped velocity, position, acceleration, and jerk of an actuator in terms of two constraints, the signed maximum velocity V_m and the desired time interval T_des (see Fig. 5). As seen in (15), the acceleration function a(t) is differentiable, as is the velocity function; therefore, we obtain a bounded jerk. To implement this method, the position function is used in Ulkni.

$$\begin{aligned} v(t) &= V_m\left\{1 - \cos\left(2\pi \frac{t}{T_{des}}\right)\right\}, \\ p(t) &= \int_0^t v(\tau)\,d\tau = V_m\left\{t - \frac{T_{des}}{2\pi}\sin\left(2\pi \frac{t}{T_{des}}\right)\right\}, \\ a(t) &= \frac{dv(t)}{dt} = 2\pi\frac{V_m}{T_{des}}\sin\left(2\pi \frac{t}{T_{des}}\right), \\ j(t) &= \frac{da(t)}{dt} = 4\pi^2\frac{V_m}{T_{des}^2}\cos\left(2\pi \frac{t}{T_{des}}\right), \\ &\text{where } 0 \le t \le T_{des} \end{aligned}$$  (15)

Figure 6. A simulation result of image-based visual servoing. The top two images show the trajectories of the four features, respectively, and the bottom two images show the image pixel errors in the x and y axes, respectively.

We can determine other constraints as follows:

$$P_{des} = V_m T_{des}, \quad V_{max} = 2V_m, \quad A_{max} = 2\pi\frac{V_{max}}{T_{des}}, \quad A_{min} = -2\pi\frac{V_{max}}{T_{des}}, \quad J_{max} = 4\pi^2\frac{V_{max}}{T_{des}^2}, \quad J_{min} = -4\pi^2\frac{V_{max}}{T_{des}^2}$$  (16)
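As a concrete illustration, the profile of (15)-(16) can be evaluated in closed form at every control tick, which is what makes it cheap enough for twelve actuators at 1 kHz; the sketch below mirrors the notation of the equations (the variable and function names are ours).

```python
import numpy as np

def bell_profile(Vm, Tdes, t):
    """Position, velocity, acceleration and jerk of (15); clamped so the final position holds after Tdes."""
    t = np.clip(t, 0.0, Tdes)
    w = 2.0 * np.pi * t / Tdes
    v = Vm * (1.0 - np.cos(w))
    p = Vm * (t - (Tdes / (2.0 * np.pi)) * np.sin(w))
    a = 2.0 * np.pi * (Vm / Tdes) * np.sin(w)
    j = 4.0 * np.pi**2 * (Vm / Tdes**2) * np.cos(w)
    return p, v, a, j

def profile_for_displacement(Pdes, Tdes):
    """Choose Vm from the goal displacement using Pdes = Vm * Tdes of (16)."""
    return lambda t: bell_profile(Pdes / Tdes, Tdes, t)
```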

IV. SIMULATIONS AND EXPERIMENTS

A. Simulations for Image-Based Visual Servoing

It must be noted that our task is under-constrained. The object is modeled as a square, and the feature vector is composed of four points. Due to Ulkni’s kinematics, our visual servoing task considers only translation, not rotation. Fig. 6 presents the trajectories of the 2D points (left and right) in the image plane and the 2D errors (x-axis and y-axis).

B. Simulations and Experiments for Motor Control

First, we evaluated the proposed jerk-bounded control method on one joint actuator. The initial position of the actuator is zero, p(0) = 0. Then we commanded three goal positions, 40, 10, and -30, at 0, 1, and 1.5 seconds, respectively; that is, p_T(0) = 40, p_T(1) = 10, p_T(1.5) = -30. For this situation, Fig. 7 shows the simulation results, computed in Matlab, and Fig. 8 shows the experimental results, with a comparison between the linear-segment parabolic-blend method and the proposed simple bell-shaped velocity profile. Fig. 8 illustrates that there is less noise than with the previous method and that the position trajectory of the motor is smoother. In addition, no infinite magnitude of jerk appears in Fig. 7; that is, the jerk is bounded within a limited range.
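The paper does not spell out how a profile is updated when a new goal arrives before the previous one is reached; one plausible reading of the Fig. 7 caption is that each new command spawns a fresh bell-shaped profile for the remaining displacement and the velocities of all active profiles are summed. A sketch of the Fig. 7 scenario under that assumption:

```python
import numpy as np

# Hypothetical reconstruction of the Fig. 7 scenario: pT(0)=40, pT(1)=10, pT(1.5)=-30,
# assuming each command spawns a profile for (new goal - previous goal) and velocities superpose.
Tdes = 1.0
commands = [(0.0, 40.0), (1.0, 10.0), (1.5, -30.0)]

profiles, prev_goal = [], 0.0
for t0, goal in commands:
    profiles.append((t0, (goal - prev_goal) / Tdes))   # Vm chosen from Pdes = Vm * Tdes
    prev_goal = goal

def merged_velocity(t):
    """Sum of all bell-shaped velocity profiles active at time t."""
    v = 0.0
    for t0, Vm in profiles:
        tau = t - t0
        if 0.0 <= tau <= Tdes:
            v += Vm * (1.0 - np.cos(2.0 * np.pi * tau / Tdes))
    return v

# Each profile integrates to its commanded displacement, so the merged trajectory settles at -30.
```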


Figure 7. A simulation result. The initial position of an actuator is zero. The three goal positions are commanded at times 0, 1, and 1.5 seconds, respectively: p_T(0) = 40, p_T(1) = 10, p_T(1.5) = -30. The velocity function is obtained by merging the three bell-shaped velocity profiles, which are created at 0, 1, and 1.5 seconds, respectively.

Ulkni is composed of 12 RC servos, with four DOF to control its gaze direction, two DOF for its neck, and six DOF for its eyelids and lips. Fig. 9 shows the experimental results for various facial expressions: normality, surprise, drowsiness, anger, happiness, and winsomeness. To produce these facial expressions, we used the distributed control structure and the proposed fast bell-shaped velocity profiler. Our control scheme can simultaneously move the 12 actuators of Ulkni in real time and limit the jerk of the actuators. The controller can update a desired change of position easily and smoothly, even if an actuator has not yet reached the previous goal position.

V. CONCLUSION AND FURTHER RESEARCH

In conclusion, we introduced an image-based visual servoing approach for an active stereo camera system. By introducing the masking matrices, we formulated a relationship between the time derivative of the feature vector and the robot joint angles. We also introduced a simple visual task function in order to keep the image features within the image boundary. We have developed an active vision system for social interaction with humans. We utilized an approximation to the human ocular motor system and applied it to Ulkni. Ulkni’s eyes can periodically saccade to new targets and track them smoothly, with commands transmitted to the controller over the IEEE 1394a serial bus; furthermore, we proposed a simple bell-shaped velocity profile method to reduce the magnitude of jerk and used this method to control twelve actuators in real time. We demonstrated that our distributed control structure and the proposed bell-shaped velocity profiler are practical.

Figure 8. An experimental result. The position data is obtained only from a variable resistor. The initial position of the actuator is zero. Three goal positions are commanded at times 0, 1, and 1.5 seconds, respectively; that is, p_T(0) = 40, p_T(1) = 10, p_T(1.5) = -30. The black line represents the trajectory of an actuator using a trapezoidal velocity profile, and the yellow line represents the trajectory of an actuator using the fast bell-shaped velocity profile.

Motor control for a social robot poses challenges beyond stability and accuracy. Humans easily perceive motor actions semantically and intuitively, regardless of what the robot intends. Our research still lacks a sound understanding of natural and intuitive social interactions among humans; therefore, our future research will focus on perceiving human actions and on using the obtained vision sensor data to produce appropriate behavior that conveys Ulkni’s intentionality. In addition, the convergence and stability of our image-based control scheme will be considered. The goal of future research is to create natural and intuitive social interactions between the robot and humans and to make our image-based visual servoing robust.

Figure 9. Ulkni’s various facial expressions. There are six facial expressions which show his status: normality, surprise, drowsiness, anger, happiness, and winsomeness.

REFERENCES

[1] C. Breazeal, “Designing Sociable Robots,” MIT Press, 2002.
[2] S. Bazaz and B. Tondu, “Minimum Time On-line Joint Trajectory Generator Based on Low Order Spline Method for Industrial Manipulators,” Robotics and Autonomous Systems, vol. 29, pp. 257-268, 1999.
[3] R. Caminiti, “Control of Arm Movement in Space: Neurophysiological and Computational Approaches,” Springer-Verlag, pp. 285-306, 1991.
[4] B. Espiau, F. Chaumette, and P. Rives, “A New Approach to Visual Servoing in Robotics,” IEEE Transactions on Robotics and Automation, vol. 8, no. 3, pp. 313-326, June 1992.
[5] S. R. Gutman, G. Gottlieb, and D. Corcos, “Exponential Model of a Reaching Movement Trajectory with Nonlinear Time,” Comments on Theoretical Biology, vol. 2, no. 5, pp. 357-383, 1992.
[6] A. Hauck, “Vision-based Reach-to-grasp Movements: From the Human Example to an Autonomous Robotic System,” Ph.D. thesis, Munchen University, 2000.
[7] D. Kim, J. Ryoo, H. Park, and M. J. Chung, “Real Time Gaze Control of Active Head-eye System without Calibration,” Proceedings of the 6th International Symposium on Artificial Life and Robotics, Oita, Japan, Jan. 2001.
[8] D. Kim, D. H. Kim, and M. J. Chung, “Design of a Controller for Reducing Jerk-motion in an Active Vision System,” Proceedings of the 2003 KIEE Annual Summer Conference, Yongpyung, Korea, pp. 2429-2431, Jul. 2003.
[9] J. Lloyd and V. Hayward, “Real-Time Trajectory Generation Using Blend Functions,” Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 784-789, Apr. 1991.
[10] S. Macfarlane and E. Croft, “Design of Jerk Bounded Trajectories for On-line Industrial Robot Applications,” Proceedings of the IEEE International Conference on Robotics and Automation, pp. 979-984, 2001.
[11] E. Malis, F. Chaumette, and S. Boudet, “2-1/2-D Visual Servoing,” IEEE Transactions on Robotics and Automation, vol. 15, no. 2, pp. 238-250, April 1999.
[12] E. Malis, F. Chaumette, and S. Boudet, “Multi-cameras visual servoing,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2000), vol. 4, pp. 2579-2764, San Francisco, California, USA, April 2000.
[13] P. Martinet and E. Cervera, “Stacking Jacobians Properly in Stereo Visual Servoing,” Proceedings of the 2001 IEEE International Conference on Robotics and Automation, pp. 717-722, Seoul, Korea, May 2001.
[14] Y. Tanaka, T. Tsuji, and M. Kaneko, “Bio-mimetic Trajectory Generation of Robots using Time Base Generator,” Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 1301-1315, 1999.