International Journal of Advanced Robotic Systems

ARTICLE

Leader-follower Formation for Nonholonomic Mobile Robots: Discrete-time Approach

Regular Paper

Raul Dali Cruz-Morales1, Martin Velasco-Villa1*, Rafael Castro-Linares1 and Elvia R. Palacios-Hernandez2

1 CINVESTAV-IPN, Departamento de Ingeniería Eléctrica, Sección de Mecatrónica, Ciudad de México, Mexico
2 Universidad Autónoma de San Luis Potosí, Facultad de Ciencias, San Luis, México

*Corresponding author E-mail: [email protected]

Received 10 October 2015; Accepted 01 February 2016

DOI: 10.5772/62344

© 2016 Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a novel solution for the classical leader-follower formation problem considering the case of nonholonomic mobile robots. A formation control strategy is proposed in a discrete-time context by considering the exact discrete-time discretization of the non-linear continuous-time kinematic model of the vehicle. The geometric formation of the robots allows us to derive an alternative model that describes the time evolution of the relative distance and angle between the robots. These variables are obtained in real-time by an on-board vision-based localization system, in which the follower robot is equipped with a Kinect device, together with a recognition board mounted on the leader robot. The boundedness of the relative position error is formally proven by considering a feedback law that is delayed by one sampling period. Numerical simulations and real-time experiments are presented to verify the performance of the control strategy.

Keywords Mobile Robots, Leader-Follower Formation, Discrete-time, Relative Distances

1. Introduction

The formation and cooperation of mobile robots are found in different fields: for example, in the aerial environment for navigation, exploration tasks, recognition of hostile terrain, aerial photography, terrain mapping or the search and capture of ground targets [1, 2]. They are also considered for tracking and obstacle avoidance, patrolling an environment to prevent intruders in arbitrary topologies, 3D mapping, archaeological exploration and terrain recognition [3, 4]. In general, in order to perform such tasks, it is necessary to have a measuring device on board, as in [5], where on-board encoders and a compass are used to maintain the formation. In some cases, multiple devices are required to accomplish recognition of the environment: usually a laser system or a video camera [6, 7]. A special case is the detection of a leader robot or of a particular pattern mounted on the robot [8].

The leader-follower formation problem has been analysed by considering several control strategies, mainly by means of the feedback linearization approach [9-13]. In [9], an on-board camera is used to estimate the orientation and the distance between the robots, and in [11], three different controllers are proposed and a camera is used for measuring the distance and angles between the vehicles, verifying the controllers' performance by numerical simulation. Lyapunov-based control strategies have also been considered; for instance, in [14], two different controls are proposed using a camera fixed to the ceiling, while in [5, 15], the strategy is complemented by a receding horizon approach together with a single camera mounted on the ceiling. In addition, backstepping strategies have been implemented in the solution of the leader-follower problem; for example, in [16], the leader-follower formation is assured using Cartesian coordinates to avoid the singularities that appear in polar coordinates. Backstepping and cascade system theory are used in [17] to control multiple robots, and in [18], the method is complemented by a bio-inspired dynamic to avoid impractical velocity jumps. Backstepping is merged with a fuzzy control strategy in [19] to perform obstacle avoidance. Game theory is used in [4] for terrain patrolling tasks, modelling the problem as a set of two players developing an optimal strategy for its solution. In [20], the leader-follower formation is analysed by measuring the relative distance between the vehicles from a mobile frame located on the follower robot, which is allowed to adopt a variable position under an input-constraints strategy.

One important aspect of the problem is the real-time implementation of the mentioned solutions. In this sense, there are two main approaches: the first is based on a camera mounted on the ceiling of the working space, as in [5] or [15], while the second, more general approach considers an on-board camera used to estimate the orientation and the distance between the robots, as in [9] and [21], in which observer-based solutions are proposed.
In [22], the solution is based on an entropy segmentation that provides the relative-position estimation of the robots.

The objective of this paper is to analyse the leader-follower formation problem using a discrete-time approach: an approach that has not been considered in the literature. The consideration of a discrete-time strategy is based on the necessity of a practical implementation of the solution of the leader-follower problem stated in this work, which should be carried out, in general, by means of microprocessors or computer-based platforms. In addition, the discrete-time nature of the vision system seems to be more suitable for the proposed approach. In particular, a leader-follower formation based on relative distances between the robots is suitable for outdoor environments, where an absolute localization system is difficult to implement. An exact discretization strategy based on direct integration, such as the one used in [23], is considered here.

The control of nonholonomic mobile robots of the type considered in this work is a challenging problem due to the fact that they do not satisfy the well-known Brockett's rank condition [24], which prevents the existence of a smooth time-invariant feedback for the solution of the stabilization


problem. This problem is related to the control of the vehicle's posture. The standard formation problem is addressed in a discrete-time context assuming that the leader robot follows a free trajectory; it is intended that the follower robot tracks the leader within a specified distance using only an on-board camera as a sensor. In this form, the solution does not depend on position and velocity measurements, and only considers relative measurements between the vehicles. To solve this problem, and following [16], a coordinate transformation is considered in which the relative time evolution of the position and angle between the robots is used. Instead of a standard video camera, a Kinect device is used to estimate the relative distance and the angle between the leader and the follower robot. The solution of the problem is based on a non-standard discrete-time feedback that is retarded by one sampling period, which produces a closed-loop perturbed system. It is formally proven that the follower tracks the leader with bounded tracking errors. The proposed formation strategy is evaluated by numerical simulation as well as by real-time experiments by means of two differentially driven mobile robots.

The rest of the work is organized as follows. Section 2 describes the discretization procedure and the mathematical preliminaries that are used throughout the document. In Section 3, the leader-follower formation problem is geometrically stated and the control objective based on the relative errors is described. In Section 4, the solution to the leader-follower formation problem (the formation-error analysis and the main procedure to obtain the control law) is shown. The numerical and experimental evaluation of the proposed formation strategy is described in Section 5 and some conclusions are given in Section 6.

2. Discrete-time Model of the Mobile Robots

A differentially driven mobile robot (type (2,0)), such as the one depicted in Figure 1, is considered.
The kinematic model of this robot is given by [25]:

ẋ = V cos(θ)
ẏ = V sin(θ)
θ̇ = W   (1)

where x and y are the positions along the X and Y axes, respectively, and θ corresponds to the orientation angle of the vehicle. V and W are the linear and angular velocities of the robot. The kinematic evolution of the point α, located at the front of the robot along the longitudinal axis, can be described by considering the following change of coordinates:

α_x = x + l cos(θ)
α_y = y + l sin(θ)
α_θ = θ   (2)

where l > 0 is the constant distance between point P and α . In these coordinates, the kinematic model of the mobile robot takes the form,

α̇_x = V cos(α_θ) − l W sin(α_θ)
α̇_y = V sin(α_θ) + l W cos(α_θ)
α̇_θ = W.   (3)

The linear and angular velocities of each robot are related to the angular velocities of the right and left wheels, w_r and w_l, by means of the relation:

[w_r(t); w_l(t)] = (1/r) [1, m; 1, −m] [V(t); W(t)]   (4)

where r is the wheel radius and 2m is the distance between the wheels. This transformation is non-singular for all r, m > 0.

The control of nonholonomic mobile robots of the type considered in this work is a challenging problem due to the fact that they do not satisfy the well-known Brockett's rank condition [24], which prevents the existence of a smooth time-invariant feedback for the solution of the stabilization problem. The use of Lyapunov techniques has provided solutions for a single robot by means of discontinuous feedback [26] or by means of time-varying control laws [27, 28]. Instead of considering the posture of the vehicle, in this work it is preferred to control the Cartesian coordinates of the point α, while simultaneously obtaining the boundedness of the orientation angle.
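As a minimal illustration, the velocity transformation (4) can be sketched in a few lines of Python (not from the paper; the default parameters are the Pioneer 3-DX values reported later in Section 5.2):

```python
def wheel_speeds(V, W, r=0.0973, m=0.34):
    """Equation (4): map the robot's linear velocity V (m/s) and angular
    velocity W (rad/s) to the right/left wheel angular velocities (rad/s).
    Defaults are the Pioneer 3-DX parameters from Section 5.2."""
    w_r = (V + m * W) / r
    w_l = (V - m * W) / r
    return w_r, w_l

# A pure rotation (V = 0) spins the wheels in opposite directions.
w_r, w_l = wheel_speeds(0.0, 1.0)
```

The transformation is invertible for any r, m > 0, so the controller can be designed in terms of (V, W) and mapped to wheel commands afterwards.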

2.1 Exact discrete-time representation

Taking into account the discrete-time nature of a real-time implementation, an exact discrete-time representation of the kinematic equations (3) is considered. The non-linear kinematic model (3) can be discretized in an exact manner by considering a sampling period T > 0, a time interval t_k = [kT, kT + T) with k = 0, 1, 2, ..., and assuming that the control signals are constant over each interval t_k; this is, V(t) = V(kT), W(t) = W(kT), ∀ t ∈ [kT, kT + T). For the sake of simplicity, the compact notation ζ^{±n} = ζ(kT ± nT), ζ^± = ζ(kT ± T) and ζ^0 = ζ, for any positive integer n, is considered in all the subsequent developments.

Following [23], the time integration of the dynamics (3), with the initial condition evaluated at t = kT and the final condition at t = kT + T, allows us to obtain the discrete-time representation for both the leader (i = L) and the follower (i = F) robot in the form:

[α_xi^+; α_yi^+; α_θi^+] = [α_xi; α_yi; α_θi] + T ψ_i(W_i) [cos(γ_i), −l sin(γ_i); sin(γ_i), l cos(γ_i); 0, 1/ψ_i(W_i)] [V_i; W_i]   (5)

where ψ_i(W_i) = sin((T/2) W_i) / ((T/2) W_i) and γ_i = α_θi + (T/2) W_i.

Remark 1. It should be noted that ψ_i(W_i) is a well-defined function of W_i with, in particular, lim_{W_i→0} ψ_i(W_i) = 1. In addition, note that model (5) depends directly on the sampling period T and, thus, the consideration of different values of this variable does not affect the properties of the model.

In addition to (5), the following output signals are considered:

y_i = [y_i1; y_i2] = [α_i1; α_i2].   (6)

Model (5)–(6) is used in the sequel to study a leaderfollower formation problem.
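The exact discretization (5) can be sketched as follows; this is an illustrative Python implementation (variable names are mine, not the authors'), including the well-defined limit ψ_i(0) = 1 noted in Remark 1:

```python
import numpy as np

def psi(W, T):
    """psi_i(W_i) = sin(T*W/2) / (T*W/2), with psi(0) = 1 (Remark 1)."""
    x = T * W / 2.0
    return 1.0 if abs(x) < 1e-12 else np.sin(x) / x

def step(alpha, V, W, l, T):
    """One step of the exact discrete-time model (5) for one robot.
    alpha = (a_x, a_y, a_theta) at t = kT; returns the state at t = kT + T."""
    ax, ay, ath = alpha
    p = psi(W, T)
    g = ath + T * W / 2.0                     # gamma_i
    ax_n = ax + T * p * (np.cos(g) * V - l * np.sin(g) * W)
    ay_n = ay + T * p * (np.sin(g) * V + l * np.cos(g) * W)
    ath_n = ath + T * W                       # third row of (5): T*psi*(1/psi)*W
    return np.array([ax_n, ay_n, ath_n])
```

Because the discretization is exact, one step with constant (V, W) coincides with the closed-form integral of (3) over [kT, kT + T), for any sampling period T.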

Figure 1. Representation of a mobile robot (2,0) in the plane

3. Leader-follower Formation Problem

The configuration of a leader-follower formation is shown in Figure 2. The complexity of this formation is due to the difficulty of measuring the different relative positions and velocities that exist between the leader and the follower robot. The problem can be stated as follows.

Control objective. Suppose that the leader robot follows a free path that is subject to the nonholonomic restrictions of the vehicle. The objective of the leader-follower formation is that the follower robot tracks the leader robot while maintaining a constant distance l̄_0. With respect to Figure 2, it is necessary to design a feedback law such that:

lim_{k→∞} l_0(kT) = l̄_0 ∈ R.   (7)


Figure 2. Representation of the leader-follower formation of mobile robots type (2,0) in the plane

Remark 2. Model (5) considers the evolution of the frontal point α described by (2). In fact, since the leader robot has a free displacement in the working space, it may not be necessary to consider model (5) in order to describe its dynamics (for example, a different discrete-time dynamic model obtained from (1) could be used). Here, model (5) is considered for the sake of notational simplicity in the subsequent developments.

From Figure 2, it is possible to obtain the geometrical relationship between the leader and the follower robot. Note first that the distance l_0 between the points (α_xF, α_yF) and (α_xL, α_yL) can be decomposed into the local Cartesian components l_x, l_y that describe the position of the point (α_xF, α_yF) with respect to the moving frame X_m − Y_m located at the point (α_xL, α_yL) of the leader robot. With respect to the fixed frame X − Y, the distances l_x and l_y can be written as:

l_x = −(α_xL − α_xF) cos α_θL − (α_yL − α_yF) sin α_θL
l_y = (α_xL − α_xF) sin α_θL − (α_yL − α_yF) cos α_θL.   (8)

It is clear that the control objective (7) can be rewritten in terms of the evolution of l_x and l_y; that is, it is required that lim_{k→∞} l_x(kT) = l̄_x and lim_{k→∞} l_y(kT) = l̄_y, for l̄_x, l̄_y ∈ R such that l̄_x² + l̄_y² = l̄_0².

The time evolution of l_x and l_y can be obtained by considering a direct forward shift of equation (8); this is:

l_x^+ = −[α_xL^+ − α_xF^+] cos α_θL^+ − [α_yL^+ − α_yF^+] sin α_θL^+   (9)

l_y^+ = [α_xL^+ − α_xF^+] sin α_θL^+ − [α_yL^+ − α_yF^+] cos α_θL^+.   (10)

Substituting α_xL^+, α_xF^+, α_yL^+, α_yF^+ and α_θL^+ from equation (5) into (9) leads to:

l_x^+ = −[α_xL + T V_L ψ_L cos(γ_L) − T l W_L ψ_L sin(γ_L) − (α_xF + T V_F ψ_F cos(γ_F) − T l W_F ψ_F sin(γ_F))] cos(α_θL + T W_L) − [α_yL + T V_L ψ_L sin(γ_L) + T l W_L ψ_L cos(γ_L) − (α_yF + T V_F ψ_F sin(γ_F) + T l W_F ψ_F cos(γ_F))] sin(α_θL + T W_L).

After a number of manipulations and substitutions of some trigonometric identities, one finds that the dynamics of l_x can be written as:

l_x^+ = −(α_xL − α_xF) cos(α_θL) cos(TW_L) − (α_yL − α_yF) sin(α_θL) cos(TW_L) + (α_xL − α_xF) sin(α_θL) sin(TW_L) − (α_yL − α_yF) cos(α_θL) sin(TW_L) − T V_L ψ_L cos((T/2)W_L) − T l W_L ψ_L sin((T/2)W_L) + T V_F ψ_F cos(e_θ + ((W_F − 2W_L)/2)T) − T l W_F ψ_F sin(e_θ + ((W_F − 2W_L)/2)T)   (11)

where the orientation error between the vehicles, e_θ, is defined as e_θ = α_θF − α_θL.

Considering equation (8), it is possible to rewrite equation (11) in the form:

l_x^+ = l_x cos(TW_L) + l_y sin(TW_L) − T V_L ψ_L cos((T/2)W_L) − T l W_L ψ_L sin((T/2)W_L) + T V_F ψ_F cos(γ) − T l W_F ψ_F sin(γ)   (12)

where γ = e_θ + (T/2)(W_F − 2W_L).

Applying a similar procedure to l_y^+ in equation (10), one obtains the dynamics:

l_y^+ = −l_x sin(TW_L) + l_y cos(TW_L) + T V_L ψ_L sin((T/2)W_L) − T l W_L ψ_L cos((T/2)W_L) + T V_F ψ_F sin(γ) + T l W_F ψ_F cos(γ).   (13)
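The one-step dynamics (12)-(13) can be cross-checked numerically against a direct forward shift of (8) under model (5). The following Python sketch (illustrative only; the states and velocities are arbitrarily chosen, not from the paper) performs that check:

```python
import numpy as np

T, l = 0.1, 0.1  # sampling period (s) and alpha-point offset (m), illustrative

def psi(W):
    x = T * W / 2.0
    return 1.0 if abs(x) < 1e-12 else np.sin(x) / x

def step(a, V, W):
    # exact discrete-time model (5)
    g, p = a[2] + T * W / 2.0, psi(W)
    return np.array([a[0] + T * p * (np.cos(g) * V - l * np.sin(g) * W),
                     a[1] + T * p * (np.sin(g) * V + l * np.cos(g) * W),
                     a[2] + T * W])

def rel(aL, aF):
    # relative distances, equation (8)
    dx, dy, th = aL[0] - aF[0], aL[1] - aF[1], aL[2]
    return (-dx * np.cos(th) - dy * np.sin(th),
             dx * np.sin(th) - dy * np.cos(th))

aL = np.array([3.0, 1.0, 0.2]); VL, WL = 1.0, 0.5
aF = np.array([2.5, 2.5, 0.4]); VF, WF = 0.8, 0.3

# direct forward shift of (8) through model (5)
lx_p, ly_p = rel(step(aL, VL, WL), step(aF, VF, WF))

# closed-form one-step dynamics (12)-(13)
lx, ly = rel(aL, aF)
gam = (aF[2] - aL[2]) + T * (WF - 2 * WL) / 2.0   # gamma
lx_f = (lx * np.cos(T * WL) + ly * np.sin(T * WL)
        - T * VL * psi(WL) * np.cos(T * WL / 2) - T * l * WL * psi(WL) * np.sin(T * WL / 2)
        + T * VF * psi(WF) * np.cos(gam) - T * l * WF * psi(WF) * np.sin(gam))
ly_f = (-lx * np.sin(T * WL) + ly * np.cos(T * WL)
        + T * VL * psi(WL) * np.sin(T * WL / 2) - T * l * WL * psi(WL) * np.cos(T * WL / 2)
        + T * VF * psi(WF) * np.sin(gam) + T * l * WF * psi(WF) * np.cos(gam))
```

Both routes produce the same (l_x^+, l_y^+) up to floating-point precision, confirming that (12)-(13) are exact and not an approximation.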

The leader-follower formation problem can be alternatively described in terms of relative position errors, defined as e_lx = l_x^d − l_x, e_ly = l_y^d − l_y, where l_x^d and l_y^d are the desired relative distances between the vehicles. From Figure 2, it is easy to see that l_x^d and l_y^d can be expressed as:

l_x^d = l̄_0 cos(φ^d(kT)),   l_y^d = l̄_0 sin(φ^d(kT))   (14)

for a desired time-varying angle φ^d. Note that equation (14) satisfies the relation (l_x^d)² + (l_y^d)² = l̄_0².

The time evolution of e_lx can be obtained by considering, again, a direct forward shift; this is:

e_lx^+ = l_x^{d+} − (l_x cos(TW_L) + l_y sin(TW_L) − T V_L ψ_L cos((T/2)W_L) − T l W_L ψ_L sin((T/2)W_L) + T V_F ψ_F cos(γ) − T l W_F ψ_F sin(γ))

which can be rewritten in the form:

e_lx^+ = l_x^{d+} + e_lx cos(TW_L) + e_ly sin(TW_L) + T V_L ψ_L cos((T/2)W_L) + T l W_L ψ_L sin((T/2)W_L) − T V_F ψ_F cos(γ) + T l W_F ψ_F sin(γ) − l_x^d cos(TW_L) − l_y^d sin(TW_L).   (15)

Following a similar procedure for e_ly, one obtains the dynamics:

e_ly^+ = l_y^{d+} − e_lx sin(TW_L) + e_ly cos(TW_L) − T V_L ψ_L sin((T/2)W_L) + T l W_L ψ_L cos((T/2)W_L) − T V_F ψ_F sin(γ) − T l W_F ψ_F cos(γ) + l_x^d sin(TW_L) − l_y^d cos(TW_L).   (16)

4. Delayed Feedback Solution to the Leader-follower Problem

According to the definition of the leader-follower problem as stated in Section 3, the follower robot must be able to follow any path described by the leader robot while maintaining a constant distance with respect to it. Therefore, the design of the control strategy should be based on the follower's control signals V_F, W_F. In order to simplify the development of the required feedback, equations (15) and (16) can be rewritten in the compact form:

[e_lx^+; e_ly^+] = A_1(W_L) [e_lx − l_x^d; e_ly − l_y^d] + T ψ_L A_2(W_L) [V_L; W_L] + T ψ_F A_3(γ) [V_F; W_F] + [l_x^{d+}; l_y^{d+}]   (17)

where:

A_1(W_L) = [cos(TW_L), sin(TW_L); −sin(TW_L), cos(TW_L)],
A_2(W_L) = [cos((T/2)W_L), l sin((T/2)W_L); −sin((T/2)W_L), l cos((T/2)W_L)],
A_3(γ) = [−cos(γ), l sin(γ); −sin(γ), −l cos(γ)].

To obtain a solution to the leader-follower problem, the virtual control signals ξ_1 = T ψ_F V_F and ξ_2 = T ψ_F W_F are considered. Note also that, by defining e_l = [e_lx, e_ly]^T, l^d = [l_x^d, l_y^d]^T, ξ = [ξ_1, ξ_2]^T and u_L = [V_L, W_L]^T, equation (17) can be rewritten as:

e_l^+ = A_1(W_L) e_l − A_1(W_L) l^d + T ψ_L A_2(W_L) u_L + A_3(γ) ξ + l^{d+}.   (18)

A feedback law can now be synthesized based on the virtual controls ξ as:

ξ = A_3(γ̄)^{−1} [v − A_1(W_L) e_l + A_1(W_L) l^d − T ψ_L A_2(W_L) u_L − l^{d+}]   (19)

where the new input signal v is selected as:

v = K e_l = [k_1, 0; 0, k_2] [e_lx; e_ly]   (20)

with k_1, k_2 ∈ R, |k_1| < 1, |k_2| < 1, and

γ̄ = e_θ + (T/2)(W_F^− − 2W_L).

The original control signals V_F, W_F can be derived from ξ_1 and ξ_2; more precisely:

V_F = ξ_1 / (T ψ(W_F)),   W_F = (2/T) sin^{−1}(ξ_2/2).   (21)

Remark 3. It should be pointed out that the control signals V_F, W_F cannot be synthesized directly from equation (17) due to the dependence of the matrix A_3(γ) on the signal W_F, since γ = e_θ + (T/2)(W_F − 2W_L). For this reason, equation (19) is defined in terms of the delayed function γ̄, which allows us to synthesize the virtual control signal ξ as a function of W_F^− = W_F(kT − T).
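A possible Python sketch of the delayed feedback (19)-(21) is given below (illustrative only; the function and variable names are mine, and the arcsin argument is assumed to stay within its domain):

```python
import numpy as np

def control(el, ld, ld_next, VL, WL, WF_prev, e_th, T=0.1, l=0.1, k1=0.4, k2=0.4):
    """Delayed feedback (19)-(21): gamma-bar is evaluated with the PREVIOUS
    follower angular velocity WF_prev, so the virtual control xi is explicit."""
    psiL = 1.0 if WL == 0 else np.sin(T * WL / 2) / (T * WL / 2)
    c, s = np.cos(T * WL), np.sin(T * WL)
    A1 = np.array([[c, s], [-s, c]])
    ch, sh = np.cos(T * WL / 2), np.sin(T * WL / 2)
    A2 = np.array([[ch, l * sh], [-sh, l * ch]])
    gbar = e_th + T * (WF_prev - 2 * WL) / 2.0        # delayed gamma-bar
    A3 = np.array([[-np.cos(gbar), l * np.sin(gbar)],
                   [-np.sin(gbar), -l * np.cos(gbar)]])  # det(A3) = l != 0
    v = np.array([k1 * el[0], k2 * el[1]])            # v = K e_l, eq. (20)
    xi = np.linalg.solve(A3, v - A1 @ el + A1 @ ld
                         - T * psiL * A2 @ np.array([VL, WL]) - ld_next)  # (19)
    # eq. (21); xi[1]/2 must lie in [-1, 1] for the follower step to be feasible
    WF = (2.0 / T) * np.arcsin(xi[1] / 2.0)
    psiF = 1.0 if WF == 0 else np.sin(T * WF / 2) / (T * WF / 2)
    VF = xi[0] / (T * psiF)
    return VF, WF
```

At an exact straight-line equilibrium (zero error, W_L = 0), the law returns V_F = V_L and W_F = 0, i.e., the follower simply replicates the leader's motion.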

Let us now consider the formation error dynamics (18), together with the feedback (19); this is:

e_l^+ = A_1(W_L) e_l − A_1(W_L) l^d + T ψ_L A_2(W_L) u_L + A_3(γ) A_3(γ̄)^{−1} [v − A_1(W_L) e_l + A_1(W_L) l^d − T ψ_L A_2(W_L) u_L − l^{d+}] + l^{d+}.   (22)

Note first that:

A_3(γ) A_3(γ̄)^{−1} = [cos(γ − γ̄), −sin(γ − γ̄); sin(γ − γ̄), cos(γ − γ̄)]

where γ − γ̄ = (T/2)(W_F − W_F^−). Since cos(x) = 1 − 2 sin²(x/2), it is possible to write:

A_3(γ) A_3(γ̄)^{−1} = I − Δ(γ − γ̄) = I − [∂_1(γ − γ̄), ∂(γ − γ̄); −∂(γ − γ̄), ∂_1(γ − γ̄)]   (23)


where I ∈ R^{2×2} is the identity matrix, ∂(γ − γ̄) = sin(γ − γ̄) and ∂_1(γ − γ̄) = 2 sin²((γ − γ̄)/2). Hence, equation (22) takes the form:

e_l^+ = K e_l − Γ   (24)

with Γ = [Γ_x, Γ_y]^T being a non-linear term of the form:

Γ = h_1 + h_2   (25)

where:

h_1 = Δ [K − A_1(W_L)] e_l,   h_2 = Δ [A_1(W_L) l^d − T ψ_L A_2(W_L) u_L − l^{d+}].

It is possible to show that the terms h_1 and h_2 are appropriately bounded functions. The term h_1 can be bounded as:

‖h_1‖ = ‖Δ [K − A_1(W_L)] e_l‖ ≤ ‖Δ‖ ‖K − A_1(W_L)‖ ‖e_l‖.

Then, due to the boundedness of the matrices Δ, K and A_1(W_L), there exists a non-zero real positive constant C_1 such that:

‖h_1‖ ≤ C_1 ‖e_l‖.   (26)

The term h_2 can be bounded as:

‖h_2‖ = ‖Δ [A_1(W_L) l^d − T ψ_L A_2(W_L) u_L − l^{d+}]‖ ≤ ‖Δ‖ [‖A_1(W_L)‖ ‖l^d‖ + T |ψ_L| ‖A_2(W_L)‖ ‖u_L‖ + ‖l^{d+}‖].

Since u_L = [V_L, W_L]^T, l^d and l^{d+} are bounded by design, it is possible to prove that the function ψ_L and the matrices Δ, A_1(W_L) and A_2(W_L) are also bounded. Then, it is possible to write:

‖h_2‖ ≤ C_2   (27)

for a non-zero positive constant C_2. Hence, from (26) and (27), it is obtained that:

‖Γ‖ ≤ C_1 ‖e_l‖ + C_2.   (28)

4.1 Formation error analysis

Based on the previous developments, it is now possible to state the main result on the evolution of the formation error, as follows.

Lemma 4. Let us consider a pair of differentially driven mobile robots, each of which is described by (5), together with the discrete-time feedback (19)-(21), where the real parameters k_1 and k_2 are such that |k_i| < 1 for i = 1, 2. Then, if the condition:

‖K‖ + C_1 < 1   (29)

is satisfied, the formation error e_l = [e_lx, e_ly]^T is ultimately bounded.

Proof. The time evolution of the closed-loop formation error e_l can be determined by considering the solution of equation (24); this is:

e_l(n) = K^n e_l(0) − Σ_{m=0}^{n−1} K^{n−1−m} Γ(m)   (30)

which can be bounded by using (28) as follows:

‖e_l(n)‖ ≤ ‖K‖^n ‖e_l(0)‖ + Σ_{m=0}^{n−1} ‖K‖^{n−1−m} {C_1 ‖e_l(m)‖ + C_2}.   (31)

Since the eigenvalues of the matrix K are all within the unit circle, one finds that:

lim_{n→∞} ‖K‖^n = 0   and   lim_{n→∞} Σ_{m=0}^{n−1} ‖K‖^{n−1−m} = 1/(1 − ‖K‖).

Therefore, from equation (31), the ultimate bound can be obtained as:

lim_{n→∞} ‖e_l(n)‖ ≤ C_2 / (1 − ‖K‖ − C_1).   (32)

Then, if condition (29) holds, the formation error e_l = [e_lx, e_ly]^T is ultimately bounded.
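The contraction argument behind Lemma 4 can be illustrated numerically. In the sketch below, the values of C_1 and C_2 are hypothetical, chosen only so that condition (29) holds; the recursion is the worst case implied by (24) and (28):

```python
# Worst-case error recursion ||e(n+1)|| <= (||K|| + C1)*||e(n)|| + C2,
# with hypothetical constants (not computed from the paper's experiments).
k1 = k2 = 0.4
K_norm = max(abs(k1), abs(k2))   # norm of the diagonal gain matrix K
C1, C2 = 0.2, 0.05               # assumed perturbation bounds from (26)-(27)

assert K_norm + C1 < 1           # condition (29)
ultimate_bound = C2 / (1 - K_norm - C1)   # equation (32)

e = 1.0                          # initial worst-case error norm
for _ in range(200):
    e = (K_norm + C1) * e + C2   # one step of the bound
```

After enough iterations, the worst-case error settles at the fixed point C_2/(1 − ‖K‖ − C_1), which is exactly the ultimate bound (32).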

Remark 5. If the angular acceleration is approximated as a function of the angular velocity W_F, this is, a_{W_F} ≈ (W_F − W_F^−)/T, then γ − γ̄ ≈ (T²/2) a_{W_F}. Therefore, the consideration of a low angular acceleration, together with a sufficiently small sampling period, allows us to ensure that γ − γ̄ ≈ 0.

Remark 6. Note that the magnitudes of the constants C_1, C_2 in the ultimate bound (32) depend directly on the norm of the matrix Δ in equation (23). It is clear that when Δ → 0, one has C_1, C_2 → 0, ensuring in this way that lim_{n→∞} e_l(n) = 0. Since the magnitude of Δ depends on γ − γ̄, a small ultimate bound is obtained in terms of Remark 5.

Remark 7. Note that in the special case in which the angular velocity is constant, as is the case for a straight or a circular path, the perturbation term Γ becomes a vanishing perturbation.

In this case, the closed-loop formation error asymptotically converges to zero.

The result stated in Lemma 4 assures the boundedness of the formation error, but it cannot determine the evolution of the orientation of the follower robot. This is an important issue, since the control of the follower vehicle is carried out by considering the point α_F = (α_xF, α_yF), which is located at the front of the longitudinal axis of the vehicle. A good reference for the orientation of the follower vehicle is the orientation of the leader vehicle; that is, the error e_θ = α_θF − α_θL, whose time evolution can be obtained by considering its corresponding forward shift; more precisely:

e_θ^+ = e_θ + T (W_F − W_L).   (33)

Since W_L is an input to the leader robot that generates the tracking trajectory, it can be assumed to be bounded. Also, from (21), it is clear that W_F is also a bounded function. Therefore, the evolution of the formation error is bounded.

The evolution of α_θL is determined by the trajectory described by the leader robot. To determine the evolution of α_θF, the first two equations of the discrete-time model (5) are considered in order to obtain:

(α_xF^+ − α_xF) sin(γ_F) − (α_yF^+ − α_yF) cos(γ_F) + l ψ_F T W_F = 0.

Then, by considering the last equation in (5) and the definition of γ_F, one obtains:

(α_xF^+ − α_xF) sin((α_θF^+ + α_θF)/2) − (α_yF^+ − α_yF) cos((α_θF^+ + α_θF)/2) + 2 l sin((α_θF^+ − α_θF)/2) = 0.   (34)

Equation (34) is the discrete-time counterpart of the continuous-time nonholonomic restriction:

α̇_x sin(θ) − α̇_y cos(θ) + l θ̇ = 0

associated with the original system (3). The importance of equation (34) is that, since the proposed leader-follower control strategy determines the evolution of the point (α_xF, α_yF), the evolution of the orientation of the follower robot α_θF must satisfy restriction (34). This fact shows the boundedness of α_θF.

5. Experimental Evaluation

The solution proposed for the leader-follower formation problem (21) is initially evaluated by numerical simulations and by real-time experiments carried out using a pair of differentially driven wheeled mobile robots of the type Pioneer 3-DX from MobileRobots Inc.

5.1 Numerical results

A numerical experiment (NE) was carried out by considering that the leader robot follows a circular path with a radius of 1.8 m centred at (x, y) = (3 m, 3 m). The trajectory described by the leader robot is generated by considering a linear velocity V_L = 1 m/s and an angular velocity W_L = 0.5 rad/s. It is considered that the follower robot remains at a constant distance of one metre behind the leader robot; this is, l̄_0 = 1 m and φ^d = 0 rad (equivalently, l_x^d = 1 m and l_y^d = 0 m). The initial conditions considered in the experiment are x_L(0) = 3 m, y_L(0) = 1 m, θ_L(0) = 0 rad and x_F(0) = 2.5 m, y_F(0) = 2.5 m, θ_F(0) = 0 rad. A sampling period of T = 0.1 s was used, together with gains k_1 = 0.4 and k_2 = 0.4.

Figure 3. NE. Time formation evolution on the Cartesian plane

Figure 3 depicts the evolution of the leader-follower formation on the Cartesian plane. It can be observed that the trajectory tracked by the follower robot is different from the one developed by the leader robot. This is a consequence of the control strategy, which considers the control of the relative distance between the robots and not a specific path.

The tracking errors e_lx and e_ly, associated with the desired relative distance l̄_0, are shown in Figure 4, where it can be observed that e_lx and e_ly converge to the origin due to the constant angular velocity considered in the experiment (see Remark 7).
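A self-contained Python sketch of the closed loop (exact model (5), relative distances (8), delayed feedback (19)-(21)) is given below. It deviates from the NE above in several assumed respects: a straight leader path instead of the circular one, the experimental sampling period T = 0.05 s, a small initial offset, and a clipped arcsin argument in (21) as a practical safeguard; these are choices of this sketch, not of the paper:

```python
import numpy as np

T, l, k1, k2 = 0.05, 0.1, 0.4, 0.4
ld = np.array([1.0, 0.0])                      # desired (l_x^d, l_y^d), constant
VL, WL = 1.0, 0.0                              # straight leader path (assumed)

def psi(W):
    x = T * W / 2.0
    return 1.0 if abs(x) < 1e-12 else np.sin(x) / x

def step(a, V, W):                             # exact model (5)
    g, p = a[2] + T * W / 2.0, psi(W)
    return np.array([a[0] + T * p * (np.cos(g) * V - l * np.sin(g) * W),
                     a[1] + T * p * (np.sin(g) * V + l * np.cos(g) * W),
                     a[2] + T * W])

def rel(aL, aF):                               # equation (8)
    dx, dy, th = aL[0] - aF[0], aL[1] - aF[1], aL[2]
    return np.array([-dx * np.cos(th) - dy * np.sin(th),
                      dx * np.sin(th) - dy * np.cos(th)])

aL = np.array([3.0, 1.0, 0.0])                 # leader alpha-point
aF = np.array([4.05, 1.03, 0.1])               # follower alpha-point, small offset
WF_prev = 0.0
e0 = np.linalg.norm(ld - rel(aL, aF))

for _ in range(200):                           # 10 s of closed-loop evolution
    el = ld - rel(aL, aF)
    c, s = np.cos(T * WL), np.sin(T * WL)
    A1 = np.array([[c, s], [-s, c]])
    ch, sh = np.cos(T * WL / 2), np.sin(T * WL / 2)
    A2 = np.array([[ch, l * sh], [-sh, l * ch]])
    gbar = (aF[2] - aL[2]) + T * (WF_prev - 2 * WL) / 2.0
    A3 = np.array([[-np.cos(gbar), l * np.sin(gbar)],
                   [-np.sin(gbar), -l * np.cos(gbar)]])
    v = np.array([k1 * el[0], k2 * el[1]])     # eq. (20)
    xi = np.linalg.solve(A3, v - A1 @ el + A1 @ ld
                         - T * psi(WL) * A2 @ np.array([VL, WL]) - ld)  # eq. (19)
    WF = (2.0 / T) * np.arcsin(np.clip(xi[1] / 2.0, -1.0, 1.0))         # eq. (21)
    VF = xi[0] / (T * psi(WF))
    aL, aF = step(aL, VL, WL), step(aF, VF, WF)
    WF_prev = WF

e_final = np.linalg.norm(ld - rel(aL, aF))
```

Consistent with Remark 7 (constant leader angular velocity), the relative-position error decays towards the origin in this scenario.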


Figure 4. NE. Cartesian relative errors e_lx, e_ly

5.2 Experimental platform

As mentioned above, two differentially driven mobile robots of the type Pioneer 3-DX, which support loads of up to 21 kg, were used. For these robots, l = 0.1 m in equation (2), and r = 0.0973 m, m = 0.34 m in equation (4). To carry out the experiment, the leader robot carries a recognition graphical pattern placed at the midpoint of the wheels' axis. The pattern consists of three white rectangles within a black rectangle (see Figure 6). The follower robot has a Kinect device positioned at the midpoint of the wheels' axis and an on-board laptop on which the image processing and the computation of the control law, programmed in C++, are performed. An RGB camera in the Kinect device captures images that are smoothed and then binarized. Then, a program on the laptop searches for the recognition pattern and detects the centroids of each of the three rectangular contours within a larger rectangular contour.

5.2.1 Vision-based localization system

This section provides an overview of the major steps of the method used for detecting the leader robot from the visual information provided by the Kinect device on board the follower robot for feedback purposes. The Microsoft Kinect device contains a depth sensor, an RGB camera and a four-microphone array, and has the capability to perform 3D motion capture as well as face and voice recognition [29]. The depth sensor consists of an infrared projector used to differentiate depth by infrared vision. The infrared projector and a monochrome CMOS sensor in the Kinect device are combined to capture video data in 3D under any ambient light conditions. The vision sensor provides a maximum of 30 frames per second with a resolution of 640 × 480 pixels for each camera. Due to its limitations, the depth sensor can be used for indoor navigation only. The Kinect depth sensor can detect the distance of an object so that an autonomous robot can navigate while avoiding obstacles in an indoor environment [30]. The three-dimensional space is reconstructed by merging the two images obtained by the RGB and the depth camera: the RGB camera obtains the image of the environment, while the depth camera obtains the depth information of each pixel in the image. A detailed analysis of the accuracy and resolution of the Kinect can be found in [31].

Before its use, the Kinect has to be calibrated to merge the data of the images obtained by the cameras and to process the raw data of the depth sensor. This calibration was done as in [32], following the procedure developed in [33]. The depth information is measured along lines perpendicular to the longitudinal axis of the sensor. Thus, all the objects that lie on the same line have the same depth value, regardless of whether they are at the edge or at the centre of the image; this distance is denoted as l_x^r. Figure 5 shows how the distance between the centre of the image and a centroid of the recognition pattern board is calculated: first by obtaining the depth information of the central point of the image, and then by determining the number of pixels that exist between these two points at the same depth. The distance covered by each pixel is obtained experimentally using the approximate relation cmppixel = 0.169 * depth + 0.016, valid for distances between 0.9 m and 3.5 m, where the first constant depends on the standard geometric characteristics of the cameras and the free constant term is a correction factor; see [32, 33] for more information. Knowing the number of pixels NP between the centre of the image and a centroid of the recognition pattern, the length l_y^r is obtained as l_y^r = NP * cmppixel. If the centroid is on the right of the image this length is positive, and negative if it is on the left.
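The pixel-to-metric conversion described above can be summarized as follows (a sketch; the fitted constants are those reported in the text, with depth assumed in metres and the result in centimetres per pixel):

```python
def cmp_pixel(depth_m):
    """Approximate width covered by one pixel (cm) at a given depth (m),
    per the experimentally fitted relation cmppixel = 0.169*depth + 0.016,
    reported as valid for depths between 0.9 m and 3.5 m."""
    assert 0.9 <= depth_m <= 3.5, "outside the calibrated range"
    return 0.169 * depth_m + 0.016

def lateral_offset_cm(n_pixels, depth_m):
    """l_y^r = NP * cmppixel; by convention, positive when the centroid is
    to the right of the image centre and negative when it is to the left."""
    return n_pixels * cmp_pixel(depth_m)
```

The sign convention matches the text: a negative pixel count (centroid left of centre) yields a negative lateral offset.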

Horizontal Distance

Figure 5. Distances between the centre of the image to the centroid of the recognition pattern

Figure 6 shows how the recognition pattern on the leader robot is used to obtain the orientation error eθ. Once the distances of the three centroids are determined from the Kinect information, the distances A and B (see Figure 6) are used to compute the relative orientation angle eθ between the robots. This error is calculated using basic trigonometry: since the distance between the external centroids of the graphical pattern, C = 11.5 cm, is known, and the distances A and B are measured, the orientation error is simply eθ = sin⁻¹((A − B)/C). The tracking distances lx, ly and the orientation error eθ are calculated in real time at each iteration of the control.
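The localization arithmetic above is small enough to state directly. The formulas eθ = sin⁻¹((A − B)/C) and lx = lxr + l·cos(eθ), ly = lyr + l·sin(eθ) are from the text; the offset l = 0.1 m and the sample depth values are hypothetical, chosen only for the example.

```python
import math

def orientation_error(A, B, C=0.115):
    # e_theta = asin((A - B) / C), with C = 11.5 cm the known distance
    # between the external centroids of the recognition pattern.
    return math.asin((A - B) / C)

def relative_distances(lxr, lyr, l, e_theta):
    # lx = lxr + l*cos(e_theta), ly = lyr + l*sin(e_theta):
    # shift the measured distances to the robot's alpha point.
    return lxr + l * math.cos(e_theta), lyr + l * math.sin(e_theta)

# Example: pattern seen square-on (A == B) one metre ahead, with a
# hypothetical board-to-alpha-point offset l = 0.1 m.
e = orientation_error(0.95, 0.95)
lx, ly = relative_distances(1.0, 0.0, 0.1, e)
```

A depth difference of A − B = C/2 gives eθ = π/6, i.e., a 30° relative heading between the robots.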

Figure 6. Relative distance and orientation recognition using the graphical pattern and Kinect device

From the geometric relation between the robots, shown in Figure 7, the relative distances lx, ly used to obtain the solution to the formation problem can be computed as lx = lxr + lxα and ly = lyr + lyα, where lxα = l cos(eθ) and lyα = l sin(eθ).

Figure 7. Geometrical representation of the alpha point and the middle point of each robot

5.3 Real-time results

Two real-time experiments were carried out in order to evaluate the proposed solution: the first considered the tracking of a straight path by the leader robot, the second a sinusoidal path. In both cases, the follower robot must track the leader vehicle at a constant distance of one metre (l0d = 1 m) with an angle φd = 0 rad (that is, lxd = 1 m and lyd = 0 m). A sampling time of T = 0.05 s was considered for the control loop. The Kinect device obtained the image and the depth data of all the pixels. Then, using the OpenCV software, the image was smoothed and binarized so that the pattern board on the leader could be recognized. The centroids of each rectangle were obtained and, knowing the pixels of the centroids, the linear distances were obtained from the depth data; in this way, the measurements of the relative distances lx, ly were obtained, making it possible to compute l0 and the angle φ. The control variables were then calculated and sent to the follower robot every 0.3 s. It is important to point out that, even though the control loop ran at a faster sampling time, the control signal applied to the follower robot did not change until a new relative distance measurement was obtained. No local absolute localization system was considered for the real-time experiments and, therefore, it is not possible to show the evolution of the formation on the Cartesian plane.

5.3.1 Straight path experiment (StPE)

Although internal or external disturbances were not considered in the presented solution, a disturbance was introduced in the straight path experiment as a practical evaluation of the robustness of the discrete-time strategy. At approximately t = 11 s, the follower robot was manually shifted from its current position, introducing instantaneous tracking errors and causing the vision system to lose the tracking reference pattern; incorrect estimations of the relative distances and angles between the robots thus appeared. The straight line was generated by considering VL = 1 m/s and WL = 0 rad/s. As an initial condition, the follower robot was placed at l0(0) = 1.5 m and φ(0) = 3.3685 rad (that is, lx(0) = −1.438 m and ly(0) = −0.425 m) from the leader robot.

The evolution of the relative errors elx, ely is depicted in Figure 8, which shows their convergence to a neighbourhood of the origin until the mentioned disturbance appeared; after the new relative position was measured, the convergence of elx, ely was recovered. The evolution of the non-controlled error signal eθ is shown in Figure 9, where it can be appreciated that it remained constant during the period when the disturbance was not affecting the experiment. Finally, Figure 10 shows the time evolution of the control signals. Note that the effects of the external disturbance are most evident in Figure 9, where eθ becomes an unbounded signal even though, because of the physical characteristics of the experiment, this signal should have a magnitude between ±π rad. The control signal WF is shown in Figure 10. It should be pointed out that, during the interval of time when the disturbance was acting, the control signal (based on measured distances) was not acting on the mobile robot.
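The experimental timing (control loop at T = 0.05 s, vision updates roughly every 0.3 s, with the last command held in between) can be replayed with a short sketch. The command values below are hypothetical, chosen only to illustrate the zero-order hold behaviour.

```python
def run_loop(commands, T=0.05, vision_period=0.3, horizon=1.2):
    """Replay the experimental timing: the control loop runs every T
    seconds, but the command applied to the follower only changes when
    the vision system delivers a fresh measurement (every ~0.3 s); in
    between, the previous command is held (zero-order hold)."""
    ratio = int(round(vision_period / T))   # control steps per vision frame
    held = (0.0, 0.0)                       # (V_F, W_F) currently applied
    log = []
    for k in range(int(round(horizon / T))):
        if k % ratio == 0:                  # fresh measurement available
            held = commands[k // ratio]
        log.append(held)
    return log

# Hypothetical per-frame commands (V_F in m/s, W_F in rad/s).
cmds = [(0.5, 0.0), (0.45, 0.1), (0.4, 0.05), (0.35, 0.0)]
log = run_loop(cmds)
```

Each command is thus applied for six consecutive control steps (0.3 s / 0.05 s), which is the behaviour described above during normal operation; during the disturbance, no fresh measurement arrives and the hold simply persists.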

Raul Dali Cruz-Morales, Martin Velasco-Villa, Rafael Castro-Linares and Elvia R. Palacios-Hernandez: Leader-follower Formation for Nonholonomic Mobile Robots: Discrete-time Approach


Figure 8. StPE. Behaviour of the relative errors elx, ely.

Figure 9. StPE. Evolution of the non-controlled error eθ.

Figure 10. StPE. Control signals VF, WF.

5.3.2 Sinusoidal path experiment (SiPE)

The sinusoidal path described by the leader robot was obtained by considering VL = 0.16 m/s and WL = −0.647 sin(kT) rad/s for k = 0, 1, 2, 3, ... and T = 0.1 s. The initial conditions of the experiment were set to l0(0) = 0.8 m and φ(0) = π rad.

The relative errors elx, ely are shown in Figure 11; it can be noted that they converge to a neighbourhood of the origin, evolving in a gap of ±5 cm for elx and ±8 cm for ely. Note that this gap depends on the constant associated with the norm of the perturbation term Γ, as well as on the magnitude of the relative error el. The main disadvantage in our experiment is the low refresh rate of the vision system, which induces an increment in the relative error el and, consequently, in the size of the neighbourhood of convergence. The evolution of the non-controlled signal eθ is shown in Figure 12; it is bounded, as expected from equation (34), oscillating in a gap of ±0.3 rad. The time evolution of the control signals is shown in Figure 13.

Finally, the disturbance terms Γx and Γy in equations (24)–(25) are depicted in Figure 14, where it can be seen that they are appropriately bounded, as required in order to obtain a small ultimate bound for the relative errors elx and ely.

Figure 11. SiPE. Behaviour of the errors elx, ely.

Figure 12. SiPE. Evolution of the non-controlled error eθ.

Figure 13. SiPE. Control signals VF, WF.

Figure 14. SiPE. Evolution of the perturbation signals Γx, Γy.
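The leader trajectories used in both experiments can be reproduced by integrating the unicycle kinematics in discrete time. The closed form below is one common statement of the exact discretization under a zero-order hold on (v, w); the paper's own discrete-time model is not reproduced in this excerpt, so treat the step function as an assumption.

```python
import math

def unicycle_step(x, y, th, v, w, T):
    # One step of the exact discretization of the unicycle kinematics,
    # assuming (v, w) are held constant over the sampling interval T.
    if abs(w) < 1e-9:  # straight-line limit (StPE: V_L = 1, W_L = 0)
        return x + v * T * math.cos(th), y + v * T * math.sin(th), th
    return (x + (v / w) * (math.sin(th + w * T) - math.sin(th)),
            y - (v / w) * (math.cos(th + w * T) - math.cos(th)),
            th + w * T)

# SiPE leader inputs: V_L = 0.16 m/s, W_L = -0.647*sin(kT) rad/s, T = 0.1 s.
T, v = 0.1, 0.16
x, y, th = 0.0, 0.0, 0.0
path = [(x, y)]
for k in range(200):  # 20 s of motion
    w = -0.647 * math.sin(k * T)
    x, y, th = unicycle_step(x, y, th, v, w, T)
    path.append((x, y))
```

With these inputs the heading oscillates slowly, producing the sinusoidal path followed by the leader in the second experiment.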

6. Conclusions

The leader-follower formation problem for nonholonomic differentially driven mobile robots is addressed in this work by considering the discrete-time representation of the kinematic models of the robots. To deal with the associated kinematic constraints, a control point located at the front of the robot is considered. A solution to the problem is obtained by considering a feedback delayed by a single sampling time, based on the relative distances and angles between the robots, which are measured by a camera mounted on board the follower robot together with a graphical pattern mounted on the leader robot. It is formally proven that the relative Cartesian errors with respect to a desired relative distance between the robots are ultimately bounded.

The main advantage of the proposed discrete-time strategy lies in its ease of implementation on microprocessor- or computer-based platforms, and in the fact that it naturally accommodates the discrete nature of the vision system. The main disadvantage of the proposed solution is the closed-loop practical-stability property of the convergence errors, since they converge to the origin only for constant angular velocities. The proposed strategy was evaluated via numerical simulations and real-time experiments in a laboratory environment, using a low-cost measurement device. Relative distances between the leader and the follower robot have previously been used in the literature in a continuous-time context; it is shown in this work that a discrete-time approach can also be exploited in the solution to the problem.

7. Acknowledgements

Part of the work of M. Velasco-Villa was done while on a sabbatical supported by Conacyt-México (No. 260936) at the SEPI of the ESIME-Culhuacan IPN.

8. References

[1] K. Krishnamoorthy, D. Casbeer, P. Chandler, M. Pachter, and S. Darbha. UAV search & capture of a moving ground target under delayed information. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pages 3092–3097, Dec 2012. USA.

[2] A. M. Samad, N. Kamarulzaman, M. A. Hamdani, T. A. Mastor, and K. A. Hashim. The potential of unmanned aerial vehicle (UAV) for civilian and mapping application. In System Engineering and Technology (ICSET), 2013 IEEE 3rd International Conference on, pages 313–318, Aug 2013. Malaysia.

[3] T. Dierks and S. Jagannathan. Control of nonholonomic mobile robot formations: Backstepping kinematics into dynamics. In Control Applications, 2007. CCA 2007. IEEE International Conference on, pages 94–99, Oct 2007. Singapore.

[4] N. Basilico, N. Gatti, and F. Amigoni. Leader-follower strategies for robotic patrolling in environments with arbitrary topologies. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, pages 57–64, 2009. Budapest, Hungary.

[5] Yanyan Dai and Suk-Gyu Lee. The leader-follower formation control of nonholonomic mobile robots. International Journal of Control, Automation and Systems, 10(2):350–361, 2012.

[6] A. Gopalakrishnan, S. Greene, and A. Sekmen. Vision-based mobile robot learning and navigation. In Robot and Human Interactive Communication, 2005. ROMAN 2005. IEEE International Workshop on, pages 48–53, 2005. USA.

[7] H. J. Min, A. Drenner, and N. Papanikolopoulos. Vision-based leader-follower formations with limited information. In Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, pages 351–356, 2009. Kobe, Japan.

[8] R. Carelli, C. M. Soria, and B. Morales. Vision-based tracking control for mobile robots. In Advanced Robotics, 2005. ICAR '05. Proceedings, 12th International Conference on, pages 148–152, 2005. WA, USA.

[9] G. L. Mariottini, F. Morbidi, D. Prattichizzo, N. Vander Valk, N. Michael, G. Pappas, and K. Daniilidis. Vision-based localization for leader-follower formation control. Robotics, IEEE Transactions on, 25(6):1431–1438, Dec 2009.

[10] H. G. Tanner, G. J. Pappas, and V. Kumar. Leader-to-formation stability. Robotics and Automation, IEEE Transactions on, 20(3):443–455, June 2004.

[11] O. A. A. Orqueda, X. T. Zhang, and R. Fierro. An output feedback nonlinear decentralized controller for unmanned vehicle coordination. International Journal of Robust and Nonlinear Control, 17(12):1106–1128, 2007.

[12] Fabio Morbidi, Gian Luca Mariottini, and Domenico Prattichizzo. Observer design via immersion and invariance for vision-based leader-follower formation control. Automatica, 46(1):148–154, 2010.

[13] Jaydev P. Desai, James P. Ostrowski, and Vijay Kumar. Modeling and control of formations of nonholonomic mobile robots. Robotics and Automation, IEEE Transactions on, 17(6):905–908, 2001.

[14] M. Ou, S. Li, and C. Wang. Finite-time tracking control for nonholonomic mobile robots based on visual servoing. Asian Journal of Control, 16(3):679–691, 2014.

[15] Jian Chen, Dong Sun, Jie Yang, and Haoyao Chen. Leader-follower formation control of multiple nonholonomic mobile robots incorporating a receding-horizon scheme. The International Journal of Robotics Research, 29(6):727–747, 2010.

[16] X. Li, J. Xiao, and Z. Cai. Backstepping based multiple mobile robots formation control. In Intelligent Robots and Systems, 2005. (IROS 2005). 2005 IEEE/RSJ International Conference on, pages 887–892, Aug 2005. AB, Canada.

[17] M. Ou, S. Li, and C. Wang. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing. International Journal of Control, 86(12):2175–2188, 2013.

[18] Zhaoxia Peng, Guoguang Wen, Ahmed Rahmani, and Yongguang Yu. Leader-follower formation control of nonholonomic mobile robots based on a bioinspired neurodynamic based approach. Robotics and Autonomous Systems, 61(9):988–996, 2013.

[19] J. Ghommam, H. Mehrjerdi, and M. Saad. Leader-follower formation control of nonholonomic robots with fuzzy logic based approach for obstacle avoidance. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 2340–2345, Sept 2011. CA, USA.

[20] L. Consolini, F. Morbidi, D. Prattichizzo, and M. Tosques. Leader-follower formation control of nonholonomic mobile robots with input constraints. Automatica, 44(5):1343–1349, 2008.

[21] Xiaohan Chen and Yingmin Jia. Adaptive leader-follower formation control of non-holonomic mobile robots using active vision. Control Theory Applications, IET, 9(8):1302–1311, 2015.

[22] Hyeun Jeong Min and Nikolaos Papanikolopoulos. Robot formations using a single camera and entropy-based segmentation. Journal of Intelligent & Robotic Systems, 68(1):21–41, 2012.

[23] M. Velasco-Villa, E. Aranda-Bricaire, and R. Orosco-Guerrero. Discrete-time modeling and path-tracking for wheeled mobile robots. Computación y Sistemas, 13(2):142–160, 2009.

[24] R. W. Brockett. Asymptotic Stability and Feedback Stabilization, in Differential Geometric Control Theory. Birkhäuser, Boston, MA, 1983.



[25] G. Campion, G. Bastin, and B. D'Andréa-Novel. Structural properties and classification of kinematics and dynamics models of wheeled mobile robots. IEEE Transactions on Robotics and Automation, 12(1):47–61, 1996.

[26] A. Astolfi. Discontinuous control of nonholonomic systems. Systems & Control Letters, 27:37–45, 1996.

[27] J. B. Pomet. Explicit design of time-varying stabilizing control laws for a class of controllable systems without drift. Systems & Control Letters, 18:147–158, 1992.

[28] W. E. Dixon. Global exponential setpoint control of wheeled mobile robots: a Lyapunov approach. Automatica, 36:1741–1746, 2000.

[29] Zhengyou Zhang. Microsoft Kinect sensor and its effect. MultiMedia, IEEE, 19(2):4–10, Feb 2012.

[30] R. A. El-laithy, Jidong Huang, and M. Yeh. Study on the use of Microsoft Kinect for robotics applications. In Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, pages 1280–1288, April 2012. SC, USA.

[31] Kourosh Khoshelham and Sander Oude Elberink. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors, 12(2):1437–1454, 2012.

[32] Valentino Frati and Domenico Prattichizzo. Using Kinect for hand tracking and rendering in wearable haptics. In World Haptics Conference (WHC), 2011 IEEE, pages 317–321. IEEE, 2011.

[33] The OpenKinect project. https://openkinect.org/wiki/Main_Page, 2012.