PAMM · Proc. Appl. Math. Mech. 4, 173–174 (2004) / DOI 10.1002/pamm.200410068

Integrated motion measurement of rigid multibody systems

Jörg F. Wagner∗
Institute for Statics and Dynamics of Aerospace Structures, University of Stuttgart, Pfaffenwaldring 27, D-70569 Stuttgart
∗ Corresponding author: e-mail: [email protected], Phone: +49 711 685 7046, Fax: +49 711 685 3706

In recent years, integrated navigation systems based on gyros, accelerometers, and GPS receivers have become powerful devices for the guidance of aircraft and ships. Comparable equipment, relying in particular on wheel sensors, exists for cars. The kernel of such systems is a Kalman filter estimating the relevant vehicle motion. The filter design in turn requires a kinematical model to settle on the motion components considered and to describe the mechanical meaning of the measurements employed. Until now, usual models have considered only one to six degrees of freedom of a single rigid body. The assumption of a solitary rigid body is not a consequence of the basic concept of integrated navigation; it merely reflects classical navigation requirements. In principle, determining the motion of multibody systems, which represent certain vehicle types of varying shape, is possible if appropriate kinematical models and sensor arrangements are available. Based on the theory of integrated navigation systems, this paper outlines the fundamentals of designing integrated motion measurement systems for rigid multibody structures. The example of a double pendulum with a movable inertial support, equipped with microelectromechanical inertial sensors and small radar units, illustrates the approach.

1 Integrated navigation and motion measurement systems

Modern sophisticated navigation equipment is based on the combination of sensors with high signal availability (like gyros and accelerometers, i.e. inertial sensors) and sensors with good long-term accuracy (like GPS receivers or radar units), which together form integrated systems. This includes the fusion of quite different sensor signals representing distances, angular rates, accelerations, etc. [1]. The fusion in turn is mostly realised by a Kalman filter and requires a kinematical model to settle on the vehicle motion components considered and to describe the mechanical meaning of the measurements employed. Figure 1 shows the corresponding integration scheme, in particular the signal flow of two parallel branches. The upper branch contains blocks for the vehicle ("moving structure") and for one part of the sensors ("aiding equipment" with good long-term accuracy). Here, the left segment symbolises the transformation of the input u (e.g. acceleration components) into the vehicle motion state x (e.g. position, velocity), and the right one stands for generating measurements y (e.g. ranges) on the basis of the vehicle motion. In parallel, the other part of the sensors (the inertial measurement unit "IMU", with high signal availability) measures u, which forms the input of a simulation of the vehicle motion. The simulation makes use of the kinematical model mentioned above and leads to estimates x̂ and ŷ of the motion state and of the aiding measurements. Finally, the difference y − ŷ is the input of a "control" device whose task is to keep the deviation between x̂ and x small [2].
[Figure 1 (block diagram): within the filter boundary, the IMU output u drives the motion model, yielding the state estimate x̂; the aiding model produces the estimate ŷ, which is compared with the aiding measurements y of the moving structure, and the difference y − ŷ is fed to the "control" device.]
Fig. 1 Integration scheme for the sensor fusion.
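To make the complementary character of the two sensor classes concrete, the following sketch (an illustration added here, not part of the original paper) fuses, for a hypothetical one-dimensional motion, a biased acceleration signal with a noisy position measurement; all signal models, noise levels, and gains are assumed values.

```python
import numpy as np

# Hypothetical 1-DOF illustration: a point moving along one axis.
# Inertial branch: acceleration u with a constant bias (high availability,
# poor long-term accuracy). Aiding branch: noisy position measurement y
# (good long-term accuracy). All numerical values are arbitrary assumptions.
rng = np.random.default_rng(0)
dt, n = 0.01, 2000
t = np.arange(n) * dt
acc_true = 0.5 * np.sin(t)                    # true acceleration
pos_true = 0.5 * t - 0.5 * np.sin(t)          # its double integral (zero initial conditions)
u = acc_true + 0.05                           # accelerometer signal with bias
y = pos_true + 0.02 * rng.standard_normal(n)  # aiding measurement with white noise

# Pure inertial solution (double integration of u) drifts quadratically;
# the integrated solution corrects it with the difference y - pos_hat.
pos_inert = vel_inert = 0.0
pos_hat = vel_hat = 0.0
k_pos, k_vel = 2.0, 1.0                       # assumed fixed gains of the "control" device
for i in range(n):
    # inertial-only branch
    pos_inert += vel_inert * dt
    vel_inert += u[i] * dt
    # integrated branch: prediction with u, correction with y
    pos_hat += vel_hat * dt
    vel_hat += u[i] * dt
    innov = y[i] - pos_hat
    pos_hat += k_pos * innov * dt
    vel_hat += k_vel * innov * dt

print("inertial-only position error:", abs(pos_inert - pos_true[-1]))
print("integrated position error:   ", abs(pos_hat - pos_true[-1]))
```

With these assumed values the inertial-only error grows with the square of time, while the aided estimate remains bounded by the bias divided by the velocity gain.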

The kinematical model of the filter comprises two parts, a set of nonlinear ordinary differential equations ("motion model") and a set of algebraic equations ("aiding model"), with the general form

\dot{\hat{x}} = f(\hat{x}, u) ,    (1)

\hat{y} = h(\hat{x}, u) .    (2)
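In code, the scheme of Figure 1 together with the model (1), (2) can be sketched as a simple simulation loop. The following Python fragment is an illustration added here, not taken from the paper: the function names are invented, explicit Euler integration is used, and a constant gain matrix K stands in for the time-varying Kalman gain of an actual filter design.

```python
import numpy as np

def propagate(x_hat, u, f, dt):
    """Motion model (1): one explicit Euler step of x_hat_dot = f(x_hat, u)."""
    return x_hat + dt * f(x_hat, u)

def correct(x_hat, u, y, h, K):
    """Aiding model (2): predict y_hat = h(x_hat, u) and feed back K (y - y_hat)."""
    return x_hat + K @ (y - h(x_hat, u))

def run_filter(x0, u_seq, y_seq, f, h, K, dt):
    """Signal flow of Fig. 1: the IMU input u drives a replica of the vehicle motion,
    and the aiding measurements y keep the resulting estimate close to the true state."""
    x_hat = np.asarray(x0, dtype=float)
    history = []
    for u, y in zip(u_seq, y_seq):
        x_hat = propagate(x_hat, u, f, dt)   # upper path: motion model driven by u
        x_hat = correct(x_hat, u, y, h, K)   # lower path: "control" device acting on y - y_hat
        history.append(x_hat.copy())
    return np.array(history)
```

In a full design, the constant K would be replaced by the Kalman gain obtained from linearising f and h along the estimated trajectory [2].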

Usual model realisations refer to a single rigid body representing the vehicle and consider one to six degrees of freedom [3]. However, the assumption of such a solitary body merely reflects classical navigation requirements. Determining the motion of multibody systems, describing certain vehicle types of varying shape like mobile robots, is possible just as well if appropriate kinematical models are available and if, in addition, inertial and aiding sensors are distributed over the moving structure [4]. Moreover, this offers an alternative to the application of pure inertial motion measurement systems to robots, which in the past required a sensor quality of unacceptable cost [5]: microelectromechanical inertial sensors developed during the last decades [6] can be a substitute if they are combined, as described, with additional aiding equipment. To this end, Section 2 outlines the derivation of the necessary kinematical models for holonomic rigid multibody systems. (More details about this procedure and the aspect of sensor positioning are given in [7].)

2 Required kinematical modelling of rigid multibody structures

As usual, a minimal set of generalised coordinates q shall describe the multibody system geometry, which varies with time t. Moreover, the assumption is made that the structure is equipped with accelerometers (index ι = 1, …, µ) and gyros (index κ = 1 + µ, …, ν + µ), each of them strapped down to one of the rigid components. With respect to an inertial coordinate system, the following general equations describe the position r_ι, velocity ṙ_ι, and acceleration r̈_ι of an accelerometer attachment point as well as the attitude γ_κ (e.g. three Euler angles), angular rate ω_κ, and angular acceleration ω̇_κ at a gyro attachment point:

r_\iota = r_\iota(q, t), \quad \dot{r}_\iota = J_\iota(q, t)\,\dot{q} + \bar{r}_\iota(q, t), \quad \ddot{r}_\iota = J_\iota(q, t)\,\ddot{q} + \tilde{r}_\iota(q, \dot{q}, t),    (3)

\gamma_\kappa = \gamma_\kappa(q, t), \quad \omega_\kappa = J_\kappa(q, t)\,\dot{q} + \bar{\gamma}_\kappa(q, t), \quad \dot{\omega}_\kappa = J_\kappa(q, t)\,\ddot{q} + \tilde{\gamma}_\kappa(q, \dot{q}, t).    (4)

The matrices J_\iota and J_\kappa are Jacobians; \bar{r}_\iota, \bar{\gamma}_\kappa, \tilde{r}_\iota, and \tilde{\gamma}_\kappa represent completing functions of this first modelling step. Customary inertial sensors detect the projection of the local acceleration vector or the local angular rate vector, respectively, onto the relevant measurement axis, denoted by i_ι or i_κ, which is a function of γ at the attachment point. The second modelling step unites all acceleration projections and all angular rate projections into two vectors a and Ω, which contain the elements of u:

a = \begin{bmatrix} i_1^T J_1 \\ \vdots \\ i_\mu^T J_\mu \end{bmatrix} \ddot{q} + \begin{bmatrix} i_1^T \tilde{r}_1 \\ \vdots \\ i_\mu^T \tilde{r}_\mu \end{bmatrix}, \qquad \Omega = \begin{bmatrix} i_{1+\mu}^T J_{1+\mu} \\ \vdots \\ i_{\nu+\mu}^T J_{\nu+\mu} \end{bmatrix} \dot{q} + \begin{bmatrix} i_{1+\mu}^T \bar{\gamma}_{1+\mu} \\ \vdots \\ i_{\nu+\mu}^T \bar{\gamma}_{\nu+\mu} \end{bmatrix}.    (5)

The third step is the formation of the state vector x, which consists of q and of a subset of q̇, namely of those generalised coordinates that are detected by accelerometers (see e.g. q1 in Figure 2). Finally, an appropriate rearrangement of equations (5) with respect to q̇ and q̈ leads to ẋ = f, the first model part. The second model part y = h typically follows directly from the first equations of the sets (3) and (4). Figure 2 gives an example of a double pendulum with a movable inertial support. It is equipped with two accelerometers (u1, u2), two gyros (u3, u4), and three short-range radar units measuring distances to a nearby wall (y1, y2, y3). Figure 2 also contains the corresponding special cases of x, f, and h. Results for the performance of this integrated measurement system, based on simulated and on experimental data, are also given in [7].
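The first two modelling steps can also be carried out symbolically. The following sketch (added for illustration, not from the paper) uses SymPy for a hypothetical planar attachment point and measurement axis; it is not the sensor arrangement of Figure 2.

```python
import sympy as sp

# Step 1 (eq. (3)): kinematics of one accelerometer attachment point.
t = sp.symbols('t')
l = sp.symbols('l', positive=True)
q = sp.Matrix([sp.Function('q1')(t), sp.Function('q2')(t)])   # generalised coordinates
qdd = q.diff(t, 2)

# Hypothetical attachment point: a translating base plus one rotating link of length l.
r = sp.Matrix([q[0] + l * sp.sin(q[1]),
               -l * sp.cos(q[1])])
J = r.jacobian(q)                                  # Jacobian J_iota(q, t)
r_ddot = r.diff(t, 2)                              # full acceleration of the attachment point
r_tilde = (r_ddot - J * qdd).applyfunc(sp.simplify)  # completing function of the acceleration equation

# Step 2 (one element of eq. (5)): projection onto the attitude-dependent measurement axis.
i_axis = sp.Matrix([sp.cos(q[1]), sp.sin(q[1])])   # assumed body-fixed transverse axis
a_element = sp.simplify((i_axis.T * (J * qdd + r_tilde))[0, 0])
print(a_element)   # accelerometer reading expressed in q, q_dot, q_ddot
```

For this assumed geometry the projection reduces to q̈1 cos q2 + l q̈2, i.e. an expression that can be rearranged for q̈ in the third modelling step.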

[Figure 2: sketch of the double pendulum with movable support O, inertial axes e1, e2, generalised coordinates q1, q2, q3, link lengths l1, l2, sensor locations u1–u4, wall distance d, and radar ranges y1, y2, y3.]

x = [x_1 \; x_2 \; x_3 \; x_4]^T = [q_1 \; \dot{q}_1 \; q_2 \; q_3]^T ,

f = \begin{bmatrix} x_2 \\ (u_1 - l_1 \dot{u}_3)\cos x_3 - (u_2 - l_1 u_3^2)\sin x_3 \\ u_3 \\ u_4 \end{bmatrix}, \qquad h = \begin{bmatrix} d - x_1 \\ d - x_1 - l_1 \sin x_3 \\ d - x_1 - l_1 \sin x_3 - l_2 \sin x_4 \end{bmatrix}.

Fig. 2 Geometry, generalised coordinates, measurement axes, and the kinematical model of the double pendulum.
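For completeness, a minimal sketch (added here, not code from the paper) writes the model of Figure 2 as the two callables f and h introduced in Section 1; the numerical parameter values and the handling of the differentiated gyro signal u̇3 as an additional input element are assumptions.

```python
import numpy as np

# Assumed geometry of the example (arbitrary values for illustration).
L1, L2, D = 0.5, 0.4, 2.0   # link lengths l1, l2 and wall distance d

def f(x, u):
    """Motion model of Fig. 2: x = [q1, q1_dot, q2, q3],
    u = [u1, u2, u3, u4, u3_dot]; the differentiated gyro signal u3_dot
    is appended as a fifth input element (an assumption of this sketch)."""
    _, x2, x3, x4 = x
    u1, u2, u3, u4, u3_dot = u
    return np.array([
        x2,                                                                 # q1_dot
        (u1 - L1 * u3_dot) * np.cos(x3) - (u2 - L1 * u3**2) * np.sin(x3),   # q1_ddot from the accelerometers
        u3,                                                                 # q2_dot (gyro u3)
        u4,                                                                 # q3_dot (gyro u4)
    ])

def h(x, u):
    """Aiding model of Fig. 2: three radar ranges to the wall at distance d (u unused)."""
    x1, _, x3, x4 = x
    return np.array([
        D - x1,
        D - x1 - L1 * np.sin(x3),
        D - x1 - L1 * np.sin(x3) - L2 * np.sin(x4),
    ])
```

Together with a suitable 4×3 gain matrix K, these two callables can be passed directly to the run_filter sketch given after equations (1) and (2).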

References
[1] D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology (Peter Peregrinus, London, 1997).
[2] A. Gelb (ed.), Applied Optimal Estimation, 11th printing (The M.I.T. Press, Cambridge, MA, 1989).
[3] J. F. Wagner and G. Kasties, in: Symposium Gyro Technology 2003, edited by H. Sorg (University of Stuttgart/DGON, Stuttgart, 2003), pp. 14.1–14.20.
[4] J. F. Wagner, Zur Verallgemeinerung integrierter Navigationssysteme auf räumlich verteilte Sensoren und flexible Fahrzeugstrukturen (VDI-Verlag, Düsseldorf, 2003).
[5] H. Everett, Sensors for Mobile Robots (Peters, Wellesley, MA, 1995).
[6] R. L. Greenspan, Inertial Navigation Technology from 1970–1995, Navigation 42, 165–185 (1995).
[7] J. F. Wagner, Adapting the Principle of Integrated Navigation Systems to Measuring the Motion of Rigid Multibody Systems, Multibody System Dynamics 11, 87–110 (2004).