Shared Autonomy System for Tracked Vehicles on Rough Terrain Based on Continuous Three-Dimensional Terrain Scanning
Yoshito Okada, Keiji Nagatani, and Kazuya Yoshida∗ Department of Aerospace Engineering, School of Engineering, Tohoku University, 6-6-01, Aramaki-Aoba, Aoba-ku, Sendai, Miyagi 980-8579, Japan e-mail: [email protected], [email protected], [email protected]

Satoshi Tadokoro Department of Applied Information Sciences, School of Information Sciences, Tohoku University, 6-6-01, Aramaki-Aoba, Aoba-ku, Sendai, Miyagi 980-8579, Japan e-mail: [email protected]

Tomoaki Yoshida and Eiji Koyanagi Future Robotics Technology Center, Chiba Institute of Technology, 2-17-1, Tsudanuma, Narashino, Chiba 275-0016, Japan e-mail: [email protected], [email protected] Received 25 January 2011; accepted 23 August 2011

Tracked vehicles are frequently used as search-and-rescue robots for exploring disaster areas. To enhance their ability to traverse rough terrain, some of these robots are equipped with swingable subtracks. However, manual control of such subtracks also increases the operator's workload, particularly in teleoperation with limited camera views. To eliminate this trade-off, we have developed a shared autonomy system using an autonomous controller for subtracks that is based on continuous three-dimensional terrain scanning. Using this system, the operator has only to specify the direction of travel to the robot, following which the robot traverses rough terrain using autonomously generated subtrack motions. In our system, real-time terrain slices near the robot are obtained using two or three LIDAR (laser imaging detection and ranging) sensors, and these terrain slices are integrated to generate three-dimensional terrain information. In this paper, we introduce the autonomous controller for subtracks and validate the reliability of the shared autonomy system on actual rough terrain through experimental results. © 2011 Wiley Periodicals, Inc.

1. INTRODUCTION

1.1. Shared Autonomy

Mobility and ease of teleoperation are both important for mobile robots on rough terrain. A general approach to enhancing the mobility of a robot is to increase the degrees of freedom (DOF) of its legs, wheels, steering, or tracks. However, this approach naturally reduces the ease of teleoperation because the human driver has to handle the increased DOF. A typical solution for overcoming the trade-off between mobility and ease of teleoperation is to generate the locomotion of the robot through collaboration between the human driver and an autonomous controller. This approach is known as shared autonomy; for instance, the NASA Mars Exploration Rovers Spirit and Opportunity (Maimone, Biesiadecki, Tunstel, Cheng, & Leger, 2006) could autonomously travel to a location specified by the driver. Also, BigDog, developed by Boston Dynamics (Raibert, Blankespoor, Nelson, & Playter, 2008), autonomously generates its leg motions to maintain stability and realize the desired direction of travel specified by the driver.

∗Website: http://www.astro.mech.tohoku.ac.jp.

Journal of Field Robotics 28(6), 875–893 (2011)

1.2. Search-and-Rescue Robots

A number of research institutes are currently developing search-and-rescue robots for exploring disaster areas and obtaining information on victims during the initial stages of investigation (Tadokoro, Matsuno, & Jacoff, 2005). These robots are expected to support rescue operations and minimize the risk of injury to rescuers and victims from secondary disasters. It is extremely important for these robots to have high mobility over the rough terrains of disaster areas, which are so often strewn with rubble; therefore, tracked vehicles are frequently used for such applications (Arai, Tanaka, Hirose, Kuwahara, & Tsukui, 2008; Borenstein & Granosik, 2007; Guarnieri, Debenest, Inoh, Fukushima, & Hirose, 2005;

© 2011 Wiley Periodicals, Inc. DOI: 10.1002/rob.20416

Miyanaka et al., 2007; Micire, 2008). To enhance the traversal ability and stability of tracked vehicles for search and rescue, some are equipped with swingable subtracks, which help them negotiate steps and bumps in hazardous environments. Our tracked vehicle test bed Kenaf (Yoshida et al., 2007) and its successor Quince (Rohmer, Yoshida, Nagatani, & Tadokoro, 2010) are shown in Figures 1 and 2, respectively. Both robots possess four subtracks, one at each of the four corners of the body, in addition to the two main tracks covering the body. This has proven to be one of the best configurations for locomotion over rough terrain, and Kenaf won the best-mobility award in the RoboCupRescue Robot League (Jacoff, Messina, Weiss, Tadokoro, & Nakagawa, 2003) in 2007 and 2009. In recent years, a number of newly developed robots have adopted this configuration (Figure 3). However, we observed that the use of subtracks also increases the workload of the operator controlling the robot. In particular, manual operation of the robot with subtracks becomes more difficult when the operator teleoperates it with limited camera views sent from the robot itself.

Figure 1. Tracked vehicle test bed Kenaf with three LIDAR sensors.

Figure 2. Quince, successor of Kenaf, with two LIDAR sensors.

Figure 3. Similarly shaped tracked vehicles having subtracks in a group photograph of the RoboCupRescue Robot League 2010.

1.3. Research Objective and Approach

Our research objective is to overcome the dilemma between traversal ability and ease of teleoperation that derives from the use of subtracks. In particular, we aim to achieve smooth traversal and turning of a tracked vehicle, even by an unskilled operator, using autonomous control of the subtracks. Our shared autonomy system comprises a manual controller for the main tracks and an autonomous controller for the subtracks. The performance of an unskilled operator should be comparable to that of a skilled operator using fully manual control. Our system was successfully incorporated into Kenaf and Quince, and it was confirmed that the autonomous controller reduces the operator's workload and maintains a stable pose of the robot while it traverses and turns on actual rough terrain. Some parts of this study were reported in 2009 (Okada, Nagatani, & Yoshida, 2009) and 2010 (Okada, Nagatani, Yoshida, Yoshida, & Koyanagi, 2010a, 2010b).

The controller is based on real-time terrain scanning, which is achieved by two or three LIDAR (laser imaging detection and ranging) sensors (Kawata, Ohya, Yuta, Santosh, & Mori, 2005). Two LIDAR sensors are attached to the sides of the robot, and an omissible third one is located at the front of the robot; this optional front LIDAR sensor obtains a slice of the shape of the terrain being traversed by the robot at any instant. The use of the front LIDAR sensor enriches the terrain shape information and enables the controller to produce better subtrack motion for traversing rough terrain. Figure 4 shows the configuration of LIDAR sensors; the autonomous controller for the subtracks can be used with up to three of them.

Moreover, the controller integrates the current terrain slices obtained from the LIDAR sensors and, if available, recent terrain slices obtained from the front sensor, on the basis of the estimated positions and postures tagged to each slice, to estimate the three-dimensional shape of the terrain near the robot. Thus, the controller with three LIDAR sensors

Table I. Basic specifications of Kenaf and Quince.

    Specification               Kenaf              Quince
    Dimensions (mm)             W 400 × L 500      W 370 × L 710
    Weight (kg)                 20                 26
    Length of subtracks (mm)    235                250
    DOF                         6 (2 main tracks and 4 subtracks)

Figure 4. Comparison between two-LIDAR and three-LIDAR systems.

can produce better subtrack motions, negotiating narrow steps/bumps between the subtracks that are out of the scanning range of the side sensors.

The rest of this paper is organized as follows. In Section 2, we introduce related work on tracked vehicles automatically traversing rough terrain. In Section 3, we briefly describe our tracked vehicle test beds Kenaf and Quince. In Section 4, we present our strategy for the autonomous control of subtracks; this strategy is based on the motions of subtracks teleoperated by expert operators. In Sections 5 and 6, we describe the terrain scanning system and the geometry of a generic subtrack, and in Section 7, we explain an algorithm that realizes the strategy described in Section 4. We then applied the proposed shared autonomy system, including the autonomous controller for the subtracks, to Kenaf and Quince and performed experiments in simulated disaster environments to validate it. In Section 8, we report our experimental results and discuss our findings. Finally, we present the conclusions of our study in Section 9.

2. RELATED WORK

There have been several studies on mobile robots that automatically traverse unknown rough terrain using a dedicated controller or mechanical behavior. RHex (Saranli, Buehler, & Koditschek, 2001) is a robot with six compliant legs that rotate full circle. The design of RHex was biologically inspired by hexapods. Although the legs are simply actuated by open-loop control without any intelligence, it was experimentally confirmed that RHex achieves hexapod-like locomotion on various terrains such as grass, bumps, and step field pallets (Jacoff, Downs, Virts, & Messina, 2008). This simple mechanism, which does not employ external sensors, is tough and practical. However, RHex seems more likely to shake its body while traversing rough terrain than tracked vehicles with active subtracks. Therefore, tracked vehicles with subtracks are more suitable for carrying a precision manipulator or for continuous acquisition of environmental information by external sensors.

Figure 5. ROBHAZ-DT3.

ROBHAZ-DT3 (Lee, Kang, Kim, & Park, 2004) contains a passive joint between the anterior and posterior tracks; as shown in Figure 5, the joint and tracks are specially designed for the case in which the robot has to ascend or descend stairs. The passive motion of the anterior track triggers the rotation of this joint to enable good mobility over stairs.

Figure 6. HELIOS carrier.

The HELIOS carrier (Guarnieri et al., 2009) is equipped with an active tail-like mechanism; this tail is intended to maintain a stable attitude of the robot body on stairs or steps. It can be operated either manually or by means of an autonomous controller, and it stabilizes the attitude of the robot body by pressing its tail against the ground, as shown in Figure 6. The autonomous controller produces motion of the tail on the basis of the attitude of the robot and its distance from the ground. It assists the robot in moving over stairs or steps.

Figure 7. Autonomous subtrack controller by Ohno.

Ohno, Morimura, Tadokoro, Koyanagi, and Yoshida (2007) also proposed an autonomous controller for subtracks (Figure 7); this controller employs current sensors that measure the torque of each subtrack, and position-sensitive-detector range sensors located at the front and back of the robot body to judge whether the robot is in contact with the ground. The velocity of each subtrack is determined on the basis of this judgment and the posture of the robot. This controller is quite simple and useful for traversing stairs and steps; however, it cannot generate advanced motions of the subtracks on more complex rough terrain because it is not based on detailed terrain shape information.

Kadous, Sammut, and Sheh (2006) reported on autonomous traversal by a tracked vehicle having no active subtracks using behavioral cloning. Their approach was to use a learning technique known as situation-action behavioral cloning. They simply represented the situation by the slope of the ground in front of the robot and the posture of the robot, and previously gave pairs of situations and corresponding appropriate actions of the main tracks to their autonomous robot. In an autonomous traversal, the robot continuously obtained the current situation using a range imager and an accelerometer and controlled the main tracks by cloning the action associated with the most similar situation. Their application could be called a shared autonomy system because it allowed a human observer to intervene to specify the action of the main tracks. With this system, they realized semiautonomous navigation on step field pallets (Jacoff et al., 2008) having short gaps.

Chonnaparamutt and Birk (2008) reported a simulation study on fuzzy control for tracked vehicles with active rear flippers; they constructed two different autonomous controllers for the main tracks and the nontracked flippers to realize fully autonomous exploration of the tracked vehicle. Either controller is based on a fuzzy rule derived from manual navigation performed by expert operators; one controls the torques of the main tracks according to the differential of the center of gravity of the vehicle, and the other controls the position of the flippers according to the pitch angle of the robot and its differential. These fuzzy controllers worked effectively for stairs and bumps in simulation experiments.

Thus, related studies focused on traversing stairs or steps have not employed detailed information on the terrain shape around the robot. These are reasonable approaches for studying traversal of uncomplicated stairs or steps; however, our interest lies in the traversal of unknown and more complex rough terrain.

3. CONTROL TARGET

The shared autonomy system introduced in this paper employs a generic algorithm for tracked vehicles with several subtracks that can widely change the attitude of the robot body by swinging. In particular, we consider a robot with four subtracks to be a good application of the system, because the use of two front and two rear subtracks enables better stabilization performance, and in this configuration it is easy to observe the ground shape along the subtracks using LIDAR sensors. It should be noted that our approach is not designed for robots with only front or only rear subtracks, such as the PackBot developed by iRobot (Yamauchi, 2004), because in these designs the subtracks cannot strongly control the attitude of the robot body.

In this study, the shared autonomy system was incorporated into the tracked vehicle test bed Kenaf (Figure 1) and its successor Quince (Figure 2). Kenaf and Quince are six-DOF tracked vehicle test beds for rescue operations. They have two main tracks covering the body and four subtracks, one at each corner of the body. Kenaf contains three LIDAR sensors, at the front and on both sides of the body, to obtain real-time terrain slices. All motors in Kenaf are encoder-equipped, and the circumferential velocities of the main tracks and the angular positions of the subtracks are obtainable. Kenaf also contains a three-DOF gyroscope and a gravity sensor. Moreover, we have incorporated a three-dimensional odometry method (Nagatani, Tokunaga, Okada, & Yoshida, 2008) in Kenaf, which uses the outputs of the main tracks' encoders, the gyroscope, and the gravity sensor to estimate the position and posture of its body. This method is a variation of gyrodometry (Borenstein & Feng, 1996; Maeyama, Ishikawa, & Yuta, 1996) that includes three-dimensional posture estimation using a gyroscope and a kinematic model of tracked vehicles with compensation for track slippage (Nagatani, Endo, & Yoshida, 2007).
Quince’s configuration is almost the same as that of Kenaf. However, Quince does not have the front LIDAR


sensor because it interferes with the manipulator on its top face. On rough terrain, it is quite difficult to estimate a position with high accuracy over long distances when a dead-reckoning technique such as odometry is used. However, the proposed controller requires reliable positions only along short trajectories over the entire length of the test bed, which is approximately 90 cm. This is why we have used position estimation based on three-dimensional odometry.

4. CONTROL STRATEGY

As mentioned in Section 1.3, we aim to achieve smooth traversal and turning by a tracked vehicle, even with an unskilled operator, using our shared autonomy system, which comprises a manual controller for the main tracks and an autonomous controller for the subtracks. The performance of the unskilled operator should be comparable to that of a skilled operator using a fully manual controller. Thus, we base the autonomous controller for the subtracks on control strategies derived from the subtrack motions of the robot when it is operated by skilled operators. Through experiments and the robot competitions we participated in, we observed the following four features of fully manual operation performed by skilled operators:

• To enable the robot to smoothly traverse the terrain, its posture must be maintained according to the slope of the ground.
• To enable good locomotion, the main tracks and subtracks should be in contact with the ground as much as possible.
• An operator spreads the subtracks while directing the robot along a straight path and folds them while turning the robot.
• When the pose of the robot becomes unstable, rolling over should be prevented using the motion of the subtracks.

Considering the above-mentioned four features, we applied the following strategy to the subtracks and the robot body:

1. The posture of the robot body must be maintained parallel to the least-squares plane of the ground surface, and the robot body must make contact with the ground.
2. The desired posture can be realized by changing the angular positions of the subtracks.
3. The subtrack controller must employ the spreading and folding modes. The operator can switch between these modes manually.
4. The desired pose (desired posture and subtrack positions) must be evaluated and redefined if it is unstable.




To realize the above strategy, we (1) constructed a system to scan the terrain shape in detail, (2) considered the geometry of a generic-shaped subtrack, and (3) designed an algorithm to realize the proposed control strategy. These three issues are discussed in the following sections.

5. TERRAIN SCANNING

In this study, two or three LIDAR sensors are used. The left and right LIDAR sensors obtain slices of the terrain shape along the subtracks, and the omissible front LIDAR sensor obtains a slice of the shape of the terrain in front of the robot, which is immediately traversed by it. The terrain slices obtained from the LIDAR sensors are stored and tagged with the estimated position and posture of the robot body at the instant of each terrain scan. These stored slices are integrated according to the procedure described later in this section to generate three-dimensional information on the terrain near the robot, which is used in the algorithm described in Section 7.

First, we describe the coordinate system used in this study. Let the robot's coordinate system be right handed, its origin be the center of the robot, its x axis be orthogonal to the front face, and its z axis be orthogonal to the top face. The position and posture of the robot can be represented by the relationship between the global and the robot coordinate systems. In addition, we adopt the quaternion representation (Horn, 1987) to describe the positions and postures of the robot. For example, let quaternion p denote the position vector (x_pos, y_pos, z_pos)^T in the global system and quaternion q denote the rotation by θ_rot about the axis of the unit vector (x_rot, y_rot, z_rot)^T. The coordinate conversion of a point p_local from the local system {p, q} to the global system can then be described by the following equations:

p_global = q × p_local × q⁻¹ + p,  (1)

p = [0, x_pos, y_pos, z_pos]^T,  (2)

q = [cos(θ_rot/2), x_rot sin(θ_rot/2), y_rot sin(θ_rot/2), z_rot sin(θ_rot/2)]^T.  (3)
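As a concrete illustration of Eqs. (1)–(3), the following sketch implements the quaternion-based coordinate conversion in plain Python. The helper names are ours, not from the paper:

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions [w, x, y, z].
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return [w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2]

def quat_conj(q):
    # Inverse of a unit quaternion is its conjugate.
    return [q[0], -q[1], -q[2], -q[3]]

def rotation_quat(axis, theta):
    # Eq. (3): unit rotation quaternion for angle theta about a unit axis.
    s = math.sin(theta / 2.0)
    return [math.cos(theta / 2.0), axis[0]*s, axis[1]*s, axis[2]*s]

def local_to_global(v_local, p, q):
    # Eq. (1): p_global = q * p_local * q^-1 + p, with the vector
    # embedded as a pure quaternion [0, x, y, z] per Eq. (2).
    v = [0.0, v_local[0], v_local[1], v_local[2]]
    r = quat_mul(quat_mul(q, v), quat_conj(q))
    return [r[1] + p[0], r[2] + p[1], r[3] + p[2]]

# Example: a point 1 m ahead of a robot yawed 90 deg at position (2, 0, 0).
q = rotation_quat([0.0, 0.0, 1.0], math.pi / 2.0)
print(local_to_global([1.0, 0.0, 0.0], [2.0, 0.0, 0.0], q))  # ≈ [2.0, 1.0, 0.0]
```

The scalar-first [w, x, y, z] ordering matches Eqs. (2) and (3).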

The scanned points U in the robot coordinate system at the moment of scanning are first obtained by the LIDAR sensor on the robot. We tag U with the estimated position p and posture q in the global system at the moment of the scan and define them as the two-dimensional terrain information S = {U, p, q}. Let subscripts l, r, and f denote terrain slices from the left, right, and front sensors, respectively, and let subscript n denote a terrain slice obtained during the nth control loop. We


can now describe the two-dimensional terrain information obtained during the nth control loop as S_{l,n}, S_{r,n}, and S_{f,n}. In this loop, we use S_n to denote the union of S_{l,n}, S_{r,n}, S_{f,n}, and the terrain information from the front LIDAR sensor in recent loops, as described by the following equations:

S_n = {S_{l,n}, S_{r,n}, S_{f,m}, S_{f,m+1}, ..., S_{f,n}},  (4)

m = min{ i ∈ Z | Σ_{j=i}^{n−1} |p_{f,j+1} − p_{f,j}| < L_threshold }.  (5)

It should be noted that we apply only the front terrain information obtained during the last L_threshold length of the trajectory, to take accumulated position-estimation errors into account. The scanned points to be used in the autonomous subtrack controller are then selected from S_n. If the desired pose determined by the algorithm described in Section 7 is realized Δt later, the robot position p′ after Δt can be described by the following equation:

p′ = p_cur + q_cur × [0, V_cur Δt, 0, 0]^T × q_cur⁻¹,  (6)

where V is the translational velocity of the robot and the subscript cur denotes a current value. We trim the scanned points in S_n based on p′ through the following steps. The coordinate conversion of a scanned point u from the tagged system {p, q} to the robot system {p′, q_cur} is described by the following equation:

u′ = q_cur⁻¹ × (q × u × q⁻¹ + p − p′) × q_cur.  (7)

We apply this conversion to each S = {U, p, q} ∈ S_n and generate the three-dimensional terrain information U_n, in which the scanned points are represented in the uniform coordinate system:

U_n = {U′_{l,n}, U′_{r,n}, U′_{f,m}, U′_{f,m+1}, ..., U′_{f,n}}.  (8)
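A sketch of the slice-window selection of Eq. (5) and of the bounding-box trimming of Eq. (9) from the end of this section, in plain Python; function and variable names are our own illustration:

```python
def front_slice_window(front_positions, l_threshold):
    # Eq. (5): smallest index m such that the path length travelled from
    # slice m to the newest slice n stays below l_threshold.
    n = len(front_positions) - 1
    m = n
    travelled = 0.0
    while m > 0:
        p0, p1 = front_positions[m - 1], front_positions[m]
        step = sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
        if travelled + step >= l_threshold:
            break
        travelled += step
        m -= 1
    return m  # keep front slices S_f,m ... S_f,n

def trim_points(points, l_max, width):
    # Eq. (9): keep only points inside the robot's footprint box,
    # where l_max includes the subtrack length and width is the robot width.
    return [u for u in points
            if -l_max / 2 <= u[0] <= l_max / 2 and -width / 2 <= u[1] <= width / 2]
```

For example, with front-slice positions spaced 0.1 m apart along x and L_threshold = 0.35 m, only the last few slices are retained.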

We then select the target points U_target for the following procedures from U_n according to the following equation:

U_target = {u ∈ U_n | −L_max/2 ≤ x ≤ L_max/2 and −W/2 ≤ y ≤ W/2},  (9)

where L_max is the entire length of the robot, including the length of the subtracks, and W is the width of the robot.

6. GEOMETRY OF SUBTRACK

The geometry of a generic-shaped subtrack is the key issue that should be discussed for determining the desired subtrack angle on the basis of the terrain shape obtained using the LIDAR sensors.

Figure 8. Generic-shaped subtrack.

As shown in Figure 8, a subtrack generally has four divisions; because it is tracked, it comprises a round toe and straight sections. Moreover, it can be in two states, namely, the folded and spread states. When the subtrack is in the spread state, side S supports the robot; on the other hand, when it is in the folded state, side F is in contact with the terrain. We call the case in which side S is in contact with the ground the "spreading mode" and the case in which side F is in contact with the ground the "folding mode." In addition, we define the angle of the subtrack to be 0 deg when its straight section on side S is in contact with flat ground, and we take the direction in which the subtrack lifts its toe from 0 deg to be positive.

6.1. Spreading Mode

In the case of the spreading mode, two geometric models can be considered: in one model, the contact point lies on the straight section on side S, and in the other, the contact point lies on the round section on side S.

Figure 9. Contact with straight section (spreading mode).

Figure 9 shows a subtrack that makes contact with the straight section. The angular position of contact of the



subtrack is described by the following equation:

θ_contact = θ_1 + θ_2
          = tan⁻¹( z / (x − x_support) ) + sin⁻¹( r / √((x − x_support)² + z²) ).  (10)

Figure 10. Contact with round section (spreading mode).

Figure 10 shows a subtrack in contact with the round section. The contact angular position is described by the following equation:

θ_contact = θ_3 − θ_4
          = cos⁻¹( (d² + L² − R²) / (2Ld) ) + tan⁻¹( z / (x − x_support) ) − sin⁻¹( (R − r) / L ).  (11)

6.2. Folding Mode

In the case of the folding mode, two geometric models can be considered: in one model, the contact point lies on the straight section on side F, and in the other, the contact point lies on the round section on side F.

Figure 11. Contact with straight section (folding mode).

Figure 11 shows a subtrack that makes contact with the straight section in the folding situation. The contact angular position can be derived from the equations of the spreading situation as follows:

θ_contact = (θ_1 + θ_2) − θ_offset
          = (θ_1 + θ_2) − 2(θ_2 + θ_4)
          = tan⁻¹( z / (x − x_support) ) − sin⁻¹( r / √((x − x_support)² + z²) ) − 2 sin⁻¹( (R − r) / L ).  (12)

Figure 12. Contact with round section (folding mode).

Figure 12 shows a subtrack that makes contact with the round section in the folding situation. The contact angular position here can also be described on the basis of the equations of the spreading situation, as follows:

θ_contact = (θ_3 − θ_4) − θ_offset
          = (θ_3 − θ_4) − 2θ_5
          = tan⁻¹( z / (x − x_support) ) − sin⁻¹( (R − r) / L ) − cos⁻¹( (d² + L² − R²) / (2Ld) ).  (13)

6.3. Contact Angle of Subtrack with Ground

Let the ground shape be represented by a set of points {u_1, u_2, ..., u_n}. Then the contact angle of the subtrack with the ground is determined using the following equation:

θ_ref = min(θ_contact,1, ..., θ_contact,n)  in spreading mode,
θ_ref = max(θ_contact,1, ..., θ_contact,n)  in folding mode.  (14)
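The following sketch (plain Python, our own naming) implements the straight-section spreading-mode contact angle of Eq. (10) and the per-mode selection of Eq. (14); the other contact models of Eqs. (11)–(13) would slot in the same way:

```python
import math

def contact_angle_straight_spreading(x, z, x_support, r):
    # Eq. (10): theta_1 + theta_2 for a ground point (x, z) in the
    # subtrack's pitch plane, measured from the support axle at x_support.
    # atan2 is used as a numerically robust form of tan^-1(z / dx).
    dx = x - x_support
    theta1 = math.atan2(z, dx)
    theta2 = math.asin(r / math.hypot(dx, z))
    return theta1 + theta2

def contact_angle_with_ground(angles, spreading_mode):
    # Eq. (14): the subtrack must rest on, not penetrate, the scanned points,
    # so take the minimum candidate angle in spreading mode and the
    # maximum in folding mode.
    return min(angles) if spreading_mode else max(angles)
```

As a sanity check, a lower ground point yields a smaller (more negative) contact angle for the same horizontal offset, so in spreading mode the controller follows the deepest point under the subtrack.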

7. CONTROL ALGORITHM

7.1. Design of the Algorithm

In this section, we present an algorithm that realizes the strategy described in Section 4, based on the terrain scanning and subtrack geometry discussed in Sections 5 and 6.

Figure 13. Algorithm for autonomous control of subtracks.

Figure 13 shows a flowchart of the algorithm for the autonomous controller for the subtracks. The control algorithm is divided into six steps, summarized as follows: (1) Slices of the shape of the terrain around the robot are first obtained from the LIDAR sensors attached to the robot body, and the three-dimensional terrain shape near the robot is estimated. (2) The desired posture of the body is then calculated on the basis of the estimated terrain shape. (3) The desired positions of the subtracks that realize the desired posture of the robot body are also determined. (4) Next, the stability of the desired posture and subtrack positions is evaluated. (5) If the desired pose (posture of the robot body and subtrack positions) is unstable, the desired pose is redefined and steps 3–5 are repeated. (6) When desired subtrack positions that realize a stable posture are generated, position control of the subtracks is finally performed. In the following subsections, we explain each step of the algorithm in detail.
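The six-step loop above can be sketched in code; a minimal Python rendering with the concrete computations of Sections 7.2–7.6 injected as callables (all names are ours, not the authors'):

```python
def autonomous_subtrack_step(scan, get_terrain, desired_posture,
                             desired_subtracks, nesm, stabilize, command,
                             nesm_threshold):
    # One cycle of the six-step algorithm of Section 7.1.
    terrain = get_terrain(scan)                       # step 1: scan + integrate
    posture = desired_posture(terrain)                # step 2
    subtracks = desired_subtracks(posture, terrain)   # step 3
    while nesm(posture, subtracks) < nesm_threshold:  # step 4
        posture = stabilize(posture)                  # step 5: redefine pose
        subtracks = desired_subtracks(posture, terrain)
    command(subtracks)                                # step 6: position control
    return posture, subtracks
```

With toy callables (e.g., a stability measure that improves as the posture angle is halved), the loop converges to a pose whose NESM exceeds the threshold, mirroring the repeat of steps 3–5.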

7.2. Ground Detection and Trimming of the Scanned Data

In this step, we obtain the scanned points U_target near the robot using the LIDAR sensors attached to the robot; ground detection is performed according to the approach described in Section 5. We first obtain two-dimensional terrain slices from the LIDAR sensors and integrate them according to the tagged position and posture estimates of the robot to generate the three-dimensional terrain shape information. At the end of this step, we filter out distant points from the integrated terrain shape:

U_target = {u_1, u_2, ..., u_m}.  (15)

7.3. Determination of Desired Posture

We then determine the desired posture on the basis of the least-squares plane for the target terrain U_target. The parameters a, b, and c of the least-squares plane z = ax + by + c are determined using the following equations:

a = (α_zx α_yy − α_xy α_yz) / (α_xx α_yy − α_xy α_xy),  (16)

b = (α_yz α_xx − α_xy α_zx) / (α_xx α_yy − α_xy α_xy),  (17)

c = ⟨z_u⟩ − ⟨x_u⟩ a − ⟨y_u⟩ b,  (18)

α_xy = ⟨x_u y_u⟩ − ⟨x_u⟩⟨y_u⟩,  (19)

α_yz = ⟨y_u z_u⟩ − ⟨y_u⟩⟨z_u⟩,  (20)

α_zx = ⟨z_u x_u⟩ − ⟨z_u⟩⟨x_u⟩,  (21)

α_xx = ⟨x_u x_u⟩ − ⟨x_u⟩⟨x_u⟩,  (22)

α_yy = ⟨y_u y_u⟩ − ⟨y_u⟩⟨y_u⟩,  (23)

where ⟨·⟩ denotes the mean over all scanned points u ∈ U_target.
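Eqs. (16)–(23) are the closed-form solution of the two-unknown normal equations; a direct transcription in plain Python (our own naming):

```python
def fit_plane(points):
    # Least-squares plane z = a x + b y + c through Eqs. (16)-(23).
    n = float(len(points))
    mean = lambda vals: sum(vals) / n
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    mx, my, mz = mean(xs), mean(ys), mean(zs)
    a_xy = mean([x * y for x, y in zip(xs, ys)]) - mx * my   # Eq. (19)
    a_yz = mean([y * z for y, z in zip(ys, zs)]) - my * mz   # Eq. (20)
    a_zx = mean([z * x for z, x in zip(zs, xs)]) - mz * mx   # Eq. (21)
    a_xx = mean([x * x for x in xs]) - mx * mx               # Eq. (22)
    a_yy = mean([y * y for y in ys]) - my * my               # Eq. (23)
    det = a_xx * a_yy - a_xy * a_xy
    a = (a_zx * a_yy - a_xy * a_yz) / det                    # Eq. (16)
    b = (a_yz * a_xx - a_xy * a_zx) / det                    # Eq. (17)
    c = mz - mx * a - my * b                                 # Eq. (18)
    return a, b, c
```

Feeding it points that lie exactly on z = 2x + 3y + 1 recovers a = 2, b = 3, c = 1.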

Then a transformation to the coordinate system in which the robot body is parallel to the least-squares plane of the ground surface and makes contact with the ground is described by the


following equations in quaternion algebra:

[0, x′_ui, y′_ui, z′_ui]^T = q × [0, x_ui, y_ui, z_ui]^T × q⁻¹ − [0, 0, 0, max(z_u)]^T,  (24)

q = [cos(θ_rot/2), (b/√(a² + b²)) sin(θ_rot/2), (−a/√(a² + b²)) sin(θ_rot/2), 0]^T,  (25)

θ_rot = cos⁻¹( 1 / √(a² + b² + 1) ).  (26)

Figure 14. NESM of Kenaf.
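A sketch of the leveling rotation of Eqs. (25) and (26) in plain Python. The sign convention of the rotation axis is hard to recover from the extracted equations; this sketch chooses the rotation that maps the plane's upward normal (−a, −b, 1)/√(a² + b² + 1) onto +z, which levels the scanned points (the paper's quaternion may be the conjugate of this one). Helper names are ours:

```python
import math

def leveling_quaternion(a, b):
    # Rotation of theta_rot = acos(1 / sqrt(a^2 + b^2 + 1)) (Eq. (26))
    # about the horizontal axis perpendicular to the slope direction.
    theta = math.acos(1.0 / math.sqrt(a * a + b * b + 1.0))
    norm = math.hypot(a, b)
    if norm == 0.0:
        return [1.0, 0.0, 0.0, 0.0]  # terrain already level
    ux, uy = -b / norm, a / norm     # unit axis in the ground plane
    s = math.sin(theta / 2.0)
    return [math.cos(theta / 2.0), ux * s, uy * s, 0.0]

def rotate(q, v):
    # Rotate vector v by unit quaternion q = [w, x, y, z]
    # (Eq. (1) without the translation term).
    w, qx, qy, qz = q
    tx = 2.0 * (qy * v[2] - qz * v[1])
    ty = 2.0 * (qz * v[0] - qx * v[2])
    tz = 2.0 * (qx * v[1] - qy * v[0])
    return [v[0] + w * tx + qy * tz - qz * ty,
            v[1] + w * ty + qz * tx - qx * tz,
            v[2] + w * tz + qx * ty - qy * tx]
```

For the plane z = x (a = 1, b = 0), the rotated plane points all end up at z = 0, so after subtracting the maximum z as in Eq. (24) the body frame rests on the ground.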

7.4. Determination of Desired Subtrack Positions

In this step, using the desired body posture and the integrated terrain slices, we determine the desired subtrack positions on the basis of the geometry of the subtrack described in Section 6. Our shared autonomy system allows the operator to manually switch between the spreading and folding control modes, and each control mode employs the corresponding geometric calculation described in Section 6. In other words, the autonomous controller uses the corresponding calculation to determine the desired subtrack positions according to the control mode specified by the operator. In both modes, to realize control strategy 2, we determine the desired subtrack positions so that the subtracks make contact with the ground under the desired robot posture. In particular, we calculate the angular position of each subtrack that makes contact with each point scanned by the left and right LIDAR sensors; the desired position of each subtrack is then the minimum or maximum of these angular positions in the spreading or folding mode, respectively.

7.5. Stability Evaluation of Desired Pose

7.5.1. Stability Criterion of Robot Pose

In the proposed controller, we have adopted the normalized energy stability margin (NESM) (Hirose, Tsukagoshi, & Yoneda, 2006) as the stability criterion for the desired pose. The stability of a desired pose is evaluated by the NESM, and the pose is redefined if the stability is insufficient. The NESM is a criterion that evaluates the stability of a robot on the basis of the vertical distance between the initial position of its center of gravity and its highest position during tumbling (Figure 14). Although it is mainly used for walking robots, its evaluation requires only the positions of the contact points with the ground and the center of gravity of the robot. In other words, there is no conceptual difference if this criterion is applied to the stability of tracked vehicles having subtracks.

In the case of a tracked vehicle with four subtracks, four contact points (front-right, front-left, rear-right, and rear-left) can be determined by the step described in Section 7.4. In addition, the four axes of tumbling that pass through the front, rear, right, and left pairs of contact points can be assumed. Hence, the stability of a tracked vehicle with four subtracks can be determined from the minimum value of the NESM about these four axes.

In a real environment, robots may not tumble about a static axis owing to a shift in their contact points. However, simulating such a situation requires a complex analysis that may need information on a region lying beyond the scanning range of the LIDAR sensors. Therefore, for the sake of simplicity, we accept a trade-off between the accuracy and the convenience of the evaluation and assume that the robot tumbles about a static axis in the proposed controller.

7.5.2. Calculation of NESM

7.5.2. Calculation of NESM

Let g1 and g2 be two contact points and c be the center of gravity of the robot in the robot coordinate system. Conversion to a coordinate system whose z axis is vertical is described by the following equations, where q is the posture quaternion of the robot:

g1' = q × g1 × q^-1, (27)
g2' = q × g2 × q^-1, (28)
c' = q × c × q^-1. (29)

For descriptive purposes, we assume that zg1' < zg2'. Let ug and uc be the vectors generated by g1', g2', and c', and let pfoot be the foot of the perpendicular from c' to ug. These are given by the following equations:

ug = g2' - g1', (30)
uc = c' - g1', (31)
pfoot = g1' + (|ug · uc| / |ug|) (ug / |ug|). (32)

Then the highest position phighest of the center of gravity when the robot tumbles about ug is described by the following equations:

phighest = pfoot + |pfoot - c'| (uhighest / |uhighest|), (33)
uhighest = [0, 0, 1]^t - (|[0, 0, 1]^t · ug| / |ug|) (ug / |ug|). (34)

The NESM S_NE about the contact points g1 and g2 is given as

S_NE = z_phighest - z_c'. (35)

7.6. Stabilization of Desired Pose

When the NESM of the robot is less than a predetermined threshold, we repeat the following steps until a stable desired pose is obtained. This step is intended to realize control strategy 4.

1a. If the minimum NESM is that about the front or rear axis, reduce the pitch angle of the desired posture toward zero.
1b. If the minimum NESM is that about the right or left axis, reduce the roll angle of the desired posture toward zero.
2. Redefine the desired subtrack positions by recalculating them to realize the redefined desired posture.
3. Evaluate the NESM for the redefined posture and subtrack positions.

7.7. Position Control of Subtracks

Finally, we perform position control of the subtracks to realize the desired subtrack positions produced through the above-mentioned steps, on the basis of the strategy described in Section 4. All subtracks of Kenaf and Quince are controlled by microprocessors on the built-in motor drivers. Each desired subtrack position is sent to the corresponding microprocessor as a reference for position control using a conventional proportional–integral–derivative (PID) controller.

8. EXPERIMENTS

To validate the reliability of our shared autonomy system, we incorporated it into our tracked vehicle test bed Kenaf and its successor Quince and performed experiments and field tests on actual rough terrain. In this section, we introduce and discuss the results of the following four experiments and one field report:

1. Kenaf: spreading mode control using two LIDAR sensors
2. Kenaf: spreading and folding mode control using two LIDAR sensors
3. Kenaf: spreading mode control using three LIDAR sensors
4. Kenaf and Quince: field report

In all trials except the field tests, we measured the posture of the robot body as it traversed the experimental terrains using its built-in gyroscopes. In all trials, the cycle of subtrack control was 100 ms, which depends on the cycle of terrain scanning performed by the LIDAR sensors.

8.1. Spreading Mode Control Using Two LIDAR Sensors

8.1.1. Overview

At an early stage of development, we performed a fundamental experiment using Kenaf on a simple step (Figure 15) comprising eight concrete blocks. We set the estimated time delay t until the desired pose is achieved to 0.3 s and the threshold of the NESM to half of its value on level ground. In every traversal, Kenaf was teleoperated and moved at a velocity of approximately 10 cm/s. We employed two comparative cases: static subtrack positions of 45 deg and subtrack motions appropriate for the step. The following appropriate subtrack motions were prescribed by a skilled operator:

1. From the beginning of the traversal until the main tracks make contact with the top surface of the step, all subtrack positions at 45 deg
2. Until the center of gravity of Kenaf is above the top surface of the step, all subtrack positions at −45 deg
3. While the center of gravity of Kenaf is above the top surface of the step, all subtrack positions at 0 deg
4. Until the front subtracks make contact with the floor surface, all subtrack positions at −45 deg

Figure 15. A step comprising eight concrete blocks.

Journal of Field Robotics DOI 10.1002/rob

Okada et al.: Shared Autonomy System for Tracked Vehicles on Rough Terrain

Figure 16. Change in pitch angles while traversing the step (pitch [deg] versus normalized time t/T; series: proposed, without NESM evaluation, and routine motions).

5. Until the traversal is complete, all subtrack positions at 45 deg

In addition, to confirm the validity of the stability evaluation and the desired pose stabilization presented in Section 7, one traversal was conducted using the autonomous subtrack controller without these steps.

8.1.2. Results and Discussion

Figure 16 shows the change in the pitch angle of Kenaf during the traversal, and Figure 17 shows snapshots of the traversal with the proposed controller. For comparison, the horizontal axis in Figure 16 indicates the ratio obtained by dividing the elapsed time by the total time required to traverse the step. The graph and snapshots show that the proposed controller keeps the pitch angle negative (nose-up posture) while ascending the step and positive (nose-down posture) while descending it, even without any stability evaluation. This behavior is similar to the traversal performed with the expert-prescribed routine motions, which indicates that the proposed method can generate subtrack motions that replicate those of an expert, as described in Section 7. A difference between the control methods can also be observed in Figure 16: the proposed controller without stability evaluation and the routine motions both produce pitch angles ranging from −20 to 20 deg, whereas the full proposed controller limits the pitch angle to between −10 and 10 deg. This difference is attributed to the redefinition of the desired pose based on the NESM evaluation. Because of the trade-off between the height of a step and the maximum pitch angle needed to climb it, this result alone cannot establish that the proposed controller is optimal. However, from the perspective of reducing the risk of the robot rolling over through control of the subtrack motions, we consider that the proposed controller generated the most suitable motions in this environment.

8.2. Spreading and Folding Mode Control Using Two LIDAR Sensors

To validate the combination of the spreading and folding mode controls, we performed another comparative experiment on a rough terrain standardized as a step field pallet (Jacoff et al., 2008). Step field pallets are repeatable terrains

designed to evaluate the mobility of a search-and-rescue robot; they were formulated by NIST/ASTM. We set up a medium-sized step field pallet in the configuration shown in Figure 18 as our experimental terrain. For the experiment, we adopted two comparative cases: static subtrack positions of 75 deg and subtrack motions appropriate for the pallet as prescribed by an expert operator. We obtained the changes in the robot posture for each trial using its built-in three-DOF gyroscope. We set the time delay t until the desired pose is achieved to 0.5 s, the threshold of stability to 20% of that on level ground, and the given constant of proportionality C for the maximum angular velocity of the subtracks to 1.3.

Figure 17. Snapshots while traversing the step.

Figure 18. Experimental path and configuration of the step field pallet.

8.3. Results and Discussion

In both the trial with the autonomous subtrack controller and the trial with the specialized subtrack motions, the robot maintained a stable posture and did not overturn. In contrast, in the trial with static subtrack positions, the robot became stuck in the second division of the path and could not turn because of its ineffective subtrack positions. These results clearly show the advantage of using the subtracks on rough terrain. Figures 19–21 and 22–24 show the changes in the pitch and roll angles, respectively, of the robot body during the trials. For the sake of comparison, the horizontal axes in these graphs indicate the ratios obtained by dividing the elapsed time by the total time required in each division. The graphs indicate that the postures obtained with the proposed autonomous controller and those obtained with the appropriate specialized subtrack motions were quite similar. Thus, we confirmed that the control algorithm generates stable motions of the robot body according to the control strategy and that its performance is comparable to that of an expert operator.

8.4. Spreading Mode Control Using Three LIDAR Sensors

8.4.1. Overview

We performed experiments on two rough terrains comprising concrete blocks to validate the proposed shared autonomy system with the autonomous subtrack controller using three LIDAR sensors (the three-LIDAR system). We set up a narrow bump (Figure 25) and a complex field (Figure 26) simulating a disaster site as the experimental terrains and performed a comparative experiment with the three-LIDAR system on each field. In addition, we obtained the changes in the posture of Kenaf using the built-in three-DOF gyroscope for every traversal and compared them. For every traversal, we set the range Lthreshold of integration of terrain information back along the recent trajectory to 100 cm, the time delay t until realization of the desired pose to 0.5 s, the threshold of the NESM to 10 cm (50% of that on level ground), and the given constant of proportionality C for the maximum angular velocity of the subtracks to 1.3.
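The trial parameters listed above can be grouped into a single controller configuration. The following sketch is ours (the field names are hypothetical, not taken from the implementation) and simply records the values used in these trials:

```python
from dataclasses import dataclass

@dataclass
class SubtrackControllerConfig:
    """Hypothetical grouping of the trial parameters from Section 8.4.1."""
    integration_range_cm: float = 100.0   # L_threshold: how far back along the
                                          # recent trajectory terrain slices are kept
    pose_delay_s: float = 0.5             # estimated delay t until the desired
                                          # pose is realized
    nesm_threshold_cm: float = 10.0       # 50% of the NESM on level ground
    rate_gain: float = 1.3                # constant C for the maximum angular
                                          # velocity of the subtracks

cfg = SubtrackControllerConfig()
```

Keeping the parameters in one structure makes it easy to state, as the paper does per experiment, which values each trial used.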

8.4.2. Comparative Experiment with Two-LIDAR System

Overview: First, a comparative experiment between the cases with three and two LIDAR sensors was performed on a bump that was narrow compared to the width of Kenaf and comprised three concrete blocks (Figure 25). We manually operated the main tracks of Kenaf while the subtracks were operated automatically by the respective autonomous controllers, and we drove Kenaf into the short side of the bump to observe whether the autonomous subtrack controller assisted it in traversing the bump.

Results and discussion: Figure 27 shows the change in the pitch angle of Kenaf's body during the traversal using the three-LIDAR system, and Figure 28 shows its snapshots. The graph and snapshots indicate that the subtrack motions generated by the autonomous controller in the three-LIDAR system maintained a stable posture of Kenaf, kept Kenaf in contact with the surface of the bump during the traversal, and enabled Kenaf to traverse the entire length of the bump. In contrast, in the two-LIDAR system, which uses only the LIDAR sensors on either side of the robot body, the autonomous controller for the subtracks could not detect the bump, which lay beyond the scanning planes, and Kenaf could not negotiate it. Because the bump was not detected, the two-LIDAR system did not generate the swinging-down motions of the front subtracks that lift the robot body over the top face of the bump, as the three-LIDAR system did. These results are typical and indicate the advantage of the three-LIDAR system, which uses the optional LIDAR sensor at the front of the robot, over the two-LIDAR system.

Figure 19. Pitch angles on division 1.
Figure 20. Pitch angles on division 2.
Figure 21. Pitch angles on division 3.
Figure 22. Roll angles on division 1.
Figure 23. Roll angles on division 2.
Figure 24. Roll angles on division 3.
(Each graph plots pitch or roll angle [deg] versus normalized time elapsed/total [-] for the shared autonomy and specialized flipper motion trials.)

8.4.3. Comparative Experiment with an Expert Operator

Overview: Second, a comparative experiment with manual control of the subtracks by an expert operator was performed in a complex field comprising 20 concrete blocks (Figure 26) that simulates the terrain of a disaster area. To normalize the trial conditions, an operator manually controlled the circumferential velocities of the main tracks to 10 cm/s in both cases; thus, in the comparative case, the robot was fully manually controlled without any autonomy.

Results and discussion: Figures 29 and 30 show the change in the pitch and roll angles, respectively, of Kenaf's body, and


Figure 31 shows snapshots of the traversal when the proposed three-LIDAR system was used. The elapsed times were 22 s for the proposed system and 23 s for full manual control by the expert operator. For comparison, the horizontal axes in Figures 29 and 30 indicate the ratios obtained by dividing the elapsed times by the total time required to traverse the field. As shown in Figure 31, the three-LIDAR system achieved stable traversal even in the complex field. In Figures 29 and 30, we can see that the behaviors of the posture of Kenaf's body produced by the autonomous and by the manual subtrack motions were quite similar. In particular, regarding the pitch angle, the three-LIDAR system kept the attitude of Kenaf's body low along the entire trajectory. We note that the stability of Kenaf is likely to depend more on the pitch angle than on the roll angle, because the entire length of Kenaf, including the subtracks, is about twice its width. Nevertheless, we can say that traversal with the three-LIDAR system is as stable as traversal in which an expert operator manually controls the subtracks.
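The normalized horizontal axis used for these comparisons is simply elapsed time divided by the total traversal time, so runs of different durations can be overlaid. One way to resample a logged posture series onto that common axis (our sketch, not the authors' analysis code):

```python
def resample_normalized(times, values, n=101):
    """Linearly resample a (time, value) series onto n points of normalized
    time t/T in [0, 1], so runs with different total durations T can be
    plotted on a common horizontal axis (as in Figures 29 and 30)."""
    T = times[-1] - times[0]
    out = []
    j = 0
    for k in range(n):
        t = times[0] + T * k / (n - 1)
        # advance to the segment [times[j], times[j+1]] containing t
        while j + 1 < len(times) - 1 and times[j + 1] < t:
            j += 1
        t0, t1 = times[j], times[j + 1]
        w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        out.append(values[j] + w * (values[j + 1] - values[j]))
    return out
```

Two runs resampled this way can be compared point by point regardless of their absolute durations.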


Figure 25. Narrow bump comprising three concrete blocks.

Figure 26. Complex field comprising 20 randomly positioned concrete blocks.

Figure 27. Change in pitch angle while traversing the bump (pitch angle [deg] versus time [sec]).

From these results, we have confirmed the validity of the control strategy derived from the subtrack motions performed by expert operators and the reliability of the control algorithm realizing that strategy.

8.5. Field Test on Spreading and Folding Mode Controls

8.5.1. Overview

We performed several field tests of the shared autonomy system throughout its development. In this section, we present typical reports from three different simulated disaster fields. In 2009, we entered Kenaf in the RoboCupRescue Robot League at Graz, Austria, to evaluate its performance on uneven terrain. In the competition, our team employed multiple robots, including Kenaf, which had fully autonomous systems (Nagatani et al., 2009). Kenaf used a navigator that determines the direction of travel and a subtrack controller with two LIDAR sensors, and the autonomous subtrack controller worked successfully for traversing bumps as well as slopes in the competition field. In the same year, we performed another field test at the Hyogo Prefectural Emergency Management and Training Center, Japan, and obtained comments from the firefighters who performed the tests with the shared autonomy system. In 2010, we incorporated the shared autonomy system into Quince and examined its performance at the Tachikawa Regional Disaster Prevention Base, Japan, which has a rubble square for training firefighters on which we performed our tests. In addition, we obtained comments from firefighters who navigated Quince with the shared autonomy system. In the following subsections, we present reports on these three field tests.

8.5.2. RoboCupRescue Robot League

Overview: The autonomous subtrack controller was tested in a robot competition. We entered Kenaf as a fully autonomous robot in the RoboCupRescue Robot League 2009, Graz, Austria. For the competition, we combined a variant of the frontier-based navigator (Yamauchi, 1997) customized for the competition with the autonomous subtrack controller using two LIDAR sensors. The competition field of RoboCup was divided into yellow, orange, and red areas. The yellow area is designed for autonomous robots, the red area for manually operated robots, and the orange area bridges the yellow and red areas. The yellow and orange areas contain wooden slopes and gaps, which are steeper or higher in the orange area. Our goal was to build a map of the competition field and find simulated victims in the field using the fully autonomous Kenaf with the two-LIDAR controller for the subtracks.

Results and discussion: The performance of the autonomous subtrack controller in the games was highly stable, and the controller was effective in traversing the yellow area designed for autonomous robots. The autonomous Kenaf explored the entire yellow area, built a map, and found one simulated victim. Moreover, the system performed an autonomous exploration in the orange area. Figure 32 shows snapshots of Kenaf autonomously crossing the largest gap in the orange area; the height of the gap was approximately 30 cm.
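The frontier-based navigator cited above steers the robot toward the boundary between explored and unexplored space. As a minimal illustration of that idea (our sketch of the concept from Yamauchi (1997), not the competition code), a frontier cell is a free cell adjacent to unknown space on an occupancy grid:

```python
# Occupancy grid values: 0 = free, 1 = occupied, -1 = unknown.
def frontier_cells(grid):
    """Return the free cells that are 4-connected to unknown space.
    A frontier-based navigator drives toward clusters of such cells
    to expand the explored region."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers
```

In a shared autonomy setting, the direction toward the nearest frontier cluster plays the role of the travel direction that the operator would otherwise specify.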

Figure 28. Snapshots of Kenaf while traversing the bump.

Figure 29. Change in pitch angle in the complex field (pitch angle [deg] versus elapsed/total [-]; shared autonomy versus full manual).

Figure 30. Change in roll angle in the complex field (roll angle [deg] versus elapsed/total [-]; shared autonomy versus full manual).

8.5.3. Tachikawa Regional Disaster Prevention Base

Overview: The latest and most typical field test was conducted at the rubble square at the Tachikawa Regional

Disaster Prevention Base, Tokyo, Japan, in September 2010. This square is a realistic replica of an urban disaster field containing rubble and is regularly used for training firefighters from the Tokyo fire department, including a special force called “Hyper Rescue.” Figure 33 shows the experimental path cutting across the rubble square. The path was approximately 30 m long and had a maximum slope angle of 40 deg, and its largest bump was 35 cm high. We performed a field test for the shared autonomy system using Quince. As previously noted, Quince is the successor of Kenaf and was developed for practical use in urban search and rescue operations. The electric system and software of Quince are highly compatible with those of Kenaf, and therefore we could easily incorporate the shared autonomy system with two LIDAR sensors into Quince. In this field experiment, we assumed the time delay t until realization of the desired pose to be 0.5 s, the threshold of the NESM to be 20% of that on level ground, and the given constant of proportionality C for the maximum angular velocity of the subtracks to be 1.3. Results and discussion: Typical snapshots of the traversal are shown in Figure 34. With the proposed shared autonomy system, Quince was able to traverse the rubble field without manual intervention in the control of subtrack motions. As shown in Figure 34, the operator suitably switched the control modes of the autonomous subtrack controller; for traversing straight sections along the path, the operator employed the spreading mode control to enable the best locomotion and stability, as shown in snapshots A and C.


Figure 31. Snapshots of Kenaf while traversing the complex field.

Figure 32. Crossing over a 30-cm-high bump in the RoboCupRescue Robot League 2009 field.

On the other hand, for turning in the rubble, the operator employed the folding mode control to realize smooth turning without any hindrance from spread subtracks, as shown in snapshot B. Quince was not equipped with the front LIDAR sensor, because it conflicts with the manipulator on Quince's top face and because a high-end autonomous subtrack controller with three LIDAR sensors was unavailable; nevertheless, its performance was quite good. One reason is that Quince is slightly larger than Kenaf, as shown in Table I; it has higher mobility over rough terrain and is less likely to become stuck on a narrow bump located outside the range of the left and right LIDAR sensors. Through the traversal, we also observed that the autonomous subtrack controller has difficulty with tall grass. The controller sometimes generated unnecessary subtrack motions on grass-covered blocks because it cannot determine which obstacles are soft and have no impact on crossing over. Thus the operator had to navigate Quince while avoiding grass-covered rocks as much as possible.


Figure 33. Experimental path and shooting locations for snapshots in the rubble square at the Tachikawa Regional Disaster Prevention Base.

Figure 34. Typical snapshots on the rubble square at Tachikawa: (A) Traversing a 35-cm-high rock using the spreading mode control, (B) turning using folding mode control, and (C) traversing unstructured terrain.

Nevertheless, we can say that the two-LIDAR system on Quince is effective on practical rubble terrain of the kind produced by a disaster. The operator can navigate Quince over uneven terrain without complicated manual control of the subtracks by choosing the control mode of the autonomous subtrack controller suited to each aspect of the traversal.

8.5.4. Trials by Firefighters

We performed several field tests at training facilities for firefighters, including the test presented in the preceding report. Through these tests, we also had two opportunities for trial navigation with the shared autonomy system by active-duty firefighters. Before the trials, we simply explained to the firefighters the procedure for specifying the direction of

travel to the robot using our console. Naturally, the firefighters were not trained to operate a robot, and each trial was the first time they had operated one. Figure 35 shows a trial navigation at the Hyogo Prefectural Emergency Management and Training Center, Japan, which has a 10-story building as a training facility. Two firefighters attempted and succeeded in making Kenaf ascend and descend the stairs using only the camera mounted on the robot. They commented that the shared autonomy system reduced the operator's workload and gave them the opportunity to observe the situation surrounding the robot. Figure 36 shows a trial navigation at the rubble square in the Tachikawa Regional Disaster Prevention Base, Tokyo, Japan. As can be seen in this figure, because our system requires the operator to specify only the direction of travel,


navigation over the rubble was performed using only one hand. Two firefighters, representing a group of more than 10, attempted and succeeded in navigating Quince along a 5-m-long path in the rubble square. They considered the autonomous subtrack controller effective and convenient for directing the robot over rubble. Although the LIDAR sensors worked without any problems during the trials, the firefighters also pointed out that the LIDAR sensors must be impervious to water and dust for practical use.

Figure 35. Teleoperation using the shared autonomy system by a firefighter via a camera.

Figure 36. One-handed navigation of the robot over rubble by a firefighter.

9. CONCLUSIONS

In this study, we constructed a shared autonomy system for tracked vehicles with subtracks; the system consists of a manual controller for the main tracks and an autonomous controller for the subtracks, and it is based on continuous terrain scanning using two or three LIDAR sensors. The terrain shapes obtained are integrated on the basis of the estimated positions and postures of the robot tagged with each shape and are used for the autonomous control of the subtracks. The algorithm for the autonomous control of the subtracks is based on the control strategy derived from subtrack motions performed through manual control by an expert operator. The autonomous controller generates subtrack motions that control the posture of the robot body according to the average slope of the ground surface, provided that the robot has sufficient stability on the rough terrain. We carried out experiments on several terrains using our test beds with the proposed shared autonomy system implemented. The results showed that the proposed system achieves stable traversal that is as smooth as full manual operation, including manual control of the subtracks by an expert operator, while the operator specifies only the desired direction of travel to the robot.

ACKNOWLEDGMENTS

This work was supported by Strategic Advanced Robot Technology, an R&D project of NEDO, Japan.

REFERENCES

Arai, M., Tanaka, Y., Hirose, S., Kuwahara, H., & Tsukui, S. (2008). Development of “Souryu-IV” and “SouryuV”: Serially connected crawler vehicles for in-rubble searching operations. Journal of Field Robotics, 25(1–2), 31–65. Borenstein, J., & Feng, L. (1996, April). Gyrodometry: A new method for combining data from gyros and odometry in mobile robots. In 1996 IEEE International Conference on Robotics and Automation (ICRA1996), Minneapolis, MN (vol. 1, pp. 423–428). Borenstein, J., & Granosik, G. (2007). The OmniTread OT-4 Serpentine Robot—Design and performance. Journal of Field Robotics, 24(7), 601–621. Chonnaparamutt, W., & Birk, A. (2008, July). A fuzzy controller for autonomous negotiation of stairs by a mobile robot with adjustable tracks. In RoboCup 2007: Robot Soccer World Cup XI, Atlanta, GA (pp. 196–207). Guarnieri, M., Debenest, P., Inoh, T., Fukushima, E., & Hirose, S. (2005). Helios VII: A new vehicle for disaster response—Mechanical design and basic experiments. Advanced Robotics, 19(8), 901–927. Guarnieri, M., Debenest, P., Inoh, T., Takita, K., Masuda, H., Kurazume, R., Fukushima, E., & Hirose, S. (2009, May). HELIOS carrier: Tail-like mechanism and control algorithm for stable motion in unknown environments. In 2009 IEEE International Conference on Robotics and Automation (ICRA2009), Kobe, Japan (pp. 1851–1856). Hirose, S., Tsukagoshi, H., & Yoneda, K. (2006, May). Normalized energy stability margin and its contour of walking vehicles on rough terrain. In 2001 IEEE International Conference on Robotics and Automation (ICRA2001) Seoul, Korea (vol. 1, pp. 181–186). Horn, B. K. P. (1987). Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4), 629. Jacoff, A., Downs, A., Virts, A., & Messina, E. (2008, August). Stepfield pallets: Repeatable terrain for evaluating robot


mobility. In 2008 Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD (pp. 29–34). Jacoff, A., Messina, E., Weiss, B., Tadokoro, S., & Nakagawa, Y. (2003, October). Test arenas and performance metrics for urban search and rescue robots. In 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2003), Las Vegas, NV (pp. 3396–3403). Kadous, M., Sammut, C., & Sheh, R. (2006, December). Autonomous traversal of rough terrain using behavioural cloning. In the 3rd International Conference on Autonomous Robots and Agents (ICARA2006), Palmerston North, New Zealand. Kawata, H., Ohya, A., Yuta, S., Santosh, W., & Mori, T. (2005, August). Development of ultra-small lightweight optical range sensor system. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2005), Edmonton, Canada (pp. 3277–3282). Lee, W., Kang, S., Kim, M., & Park, M. (2004, September). ROBHAZ-DT3: Teleoperated mobile platform with passively adaptive double-track for hazardous environment applications. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan (vol. 1, pp. 33–38). Maeyama, S., Ishikawa, N., & Yuta, S. (1996, December). Rule based filtering and fusion of odometry and gyroscope for a fail safe dead reckoning system of a mobile robot. In 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI1996), Washington, DC (pp. 541–548). Maimone, M., Biesiadecki, J., Tunstel, E., Cheng, Y., & Leger, C. (2006). Surface navigation and mobility intelligence on the Mars Exploration Rovers. In A. M. Howard & E. W. Tunstel (Ed.), Intelligence for Space Robotics (pp. 45–69). San Antonio, TX: TSI Press. Micire, M. J. (2008). Evolution and field performance of a rescue robot. Journal of Field Robotics, 25(1–2), 17– 30. Miyanaka, H., Wada, N., Kamegawa, T., Sato, N., Tsukui, S., Igarashi, H., & Matsuno, F. (2007, April). 
Development of a unit type robot “KOHGA2” with stuck avoidance ability. In Proceedings 2007 IEEE International Conference on Robotics and Automation (ICRA2007), Rome, Italy (pp. 3877–3882). Nagatani, K., Endo, D., & Yoshida, K. (2007, April). Improvement of the odometry accuracy of a crawler vehicle with consideration of slippage. In 2007 IEEE International Conference on Robotics and Automation (ICRA2007), Rome, Italy (pp. 2752–2757). Nagatani, K., Okada, Y., Tokunaga, N., Yoshida, K., Kiribayashi, S., Ohno, K., Takeuchi, E., Tadokoro, S., Akiyama, H., Noda, I., Yoshida, T., & Koyanagi, E. (2009, November). Multi-robot exploration for search and rescue missions: A report of map building in RoboCupRescue 2009. In 2009 IEEE International Workshop on Safety, Security & Rescue Robotics, Denver, CO (pp. 1–6).


Nagatani, K., Tokunaga, N., Okada, Y., & Yoshida, K. (2008, October). Continuous acquisition of three-dimensional environment information for tracked vehicles on uneven terrain. In 2008 IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR2008), Sendai, Japan (pp. 25–30). Ohno, K., Morimura, S., Tadokoro, S., Koyanagi, E., & Yoshida, T. (2007, October). Semi-autonomous control system of rescue crawler robot having flippers for getting over unknown-steps. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2007), San Diego, CA (pp. 3012–3018). Okada, Y., Nagatani, K., & Yoshida, K. (2009, October). Semiautonomous operation of tracked vehicles on rough terrain using autonomous control of active flippers. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2009), St. Louis, MO (pp. 2815–2820). Okada, Y., Nagatani, K., Yoshida, K., Yoshida, T., & Koyanagi, E. (2010a, October). Shared autonomy system for tracked vehicles to traverse rough terrain based on continuous three-dimensional terrain scanning. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2010), Taipei, Taiwan. Okada, Y., Nagatani, K., Yoshida, K., Yoshida, T., & Koyanagi, E. (2010b, October). Shared autonomy system for turning tracked vehicles on rough terrain using real-time terrain scanning. In 2010 International Conference on Advanced Mechatronics (ICAM2010), Osaka, Japan (no. 5). Raibert, M., Blankespoor, K., Nelson, G., & Playter, R. (2008, July). Bigdog, the rough-terrain quadruped robot. In the 17th World Congress The International Federation of Automatic Control (IFAC2008), Seoul, Korea (pp. 10822– 10825). Rohmer, E., Yoshida, T., Nagatani, K., & Tadokoro, S. (2010, October). Quince: A collaborative mobile robotic platform for rescue robots research and development. In 2010 International Conference on Advanced Mechatronics (ICAM2010), Osaka, Japan. Saranli, U., Buehler, M., & Koditschek, D. 
E. (2001). RHex: A simple and highly mobile hexapod robot. International Journal of Robotics Research, 20(7), 616–631. Tadokoro, S., Matsuno, F., & Jacoff, A. (2005). Special issues on rescue robotics. Advanced Robotics, 19(3,8). Yamauchi, B. (1997, July). A frontier-based approach for autonomous exploration. In 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA1997), Monterey, CA (pp. 146–151). Yamauchi, B. M. (2004). PackBot: A versatile platform for military robotics. In Proceedings of SPIE, 5422, 228–237. Yoshida, T., Koyanagi, E., Tadokoro, S., Yoshida, K., Nagatani, K., Ohno, K., Tsubouchi, T., Maeyama, S., Noda, I., Takizawa, O., & Hada, Y. (2007, July). A high mobility 6crawler mobile robot “Kenaf.” In 4th International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disaster, Atlanta, GA (p. 38).