Planning and Control for Autonomous Mobile Manipulation

Prof. Dr. Marco Hutter

Master Thesis

Planning and Control for Autonomous Mobile Manipulation

Spring Term 2016

Supervised by: C. Dario Bellicoso, Philipp Krüsi

Author: Jan Carius

Declaration of Originality

I hereby declare that the written work I have submitted entitled Planning and Control for Autonomous Mobile Manipulation is original work which I alone have authored and which is written in my own words.1

Author(s): Jan Carius

Student supervisor(s): C. Dario Bellicoso, Philipp Krüsi

Supervising lecturer: Marco Hutter

With the signature I declare that I have been informed regarding normal academic citation rules and that I have read and understood the information on 'Citation etiquette' (https://www.ethz.ch/content/dam/ethz/main/education/rechtliches-abschluesse/leistungskontrollen/plagiarism-citationetiquette.pdf). The citation conventions usual to the discipline in question here have been respected. The above written work may be tested electronically for plagiarism.

Place and date

Signature

1 Co-authored work: The signatures of all authors are required. Each signature attests to the originality of the entire piece of written work in its final form.

Intellectual Property Agreement

The student acted under the supervision of Prof. Hutter and contributed to research of his group. Research results of students outside the scope of an employment contract with ETH Zurich belong to the students themselves. The results of the student within the present thesis shall be exploited by ETH Zurich, possibly together with results of other contributors in the same field. To facilitate and to enable a common exploitation of all combined research results, the student hereby assigns his rights to the research results to ETH Zurich. In exchange, the student shall be treated like an employee of ETH Zurich with respect to any income generated due to the research results. This agreement regulates the rights to the created research results.

1. Intellectual Property Rights

1. The student assigns his/her rights to the research results, including inventions and works protected by copyright, but not including his moral rights ("Urheberpersönlichkeitsrechte"), to ETH Zurich. Herewith, he cedes, in particular, all rights for commercial exploitation of research results to ETH Zurich. He is doing this voluntarily and with full awareness, in order to facilitate the commercial exploitation of the created research results. The student's moral rights ("Urheberpersönlichkeitsrechte") shall not be affected by this assignment.

2. In exchange, the student will be compensated by ETH Zurich in the case of income through the commercial exploitation of research results. Compensation will be made as if the student were an employee of ETH Zurich and according to the guidelines "Richtlinien für die wirtschaftliche Verwertung von Forschungsergebnissen der ETH Zürich".

3. The student agrees to keep all research results confidential. This obligation to confidentiality shall persist until he or she is informed by ETH Zurich that the intellectual property rights to the research results have been protected through patent applications or other adequate measures or that no protection is sought, but not longer than 12 months after the collaborator has signed this agreement.

4. If a patent application is filed for an invention based on the research results, the student will duly provide all necessary signatures. He/she also agrees to be available whenever his aid is necessary in the course of the patent application process, e.g., to respond to questions of patent examiners or the like.

2. Settlement of Disagreements

Should disagreements arise between the parties, the parties will make an effort to settle them between themselves in good faith. In case of failure of these efforts, Swiss law shall be applied and the courts of Zurich shall have exclusive jurisdiction.

Place and date

Signature

Contents

Abstract  vii
Symbols  ix
1 Introduction  1
  1.1 Goal and Motivation  1
  1.2 State of the Art  1
  1.3 Project Overview  3
2 System Overview and Modeling  5
  2.1 System Specifications  5
  2.2 Kinematic and Dynamic Modeling  9
    2.2.1 Kinematics  9
    2.2.2 Kinematic Driving Model  12
    2.2.3 Dynamics  14
    2.2.4 Constraint Consistent Dynamics  14
3 Control and Simulation  17
  3.1 Control Formulation  17
    3.1.1 Desired Task Space Acceleration  17
    3.1.2 Contact Detection  19
    3.1.3 Contact Handling  21
    3.1.4 Base Motion Coordination  22
    3.1.5 Inverse Dynamics and Joint Commands  23
  3.2 Results  24
    3.2.1 Reference Tracking in Free Space  25
    3.2.2 Reference Tracking with Contact  25
    3.2.3 Manipulability Maximization  27
4 Mission  31
  4.1 Navigation  31
    4.1.1 Features of the Navigation Software  31
    4.1.2 Detection of the Manipulation Site  32
  4.2 Manipulation  35
    4.2.1 Gripper  36
    4.2.2 Vision  36
  4.3 Mission Control  37
  4.4 Results  40
5 Future Work  43
  5.1 Mobile Base Modeling  43
  5.2 Manipulation Control  43
  5.3 Mission and System Deployment  44
6 Conclusion  47
Bibliography  51
Appendix: Data Sheets  53

List of Figures

2.1 Mobile base manipulator  6
2.2 Gripper with wrench  8
3.1 Control block diagram  18
3.2 Reference tracking plot (free space)  24
3.3 Reference tracking plot (with contact)  26
3.4 Manipulability maximization illustration  28
3.5 Manipulability time evolution  29
4.1 Panel detection in laser scan  33
4.2 Mission state machine  39
4.3 Manipulation routine state machine  40
4.4 Wrench gripping simulation  42

List of Tables

2.1 Link names  7
2.2 Joint names  8
2.3 Modeling quantities and symbols  10
2.4 Modeling parameters  10
3.1 Controller tuning parameters  25

Abstract

This work documents the development of several major components for an autonomous mobile manipulator. The first part of the document at hand is concerned with modeling and control of a wheeled mobile manipulator. We present the robot hardware and derive a kinematic and dynamic model of the system. Subsequently, a unified force-motion controller is developed, capable of task-space reference tracking and collision handling. Furthermore, we show how maximizing a manipulability measure can be used to automatically coordinate the motion of the mobile base. In the second part, a manipulation mission is considered. We describe additional components related to navigation and identification of the manipulation site, as well as aspects of a visual servoing scheme for autonomous object manipulation. A scalable mission control module is developed that coordinates tasks between navigation, manipulation control, visual servoing, and gripping. Data on the performance of the manipulation controller are obtained from a physics simulation and show accurate reference tracking behavior. The manipulation site identification module is tested on a recorded dataset from a field test.


Symbols

Unless noted otherwise, physical or mathematical symbols obey the following convention: scalars are denoted in light face (a), vectors in bold (a), and matrices in uppercase bold (A). We use the tilde diacritic (ã) to indicate a dimension-reduced quantity and overdots (ȧ, ä) for time derivatives. The following symbols and notations are frequently used:

I_n        Identity matrix in R^{n×n}
0_n        Zero matrix in R^{n×n}
â          Skew-symmetric matrix in R^{3×3}, such that a × b ≡ â b
I          Inertial (i.e., world-fixed) coordinate frame
B          Base frame (fixed onto the robot's base)
_A x       Vector x represented as a coordinate tuple of frame A
_A e_x^B   Unit basis vector in x-direction of frame B, represented in frame A
r_BC       Translation vector from the origin of frame B to the origin of frame C
R_AB       Rotation matrix (passive rotation), such that _A r = R_AB _B r
v_BC       Relative translational velocity of frame C with respect to frame B
v_C        Absolute translational velocity of frame C (shorthand for v_IC)
ω_BC       Relative rotational velocity of frame C with respect to frame B
ω_C        Absolute rotational velocity of frame C (shorthand for ω_IC)

Indices

Special sub- and superscripts in this work include the following:

ee     End-effector
gc     Ground contact
sa     Singularity avoidance
des    Desired
est    Estimated
cmd    Commanded
curr   Current
†      Pseudo-inverse


Acronyms and Abbreviations

DOF      Degree of freedom
ETH      Eidgenössische Technische Hochschule
GPS      Global Positioning System
ICP      Iterative closest point (matching)
IMU      Inertial measurement unit
MBZIRC   Mohamed Bin Zayed International Robotics Challenge
PCL      Point Cloud Library
PID      Proportional-integral-derivative (controller)
RANSAC   Random sample consensus
RBDL     Rigid Body Dynamics Library
ROS      Robot Operating System
RRT      Rapidly-exploring random trees
SLAM     Simultaneous localization and mapping

Chapter 1

Introduction

1.1 Goal and Motivation

This project aims at developing a mobile manipulator capable of executing a complete manipulation mission autonomously. We intend to solve several sub-tasks, including the identification of the location where manipulation is supposed to happen, path planning and navigation, and completion of a manipulation routine. Today's robotic systems have become increasingly complex in maintenance and operation. Usually, a significant amount of training and experience is needed to supervise a robotic mission and to operate it manually. Our goal is to build and program a robot that exhibits a high degree of autonomous and intelligent behavior. The responsibilities of a human should be reduced to overseeing the mission, commanding high-level tasks, and intervening in case of an emergency. Furthermore, we require the robot to safely interact with a partially unknown environment. This includes contacts with both static objects and dynamic ones, such as humans or other robots. The versatility of a mobile manipulator makes it an especially interesting system: the mobile base overcomes the typical limitations of a confined workspace, and in conjunction with a robot arm, a large number of different manipulation tasks can be solved. Therefore, this system is particularly suitable for scenarios where the surroundings and manipulation site are not well defined a priori and designing a specialized robot would prove too costly and time consuming. An immediate application for this robotic system is the upcoming Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017¹, which consists of three tasks in total. The mission of the second task of MBZIRC entails finding a control panel on a large field (100 m by 60 m) and navigating there whilst avoiding obstacles. On the panel, one out of six standard wrenches needs to be picked and used to turn a valve stem. During the course of this work, this challenge is treated as a benchmark for the capabilities of our robot.

¹ See http://www.mbzirc.com/

1.2 State of the Art

Floating base manipulators have attracted academic and commercial interest for the past two decades. A brief overview of the transition from fixed manipulators to wheeled mobile ones is provided by Bayle et al. [2], with a survey on existing platforms documented by van Eden et al. [31]. Many designs have been proposed,

including a description of each robot's respective advantages and drawbacks. In this section, however, we restrict ourselves to the case of wheeled mobile platforms. This is a particularly interesting and challenging configuration due to the frequent occurrence of non-holonomic constraints. A few systems already exist that show promising behavior in tasks similar to our mission [24, 26]. In many cases, robots presented in the literature have a higher number of degrees of freedom (DOFs) than any specific task requires. Therefore, some form of redundancy resolution is performed to make use of the additional agility. Oftentimes, a hierarchical approach is followed which allows an importance ranking of several (possibly contradictory) objectives: the first priority tends to be safety and collision avoidance, followed by task execution and tertiary objectives [1, 10, 13, 24, 32]. The latter include avoidance of singular configurations and joint limits as well as maximizing some form of manipulability measure or reducing torque loads on the actuators. Some approaches find it helpful to minimize wheel slippage or wheel driving torques [17]. Tertiary objectives are often achieved by construction of a potential function that exhibits a global minimum at the desired configuration and maxima at collision or singular configurations. Actuator commands are extracted from this potential through gradient descent techniques. A task hierarchy can be implemented by subsequent null-space projections or hierarchical optimization procedures [18]. Both methods ensure that higher level tasks remain unaffected by less important ones. Such formulations make it possible that the base motion of a mobile manipulator need not be specified explicitly, obviating any additional planner or trajectory generator [14]. Instead, the base automatically follows the arm's end-effector. Some cases exist where the base trajectory is explicitly specified [23], but in the general case it is difficult to determine what such an optimal trajectory should look like. Several tasks can also be assigned concurrently with a relative importance weighting to resolve conflicting objectives. This is in fact equivalent to a joint optimization of multiple objectives [12]. Planning is often undertaken in the operational space only [10] because of its lower dimensionality compared to the joint space. Alternatively, one may follow a purely reactive approach [13] that avoids planning altogether and hence does not need to know where the manipulator's end-effector is supposed to move in the future. Furthermore, there is a strong emphasis on compliant behavior [8, 10, 21] due to the fact that manipulation naturally involves contact with static and dynamic objects in the environment and should be safe for interaction with humans. Padois et al. [26] even propose a controller that aims at minimizing impact inertia by keeping the manipulator arm in a suitable configuration. In terms of modeling and motion control, there appear to be two distinct trends in the literature. Several approaches consider exclusively the kinematics of motion. They either use fast low-level controllers to achieve desired motions [1, 13, 14, 27] or, alternatively, employ a back-stepping strategy where dynamics are only introduced after a kinematic controller has output desired joint velocities [11, 23]. Approaches that incorporate one or several torque-controlled manipulator arms [10] inherently need to model the system dynamics. White et al.
[32] compare a kinematic and a dynamic controller on the same system and find that the kinematic version yields significantly more aggressive behavior. Many authors use a simplified model for the mobile base (while models usually neglect some physical effects, the simplifications mentioned here adapt the model for easier control synthesis at the cost of physical inaccuracy). Especially systems that have a complex wheeled base, for example Rollin' Justin of the German Aerospace Center (DLR) [10], tend to make use of a simplification at the planning stage: the base is simply assumed to be an omni-directional platform with three degrees of freedom in the ground plane. A similar approach by Fruchard et al. [13] attaches



an associated holonomic frame to the non-holonomic and non-omni-directional base and employs an additional controller that can bring the base arbitrarily close to its associated frame. Furthermore, one may model the base as a dynamic system but simplify it with a diagonal inertia matrix [24]. To the best of our knowledge, all documented wheeled platforms in literature are velocity-controlled, at least in the context of mobile manipulators. It is likely that the necessary friction modeling is inaccurate and highly dependent on the ground and contact conditions, thwarting any dynamic modeling attempt. Hence the problem arises of operating a system that accepts partially torque inputs and partially velocity commands. This issue permits a number of different approaches. Dietrich et al. [10] first use a control algorithm that outputs joint torques for all joints and then employ an admittance coupling to transform torques into joint speeds. The latter are the inputs to the lower level velocity controllers on the base. The admittance coupling describes the platform dynamics and can be shaped to have arbitrary inertial and damping properties. In other words, the controller itself simulates how the base would have moved if the requested torques had been applied. It then passes down those expected velocities as control input. Mazur and Szakiel [23] on the other hand use the above mentioned back-stepping scheme from which velocities are obtained directly and calculate necessary torques to achieve those velocities in a second step. This is de facto the exact opposite of an admittance coupling. When an end-effector trajectory is used, one may calculate both the velocity commands and the torques for all joints simultaneously [24] and then apply the appropriate one to each actuator. This is done via a force-position duality. Finally, one can observe that visual servoing is used on mobile manipulators for feedback during the manipulation task [9, 16]. Visual servoing on any kind of platform is a large topic in itself and exceeds the scope of this summary. For a compact introduction one may refer to Siciliano et al. [30, chap. 10]. However, there are several noteworthy examples of implementing a visual servoing scheme on a mobile manipulator. De Luca et al. [9] use an eye-in-hand configuration (i.e., camera mounted near the end-effector on the arm) in conjunction with an image-based servoing scheme. The controller ensures that image points move along certain trajectories to their desired positions in the image plane and directly outputs velocity commands in joint space through the image Jacobian. Hamner et al. [16] use position-based visual servoing, but they modify the environment by placing artificial marker tags. In general, visual information appears to be very helpful for manipulation, though it tends to require good knowledge of the task at hand. Specifically, image features or an accurate three-dimensional model of the manipulation object are often assumed to be known in literature.

1.3 Project Overview

This project is divided into several work packages that contribute to designing a dexterous and robust mobile manipulator. Some components are not yet fully integrated. The first part of the document at hand (Chapters 2 and 3) focuses on the modeling and control of a mobile manipulator. The initial sections introduce the system that we developed in terms of hardware components and sensors, and define the general structure and naming conventions. A complete system model is subsequently derived, which is used as a basis for formulating a control law in Chapter 3. We show how unified force-motion control is implemented in our system, along with contact detection and coordination of the base motion. The second part (Chapter 4) adopts a broader point of view and considers the


robot as an agent fulfilling a mission. Apart from the manipulation task itself, we introduce more capabilities that our robot possesses. Most importantly, a mobile manipulator requires the incorporation of navigation software which is capable of identifying where the manipulator is needed and moving the mobile robot to that location. In order to coordinate the task execution between the various software packages, we finally introduce a mission control module. This work ends with some recommendations on future work packages (Chapter 5) and concludes with Chapter 6. Apart from the conclusion, results are shown after each corresponding section. In the course of this project, significant time was devoted to setting up a software framework. Most notably, this includes the implementation of the controller and simulation components for testing. The implementation is embedded in the software framework of the Robot Operating System (ROS)³. We attempt to keep the implementation as modular as possible to facilitate testing in isolation and to achieve improved maintainability and flexibility for future development.

³ See http://www.ros.org/

Chapter 2

System Overview and Modeling

This chapter introduces the robotic system in detail, including the general structure, hardware components, and sensors. Subsequently, a model is derived, which will be used for control purposes.

2.1 System Specifications

A mobile manipulator typically consists of two (partially or fully) integrated parts: A mobile base, usually equipped with wheels, tracks, or legs, is needed for positioning the robot. The second part is the manipulator itself, which is mounted on the mobile base. The interaction of those two components produces a very large workspace. In the following, the individual components are described in more detail. A labeled image of the system is presented in Figure 2.1.

Mobile Base

We use the four-wheel skid-steer platform Husky by Clearpath Robotics¹ as our mobile base. A wheeled vehicle was chosen because it is relatively efficient in locomotion and can carry a large payload. Husky is capable of traversing rough and uneven terrain thanks to its large wheels, which makes it suitable for deployment in many environments. Convenient mounting rails allow simple attachment of the robot arm and additional sensors. A data sheet detailing the properties of Husky can be found in the appendix.

¹ See https://www.clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/

Figure 2.1: Software rendering showing our mobile manipulator. Links are labeled by square, joints by circular balloons. The front right wheel (joint eight, link ten) is occluded in the image. The coordinate axes show the orientation of the base-fixed coordinate system B. The origin is at the center of the mobile base, but has been drawn outside to avoid an overly cluttered image.

Note on Wheel Links

A potential source of confusion exists regarding the treatment of the four wheels of our platform. The skid-steer setup of the mobile base is designed in a way that the two wheels on each side are connected via a belt. Hence both wheels on each side are simultaneously driven by a single motor. From a modeling perspective, this constitutes only a single degree of freedom per side. Tables 2.1 and 2.2, which provide an overview of link and joint names, reflect this by only listing a single wheel link and joint for each side (incidentally called "front" because the hind wheels are neglected). Still, in a physical simulation, one cannot simply ignore two

of the wheels. The solution is to generate a separate model for the physics simulator including the hind wheels. For the purpose of this report, however, one can proceed by imagining there is only one wheel on each side of the robot. The dynamic model that only contains a single pair of wheels is adapted such that the inertia of the hind wheels is projected onto the front ones.

Arm

The design of our manipulator arm is based on a previous project, where Bodie [3] developed a torque-controlled four-joint structure. In this work, we use similar design concepts and extend the architecture to six degrees of freedom. This design was chosen because we expect our robot to operate in a variety of different missions and environments. General manipulation tasks require the position and orientation of the end-effector to be specified independently. At the same time, ground conditions are unknown and difficult to predict, and the wheel encoders will have far worse position accuracy than those of the arm joints. Therefore, the degrees of freedom available from the base motion are deemed not helpful when executing precise manipulation tasks, resulting in the implementation of six actuated joints on the arm. The structure has been chosen such that a fully actuated wrist can correct any orientation misalignments without requiring large motions of the upper or lower arm. This is especially relevant because the arm links may not be perfectly modeled [3] and can cause inaccurate motions. Furthermore, smaller correction motions reduce the danger of moving into a singular configuration. A disadvantage of the current design is the large mass at the wrist, which reduces payload capabilities and slows down the dynamics. Moving more actuators to the base and incorporating a belt mechanism may be possible, but additional friction and backlash would likely deteriorate torque control performance.

The arm elements are connected by integrated robot joints, called ANYdrives², which contain series elastic actuators. They allow highly reactive torque control and sensing. Specifics are outlined in the data sheet in the appendix. For clear identification in the following sections, link names are specified in Table 2.1. Links in this context are rigid bodies that constitute the system. They are combined via joints whose names and allowable ranges are listed in Table 2.2. The ANYdrive joints can rotate continuously, but meaningful limits are chosen here to prevent self-collisions. At the zero position of all joints, the manipulator arm is at its maximum extension in the upwards direction and the gripper points forward. The "key" property specified in both of those tables also appears in the software implementation and is therefore provided for reference.

² See http://www.rsl.ethz.ch/robots-media/actuators/anydrive.html

Table 2.1: Link names of the system. The labels refer to Figure 2.1.

Label  Long name          Key
L1     Husky base         BASE
L2     Arm base           ARM_BASE
L3     Shoulder           SHOULDER
L4     Upper arm          ARM
L5     Forearm            FOREARM
L6     First wrist link   WRIST_1
L7     Second wrist link  WRIST_2
L8     Hand/Gripper       HAND
L9     Left wheels        front_left_wheel_link
L10    Right wheels       front_right_wheel_link

Table 2.2: Joint names of the system. All joints are rotational. The labels refer to Figure 2.1.

Label  Long name          Key                Angle limits [rad]
J1     Shoulder rotation  SH_ROT             (−∞, +∞)
J2     Shoulder flexion   SH_FLE             [−5π/8, +5π/8]
J3     Elbow flexion      EL_FLE             [−7π/8, +7π/8]
J4     Wrist flexion      WR_FLE             [−7π/8, +7π/8]
J5     Wrist deviation    WR_DEV             (−∞, +∞)
J6     Wrist pronation    WR_PRO             (−∞, +∞)
J7     Front left wheel   front_left_wheel   (−∞, +∞)
J8     Front right wheel  front_right_wheel  (−∞, +∞)

Gripper

The end-effector of the arm is a gripping tool. It is designed for manipulating thin and long objects such as tool handles. An image of the gripper with an engaged tool can be seen in Figure 2.2. The gripper is actuated by a single servo motor, capable of position and speed control. The motor's axis drives both fingers of the gripper through a two-sided rack-and-pinion mechanism, such that their center position remains fixed. Apart from the gripping mechanism itself, the gripper possesses no additional degrees of freedom (e.g., no active or passive alignment capabilities). However, the gripper fingers are shaped such that a few degrees of misalignment can be tolerated. Actual motions and forces to manipulate an object must be generated by the robot arm. If necessary, it may be opportune to incorporate a form of remote center compliance in the gripper, for example for peg-in-hole insertion tasks. The gripper also provides means for mounting a camera. This will be useful for monitoring the gripping process and introducing visual servoing schemes. A final notable property of the gripper is that its front side (where the object enters) exhibits four supporting beams with spherical rollers at the end. They allow contact with a planar surface while producing minimal friction for motions tangential to the contact plane. This feature lends itself to using a wall for aligning the gripper orthogonally.

Figure 2.2: Gripper with a wrench clamped between its two fingers.

Further components

The mobile robot comes equipped with an on-board computer which executes software related to navigation, mapping, and localization. Another separate computation unit runs the control algorithms of the arm and handles visual servoing. Those computers are connected through a local network and are expected to exchange information. In terms of communication, the robot should be autonomous and able to complete the mission using the equipped sensors only. However, for development and supervision (and possibly remote operation) purposes, a wireless router is installed on the system. The operator can monitor all tasks through a wireless connection, record data, or provide direct commands. Two separate battery packs are installed on the robot. Whereas the first one powers the mobile base and the sensors needed for navigation, the second one supplies energy for the motors on the manipulator arm and for the computer that runs the arm software.

Sensors

The mobile base is equipped with a GPS receiver, an inertial measurement unit (IMU), and position encoders on the wheels. Additionally, a 3D laser scanner (Velodyne Lidar HDL-32E³) is attached to the top plate of the base. The arm itself only contains proprioceptive sensors in the joints. Each series elastic actuator joint comprises position encoders before and after its spring. The available readings for each joint are current position, velocity, and torque. At the end of the arm, a camera can be mounted on the gripper. This thesis, however, does not make explicit use of the vision sensor.

2.2 Kinematic and Dynamic Modeling

This section derives a kinematic and dynamic model of the system. In a first step, we define the system state and relevant kinematic quantities. Then the wheeled platform is modeled along the lines of Padois et al. [27] before the system dynamics are finally introduced. For better understanding, the reader may refer to the visual representation of the robot structure in Figure 2.1. All relevant quantities and parameters that are introduced in this section are summarized for reference in Tables 2.3 and 2.4.

2.2.1 Kinematics

The kinematics of the robotic system describe how the floating base and attached components may move in free space and with respect to each other. While joints and rigid connections give rise to inherent motion constraints, environmental contacts may additionally impose restrictions on a kinematic level. The model is defined by a link hierarchy. Each link corresponds to a rigid body. One link can potentially consist of several physical components, but they are modeled as a single body with combined inertial properties. Links are connected by joints. A link can have several other links connected to it, i.e., have several child joints and links, but only a single parent link. That way, a kinematic tree is created. Our model has a base link that corresponds to the mobile platform.

in the appendix.

Chapter 2. System Overview and Modeling

Symbol q q˙ q¨ q˙joints q¨joints xee x˙ ee ¨ ee x Jee T M S b g fee fgc τ

10

Description

Element of

Generalized coordinates (quaternion representation) Generalized velocities Generalized accelerations Joint velocities Joint accelerations End-effector pose (quaternion representation) End-effector twist End-effector spatial acceleration End-effector Jacobian Ground contact kinematic mapping Mass-inertia matrix Actuator selector matrix Centrifugal and Coriolis terms Gravity terms Wrench on the environment through the end-effector Wrench on the environment through the wheels Joint torques

R15 R14 R14 R8 R8 R7 R6 R6 R6×14 R14×8 R14×14 R8×14 R14 R14 R6 R6 R8

Table 2.3: Overview of the most important modeling quantities and their respective symbols.

Symbol rW mW ltrack mBASE mARM_BASE mSHOULDER mARM mHAND lARM

Description Wheel radius Mass of a single wheel Track length Mass of the base link Mass of arm base link Mass of the shoulder link Mass of the upper arm link Mass of the hand link Length of the upper arm link

Value 0.178 2.64 0.56 33.45 1.47 1.06 1.38 0.58 0.25

Unit m kg m kg kg kg kg kg m

Table 2.4: A selection of the most important modeling parameters. The wrist links are identical to the shoulder link. The forearm link is identical to the upper arm link.

11

2.2. Kinematic and Dynamic Modeling

frame is indicated by the letter B and follows the usual robotics convention: z axis is pointing upwards, x axis towards the front and the y axis to the left (see Figure 2.1). The base link has six degrees of freedom with respect to the world, hence it is sometimes termed (free) floating base. Several branches depart from the base link. The first branch corresponds to the arm. The first link of the arm (ARM_BASE) is mounted rigidly onto the Husky (BASE). The arm is constructed in a way that each link only has a single joint and child link, forming an open kinematic chain. Each arm joint only has a single degree of freedom with respect to its parent link. The end-effector (the gripper) is the final link and has no further child joint. Two additional branches starting at the base, each containing a single link only, are the front left and right wheels. The system has a total of 14 degrees of freedom: Six DOFs are related to the base pose, which is floating in three-dimensional space. The arm joints contribute another six DOFs and finally there are two wheel joints. We use a quaternion representation of the base orientation. Therefore, the number of generalized coordinates q ∈ R15 is one in excess of the number of DOFs. The generalized velocities q˙ ∈ R14 and accelerations q¨ ∈ R14 follow the usual notation and are defined as q˙ q¨

:=



:=



> I vB > ˙B Iv

> B ωB > ˙B Bω

> q˙joints

>

,

(2.1)

> q¨joints

>

,

(2.2)

where I vB and B ωB are the absolute linear and angular velocities of the base, represented in world or base frame respectively. Note that our notation is sloppy in the sense that the generalized velocities q˙ are not the component-wise time derivatives of the generalized coordinates q, as can already seen from the dimension mismatch. Instead, they involve the time derivative of a quaternion to angular velocity. The vector of joint velocities q˙joints ∈ R8 holds the joint rotation rates in the order specified by Table 2.2. The generalized accelerations are simply the component-wise time derivatives of the generalized velocities. We refrain from deriving the actual kinematic equations for the robot arm (e.g., mappings from joint velocities to end-effector twist). Instead, efficient online algorithms are used that take this kinematic description as an argument. For details, the reader is referred to Section 3.1. The modeling and control formulation for this system makes heavy use of kinematic mappings and their inverses. For that reason, our usage of the following terms shall be clarified: Pose The pose of a body describes its three-dimensional position and orientation in space. There are many ways how this can be specified and the convention used in this paper is defined as follows: The pose of a body with associated frame B that has been translated and rotated with respect to a frame A consists of a position and an orientation component. The first is the vector from the origin of A to the origin of B, represented in the A frame, i.e., ArA,B . The orientation component describes a (passive) rotation from B frame back to A frame. This is typically represented by the quaternion qA,B or a rotation matrix RAB . Finally, the homogeneous transformation matrix associated with the pose of body B in the A frame is hom TA,B =



RAB 0

A rA,B

1



∈ R4×4 .

(2.3)

Chapter 2. System Overview and Modeling

12

Jacobian A Jacobian in our context is a partial derivative of a vector with respect to another vector. It is represented as a matrix. We use the terminology “Jacobian of body A” whereby we mean the following: The Jacobian of body A maps the generalized velocities q˙ to the absolute spatial velocity of the body, i.e.,   B vA = B JA q˙ . (2.4) B ωA For numerical representation, a Jacobian must therefore always be expressed with respect to a certain frame (here: B). In literature, this is sometimes termed the basic or geometric Jacobian. Pseudoinverse At various points, we need to invert kinematic relationships of a similar form as (2.4). This is required because task objectives are typically specified in the operational space and eventually need to be translated to joint space. Unless otherwise noted, we use a damped version of the Moore-Penrose pseudoinverse, denoted by a dagger superscript, so that we can write   vA † des B q˙ = (B JA ) . (2.5) B ωA The standard matrix inverse cannot be used because the system is redundant, so there are more joints than operational degrees of freedom and therefore the Jacobians are not square matrices. In other words, the joint speeds are not uniquely defined by the operational space twist. The standard Moore-Penrose pseudoinverse is equivalent to an optimization that minimizes the (unweighted) sum of squares of joint speeds. Weighted versions exist and it is common to use the system’s mass matrix as minimization weights in order to find a minimal energy solution [28]. We use an adapted damped version which ensures that very small singular values receive some damping in order to preclude extremely high values in the inverted matrix. This is crucial because it can lead to fast and dangerous motions when the joint configuration is close to a singular one. Following the implementation by Bodie [3], small singular values σi below a given threshold σmin are not simply inverted but damped with a factor δi according to

( with

2.2.2

δi (σi ) =

i σiinverse = σ2σ+δ 2 , i i   δmax 1 − σσmin if 0 < σi < σmin , i 0 otherwise .

(2.6) (2.7)

Kinematic Driving Model

The above description of the degrees of freedom and the generalized coordinates suggests that the mobile base can potentially attain any pose in free space. For typical applications this is not the case, instead, the mobile base moves on the ground and the base velocity depends on the surface properties and the wheel motion. Note that, without ground interaction, the whole robot would just keep accelerating by gravitational forces. In the following, a kinematic driving model is derived that couples some of the degrees of freedom of the model through incorporating ground interactions. Its purpose is to express the generalized velocities purely as a function of the joint

13

2.2. Kinematic and Dynamic Modeling

velocities and current generalized coordinates. In other works, we seek a linear4 relationship of the form q˙ = T (q)q˙joints . (2.8) Since the bottom part of q˙ contains q˙joints , the mapping T (q) must necessarily exhibit the form   TDM (q) T (q) = ∈ R14×8 . (2.9) I8 The upper part of T (q) shall be called driving model matrix TDM (q), because this is where the modeling aspect of the wheeled platform enters the equation. Since skidsteer models are difficult to identify and strongly sensitive to parameter changes [22, 34], we shall use a standard differential drive model as an approximation. Note that, we choose such an analytical model to simplify our control formulation and to avoid difficult parameter estimation. Clearly, this restricts the flexibility of our model because we implicitly assume that out robot is always driving on a planar surface spanned by the x and y axis of the robot frame B. Future applications need to review this condition and potentially introduce a more sophisticated model. Recalling that the left and right wheel joints are the last two entries in the joint velocity vector, a differential drive system without wheel slippage and skidding is described by   rW rW 0 ··· 0 2 2  0      0 ··· ··· ···  0 ··· ··· ··· 0  B vB  q˙joints , (2.10) =  0 ··· ··· ··· 0  B ωB    0 ··· ··· ··· 0  −rW rW 0 · · · 0 ltrack ltrack where the absolute linear and angular velocities are expressed with respect to the base (body) frame. This makes the matrix on the right hand side of (2.10) a constant. To finally convert those velocities to the same form as they appear in the generalized velocities, we use   rW rW 0 ··· 0 2 2  0       0 ··· ··· ···  0 ··· ··· ··· 0  v R 0 IB 3 I B   q˙joints . = (2.11) 0  03 I3  B ωB  0 ··· ··· ···   0 ··· ··· ··· 0  rW W 0 · · · 0 l−r ltrack track | {z } TDM (q)

Now the matrix T (q) from equation (2.8) can be constructed. Note that, due to the dependency on the rotation matrix RIB , T (q) is no longer constant but depends on the current base orientation. In a control framework, it therefore needs to be updated at each iteration. Finally, we will consider the time derivative of T , as this will be required later. The only time dependent part in the driving model matrix is the rotation matrix RIB . The derivative is hence straightforward to find: All constant entries vanish and the rotation matrix derivative can be calculated through the identity ˆB . R˙ IB = RIB B ω

(2.12)

4 Linearity in q˙ joints is not necessary a priori, however, it simplifies calculations significantly afterwards. The matrix T is in general a nonlinear function of q.

Chapter 2. System Overview and Modeling

14

A neat derivation of this equation is provided by Hamano [15]. All required terms are hence available for formulating the derivative of the driving model (2.8), reading ˙ q˙joints + T (q)q¨joints . q¨ = T˙ (q, q)

2.2.3

(2.13)

Dynamics

The dynamic model of our system can be generically expressed by > > M q¨ + b + g + Jgc fgc + Jee fee = S > τ .

(2.14)

This formula is termed the equations of motion in joint space, because it represents a system of equations of dimension 14, i.e., one per degree of freedom. For readability, dependencies of the above terms are omitted. The symmetric mass matrix M encodes the system’s inertia, b contains centrifugal and Coriolis terms, and g the torques due to gravity in each generalized coordinate. The sum b + g is often jointly called nonlinear terms or effects. The vector fgc contains the forces and torques related to ground contact and is mapped through the associated Jacobian Jgc into the joint space. The same applies to the wrench fee acting at the endeffector and its Jacobian. Obviously, the external forces must be represented in the same frame as the Jacobians they are multiplied against. Both forces are acting on the environment such that the robot experiences the corresponding reaction forces. Finally, on the right hand side, the joint torques τ of the actuated joints are projected by a selection matrix S to the degrees of freedom they do physical work on. In our case, the arm and wheel degrees of freedom are actuated and the floating base is not actuated at all5 (hence its name). The torque vector τ contains exactly the joint torques that individually act on their respective degree of freedom. Therefore the selection matrix must be given by   S = 08×6 I8 . (2.15) Equation (2.14) can be derived in a number of different ways, for example Lagrangian or Newton-Euler methods. Such a derivation is not explicitly provided here.

2.2.4

Constraint Consistent Dynamics

Section 2.2.2 shows that there is indeed a relationship among the general coordinates of our system if we assume a certain driving model and no-slip constraints. This reduces the dimensionality of our system. In fact, the dynamics (2.14) should more correctly be written as  > > M q¨ + b + g + Jgc fgc + Jee fee = S > τ , (2.16) Jgc q˙ = 0 . The second equation guarantees a zero-slip ground contact by enforcing the velocity of the contact point(s) to be zero. In order to apply those constraint consistent dynamics (2.16) to control laws (e.g., inverse dynamics), it would be favorable to combine the two equations into a single one and additionally eliminate the unknown contact forces fgc . This procedure is outlined in the following: From Section 2.2.2 we know that the generalized velocities and accelerations can be expressed through the joint velocities and accelerations and the generalized coordinates (equations (2.8), (2.13)). Furthermore, if the generalized velocities are 5 Ground

interaction is handled through external forces, specifically the term fgc .

15


expressed in that way (i.e., q̇ = T(q) q̇_joints), they necessarily obey the no-slip constraint. This always holds because, by construction, the matrix T contains the driving model and hence cannot produce any velocities that violate the differential-drive model. Mathematically speaking, the columns of T only span directions that satisfy the driving model and hence the no-slip constraint. They also span all possible directions that the system can move in. It follows that the columns of T must be a basis for the null space of J_gc, such that we can formulate the identity

$J_{gc}(q)\, T(q) \equiv 0 \qquad \forall q . \qquad (2.17)$

This implies that one can insert (2.8) and (2.13) into the equations of motion (2.16) and automatically fulfill the second condition for all possible joint velocities. Finally, we can project the dynamics into the subspace that remains after the no-slip conditions have been enforced. The projection is performed by pre-multiplying the system dynamics with T^⊤. The reduced constraint consistent dynamics therefore read

$\underbrace{T^{\top} M T}_{\tilde{M}}\, \ddot{q}_{joints} + \underbrace{T^{\top} M \dot{T}\, \dot{q}_{joints} + T^{\top} b}_{\tilde{b}} + \underbrace{T^{\top} g}_{\tilde{g}} + \underbrace{T^{\top} J_{gc}^{\top}}_{0} f_{gc} + \underbrace{T^{\top} J_{ee}^{\top}}_{\tilde{J}_{ee}^{\top}} f_{ee} = \underbrace{T^{\top} S^{\top}}_{\tilde{S}^{\top}} \tau . \qquad (2.18)$
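To make the projection concrete, the following hypothetical Python sketch forms the reduced terms of eq. (2.18) from the full dynamics and uses them for an inverse dynamics computation. Dimensions follow Table 2.3; all matrices are random placeholders rather than quantities of the real robot.

```python
import numpy as np

def reduced_dynamics(M, b, g, J_ee, S, T, T_dot, qd_joints):
    """Project the full equations of motion (2.14), with q_dot = T * qd_joints,
    into the constraint-consistent form of eq. (2.18). The ground-contact wrench
    drops out because J_gc @ T == 0 by construction."""
    M_t = T.T @ M @ T                               # reduced mass matrix
    b_t = T.T @ M @ T_dot @ qd_joints + T.T @ b     # reduced nonlinear terms (b tilde)
    g_t = T.T @ g
    J_ee_t = J_ee @ T                               # reduced end-effector Jacobian
    S_t = S @ T                                     # reduced selection matrix
    return M_t, b_t, g_t, J_ee_t, S_t

n, m = 14, 8
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)   # s.p.d. placeholder
b, g = rng.standard_normal(n), rng.standard_normal(n)
J_ee = rng.standard_normal((6, n))
S = np.hstack([np.zeros((m, 6)), np.eye(m)])                    # eq. (2.15)
T = np.vstack([rng.standard_normal((6, m)), np.eye(m)])         # structure of eq. (2.9)
T_dot = np.zeros((n, m))
qd_joints = rng.standard_normal(m)

M_t, b_t, g_t, J_ee_t, S_t = reduced_dynamics(M, b, g, J_ee, S, T, T_dot, qd_joints)
# Inverse dynamics in the reduced space: torques for desired joint accelerations.
tau = np.linalg.solve(S_t.T, M_t @ np.zeros(m) + b_t + g_t + J_ee_t.T @ np.zeros(6))
print(tau.shape)   # (8,)
```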



The quantities with a tilde diacritic shall be called “reduced”. Note how the constraint forces from ground contact automatically drop out of the equation. A further conversion to operational space dynamics is possible at this point but is not required for this work.



Chapter 3

Control and Simulation

3.1 Control Formulation

The following section develops the control law for our mobile manipulator. We roughly follow the acceleration-resolved approach for hybrid force-motion control that is outlined by Siciliano and Khatib [29, pp. 177-178]. A block diagram of the control architecture is presented in Figure 3.1. In the following sections, the individual components of the feedback loop are documented in detail. First, we derive a PID controller that outputs task space acceleration commands. Then we focus on interactions with the environment and describe means of detection and handling of contact forces. Finally, the base motion is considered and combined with the end-effector commands into a single control output. Deriving the system dynamics (equations of motion (2.14)) by hand is a tedious and error-prone process. We avoid such a derivation by relying on efficient numerical computation of the relevant dynamic terms at runtime. This project uses the algorithms implemented in the Rigid Body Dynamics Library (RBDL)¹, which also provides methods for calculating Jacobians and their time derivatives. Those quantities are hence assumed to be available for control purposes. Of course, the model used by RBDL needs to be updated with the current system state. While all necessary measurements can be extracted directly from a simulator, a state observer needs to be employed when operating a partially observable physical system.

3.1.1

Desired Task Space Acceleration

One of the main tasks of manipulator control is to bring the end-effector into a desired pose. Since the actuators are commanded on the torque level, it is natural to formulate desired motions in terms of accelerations. This section is concerned with the accelerations in task or operational space. The end-effector pose x_ee contains the position and orientation of the end-effector with respect to a given frame. In our framework, all calculations are performed in the world frame, which allows setting the operational space target only once. If the position of the manipulator or the base receives a perturbation or is refined through a state estimator, the controller can react accordingly without re-setting the target. The desired end-effector acceleration ${}_I\ddot{x}_{ee}^{des}$ is the control authority which is used to reach convergence between the current pose ${}_I x_{ee}^{curr}$ and the desired one ${}_I x_{ee}^{des}$.

17

˙ des xdes ee , x ee ¨ des x ee Σmotion

Contact est Contact f des fee force handling ee Σforce estimator

PID

Manipulability maximization

q¨joints,0

¨ cmd x ee

cmd fee

Differential inverse kinematics

state measurements

Torque cmd System limits τ

cmd q˙joints

Inverse τ des dynamics

cmd q¨joints

R

Chapter 3. Control and Simulation 18

Figure 3.1: Block diagram of the control architecture. The feedback may in part be provided by a state observer.

19

3.1. Control Formulation

A PID controller is employed to convert from an error in pose to desired accelerations. Essentially, it can be reduced to the formula Z des des curr des curr des curr ¨ ee = Kp ( I xee I xee ) + Ki ( I xee I xee ) dt + Kd ( I x˙ ee − I x˙ ee ) . (3.1) Ix We use the boxminus ( ) operator here to signify a special form of subtraction that yields the difference between two poses. The result is a six-dimensional difference vector. The first three components represent the difference in position and the last three components constitute the rotation between desired and actual orientation, represented as a rotation vector2 . The gains Kp , Ki , Kd are diagonal matrices in R6×6 . Typically, the first three and the last three diagonal entries are identical as they relate to the position and des orientation components respectively. Often, the desired twist I x˙ ee is zero although it may be explicitly defined, for example when tracking a trajectory. In practice, one should prevent the integrator part from becoming too large, hence there is a limit imposed. Furthermore, we specify a maximum acceleration output for safety reasons.

3.1.2

Contact Detection

For safe and dexterous manipulation capabilities, a quick and reliable collision detection is vital. Here, we follow an approach developed by De Luca et al. [7, 8] that not only avoids any (potentially noisy) acceleration measurements to estimate a contact force but also eliminates the need for additional force sensors. Bodie [3] demonstrated in her work that the torque sensing capabilities of our robot should suffice to estimate interaction forces. Not all contacts can be detected. For instance, if the structure of the robot fully absorbs the contact force (i.e., when the latter acts in a singular direction), the torque signals provide no information on that force. We make the assumption that contacts only occur at the end-effector. This simplifies our calculations and is reasonable if the environment is well known and self-collisions are avoided. In some cases, however, it is possible to determine which link has collided by identifying the first joint whose motion does not agree with the applied torques. This becomes increasingly difficult for links that are close to the base because they have fewer degrees of freedom in space and therefore external forces are more likely to be supported by the structure only, hence remaining undetected. We define a residual r ∈ R8 as the effect of external contact forces that is seen on a joint torque level. Basically, the actual motion of the robot is compared with the expected system dynamics and the difference is interpreted as a residual (i.e., a form of error). This assumes that the system model is sufficiently accurate, because all model mismatches falsely contribute to the residual. In the following, we define a number of terms that will be necessary for estimating the contact force: ˙ We The generalized momentum of the robot (in all degrees of freedom) is p = M q. are interested in the reduced version p˜ of the momentum and calculate its derivative p˜ = p˜˙ = =

T > M T q˙joints , ˙ T + T > M T˙ )q˙joints + T > M T q¨joints (T˙ > M T + T > M ˜ − g˜ −J˜> fee . ˙ T + T > M T˙ )q˙joints + S˜> τ − b (T˙ > M T + T > M ee {z } |

(3.2) (3.3)

˜˙ 1 =:p

2 The direction of the vector specifies the axis of rotation and the magnitude is equal to the angle expressed in radians.

Chapter 3. Control and Simulation

20

The matrix T embodies again the kinematic driving model, introduced in equation (2.8). In the last line (3.3), we substitute the reduced equations of motion (2.18). The only term that is not straightforward to obtain from the system is the time ˙ . We show in the following how it can be calcuderivative of the mass matrix M lated. According to the projected Newton-Euler method, the mass matrix can be constructed by summing contributions from all bodies k ∈ [1, nbodies ] of the system. Using spatial Jacobians (see equation (2.4)) this can be written as M=

nbodies X k=1

(I Jk )

>



I3 0

0 RIK

|



 K Θk {z

I3 0

0 RIK

> I Jk

,

(3.4)

}

=I Θk

where ${}_K\Theta_k \in \mathbb{R}^{6\times 6}$ is the inertia matrix of body k relative to its own reference frame K. This is a constant quantity. The time derivative of the mass matrix is therefore obtained by applying the chain rule to equation (3.4). Time derivatives of the Jacobians can be accessed through the model in RBDL. Time derivatives of the rotation matrices are given in equation (2.12). In the literature, calculations for the derivative of the mass matrix are sometimes simplified by making use of a skew-symmetry in the equations of motion: the centrifugal and Coriolis terms b can always be written as a matrix-vector product b = C(q, q̇) q̇. This matrix C is not unique and can be chosen such that the matrix Ṁ − 2C is skew-symmetric, for example by using Christoffel symbols of the first type [30, p. 258]. If this is the case, the skew-symmetry allows us to find an explicit expression for the mass matrix derivative:

$(\dot{M} - 2C) + (\dot{M} - 2C)^{\top} = 0 \quad \Longrightarrow \quad \dot{M} = C + C^{\top} . \qquad (3.5)$


We do not use this shortcut because our model provides no direct way of extracting the matrix C with the required properties. Irrespective of which method is used, all terms involved in the calculation of $\dot{\tilde{p}}_1$ are available at this stage. Now consider the (continuous-time) dynamic system for the residual r,

$r = k_r \left( \int_0^t \left( \dot{\tilde{p}}_1 - r \right) d\tau - \tilde{p} \right) . \qquad (3.6)$

By taking the time derivative of the above,   > r˙ = kr p˜˙ 1 − r − p˜˙ = −kr (r − J˜ee fee ) ,

(3.7)

and assuming a perfect system model, it can be observed that the residual is the state of the stable first order system (3.6), driven by the end-effector force projected into joint space. If we project the residual back into task space, then the residual wrench

f_{ee}^\text{residual} = \left( \tilde{J}_{ee}^\top \right)^\dagger r    (3.8)

tracks the end-effector force for sufficiently high gains k_r. The residual r needs to be updated by the controller according to the differential equation above. Because it exhibits jerky behavior, the residual wrench f_{ee}^\text{residual} is passed through a first order filter before it is used in other calculations. We shall term this filtered quantity the estimated contact force f_{ee}^\text{est}.
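A minimal sketch of how equations (3.6)-(3.8) could be discretized in the controller is given below. It assumes Eigen types and that the quantities \dot{\tilde{p}}_1, \tilde{p}, and \tilde{J}_{ee} are computed elsewhere from the model; the class and member names are illustrative, not taken from our code base.

#include <Eigen/Dense>

// Sketch of a discrete-time update of the momentum residual (3.6)-(3.8).
// All model-dependent inputs are assumed to be computed elsewhere; the
// names used here are illustrative only.
class ContactForceEstimator {
 public:
  ContactForceEstimator(double gain, double dt, double filterAlpha)
      : kr_(gain), dt_(dt), alpha_(filterAlpha),
        integral_(Eigen::VectorXd::Zero(8)),
        residual_(Eigen::VectorXd::Zero(8)),
        forceFiltered_(Eigen::VectorXd::Zero(6)) {}

  // pTildeDot1: bracketed term of (3.3); pTilde: reduced generalized momentum;
  // JeeTilde: 6x8 end-effector Jacobian of the reduced system.
  void update(const Eigen::VectorXd& pTildeDot1,
              const Eigen::VectorXd& pTilde,
              const Eigen::MatrixXd& JeeTilde) {
    // Explicit Euler step of the integral in (3.6), then the residual itself.
    integral_ += dt_ * (pTildeDot1 - residual_);
    residual_ = kr_ * (integral_ - pTilde);

    // Project back into task space, eq. (3.8): f = (J~_ee^T)^+ r,
    // realized here as a least-squares solve.
    Eigen::VectorXd forceRaw =
        JeeTilde.transpose().completeOrthogonalDecomposition().solve(residual_);

    // First-order low-pass filter yielding the estimated contact force.
    forceFiltered_ = alpha_ * forceRaw + (1.0 - alpha_) * forceFiltered_;
  }

  const Eigen::VectorXd& estimatedContactForce() const { return forceFiltered_; }

 private:
  double kr_, dt_, alpha_;
  Eigen::VectorXd integral_, residual_, forceFiltered_;
};

In free motion the filtered force should stay near zero; a threshold on its magnitude (see Section 3.2.2) then decides whether a contact has occurred.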

3.1.3 Contact Handling

If a contact is detected and the desired end-effector acceleration points in the direction of the obstacle, selection matrices are used to prevent the controller from fighting against the obstacle and producing dangerously large interaction forces. Those selection matrices essentially define which directions in task space are position controlled. By construction, the orthogonal complement is then force controlled. We adopt a simple approach for calculating the selection matrices: We define a contact frame C such that the estimated contact force acts in the x-direction of that frame. This allows writing

{}_C f_{ee}^\text{est} = \begin{bmatrix} \| f_{ee}^\text{est} \| & 0 & 0 \end{bmatrix}^\top.    (3.9)

This, of course, does not define the contact frame uniquely, but the remaining two directions may be arbitrary. The selection matrices are first constructed in that contact frame. Only directions that allow free motion should be commanded by a position controller. When in contact with a wall, we continue to control the position in the plane that is orthogonal to the contact force and the rotation around the force vector. The remaining subspace shall be force-controlled, yielding the selection matrices

{}_C \Sigma_\text{motion} = \mathrm{diag}\!\left( 0, 1, 1, 1, 0, 0 \right),    (3.10)
{}_C \Sigma_\text{force} = I_6 - {}_C \Sigma_\text{motion} = \mathrm{diag}\!\left( 1, 0, 0, 0, 1, 1 \right).    (3.11)

This choice ensures that the columns of the two selection matrices span mutually orthogonal subspaces in R^6. Finally, the selection matrices need to be rotated into the world frame, because this is the frame in which the control commands are calculated. The transformation reads

{}_I \Sigma = \begin{bmatrix} R_{IC} & 0 \\ 0 & R_{IC} \end{bmatrix} {}_C \Sigma \begin{bmatrix} R_{IC} & 0 \\ 0 & R_{IC} \end{bmatrix}^\top.    (3.12)

The rotation matrix R_{IC} can be easily constructed by finding two unit vectors e_1 and e_2 that are orthonormal to each other and mutually orthogonal to the direction of the contact force e_f, such that they form a basis of R^3. One option is to use vector cross products:

{}_I e_1 = \frac{{}_I e_f \times {}_I e_z}{\left\| {}_I e_f \times {}_I e_z \right\|},    (3.13)
{}_I e_2 = {}_I e_f \times {}_I e_1.    (3.14)

The {}_I e_z unit vector was used as an example here. If it turns out to be almost aligned with e_f, one should use another basis vector for numeric stability. The vectors can then be combined to form a rotation matrix R_{IC} = \begin{bmatrix} {}_I e_f & {}_I e_1 & {}_I e_2 \end{bmatrix}.

Furthermore, in case of contact, it is desirable to apply a contact force in order to keep the contact closed. Hence we apply the desired force magnitude f^\text{des} in the direction of the estimated contact force f^\text{est},

f_{ee}^\text{cmd} = \Sigma_\text{force} \, \frac{f^\text{des}}{\| f_{ee}^\text{est} \|} \, f_{ee}^\text{est},    (3.15)

in case the acceleration is directed against the obstacle. Otherwise, the contact force command automatically becomes zero since the force selection matrix is a zero matrix3. The desired acceleration is likewise filtered to produce the command

{}_I \ddot{x}_{ee}^\text{cmd} = {}_I \Sigma_\text{motion} \, {}_I \ddot{x}_{ee}^\text{des}.    (3.16)

Note that there is only feed-forward control on the contact force. One may easily extend this formulation by closing a PID feedback loop on the contact force.

3 Without contact, all directions are position controlled, making the motion selection matrix the identity matrix.
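As an illustration, the following sketch shows how the contact frame rotation (3.13)-(3.14) and the frame change of a selection matrix (3.12) could be implemented with Eigen; the function names, the choice of the helper axis, and the alignment threshold are illustrative assumptions.

#include <Eigen/Dense>
#include <cmath>

// Build the contact frame rotation R_IC from the estimated force direction,
// following eqs. (3.13)-(3.14).
Eigen::Matrix3d contactFrameRotation(const Eigen::Vector3d& forceInWorld) {
  const Eigen::Vector3d ef = forceInWorld.normalized();
  // Pick a helper axis that is not (nearly) parallel to the force direction.
  Eigen::Vector3d helper = Eigen::Vector3d::UnitZ();
  if (std::abs(ef.dot(helper)) > 0.95) helper = Eigen::Vector3d::UnitY();
  const Eigen::Vector3d e1 = ef.cross(helper).normalized();  // eq. (3.13)
  const Eigen::Vector3d e2 = ef.cross(e1);                   // eq. (3.14)
  Eigen::Matrix3d R_IC;
  R_IC.col(0) = ef;
  R_IC.col(1) = e1;
  R_IC.col(2) = e2;
  return R_IC;
}

// Rotate a 6x6 selection matrix from the contact frame into the world frame,
// eq. (3.12).
Eigen::Matrix<double, 6, 6> selectionInWorld(
    const Eigen::Matrix<double, 6, 6>& Sigma_C, const Eigen::Matrix3d& R_IC) {
  Eigen::Matrix<double, 6, 6> blockRot = Eigen::Matrix<double, 6, 6>::Zero();
  blockRot.topLeftCorner<3, 3>() = R_IC;
  blockRot.bottomRightCorner<3, 3>() = R_IC;
  return blockRot * Sigma_C * blockRot.transpose();
}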

3.1.4 Base Motion Coordination

For a mobile manipulator, two distinct modes of operation are required depending on the task. One is to avoid any motion of the mobile base and operate the arm with high precision. This is almost equivalent to a fixed-base manipulator. The second is a form of whole body control where arm and base collaborate to complete a manipulation task. Both of these options may be necessary during a mission.

Implementing the first case (fixed base) is relatively simple: We use a hierarchical controller that considers the base motion as its primary goal and requests it to be zero. This restricts two degrees of freedom of our system, so the remaining six can be used for defining the configuration of the manipulator arm. The case that explicitly asks for base motion is more interesting and thus discussed in the following. Essentially, the hierarchy is reversed and the base motion is only a secondary goal that is to be achieved in the null space of the primary objective, which is the motion control of the end-effector.

The main reason why the base should move is to enlarge the workspace of the manipulator arm. As the end of the workspace is approached, the configuration becomes near-singular. A popular measure [1, 10] of this effect is the kinematic manipulability w(J(q)), defined as

w(J(q)) := \sqrt{ \det\!\left( J(q) J^\top(q) \right) }.    (3.17)

As Siciliano et al. [30, p. 126] note, this is nothing but the product of the singular values of the Jacobian J(q). The larger the singular values, the more a change in joint positions is amplified into a motion of the end-effector. A zero singular value is equivalent to a singular configuration. Different measures considering the manipulability ellipsoids (e.g., their volume or eccentricity) [1] may prove useful, too, but are not considered here.

We use the mobile base motion to avoid singular configurations of the arm. Specifically, we desire movements to always maximize the manipulability of the arm. To calculate this manipulability, we consider the Jacobian of the end-effector, reduced to the columns corresponding to the arm joints. The notation \hat{J}_{ee}(q) ∈ R^{6×6} symbolizes this cropped version of the end-effector Jacobian. This reduction is necessary because otherwise the floating base degrees of freedom produce almost perfect manipulability for all configurations. Hence w would never be zero and would only be marginally affected by the arm configuration. One can find similar approaches in the literature; for example, Bayle et al. [1] use only the Jacobian of the arm, too. To push the system in a direction that increases the manipulability, we define singularity avoidance joint torques \tau_\text{sa} and align them with the gradient of the manipulability, such that

\tau_\text{sa} = k_{p,\text{sa}} \begin{bmatrix} \frac{\partial w(\hat{J}_{ee}(q))}{\partial q_\text{arm joints}} & 0 & 0 \end{bmatrix}^\top \in \mathbb{R}^8.    (3.18)

Note that only derivatives with respect to the arm joints are calculated and the wheel joint torques are deliberately set to zero. At this stage, we only want to know how the arm would need to move in order to increase its manipulability. The derivative with respect to a single joint can be calculated via

\frac{\partial w(J)}{\partial q_i} = \frac{1}{2} \sqrt{\det\!\left( J J^\top \right)} \, \mathrm{Tr}\!\left( \left( J J^\top \right)^{-1} \left( \frac{\partial J}{\partial q_i} J^\top + J \left( \frac{\partial J}{\partial q_i} \right)^{\!\top} \right) \right).    (3.19)


Partial derivatives of the Jacobian with respect to individual joints can be obtained from RBDL via Hessians. Finally, we cannot apply the torques (3.18) directly to the system, because they may interfere with the end-effector position control. Instead, we map them back to joint accelerations

\ddot{q}_\text{joints,0} = \tilde{M}^{-1} \tau_\text{sa} - k_{d,\text{sa}} \dot{q}_\text{joints}.    (3.20)

The damping constant kd,sa is required to prevent permanent base motion oscillations. The above accelerations will then be projected into the null space of the end-effector position control (hence the zero subscript), which ensures they do not disturb higher level objectives. Details are presented in the following section.
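To make the manipulability bookkeeping concrete, the sketch below computes w from (3.17) and approximates its gradient with central finite differences instead of the analytic expression (3.19); the callable that returns the arm Jacobian is a placeholder for the RBDL model interface.

#include <Eigen/Dense>
#include <cmath>
#include <functional>

// Manipulability measure (3.17).
double manipulability(const Eigen::MatrixXd& J) {
  return std::sqrt((J * J.transpose()).determinant());
}

// Numerical gradient of w with respect to the arm joints (an alternative to
// the analytic expression (3.19)). 'armJacobianAt' returns J^hat_ee(q).
Eigen::VectorXd manipulabilityGradient(
    const std::function<Eigen::MatrixXd(const Eigen::VectorXd&)>& armJacobianAt,
    const Eigen::VectorXd& qArm, double eps = 1e-5) {
  Eigen::VectorXd grad(qArm.size());
  for (int i = 0; i < qArm.size(); ++i) {
    Eigen::VectorXd qPlus = qArm, qMinus = qArm;
    qPlus(i) += eps;
    qMinus(i) -= eps;
    grad(i) = (manipulability(armJacobianAt(qPlus)) -
               manipulability(armJacobianAt(qMinus))) / (2.0 * eps);
  }
  return grad;
}

The singularity avoidance torques (3.18) are then obtained by stacking this gradient with zeros for the wheel joints and scaling by k_{p,sa}.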

3.1.5 Inverse Dynamics and Joint Commands

This final section of the control law derivation combines all of the above to finally produce commands that are sent to the actuators. In order to calculate the joint commands, the dynamics of the system are utilized. This allows incorporation of all modeled dynamic effects as well as gravity compensation. Observe that the end-effector control in Section 3.1.1 is treated exclusively in the operational space. This is useful because of lower dimensionality and more intuitive behavior. However, at this point it is necessary to convert back to joint space, since the commands need to be distributed to the individual joint actuators.

As noted before, the kinematic relationship between joint speeds and end-effector twist can be compactly written with Jacobians. Since we work on the acceleration level, we also need time derivatives of twist, which together read

\dot{x}_{ee} = \tilde{J}_{ee} \dot{q}_\text{joints},    (3.21)
\ddot{x}_{ee} = \dot{\tilde{J}}_{ee} \dot{q}_\text{joints} + \tilde{J}_{ee} \ddot{q}_\text{joints}.    (3.22)

The joint accelerations \ddot{q}_\text{joints} are not uniquely defined by the operational space acceleration due to redundancy in the system. Therefore the general solution reads

\ddot{q}_\text{joints}^\text{cmd} = \tilde{J}_{ee}^{\dagger} \left( \ddot{x}_{ee}^\text{cmd} - \dot{\tilde{J}}_{ee} \dot{q}_\text{joints}^\text{curr} \right) + \underbrace{\left( I_8 - \tilde{J}_{ee}^{\dagger} \tilde{J}_{ee} \right)}_{=: N_{\tilde{J}_{ee}}} \ddot{q}_\text{joints,0},    (3.23)

where the final joint accelerations \ddot{q}_\text{joints,0} may be chosen arbitrarily. The end-effector Jacobian null space projector N_{\tilde{J}_{ee}} ensures that those accelerations do not influence the primary objective of manipulator positioning. We choose those additional inputs in order to maximize the manipulability as shown in the previous section. The current joint speeds \dot{q}_\text{joints}^\text{curr} can be obtained by measurement, while the commanded task space acceleration stems from the PID controller. In any case (irrespective of hierarchy or contact state), the desired joint torques can finally be obtained by solving the reduced system dynamics (2.18) for the joint torques:

\tau^\text{des} = \left( \tilde{S}^\top \right)^{\dagger} \left( \tilde{M} \ddot{q}_\text{joints}^\text{cmd} + \tilde{b} + \tilde{g} + \tilde{J}_{ee}^\top f_{ee}^\text{cmd} \right).    (3.24)

The combination of acceleration and force control, as presented here, is often termed unified or hybrid force-motion control. It is common to limit the torque commands to a reasonable range [\tau_\text{min}, \tau_\text{max}] to protect the actuators and for safety reasons, e.g., if something in the control algorithm goes wrong. Note, however, that altering the torques affects reference tracking and may cause unexpected behavior.
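A compact sketch of how (3.23) and (3.24) could be evaluated with Eigen is given below, assuming the eight-joint reduced model from the text; the pseudo-inverses are realized with a complete orthogonal decomposition, and all names are illustrative.

#include <Eigen/Dense>

// Redundancy resolution (3.23): task-space acceleration command to joint
// accelerations plus a null-space contribution.
Eigen::VectorXd resolveAccelerations(const Eigen::MatrixXd& Jee,       // 6x8
                                     const Eigen::MatrixXd& JeeDot,    // 6x8
                                     const Eigen::VectorXd& xddotCmd,  // 6
                                     const Eigen::VectorXd& qdotCurr,  // 8
                                     const Eigen::VectorXd& qddot0) {  // 8
  const auto decomposition = Jee.completeOrthogonalDecomposition();
  const Eigen::MatrixXd Jpinv = decomposition.pseudoInverse();        // 8x6
  const Eigen::MatrixXd N =
      Eigen::MatrixXd::Identity(8, 8) - Jpinv * Jee;                  // null-space projector
  return Jpinv * (xddotCmd - JeeDot * qdotCurr) + N * qddot0;
}

// Inverse dynamics (3.24): joint torques from the reduced equations of motion.
Eigen::VectorXd desiredTorques(const Eigen::MatrixXd& StTilde,  // S~^T
                               const Eigen::MatrixXd& Mtilde, const Eigen::VectorXd& btilde,
                               const Eigen::VectorXd& gtilde, const Eigen::MatrixXd& Jee,
                               const Eigen::VectorXd& qddotCmd, const Eigen::VectorXd& fCmd) {
  const Eigen::VectorXd generalizedForces =
      Mtilde * qddotCmd + btilde + gtilde + Jee.transpose() * fCmd;
  return StTilde.completeOrthogonalDecomposition().solve(generalizedForces);
}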


[Figure 3.2 plots: x, y, and z end-effector position [m], reference and actual, versus time [s] over 0-24 s.]

Figure 3.2: End-effector reference tracking plots. The controller becomes operational after 4.0 s.

For velocity controlled joints (the wheels in our case), the commanded accelerations are simply integrated over a single time-step t_s and added to the current velocities, such that

\dot{q}_\text{joints}^\text{cmd} = t_s \cdot \ddot{q}_\text{joints}^\text{cmd} + \dot{q}_\text{joints}^\text{curr}.    (3.25)

This procedure is similar to the one found in Omrčen et al. [25].

3.2 Results

This section documents some results of the above modeling and control procedure. The results were obtained from a physics simulator. We use the Gazebo robot simulator4 due to its good integration with the ROS framework. It simulates the rigid body dynamics including contact interactions. We provide it with a model of our robot in the form of a link/joint hierarchy. We use the same parameters that are also passed to the RBDL model as part of the controller.


Parameter                              Value     Unit
Controller sampling time               0.0025    s
Actuator torque limit                  35.0      N m
Proportional gain on position          350.0     1/s²
Integrator gain on position            5.0       1/s²
Derivative gain on position            80.0      1/s
Proportional gain on base position     10.0      1/s²
Derivative gain on base position       6.0       1/s

Table 3.1: Parameters used for the reference tracking experiment in Figure 3.2.

3.2.1 Reference Tracking in Free Space

The plot in Figure 3.2 shows the end-effector position reference tracking behavior. Initially, a constant target position is set, to which the controller moves after being enabled at approximately four seconds. This is equivalent to a step input. To ensure smooth motions, the derivative gains are relatively high, such that there is no overshoot after switching the controller on. After roughly six seconds, the desired position begins to move along a straight line (in 3D space) of length 0.41 m. Repeated traversals of that line with a maximum velocity of 0.3 m/s and a maximum acceleration of 0.2 m/s² are achieved by fitting a fifth order polynomial time trajectory. The resulting oscillatory motion of the end-effector is clearly visible. Larger accelerations than the chosen ones tend to exceed the torque limits of the shoulder flexion joint due to the relatively high wrist mass. Note that the controller has no knowledge of the complete trajectory beforehand; instead, it merely reacts to changes in the commanded position. We do not supply any velocity reference. During the entire tracking phase, the orientation of the end-effector is held constant. Furthermore, the mobile base follows the arm motion in a way that tends to increase manipulability. After approximately 19 seconds, the desired position is held constant again and the actual position converges to the set-point. Modeling inaccuracies are compensated by the integrating behavior, which ensures a zero steady-state error. In summary, one may conclude that the tracking is reasonably accurate. Table 3.1 summarizes the parameters used for creating Figure 3.2.

3.2.2 Reference Tracking with Contact

The following plot in Figure 3.3 shows the reference tracking behavior when the end-effector comes in contact with an obstacle. In our experiment, the obstacle is a flat wall with normal direction along the x-axis. One can observe that the reference tracking works as expected while in free space (x > 0.26 m). Once the set-point moves behind (or inside) the wall, the reference cannot be followed anymore, so our controller ensures that the pose remains stable and the actual position stays on the surface of the obstacle until the reference returns. Once the direction of the desired end-effector acceleration obtains a negative projection on the outside-pointing wall normal, the contact detaches and the selection matrices switch to complete position control in all (task space) degrees of freedom. Careful observation of Figure 3.3 reveals that detachment occurs before the reference returns to the front side of the wall because of the anticipatory derivative component of the PID controller. The desired contact force is 5 N, which can be observed in the bottom plot of Figure 3.3 during contact phases. It shows the estimated contact force based on residual calculations.

4 See http://gazebosim.org/


[Figure 3.3 plots: x end-effector position [m], reference and actual, and estimated contact force in x-direction [N], versus time [s] over 0-30 s.]

Figure 3.3: End-effector reference tracking during contact phases. Contact with an obstacle occurs between 5.8 s and 11.2 s and again between 14.6 s and 19.8 s. Outside of those intervals the end-effector moves in free space. The obstacle is a flat wall at position x = 0.26 m.


After an initial transient, the controller manages to stabilize the contact force accurately. The negative sign is due to the fact that the force on the environment acts against the x-coordinate axis in this experiment. A “true” value of the contact force cannot be provided because the Gazebo simulator only outputs extremely noisy contact force signals. Unfortunately, peaky transients in the estimated contact force occur both when closing the contact and at detachment, which can be explained as follows. When the end-effector initially hits the wall, its momentum changes very rapidly. This impact causes a sharp peak in the contact force that can hardly be avoided. If the bandwidth of the controller is increased, this peak can be reduced by actively decelerating the end-effector. Furthermore, modeling mismatches contribute to the unsteady behavior during transients, which was also observed by Bodie [3].

Experiments involving free motion (no contacts) show that the residual never quite reaches zero. This can be explained by modeling errors, especially relating to the base. One could imagine the following situation: The arm is extended from the base, such that the arm center of mass and arm base do not coincide in the xy-plane of frame B. Furthermore, the joint torques are controlled such that gravity is compensated but no motion occurs. An additional torque input in the first arm joint (SH_ROT) would rotate the arm around the z-axis. By the differential drive model from Section 2.2.2, the controller expects that wheel torques are necessary to keep the base still. In reality, however, no wheel torques are required, because friction on the four wheels prevents any motion. This unmodeled effect therefore induces a false residual.

Because the residual is never quite zero, a threshold is needed to decide when a contact has indeed occurred. This threshold must necessarily be smaller in magnitude than the desired contact force, otherwise the controller would be constantly switching between contact and free motion mode. Unfortunately, this threshold causes difficulties when the contact state changes. Both of those problems are aggravated by the delay in recognizing the opening or detachment of a contact. This delay arises due to the way the residual is calculated (first order system and some smoothing, see Section 3.1.2), because accelerations are not used directly. Furthermore, there is some elasticity in the contact, which is also not accounted for. Finally, note that the integrator of the end-effector PID controller must be prevented from winding up during the wall contact phase; otherwise the tracking behavior after detachment is impacted negatively until the integrator has emptied again. In our case, this is achieved by temporarily disabling the integrating behavior of the controller while in contact.
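One simple way to reduce the mode switching described above, beyond a single threshold, is a hysteresis on the contact decision; the following sketch uses illustrative threshold values and is not part of the implemented controller.

// Contact-state decision with hysteresis around the detection threshold.
// Threshold values are illustrative and would have to be tuned on the system.
bool updateContactState(bool wasInContact, double estimatedForceNorm) {
  constexpr double kEnterThreshold = 3.0;  // [N], below the 5 N desired force
  constexpr double kLeaveThreshold = 1.5;  // [N], lower bound to avoid chattering
  if (!wasInContact && estimatedForceNorm > kEnterThreshold) return true;
  if (wasInContact && estimatedForceNorm < kLeaveThreshold) return false;
  return wasInContact;
}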

3.2.3 Manipulability Maximization

With the procedure described in Section 3.1.4, qualitative observations show that the base follows the end-effector motion as desired to keep the arm in a favorable position. Figure 3.4 shows the configuration of the robot at different time points while the manipulability is being maximized and the end-effector pose held constant. After 30 seconds, there is no observable difference in the configuration anymore.

[Figure 3.4: six configuration snapshots at t = 9 s, 13 s, 17 s, 21 s, 25 s, and 29 s.]

Figure 3.4: Configuration of the robot arm and base at different time points during manipulability maximization. The time values correspond to Figure 3.5.

Quantitative data on this effect are provided by the time evolution of the arm manipulability w(\hat{J}_{ee}(q)) (see (3.17), (3.18)), which is plotted in Figure 3.5. Interestingly, the manipulability reaches a maximum (as one would expect) and then decreases again to finally settle at a constant value. This overshoot is due to the fact that the system is not really controlled to achieve maximum manipulability; instead, torques are simply generated that push the system towards a more favorable configuration, without considering its momentum. After overshooting, a new local maximum is attained, which is clearly suboptimal compared to the first one.

The manipulability ascent can be tuned more aggressively if necessary. This would make the base follow the arm more quickly, hence reducing the chance of a singular configuration (fully stretched out arm) even when the end-effector set-point moves away from the base quickly. However, this would in turn induce extensive and fast base motion. Since there are still modeling issues with the wheeled base, especially related to residual calculations (see Section 3.2.2), implementing such a change does not appear useful at the present stage.

As expected with regard to the hierarchical formulation and null-space projection (3.23), there are cases when the objectives cannot be combined and the null space is empty. For example, when the end-effector set-point moves away from the base too far and too quickly, the manipulator arm stretches out completely and therefore enters a singular configuration. In that case, the null space of the end-effector Jacobian loses one dimension. If possible, the wheel joints must then be used to drive into the desired direction and contribute to the end-effector position control. To an observer, it appears as if the arm pulls the base behind it.

One problem that is inherent to our manipulability-maximization formulation is that it only considers instantaneous improvement. By only evaluating the Jacobian (which is an instantaneous mapping from joint speeds to task-space velocities), one simply commands those joints whose infinitesimal motions improve the manipulability. From that perspective, it does not make sense to twist the base, because no infinitesimal improvement in manipulability is achieved (hence the gradient in that direction is zero accordingly). However, twisting the base may be very useful when considering a longer time horizon, as it potentially enables motions that are much more suitable to increase the manipulability. Our approach fails to recognize this possibility due to its instantaneous nature and the lack of a motion planner.


[Figure 3.5 plot: manipulability w(\hat{J}_{ee}(q)) (·10⁻³) versus time [s] over 0-100 s.]

Figure 3.5: Time evolution of the manipulability while the end-effector pose is held constant. The controller is activated at approximately 9 s.


Chapter 4

Mission

We now turn away from the isolated treatment of the mobile manipulator and consider an entire mission that it could accomplish. A typical mission consists of a number of stages: First, the robot starts off from its initial position and needs to explore the environment. During this exploration phase, it should construct a three-dimensional map of the environment and permanently localize itself with respect to the latter. Given a rough description of what the manipulation site looks like, the robot identifies it, navigates there, and positions itself appropriately. This is when the manipulation starts and the arm begins to move. In our scenario, this process is guided by a visual servoing scheme, i.e., by using feedback from a camera. What follows is very task specific. In the case of the MBZIRC, the robot identifies a wrench of the appropriate size on a control panel, picks up this tool with the gripper, and uses it to rotate a valve stem. As mentioned, this procedure is intended to be executed autonomously.

The following sections describe the individual components of the mission and the mission control software in more detail. Results are presented in Section 4.4 at the end of this chapter.

4.1 Navigation

4.1.1 Features of the Navigation Software

We require our robot to navigate safely through a previously unknown environment, avoid obstacles (static and dynamic ones), and localize itself with respect to features in the environment. Due to the limited scope of this thesis, we refrain from a full treatment of those requirements. Moreover, suitable algorithms already exist, making further programming endeavors redundant. For this project, we use the navigation software stack developed by Krüsi [19] and adapt it to run on the Husky. A short overview of the available functionalities is given in the following:

A mapping and localization module performs simultaneous localization and mapping (SLAM) with a laser scanner, an IMU, and wheel encoder readings. Each time a new laser scan (three-dimensional point cloud) becomes available, a map of the environment is updated. The map itself is a point cloud representation of the environment at a chosen resolution. The alignment of the new cloud with the current map is performed by iterative closest point matching (ICP). As new features of the environment come into sight, they are added to the map. Representations of objects that appear no longer present in the environment (i.e., laser beams seem to penetrate them) slowly fade from the map. This is necessary in order to remove dynamic objects from the database.


At the same time, the robot localizes itself within the constructed map, which acts as the world frame in the control formulation. The localization part therefore provides the pose of the robot base frame with respect to the world frame.

Path planning algorithms based on rapidly-exploring random trees (RRT) are available to calculate a feasible path between the current position and a defined goal in the mapped environment. The planner takes the mapped obstacles into account and additionally ensures that the ground contour and inclination can be traversed safely. A general navigation goal, for example given by the user in GPS coordinates, may also reside in an unexplored region. In that case, the robot will determine a path to the end of the known map and re-plan periodically as the map is growing. In the current implementation, there is a potential threat to the successful completion of the mission because the robot may not find its way out of a dead end.

Finally, a navigation controller calculates wheel velocity commands to follow the planned path. As a result, the robot should end up in the desired orientation at the goal position. An additional safety layer, which is independent of the SLAM module, automatically stops the motion if obstacles close to the robot appear in the laser scan. Of course, manual driving commands can be issued by an operator by means of a joystick at any time.

4.1.2 Detection of the Manipulation Site

The above navigation software package permits relatively autonomous maneuvering if a goal is given to the system from an external reference. This section enhances the autonomy of the navigation stack by developing an automatic goal detection algorithm. The desired behavior entails identifying a suitable goal pose and navigating there safely. Due to the application of this work to the MBZIRC 2017, this section considers an exemplary manipulation site as the navigation goal. It is characterized by a rectangular planar user panel of width 1.0 m and height 0.75 m. As part of the mission, the panel detection module is expected to monitor the map or incoming laser scans while the mobile robot is exploring the area. Once the detection module has successfully identified the manipulation site, the new goal position for navigation is placed in front of the panel. The path planner should then find a feasible route which is subsequently traversed. We expect the robot to position itself parallel to the panel at a distance of approximately 30 cm. Sensory information for the detection is obtained from a laser scanner (see sensor overview in Section 2.1). 32 lasers are vertically aligned, resulting in a vertical angular resolution of 1.33°. If we consider the laser to be approximately at the same height as the center of the panel, then the distance at which two laser beams will certainly hit the panel is 16.15 m. We consider two beams to be the bare minimum for detecting the panel without too many false positives. If the detection algorithm operates on single laser scans, the robot must therefore necessarily be within a radius of 16.15 m around the panel in order to detect the latter. This restriction can be relaxed when the detection module uses the map as input data, because the map accumulates points from several laser scans which will likely hit the panel at various positions. The algorithm that detects the rectangular planar structure is independent of the supplied type of point cloud, i.e., it can operate on both the current scan and the constructed map. When switching, the parameters need to be adjusted. Tuning of the involved parameters and detection thresholds is performed manually on data recorded at a field test. Figure 4.1 gives a first impression, showing how the 3D laser scan perceives the environment and how our algorithm identifies the panel within that point cloud.
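The 16.15 m detection radius quoted above is consistent with requiring the vertical spacing between adjacent scan lines at distance d to be at most half the panel height, so that at least two beams hit the panel regardless of alignment:

d = \frac{0.75\,\mathrm{m} / 2}{\tan(1.33^\circ)} \approx 16.15\,\mathrm{m}.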


Figure 4.1: The points recorded by the laser are represented as yellow dots. Due to flat ground, they are mostly concentric circles around the robot. The panel was identified in this point cloud by our algorithm and visualized by a green frame. Points that belong to the panel are colored purple.

Algorithms

In the following, we present an algorithm that detects a rectangular panel in a point cloud. An exhaustive search for extracting a planar component from a large set of points would be computationally intractable. A suitable method is provided by the Random Sample Consensus (RANSAC) algorithm, an iterative and randomized method. We use it at two points in our algorithm in order to identify planes and lines in the point cloud. In RANSAC terminology, an inlier, as opposed to an outlier, is a data point that supports a given model. In our software implementation for panel detection, we make heavy use of methods provided by the Point Cloud Library (PCL)1.

Algorithm 1 outlines our steps to identify the panel in a point cloud. The first step is to detect planar components within the cloud, which is performed by RANSAC. Upon extracting a candidate plane (identified by the set of inliers and the plane coordinates), the algorithm checks if the plane is vertical. If that criterion is fulfilled, the identified planar (and vertical) subset of the point cloud is passed to a function (line 10) that validates whether the panel is indeed part of the given plane. Said function is shown in Algorithm 2. The inliers that constitute the candidate plane are grouped by Euclidean clustering, which identifies a set of points as a cluster if they are geometrically close together. Simply speaking, if a point is within a defined threshold distance to another point, they must necessarily belong to the same cluster. For each identified cluster, we determine its geometric size by searching for extremal points in horizontal and vertical directions among the points in the cluster. If the dimensions agree with the nominal size of the panel, we conclude that the panel is present.

In case an identified plane is not vertical, the plane is discarded and another candidate is considered (if available).

1 See http://pointclouds.org/


Algorithm 1 Panel detection procedure
 1: function PanelDetection(IncomingCloud)
 2:     Make working copy of cloud
 3:     while iterations < max iterations do
 4:         Extract planar component from working cloud via RANSAC
 5:         if no planar component found then
 6:             return false
 7:         end if
 8:         Extract normal vector of plane
 9:         if normal vector is horizontal then
10:             CheckPlaneForPanel(PlanarComponent)    ▷ see Algorithm 2
11:             if panel was found then
12:                 return true
13:             end if
14:         else
15:             Remove lines from plane inliers.
16:         end if
17:         Remove plane inliers from working cloud
18:     end while
19:     return false
20: end function

Algorithm 2 Finding the panel in a plane
 1: function CheckPlaneForPanel(PlanarCloud)
 2:     Project all points onto the plane
 3:     Cluster points with Euclidean clustering
 4:     for all clusters do
 5:         Check the dimension of the cluster (width and height)
 6:         if dimension within tolerance then
 7:             Extract and publish the panel pose
 8:             return true
 9:         end if
10:     end for
11:     return false
12: end function
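The two PCL building blocks behind these algorithms, RANSAC plane segmentation (Algorithm 1, line 4) and Euclidean clustering (Algorithm 2, line 3), could look roughly as follows; the thresholds are illustrative placeholders rather than the values tuned on the field-test data.

#include <cmath>
#include <vector>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

// Extract one planar component via RANSAC and report whether it is vertical.
bool extractVerticalPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                          pcl::PointIndices::Ptr inliers,
                          pcl::ModelCoefficients::Ptr coefficients) {
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.05);  // points within 5 cm count as inliers
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);
  if (inliers->indices.empty()) return false;
  // The plane is vertical if its normalized normal has almost no z component.
  const double a = coefficients->values[0], b = coefficients->values[1],
               c = coefficients->values[2];
  return std::abs(c) / std::sqrt(a * a + b * b + c * c) < 0.1;
}

// Group the plane inliers into clusters of geometrically close points.
std::vector<pcl::PointIndices> clusterPlaneInliers(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& planarCloud) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(planarCloud);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.10);  // 10 cm neighbour distance
  ec.setMinClusterSize(20);
  ec.setSearchMethod(tree);
  ec.setInputCloud(planarCloud);
  ec.extract(clusters);
  return clusters;
}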


However, it is necessary to remove certain line segments from the plane inliers (line 15 in Algorithm 1) before discarding the plane, as the following failure scenario shows: The RANSAC plane identification may come up with a horizontal plane that cuts through the actual panel. Then the intersection of that plane with the panel results in a line of points that belongs to the panel but contains inliers of the horizontal candidate plane. Hence, even though the algorithm rejects that plane because it is not vertical, not all of the plane's inliers should be removed from the working cloud because we want to retain the points that belong to the panel. Falsely removing points from the panel would make it increasingly difficult to identify the panel as such. We prevent the deletion of such a line of points that could potentially belong to the panel by performing another RANSAC segmentation which finds lines of the appropriate length in the horizontal plane. Those lines are then removed from the set of inliers before discarding the remaining inliers from the working point cloud.

For a more stable estimate of the panel pose, the results of successive identification iterations are processed by two filters. While the position filter is of a simple moving average type, the orientation filtering employs spherical linear interpolation (SLERP). This filtering procedure must be undertaken in the world frame because this is where the panel pose is expected to be constant. The filtered pose is fed back to the discovery procedure. Hence, if the position of the panel in the cloud is already known from previous iterations, the identification can be made much faster by only searching within a certain neighborhood of the previously known panel position.

One issue with this procedure is that the orientation of the detected plane is not unique because the direction of the normal can also be reversed. In the presented version, our algorithm provides no guarantees that the plane normals identified in successive scans point in the same direction. A flip to the opposite direction causes a problematic and unnecessary disturbance to the orientation filter because it is not aware of the modulo π property of the orientation. Additionally, not only do we want to keep track of the orientation of the plane, but also recognize where its front side is. A simple workaround to this problem presents itself by enforcing the plane's normal to always have a positive projection on the vector from world to panel. Alternatively, one could monitor incoming pose estimates for rotation differences close to 180° and rotate them manually.
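A minimal sketch of the pose filtering could look as follows with Eigen; the blending weight is an illustrative choice, and the quaternion sign check only handles the double-cover ambiguity, while the genuine normal-flip problem would still have to be resolved as discussed above.

#include <Eigen/Geometry>

// Moving-average position filter and SLERP orientation filter for successive
// panel pose detections (all quantities expressed in the world frame).
struct PanelPoseFilter {
  Eigen::Vector3d position = Eigen::Vector3d::Zero();
  Eigen::Quaterniond orientation = Eigen::Quaterniond::Identity();
  bool initialized = false;
  double blend = 0.2;  // weight of a new measurement

  void update(const Eigen::Vector3d& posMeas, Eigen::Quaterniond quatMeas) {
    if (!initialized) {
      position = posMeas;
      orientation = quatMeas;
      initialized = true;
      return;
    }
    // Take the quaternion on the same hemisphere to avoid interpolating the
    // long way around (this does not resolve a true 180-degree normal flip,
    // which must be handled upstream, e.g., by the projection check above).
    if (orientation.dot(quatMeas) < 0.0) {
      quatMeas = Eigen::Quaterniond(-quatMeas.w(), -quatMeas.x(),
                                    -quatMeas.y(), -quatMeas.z());
    }
    position = (1.0 - blend) * position + blend * posMeas;          // moving average
    orientation = orientation.slerp(blend, quatMeas).normalized();  // SLERP step
  }
};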

4.2 Manipulation

Once the robot has positioned itself in front of the panel, it is ready to enable manipulation mode. This is where the mobile manipulator controller developed in Chapter 3 commences operation. Since most of its functionality has previously been presented, this section is mostly concerned with describing additional components involved in manipulation.

Initially, the arm controller relies on the localization information from the navigation module and the estimate of the panel position. It uses this information to position the end-effector in a way such that the panel is in the field of view of the camera. This should be the only time when the robot arm operates without direct feedback2. The arm controller is aware of different mission modes and accordingly exhibits different behaviors. In the initial “free motion” mode, an emergency stop is automatically initiated upon detection of a contact. In contrast, when the task demands approaching and gripping an object, the controller switches to a compliant behavior as described in Section 3.1.3.

2 Feedback from the robot joints is always available, but the commanded end-effector position is set in a purely feed-forward way.


A trajectory generator (adapted from Bodie [3]) is supplied with end-effector pose commands by an operator or from the mission control software. It makes use of fifth order splines to calculate a time-parametrized trajectory from the current position to the goal while respecting acceleration and velocity limits. The trajectory points are then sent sequentially to the manipulation controller, which communicates with the actuators. In cases that require merely velocity control of the end-effector, there is no need for trajectory generation; velocity commands can thus be transmitted to the manipulation controller directly. Once again, the mission mode determines which inputs are respected by the controller in order to avoid conflicting goals and the resulting undefined behavior.
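The sketch below shows the kind of rest-to-rest quintic profile such a generator might evaluate between two poses; it is a generic illustration, not the generator adapted from Bodie [3]. The duration T is assumed to be chosen so that the velocity and acceleration limits hold (for this normalized profile the peak velocity is 1.875/T and the peak acceleration about 5.77/T², each scaled by the path length).

#include <algorithm>
#include <cmath>
#include <Eigen/Dense>

// Normalized rest-to-rest quintic profile s(t) in [0, 1] with zero boundary
// velocity and acceleration.
struct QuinticProfile {
  double T;  // total duration [s]
  double value(double t) const {
    const double tau = std::min(std::max(t / T, 0.0), 1.0);
    return 10.0 * std::pow(tau, 3) - 15.0 * std::pow(tau, 4) + 6.0 * std::pow(tau, 5);
  }
};

// Interpolated set-point on the straight line from pStart to pEnd.
Eigen::Vector3d referenceAt(const Eigen::Vector3d& pStart, const Eigen::Vector3d& pEnd,
                            const QuinticProfile& profile, double t) {
  return pStart + profile.value(t) * (pEnd - pStart);
}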

4.2.1 Gripper

While this thesis was being written, a gripper was developed in a parallel project to give our robot the physical means to interact with objects in the environment. The gripper hardware is introduced in Section 2.1 above. We develop two pieces of software for the gripper. One is a Gazebo simulation plugin, the other a hardware driver for the gripper servo motor. The latter is based on the basic functionalities provided by the dynamixel motor package3. Both provide an identical interface, so the simulation can be used interchangeably with the real hardware.

On the simulation side, the gripper was modeled for a physics simulator by specifying the link and joint arrangement, inertial properties, and geometry. A single motor drives both gripping fingers, which is modeled through a mimic-joint arrangement. Furthermore, geometry meshes are not only used for visualization, but are also needed to specify the collision geometry. A computationally less expensive approximation with primitive shapes would be insufficient since the grasping behavior strongly depends on the exact shape of the gripper.

The interface (usually called upon by mission control or a human supervisor) allows direct interaction with the gripper by setting motor position, velocity, or torque. Additionally, higher level commands are supported, including a freeze mode, gripper opening, and gripper closing. All of those provide feedback once they have successfully been completed. Additional information such as the finger position of the gripper or the current torque is also available. When attempting to pick up an object, a typical procedure would include the following commands: Open the gripper, approach the object, close the gripper and wait for positive feedback, and, finally, retract the arm with the object attached to the gripper.

4.2.2 Vision

For precise manipulation and interaction with the environment, our robotic system employs a visual servoing framework. The controller receives feedback from a camera sensor to position the end-effector. The local nature of those position or velocity commands removes the influence of localization errors, because the commands are expressed directly in the camera frame, which is attached to the end-effector. Therefore, as soon as the panel is observable by the camera, the arm controller does not rely on the identified panel pose and only accepts commands from the visual servoing framework.

Visual servoing can in principle be set up in two distinct ways: On the one hand, image-based visual servoing operates in the image space and tries to position image features at desired positions in the image plane.

3 See http://wiki.ros.org/dynamixel_motor


A mapping between the motion of the robot joints and the velocities of points in the image is used to generate joint velocity commands. On the other hand, position-based visual servoing is formulated in task space. The vision software estimates the pose of objects in the environment with respect to the camera frame and generates task-space commands that bring the camera into a desired configuration relative to the environment. A neat introduction and more details on both methods are given in the two-part series on visual servo control by Chaumette and Hutchinson [5, 6]. In general, there are advantages and drawbacks to both approaches; the following are most notable for our case:

- Position-based visual servoing can be easily fused with operational space control and contact detection.
- Position-based schemes may lose track of image features as they move out of the field of view, although small motions reduce that risk significantly and can be expected in our mission.
- Image-based methods can lead to unpredictable and dangerous motions in task space (because only image space motions are controlled), which are to be avoided for a dexterous manipulation task.
- Position-based methods require a 3D model of the observed object. For predefined manipulation tasks this is known, and online pose estimation can be performed with a pre-calibrated camera.
- If image features are well-defined and distinguishable, image-based visual servoing tends to be more accurate because the target is directly specified in terms of pixel coordinates on the image.

For the time being, we opt for a position-based scheme, because the shapes and features of the objects to be manipulated are known relatively well, such that the vision algorithm can accurately estimate their pose. The desired behavior in terms of manipulation and mission is somewhat similar to that of an assembly robot described by Hamner et al. [16]; however, we cannot modify the environment with any visual markers to aid localization. We expect the vision module to provide pose or velocity set-points to the arm controller as soon as the panel is in the field of view of the camera. The algorithms related to visual recognition and pose estimation are developed in a concurrent project outside the scope of this work. We expect to fuse information from a camera with force information from the controller once it encounters a contact [20, 33]. Combining those two sources of information improves the state estimation and we expect to be able to manipulate objects precisely. Since camera information is limited by the frame rate, a fast inner control loop of the arm controller ensures that the end-effector transitions smoothly between commanded set-points.
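For illustration only, a basic position-based servo law of the kind described above could map the estimated object pose in the camera frame to a camera twist command as sketched below; the gain and frame conventions are assumptions, not the interface of the concurrently developed vision module.

#include <Eigen/Geometry>

// Proportional position-based visual servoing law: command a camera-frame
// twist that drives the observed object pose towards the desired one.
Eigen::Matrix<double, 6, 1> servoTwist(const Eigen::Isometry3d& T_cam_obj,
                                       const Eigen::Isometry3d& T_cam_obj_des,
                                       double kp = 0.5) {
  Eigen::Matrix<double, 6, 1> twist;
  // Translational error: where the object is seen vs. where it should appear.
  twist.head<3>() = kp * (T_cam_obj.translation() - T_cam_obj_des.translation());
  // Rotational error expressed as an angle-axis vector.
  const Eigen::AngleAxisd aa(T_cam_obj.rotation() * T_cam_obj_des.rotation().transpose());
  twist.tail<3>() = kp * aa.angle() * aa.axis();
  return twist;
}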

4.3 Mission Control

The purpose of mission control is to supervise all the previously mentioned software modules, to call for the relevant actions, and to provide useful means for the human supervisor to interact. Furthermore, it is important that some critical indicators such as battery status or fault messages are monitored and adequate action is taken. It is desirable that the mission control only acts as a high level task dispatcher, so that the software framework remains modular.


Contrary to other implementations [16], we want to avoid direct monitoring of performance in mission control, because that would make it highly specific to a given task. Therefore, we require submodules of the mission to take care of such monitoring themselves and report back once a task has completed successfully. For example, the arm controller is expected to monitor unexpected collisions by itself.

We choose to design the mission in a state machine framework. In a similar project, Padois et al. [26] also employ a state machine to switch between different phases of a manipulation task. Time duration and feedback from commanded software packages are monitored to switch states in our case. The mission is abstractly represented as a collection of states and state containers. We employ two main types of states:

- Execution states are a collection of action calls or commands to different components of the system. A typical execution state would be to move the manipulator to a certain position. Execution states terminate when the task was completed (or possibly failed for some reason).
- Supervision states are used to monitor task execution or some other signal. They do not have active components. Usually, the completion of other execution states causes the preemption of a corresponding supervision state. In case of failure, supervision states can preempt execution states. This is usually a safety feature, for example when battery levels become critically low.

Two state containers are frequently in use to organize the relationship between states:

- State machines are used to execute several states in sequence. A defined initial state is entered upon starting the state machine. As one state finishes with a certain outcome, the state machine contains a mapping from this outcome to the next state or to a termination outcome, which would end the state machine.
- Concurrencies allow the execution of two or more states in parallel. There is no initial state because all contained states begin to operate immediately. Typically, concurrencies are used to run execution states simultaneously with supervision states.
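A deliberately simplified sketch of these abstractions is shown below; it is a generic illustration of the state/outcome idea, not the lab-internal framework actually used.

#include <memory>
#include <string>
#include <vector>

// A state runs and reports an outcome label such as "success" or "failure".
struct State {
  virtual ~State() = default;
  virtual std::string execute() = 0;
};

// Container that runs its children in sequence. A full implementation would
// map each outcome to a specific next state instead of simply advancing, and
// a concurrency container would run children in parallel with preemption.
struct StateMachine : State {
  std::vector<std::unique_ptr<State>> children;
  std::string execute() override {
    for (auto& child : children) {
      if (child->execute() == "failure") return "failure";
    }
    return "success";
  }
};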

State containers can be regarded as states, too. They may themselves be used within other state containers. Implementation-wise, most of the interaction between mission control (specifically: execution states) and other modules happens through the actionlib package4. It enables preemptable service calls with feedback and result reporting. The underlying state-machine framework is a development from our Robotic Systems Lab.

4 See http://wiki.ros.org/actionlib

The mission control software will guide the mobile manipulator through the complete mission. A state-flow diagram of the top level mission is shown in Figure 4.2. For readability, this is a simplified version of the actual implementation. Not all outcome mappings are shown and several supervision states are combined into a single one. Important submodules are the navigation and manipulation routines. The first would involve the following steps:

1. The exploration module commands the robot to drive across the field on a path that maximizes the chance of finding the manipulation site. Concurrently, the panel detection module is running.
2. Once the panel was identified, the robot navigates to one side of it (the suspected front side, if known).
3. Upon arrival, the arm positions itself in a way such that the camera can see the panel.
4. - If the tools on the panel can be recognized, the navigation routine completes.
   - If the tools cannot be recognized within a maximum time duration, the arm is moved back and a new navigation goal is placed on the opposite side of the panel. Once arrived there, continue at step 3.

[Figure 4.2 diagram: top-level state machine “mission” with Startup and Shutdown states and an Emergency shutdown; it contains a supervised “mission tasks” concurrency whose state machine runs the supervised navigation routine (Navigation routine with Navigation supervision) and then the supervised manipulation routine (Manipulation routine with Manipulation supervision).]

Figure 4.2: High level state machine of a manipulation mission.

[Figure 4.3 diagram: sequence of execution states “Detect presence of wrenches”, “Approach correct wrench”, “Grip wrench and retreat arm”, “Detect valve and approach”, and “Engage valve and turn”; failure outcomes lead to a common “Recovery behavior”.]

Figure 4.3: Manipulation routine state machine.

The manipulation routine is heavily task dependent. For the MBZIRC mission, the main steps can be seen in Figure 4.3. While the mission control oversees the different states and switches accordingly, the tasks are actually solved by the interaction between the vision module and the arm controller. Any of the states may fail, for example due to unexpected contact or when the visual tracking loses its target. In this case, a recovery behavior is invoked. This usually involves careful retraction of the arm to a home position. In a semi-manual mode, the user decides what to do next after the recovery behavior has completed. For simplification, all different recovery states have been lumped into a single block.

4.4 Results

Navigation

During a field test, the mapping, localization, and navigation capabilities were tested. The trial was conducted on a large soccer field with specifications similar to those provided by the MBZIRC.


Approaching the manipulation panel from a sufficiently large distance and at an angle of 54° from the normal, it first becomes visible in the map from approximately 18 m distance. The dimensions of the cluster of points in the map agree with the true dimensions up to a few centimeters. The map resolution may be adapted; however, a finer resolution also implies larger memory requirements and computation times. The localization and mapping modules performed as expected, even though there was little structure in the immediate environment, because no obstacles were placed on the field. Autonomous detection of the panel and navigation had not yet been adapted for the system at the time of testing and could therefore not be evaluated. However, the recorded laser data can be replayed for detecting the panel. Those experiments show that in the raw scan the panel can be reliably detected from a distance of 10 m. The planarity of points is relatively precise (all points lie within a ±5 cm distance to the identified plane). The tolerance on the panel height needs to be relatively large (±0.3 m) due to the discrete horizontal scan lines. The panel width, on the other hand, turns out to be within ±4 cm of the nominal value. When driving very close to an obstacle or panel, the laser can still measure distances as low as 45 cm. This is sufficient for our purposes as long as the laser is mounted along the xz-plane of the base frame. Due to time constraints, the panel detection algorithm was not tuned for working on the map cloud, although we expect that the panel can already be identified from a larger distance.

While it is crucial to detect the position and orientation of the panel, it would additionally be desirable to identify which side the laser is currently seeing. The front side distinguishes itself from the back by means of tools sticking out of the panel plane. An attempt was made to measure the distances between the identified panel plane and the actual laser scan points that constitute the panel. One would expect that, on the front side, the variance of those distance measures provides some information on whether or not the tools are present. In preliminary tests, this did unfortunately not provide trustworthy data for identification of the panel side. Due to the limited scope of this work, this approach was not pursued further.

Manipulation

Time constraints of this thesis and delays in mechanical and electrical assembly prevented testing the manipulation mission on a real system. In simulation5, however, successful gripping of a tool is possible. Figure 4.4 shows an exemplary sequence of images during that procedure. For the purpose of testing, the position of the wrench is directly provided as input to the manipulation controller (instead of relying on visual identification). During the first three images, one can clearly observe how the mobile base automatically positions itself such that the arm attains a favorable configuration. Turning of the valve could not be tested because the physics engine proves unreliable in simulating the contact between wrench and gripper. Oscillatory motion prevents a repeatable engagement with the valve. The physical gripper was also tested in isolation on a mockup panel and was able to pick and hold a wrench as expected. The main difficulty that still needs to be solved is the correct alignment of the wrench with the axis of rotation of the final arm joint.

5 The simulation environment (block, wrenches, arena) was developed by Moju Zhao and Xiangyu Chen, see https://github.com/start-jsk/jsk_mbzirc



Figure 4.4: Procedure for gripping the wrench: 1. Default position; 2. Manipulability maximization repositions the base; 3. Open the gripper; 4. Approach the wrench; 5. Close the gripper; 6. Retract the end-effector.

Chapter 5

Future Work

This chapter identifies specific areas in the development of our mobile manipulator that are still incomplete or need improvement. Several suggestions that could be addressed in future projects are given.

5.1 Mobile Base Modeling

During control, especially for contact estimation, it is evident that the current differential drive modeling of the base is insufficient. The no-slip, no-skid, and nofriction assumptions do not hold for the current base configuration, except when driving perfectly forward or backward. Yi et al. [34] propose an extended Kalman Filter design that estimates the center of rotation and wheel slippage for a four-wheeled skid-steered vehicle online via IMU readings. Should continuous base motion during manipulation be desired, it may be worthwhile investigating this approach in more detail. Alternatively, one may use a parametrized kinematic model, and only introduce the vehicle dynamics through the location of the instantaneous center of rotation of the left and right side wheels. Mandow et al. [22] show how the parameters may be identified through an experimental procedure and subsequent optimization. It seems, however, that those parameters may still be significantly dependent on the ground condition and the payload distribution on the mobile robot. In case of a mobile manipulator, this distribution even changes when the arm is moving during operation. The advantage of this second approach would be that it can be easily incorporated into the current control scheme, for example by online adaptation of the driving model matrix (see Section 2.2.2). Finally, one may also consider the possibility of using an admittance coupling [10] to model the friction of the ground interaction.

5.2 Manipulation Control

Manipulability Maximization

As we already mentioned in the results section of the control part (p. 24), the automatic base motion is capable of maximizing the manipulability of the arm. However, it does not rotate (yaw), because it cannot foretell the benefit of doing so. This is a limitation of the differential nature of our approach. One may examine a full differential drive or skid-steer controller to overcome this.


Singularities

So far, only attempts have been made to avoid singular positions altogether. They are generally considered unfavorable, because small changes in the set-point can lead to large control actions, rendering the system potentially unstable. To further increase robustness, one could devise a sensible strategy for how to proceed when a singular configuration is indeed reached. A possible way of handling this situation is to split up the operational space into components that segment out singular directions from the usual control action [4]. This allows one to proceed with the control objective in the directions orthogonal to the singular directions and use the remaining null space motion in a safe manner to maneuver out of the singular configuration. The formulation is equivalent to treating the manipulator as a redundant one with a reduced task space (i.e., lacking the singular directions).

Another useful feature that could be implemented is an evaluation of reachability. When a new pose command for the end-effector is received, the controller should be able to decide whether or not the goal lies within the current workspace of the manipulator arm (while the base remains fixed). If so, it can decide to keep the base still and only move the arm joints for higher accuracy1. In case of unreachable positions, for example those that extend the arm too far off to the side (orthogonal to the driving direction), one should reproject the desired position to avoid singular positions and then plan a maneuver that repositions the base accordingly. With the current functionality, the manipulator would enter a singular configuration in such a scenario. An investigation of trajectory generation through optimization may prove worthwhile to solve this problem neatly.

Adaptive Control

A final suggestion for improving the manipulator control is to apply adaptive control techniques. Currently, adaptations to the dynamic model are accomplished by modifying the inertia of the gripper when a known payload is attached. However, payloads (especially their inertial properties) are not always known and it would be helpful if there was an automatic adaptation that can quickly identify how the current dynamic model must be corrected. If there is no correction, then the integrator part of the position controller needs to compensate. This should be avoided, because it does not consider the true dynamics of the system and additionally leads to transients during which tracking performance is poor.

5.3 Mission and System Deployment

Several components of the mission are still lacking entirely, especially those related to the initial navigation part: A suitable field explorer module is required to steer the robot through the available area in order to find the site of manipulation initially. One could already compute an exploration trajectory off-line through an optimization routine. A crucial parameter is the maximum distance from which we expect the panel to be detectable. Given the maximum driving speed of the mobile base, one can come up with an upper limit on the time until the panel is certainly identified. This is an important piece of information, especially if missions are time-critical. Preliminary tests can be performed in simulation where the laser scanner and the mission environment are modeled. 1 Losses

1 Losses in accuracy may arise especially from uneven ground.
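To make the time budget concrete, a small sketch of the upper bound follows, assuming the base sweeps the field along parallel lanes spaced twice the panel detection radius apart; the field size and detection radius are placeholder numbers, and only the 1.0 m/s maximum speed is taken from the Husky data sheet.

```python
import math

def max_detection_time(field_length_m, field_width_m, detection_radius_m, max_speed_mps):
    """Worst-case time until the panel is certainly seen on a lawn-mower sweep.

    Lanes are spaced 2 * detection_radius apart so that every point of the
    field passes within detection range at least once.
    """
    num_lanes = math.ceil(field_width_m / (2.0 * detection_radius_m))
    path_length = num_lanes * field_length_m + (num_lanes - 1) * 2.0 * detection_radius_m
    return path_length / max_speed_mps

# Placeholder example: a 90 m x 60 m field, 20 m detection range, 1.0 m/s top speed.
print(f"worst-case exploration time: {max_detection_time(90.0, 60.0, 20.0, 1.0):.0f} s")
```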


Another area of enhancement is GPS integration. Currently, GPS signals are not used by the navigation module for localization2. Especially when operating on large, open fields with little structure, GPS signals may prove a viable source of information for navigation and localization.

The wrench-gripping procedure already looks promising, so effort now needs to be devoted to engaging the wrench with the valve. This task appears more challenging because the required positioning accuracy is very tight: in the MBZIRC mission, the valve stem measures only 13.5 mm in width. Furthermore, the lighting conditions are challenging and require a robust visual detection algorithm.

In general, significant system integration work remains to be completed before a whole mission can be executed. During development, we attempted to keep the software packages as modular as possible, which makes testing of the individual components significantly easier. The next step, however, is bringing the individual pieces together and making them interact. A special focus should be placed on matching the vision module with the arm controller and mission control. Once everything is combined on the physical system, various parameters need to be fine-tuned and some modeling quantities possibly estimated online.

2 GPS is only used to align an external map with the local one and to convert waypoints given in GPS coordinates to local map coordinates.
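Since footnote 2 mentions converting GPS waypoints to local map coordinates, the sketch below shows one common way to do so with a flat-earth (equirectangular) approximation around the map origin. The origin coordinates and the yaw offset between the ENU frame and the local map frame are assumed inputs, and the actual navigation stack may use a different conversion.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def gps_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg, map_yaw_rad=0.0):
    """Convert a GPS waypoint to (x, y) in a local map frame.

    Valid for small areas (a few kilometres): compute east/north offsets from
    the origin, then rotate them by the yaw of the map x-axis w.r.t. east.
    """
    north = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    east = math.radians(lon_deg - origin_lon_deg) * EARTH_RADIUS_M * math.cos(math.radians(origin_lat_deg))
    x = math.cos(map_yaw_rad) * east + math.sin(map_yaw_rad) * north
    y = -math.sin(map_yaw_rad) * east + math.cos(map_yaw_rad) * north
    return x, y
```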


Chapter 6

Conclusion

The thesis at hand sets out to develop and illustrate various components of an autonomous mobile manipulator. We succeed in simulating the manipulation capabilities, including coordination of the robot arm and base, contact detection, and unified force-motion control. We contribute to an existing navigation software stack by developing the automatic detection of a user panel based on laser ranging measurements. A mission control program adds the capability to sequentially complete a series of instructions.

Since this work consists of several loosely related aspects, a brief summary is provided of which modules already exist and which parts still need to be developed. The navigation software was already provided by a former project and only required minor enhancements. A field explorer still needs to be programmed; however, this is deemed a manageable task. The detection of the manipulation site is in a functional state. In terms of manipulation, a unified force-motion controller is available and working in simulation; nevertheless, it requires testing on the physical robot arm. A vision module is available and capable of identifying the correct wrench and issuing pose commands. Combining vision with the manipulation controller requires some effort and should be a priority. The gripper is currently undergoing a new design iteration and can be considered functional. The mission control software is working and allows the user to interact if necessary. Owing to the modular design, the sequence of instructions can be scaled up easily, and integrating further states and components is a simple procedure. Significant time should be devoted to testing once the system is assembled.

The most prominent challenge encountered during this thesis was the difficult treatment of the mobile base: non-holonomy, the skid-steer setup, and a velocity control interface made modeling and control complicated. In light of a complete manipulation mission and the MBZIRC 2017 challenge, the work at hand presents the first milestone towards a competition entry. While we cannot yet offer a complete solution to the challenge task, this thesis contributes several important modules to building an autonomous mobile manipulator.


Bibliography

[1] B. Bayle, J.-Y. Fourquet, and M. Renaud, "Manipulability of Wheeled Mobile Manipulators: Application to Motion Generation," The International Journal of Robotics Research, vol. 22, no. 7, pp. 565–581, 2003.

[2] B. Bayle, J.-Y. Fourquet, and M. Renaud, "From Manipulation to Wheeled Mobile Manipulation: Analogies and Differences," in Proceedings of the 7th Symposium on Robot Control (SYROCO'03), Wroclaw, 2003.

[3] K. Bodie, "Design and control of a compliant manipulator arm," Master Thesis, ETH Zurich, 2016.

[4] K.-S. Chang and O. Khatib, "Manipulator Control at Kinematic Singularities: A Dynamically Consistent Strategy," in Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, vol. 3, 1995, pp. 84–88.

[5] F. Chaumette and S. Hutchinson, "Visual servo control. I. Basic approaches," IEEE Robotics and Automation Magazine, vol. 13, no. 4, pp. 82–90, 2006.

[6] F. Chaumette and S. Hutchinson, "Visual servo control. II. Advanced approaches," IEEE Robotics and Automation Magazine, vol. 14, no. 1, pp. 109–118, 2007.

[7] A. De Luca and R. Mattone, "Sensorless robot collision detection and hybrid force/motion control," in Proceedings of the IEEE International Conference on Robotics and Automation, 2005, pp. 999–1004.

[8] A. De Luca, A. Albu-Schäffer, S. Haddadin, and G. Hirzinger, "Collision detection and safe reaction with the DLR-III lightweight manipulator arm," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, 2006, pp. 1623–1630.

[9] A. De Luca, G. Oriolo, and P. R. Giordano, "Image-based visual servoing schemes for nonholonomic mobile manipulators," Robotica, vol. 25, no. 2, pp. 131–145, 2007.

[10] A. Dietrich, T. Wimböck, A. Albu-Schäffer, and G. Hirzinger, "Reactive Whole-Body Control: Dynamic Mobile Manipulation Using a Large Number of Actuated Degrees of Freedom," IEEE Robotics and Automation Magazine, vol. 19, no. 2, pp. 20–33, 2012.

[11] R. Fierro and F. Lewis, "Control of a nonholonomic mobile robot: backstepping kinematics into dynamics," in Proceedings of the 34th IEEE Conference on Decision and Control, 1995, pp. 3805–3810.


[12] J.-Y. Fourquet and M. Renaud, "Coordinated control of a nonholonomic mobile manipulator," in Experimental Robotics VI. London: Springer London, 2000, vol. 250, pp. 139–149.

[13] M. Fruchard, P. Morin, and C. Samson, "A Framework for the Control of Nonholonomic Mobile Manipulators," The International Journal of Robotics Research, vol. 25, no. 8, pp. 745–780, 2006.

[14] R. Gill, D. Kulić, and C. Nielsen, "Path Following for Mobile Manipulators," in International Symposium on Robotics Research (ISRR), 2015, pp. 1–16.

[15] F. Hamano, "Derivative of Rotation Matrix - Direct Matrix Derivation of Well-Known Formula," in Proceedings of the Green Energy and Systems Conference, 2013.

[16] B. Hamner, S. Koterba, J. Shi, R. Simmons, and S. Singh, "An autonomous mobile manipulator for assembly tasks," Autonomous Robots, vol. 28, no. 1, pp. 131–149, 2010.

[17] R. Holmberg, "Development and Control of a Holonomic Mobile Robot for Mobile Manipulation Tasks," The International Journal of Robotics Research, vol. 19, no. 11, pp. 1066–1074, 2000.

[18] M. Hutter, H. Sommer, C. Gehring, M. Hoepflinger, M. Bloesch, and R. Siegwart, "Quadrupedal locomotion using hierarchical operational space control," The International Journal of Robotics Research, vol. 33, no. 8, pp. 1047–1062, 2014.

[19] P. A. Krüsi, "Autonomous Navigation in Complex Nonplanar Environments Based on Laser Ranging," Ph.D. dissertation, ETH Zurich, 2016.

[20] V. Lippiello, B. Siciliano, and L. Villani, "A Position-Based Visual Impedance Control for Robot Manipulators," in IEEE International Conference on Robotics and Automation, 2007, pp. 2068–2073.

[21] E. Magrini, F. Flacco, and A. De Luca, "Control of Generalized Contact Motion and Force in Physical Human-Robot Interaction," in IEEE International Conference on Robotics and Automation, 2015, pp. 2298–2304.

[22] A. Mandow, J. L. Martínez, J. Morales, J. L. Blanco, A. García-Cerezo, and J. González, "Experimental kinematics for wheeled skid-steer mobile robots," in Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007, pp. 1222–1227.

[23] A. Mazur and D. Szakiel, "On Path Following Control of Nonholonomic Mobile Manipulators," International Journal of Applied Mathematics and Computer Science, vol. 19, no. 4, pp. 561–574, 2009.

[24] D. Oetomo, M. Ang Jr., R. Jamisola, and O. Khatib, "Integration of Torque Controlled Arm with Velocity Controlled Base for Mobile Manipulation: An Application to Aircraft Canopy Polishing," in Fourteenth CISM-IFToMM Symposium on Robotics (RoManSy), 2002, pp. 308–317.

[25] D. Omrčen, L. Žlajpah, and B. Nemec, "Autonomous motion of a mobile manipulator using a combined torque and velocity control," Robotica, vol. 22, no. 6, pp. 623–632, 2004.


[26] V. Padois, J.-Y. Fourquet, P. Chiron, and M. Renaud, "On Contact Transition for Nonholonomic Mobile Manipulators," in Experimental Robotics IX, STAR 21, M. Ang and O. Khatib, Eds. Berlin Heidelberg: Springer-Verlag, 2006, pp. 207–216.

[27] V. Padois, J.-Y. Fourquet, and P. Chiron, "Kinematic and dynamic model based control of wheeled mobile manipulators: a unified framework for reactive approaches," Robotica, vol. 25, no. 2, pp. 157–173, 2007.

[28] J. Park, Y. Choi, W. K. Chung, and Y. Youm, "Multiple Tasks Kinematics Using Weighted Pseudo-Inverse for Kinematically Redundant Manipulators," in Proceedings of the 2001 IEEE International Conference on Robotics and Automation, 2001, pp. 4041–4047.

[29] B. Siciliano and O. Khatib, Eds., Springer Handbook of Robotics. Berlin, Heidelberg: Springer, 2008.

[30] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Modelling, Planning and Control. London: Springer, 2010.

[31] B. van Eden, B. Rosman, D. Withey, T. Ratshidaho, M. Keaikitse, D. Masha, A. Kleinhans, and A. Shaik, "CHAMP: a Bespoke Integrated System for Mobile Manipulation," in Joint Conference of the 25th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA), 6th Workshop on African Language Technology (AfLaT) and 7th Robotics and Mechatronics (RobMech) Conference of South Africa, 2014.

[32] G. D. White, R. M. Bhatt, C. P. Tang, and V. N. Krovi, "Experimental evaluation of dynamic redundancy resolution in a nonholonomic wheeled mobile manipulator," IEEE/ASME Transactions on Mechatronics, vol. 14, no. 3, pp. 349–357, 2009.

[33] W. J. Wilson, C. C. W. Hulls, and G. S. Bell, "Relative end-effector control using Cartesian position-based visual servoing," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 684–696, 1996.

[34] J. Yi, H. Wang, J. Zhang, D. Song, S. Jayasuriya, and J. Liu, "Kinematic modeling and analysis of skid-steered mobile robots with applications to low-cost inertial measurement unit-based motion estimation," IEEE Transactions on Robotics, vol. 25, no. 5, pp. 1087–1097, 2009.


Data Sheets

In the following, technical specification data sheets are attached. They were compiled by the respective manufacturers of the products.


Husky™ Unmanned Ground Vehicle (Clearpath Robotics)

[Dimensioned drawing (top, side, and front views) showing the 990 x 670 x 390 mm footprint, the 296 x 411 x 155 mm user storage area, the 330 mm wheels, and the 130 mm ground clearance.]

Technical Specifications

Size and Weight
External dimensions (L x W x H): 990 x 670 x 390 mm (39 x 26.4 x 14.6 in)
Internal dimensions: 296 x 411 x 155 mm (11.7 x 16.2 x 6.1 in)
Weight: 50 kg (110 lb)
Wheels: 330 mm (13 in) lug tread
Ground clearance: 130 mm (5 in)

Speed and Performance
Max. payload: 75 kg (165 lb)
All-terrain payload: 20 kg (44 lb)
Max. speed: 1.0 m/s (2.3 mph)
Drivetrain / drive power: 4x4, zero maintenance
Max. climb grade: 45° (100% slope)
Max. traversal grade: 30° (58% slope)

Battery and Power System
Battery chemistry: sealed lead acid
Capacity: 24 V, 20 Ah
Runtime (standby): 8 hours
Runtime (nominal usage): 3 hours
Charge time: 4 hours
User power: 5 V, 12 V, 24 V, fused at 5 A each; 192 W total available power (upgrade to 480 W optional)

Interfacing and Communication
Control modes: direct voltage, wheel speed, and kinematic velocity
Feedback: battery voltage, motor currents, wheel odometry, and control system output
Communication: RS232 @ 115200 baud
Encoders: quadrature, 78,000 pulses/m
Drivers and APIs: ROS, C++, and Python

Environmental
Operating ambient temperature: -10 to 30 °C (14 to 86 °F)
Storage temperature: -40 to 50 °C (-40 to 122 °F)
Rating: IP 44 (upgrade to IP 55 available)

© 2014 Clearpath Robotics, Inc. All rights reserved. Clearpath Robotics, Husky, and Husky A200 are trademarks of Clearpath Robotics.


ANYdrive robot joint: integrated, robust, torque controllable

Fully integrated: ANYdrive consists of a powerful brushless motor, custom spring, backlash-free gear, high-precision encoders, and efficient power electronics.
Accurate position and torque control, impact robustness: the integrated spring enables accurate torque tracking while protecting the gear from impacts.
Absolute position sensing: precise absolute encoders make repeated calibration of the joints unnecessary.
High load bearing and hollow shaft: the robust design and hollow shaft allow for a compact robot design and optimal cable routing.
Ingress protection IP66: the ANYdrive is completely sealed against dust and water ingress.
Programmable controller: custom control algorithms can be implemented through the open API (coming soon).

ANYdrive is a complete robot joint: it enables building robots of any form with minimal complexity and maximal performance.

Specifications
Peak/nominal power: 720 W / 240 W
Nominal voltage: 48 VDC
Peak/nominal torque: 40 Nm / 15 Nm
Peak joint velocity: 114 rpm (12 rad/s)
Absolute joint position: 17-bit, <0.025°
Joint output torque resolution: <0.1 Nm
Torque control bandwidth: >60 Hz
Backlash: ±0.02°
Dimensions (L x D): 93 x 80 mm
Mass: 0.9 kg
Max. bending moment: 100 Nm
Hollow shaft diameter: 11 mm
Control modes: position, torque, impedance, velocity, or current control
Communication: CAN 1 Mbit/s, CANopen protocol, ROS integration

Contact: [email protected] (data sheet 06/16)


HDL-32E High Definition Real-Time 3D LiDAR (Velodyne LiDAR, Inc.)

The HDL-32E provides high definition 3-dimensional information about the surrounding environment.

Specifications

Sensor:
• Time of flight distance measurement with calibrated reflectivities
• 32 channels
• Measurement range: up to 100 m
• Accuracy: ±2 cm (typical)
• Dual returns
• Field of view (vertical): +10.67° to -30.67°
• Angular resolution (vertical): 1.33°
• Field of view (horizontal): 360°
• Angular resolution (horizontal/azimuth): 0.1° – 0.4°
• Rotation rate: 5 Hz – 20 Hz
• Integrated web server for easy monitoring and configuration

Laser:
• Class 1, eye-safe
• 903 nm wavelength

Mechanical / Electrical / Operational:
• Power consumption: 12 W (typical)
• Operating voltage: 9 V – 24 V (with interface box and regulated power supply)
• Weight: 1.0 kg (without cabling)
• Dimensions: 85 mm diameter x 144 mm height
• Shock: 500 m/s² amplitude, 11 ms duration
• Vibration: 5 Hz to 2,000 Hz, 3 Grms
• Environmental protection: IP67, Type 4 enclosure
• Operating temperature: -10 °C to +60 °C
• Storage temperature: -40 °C to +105 °C

Output:
• 3D LiDAR data points generated: 700,000 points per second (single return mode), 1,400,000 points per second (dual return mode)
• 100 Mbps Ethernet connection
• UDP packets contain: time of flight distance measurement, calibrated reflectance measurement, rotation angles, synchronized time stamps (µs resolution)
• Orientation: 6-DOF inertial sensor measurements
• GPS: $GPRMC NMEA sentence from GPS receiver (GPS not included)

Product Ordering Information:
• HDL-32E (Standard), RJ45 connector, SKU 80-HDL32E
• HDL-32E with 5.0 m cable, RJ45 connector, SKU 80-HDL32E-5M
• HDL-32E with 0.3 m cable, M12 connector, SKU 80-HDL32E M12-0.3M
• HDL-32E with 1.6 m cable, M12 connector, SKU 80-HDL-32E M12
• HDL-32E with 0.3 m cable plus interface box and accessories, M12 connector, SKU 80-HDL32E M12-IFB

Copyright © 2016 Velodyne LiDAR, Inc. Specifications are subject to change without notice. Class 1 laser product. Velodyne LiDAR, Inc., 345 Digital Drive, Morgan Hill, CA 95037, www.velodynelidar.com