DRAGONFLY: A VERSATILE UAV PLATFORM FOR THE ADVANCEMENT OF AIRCRAFT NAVIGATION AND CONTROL Jennifer Evans, Gokhan Inalhan, Jung Soon Jang, Rodney Teo, Claire J. Tomlin, Department of Aeronautics and Astronautics, Stanford University1

Introduction
The DragonFly experimental test bed is a platform that supports new research and innovations in navigation, fault-tolerant control and multiple vehicle coordination. It consists of two UAVs with modular onboard avionics packages, which communicate through a wired and wireless network with ground and lab development systems running the QNX real-time operating system. Its modularity and networked architecture are key to supporting such a wide range of concurrent research. This paper gives an overview of the DragonFly experimental test bed and the specific research goals that it currently supports.

The paper is organized as follows. The first section gives an overview of the DragonFly UAVs' basic characteristics and the advanced onboard electronics and navigation system. The remaining sections discuss the various research goals that the test bed supports: first, single vehicle control, specifically fault detection and control reconfiguration in the event of structural damage; second, control and coordination in closely spaced parallel approaches; and finally, basic aspects of decentralized decision making under communication delays and losses in multiple vehicle coordination.

DragonFly UAVs
DragonFly II, the fixed-wing aircraft seen in Figure 1, is outfitted with an eight-horsepower, two-cylinder engine, new landing gear and various structural and servo systems. With a ten-foot wingspan and a gross takeoff weight of 56 lbs., DragonFly II is estimated to reach speeds of around 25 m/s. The newly acquired DragonFly III (Figure 2) has completed its first checkout flights. It is smaller in size but capable of carrying over 25 lbs. of payload. DragonFly III is now being modified for the connection of air data sensors, GPS, wireless modem antennas and the avionics. These first two vehicles provide the initial experimental test-bed for control and dual-vehicle algorithms. Flight tests take place at Moffett Federal Airfield, close to NASA Ames Research Center.

DragonFly Avionics
The DragonFly II includes an avionics package capable of supporting a multitude of experiments and research areas. The flexibility, modularity and re-configurable nature of the avionics stem from a collection of unique hardware and software components.

Unique GPS Receiver
The avionics hardware includes a customized 40-channel, five-antenna GPS board. The receiver provides measurements of position, velocity, attitude, angular rates, time, and raw GPS observables. The receiver's common oscillator allows for accurate, low-noise GPS attitude determination and precise synchronization between all 40 channels. Another unique feature of the receiver is its PCI bus interface. Through the PCI bus, the receiver can exchange measurements with a host computer at very high speed. Traditionally, GPS receivers communicate messages over serial links, severely limiting data rates and adding significant latency; PCI, by contrast, transfers data over the bus at 132 Mbytes per second.

Figure 3. Diagram of GPS Receiver Design (five GPS sections, GPS1-GPS5, share a common clock and connect through dual-port RAM and the PCI bus to the single board computer; a serial communication interface is also shown)

Inertial Measurement Unit
The inertial measurement unit is a Honeywell HG1700, a tactical-grade, low-cost IMU with three ring laser gyros and three accelerometers. It has gyro rate biases of around one degree per hour and accelerometer biases of around 1 mg. The unit is neatly contained, aligned and isolated in a small case weighing less than four pounds and measuring approximately five inches on a side. It consumes less than ten watts of power and supplies one Mbps of data over an RS-422 SDLC communications link, carrying 600 Hz autopilot outputs and 100 Hz raw inertial outputs. The unit has been rugged, reliable and consistent throughout DragonFly project testing.

Host Computer
The single board computer (SBC) is an off-the-shelf embedded PC developed by Versalogic Corporation. It is an x86 embedded PC with on-board flash and a PC-104+ (ISA and PCI) bus system. The board is rugged and compact and stacks on top of the GPS receiver. The SBC runs the QNX real-time operating system.

Figure 1. DragonFly II

Figure 2. DragonFly III

1 Jennifer Evans ([email protected]), Gokhan Inalhan ([email protected]), Jung Soon Jang ([email protected]) and Rodney Teo ([email protected]): Hybrid Systems Lab research assistants; Claire Tomlin, Ph.D. ([email protected]): Assistant Professor of Aeronautics and Astronautics; Stanford University, Dept. of Aeronautics and Astronautics M/C 4035, Stanford, CA 94305

System Software
The software architecture is a client/server-based system. Server software modules interface with the sensors; each sensor-specific module preprocesses raw sensor data and prepares it for delivery to interested clients. The clients are application software modules such as data loggers, flight control and navigation. A client can configure a flexible data flow from a server, balancing the latency and bandwidth restrictions of its specific application. For example, a low-priority flight recorder can configure a data flow that stores all the high-bandwidth flight data in the server until the experiment is over. A real-time flight controller, on the other hand, can configure a data flow that notifies the controller as soon as another sensor record is received (high bandwidth, low latency). In both cases, the server is the same. In fact, a server can supply multiple data flow types simultaneously, changing dynamically as experiments are re-configured "on the fly".
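The buffered-logger versus low-latency-controller distinction above can be sketched in miniature. The following Python sketch is illustrative only (the actual DragonFly servers run under QNX and are not written this way); all class and method names are hypothetical.

```python
# Illustrative sketch only: one sensor server supplying two client data
# flows at once, a buffered flow for a low-priority logger and an
# immediate-notify flow for a real-time controller. Class and method
# names are hypothetical, not the DragonFly QNX implementation.

class SensorServer:
    def __init__(self):
        self.flows = []                  # (mode, sink) per subscribed client

    def subscribe_buffered(self):
        """Logger-style flow: records are held until the client drains them."""
        buf = []
        self.flows.append(("buffered", buf))
        return buf

    def subscribe_notify(self, callback):
        """Controller-style flow: each record is delivered immediately."""
        self.flows.append(("notify", callback))

    def publish(self, record):
        for mode, sink in self.flows:
            if mode == "buffered":
                sink.append(record)      # held in the server for later
            else:
                sink(record)             # pushed with minimal latency

# Usage: a logger and a controller subscribe to the same server.
server = SensorServer()
log_buffer = server.subscribe_buffered()
latest = {}
server.subscribe_notify(latest.update)

for t in range(3):
    server.publish({"t": t, "alt_m": 300.0 + t})

print(len(log_buffer))   # the logger sees every record after the fact
print(latest["t"])       # the controller always holds the newest record
```

Because both flows are served by the same `publish` loop, nothing about the server changes when a new client with different latency needs subscribes, which is the point of the architecture described above.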


Figure 4. Software Architecture (ground and flight clients, such as the DGPS, flight controller, logger and navigation applications, connect to the GPS, IMU, DAQ, ACC and high-resolution timer servers)

The modular clients and servers and the "on the fly" reconfigurability make this an ideal software platform to support concurrent experiments and research in multiple areas. For instance, researchers can compare the performance of multiple navigation clients running simultaneously, each accepting a different set of sensor data from the servers. Researchers can then test the performance of the flight control client using the different data sets from the navigation clients. [1]

Navigation System
The navigation application, or client, receives data from the GPS server, IMU server, high-resolution timer server and the GPS attitude server. The navigation client then blends the sensor information in various possible GPS/INS configurations. The sensor integration method can be "tuned" to provide the best performance for any particular task, project or autopilot on board the UAV. The GPS/INS integration methods range from loosely coupled, to tightly coupled, to deeply coupled inertial aiding feedback to the GPS receiver. The performance of each level of integration can be compared to determine the optimal solution for any given situation.

Figure 4 is a diagram of the data flow to and from the navigation filter. The GPS receiver provides position, velocity, time, range and delta range measurements. It also sends phase information to the attitude server, which can then supply the navigation application with GPS attitude at 10 Hz. The inertial measurement unit supplies delta velocities and delta angles to provide a smooth reference signal for the navigation filter. In an extended Kalman filter, the navigation application combines the available sensor measurements from the selected data flows from the servers. The output of the filter is a 50 Hz position, velocity and attitude solution. The navigation application can request different combinations and rates of incoming data "on the fly"; the filter can therefore adjust to changing environmental or aircraft conditions during an experiment. This is how the navigation client is "tuned" to provide an optimal data output to the onboard controls application. [2]

This concludes the discussion of the DragonFly experimental test bed. We now present the various research goals that the test bed supports.
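To make the blending step concrete, the following is a minimal one-dimensional, loosely coupled sketch of the filter idea described above: IMU delta-velocities drive the prediction, and GPS position fixes correct the drift. The rates match the text (50 Hz filter, 10 Hz GPS), but the 1-D state, the noise values and all numbers are illustrative assumptions, not the DragonFly filter.

```python
import numpy as np

# Minimal 1-D, loosely coupled GPS/INS blend: the IMU delta-velocity
# drives the prediction and 10 Hz GPS position fixes correct the drift,
# with the filter running at 50 Hz as in the text. The 1-D state and
# all noise values are illustrative assumptions, not DragonFly numbers.

dt = 0.02                                # 50 Hz filter rate
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
Q = np.diag([1e-4, 1e-3])                # process noise (inertial drift)
H = np.array([[1.0, 0.0]])               # GPS measures position only
R = np.array([[4.0]])                    # GPS noise variance (m^2)

def predict(x, P, delta_v):
    """Inertial mechanization step: fold the IMU delta-velocity in."""
    x = F @ x + np.array([0.0, delta_v])
    P = F @ P @ F.T + Q
    return x, P

def gps_update(x, P, z):
    """Standard Kalman measurement update with a GPS position fix."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Start with an uncertain state; true motion is a constant 25 m/s
# (roughly DragonFly II's estimated speed), so every delta-v is zero.
x, P = np.array([0.0, 0.0]), np.diag([100.0, 400.0])
true_v = 25.0
for k in range(1, 101):                  # 2 s of flight
    x, P = predict(x, P, delta_v=0.0)
    if k % 5 == 0:                       # 10 Hz GPS fix (noise-free here)
        x, P = gps_update(x, P, true_v * k * dt)

print(round(float(x[1]), 1))             # velocity estimate near 25 m/s
```

In the same spirit as the "tuning" described above, changing `Q`, `R` or the update rate shifts the balance between trusting the inertial prediction and trusting the GPS fixes.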

Fault Tolerant Control System
In this section, we give an overview of our research on fault-tolerant control. Aircraft systems may encounter a large class of unexpected failures, such as sensor/actuator failures and airframe/control surface damage. To meet the safety requirements of air travel, it is necessary to actively deal with and recover from these types of failures. In most aerospace applications, sensor/actuator failure detection and isolation is achieved through duplex or triplex hardware-redundant voting schemes, which provide a reliable means of selecting the functional, and ignoring the failed, component. This hardware redundancy is expensive and requires additional resources for installation. An alternate approach is "analytical redundancy", in which the failure of a sensor, actuator or airframe is compensated for by reconfiguring the control scheme.
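As a concrete illustration of the hardware-redundancy baseline, a triplex mid-value-select voter picks the median of three redundant channels, so a single hard-over failure is outvoted without ever being explicitly identified. This sketch is generic; the readings and threshold are made up.

```python
# Sketch of the triplex hardware-redundancy idea: a mid-value-select
# voter returns the median of three redundant sensor channels, so one
# failed channel is simply outvoted. Readings and the miscompare
# threshold are illustrative, not from any DragonFly sensor.

def triplex_vote(a, b, c):
    """Return the middle value of three redundant measurements."""
    return sorted((a, b, c))[1]

def miscompare(a, b, c, threshold):
    """Flag any channel that disagrees with the voted value."""
    voted = triplex_vote(a, b, c)
    return [abs(ch - voted) > threshold for ch in (a, b, c)]

# Channel 2 has failed hard-over; the vote still tracks the true value.
readings = (101.2, 250.0, 99.8)              # e.g. airspeed in ft/s
print(triplex_vote(*readings))               # -> 101.2
print(miscompare(*readings, threshold=5.0))  # -> [False, True, False]
```

The expense noted above is visible even in this sketch: the scheme needs three physical sensors to mask one failure, which is what motivates the analytical-redundancy alternative.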


Figure 5. Block diagram of Hardware-in-the-Loop simulation of the DragonFly UAV

In the next part, we briefly talk about the Hardware-in-the-Loop simulation [3] of the DragonFly UAV, for use as a validation test-bed for our avionics and control algorithms. Then we design a discrete nonlinear sliding mode controller [4,5] for the digital avionics system, devised to compensate for model uncertainty due to control surface damage. Finally, we demonstrate the effectiveness of the resulting control scheme in the Hardware-in-the-Loop simulation, and illustrate the effects of control surface damage on the performance and stability of the system.

Hardware-in-the-Loop Simulation
Figure 5 shows the simplified structure of the Hardware-in-the-Loop simulation that we have designed. This test-bed consists of two different operating systems, MS Windows and QNX. Two MS Windows systems are responsible for the real-time simulation of the DragonFly UAV, which emulates flight tests, and the real-time control action, which emulates the on-board controller. Standard TCP/IP links these over the ethernet network using the client/server mechanism. The QNX system, the real-time operating system for our avionics, is linked to MS Windows through Cascade Connect, a commercially available interface, over the TCP/IP network.

The client/server software in the flight computer of the onboard avionics of the DragonFly UAV processes data from GPS, IMU and the air data probe in a timely manner for the navigation and control of the vehicle. Since the avionics can be connected to the Hardware-in-the-Loop simulation test-bed across the network without accessing these sensors, the output from the 6-DOF nonlinear simulation of the DragonFly UAV can emulate the actual measurements and be fed to the avionics. As a result, since the communication interval, delay and emulated measurement content can be easily modified and controlled by the software in the Windows NT workstation, the effects of sensor and actuator packet delay as well as communication constraints on the navigation and controls can be easily investigated. In addition, the actual DragonFly aircraft with all hardware may be plugged into this platform.
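A toy version of the measurement-emulation idea above can be written as a delay line between the simulation side and the avionics side. This sketch only models the configurable transport delay discussed in the text; the names, rates and the in-process queue are illustrative stand-ins, and the real test-bed uses Cascade Connect over TCP/IP rather than anything like this.

```python
from collections import deque

# Toy model of the emulated-measurement link: the 6-DOF simulation
# "sends" sensor records and a configurable delay line delivers them to
# the avionics side a fixed number of steps late. Names, rates and the
# in-process queue are illustrative stand-ins for the real TCP/IP link.

class DelayedLink:
    """Delivers each record delay_steps simulation steps after it is sent."""
    def __init__(self, delay_steps):
        self.delay_steps = delay_steps
        self.queue = deque()             # (delivery_step, record) pairs

    def send(self, step, record):
        self.queue.append((step + self.delay_steps, record))

    def receive(self, step):
        out = []
        while self.queue and self.queue[0][0] <= step:
            out.append(self.queue.popleft()[1])
        return out

# Emulated GPS position from a straight 25 m/s "flight" at 50 Hz.
link = DelayedLink(delay_steps=3)        # 3-step transport delay
received = []
for step in range(10):
    link.send(step, {"step": step, "pos_m": 25.0 * 0.02 * step})
    received.extend(link.receive(step))

print(received[0]["step"])               # record 0 arrives 3 steps late
print(len(received))                     # the last 3 are still in transit
```

Sweeping `delay_steps` is the software analogue of the packet-delay experiments described above: the controller under test sees measurements that are a controlled number of steps stale.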

Nonlinear Discrete Sliding Mode Controller Design
Figure 6 illustrates the application of a sliding mode scheme to the digital flight control system under plant variations. Such variations are typically due to airframe or control surface damage, which can be modeled as changes in aerodynamic coefficients (i.e., stability and control derivatives).

With this dynamic inversion, the resulting overall dynamics can be represented as discrete linear input-output equations. However, an outer-loop control over this inner loop is necessary because of possible variations in the underlying nonlinear model as a result of airframe or control surface damage. For this reason, a discrete sliding mode controller was designed as the outer loop. The design proceeds in two steps: first, the condition for the existence of a discrete sliding surface is derived; then, using this sliding surface, a stabilizing control law is designed to compensate for bounded variations in the aircraft model due to structural or control surface failures.
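The two-step construction can be illustrated on a toy plant. The sketch below applies a standard discrete reaching-law sliding mode controller (in the spirit of, but not identical to, the designs of [4,5]) to a double integrator whose input effectiveness is uncertain, mimicking a loss of control surface. All gains and the plant itself are illustrative assumptions.

```python
# Toy discrete sliding mode regulation of a double integrator
# x'' = b*u with uncertain input effectiveness b, standing in for an
# aileron effectiveness loss. Gains, plant and the reaching law are
# illustrative; this is not the DragonFly controller.

dt, lam = 0.01, 2.0              # sample time, sliding-surface slope
q, eta, phi = 5.0, 0.5, 0.05     # reaching-law gains, boundary layer

def sat(v):                      # boundary-layer "sign" to limit chattering
    return max(-1.0, min(1.0, v))

def simulate(b_true, steps=2000):
    """Drive the tracking error x to zero despite b_true != b_nominal."""
    x, xd, b_nom = 1.0, 0.0, 1.0
    for _ in range(steps):
        s = lam * x + xd         # discrete sliding variable
        # To first order at nominal b, this enforces the reaching law
        # s_{k+1} = (1 - q*dt)*s_k - eta*dt*sat(s_k/phi).
        u = -(lam * xd + q * s + eta * sat(s / phi)) / b_nom
        xd += b_true * u * dt    # Euler step of the true plant
        x += xd * dt
    return abs(x)

print(simulate(1.0) < 0.02)      # nominal effectiveness -> True
print(simulate(0.4) < 0.02)      # 60% effectiveness loss -> True
```

The point of the example is the one made in the text: the same fixed control law stabilizes the plant across a bounded range of effectiveness loss, because the sliding surface, not the exact plant gain, shapes the closed-loop behavior.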

Validation
The effectiveness of the resulting control scheme under control surface damage is demonstrated in the Hardware-in-the-Loop simulation.

Figure 6. Block diagram of nonlinear discrete sliding mode controller

Note that in digital avionics applications the control signal is sampled and executed at a fixed sample rate. Thus, direct implementation of a continuous control law may significantly alter the performance and stability of the controlled system. For this reason, it is necessary to design the controllers using discrete, rather than continuous, techniques.

During the evaluation, the following maneuver (desired attitude trajectory) is chosen: the aircraft banks to track its heading command while minimizing the sideslip angle for a coordinated turn and maintaining a zero flight path angle. The required variation of the heading angle is 720 degrees within a given time T sec, where T represents the time to complete the maneuver. The maneuver begins at the initial condition VT = 58.28 ft/s and h = 300 ft. In addition, up to 80% aileron surface loss is considered. This is emulated by multiplying the aileron input by an effectiveness factor σ.


Multiple Vehicle Coordination
In this section, we give an overview of the methods that we have developed for multiple vehicle coordination on the DragonFly experimental platform. Specifically, we highlight the differences between our approach and previous techniques, and provide numeric examples of the types of problems we focus on and can solve at this stage. The last portion of the section describes the Decentralized Coordination Network (DCN), a computational tool developed to supplement the DragonFly experimental platform and provide a realistic assessment of our coordination algorithms under real-world conditions such as communication delays and asynchronous message passing.

Figure 12a. Four aircraft coordinate through common airspace for safety assurance

One particular reason for our interest in multiple vehicle control has been the ever-increasing number of application areas, such as formation flying [13], aviation surveillance and imaging, precision agriculture, and environmental control and monitoring. In particular, concepts such as unmanned air vehicle fleets for battlefield scenarios and communication relays push development not only of the required hardware systems but also of fundamental theoretical research on the control of these distributed systems. An immediate but sound effect of this interest has been the direct application of known control system design techniques to relative control [14] of two or more vehicles. Note that these formulations result in leader-follower architectures at both the control system and decision-making levels. However, this requires a centralized knowledge base of dynamic models, vehicles and operational constraints, a property hardly achievable in a dynamic environment where the number of vehicles and the vehicle types change. There have also been numerous methods, dating back to the 1980s, developed in artificial intelligence [15] to interconnect individual decision-making models with heuristics, resulting in emergent motion patterns. In many cases this provides very flexible solution structures, but with minimal or no convergence or optimality (in some sense) guarantees.

Our approach differs in both of these aspects, as we can provide convergence guarantees with minimal overlap of individual dynamic models and vehicular constraints. The basic property of our approach is that individual decision makers iterate over and communicate local solution patterns (configuration states) and local optimization cost decreases. Through these local optimizations, we show that we can obtain emergent global optimization patterns under mild constraints. A description of our method, its convergence guarantees and the equilibrium properties for multiple vehicles can be found in [16].

Figure 12b. Multiple vehicle routing example with multiple depots

Figures 12a and 12b show two distinct test cases on which the algorithm was tested. The first is a typical air traffic scenario where each aircraft has to alter its initial flight path to avoid conflicts. Safety assurance in this scenario is defined as at least 5 km separation from every other aircraft at all times. In this simulation, the imperfect communication links are modeled by binomial distributions, which effectively act as binary erasure channels. A major difference of our approach from a standard centralized optimization is that the solution is reached in a decentralized fashion, where each vehicle only knows its own local dynamics, velocity and turn rate constraints, and the corresponding inter-vehicular safety constraints. Figure 13a shows the decrease and convergence of the global cost function over the set of iterations carried out by the vehicles. In this figure, each line corresponds to an iteration carried out by one vehicle based on a solution pattern communicated from another vehicle.
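The binary-erasure link model mentioned above is simple to state in code: each transmitted solution pattern either arrives intact or is dropped with some probability. The sketch below is illustrative; the erasure probability and message contents are made up, and the seed only makes the sketch repeatable.

```python
import random

# Binary erasure channel sketch for the communication-link model above:
# each message independently arrives intact or is erased (None) with
# probability p_erase. The probability and payloads are illustrative,
# and the fixed seed just makes the sketch repeatable.

def erasure_channel(messages, p_erase, rng):
    """Return each message unchanged, or None if the link dropped it."""
    return [None if rng.random() < p_erase else m for m in messages]

rng = random.Random(42)
sent = list(range(100))                   # 100 solution-pattern updates
received = erasure_channel(sent, p_erase=0.3, rng=rng)
delivered = [m for m in received if m is not None]

print(len(delivered))                     # roughly 70 of the 100 arrive
```

An erasure channel is a convenient worst-case-free abstraction: a message is either perfect or absent, so the coordination algorithm never has to reason about corrupted contents, only about missing ones.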

Figure 13a. Global cost function converges as the number of iterations increases

Notice that there are regions where the cost function for a particular iteration stays constant. This occurs when, for that iteration, the communication link with the corresponding vehicle was down, or the data was incomplete, corrupted or absent because of asynchronous message passing. Figure 13b shows how the coordination scenario and the iterations are perceived by vehicle #1. Notice especially the bad link with vehicle #3. In our "cooperative bargaining" scheme, "quality solutions" (iterative optimization solutions with a large decrease in the global cost) can propagate even over an incomplete network. This is observed here: after a long period of no communication with vehicle #3, vehicle #1 receives a solution which yields a very large cost decrease via local optimization of vehicle #3's proposed solution. However, as the global cost function plot suggests, a solution of similar quality had already been proposed by vehicles #2 and #4.

The second main example, shown in Figure 13b, is a standard multiple vehicle routing problem where three vehicles located at three different depots visit sixty waypoints (goals) within service time limits. The underlying concept of decentralized decision-making was used to initialize, and then improve on, these routes through local optimizations by the vehicles.

In addition, within this framework we have developed a computational test-bed for rapid prototyping and analysis of coordination algorithms. The Decentralized Coordination Network allows multiple MATLAB™ processes to run and communicate with each other using the standard TCP/IP network protocol. We utilize the RBNB Matlink™ libraries [17] provided by Creare Inc. to publish/subscribe time-stamped data through a hybrid network architecture with different types and rates of data sources and sinks.
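The flat-cost regions and the monotone decrease can be reproduced in a toy version of this "cooperative bargaining" iteration: vehicles take turns locally optimizing their own variable of a shared plan, and an erased message leaves the cost unchanged for that turn. The quadratic spacing cost and the link model below are illustrative stand-ins, not the formulation of [16].

```python
import random

# Toy "cooperative bargaining" round: four vehicles share a plan (one
# scalar each), take turns minimizing a coupled cost over their own
# variable, and broadcast the result. An erased message leaves the cost
# flat for that iteration, as in the regions described above. The
# spacing cost and link model are illustrative, not the method of [16].

def global_cost(plan):
    # vehicles prefer one unit of separation from the next vehicle
    return sum((plan[i + 1] - plan[i] - 1.0) ** 2
               for i in range(len(plan) - 1))

def local_step(plan, i):
    """Vehicle i minimizes the global cost over its own variable only."""
    best = min((plan[i] + d * 0.1 for d in range(-10, 11)),
               key=lambda v: global_cost(plan[:i] + [v] + plan[i + 1:]))
    return plan[:i] + [best] + plan[i + 1:]

rng = random.Random(7)
plan = [0.0, 0.0, 0.0, 0.0]          # four vehicles, conflicting plan
costs = [global_cost(plan)]
for k in range(40):
    i = k % 4                        # round-robin iteration order
    if rng.random() < 0.3:           # message erased: no update this turn
        costs.append(costs[-1])
        continue
    plan = local_step(plan, i)
    costs.append(global_cost(plan))

print(costs[0] > costs[-1])          # -> True: the plan improved
print(all(a >= b - 1e-9
          for a, b in zip(costs, costs[1:])))  # -> True: never increases
```

Because each local step includes the current value among its candidates, a vehicle never makes the shared plan worse, which is the toy analogue of the monotone cost decrease visible in Figure 13a.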

Figure 13b. Coordination and communication losses from vehicle #1's perspective

Figure 14 captures our initial setup, in which a formation of four aircraft (one group) and a world model run a decentralized coordination algorithm: the individual aircraft publish/subscribe local iterations (states, local cost decreases) to the world model in an asynchronous fashion. We envision this computational tool supplementing the DragonFly experimental platform to provide a realistic assessment of our coordination algorithms under real-world conditions such as data delays, communication losses and global clock differences among localized vehicles. Formal analysis of our methods, convergence properties and optimality conditions for equilibrium solutions of multiple vehicles can be found in reference [16].

Figure 14. Decentralized coordination network concept schematic (local control processes for aircraft #1 through #4 connect through the RBNB Matlink client/server layer over TCP/IP to an RBNB server and the world model in the initial setup)

Conclusions
We have presented a flexible and modular avionics architecture for a UAV system. This test-bed supports a variety of research focusing on fault-tolerant control under structural damage, control and coordination of closely spaced parallel approaches, and multiple vehicle coordination. As part of this platform, we have developed a Hardware-in-the-Loop simulator and a computational tool, the Decentralized Coordination Network.

Acknowledgements
The authors would like to thank William Larsen of the FAA; the NASA Ames Fabrication Shops, especially Garret Nakashiki and Damon Reid, for their help in DragonFly development and flight testing; NASA Ames Flight Operations; and Trimble Navigation for hardware donations, support, and direction in creating the GPS receiver.

This research is supported by DARPA under the Software Enabled Control Program (AFRL contract F33615-99-C-3014), by ONR MURI N00014-00-1-0637, by the FAA and by DSTA of Singapore under the DTTA award.

References
[1] J. Evans, S. Houck, G. McNutt, and B. Parkinson, "Integration of a 40 Channel GPS Receiver for Automatic Control into an Unmanned Airplane", Proceedings of the Institute of Navigation GPS Conference, Nashville, TN, September 1998.
[2] J. Evans, W. Hodge, J. Liebman, C. Tomlin, and B. Parkinson, "Flight Tests of an Unmanned Air Vehicle with Integrated Multi-Antenna GPS Receiver and IMU: Towards a Testbed for Distributed Control and Formation Flight", Proceedings of the ION-GPS Conference, Nashville, TN, September 1999.
[3] J.S. Jang and C.J. Tomlin, "Autopilot Design for the Stanford DragonFly UAV: Validation through Hardware-in-the-Loop Simulation", Proceedings of the AIAA Guidance, Navigation and Control Conference, Chicago, IL, June 2000.
[4] Y.B. Shtessel and C.H. Tournes, "Flight Control Reconfiguration on Sliding Modes", Proceedings of the AIAA Conference, AIAA-97-3632, 1999.
[5] J.K. Hedrick and S. Gopalswamy, "Nonlinear Flight Control Design via Sliding Methods", Journal of Guidance, Control, and Dynamics, Vol. 13, No. 5, 1990.
[6] H. Lomax and T.H. Pulliam, Fundamentals of Computational Fluid Dynamics, Springer-Verlag, 2000.
[7] S. Koczo, "Coordinated Parallel Runway Approaches", Rockwell International, NASA Contractor Report 201611, 1996.
[8] M. Jackson, P. Samanant, and C. Haissig, "Design and Analysis of Airborne Alerting Algorithms for Closely Spaced Parallel Approaches", Proceedings of the AIAA Guidance, Navigation and Control Conference, Denver, CO, August 2000.
[9] J. Hammer, "Study of the Geometry of a Dependent Approach Procedure to Closely Spaced Parallel Runways", Proceedings of the IEEE/AIAA 18th Digital Avionics Systems Conference, St. Louis, MO, 1999, 4.C.3-1.
[10] B. Carpenter and J. Kuchar, "Probability-Based Collision Alerting Logic for Closely-Spaced Parallel Approach", Proceedings of the AIAA 35th Aerospace Sciences Meeting and Exhibit, Reno, NV, January 1997, AIAA 97-0222.
[11] R. Teo and C. Tomlin, "Provably Safe Evasive Maneuvers against Blunders in Closely Spaced Parallel Approaches", Proceedings of the AIAA Guidance, Navigation and Control Conference, Montreal, August 2001, AIAA 2001-4293.
[12] WAAS Precision Approach Metrics: Accuracy, Integrity, Continuity and Availability, http://waas.stanford.edu/metrics.html.
[13] D.F. Chicka and J.L. Speyer, "Solar-powered, formation-enhanced aerial vehicle systems for sustained endurance", Proceedings of the American Control Conference, 1998, Vol. 2, pp. 684-688.
[14] C.W. Reynolds, "Flocks, herds and schools: A distributed behavioral model", Proceedings of SIGGRAPH, 1987, pp. 25-34.
[15] C.J. Schumacher and S.N. Singh, "Nonlinear control of multiple UAVs in close-coupled formation flight", Proceedings of the AIAA Guidance, Navigation, and Control Conference, 2000.
[16] G. Inalhan and C.J. Tomlin, "An optimization based method for decentralized coordination of multiple vehicles", manuscript prepared for the American Control Conference, 2002.
[17] "Data Turbine Matlab Toolkit Reference V1.1", Creare Inc., October 2000.
