Goal-driven automation of a deep space communications station: a case study in knowledge engineering for plan generation and execution

Randall W. Hill, Jr., Steve A. Chien and Kristina V. Fayyad
Jet Propulsion Laboratory, California Institute of Technology
Tel: +1 626 306 6144  Fax: +1 626 306 6912  Email: steve.chien@jpl.nasa.gov

Abstract: This paper describes the application of Artificial Intelligence techniques for plan generation, plan execution, and plan monitoring to automate a Deep Space Communication Station. This automation allows a communication station to respond to a set of tracking goals by appropriately reconfiguring the communications hardware and software to provide the requested communications services. In particular this paper describes: (1) the overall automation architecture, (2) the AI technologies used for plan generation and execution monitoring and the software components that implement them, and (3) the knowledge engineering process and effort required for automation. This automation was demonstrated in February 1995 at the DSS13 Antenna Station in Goldstone, CA, on a series of Voyager tracks, and the technologies demonstrated are being transferred to the operational Deep Space Network stations.1

Keywords: artificial intelligence, planning and scheduling, architecture

1. Introduction

The Deep Space Network (DSN) (JPL, 1994) was established in 1958 and has since evolved into the largest and most sensitive scientific telecommunications and radio navigation network in the world. The purpose of the DSN is to support unpiloted interplanetary spacecraft missions and to support radio and radar astronomy observations in the exploration of the solar system and the universe. There are three deep space communications complexes, located in Canberra, Australia; Madrid, Spain; and Goldstone, California. Each DSN complex operates four deep space stations: one 70-meter antenna, two 34-meter antennae, and one 26-meter antenna. The functions of the DSN are to receive telemetry signals from spacecraft, transmit commands that control the spacecraft operating modes, generate the radio navigation data used to locate and guide the spacecraft to its destination, and acquire flight radio science, radio and radar astronomy, very long baseline interferometry, and geodynamics measurements. Figure 1 shows a picture of a Deep Space Network 70-meter antenna located at Goldstone, CA. From its inception the DSN has been driven by the need to create increasingly more sensitive telecommunications

1 The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was conducted as part of the DSN Technology Program managed by Dr Chad Edwards. For further information contact: steve.chien@jpl.nasa.gov

Expert Systems, August 1998, Vol. 15, No. 3

Figure 1: A 70-meter Deep Space Network Antenna located at Goldstone, CA.

devices and better techniques for navigation. The operation of the DSN communications complexes requires a high level of manual interaction with the devices in the communications link with the spacecraft. In more recent times NASA has added some new drivers to the development of the DSN: (1) to reduce the cost of operating the DSN, (2) to improve the operability, reliability, and maintainability of the DSN, and (3) to prepare for a new era of space exploration with the New Millennium program, which will require support of numerous small, intelligent spacecraft while using very few mission operations personnel.

The purpose of this paper is threefold. First, we describe an architecture for automating a Deep Space Station to allow it to fulfill goal-based requests to capture spacecraft data. In particular, we will describe how the components of the architecture transform a flight project service request into an executable set of DSN operations that fulfill the request through automated resource allocation, goal-driven plan generation, and plan execution and monitoring. Second, we describe how each of the constituent AI technologies of plan generation and plan execution monitoring were used to enable goal-driven automation of the DSS13 Deep Space Station. Third, we describe the knowledge engineering effort required to automate the DSS13 Deep Space Station. In particular, we focus on the work required to cover new mission types and to integrate additional subsystems.

This paper is organized in the following manner. We begin by characterizing the operation of the DSN at the time that this research was performed. Next we describe the high-level architecture envisioned to automate operations of the Deep Space Network (DSN).
In this section we give a functional description of each of the components, which include the Operations Mission Planner (OMP) (see Table 1 for abbreviations) scheduling system for automated resource allocation, the DSN Planner (DPLAN), an automated procedure generation system, and the Link Monitor and Control Operator Assistant (LMCOA), a plan execution and monitoring system. In addition we provide examples of the inputs and outputs of each of the components to illustrate what occurs at each step in the process of capturing spacecraft data. Next, we focus on the two components of the system involved in automation of a single Deep Space Station: the DPLAN planning system and the LMCOA plan execution and monitoring system. Specifically, we describe the knowledge representation used for DPLAN's decomposition rules and for the resultant temporal dependency networks (TDNs) used by LMCOA. Next, we describe the knowledge acquisition and validation processes performed while building the automation system. To this end we give a detailed example of knowledge and system engineering from an effort in 1995 where we added two subsystems to an existing monitor and control system in order to perform a telemetry track of the Voyager spacecraft using the AI technologies described in this paper. Finally, we describe

Table 1: Acronym table

Acronym   Meaning
BWG       Beam Wave Guide
DPLAN     DSN Planner
DSN       Deep Space Network
DSS       Deep Space Station
EM        Execution Manager
HTN       Hierarchical Task Network
JPL       Jet Propulsion Laboratory
LMCOA     Link Monitor and Control Operator Assistant
MVP       Multimission VICAR Planner
NOCC      Network Operations Control Center
OMP       Operations Mission Planner
PO        Programmable Oscillator
SM        Situation Manager
SOE       Sequence of Events
SPC       Signal Processing Center
TDN       Temporal Dependency Network
TP13      Telemetry Processor for DSS13
TPR       Total Power Radiometer
UWC       Microwave Controller

the results of the technology demonstration at DSS13 and ongoing efforts to insert the demonstrated AI technology into the operational DSN.

2. Current DSN operations

Voyager-1 is cruising at 17.5 kilometers per second toward the outer edge of the solar system. Though its onboard systems are mostly asleep during this phase of its mission, Voyager's health metrics are continually sent to Earth via a telemetry signal radiated by its 40-watt transmitter. It takes eight hours at the speed of light for the signal to reach its destination, Earth, billions of miles away. Upon arrival, the telemetry signal is received by a Deep Space Station which is part of an extremely sensitive ground communications system, NASA's Deep Space Network (DSN). At this station the signal is recorded, processed, and sent to the Mission Operations and Voyager project engineers at the Jet Propulsion Laboratory (JPL), who assess the health of the spacecraft based on the contents of the signal.

The type of activity just described occurs daily for dozens of different NASA spacecraft and projects that use the DSN to capture spacecraft data. Though the process of sending signals from a spacecraft to Earth is conceptually simple, in reality there are many earthside challenges that must be addressed before a spacecraft's signal is acquired and transformed into useful information.

2.1 Network preparation at the Network Operations Control Center

Figure 2 shows a simplified depiction of DSN operations (see JPL (1995) for a more complete description of the DSN processes). The first stage, called Network Preparation, occurs at the Network Operations Control Center (NOCC) located at JPL. The entire process is initiated when a flight project sends a request for the DSN to track a spacecraft. This request specifies the timing constraints of the track (e.g. when the spacecraft can be tracked), the data rates and frequencies required, and the services required (i.e. downlink of information from the spacecraft, commanding uplink to the spacecraft, etc.). The DSN responds to the request by performing the Network Preparation process, which includes attempting to schedule the resources needed for the track (i.e. an antenna and other required subsystems such as receivers, exciters, and telemetry processors) as well as generating the data products required to perform the track (predictions of the spacecraft location relative to the ground station, transmission frequencies, etc.). The output of this process is a schedule of tracks to be performed by DSN ground stations, equipment allocations to tracks, and supporting data required for tracks.

A key component of the support data is the Sequence of Events (SOE), which describes the time-ordered activities that should occur during the track. The SOE includes actions that the DSN should take (e.g. begin tracking the project's spacecraft at 1200 hours), and it also includes events that will occur on the spacecraft being tracked (e.g. the spacecraft will change frequency or mode at a designated time and the DSN should anticipate the event). Additionally, the DSN must generate predict information required for the track.
This is information about where in the sky the spacecraft will be relative to the antenna so that the antenna can be directed to the correct orientation to acquire the spacecraft and to

maintain pointing during the track as the earth rotates and the spacecraft moves.

2.2 Data capture at the Signal Processing Center

Once the schedule has been determined, and the SOE and predict information generated, the DSN operations process moves to the Signal Processing Centers (SPC)2, where a process called data capture occurs. The data capture process is performed by operations personnel at the deep space station. First they determine the correct operations necessary to perform the track. Next, they perform the operation: they configure the equipment for the track, establish the communications link (which we hereafter refer to as a 'link'), and then they perform the track by issuing control commands to the various subsystems comprising the link. Throughout the track the operators continually monitor the status of the link and handle exceptions as they occur. For example, the ground station may lose the spacecraft signal (this occurrence is called 'the receiver breaking lock with the spacecraft'). In this case the operations personnel must take immediate action to reacquire the spacecraft signal as quickly as possible to minimize the amount of data lost. All of these actions are currently performed by human operators, who manually issue tens or hundreds of commands via a computer keyboard to the link subsystems. The monitoring activities require the operator to track the state of each of the subsystems in the link (usually three to five subsystems), where each subsystem has many different state variables that change over time.

3. An architecture for automation of the DSN

In the last section we described the current process for transforming a flight project service request into an executable set of DSN operations. As we have already pointed out, many of the steps of the described processes are both knowledge and labor intensive.
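To make the monitoring burden concrete, the following sketch shows the kind of continuous state checking an operator performs by eye across the link subsystems. The subsystem names, state variables, and thresholds here are invented for illustration; they are not actual DSN telemetry.

```python
# Illustrative monitor: watch link subsystem state variables and flag
# exceptions that would require operator action. All names are invented.
link_state = {
    "receiver": {"lock": True, "snr_db": 12.4},
    "antenna": {"pointing_error_mdeg": 2.1},
    "telemetry_processor": {"frames_ok": True},
}

def check_link(state):
    """Return a list of exception strings for conditions needing action."""
    alarms = []
    if not state["receiver"]["lock"]:
        alarms.append("receiver broke lock: reacquire spacecraft signal")
    if state["antenna"]["pointing_error_mdeg"] > 5.0:  # assumed threshold
        alarms.append("antenna pointing error too large")
    return alarms

assert check_link(link_state) == []      # nominal track: nothing to report
link_state["receiver"]["lock"] = False   # simulate a break-lock event
print(check_link(link_state))
```

An operator performs this loop mentally for every state variable of three to five subsystems at once, which is precisely the workload the automation described next is meant to absorb.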
This paper describes efforts to automate portions of the process map shown in Figure 2 using Artificial Intelligence technologies. Figure 3 shows the specific tools and how they map onto the current DSN functions. Specifically, the Operations Mission Planner 26-Meter (OMP-26M) system is applied to the resource scheduling process, the Deep Space Network Antenna Operations Planner (DPLAN) is used for automatically generating DSN operations procedures, and the Link Monitor and Control Operator Assistant (LMCOA) automatically executes the operations procedures and performs connection operations.

Figure 2: An overview of DSN operations.

Figure 3: Current prototype systems for DSN automation.

2 To explain in further detail, the operational DSN Deep Space Stations (DSSs) are organized into complexes where several DSSs share a pool of common subsystems. These complexes are called Signal Processing Centers (SPCs). However, the prototyping work described in this paper took place at the DSN's research station, DSS13. DSS13 does not share subsystems with other DSSs because its equipment tends to be different (e.g. experimental or test versions) and hence does not belong to an SPC. Thus in the operational DSN the track plan generation and connection operations efforts would be at an SPC, but for this work they took place at a DSS. From an AI research standpoint the reader can assume that SPCs and DSSs are interchangeable.

3.1 OMP-26M: Automated scheduling

The high level resource allocation problem for the DSN is handled by the Operations Mission Planner (OMP-26M) (Kan, Rosas & Vu, 1996) scheduling system. The OMP system accepts as inputs: (1) generalized service requests from spacecraft projects of the form 'we need three 4-hour tracks per week' and (2) specific requests of the form 'we need to perform maintenance on DSS-28 from 1000–1600 on Wednesday October 24th.' OMP then produces a specific schedule allocating resources to requests, resolving conflicts using a priority request scheme which attempts to maximize satisfaction of high priority projects. This automated scheduling and resource allocation function corresponds to the processes previously described as 'schedule resources' and 'resource management.' OMP deals with schedules for NASA's 26-meter subnet involving thousands of possible tracks and a final schedule involving hundreds of tracks. While OMP performs a vital function in the automation architecture, this paper focuses on the elements of automation which were used to automate a single Deep Space Station, rather than resource allocation of a network of stations. Thus we will primarily discuss the other two components, DPLAN and LMCOA.

3.2 DPLAN: Automated procedure generation

The automated track procedure generation problem involves taking a general service request (such as telemetry, i.e. downlink of data from a spacecraft) and an actual equipment assignment (describing the type of antenna, receiver, telemetry processor, and so on) and generating the appropriate partially ordered sequence of commands, called a Temporal Dependency Network or TDN, for creating a communications link to enable the appropriate interaction with the spacecraft. The DSN Antenna Operations Planner (DPLAN) uses an integration of AI Hierarchical Task Network (HTN) (Erol, Hendler & Nau, 1994; Lansky, 1994) and partial order operator-based planning techniques (Penberthy & Weld, 1992) to represent DSN antenna operations knowledge and to automatically generate antenna operations procedures on demand from the service request and equipment assignment.

We chose to use an AI planning representation because of the design philosophy of choosing a constrained representation which can still naturally represent the problem. While it may have been possible to encode the procedure generation problem using an expert systems methodology (e.g. using rules to infer the appropriate actions), many of the decisions made during procedure generation are based on analysis of the effects and requirements of actions. It is precisely this type of analysis which planning technology provides, hence planning seemed a natural choice of representation. As a secondary motivation, the procedure execution layer, LMCOA, already used a plan-like representation of preconditions and postconditions of operators. Hence a significant amount of the knowledge engineering needed for the procedure generation problem was already required for the procedure execution element.
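A plan-style operator representation of the kind just described, with explicit preconditions and postconditions, can be sketched roughly as follows. The block name, condition tokens, and command string are our illustrative assumptions, not the actual LMCOA knowledge base.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A procedure step with plan-style conditions (names are illustrative)."""
    name: str
    preconditions: set
    postconditions: set
    commands: list = field(default_factory=list)

configure_receiver = Block(
    name="configure receiver",
    preconditions={"subsystems_connected"},
    postconditions={"receiver_configured"},
    commands=["RCV CONFIG A"],  # placeholder command string
)

def applicable(block, world_state):
    """A block may run only when all its preconditions hold in the world."""
    return block.preconditions <= world_state

world = {"subsystems_connected"}
assert applicable(configure_receiver, world)
world |= configure_receiver.postconditions  # simulate successful execution
print(sorted(world))
```

Because both the planner and the executor reason over the same precondition/postcondition vocabulary, procedure generation and procedure execution can share one knowledge engineering effort, which is the point made above.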
In providing a procedure generation capability, we adapted the Multimission VICAR Planner (MVP) (Chien & Mortensen, 1996), a previously constructed planner which had been used to automatically generate image processing procedures to support science data analysis requests. In order to perform the adaptation of MVP, two tasks were required. First, small amounts of interface code were written to reformat the output of the planner to satisfy the TDN formatting requirements. Second, a planning knowledge base for the antenna operations domain for telemetry tracks was created. This knowledge base contained knowledge on how to operate the DSS13 communications station equipment used in the demonstration.

3.3 LMCOA: Automated procedure execution

The automated execution component, called the Link Monitor and Control Operator Assistant (LMCOA), uses the

TDN (Figure 12) generated by DPLAN to perform the actual track and is responsible for monitoring the execution of the TDN3. This involves ensuring that the expected conditions and subsystem states are achieved, that certain types of closed-loop control and error recovery are performed in a timely fashion, and that commands are correctly dispatched to the subsystems controlling the link. LMCOA uses an operator-based representation of the TDN to represent necessary and desired conditions for execution of procedures, and it monitors relevant subsystem state.

While LMCOA provides a framework for automation of procedure execution, it does require that a fair amount of procedural knowledge be codified. While some pre- and postconditions of activities can be verified in a systematic fashion, others require writing application-specific code to perform these checks. Furthermore, some control loops and error recovery procedures also require development of application-specific code. As a result, the application and adaptation of LMCOA was expected to take more effort than the DPLAN adaptation. Since LMCOA was already being used at DSS13 to conduct experiments in Ka-band antenna pointing, much of the LMCOA software had already been developed and its knowledge base was already validated. The Voyager tracking experiment, however, required us to integrate two other subsystems, a telemetry processor (TP13) and a programmable oscillator (PO), with the existing LMCOA and monitor and control system. Determining more exactly the amount of effort required to perform such an adaptation was one of the purposes of the demonstration.

Thus, the combination of OMP, DPLAN, and LMCOA enables automation of a significant portion of DSN operations. In the remainder of this article, we focus on the DPLAN and LMCOA systems and how they combine to allow goal-driven automation of a Deep Space Communication Station.

4. DPLAN: Automated procedure generation

As stated above, the automated track procedure generation problem involves taking a general service request and an equipment assignment and generating a plan, or TDN (Figure 12), for running the track, i.e. creating a communications link to enable the appropriate interaction with the spacecraft. The DSN Antenna Operations Planner (DPLAN) uses an integration of AI Hierarchical Task Network (HTN) and partial order operator-based planning techniques to represent DSN antenna operations knowledge and to generate

antenna operations procedures on demand from the service request and equipment assignment. In this section we first describe the inputs and outputs of the DPLAN system. We then describe how DPLAN represents knowledge of DSN antenna operations procedures and constraints.

Figure 4: DPLAN and LMCOA inputs and outputs.

Track Plan Generation: Inputs and Outputs

DPLAN uses high level track information to determine appropriate steps, ordering constraints on these steps, and parameters of these steps to achieve the high-level track goals given the equipment allocation. In generating the TDN, the planner uses information from several sources (Figure 4):

Service Request: The service request specifies the goals of the track (e.g. to provide certain DSN services over a specified period of time). These include downlink/telemetry, commanding, ranging, and radio science types of services. A sample set of service requests/tracking goals is shown in Figure 5.

Project SOE: The project sequence of events specifies events from the mission/project perspective. Relevant information specified in the project SOE includes such items as the one-way light time (OWLT) to the spacecraft, notifications of the beginning and ending times of tracks, spacecraft data transmission bit rate changes, modulation index changes, and carrier and sub-carrier frequency changes.

Project profile: This file specifies project-specific information regarding frequencies and pass types. For example, the project SOE might specify frequency = HIGH, and the project profile would specify the exact frequency used. The project profile might also specify other signal parameters and default track types.

TDN KB: The Temporal Dependency Network (TDN) knowledge base (Fayyad, Hill & Wyatt, 1993) stores information on the TDN blocks available for the DSN Planner

3 LMCOA also uses the predict and SOE information generated by DSN operations to execute the track. Automatic generation of this essential information is an important portion of automating DSN operations but falls outside the scope of DPLAN. There are currently other efforts within the DSN to automate these processes relating to the generation of these types of information.

Figure 5: Example tracking goals.

and LMCOA to use. This knowledge base includes information regarding preconditions, postconditions, directives, and other aspects of the TDN blocks.

Equipment Configuration: This details the types of equipment available and the unique identifiers used to specify the exact pieces of equipment to be used in the track. These include the antenna, the antenna controller, the receiver, and so on.

4.1 How DPLAN constructs tracking plans

DPLAN uses the tracking goals (Figure 5) specified in the SOE to generate the operations procedure for a track. The DSN planner reduces the high-level track goals into executable steps by applying knowledge about how to achieve specific combinations of track goals in the context of specific equipment combinations. This information is represented in the form of task reduction rules, which detail how a set of high level goals can be reduced into a set of lower level goals in a particular problem-solving context. Each task reduction rule rigorously details the scope of its expertise in terms of track and equipment combinations. The information about the scope of applicability of the rule can be considered in terms of a track goal hierarchy and an equipment goal hierarchy, where the rule applies to all contexts below the rule in the relevant hierarchy (all specializations of its scope).

Using this problem specification, the DSN planner then uses task reduction planning techniques (also called hierarchical task network, or HTN, planning) and operator-based planning techniques to produce a parameterized track-specific TDN to be used to conduct the track. The actual planner used to generate the TDN is a modified version of the task reduction planning component of the Multimission VICAR Planner system. This track-specific TDN and the SOE can then be used by LMCOA to operate the antenna to conduct the requested track. DPLAN uses a hierarchical knowledge representation to represent DSN operations procedures.
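The scope rule just stated, that a piece of knowledge attached at a node applies to that node and to every specialization below it, can be sketched with simple parent-pointer hierarchies. The node names below loosely follow Figures 6 and 7, but the encoding (a single-parent tree, which cannot capture multiple inheritance such as a combined telemetry-and-commanding track specializing both goals) is our simplified illustration.

```python
# Hierarchies encoded as child -> parent maps (illustrative, single-parent).
track_hierarchy = {
    "telemetry+commanding": "telemetry",
    "telemetry+ranging": "telemetry",
    "telemetry": "track",
    "commanding": "track",
    "ranging": "track",
}
equipment_hierarchy = {
    "34m BWG": "34m", "34m HEF": "34m",
    "34m": "antenna", "70m": "antenna", "26m": "antenna",
}

def in_scope(node, scope, hierarchy):
    """True if `node` is `scope` itself or any specialization below it."""
    while node is not None:
        if node == scope:
            return True
        node = hierarchy.get(node)  # walk up toward the hierarchy root
    return False

# A rule attached at "telemetry" covers tracks containing telemetry goals...
assert in_scope("telemetry+commanding", "telemetry", track_hierarchy)
# ...but not a commanding-only track.
assert not in_scope("commanding", "telemetry", track_hierarchy)
```

A rule's applicability test is then just an `in_scope` check per hierarchy it names, e.g. one check against the track goal hierarchy and one against the equipment hierarchy.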
This hierarchical representation allows a knowledge engineer to specify the scope of each piece of knowledge (i.e. to which set of goals or equipment types a rule or constraint applies). For example, Figure 6 shows a partial track goal hierarchy involving the goals telemetry, commanding, and ranging. The user can specify pieces of knowledge (rules) which apply at any point in the hierarchy. These rules are then interpreted as applying to all portions below the designated point. For example, a rule might specify how to achieve a set of goals in the context of a telemetry track. This information would potentially be relevant to any track that contained a telemetry goal, including telemetry and commanding tracks, telemetry and ranging tracks, and telemetry, commanding, and ranging tracks. Figure 7 shows a partial equipment hierarchy for antennae. Rules might also specify how to achieve goals, specifying breadth of expertise in multiple hierarchies (e.g.

Figure 6: Partial track goal hierarchy.

limiting the scope with respect to antenna type and antenna size). For example, a rule might specify how to achieve the telemetry tracking goal for a 34m BWG antenna. Alternatively, a rule might specify a constraint on achieving telemetry for all HEF antennas. By representing the track, equipment, and other hierarchies, the scope of various pieces of knowledge regarding antenna track activities can be naturally and easily represented.

Figure 7: Example equipment hierarchy — antenna types.

In HTN planning, task reduction rules specify how to reduce abstract activities into lower level activities. This process continues until all of the activities in a plan have been reduced into operational (i.e. executable) activities. In a task reduction rule, one specifies a goal G to be reduced, a context set of conditions C (which restrict the cases in which the rule applies), a set of lower level goals L (i.e. G is being reduced into L), and a set of constraints O, which specify constraints on the new goals L. For example, Figure 8 shows a graphic summary of a

Figure 8: Sample task reduction. Expert Systems, August 1998, Vol. 15, No. 3

Figure 9: Augmentation of TDN required by additional programmable oscillator track goal.

task reduction rule. This rule has two context conditions: that the station being operated is DSS13 and that tracking goal ‘downlink track’ be present. The rule states that if these context conditions are met, the abstract task of precalibration can be achieved by performing the lower level tasks of inspecting the subsystems, connecting the subsystems, configuring the total power radiometer, loading antenna predicts, and configuring the receiver. Furthermore, the subsystems must be inspected before connecting the subsystems, and so on. Note that some of these tasks are not operational tasks and will be later expanded into more detailed (operational) tasks. For example, configuring the total power radiometer involves configuring the IF switch, configuring the UWC (Microwave controller)/TPR for precalibration, and performing the actual TPR pre-calibration. Next consider the task reduction rule shown in Figure 9. It states that in the equipment context of DSS13 and in any tracking goal context, the abstract goal of including a programmable oscillator (PO) can be achieved by adding the steps: load PO files followed by configure doppler tuner. Additionally, these steps must be ordered with respect to connect subsystems and load antenna predicts steps as indicated. In the context of specific tracks, generic configuration blocks will be specialized as appropriate to the details of the track. For example, the task reduction rule in Figure 10 states that in the equipment context of Station DSS13, if the track context is downlink, telemetry, with symbol decoding requested, ‘receiver configuration block type A’ is the appropriate one to use to configure the receiver. Considerable effort in computing the final TDN is devoted to determining the correct parameters for blocks in

Figure 10: A specialized task reduction. Expert Systems, August 1998, Vol. 15, No. 3

the TDN. For example, Figure 11 shows a configuration table used to determine the IF switch parameter for the TPR precalibration step. Depending on the communication bands used in the track, differing bands will be assigned to each of the communication pathways in the UWC/TPR. Based on the bands being used, the TPR IF switch parameter is also determined. This parameter setting is determined during the decomposition process, so that a correctly parameterized TDN can be constructed. In this case the tabular information is represented by using several rules: each rule contains context conditions corresponding to the bands used and specifies the correct IF1, IF2, and TPR IF Switch Parameter settings.

The application of decomposition rules continues until all of the activity blocks in the TDN are operational, that is to say that known blocks in the TDN KB can be used to instantiate each and every activity in the TDN. This fully instantiated TDN can then be used with the LSOE by LMCOA to perform the actual track. Figure 12 shows the TDN for a Voyager downlink telemetry track using the programmable oscillator for the DSS13 DSN antenna station.

5. LMCOA: Automated procedure execution

The automated execution component, called the Link Monitor and Control Operator Assistant (LMCOA), uses the TDN (Figure 12) generated by DPLAN to perform an actual track. LMCOA is responsible for monitoring the execution of the TDN. This involves several things: ensuring that the expected conditions and subsystem states are achieved; performing certain types of closed-loop control and error recovery in a timely fashion; and dispatching commands to the subsystems controlling the link. LMCOA uses an operator-based representation of the TDN to represent necessary and desired conditions for execution of procedures and tracks relevant subsystem state.
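The execution behavior just outlined (run blocks in dependency order, check preconditions, dispatch commands, verify postconditions) can be sketched minimally as follows. The block names echo those in the text, but the condition tokens and control flow are our simplified illustration of an execution manager, not LMCOA's actual implementation; real blocks can also run in parallel, which this sequential sketch omits.

```python
# Simplified TDN execution sketch: run blocks in dependency order,
# checking preconditions before commanding and recording postconditions
# after. Condition tokens are invented for illustration.
tdn = {
    "connect subsystems": {"needs": set(), "pre": set(),
                           "post": {"connected"}},
    "configure IF switch": {"needs": {"connect subsystems"},
                            "pre": {"connected"}, "post": {"if_switch_set"}},
    "configure receiver": {"needs": {"connect subsystems"},
                           "pre": {"connected"}, "post": {"receiver_ready"}},
}

def execute(tdn):
    world, done, order = set(), set(), []
    while len(done) < len(tdn):
        # Eligible blocks: every predecessor has finished (partial order).
        ready = [b for b, spec in tdn.items()
                 if b not in done and spec["needs"] <= done]
        block = ready[0]
        spec = tdn[block]
        assert spec["pre"] <= world, f"precondition failed for {block}"
        world |= spec["post"]  # commands dispatched; effects assumed verified
        done.add(block)
        order.append(block)
    return order, world

order, world = execute(tdn)
assert order[0] == "connect subsystems"
print(order)
```

When a precondition check fails here the sketch simply raises, whereas LMCOA, as described below, surfaces the failed condition to the operator for manual recovery.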
The LMCOA performs the operations procedures for a tracking activity by executing a Temporal Dependency Network (TDN), which is a procedure that is automatically generated by DPLAN, as described in the last section. DPLAN composes the TDN so that it contains the procedures (TDN blocks) needed for a specific tracking activity, and it orders them according to its knowledge of the dependencies that are defined among the blocks as well

Figure 11: TPR IF switch parameter determination.

as by what it knows about the pre- and postconditions of the blocks. The knowledge about interblock dependencies and about block pre- and postconditions is passed to the LMCOA, whose task it is to execute the end-to-end procedure.

Figure 12: Temporal dependency network for Voyager track.

The LMCOA receives the TDN in the form of a directed graph, where the precedence relations are specified by the nodes and arcs of the network. The blocks in the graph are partially ordered, meaning that some blocks may be executed in parallel. Temporal knowledge is also encoded in the TDN, which includes both absolute (e.g. acquire the spacecraft at time 02:30:45) and relative (e.g. perform step Y 5 minutes after step X) temporal constraints. Conditional branches in the network are performed only under certain conditions. Optional paths are those which are not essential to the operation, but may, for example, provide a higher level of confidence in the data resulting from a plan if performed. More details about TDNs are provided in Fayyad, Hill & Wyatt (1993).

To execute a TDN, LMCOA performs the following functions: (1) it loads the parameterized TDN into the Execution Manager; (2) it determines which TDN blocks are eligible for execution and spawns a process for each TDN block; (3) it checks whether the preconditions of each TDN block have been satisfied; (4) once the preconditions are satisfied, it issues the TDN block commands; and (5) it verifies whether the commands had their intended effects on the equipment. The Operator interacts with the LMCOA by watching the execution of the TDN; the operator can pause or skip portions of the TDN, check the status of commands within individual blocks, and provide inputs where

they are required. When part of a TDN fails, the LMCOA supports manual recovery by the operator, highlighting the point of failure and providing information about the preconditions or postconditions that failed to be satisfied. The LMCOA architecture is shown in Figure 13.

Figure 13: LMCOA architecture.

The Execution Manager (EM) oversees the overall execution of the TDN; it loads the TDN, selects the TDN blocks that are eligible for execution, spawns a thread process for each of the blocks that is ready for execution (e.g. TDN Block 1 in Figure 13), and monitors the execution status of the blocks. The EM selects a block for execution when all of that block's predecessors have been successfully executed. Since many of the TDN blocks require track-specific parameters, the EM finds the parameters in the sequence of events (SOE) information. Once a TDN block begins to execute, it requests the Situation Manager (SM), via the Router, to verify that the block's preconditions are satisfied. A precondition is a state that must be true in the physical world. For example, in Figure 12 the TDN block named 'Configure IF Switch' has a precondition that the connection between the monitor and control system and the subsystems has been established. Though this condition is the intended effect of the previous block, the precondition check verifies that the desired state actually holds before the LMCOA attempts to configure the IF switch. The result of the precondition check is sent back to the block, and the block forwards this information to the EM, which keeps track of the status of each block. Once the SM verifies that the preconditions are met, the TDN block dispatches its commands, one at a time, through the Router to the subsystems it is controlling. When a subsystem has successfully executed a command, a status message is sent to the block, which then dispatches the next command. Once all of the block's commands have been sent, the block sends a request to the SM to check whether the block's postconditions are satisfied; the execution of the block is not considered successful unless the postconditions hold in the subsystems. The EM tracks each stage of a block's execution: whether the block's preconditions and postconditions are satisfied, and the execution status of each of the block's commands. Much of this information is sent by the EM to the User Interface (UI). The UI graphically depicts the execution status of the TDN to the user, who views a copy of the TDN on the display. The display uses color to indicate the status of each block and command: the UI shows whether a block has been executed or not, whether it is currently executing or has been skipped or paused by the user, and whether there is an execution problem such as an unsatisfied precondition.

The SM is responsible for tracking the evolution of the monitor and control subsystems over the duration of the track. In order to perform this task it embodies significant knowledge of how to query the subsystems to determine information about their state and how to interpret the responses. This knowledge is represented as a set of inference rules within the SM knowledge base.

There are several obvious drawbacks to operating the monitor and control system manually. Certain DSN operations require continuous attendance by an operator over long periods of time, and some operations are highly repetitive and require large amounts of data entry. For instance, it is not unusual to conduct a Ka-band Antenna Pointing (KaAP) track lasting eight hours. During a KaAP track the procedure called 'Load Sidereal Predicts' is repeated many times (see the KaAP TDN in Figure 14 for an end-to-end view of the track), and it requires inputs by the operator each time it is conducted. We estimate that an eight-hour KaAP track performed manually requires about 900 operator inputs overall; for the same track, under nominal conditions, the LMCOA reduces the number of operator inputs to fewer than 10. Since humans tend to be error-prone on simple repetitive tasks, it makes sense to assign these tasks to the LMCOA, freeing the operator for the task of handling exceptions, which requires more intelligence and knowledge.

Figure 14: Ka-band antenna pointing (KaAP) track temporal dependency network.
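The execution cycle described in this section (eligible-block selection by the Execution Manager, precondition checks, one-at-a-time command dispatch, and postcondition verification) can be sketched in Python. This is a minimal illustration, not the LMCOA's actual interfaces: the `Block` class, the `dispatch` and `state_holds` callbacks, and the sequential loop are simplifying assumptions, since the real system spawns a process per block and routes condition checks through the Situation Manager and Router.

```python
# Minimal sketch of TDN execution as a partially ordered directed graph.
# Names and callbacks are illustrative; the real LMCOA runs blocks as
# parallel processes and checks conditions via the Situation Manager.

class Block:
    def __init__(self, name, commands, preconditions=(), postconditions=()):
        self.name = name
        self.commands = commands              # subsystem directives to send
        self.preconditions = preconditions    # states that must hold first
        self.postconditions = postconditions  # states verified afterwards

def execute_tdn(blocks, predecessors, dispatch, state_holds):
    """Execute a partially ordered TDN.

    predecessors maps a block name to the set of block names that must
    finish first; dispatch(cmd) sends one directive to a subsystem;
    state_holds(cond) stands in for the Situation Manager's check that a
    state actually holds in the subsystems.
    """
    done = set()
    pending = {b.name: b for b in blocks}
    while pending:
        # A block becomes eligible once all its predecessors have completed.
        eligible = [b for b in pending.values()
                    if predecessors.get(b.name, set()) <= done]
        if not eligible:
            raise RuntimeError("no eligible block: cycle or unresolved failure")
        for block in eligible:
            # Check preconditions even when they are the intended effect of a
            # predecessor: the check confirms the state actually holds.
            for cond in block.preconditions:
                if not state_holds(cond):
                    raise RuntimeError(f"{block.name}: precondition {cond} failed")
            for cmd in block.commands:          # dispatch one command at a time
                dispatch(cmd)
            for cond in block.postconditions:   # success requires postconditions
                if not state_holds(cond):
                    raise RuntimeError(f"{block.name}: postcondition {cond} failed")
            done.add(block.name)
            del pending[block.name]
```

In these terms, the 'Configure IF Switch' block of Figure 12 would list the monitor-and-control connection as a precondition, and an unsatisfied condition would be surfaced to the operator rather than raised as an exception.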

6. Knowledge Engineering

The goal-driven automation system we have described requires an extensive knowledge base to store data about the domain, and as such requires that knowledge engineering be performed in order to acquire and encode that knowledge. In this section we describe the level and types of effort required to acquire, encode and validate the knowledge base used in this demonstration. The levels of effort can be summarized as follows (in units of 8-hour workdays): it took about 36 workdays to acquire the knowledge, 45 workdays to encode it, and roughly three times as much time, 127 workdays, to validate the knowledge base. Table 2 summarizes the knowledge engineering effort in more detail. In the rest of this section we describe the overall effort to add a Voyager telemetry downlink mission to the already existing LMCOA. This involved not only acquiring knowledge about how to perform the operations, but also adding two subsystems to the architecture, which makes it a good case study of the effort involved in expanding such a system.

6.1 Acquiring the knowledge

The majority of the knowledge represented in the system consists of: the project SOE definition, the DPLAN knowledge base (used in generating the TDN), the TDN blocks themselves, the parameterization of the TDN blocks by both DPLAN and the LMCOA, and the directives which make up the TDN blocks used in the demonstration. Much of this information comes from subsystem knowledge, for example in defining the TDN blocks, identifying and understanding the subsystem directives within the blocks, and knowing how and when to parameterize subsystem directives within the blocks. Five subsystems were utilized for this TDN, two of which had never been used by the LMCOA, and one of which required more development and testing to incorporate it into the current TDN.

Table 2: Knowledge engineering effort

Knowledge Acquisition                      36 days
    Background information                 14
    Reuse blocks from other TDNs            4
    Interview experts                      11
    Produce TDN documentation               7
Knowledge Encoding                         45 days
    Project SOE parser                     10
    Monitor & Control system extended      11
    Adapt LMCOA to Voyager TDN             15
    Write, revise TDN flat file             4
    Decomposition rules, syntax editor      5
Knowledge Base Validation                 127 days
    TDN, pre- and postconditions           51
    Monitor and Control system extensions  19
    LMCOA adapted for Voyager TDN          24
    Project SOE parser                      7
    Goldstone testing                      24
    DPLAN                                   2

Acquiring this knowledge took about 36 workdays. The information was obtained by several methods, including reviewing documents, learning about the existing software systems, interviewing experts familiar with a particular part of the domain, documenting the knowledge, and participating in status meetings. Fourteen days were spent acquiring background information on DSN operations, SOEs, the Monitor and Control System at DSS13, and the LMCOA at DSS13. A small but very valuable amount of time was spent reviewing the existing TDNs for DSS13: a significant amount of knowledge was directly extracted from both the Ka-band link experiment (KaBLE) and KaAP TDNs for use in the Voyager TDN. The operational unit of a TDN, namely the block, has proven advantageous in our previous work on TDNs, and the reuse of blocks between different operations procedures is one key advantage. 12 of the 22 blocks, or 55% of the blocks in the Voyager TDN, came directly from the KaBLE and KaAP TDNs. Our plans for the next-generation LMCOA include the use of a relational database to store TDNs, blocks, and their contents. A block that can be used in more than one TDN needs to be represented only once in the database, and changes could be made to one or more of these reusable blocks without having to make significant changes to each TDN requiring the block. Therefore the cost of extending the LMCOA to support a type of

track was greatly reduced by reusing TDN blocks from existing TDNs. Most of the time spent acquiring pre- and postconditions occurred during the validation phase, owing to the way the procedural knowledge is captured from operations personnel; details about acquiring pre- and postconditions are therefore presented later, in the validation section. Eleven days were spent interviewing experts in the areas of DSN operations in general, DSS13 operations for a spacecraft telemetry downlink pass, subsystem monitor and control, and project SOE definition. In addition, members of the Voyager project were interviewed in order to obtain parameters specific to Voyager that were required for parameterizing the TDN. Seven more days were spent generating graphical and written documentation of the TDN, and four days were spent participating in status meetings.

6.2 Encoding the knowledge

The next step in the knowledge engineering process was to encode the knowledge into a machine-readable format. The knowledge representation had already been designed and used in previous efforts, thanks to the existing LMCOA, automated Monitor and Control System, and planner, so in this effort most of the time was spent encoding the knowledge into the pre-existing representation. Encoding the knowledge included: writing the TDN flat file (which specifies the Voyager TDN and is loaded by the LMCOA), adapting the LMCOA to Voyager, writing the parser for the project SOE, extending the DSS13 Monitor and Control system to automate the required subsystems, and developing/adapting DPLAN. Some highlights of the encoding effort are noted below. The DSS13 monitor and control system was expanded to include two new subsystems, a telemetry processor (TP13) and a programmable oscillator (PO), which were not previously used by other TDNs. The integration effort involved developing simple subsystem simulators for testing purposes as well as writing interfaces to the monitor and control system.
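To illustrate the SOE parsing step mentioned above, the sketch below extracts timed, track-specific events that could parameterize TDN blocks. The 'TIME EVENT VALUE' line layout is an invented stand-in for illustration only; the actual project SOE format is a DSN-internal product and is not shown in this article.

```python
# Hypothetical sketch of parsing a project SOE into timed events that the
# Execution Manager could use to parameterize TDN blocks. The line format
# "HH:MM:SS EVENT [VALUE]" is an assumption, not the real SOE format.

from datetime import datetime

def parse_soe(lines):
    """Parse lines like '02:30:45 BIT_RATE 160' into (time, event, value)."""
    events = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        time_s, event, *value = line.split()
        t = datetime.strptime(time_s, "%H:%M:%S").time()
        events.append((t, event, value[0] if value else None))
    return sorted(events, key=lambda ev: ev[0])  # chronological order

def parameters_for(events, wanted):
    """Collect the events (e.g. bit rate or modulation index changes) whose
    values are substituted into parameterized TDN blocks."""
    return [ev for ev in events if ev[1] in wanted]
```

For example, `parameters_for(parse_soe(soe_lines), {"BIT_RATE", "MOD_INDEX"})` would yield the timed changes a downlink TDN needs; boresighting events, noted above as missing from the current SOE definition, would be one candidate extension.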
The integration of TP13 into the monitor and control environment took the most time. The changes made to the LMCOA for the new Voyager TDN took 15 days. Some of this time was spent creating the ability for the LMCOA and DPLAN to read an SOE. As it turns out, the SOE is missing items that would be useful for automating the system; as other types of tracks and more subsystems are handled by the LMCOA, we will need to broaden the definition of the SOE to include information about things like boresighting and modulation index changes.

6.3 Validating the knowledge base

This phase of the knowledge engineering required about three times the level of effort of either the acquisition or

encoding phases. This phase includes on-site testing at Goldstone in the operational environment, testing and developing simulators, and validating the TDN, its pre- and postconditions, and other software modules. The validation phase took 127 workdays. Almost half of this time, 51 days, was spent validating the TDN, with most of the emphasis on the pre- and postconditions on TDN blocks. Validating the TDN involves making sure that the operations procedure is accurate. This can be a time-consuming task, since different experts have different ways of performing portions of the procedures. Over time the station operators create personally tailored versions of the procedures, which makes it difficult to come to a consensus on how they should be performed. After a TDN has been implemented and can be executed by the LMCOA, the operators can 'see' the procedure more clearly and refine it. In any case, these reasons all point to the need to be able to easily modify TDN blocks and pre- and postconditions. The format of the operations procedure must be simple, so that it is easy for developers and operations personnel to understand and maintain these procedures. We found that validating pre- and postconditions is made more difficult by the fact that operations personnel do not usually think in these terms. The need for pre- and postconditions is much more obvious when we can demonstrate the execution of the baseline TDN on the LMCOA. In this case the operators can observe subsystem actions occurring automatically as the TDN is executed, and they can identify when actions are being taken before a necessary subsystem state has been reached. These states are then implemented as preconditions. Part of the reason the operators are not familiar with the concept of preconditions is related to how they operate the manual monitor and control environment: in this environment, the operators often have a lot of equipment pre-configured.
A detailed question and answer session between the TDN developer and an operator proved useful for identifying which portions of the pre-configuration are preconditions for existing blocks in the TDN. Postconditions, in contrast, were more difficult for the operations personnel and developers to determine. There were two other factors in validating pre- and postconditions: (1) they were often quite complex, and (2) the subsystems often do not provide support for detecting the states associated with them. Pre- and postconditions can be very complex, making it difficult to encode them so that they have the desired effect. For example, a pre- or postcondition based on absolute or relative time can complicate the implementation of that condition. In an early version of the Voyager TDN, two TDN blocks occur in sequence, START RECORDING and then STOP RECORDING. The START RECORDING block simply tells the appropriate subsystem to start recording; the block then completes. The STOP RECORDING block does not execute until the appropriate time has been reached, according to the time in the SOE. Until this time, it appears that no blocks are executing in the LMCOA. That is true; however, the subsystems are recording data during this time. In order for the user interface to show that some activity was occurring, a postcondition was put on the START RECORDING block. According to this postcondition, the START RECORDING block finishes executing only when the time has come to stop recording. Therefore, during the long data-recording phase of the pass, the START RECORDING block in the LMCOA remains in the executing state. Subsystem support is also required in order to make use of pre- and postconditions. In this and previous TDNs, one or more subsystems did not provide status information to the monitor and control system that needed to be checked by a pre- or postcondition. For example, in the Voyager TDN, files were downloaded to the Station Data Recorder (SDR). We had to modify this subsystem to send a 'finished downloading' status back to Monitor and Control so that a postcondition could automatically test when downloading was completed. Extensions to the monitor and control system took 19 days to complete. The majority of this time was spent incorporating TP13 into the monitor and control system: new variables were added to the subsystem, and communications problems had to be overcome. This time also included writing subsystem simulators for testing monitor and control and the LMCOA in the integration and testing lab. A significant amount of time, 24 days, was spent at DSS13. Extensive testing and debugging was performed at Goldstone that could not be performed in the lab environment; much of this time was spent incorporating TP13 into the DSS13 monitor and control environment. Some of the difficulties encountered resulted in modifying sections of the knowledge base related to the monitor and control system as well as the LMCOA. Finally, we spent some time debugging various software modules: the SOE parser took seven days and DPLAN took two days.
Another 24 days were required to debug the Voyager version of the LMCOA.
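The time-based postcondition placed on the START RECORDING block can be sketched as follows. The function names and the numeric clock values are illustrative assumptions, not the LMCOA's actual interfaces; the point is only that the block's completion is tied to the SOE stop time rather than to its last command.

```python
# Sketch of the time-based postcondition described above: START RECORDING
# is only considered finished once the SOE-specified stop time arrives, so
# the user interface keeps showing it as "executing" while data is being
# recorded. The scalar clock is a stand-in for station time.

def start_recording_postcondition(now, stop_time):
    """The postcondition holds only once the recording window has elapsed."""
    return now >= stop_time

def block_display_state(now, start_time, stop_time):
    """UI status for the START RECORDING block during a pass."""
    if now < start_time:
        return "pending"
    if start_recording_postcondition(now, stop_time):
        return "completed"
    return "executing"  # recording in progress; no commands outstanding
```

With this convention, the block reads "executing" for the whole data-recording phase of the pass even though it dispatched its last command long ago, which is exactly the behavior the operators asked the interface to show.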

7. Results

In February 1995 a comprehensive demonstration was conducted to validate the concept of integrating and using the AI software described in the preceding sections to track a spacecraft with the DSN (Hill et al., 1995a, 1995b). In the demonstration, DPLAN generated the TDN shown in Figure 12 for a Voyager telemetry downlink track using the equipment configuration at Deep Space Station 13 (DSS13) in Goldstone, California, which included a 34-meter beam-waveguide (BWG) antenna and a telemetry processor. The TDN generated by DPLAN was successfully executed by the LMCOA: a communications link was

established with Voyager, and the 34-meter BWG antenna tracked the spacecraft with minimal human control. This demonstration validated several capabilities. First, we demonstrated the ability to use AI planning techniques to generate a family of telemetry downlink tracks for the DSS13 hardware with arbitrary bit rate and modulation index changes. Second, we demonstrated the capability to perform plan execution and closed-loop control to operate these tracks with minimal operator intervention: fewer than ten operator actions were required to perform a Voyager track, which would normally require an operator to take many times this number of steps. Third, the demonstration showed that it is possible to extend an existing knowledge base to accommodate new missions and subsystems. There is a cost involved in making any such extension, however, and we have provided the details of what was required in terms of knowledge engineering and development. As a result of this demonstration, DPLAN and the LMCOA are influencing the design of two new DSN subsystems, the Monitor and Control subsystem and the Network Planning and Preparation subsystem. While there are many outstanding research areas preventing complete end-to-end automation of the DSN (see Hill et al. (1995b) for a more detailed description of some of these), the concepts and technology demonstrated to date represent a significant step towards automation of routine DSN tracking services.

8. Discussion

8.1 Lessons learned

In demonstrating the automation at DSS13, several important lessons were learned. First, the validation of the knowledge base was a much more arduous task than expected. Considerable change occurred in the LMCOA knowledge base as a result of extensive testing on the actual operational equipment. This testing revealed the extreme difficulty of directly encoding the low-level procedural knowledge needed to execute DSN tracks.
This valuable lesson could have a significant impact on the eventual automation of DSN operations: it means that significant resources need to be devoted to validating the low-level antenna operations procedures, and adequate time for testing and validating these procedures is a must. In retrospect, given the complexity of the individual subsystems, this should not have been such a great surprise. A second important lesson was that modifying the planner knowledge base to account for changes in the TDNs was not overly difficult. This is shown by the relatively small amount of effort involved in knowledge engineering for the planner component relative to the execution component. This is partly because the planner was not as severely challenged, since it was only required to generate a few types of TDNs. In a more complete test, it would

have to generate a wide range of tracking TDNs to support a range of spacecraft for different types of tracks. In a more challenging test (Chien et al., 1996; Chien, 1996) the cost of acquiring, validating, and maintaining the planning knowledge would have been significantly greater. A third lesson, regarding the antenna operations domain, is that the majority of the interactions and complexity are at the lower level. Designing the blocks (i.e. determining the low-level directives to achieve and verify low-level subsystem states) was much more complicated, and more knowledge and labor intensive, than composing the blocks into a TDN. Because of this complexity, further efforts in this area have heavily utilized smart tools for representing and reasoning about subsystems (such as finite state diagrams, and software engineering and analysis tools).

8.2 Future work

One critical area for future work not explored in this article is the ability to close feedback loops between the various layers of the DSN automation architecture. These feedback loops are needed when a failure occurs at a layer that cannot be resolved at that layer, necessitating that the next higher layer resolve the problem. For example, execution of a TDN block by the LMCOA system may fail in such a way that the LMCOA cannot recover. In this case the planner should be able to replan from the current state to attempt to continue to provide tracking services as well as possible. If the planner cannot replan to maintain the services, the failure should be passed back to the scheduler, which may need to schedule additional tracks in order to ensure that the spacecraft/project is properly serviced. These issues are explored in further detail in Chien et al. (1997).

8.3 Related work

There are a number of existing systems which also integrate scheduling, planning, control, and execution monitoring. We do not attempt to review them all, but focus on a few representative systems.
To begin with, the main distinction between this architecture and other work is the hierarchical structure and the complexity of the DSN antenna operations domain. Brooks' subsumption architecture (Brooks, 1986) contains no hierarchy of planning, scheduling, or control. This type of architecture has been used for mobile robot navigation, where replanning and rescheduling is a more constrained problem compared to antenna operations, which must schedule and plan for multiple resources (antennae and subsystems) with both hard and soft temporal constraints. CIRCA (Musliner, Durfee & Shin, 1993) has a three-tiered architecture comprised of a planner, a scheduler, and an executor which interacts with the environment through actuators and sensors in a mobile robot navigation domain.

CIRCA does planning then scheduling, whereas the DSN automation architecture must first schedule and then plan. CIRCA's scheduler enforces hard real-time constraints but returns failure if it cannot meet the time constraints. DANS (Demand Access Network Scheduler)/OMP, on the other hand, enforces hard real-time constraints but always returns a schedule, using a priority scheme which maximizes the number of project requests that it accommodates; if some project requests cannot be accommodated, DANS/OMP will still return a schedule, even though it is sub-optimal. Bonasso et al. (1997) describe 3T, a three-tiered architecture with a planner, a sequencer, and a reactive skills module which interacts with the environment. Planning occurs hierarchically before sequencing, unlike the architecture which we describe in this paper, which does scheduling then planning. The sequencer in Bonasso et al. is a RAP interpreter (Firby, 1989) which encodes all the timing information within the RAPs. DANS/OMP does not use RAPs, and uses a more complex algorithm to schedule the projects' requests. Unlike the DSN automation architecture, Bonasso et al.'s system does not need all three of its tiers for a given task; in the DSN domain, scheduling, then planning, then control and execution must necessarily happen for successful antenna operations. ATLANTIS (Gat, 1992) is also a three-tiered architecture, similar to that of Bonasso et al. It is comprised of a controller which acts at the lowest, reactive level; a sequencer which is a special-purpose operating system based on the RAP system; and a deliberator which does planning and world modeling. In ATLANTIS it is the sequencer which does the brunt of the work; the deliberator is under the control of the sequencer. In fact, the deliberator's output is merely used as advice by the sequencer, and the entire system is able to function without the deliberator if necessary.
In the DSN automation architecture, as mentioned above, scheduling occurs hierarchically before planning; both steps are necessary. Also, there is a control and execution tier which is separate from the scheduling tier, unlike ATLANTIS, which combines sequencing with control. TCA (Simmons, 1994) has no real tiers, but many distributed modules working with a central control module via message-passing; there is no hierarchy that sets up schedules or plans, as TCA operates by setting up a task tree instead. AuRA (Arkin, 1989; Arkin & Balch, 1997) has three tiers (planning, sequencing, and execution) for use in mobile robot navigation. Its sequencer simply traverses a finite state automaton (FSA) expression of a plan, unlike the more powerful algorithms used for scheduling in DANS/OMP. Also, AuRA first plans and then sequences, whereas the DSN automation architecture first schedules, then plans. The Cypress architecture (Wilkins et al., 1995) has plan and execution modules which operate asynchronously. There is also an uncertainty reasoning module which communicates with both the plan and execution modules. The DSN automation architecture's scheduling, planning and execution modules can operate asynchronously, but there is no separate uncertainty reasoning module: each tier handles uncertainty independently. Cypress is also not truly a hierarchical architecture and has no scheduling component. The military domain that Cypress has been used for is fairly complex, but since there is no scheduling component, Cypress does not tackle as comprehensive a problem as that described in this paper. Both SOAR (Laird, Newell & Rosenbloom, 1987) and Guardian (Hayes-Roth, 1995) are general reasoning systems that can be adapted to a given task environment. The algorithms of the planner and the scheduler in the DSN automation architecture could be applied to a number of domains; the execution tier in our architecture, though, is particular to the antenna operations domain. Guardian does not have a hierarchical architecture, but uses a blackboard architecture with one module devoted to scheduling, planning, and control. SOAR also collapses all the tiers into a single mechanism. The New Millennium Deep Space One Mission Remote Agent (NMRA) (Pell et al., 1996, 1997) is an integrated software system including a planner/scheduler, a smart executive, and mode identification and recovery components, designed for control of an autonomous spacecraft. NMRA differs from our DSN automation architecture in several ways. First, our DSN architecture has a complex prioritization and satisficing component in its top layer; NMRA has no direct counterpart to this layer. The NMRA planner/scheduler corresponds to the scheduling and planning functionality provided by DANS and DPLAN in our DSN automation architecture, integrating the two functions as required by the spacecraft domain. The smart executive component provides functionality similar to the robust plan execution monitoring provided by the LMCOA and NMC.
In the DSN automation architecture, subsystem-level fault detection, isolation, recovery, and mode identification are supported in the individual subsystem controller software modules. The DSN automation architecture uniquely combines a scheduler, planner, and execution module to automate a complex domain with many conflicting hard constraints, handling replanning and rescheduling as necessary. The systems which have been designed for mobile robot navigation do not operate in as complex a domain as the DSN antenna operations domain. As for the general reasoning systems, these are not hierarchically organized into separate planning, scheduling, and execution tiers; this hierarchical organization is a necessary part of the DSN antenna operations domain. The DANS/OMP scheduler uses more powerful algorithms than any of the other described systems' schedulers or sequencers. Unlike most of these systems, in the DSN antenna operations domain it is necessary to first schedule and then plan, rather than

plan and then schedule. Lastly, during execution, none of the other systems described appears to be capable of communicating with as large a set of external equipment as there is in the DSN antenna operations domain, monitoring for possibly multiple antenna or subsystem failures.

9. Conclusions

This paper has described the application of Artificial Intelligence techniques for plan generation, plan execution, and plan monitoring to automate a Deep Space Communication Station. This automation enables a communication station to respond to a set of tracking goals by appropriately reconfiguring the communications hardware and software to provide the requested communications services. In particular, this paper has described: (1) the overall automation architecture, (2) the plan generation and execution monitoring AI technologies used and the implemented software components, and (3) the knowledge engineering process and effort required for automation. These technologies are currently being transferred to the operational Deep Space Network stations.

Acknowledgments

The research described in this paper was carried out by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This demonstration was funded by the DSN Advanced Technology Program, NASA Code O, managed by Dr. Chad Edwards. Elements of the technology demonstrated were previously funded by the Operations/Artificial Intelligence Program, NASA Code X, managed by Dr. Mel Montemerlo. We would also like to thank Richard Chen, Crista Smyth, Trish Santos, and Roland Bevan for their contributions to the DSS13 demonstration. We would also like to thank the anonymous reviewer(s) of this article, whose detailed and thorough suggestions contributed significantly to the presentation and discussion within this article.

References

Arkin, R. (1989) Motor schema-based mobile robot navigation, International Journal of Robotics Research, 8(4).

Arkin, R.
and T. Balch (1997) AuRA: Principles and practice in review, Journal of Experimental and Theoretical Artificial Intelligence, 9(2).

Bonasso, R.P., R.J. Firby, E. Gat, D. Kortenkamp, D.P. Miller and M.G. Slack (1997) Experiences with an architecture for intelligent, reactive agents, Journal of Experimental and Theoretical Artificial Intelligence, March.

Brooks, R. (1986) A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, 2(1).

Chien, S.A. and H.B. Mortensen (1996) Automating image processing for scientific data analysis of a large image database,

IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8), 854–859, August.

Chien, S.A., R.W. Hill Jr., X. Wang, T. Estlin, K.V. Fayyad and H.B. Mortensen (1996) Why real-world planning is difficult: a tale of two applications, in New Directions in AI Planning, M. Ghallab and A. Milani, eds, Washington, DC: IOS Press, 287–298.

Chien, S.A. (1996) Knowledge acquisition, validation and maintenance in an automated planning system for image processing, Proceedings of the Tenth Workshop on Knowledge Acquisition for Knowledge-Based Systems, Banff, Canada, November.

Chien, S.A., R.W. Hill Jr., A. Govindjee, X. Wang, T. Estlin, M.A. Griesel, R. Lam and K.V. Fayyad (1997) A hierarchical architecture for resource allocation, plan execution and revision for operation of a network of communications antennas, Proceedings of the 1997 IEEE International Conference on Robotics and Automation, Albuquerque, NM, April, 3340–3347.

Erol, K., J. Hendler and D. Nau (1994) UMCP: A sound and complete procedure for hierarchical task network planning, Proceedings of the Second International Conference on AI Planning Systems, Chicago, IL, June, 249–254.

Fayyad, K., R.W. Hill Jr. and E.J. Wyatt (1993) Knowledge engineering for temporal dependency networks as operations procedures, Proceedings of the AIAA Computing in Aerospace 9 Conference, San Diego, CA.

Firby, R.J. (1989) Adaptive Execution in Complex Dynamic Worlds, PhD thesis, Yale University.

Gat, E. (1992) Integrating planning and reacting in a heterogeneous asynchronous architecture for controlling real-world mobile robots, Proceedings of the National Conference on Artificial Intelligence (AAAI).

Hayes-Roth, B. (1995) An architecture for adaptive intelligent systems, Artificial Intelligence, 72.

Hill, R.W. Jr., S. Chien, C. Smyth and K. Fayyad (1995a) Planning for the Deep Space Network, Proceedings of the 1995 AAAI Spring Symposium on Integrated Planning Applications, Palo Alto, CA: AAAI Press.

Hill, R.W. Jr., S.A. Chien, K.V.
Fayyad, C. Smyth, T. Santos and R. Bevan (1995b) Sequence of events driven automation of the Deep Space Network, Telecommunications and Data Acquisition, 42–124, October–December. JPL (1994) Deep Space Network, Jet Propulsion Laboratory Publication 400–517, April. JPL (1995) Final report of the services fulfillment re-engineering team, JPL Interoffice Memorandum, March 14. Kan, E., J. Rosas and Q. Vu (1996) Operations Mission Planner — 26M users guide modification 1.0, JPL Technical Document D-10092, April. Laird, J.E., A. Newell and P.S. Rosenbloom (1987) SOAR: An architecture for general intelligence, Artificial Intelligence, 33(1). Lansky, A. (1994) Action-based planning, Proceedings of the Second International Conference on AI Planning Systems, Chicago, IL, June, 110–115. Musliner, D.J., E. Durfee and K. Shin (1993) CIRCA: A cooperative, intelligent, real-time control architecture, IEEE Transactions on Systems, Man and Cybernetics, 23(6). Pell, B., D. Bernard, S. Chien, E. Gat, N. Muscettola, P. Nayak and B. Williams (1996) A remote agent prototype for spacecraft autonomy, Sciencecraft for the New Millennia, SPIE Conference, Denver, CO, August. Pell, B., D. Bernard, S. Chien, E. Gat, N. Muscettola, P. Nayak, M. Wagner and B. Williams (1997) An autonomous spacecraft agent prototype. Proceedings of the First International Conference on Autonomous Agents, Marina Del Rey, CA, February. 155

Pemberthy, J.S. and D.S. Weld (1992) UCPOP: A sound complete, partial order planner for ADL, Proceedings of the Third International Conference on Knowledge Representation and Reasoning, October. Simmons, R. (1994) Structured control for autonomous robots, IEEE Transactions on Robotics and Automation, 10(1), February. Wilkins, D.E., K.L. Myers, J.D. Lowrance and L.P. Wesley (1995) Planning and reacting in uncertain and dynamic environments, Journal of Experimental and Theoretical Artificial Intelligence, 7.

The Authors

Randall W. Hill, Jr.
Randall W. Hill, Jr. is a computer scientist at the University of Southern California Information Sciences Institute (USC-ISI) and a research assistant professor in the Computer Science Department at USC. He received his BS degree from the United States Military Academy at West Point in 1978, and his MS and PhD degrees in computer science from USC in 1987 and 1993, respectively. Before joining USC-ISI, Dr Hill spent eleven years performing applied AI research at the Jet Propulsion Laboratory (JPL), where he developed numerous embedded AI applications for NASA's Deep Space Network (DSN) and the US Army. While at JPL he also managed the Monitor and Control Technology group and was the work area manager of Network Automation for the DSN. At USC-ISI, Dr Hill is developing agents for virtual environments, performing research on focus of attention, perception, and cognition, and assisting in the management of the Soar Project. His other research interests include architectures for integrated intelligence, multi-agent collaboration, cognitive modeling, and intelligent tutoring. Dr Hill also has a life-long passion for teaching and recently taught a graduate-level course in artificial intelligence at USC.

Steve Chien
Dr Steve Chien is Technical Group Supervisor of the Artificial Intelligence Group and Principal Computer Scientist in the Information and Computing Technologies Research Section at the Jet Propulsion Laboratory, California Institute of Technology, where he leads efforts in automated planning and scheduling. He holds a BS with Highest Honors in Computer Science, with minors in Mathematics and Economics, and MS and PhD degrees in Computer Science, all from the University of Illinois. Dr Chien has served as a program committee member and organizer of numerous workshops, symposia, and conferences. Most recently, he was the Program Chair for the 1997 NASA Workshop on Automated Planning and Scheduling for Space Exploration and Science, and he is a Program Committee Member and Workshops Chair for the 1998 Conference on AI Planning Systems. Dr Chien received the 1995 Lew Allen Award for Excellence and the 1997 NASA Exceptional Achievement Medal for his work in research and development of planning and scheduling systems for NASA. He has presented invited seminars on machine learning, planning, scheduling, and expert systems, has authored numerous publications in these areas, and serves as a consultant to several multinational corporations. His current research interests lie in the areas of planning and scheduling, machine learning, operations research, and decision theory.

Kristina Fayyad
Kristina Fayyad received an MSE in Computer Science and Engineering from the University of Michigan, Ann Arbor, in 1990. She also holds a BSE in Electrical Engineering and a BSE in Computer Engineering (1985) from Michigan. She has worked for 12 years as a software engineer, applying object-oriented methodologies to a range of research and development prototypes. Most recently she was a Senior Software Engineer at Cascade Design Automation, where she designed and tested an object-oriented database for VLSI design tools. From 1992 to 1996 she was at the Jet Propulsion Laboratory, California Institute of Technology, where she was a Technical Group Leader and managed a project for advanced monitor and control technology for JPL's Deep Space Network. Before JPL she worked at the Center for Machine Intelligence, a General Motors/EDS research center, focusing on automated diagnosis of manufacturing plant designs. From 1985 to 1988 she was at ERIM, Ann Arbor, MI, working on DARPA-funded research projects targeting autonomous vehicle navigation and remote sensing image processing.

Expert Systems, August 1998, Vol. 15, No. 3