A MODEL FOR COOPERATION BETWEEN HUMANS AND INTELLIGENT SYSTEMS

M. M. (René) VAN PAASSEN, Max MULDER and A. L. M. ABELOOS

Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 1, 2629 HS Delft, The Netherlands
M.M.vanPaassen@lr.tudelft.nl, M.Mulder@lr.tudelft.nl, almabeloos@hotmail.com

ABSTRACT

Advanced support and warning systems have been developed for application in aircraft. This paper looks at the possibility of integrating these systems and presenting coordinated information to the human pilots. An architecture based on agent technology is proposed to achieve cooperation between the support systems.

KEYWORDS

Aircraft Cockpit, Adaptive Interface, Intelligent Interface, Automation, Human Factors.

INTRODUCTION

Human operators of complex systems are usually supported by automated support functions. The increasing sophistication of these support functions leads to a system in which a new model of cooperation with the automation is needed; the human is no longer the supervisor of the automation, but the automation works alongside the human operator. The capabilities of each of the actors in such a complex system, the automation and the human operator, should be exploited to obtain robust and safe behaviour of the total system, while in addition the human operator's job satisfaction is ensured. This paper discusses a reference frame for the cooperation between human users and intelligent support functions in a complex system. Application of this reference frame results in the design of a support system architecture for aircraft support and warning systems.


REFERENCE FRAME

The technology of cooperating "intelligent" agents (Wooldridge and Jennings, 1995) allows the development of a modular support system, in which software agents each handle a support function in the system, communicate their advice, needs, goals, etc. to each other, and jointly communicate with the human operator. The main problem to be solved in such a system is that of adequate communication between the intelligent technology working behind (beyond) the interface and the human users of the system. This will be illustrated with the application domain of aircraft support and warning systems. Modern aircraft are equipped with several warning and support systems. The functioning of critical on-board systems, such as the power and hydraulics systems, and of critical components such as the engines, is all monitored by automation. In addition, several conflict detection systems are applied, for example the Ground Proximity Warning System (GPWS) and the Traffic Alert and Collision Avoidance System (TCAS). With the modernization of air traffic control, monitoring of air traffic by means of the Airborne Separation Assurance System (ASAS) will be added (Abeloos, 1999). Currently these systems function largely independently of each other, and all of them have their individual "channels" to communicate with the pilot, thereby competing for the scarce space on the main instrument panel and for the scarce attentional resources of the pilot.


Figure 1: Human operator supported by intelligent agents (A) in the control of a system. Through actuators/sensors and monitoring and automation functions, the agents observe and influence the environment and the system. Information about system goals is present both within the agents and in the human operator(s).

Using agent technology, it should be possible to let all these warning and monitoring systems communicate with each other. Using a proper ontology, warnings and limitations can be compared and made consistent with each other. Adding the priority of messages to the pilot provides the basis for adaptive communication (using changes in display format and the use of sound and/or tactile channels).

This leads to a system as depicted in Figure 1, in which the following can be distinguished:

• Sensors (outer layer of the circle) gather and measure signals from the environment of the aircraft, or from the aircraft itself. Examples are the interrogation of aircraft transponders and the detection of their replies by the TCAS hardware.

• Monitoring and automation functions (in the middle layer of the circle) use these signals and interpret them to detect meaningful events. Continuing with the previous example, this could be the TCAS logic for detecting aircraft in the vicinity on a course that could lead to a collision. Such functions already exist in modern aircraft, and are usually implemented as rule bases.

• Agents (in the inner circle). Each agent uses one or more functions to interpret the meaningful events in the context of the goals of the aircraft/pilot/automation system. The agent handling the TCAS warnings could translate those warnings into constraints for manoeuvring the aircraft. If properly designed, these agents "speak the same language", based on a common ontology, and can communicate with each other. Manoeuvring constraints from the TCAS agent can then be combined with constraints from other agents (e.g. engine and flight controls monitoring agents). Priority information, for example the fact that short-term evasive manoeuvres are at this point more important than the planning of the flight path 20 or more minutes in advance, can be used to adapt the display and convey the warning to the pilots in an adequate manner.

• The pilot(s), on the left side of the diagram, communicate with the support system through interfaces. Seen from the agents' viewpoint, communication with pilots is essentially similar to communication with monitoring and automation functions. Pilots can also have direct contact with the aircraft and the environment.
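To make this concrete, a minimal sketch follows (in Python, purely illustrative; the agent names, the vertical-speed representation of a constraint and all numbers are assumptions, not part of the design described here) of how a TCAS-like agent could translate a traffic event into a generic manoeuvring constraint, and how constraints from several agents could be combined into a single consistent advisory.

```python
from dataclasses import dataclass

@dataclass
class ManoeuvreConstraint:
    """Generic symbol in the shared ontology: a permitted vertical-speed band."""
    source: str        # agent that issued the constraint (assumption: a label suffices)
    min_vs_fpm: float  # minimum permitted vertical speed [ft/min]
    max_vs_fpm: float  # maximum permitted vertical speed [ft/min]

def tcas_agent(intruder_relative_alt_ft: float) -> ManoeuvreConstraint:
    """Translate a function-specific TCAS event into a generic constraint."""
    if intruder_relative_alt_ft > 0.0:                    # intruder above: do not climb
        return ManoeuvreConstraint("TCAS", -3000.0, 0.0)
    return ManoeuvreConstraint("TCAS", 0.0, 3000.0)       # intruder below: do not descend

def combine(constraints: list[ManoeuvreConstraint]) -> ManoeuvreConstraint:
    """Intersect constraints from different agents into one consistent band."""
    lo = max(c.min_vs_fpm for c in constraints)
    hi = min(c.max_vs_fpm for c in constraints)
    if lo > hi:
        raise ValueError("conflicting constraints: escalate to the pilots")
    return ManoeuvreConstraint("combined", lo, hi)

# Example: TCAS requires a climb, while an engine-monitoring agent limits the
# achievable climb rate; the combined constraint respects both.
resolved = combine([tcas_agent(-500.0),
                    ManoeuvreConstraint("ENGINE", -6000.0, 1500.0)])
print(resolved)  # ManoeuvreConstraint(source='combined', min_vs_fpm=0.0, max_vs_fpm=1500.0)
```

Because both agents emit the same constraint type, neither needs knowledge of the other's internals; this is the practical benefit of the common ontology.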





One can see that information, as it travels from the world external to the support system towards its core, becomes more meaningful. Using the terminology of Rasmussen (1983), the outer layer handles signals, the middle layer converts these into signs, and the agents handle symbols.
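As an illustration of this layering, the sketch below traces one hypothetical path from signal to sign to symbol; the decoding, the 60-second threshold and the labels are invented for the example and do not reflect actual TCAS logic.

```python
# Illustrative only: one path through the three layers of Figure 1.

def sense(raw_transponder_reply: bytes) -> dict:
    """Signal layer: sensors deliver raw measurements."""
    # Hypothetical decoding; real TCAS surveillance is far more involved.
    return {"range_nm": 4.2, "closure_kts": 380.0}

def monitor(signal: dict) -> str | None:
    """Sign layer: monitoring functions turn signals into meaningful events."""
    time_to_closest_approach_s = signal["range_nm"] / signal["closure_kts"] * 3600.0
    return "TRAFFIC_CONFLICT" if time_to_closest_approach_s < 60.0 else None

def interpret(event: str | None) -> str | None:
    """Symbol layer: agents place the event in the context of the system goals."""
    if event == "TRAFFIC_CONFLICT":
        return "safety goal threatened: manoeuvring space must be restricted"
    return None

print(interpret(monitor(sense(b"..."))))
```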

As information travels inwards in this design of a support system, it is handled by increasingly abstract functions of the support system. The agents themselves consider the information they receive in the context of the goals of the combined system. For a correct interpretation of the received information, these goals must be known. Communication on a goal level between the agents themselves can be designed into the system. However, goal and intent information is distributed between the support system and the human pilots.

This requires that the interface between the support system and the pilots uses an ontology that enables the transfer of information at higher levels of abstraction without loss or the possibility of misunderstanding.

The agents in the support system and the pilots are separated by an interface that is similar to the interfaces between the outside world or the aircraft and the agents. Signals on the interface have to be interpreted as signs and subsequently as symbols, and this works in both directions, i.e. from the agents to the pilots and vice versa. The most automatic, and if it works most comfortable, implementation of the communication between a support system and the human operator would be based on interpretation of the human's actions and inference of the human's goal selection from those actions. However, for a safety-critical system such as an aircraft this is a dangerous design. Goal inference from simple actions is problematic at best (van Paassen, 1995), and a misinterpretation could lead to confusing messages from the support system. The safest bet would be the direct specification of the goals selected by the human to the support system. However, in an aircraft this would at times cause an unacceptable increase in pilot workload. In our research we will attempt to implement inference of the goals from higher-level information, such as the flight plan, and from predefined goals, such as the company strategy, combined with explicit but concise feedback of the agent's understanding of the system and a simple interface to correct the agent's understanding of the system goals.
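A minimal sketch of this intended approach follows; the goal representation, the inference rule and the confirmation dialogue are all assumptions made for illustration, not the actual implementation.

```python
# Hypothetical inference of goals from higher-level information, not from actions.

PREDEFINED_GOALS = {"assure safety", "minimise fuel burn"}  # e.g. company strategy

def infer_goals(flight_plan: dict) -> set[str]:
    """Derive the active goals from the flight plan instead of low-level actions."""
    goals = set(PREDEFINED_GOALS)
    if flight_plan.get("phase") == "descent":
        goals.add(f"execute descent to FL{flight_plan['cleared_level']}")
    return goals

def confirm_with_pilot(goals: set[str]) -> set[str]:
    """Concise feedback: state the agent's understanding, let the pilot correct it."""
    for goal in sorted(goals):
        if input(f"agent assumes goal '{goal}' - keep? [y/n] ").strip().lower() == "n":
            goals.discard(goal)
    return goals

active = confirm_with_pilot(infer_goals({"phase": "descent", "cleared_level": 80}))
```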


REFERENCE DESIGN

Summarizing from the previous section, a Human-Machine Interface that supports collaboration between one or more human pilots and a support system should provide the following:

• A means to optimally use the "communication bandwidth" between the support system and the human operators. To this end, the support system must know the total system goals and be able to infer what information is important in the context of these goals.

• A means for concise and clear communication between the human operators and the support system at a high abstraction level. In other words, the support system must know what the human is doing, and vice versa, and it must be possible to clear up any misunderstandings.

In addition, we want to strive for a modular design of the support system for our application. This is not only desirable from a design standpoint, it also facilitates modular integration of the stand-alone support systems that are presently in use, and the incorporation of new support systems. In the design depicted in Figure 2, the decision of how to use the information channels between the pilot(s) and the support system is made by an interface mapper. The content of the information is supplied by the different functionalities, implemented as agents. Selection of the format and priority of the information is done on the basis of recommendations by a selector. This selector combines information about the system goals with knowledge of the system and the environment. It does not need specific information about the functionalities of the other agents.
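The division of labour between the selector and the interface mapper could then resemble the following sketch; the goal priorities and channel names are assumptions made for the purpose of illustration.

```python
# Illustrative selector / interface mapper split; all names are assumptions.

GOAL_PRIORITY = {"safety": 0, "mission": 1, "comfort": 2}  # lower value = more urgent

def selector(messages: list[dict]) -> list[dict]:
    """Rank agent messages by the priority of the goal they concern.

    Uses only the goal attached to each message; no knowledge of the
    internals of the agent that produced the message is needed.
    """
    return sorted(messages, key=lambda m: GOAL_PRIORITY[m["goal"]])

def interface_mapper(ranked: list[dict]) -> None:
    """Decide how to use the available channels for the ranked messages."""
    channels = ["aural warning + primary display", "primary display", "status page"]
    for message, channel in zip(ranked, channels):
        print(f"{channel}: {message['text']}")

interface_mapper(selector([
    {"goal": "comfort", "text": "turbulence ahead, switch on seat belt signs"},
    {"goal": "safety",  "text": "traffic conflict, climb at 1500 ft/min or more"},
]))
```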


Figure 2: Design of an intelligent support system, in which an interface mapper, a selector and a goal manager are combined with function-specific agents that communicate with the human operator through the interface, exchanging goal and constraint information and updates on goals.

A goal manager maintains knowledge about the current system goals. Some of this knowledge is static; for example, safety must be assured at all times. Other information changes with the flight, e.g. each flight can have a different destination, and short-term goals, such as adherence to a certain flight level or the execution of a descent procedure, can change during the flight. The interaction with the environment, the aircraft and the pilots is performed by function-specific agents. Examples are the GPWS, TCAS or FMS (Flight Management System) agents. These agents translate function-specific "signs" into generic symbols, e.g. TCAS warnings are translated into a restriction of the flight space that would conflict with the safety of the flight. These agents sometimes have to make assumptions about the environment; for example, TCAS might assume that an aircraft on a potential collision course maintains its present climb. This information set consists of (1) the goal that is affected, (2) the constraint that has to be followed to ensure achievement of the goal (i.e. the specific do or don't), and (3) the assumptions used by the agent providing the information. For example, a collision warning on the basis of navigation information and a comparison with a database can be characterised as: (1) a threat to airframe integrity, which can be avoided by (2) a climb with a 10 deg flight path angle with a track of 100 to 250 deg, or a flight with a positive flight path angle with a track of 250 to 340 deg, based on (3) a specific position obtained from VOR/DME data and a terrain database file. This information set is the knowledge about the system and the environment that is needed by the selector. The selector uses this information, together with information about goal priority, to form a recommendation to the interface mapper. Function-specific agents that interact with the pilots can also supply information about the system goals to the goal manager, such as functions 2 and n in Figure 2.
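This three-part information set maps naturally onto a small record type. The sketch below encodes the terrain-warning example from the text; the field names and the use of plain strings are choices made here for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    """The three-part information set exchanged with the selector."""
    goal: str                  # (1) the goal that is threatened
    constraint: str            # (2) the specific do or don't that protects the goal
    assumptions: list[str] = field(default_factory=list)  # (3) basis of the advice

terrain_warning = AgentMessage(
    goal="airframe integrity",
    constraint=("climb at a 10 deg flight path angle with a track of 100 to 250 deg, "
                "or fly a positive flight path angle with a track of 250 to 340 deg"),
    assumptions=["position obtained from VOR/DME data", "terrain database file"],
)
```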

CONCLUSIONS

This paper looks at advanced support systems not as systems that are placed under the supervisory control of a human operator, but as systems that cooperate, alongside the human operators, in the achievement of the combined system goals. Several challenges can be identified in such a set-up, namely (1) that the support system must have an adequate representation of the combined system goals, (2) that goal information is shared between the human operators and the support system, and that therefore a match between these representations must exist, and (3) that there is a need for a concise communication format for goal information. A modular set-up of such a support system, using for example agent technology, ensures that existing technology can be integrated in the support system. An architecture in which a goal manager, a selector and a display mapper are combined with function-specific agents has been outlined.

REFERENCES

Abeloos, A. L. M. (1999). ASAS and ACAS interactions — A study to investigate the interactions between Airborne Separation Assurance Systems and Airborne Collision Avoidance Systems. Master's thesis, Delft University of Technology, Faculty of Aerospace Engineering, Delft.

Rasmussen, J. (1983). Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics 13(3): 257-266.

van Paassen, M. M. (1995). Tracking human-computer dialogues in process control applications. In H. G. Stassen and P. A. Wieringa (eds), XIV European Annual Conference on Human Decision Making and Manual Control, Delft, pp. 9-3.

Wooldridge, M. and Jennings, N. R. (1995). Intelligent agents: theory and practice. The Knowledge Engineering Review 10(2): 115-152.
