
Accepted by IEEE Transactions on Systems, Man, and Cybernetics, Sep 28, 2000

Conflict Detection During Plan-Integration for Multi-Agent Systems

K. S. Barber, T. H. Liu, S. Ramaswamy¹

The Laboratory for Intelligent Processes and Systems², Electrical and Computer Engineering, The University of Texas at Austin
http://www.lips.utexas.edu, [email protected]
phone: (512) 471-6152, fax: (512) 471-3652

Abstract: This paper describes techniques developed for conflict detection during plan integration. Agents' intentions are represented with Intended Goal Structures (IGS) and E-PERT³ diagrams. Conflicts are classified as goal, plan, and belief conflicts. Before integrating individual plans and detecting plan conflicts, agents first detect and eliminate their goal conflicts by exchanging their IGSs. Plan integration is done by merging individual E-PERT diagrams. PERT diagrams have been used extensively in the systems analysis area since the 1980s to provide a globally consistent view of parallel activities within a project. We extend PERT diagrams for use in the plan integration activity within multi-agent systems. The E-PERT diagram helps maintain traceable temporal relations among agents' locally scheduled actions. Combined with pattern matching, plan conflicts due to resource sharing or conflicting conditions (i.e., post-conditions of one action disabling preconditions of another action) can be detected. The conflict detection techniques are implemented in the Sensible Agent Testbed to promote deployment and performance analysis.

1. Introduction A multi-agent system can be seen as a group of entities interacting to achieve individual or collective goals. Multi-Agent Systems (MAS) [1] implement distributed problem-solving, which provides many advantages including fast parallel computing, flexibility through partitioned expertise, narrow-bandwidth high-level communication, and increased fault tolerance. Many systems existing in the real world are

¹ Since Fall 1998, Dr. Ramaswamy can be reached at the Computer Science Department at Tennessee Tech University. Phone: (931) 372-3448, Email: [email protected], URL: www.csc.tntech.edu/~srini.

² This research is supported in part by the Texas Higher Education Coordinating Board Advanced Technology Program (#003658452). The authors would also like to thank Anuj Goel, Cheryl Martin, Robert Macfadzean, Eric White, Ryan McKay, David Han, and Joonoo Kim for their contributions to the Sensible Agent project.

³ E-PERT diagram stands for Extended PERT diagram.


naturally distributed due to their spatial, functional, or temporal differences; therefore, MAS fit naturally into such distributed environments. Through coordination, agents within such a system can work together to ensure coherent action. Conflict resolution is essential for coordinated agent behavior. Coordination is the process by which agents reason about and manage the interdependencies among their behaviors and try to ensure that all members of the system act consistently [2]. Coordination does not necessarily imply cooperation, but cooperation can be seen as a special case of coordination among non-antagonistic agents. The objectives of coordination include reduction of task redundancy, increased system performance, and elimination of conflicts that prevent agents from achieving their goals. Coordination and conflict resolution differ: coordination is a continuous process in the system, while conflict resolution is an event-driven process triggered once a conflict is detected. Conflicts occurring among agents are intrinsic and direct when the goals or beliefs that agents hold are logically inconsistent. Conflicts can also be indirect and implicit when goals, or beliefs, contradict one another [3]. Conflict resolution (CR) is an essential and necessary part of achieving coordination. The generic conflict resolution process includes conflict detection, search for solutions, and communication among agents to reach an agreement with regard to the conflict resolution solutions to be pursued. Conflict resolution to achieve coordinated behavior is key to successful plan integration among multiple agents. Plan integration is the process of merging and coordinating agents' individual plans to generate group-level plans. The first objective of plan integration is to eliminate conflicts such that every agent can achieve its goals. The second objective is to increase performance and reduce redundant activities. Conflict detection is the triggering event for conflict resolution; in addition, conflict detection is one way to verify coordination. Effective coordination in multi-agent systems should increase the degree of mutual predictability, decrease conflicts, and therefore increase system performance [3]. Since various kinds of conflicts may co-exist at the same time within a multi-agent system, conflict detection can be a very complicated and time-consuming process.


Communication or behavior inference may be needed to acquire agents' intentions before conflict detection starts. This paper presents conflict detection techniques for use during plan integration. Section 2 briefly reviews previous research. Section 3 formally specifies the conflicts encountered during plan integration. Section 4 describes goal and plan representation, while Section 5 introduces conflict detection. Section 6 provides an illustrative multi-robot example. Sections 7 and 8 present the implementation and simulation of the Sensible Agent environment. Section 9 presents a performance analysis case study, and Section 10 concludes the paper.

2. Related research

There are several research issues in conflict detection and analysis. In this section, we briefly describe the issues of conflict detection, starting from general principles and moving to the difficulties encountered in Multi-Agent Systems. In general, to detect goal and plan conflicts, a check is performed on the pre- and post-conditions of each intended goal and action. If the desired post-conditions are not compatible, goal conflicts are detected. If there is a planned action whose pre-conditions become invalid because of some agent's actions, plan conflicts are detected. Using the terminology of least-commitment planning, a causal link, $A_P \xrightarrow{Q} A_C$, indicates that action $A_P$ has an effect $Q$ that achieves precondition $Q$ of action $A_C$. A threat is an action, $A_t$, which has the effect $\neg Q$ and can be inserted between $A_P$ and $A_C$. Plan conflicts happen when a causal link $A_P \xrightarrow{Q} A_C$ is rendered ineffective by such a threat [4].
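As a concrete illustration of this check, the following minimal sketch (ours, not the implementation described in this paper; all class and field names are invented, and effects are encoded as literal strings with "!" marking negation) tests whether a candidate action threatens a causal link:

```java
import java.util.Set;

// Minimal sketch of causal-link threat detection in least-commitment
// planning. The string-literal encoding of conditions is an
// illustrative assumption.
class Action {
    final String name;
    final Set<String> effects; // post-conditions, e.g. "Q" or "!Q"
    Action(String name, Set<String> effects) {
        this.name = name;
        this.effects = effects;
    }
}

class CausalLink {
    final Action producer;  // A_P, establishes condition q
    final Action consumer;  // A_C, requires q as a precondition
    final String condition; // q
    CausalLink(Action producer, Action consumer, String condition) {
        this.producer = producer;
        this.consumer = consumer;
        this.condition = condition;
    }
}

class ThreatChecker {
    // A_t threatens A_P --q--> A_C iff A_t has the effect !q and the
    // ordering constraints allow A_t between A_P and A_C.
    static boolean threatens(Action candidate, CausalLink link,
                             boolean orderableBetween) {
        return orderableBetween
            && candidate.effects.contains("!" + link.condition);
    }
}
```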


One of the issues in detecting either goal conflicts or plan conflicts in a multi-agent system is knowing the intentions of other agents. Direct communication and behavior inference are two popular approaches to this issue. Behavior inference applies plan recognition to an agent's observed external behaviors to infer its intentions and detect conflicts [5]; the uncertainty about the correctness of the inference may limit the applicability of this approach. Direct communication involves an agent telling another agent its intentions. For example, Chu-Carroll and Carberry use an enhanced dialog model to capture agents' intentions [6]. Then, by comparing plans or by reasoning about their goals, an agent can detect potential conflicts. This approach is popular and effective; however, the information exchanged must be precise, accurate, and true. Even with the assumption that agents always tell the truth, there exists a possibility that an agent may provide information that is not current and hence possibly incorrect. Since differences between agents always exist in MAS, the purpose of conflict detection is not to detect all differences among agents, but only the ones that may prevent agents from achieving their goals concurrently. In addition, disparities among agents do not necessarily cause conflicts; different viewpoints can generate alternative solutions and contribute to problem solving in the system [7]. In the past two decades, researchers have developed various conflict resolution strategies for multi-agent systems. Common conflict resolution strategies include negotiation [8], arbitration [9], priority conventions [10], voting [11], and self-modification [5]. Previous research provides a foundation for the decomposition and analysis of multi-agent conflicts. Researchers have analyzed goal conflicts among fully cooperative [12] and not fully cooperative [13] agents. Resource conflicts [14] as well as plan conflicts [15] have also been specifically examined. A domain-independent conflict representation and classification is proposed in [16]. Conflict detection is an essential process in agents' problem-solving activities, especially during plan integration. In [17], problem-solving activities are roughly classified into the following five phases: agent organization construction (AOC), plan generation (PG), task allocation (TA), plan integration (PI), and plan execution (PE). Various techniques can be applied at these phases, and the execution order among the phases can be recursive and iterative. During plan integration, agents need to coordinate and eliminate conflicts among their local plans (which are generated according to their decisions at the previous phases, i.e., agent organization construction, plan generation, and task allocation) to generate group plans.


Researchers in project management have similar needs, i.e., coordinating activities. PERT/CPM⁴-based methods [18] have been used extensively in the systems analysis and project management areas to provide a globally consistent view of parallel activities within a project. A project is a set of tasks or activities related to the achievement of some objective that is unique and non-repetitive. The CPM method involves a graphical portrayal of the interrelationships among the elements of the project (activities) using a PERT diagram, plus an arithmetic procedure to identify the relative importance of each element in the overall schedule. Section 4 revisits the details of PERT diagrams and the proposed extensions. In this paper, we extend PERT diagrams (the result is called an E-PERT diagram) for use in the plan integration activity within multi-agent systems. The E-PERT diagram helps maintain traceable temporal relations among agents' locally scheduled actions and provides an overview of coordinated group activities. Unlike traditional PERT diagrams, E-PERT diagrams allow multiple start/end points and allow multiple viewpoints to coexist at the same time. An agent's activities are blocked together; therefore, interactions among blocks graphically represent the interactions among agents or potential conflicts. By verifying the graphical properties, conflicts can be detected and properly classified. This approach has not been found in previous research reports. The following section identifies the conflict types this research focuses on, i.e., goal conflicts, plan conflicts, and belief conflicts. Precise definitions are also provided.

3. Conflict Types and Definitions

Goal conflicts: Goal conflicts are conflicts involving a goal's properties, which may or may not be represented as ordering constraints or conditions. For example, in automobile design, optimizing speed and safety are goals that are difficult to achieve at the same time, and usually there is some compromise between them. The extreme resolution of such a conflict may be to forfeit one of the goals.

⁴ PERT stands for Program Evaluation and Review Technique; CPM stands for Critical Path Method.


For example, collision avoidance is a common resolution approach for a robot, where the robot gives up the conflicting sub-goal (e.g., a path to a position) and searches for new sub-goals (e.g., new paths) to resolve the conflict. In summary, goal conflicts can be resolved by forfeiting or modifying the corresponding goals.

Plan conflicts: Plan conflicts are conflicts in which certain preconditions of an agent's intended actions (of its temporary plan) become invalid due to the post-conditions of another agent's actions. In general, plan conflicts can be resolved by reordering plans to satisfy the preconditions of each action in the plans. Least-commitment planning techniques such as plan refinement operators (promotion, demotion, white knight, and separation) can be applied to resolve such conflicts.

Belief conflicts: Belief conflicts are conflicts that involve inconsistent beliefs. Since beliefs include propositions about facts and evaluations, belief conflicts can be inconsistent descriptions of facts or incompatible evaluation statements. Belief conflicts can also occur when agents have different structures for representing their beliefs, or when their beliefs are at different abstraction levels. Belief modification (of the agent itself or of others) can affect the agent's reasoning in reaching a solution to resolve conflicts. The result of belief modification can also influence its supporting goals and beliefs.

The above classification is not meant to separate conflicts into isolated classes. The classes are closely related, and conflicts can transform from one class into another. For example, after constraints are added to the goals involved in a goal conflict, the goal conflict may become a plan conflict, which requires a reordering of an agent's plans. Conversely, after an agent fails to find a solution for a plan conflict, the conflict may be transformed into a goal conflict, which requires a relaxation of over-constrained pre-conditions. In addition to transformations between the different types of conflicts, some conflicts can be so closely related to one another that the resolution of one automatically resolves the others. Before formally defining these conflicts, we introduce notation for conditions, plans, goals, beliefs, and their associated operators.


Let $cond_x$ specify a condition, with unique identifier $x$, signifying a world state or a constraint, where $(cond_x)_{t_1-t_2}$ is true iff $(t_1 \le t \le t_2) \Rightarrow cond_x$, where $t$ represents a time variable. Let $goal_x$ represent a set of desired conditions, $\{cond_{x_1}, cond_{x_2}, \ldots, cond_{x_n}\}$. Let $bel_x$ specify a belief, with unique identifier $x$. Let $plan_x$ represent an ordered, scheduled sequence of primitive actions, $\{act_{x_1} \rightarrow act_{x_2} \rightarrow \cdots \rightarrow act_{x_n}\}$. Primitive actions are operators through which agents can affect the world around them. Let $\{post\}_{act_x}$ represent the set of conditions that are post-conditions of $act_x$, and let $\{pre\}_{act_x}$ represent the set of conditions that are pre-conditions of $act_x$. Let $A^i$ be a particular agent, with unique identifier $i$, where

• $act_x^i$ specifies a particular primitive action, with unique identifier $x$, taken by $A^i$,
• $goal_x^i$ specifies a particular goal, with unique identifier $x$, held by agent $A^i$,
• $plan_x^i$ specifies a particular plan, with unique identifier $x$, held by agent $A^i$,
• $bel_x^i$ specifies a particular belief, with unique identifier $x$, held by agent $A^i$,
• $\{bel\}_{goal_x^i}$ represents the set of beliefs held by $A^i$ under which agent $A^i$ holds $goal_x^i$, and
• $\{bel\}_{plan_x^i}$ represents the set of beliefs held by $A^i$ under which agent $A^i$ holds $plan_x^i$.

Let the operators $\Rightarrow\Leftarrow$ and $\rightarrow\leftarrow$ denote "contradicts" and "is in conflict with" respectively. Now $bel_x \Rightarrow\Leftarrow bel_y$ iff there is either a strong or a weak contradiction of beliefs. Strong belief contradiction occurs when $(bel_x \Rightarrow \neg bel_y) \vee (bel_y \Rightarrow \neg bel_x)$, and weak contradiction of beliefs occurs when $(bel_a \text{ :- } bel_x) \wedge (bel_b \text{ :- } bel_y) \wedge (bel_a \Rightarrow\Leftarrow bel_b)$, where :- means "imply". Similarly, $cond_x \Rightarrow\Leftarrow cond_y$ iff there is either a strong or a weak contradiction of conditions. Strong contradiction of conditions occurs when $(cond_x \Rightarrow \neg cond_y) \vee (cond_y \Rightarrow \neg cond_x)$, and weak contradiction occurs when $(cond_x \wedge cond_y) \Rightarrow$ "undesirable system state". Note that the definition of weakly contradicting conditions is dependent on the application domain.

Goal Conflicts: Goal conflicts occur when two (or more) goals cannot be achieved together. The source of goal conflicts is the conflicting conditions required by the goals. That is,

$$\forall i,j,x,y,\; goal_x^i \rightarrow\leftarrow goal_y^j \;\text{ iff }\; \exists a,b,\; cond_a \Rightarrow\Leftarrow cond_b \wedge cond_a \in goal_x^i \wedge cond_b \in goal_y^j$$

Plan Conflicts: Plan conflicts occur when there are harmful interactions between agents' plans; certain post-conditions of agent $A^i$'s actions invalidate the pre-conditions of agent $A^j$'s planned actions. It is possible that goals are compatible while the plans selected to achieve those goals cause conflicts. That is,

$$\forall i,j,x,y,\; plan_x^i \rightarrow\leftarrow plan_y^j \;\text{ iff }\; \exists a,b,c,d,\; cond_a \Rightarrow\Leftarrow cond_b \,\wedge\, (cond_a)_{t_{a1}-t_{a2}} \in \{post\}_{act_c^i} \,\wedge\, \{post\}_{act_c^i} \in plan_x^i \,\wedge\, (cond_b)_{t_{b1}-t_{b2}} \in \{post\}_{act_d^j} \,\wedge\, \{post\}_{act_d^j} \in plan_y^j \,\wedge\, [t_{b1},t_{b2}] \cap [t_{a1},t_{a2}] \neq \emptyset$$

Belief Conflicts: Since agents may operate based on their own perspectives, belief conflicts can stem from contradictory facts in an agent's knowledge base, or from contradictory perceptions based on an evaluation of those facts. By modifying beliefs, an agent alters its reasoning process and may resolve the conflict at hand. This modification may also influence supported goals, plans, or other beliefs of an agent. Therefore, the related belief conflicts can be defined as

$$\forall i,j,x,y,\; bel_x^i \rightarrow\leftarrow bel_y^j \;\text{ iff }\; \exists a,b,c,d,\; bel_x \Rightarrow\Leftarrow bel_y \,\wedge\, \left[ \left( bel_x^i \in \{bel\}_{goal_a^i} \wedge bel_y^j \in \{bel\}_{goal_b^j} \wedge goal_a^i \rightarrow\leftarrow goal_b^j \right) \vee \left( bel_x^i \in \{bel\}_{plan_c^i} \wedge bel_y^j \in \{bel\}_{plan_d^j} \wedge plan_c^i \rightarrow\leftarrow plan_d^j \right) \right]$$
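To make the plan-conflict definition concrete, consider a small invented instance (the conditions, agents, and time intervals below are illustrative, not drawn from the paper's examples):

```latex
% Two agents schedule actions whose post-conditions both claim the
% non-sharable resource R1 during overlapping time intervals.
\begin{align*}
cond_a &= \text{``R1 held by } A^1\text{''}, \quad
cond_b = \text{``R1 held by } A^2\text{''}, \quad
cond_a \Rightarrow\Leftarrow cond_b \\
(cond_a)_{2-5} &\in \{post\}_{act_c^1}, \qquad \{post\}_{act_c^1} \in plan_x^1 \\
(cond_b)_{4-7} &\in \{post\}_{act_d^2}, \qquad \{post\}_{act_d^2} \in plan_y^2 \\
[4,7] \cap [2,5] &= [4,5] \neq \emptyset
\;\;\Longrightarrow\;\; plan_x^1 \rightarrow\leftarrow plan_y^2
\end{align*}
```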


The representation of conflicts for Sensible Agents is built on an agent's subjective local perspective, i.e., on the agent's own models of itself, the other agents, and its environment. The reason to build on agents' local perspectives is that building a globally consistent perspective is expensive and impractical; we refer interested readers to [2], in which Jennings describes the argument in more detail. The benefits of our approach to representing conflicts are: (i) the representation can be used to trace the sources of a conflict, and (ii) since conflict resolution relies on modifying agents' attributes, the representation can also serve as a road map for systematically finding potential solutions. Figure 1 shows a three-level representation of conflicts based on the relationships between goals, plans, and beliefs, and their relationship to system resources. The goal level represents the interdependencies among agents' goals and the necessary resources. The plan level shows the ordering relations among goals and their sub-plans. The belief level represents relationships among beliefs, either supporting or attacking relations. Links between beliefs and goals, or between beliefs and plans, show their supporting relationships. Sensible Agents use this representation of conflicts as part of their model of the environment and other agents. The information within this representation can be refreshed as agents periodically maintain their models, or it can be pursued actively during the conflict resolution process. Methods for gathering such information include direct communication with other agents, or observation of others' behaviors and inference of their intentions.


Figure 1. Conflict Representation Based on Goals, Plans, and Beliefs.


4. Goal and Plan Representation

This paper focuses on conflict detection during the plan integration process, where agents integrate their individual plans to achieve their own or shared goals. Specifically, we focus on goal conflicts and plan conflicts. Potential agent goals can be represented as classical AND/OR goal structures [19], also called goal trees. These goal structures may be dynamic or static, and need not be fully developed before an agent begins operation. A goal structure may be constructed through social activities such as negotiation, cooperation, and observation of environmental change. The dependencies among goals are represented as links that connect an agent's goals to its other goals or to the goals of other agents. These links can be either unidirectional or bi-directional [2]. Additionally, a goal's dependency on particular resources can be represented through unidirectional links [19]. In this fashion, resource constraints among goals can be characterized. Resource constraints are defined through the availability of system resources and need not be represented in the goal layer itself. In the current implementation of this representation, the goals that agents intend to pursue are represented in an agent's Intended Goals Structure (IGS) [20]. Once an agent chooses to accept a goal for achievement, this goal becomes an intended goal and is inserted in the IGS. Cohen and Levesque discuss agent intentions and their characteristics in detail [21]. In general, an agent must plan in order to achieve its intended goals. An agent may also assist other agents by planning, or helping to plan, for their intended goals. Intended goals that an agent knows about, but does not itself intend, are referred to as external goals.

Figure 2. Development of Agent A's IGS Over Time.


The IGS differs from an AND/OR goal tree in several ways. In this implementation, the goal tree contains templates (lacking instance assignments to variables) for potential goals that have not been accepted by any agent. The IGS, on the other hand, is an agent's representation of the actual goals it will attempt to achieve (its own intended goals) as well as any additional goals it has accepted and for which it must plan (external goals, intended by other agents). In addition, each entry in the IGS contains a list of resource descriptors associated with that intended goal, a list of the constraints on achieving that goal, and the agent's level of commitment to the goal. The IGS contains AND-only compositions of these goals and therefore does not represent alternative solutions or future planning strategies as goal trees do. Instead, the IGS represents what an agent has decided to do up to this point. Also, an agent can directly accept goals from other agents. Therefore, the IGS does not represent a strict decomposition of one top-level goal. Any goal in an IGS may simply contribute in some way to the likelihood of achieving the goals to which it is linked. Each agent maintains a unique IGS, as well as a (possibly incomplete) local model of the IGSs of other agents. Agent A, planning from the single goal tree shown at the left in Figure 2, may develop its IGS over time as shown. Note that the goals in the IGS reflect a subset of the goals in the goal tree. The conflict detection process examines the IGSs of each modeled agent and compares the post-conditions of agents' intended goals to determine whether conflicts exist. Based on existing literature in the operations research and project management fields, we adapt and extend PERT/CPM charts for plan representation and integration. Figure 3 shows a typical PERT/CPM diagram. Links represent activities (e.g., Act1, Act2, Act3) and are annotated with required times (e.g., T1, T2, T3) and required resources (e.g., R1, R2, R3). Nodes represent states marking the beginning and ending of related activities. The project objective (the node at the end of the PERT diagram, e.g., Node 8 in Figure 3) is equivalent to an agent's highest-level goal, and the intermediate nodes on the way to the project objective can be viewed as sub-goals. The sequences of ordered links are equivalent to the plans that agents must execute to reach their goals. Dashed links are dummy activities, which represent the dependencies among activities.


Figure 3. A PERT/CPM diagram example

CPM treats activity performance times in a simple, deterministic manner. It provides the ability to determine a project schedule that minimizes total project cost, as well as to answer queries about the costs involved in expanding or relaxing the time associated with activities. Scheduling systems using PERT diagrams have traditionally been based upon the idea of a fixed time for each activity. In a basic PERT diagram, three time estimates are obtained for each activity: an optimistic time, a most likely time, and a pessimistic time. This range of times provides a measure of the uncertainty associated with the actual time required to perform the activity. Based on these time estimates, an estimate of the likelihood that a plan will be completed on or before a specific scheduled date can therefore be generated. Although PERT diagrams were developed for scheduling purposes, we can use them as a tool to represent agents' plans and detect conflicts; in PERT diagrams, interactions among activities and resources are explicitly represented. To summarize, traditional PERT diagrams follow these rules [18]:

• Before an activity begins, all activities preceding it must be completed.

• Arrows in the diagram imply logical precedence only.

• Event numbers must not be duplicated in the diagram.

• Any two nodes may be directly connected by no more than one activity.

• Diagrams may have only one initial event (without predecessor) and only one terminal event (without successor). Diagrams in which more than one terminal node occurs can easily be transformed into diagrams with one terminal node by creating a new terminal node and connecting each original terminal node to it with dummy activities.

PERT diagrams were originally designed to represent a manager's perspective. To adapt PERT diagrams for multi-agent systems, it is necessary to represent an agent's local view of its activities and the relationships of those activities to other agents' activities. Specifically, nodes represent conditions of agents' goals, and links represent agents' activities. A plan is represented by a set of ordered links. An agent's local plan is a PERT diagram that is a subset of a group plan (group PERT diagram). To accomplish this, E-PERT diagrams are proposed, which extend PERT diagrams as follows:

• Multiple starting and ending nodes are allowed. If multiple ending nodes exist, the agents first need to make sure there are no conflicts among these ending nodes, i.e., the ending nodes cannot have conflicting post-conditions on their states. This requirement is due to the interdependency between goal conflicts and plan conflicts: plan conflicts may arise from goal conflicts, and the E-PERT diagram does not support the detection of goal conflicts. However, even given a set of agent goals (ending PERT nodes) and associated plans with no conflict among the goals, agents may still fail to merge their plans (e.g., due to resource sharing issues). The solution in this situation lies in goal modification (compromising) instead of plan modification (re-scheduling).

• Resource requirements associated with activities have been extended to encompass application-specific resources and pre- and post-conditions. Thus, the flexibility of PERT diagrams for different multi-agent applications is increased.

• Partially merged PERT diagrams are allowed. Partially merged PERT diagrams fall between two extremes: (a) an individual PERT diagram representing the plan of a single agent, and (b) a system-level diagram representing the coordination of plans for every agent in the system. Partially merged diagrams include only related agents' plans (e.g., those of agents that enter into groups to achieve shared goals). Merged PERT diagrams have a blocked representation scheme, and each block contains the set of activities and nodes for one agent.

• Nodes represent sets of conditions, and an agent's goal is represented as a combination of conditions. Therefore, an agent's goal can be decomposed and represented by a set of logically connected nodes (e.g., to be achieved at the same time or sequentially). Nodes can be shared among agents, but activities are not shared; therefore, activities cannot cross the boundaries of individual agent blocks. However, dummy activities (dashed arrows in Figure 6), which represent ordering constraints based on resource requirements or on corresponding pre-/post-conditions at the nodes, may cross these boundaries. This type of relationship helps to order activities among agents.

5. Conflict Detection

Although E-PERT diagrams are used to detect plan conflicts, the first step in conflict detection is still to detect goal conflicts. If any goal conflict is detected, the conflict resolution process should resolve those goal conflicts first, because conflicting goals cannot be transformed into compatible project objectives (the end nodes in E-PERT diagrams). Other nodes (besides the end nodes) may still carry conflicting conditions; the hidden assumption is that, by re-arranging the order among links and nodes, these conflicting conditions will not hold during the same (or overlapping) time intervals. A minimal sketch of this goal-level check follows.
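The sketch below assumes each goal reduces to a set of desired condition labels and that a domain-supplied table lists which conditions contradict each other; both encodings are our illustrative assumptions, not the paper's data structures:

```java
import java.util.Map;
import java.util.Set;

// Goal-level conflict check by pairwise comparison of desired
// conditions, following the definition in Section 3.
class Goal {
    final String holder;                  // owning agent
    final Set<String> desiredConditions;
    Goal(String holder, Set<String> desiredConditions) {
        this.holder = holder;
        this.desiredConditions = desiredConditions;
    }
}

class GoalConflictDetector {
    // contradicts.get(c) = conditions known to contradict condition c
    private final Map<String, Set<String>> contradicts;
    GoalConflictDetector(Map<String, Set<String>> contradicts) {
        this.contradicts = contradicts;
    }

    // Two goals conflict iff some desired condition of one contradicts
    // a desired condition of the other.
    boolean inConflict(Goal gx, Goal gy) {
        for (String a : gx.desiredConditions)
            for (String b : gy.desiredConditions)
                if (contradicts.getOrDefault(a, Set.of()).contains(b))
                    return true;
        return false;
    }
}
```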


After all known goal conflicts are resolved, agents can start detecting plan conflicts by exchanging their plans (individual PERT diagrams). Figure 4 shows example individual PERT tables, and Figure 5 provides the corresponding PERT diagrams for three agents: agent 1, agent 2, and agent 3. In this example, we assume there are four reusable resources, R1, R2, R3, and R4, shared by these agents. Action labels are distinct, and the pre-/post-condition requirements for all activities have been deliberately left out to simplify the illustration of the plan integration process. Figure 6 depicts the merged PERT diagram built from these individual PERT diagrams.

Agent1 PERT Table

  Action   Preceded by   Resource   Time Required   Pre-condition   Post-condition
  1        -             R1         T1              -               -
  3        1             R2         T3              -               -
  5        3             R1         T5              -               -

Agent2 PERT Table

  Action   Preceded by   Resource   Time Required   Pre-condition   Post-condition
  2        1             R1         T2              -               -
  4        2             R3         T4              -               -
  6        3             R4         T6              -               -

Agent3 PERT Table

  Action   Preceded by   Resource   Time Required   Pre-condition   Post-condition
  7        5, 6          R1, R3     T7              -               -
  8        7             R1         T8              -               -

Figure 4. Individual PERT Tables


Figure 5. Individual PERT diagrams



Figure 6. Merged PERT Diagram

During the merging process, agent 1 realizes that Action 5 must wait for agent 2's Action 2 to finish and release the required resource R1. As a result, a dummy link is introduced between Action 2 and Action 5. Similarly, agent 3 realizes it must wait not only for Action 5 and Action 6, but also for Action 4, since Action 4 also requires resource R3. If agents fail to merge their PERT diagrams, or if the diagram violates certain rules after the merging process, plan conflicts are detected.
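The resource-driven dummy-link step can be sketched as follows. This is our coarse simplification, not the paper's merge algorithm: it pairs activities of different agents that share a resource and inserts an ordering edge, using list order as a stand-in for the real temporal reasoning that decides which activity waits:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// For each pair of activities held by different agents that require a
// common resource, insert an ordering (dummy) edge so the later
// activity waits for the resource to be released.
class Activity {
    final String agent, name;
    final Set<String> resources;
    final List<Activity> dummySuccessors = new ArrayList<>();
    Activity(String agent, String name, Set<String> resources) {
        this.agent = agent;
        this.name = name;
        this.resources = resources;
    }
}

class PlanMerger {
    static void addDummyLinks(List<Activity> merged) {
        for (int i = 0; i < merged.size(); i++)
            for (int j = i + 1; j < merged.size(); j++) {
                Activity a = merged.get(i), b = merged.get(j);
                if (!a.agent.equals(b.agent)
                        && !Collections.disjoint(a.resources, b.resources))
                    a.dummySuccessors.add(b); // b waits for a to release
            }
    }
}
```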

6. Multi-Robot System Example

In this section, an example applying the above approach to robot path planning is presented to demonstrate conflict classification and detection. An example floor layout is illustrated in Figure 7. There are two runner robots, Robot 1 and Robot 2 (L11 and L51 being their default locations), that serve two machine groups. The two robots, with identical capabilities, deliver objects between various shop floor locations for the machine groups. Each location (or cell) is viewed as a required non-sharable resource for robots to move about. Locations such as L12 indicate the machine pick-up/drop-off points, while the shaded cells, such as L11, indicate the cells for robot navigation around the machine groups.

Figure 7. Floor Layout


For simplicity, it is assumed that pick-up/drop-off can occur on either side of a cell. A system-level goal would be to minimize the cumulative distance traveled by either of the robots. An agent-level goal to accomplish this system goal is that a robot chooses the shortest path from its current location to the next, with the assumption that the goals are already chosen with this consideration. The robots build their plans in the form of PERT diagrams; the choice of the shortest path is a property of the whole PERT diagram.

6.A. Detecting Goal Conflicts

Figure 8 shows an example AND/OR goal tree wherein Robot 1 is trying to serve location L33 and Robot 2 is trying to serve location L34. Goal conflicts that involve goals explicitly represented in the IGS are detected by the robots comparing their respective IGSs. This is done by tracing the interdependencies among goals and resources. In this example, both robots intend to get to the same location at the same time; a goal conflict arises because they both hold the same sub-goal, getting to a specific resource (location), to achieve their stated goals. Such goal conflicts are detected by comparing the post-conditions of their actions or goals. Such a conflict may be resolved using a location-occupation time line, as shown in Figure 9. If the times associated with these goals are known, then the robots can be re-scheduled to serve these two cells in sequential time periods.


Figure 8. Detecting Goal Conflicts Using IGS



Figure 9. Location Occupation Times

6.B. Detecting Plan Conflicts

6.B.1 Plan Conflicts Due to Overall System Constraints

This section illustrates the detection of plan conflicts caused by overall system constraints that are identified while the individual PERT diagrams are merged. Such constraints may come from domain-specific requirements of the system, including constraints on time, resources, or quality criteria. Merging PERT diagrams results in establishing proper ordering constraints (dummy activities) among agents' activities based on the resource constraints; merged PERT diagrams are still bound by the basic PERT rules. Figure 10 shows such an example, caused by an initial system criterion (choosing the shortest path from one node to the next). Two dummy links are added because of resource dependencies: Robot 2 needs to pass through locations L32 and L36 and has to wait for Robot 1 to go through L32, while Robot 1 may encounter a similar wait situation. That is, the robots will meet head to head if their individual plans are executed unchanged.


Figure 10. Detecting Plan Conflicts


Their desired sub-goals are not in conflict, but their path plans result in a resource (location) conflict. The presence of loops in the merged PERT diagram signals this potential problem. Thus, potential plan conflicts, which may in turn be the result of unidentified goal or belief conflicts, are detected by searching for loops created by the insertion of dummy activities in the merged PERT diagrams. To resolve such conflicts, the source of the conflict needs to be identified. In the above scenario, the dependencies introduced by the dummy activities created the conflict. This indicates that some of the basic condition requirements are too strong and hence need to be relaxed. In this example, tracing back to the robots' overall goal reveals the source of the conflict: choosing the shortest path. Therefore, the solution is that the robots (not necessarily both) must relax or compromise their "shortest path" objective. At this point, the conflict is re-identified as a goal conflict, and the agents seek to modify their goals. This may involve either re-planning the delivery routes or reordering the delivery stations. Implicit goal interactions like the shortest-path interaction above are hard to detect by just comparing agents' IGSs; merging PERT diagrams helps in identifying such issues and leads to the detection of the source of these conflicts. To solve such a conflict, the following plan modification schemes may be used: (i) Robot 2 waits at L41 until Robot 1 passes L31. (ii) Robot 2 gives up the shortest-path solution from L32 to L36 and instead goes around through a longer path. (iii) Robot 2 reorders its plan to go to L52 first and then follows Robot 1 through the middle section (L31..L37). (iv) Robot 2 reorders its plan to go to L52 first and then goes before Robot 1 through the middle section (L31..L37). The conflict resolution process will use utility measures to evaluate the relative merits of these four strategies (or any other possible solution) and propose them as potential solutions for plan modification.
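The loop check mentioned above is ordinary cycle detection on the merged diagram. A sketch using depth-first search (the integer node ids and adjacency-map encoding are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain depth-first search for a back edge: a cycle in the merged
// diagram (real plus dummy activities) flags a plan conflict such as
// the head-to-head robot situation above.
class CycleDetector {
    static boolean hasCycle(Map<Integer, List<Integer>> successors) {
        Map<Integer, Integer> color = new HashMap<>(); // 0=new,1=open,2=done
        for (Integer node : successors.keySet())
            if (color.getOrDefault(node, 0) == 0
                    && dfs(node, successors, color))
                return true;
        return false;
    }

    private static boolean dfs(int node, Map<Integer, List<Integer>> succ,
                               Map<Integer, Integer> color) {
        color.put(node, 1);
        for (int next : succ.getOrDefault(node, List.of())) {
            int c = color.getOrDefault(next, 0);
            if (c == 1) return true;                    // back edge: loop
            if (c == 0 && dfs(next, succ, color)) return true;
        }
        color.put(node, 2);
        return false;
    }
}
```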

6.B.2 Plan Conflicts due to Conflicting Goals – Goal Modification

Plan conflicts traced to conflicting goals (e.g., "take the shortest path") rely on relaxation of the constraints on the corresponding goals when the agents' beliefs are consistent and either there are no alternative plans or it would cost too much to develop alternative plans. In such a situation, an agent may elect to swap goals to resolve the conflicts.


For example, Robot 2 may agree to serve both L33 and L34, thereby modifying its goal structure to include Robot 1's goal of serving location L33.

6.B.3 Plan Conflicts due to Conflicting Goals – Plan Modification

Figure 11 shows an example where a plan conflict, similar to the previous example, is detected. Both robots realize that they hold consistent beliefs about each other and that their goals are consistent. For Robot 1, there are two paths that both satisfy the "shortest path" requirement (each passes through nine locations). Therefore, a possible solution is for Robot 1 to modify its plan by switching to the alternative route (as illustrated in Figure 11d).

Figure 11. Plan Conflicts due to Conflicting Goals: (a) actual system route map (before goal modification); (b) Robot 1's local belief; (c) Robot 2's local belief; (d) Robot 1's modified local belief after communication.

6.B.4 Plan Conflicts due to Conflicting Beliefs – Belief Modification

Figure 12 presents a situation where Robot 2's belief about Robot 1's path is untrue. Once the robots fail to agree on confirming their potential conflicts, Robot 2, which perceives the conflict, initiates a request for Robot 1's beliefs. Once Robot 2's belief about Robot 1's plans is updated, the "perceived" conflict is resolved.


6.C. Detecting Belief Conflicts



Belief conflicts are second-order conflicts that are not directly identified; they are identified as a result of first identifying plan or goal conflicts. While tracing the source of these plan or goal conflicts, the conflict detection process may ultimately find the source to be an agent's belief. This is often noticed when the plan or goal conflicts identified by one agent are not agreed to by other agents working on the same goal or pursuing related plans. In such a case, the conflict detection algorithm hypothesizes the existence of a belief conflict and tries to further validate the beliefs of the agents concerned. Figure 12 illustrates such a belief conflict, identified as a consequence of identifying a plan conflict.

Figure 12. Plan Conflicts due to Conflicting Local Beliefs: (a) actual system route map; (b) Robot 1's local belief; (c) Robot 2's local belief; (d) Robot 2's modified local belief after communication with Robot 1.


7. Implementation

Figure 13 shows the class diagram implemented for the goal and plan representations. A goal is achieved by associated plans. A plan is composed of an ordered set of E-PERT nodes/states and E-PERT links/activities. Each E-PERT node contains conditions that describe its state. Each E-PERT link contains pre-conditions, post-conditions, required resources, and estimated times (average, minimum, and maximum). The conditions may contain descriptions that overlap with resource and time requirements (time can be modeled as a kind of resource).

Figure 13. Class diagram for goal and plan representation (classes: Goal, Plan, PERT-node, PERT-link, Resource, Condition)
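For readers who prefer code to diagrams, the Figure 13 classes transliterate roughly as follows. Java is used because the CRA interface described in Section 8 is in Java; the field types are simplified guesses where the diagram leaves them abstract:

```java
import java.util.List;

// Rough Java transliteration of the Figure 13 classes.
class Condition {
    String name;
    List<Condition> conflictWith;
}

class Resource {
    String name;
    double quantity;
}

class PertNode {
    String name;
    List<Condition> endConditions;        // description of the node's state
}

class PertLink {
    String name;
    List<Resource> requiredResources;
    double resourceConsumptionRate;
    List<Condition> preConditions;
    List<Condition> postConditions;
    PertNode startNode, endNode;
    double averageTime, maxTime, minTime; // estimated activity durations
}

class Plan {
    String name;
    List<PertNode> nodes;                 // carry temporal relations via links
    List<PertLink> links;
}

class Goal {
    String name;
    String goalHolder;                    // owning agent
    Goal parentGoal;
    List<Goal> subGoals;
    List<Condition> postConditions;
    String autonomyLevel;
    List<Plan> achievedBy;                // the "Achieve" association
}
```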

Figure 14 shows the class diagram for conflict representation. Goal conflicts are composed of a set of goals and their contradictory conditions. Plan conflicts are composed of a set of E-PERT links, their temporal relations, and their required resources (or their contradictory conditions).


Figure 14. Class diagram for conflict representation (classes: Conflict, Goal Conflict, Plan Conflict, Goal, PERT-link, Condition, Resource)

8. Sensible Agent Testbed

The Sensible Agent Testbed (Figure 15) [22] is being used as the implementation and experimentation environment for this research. It has been designed to (i) simulate MAS-based environments in different domains, (ii) provide a unified platform for substantiating features of Sensible Agents, and (iii) simulate the interactions among agents' behaviors and their environment.


Figure 15: Testbed Component Interactions



Communication among testbed components is implemented through OMG CORBA using the ILU ORB [23][24]. This CORBA implementation has allowed the parallel evolution of separate Sensible Agent modules, and the CORBA Internet Inter-ORB Protocol (IIOP) provides a platform- and language-independent means of inter-connecting different ORB implementations. Each Sensible Agent contains the following modules: (1) The Perspective Modeler (PM) contains the agent's (i.e., the self-agent's) explicit model of its local (subjective) viewpoint of other agents and the environment with regard to state and declared facts. The Intended Goal Structure (IGS) is a sub-module inside the PM, which stores the intended goals of the self-agent and beliefs about the intentions of other agents. (2) The Autonomy Reasoner (AR) determines the appropriate autonomy level for each of the self-agent's goals, assigns an autonomy level to each goal, and reports autonomy-level constraints to other modules in the self-agent. (3) The Action Planner (AP) interprets domain-specific goals, plans to achieve these goals, and executes the generated plans. (4) The Conflict Resolution Advisor (CRA) identifies, classifies, and generates possible solutions for conflicts occurring between the self-agent and other agents. The CRA monitors the AP and PM to identify conflicts; once a conflict is detected, it classifies the conflict and offers resolution suggestions to the AP. Interested readers can find details about these modules in [25]. Each agent in the system is represented by a Sensible Agent System Interface (SASI), which encapsulates the four Sensible Agent modules and provides each with a single point of contact to the rest of the system (the environment and other agents). The implementation of the CRA has three major components: (i) a representation mechanism for goals, beliefs, and plans, (ii) a knowledge base of CR strategies, and (iii) a conflict detection/analysis mechanism. The reasoning functions are implemented in LOOM [26], while the interface to the CRA module is in Java. Pattern matching and heuristic rules are used to detect conflicts, select suitable CR strategies, and generate coordination plans.


implemented Waltzer’s constraint engine and Allen’s transitivity tables of temporal database [27] in Allegro Common Lisp. The implemented procedures of plan conflict detection progress as follows: •

Convert goals or plans (PERT nodes and links) as object instances into the CRA knowledge base.



Link PERT nodes and links according to agents’ local plans.



Build the temporal relations among PERT links. For every node, every link arriving at the node has the temporal relation of “before” to every link leaving the node. Every link arriving at a node has the possible temporal relations, “finish” 5 or “finished by”, with respect to other arriving links to the node. Every link leaving a node has the possible temporal relations, “start” or “start by”, with respect to other leaving links to the node. The estimated activity duration time (including average, max, and min time) helps to further identifying the possible temporal relations among links.



Apply temporal logic to other links with the assistance of Waltzer’s constraint engine.



Apply pattern match to identify plan conflicts by filtering paired PERT links that have either one of the following set: i) same required resources and possible overlapped time intervals, ii) pre-conditions of later activities contradicts with post-condition of (immediately) former activities.
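The sketch below covers the "before" derivation and the final pattern-match filter; the interval reasoning performed by Waltzer's constraint engine and Allen's transitivity table is abstracted into a mayOverlap predicate supplied by the caller, and all identifiers are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiPredicate;

class TemporalConflictFilter {
    record Link(String id, Set<String> resources) {}

    // Every link arriving at a node is "before" every link leaving it.
    static List<String> beforeRelations(Map<Integer, List<Link>> arriving,
                                        Map<Integer, List<Link>> leaving) {
        List<String> relations = new ArrayList<>();
        for (Integer node : arriving.keySet())
            for (Link in : arriving.get(node))
                for (Link out : leaving.getOrDefault(node, List.of()))
                    relations.add(in.id() + " before " + out.id());
        return relations;
    }

    // Flag link pairs that share a resource and may overlap in time.
    static List<String> resourceConflicts(List<Link> links,
                                          BiPredicate<Link, Link> mayOverlap) {
        List<String> conflicts = new ArrayList<>();
        for (int i = 0; i < links.size(); i++)
            for (int j = i + 1; j < links.size(); j++) {
                Link a = links.get(i), b = links.get(j);
                if (!Collections.disjoint(a.resources(), b.resources())
                        && mayOverlap.test(a, b))
                    conflicts.add(a.id() + " <-> " + b.id());
            }
        return conflicts;
    }
}
```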

9. Performance Analysis

Figure 16 shows the performance of plan conflict detection. The performance measure is the CPU time used by the plan conflict detection process for increasing numbers of links and conflicts. Data were collected through experiments involving three Sensible Agents operating in the naval radar frequency management domain (each agent planning frequency assignments for a respective radar). The CRA, AP, and PM of each agent were deployed on three Linux platforms respectively, and the remaining Sensible Agent modules were deployed on two NT platforms.

⁵ In Allen's temporal logic terms, time interval X "finishes" time interval Y means that X starts later than Y but both end at the same time; "finished by" is the inverse relation. Time interval X "starts" time interval Y means that X and Y begin at the same time but X ends earlier than Y; "started by" is the inverse relation.


Figure 16. Performance of plan conflict detection (CPU time in seconds vs. number of PERT links and number of conflicts)

As the number of PERT links increases, the required CPU time increases rapidly (faster than O(n), slower than O(n²)). The major portion of the CPU time is attributed to the pattern matching over PERT links. It is also observed that as the number of detected conflicts increases, the CPU time increases slightly. These time requirements are related to the effort associated with each detected conflict, especially the time to record the related agents, PERT links, goals, and their temporal relations. For comparison, Figure 17 shows the performance of goal conflict detection, which is performed by pattern matching among the conditions on agents' goals. As the number of goals increases, the time needed for conflict detection increases linearly; a similar trend occurs as the number of conflicts increases. Compared with Figure 16, goal conflict detection consumes much less computation time because less complexity is involved.


Figure 17. Goal conflict detection performance (CPU time in seconds vs. number of goals and number of conflicts)

10. Conclusion

This paper presents conflict detection techniques based on extensions to PERT diagrams that promote the detection of conflicts occurring when agents integrate the individual plans they produce to achieve their goals. Agents' plans are represented as sets of ordered PERT links and nodes. The proposed Extended PERT (E-PERT) diagram helps maintain traceable temporal relations among parallel activities. Combined with pattern matching and temporal relationship reasoning, plan conflicts can be detected, and dependencies (represented as dummy links) can be added to the E-PERT diagram to help coordinate an agent's local activities. Before agents' individual plans are integrated, goal conflicts are detected by verifying desired post-conditions. Implementation within the Sensible Agent Testbed and a performance analysis are also presented. Open issues remain in identifying (or merging) identical nodes in agents' local plans. From its local perspective, an agent may specify nodes with only the conditions related to its own activities and ignore other, irrelevant conditions. When multiple agents integrate their local plans, there exist multiple choices for merging nodes.


The selection among these choices may require an agreement among agents, and the agreement may impact the quality of the coordination. In addition, future work includes investigating scalability issues and incorporating existing research on coordinating agents' plans. The current implementation of plan conflict detection, specifically its use of pattern matching, can be optimized to improve performance with regard to CPU time consumption.

11. References

[1] G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. Cambridge, MA: The MIT Press, 1999.

[2] N. R. Jennings, "Coordination Techniques for Distributed Artificial Intelligence," in Foundations of Distributed Artificial Intelligence, Sixth-Generation Computer Technology Series, G. M. P. O'Hare and N. R. Jennings, Eds. New York: John Wiley & Sons, Inc., 1996, pp. 187-210.

[3] C. Castelfranchi, "Conflict Ontology," Springer-Verlag, 1998.

[4] D. S. Weld, "An Introduction to Least Commitment Planning," AI Magazine, vol. 15, pp. 27-61, 1994.

[5] H. Sugie, Y. Inagaki, S. Ono, H. Aisu, and T. Unemi, "Placing Objects with Multiple Mobile Robots - Mutual Help Using Intention Inference," presented at the 1995 IEEE International Conference on Robotics and Automation, 1995.

[6] J. Chu-Carroll and S. Carberry, "Communication for Conflict Resolution in Multi-Agent Collaborative Planning," presented at the First International Conference on Multi-Agent Systems, San Francisco, CA, 1995.

[7] E. H. Durfee, Coordination of Distributed Problem Solvers. Boston: Kluwer Academic, 1988.

[8] K. P. Sycara, "Resolving Goal Conflict via Negotiation," presented at the Seventh National Conference on Artificial Intelligence, 1988.

[9] R. Steeb, S. Cammarata, F. A. Hayes-Roth, P. W. Thorndyke, and R. B. Wesson, "Architectures for Distributed Intelligence for Air Fleet Control," Rand Corp., Santa Monica, CA, Technical Report R-2728-ARPA, 1981.

[10] Y. E. Ioannidis and T. K. Sellis, "Conflict Resolution of Rules Assigning Values to Virtual Attributes," presented at the 1989 ACM International Conference on the Management of Data, Portland, OR, 1989.

[11] E. Ephrati and J. S. Rosenschein, "Multi-Agent Planning as a Dynamic Search for Social Consensus," presented at the 13th International Joint Conference on Artificial Intelligence, 1993.

[12] G. Zlotkin and J. S. Rosenschein, "Negotiation and Task Sharing Among Autonomous Agents in Cooperative Domains," presented at the Eleventh International Joint Conference on Artificial Intelligence, 1989.

[13] K. P. Sycara, "Utility Theory in Conflict Resolution," Annals of Operations Research, vol. 12, pp. 65-84, 1988.

[14] A. Sathi and M. S. Fox, "Constraint-Directed Negotiation of Resource Reallocations," in Distributed Artificial Intelligence II, L. Gasser and M. N. Huhns, Eds. London: Pitman Publishing, 1989, pp. 163-193.

[15] F. von Martial, Coordinating Plans of Autonomous Agents. Berlin: Springer-Verlag, 1992.

[16] K. S. Barber, T. H. Liu, A. Goel, and C. E. Martin, "Conflict Representation and Classification in a Domain-Independent Conflict Management Framework," presented at the Third International Conference on Autonomous Agents, Seattle, WA, 1999.

[17] K. S. Barber, T. H. Liu, and D. C. Han, "Agent-Oriented Design," in Multi-Agent System Engineering: Proceedings of the 9th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, MAAMAW'99, Valencia, Spain, June 30 - July 2, 1999, Lecture Notes in Artificial Intelligence, F. J. Garijo and M. Boman, Eds. Berlin: Springer, 1999, pp. 28-40.

[18] J. J. Moder, C. R. Phillips, and E. W. Davis, Project Management with CPM, PERT and Precedence Diagramming, 3rd ed. New York: Van Nostrand Reinhold Company, 1983.

[19] V. R. Lesser, "A Retrospective View of FA/C Distributed Problem Solving," IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, pp. 1347-1362, 1991.

[20] C. E. Martin and K. S. Barber, "Multiple, Simultaneous Autonomy Levels for Agent-based Systems," presented at the Fourth International Conference on Control, Automation, Robotics, and Vision, Westin Stamford, Singapore, 1996.

[21] P. R. Cohen and H. J. Levesque, "Intention is Choice with Commitment," Artificial Intelligence, vol. 42, pp. 213-261, 1990.

[22] K. S. Barber, A. Goel, D. Han, J. Kim, T. H. Liu, C. E. Martin, and R. M. McKay, "Simulation Testbed for Sensible Agent-based Systems in Dynamic and Uncertain Environments," TRANSACTIONS: Quarterly Journal of the Society for Computer Simulation International, Special Issue on Modeling and Simulation of Manufacturing Systems, 1999.

[23] OMG, "OMG CORBA Home Page," Object Management Group, 1999.

[24] Xerox, "Inter-Language Unification -- ILU," Xerox/PARC, 1998.

[25] K. S. Barber, R. M. McKay, A. Goel, D. Han, J. Kim, T. H. Liu, and C. E. Martin, "Design and Deployment Decisions for Distributed Agent-based Systems: The Requirements, The Technologies, and An Example Solution called Sensible Agents," IEEE Internet Computing, 1999.

[26] D. Brill, "LOOM Reference Manual," University of Southern California, Los Angeles, CA, Ver. 2.0, December 1993.

[27] J. F. Allen, "Maintaining Knowledge about Temporal Intervals," Communications of the ACM, vol. 26, pp. 832-843, 1983.