Reusable Patterns for Agent Coordination

Dwight Deugo, Michael Weiss, and Elizabeth Kendall
{deugo, weiss}@scs.carleton.ca, [email protected]

Abstract. Much of agent system development to date has been done on an ad hoc basis, resulting in many problems. These problems limit the extent to which "industrial applications" can be built using agent technology, as the building blocks, reusable techniques, approaches and architectures have either not been exposed or have not yet been fully elaborated. In the mid-1980s, supporters of object-oriented technology had similar problems. However, with the aid of software patterns, objects have provided an important shift in the way developers successfully build applications today. In this paper, after describing an agent pattern's generic format, we identify a set of software patterns for agent coordination.

1. Introduction

Much of agent system development has been done on an ad hoc basis [Bradshaw, et al. 97], resulting in many problems [Kendall et al. 98; Deugo and Weiss 99]:

1. Lack of an agreed definition of an agent
2. Duplicated effort
3. Inability to satisfy industrial-strength requirements
4. Difficulty identifying and specifying common abstractions above the level of single agents
5. Lack of a common vocabulary
6. Complexity
7. Research that presents only problems and solutions

These problems limit the extent to which "industrial applications" can be built using agent technology, as the building blocks, reusable techniques, approaches and architectures have either not been exposed or have not yet been fully elaborated. In contrast, agent and agent-based system researchers are starting to understand many of the principles, facts, fundamental concepts, and general components [Weiss 99]. Take, for example, the call for papers of the workshop on Mobile Agents in the Context of Competition and Cooperation [Minar and Papaioannou 99] at Autonomous Agents '99. We found comments such as, "we are uninterested in papers that describe yet another mobile agent system." The question we had after reading this was: if we know so much about agents and agent-based systems, why do we have the above problems?

In the mid-eighties, supporters of object-oriented technology had similar problems. However, with the aid of software patterns, objects have provided an important shift in the way developers successfully build applications today. Since we believe that agents are the next major software abstraction, we find it essential to begin the effort of documenting that abstraction so that others can share in the vision. Patterns provide a means of documenting the building blocks in a format already accepted by the software engineering community. Patterns also have the added benefit that no unusual skills, language features, or other tricks are needed to benefit from them. As with objects, software patterns can be an enabling technology for agents.

(Published as Chapter 14 in the book: Omicini, A., Zambonelli, F., Klusch, M., and Tolksdorf, R. (eds.), Coordination of Internet Agents: Models, Technologies, and Applications, Springer, 2001.)

In this chapter, after describing a pattern's generic format, we identify a set of software patterns for agent coordination. Agent patterns generally fall into architectural, communication, travelling and coordination patterns. Typically, agent coordination mechanisms are based on a master-slave or a contract-net-like architecture, but the reasons for either choice are rarely made explicit. What is missing are the forces influencing the choices. Therefore, before our discussion of particular patterns, we identify in Section 3 an initial set of global forces surrounding the task of coordination. These forces identify the trade-offs that push and pull the solutions proposed by coordination patterns. They help you, the developer, make a more informed decision about when and when not to use a particular coordination mechanism. These forces may not be applicable to all coordination patterns. However, they are ones that anyone developing coordination patterns should consider. In Sections 4 through 7, we select different combinations of forces; place them in coordination contexts that give rise to specific problems; and solve the problems, generating our initial set of coordination patterns.

2. Software Patterns

Software patterns have their roots in Christopher Alexander's work [Alexander 79] in the field of building architecture. He proposed a pattern as a three-part rule that expresses a relation between a certain context, problem and solution. Although there are different formats for patterns today, it is generally agreed that the mandatory parts include the following [Meszaros and Doble 98]:

• Name: As the saying goes, a good name is worth a thousand words, and a good pattern name, although just a short phrase, should contain more information than the number of words would initially suggest. Would "agent" be a good pattern name? The answer is no. Although it means much more than the single word suggests, it has too many meanings! One should strive for a short phrase that still says it all.
• Context: A description of a situation in which the pattern would apply. However, in itself the context does not provide the only determining factor as to the situations in which the pattern should be applied. Every pattern will have a number of forces that need to be balanced before applying the pattern. The context helps one determine the impact of the forces.
• Problem: A precise statement of the problem to be solved. Think from the perspective of a software engineer asking him/herself, "How do I ...?". A good problem for a pattern is one that software engineers will ask themselves often.
• Forces: A description of items that influence the decision as to when to apply the pattern in a context. Forces can be thought of as items that push or pull the problem towards different solutions or that indicate trade-offs that might be made [Coplien 96].
• Solution: A description of the solution in the context that balances the forces.

Optional parts include resulting context, rationale, related patterns, examples, code examples, indications, aliases, known uses and acknowledgements. A good pattern provides more than just the details of these sections; it should also be generative. Patterns are not solutions; rather, patterns generate solutions. You take a 'design problem' and look for a pattern to apply in order to create the solution. The greater the potential for application of a pattern, the more generative it is. Although very specific patterns are of use, a great pattern has the potential for many applications. For this to happen, pattern writers spend considerable time and effort attempting to understand all aspects of their patterns and the relationships between those aspects. This generative quality is difficult to describe, but you will know a pattern that has it once you read it. It is often a matter of simplicity in the face of complexity.

Although patterns are useful for solving specific design problems, we can enhance the generative quality of patterns by assembling related ones, positioning them among one another to form a pattern language. Pattern languages can help a developer to build entire systems. For example, an individual pattern can help you design a specific aspect of your agent, such as how it models its beliefs, but a pattern language can help you to use those beliefs to build agents that plan and learn by putting individual patterns into context. The development of agent pattern languages is important for agent patterns to be successful. Forcing each pattern to identify its position within the space of existing patterns is not only good practice, it is also good research. This activity is not only helpful to you as a pattern author, but to all those other software engineers who will use your patterns to develop their systems.

3. Global Forces of Coordination

Forces identify the different types of criteria engineers use to justify their designs and implementations. For example, computer scientists usually consider the force of efficiency, along the dimensions of time and space, when developing algorithms. Forces will both positively and negatively interact with one another, even within the same criterion. Time and space are a good example.
Database access would be very fast if the database were maintained in memory. However, this solution is usually inappropriate due to the amount of data. Is it better for the database to provide fast access for limited data or slower access for a large amount of data? The answer is, it depends. The conflict between the forces, and between the forces and the goals surrounding the problem, reveals the intricacies of the problem and indicates the trade-offs that must be considered by the pattern's solution. In this section, we discuss a number of forces that we believe impact many coordination patterns. Coordination refers to the "state of a community of agents in which actions of some agents fit in well with each other, as well as to the process of achieving this state. [...] Two manifestations of coordination that play particularly important roles in Distributed Artificial Intelligence (DAI) are competition and cooperation." [Weiss 99, p. 598]. Our goal here is to propose an initial list of forces for agent coordination pattern authors to consider when writing their patterns. We expect this list to be continually enhanced with new forces as the issues, factors and conflicts involved in agent coordination are better understood.

3.1. Mobility and Communication

Agents that are mobile and must coordinate their activities will need to communicate with other mobile agents, statically located agents, resources and hosting environments. For two agents to communicate, at least one of them needs to know where the other one is located. Although the initial sender must know where to send the message, it can include its location in the message so that the receiving agent can respond. There are potentially three solutions to this problem. One solution is for an agent to carry either the itineraries of the other agents or a directory of where they are located. The second solution is for it to use a lookup service to find the locations of the other agents.
The third solution is for an agent to broadcast its message to all

environments, so that eventually the desired agent will receive the message. All solutions have their own problems. The first one requires that mobile agents carry extra data with them on their trips to other environments. The extra size of an agent implies that it will take longer to transport and that it will consume more memory within the environments. This situation may not be a problem for agent systems that have very few agents (mobile or static). However, this solution may not even be possible due to memory limitations in an agent system containing thousands of agents that need to communicate and coordinate with one another. The second solution involves agents informing a lookup service of where they are currently located. It also requires that other agents using the lookup service are able to find those locations. The problems with this solution are the latency of the network and the possible failure of the service. In highly mobile agent systems, agents may move so often that the lookup service is never in sync with where the agents have gone. Therefore, when an agent asks where another one is located, it gets an old address rather than the new one. Even worse, if the lookup service fails, an agent will not be able to find the location of any agent. The final solution is not without its performance and security problems either. Sending a message everywhere takes time and effort by the sending environment and consumes time, effort and space in the receiving environment. Most of this is wasted since the message is actually only for one agent. When the ratio of agents to environments becomes very large, this solution is impractical. This becomes even more true when the overhead of managing the security of the messages – making sure a message reaches the correct agent – is considered. Even if agents know where others are located, any coordination solution must deal with latency and failure of messages over the network. 
Messages can get lost or take a long time to reach their destinations. You want to have fast, reliable messaging between coordinating mobile agents. However, you also need a solution that does not increase the memory requirements of agents and their execution environments. In addition, the solution should not involve complex failure and security mechanisms to handle lost or slow messages, failures in lookup services and message broadcasts.

3.2. Standardization

The computer industry has seen the benefits of standardization. Java and XML are two good examples of it in action. Although much of the agent framework development done today is in Java [MAL 99], there are still many differences in the implementations of the following: environments, execution and management of agent types, management of identifiers, persistence, navigation, communication, interaction with external resources, and security [Rodrigues Da Silva et al., 2000]. The OMG's Mobile Agent System Interoperability Facility (MASIF) [Milojicic et al., 1998] and the Foundation for Intelligent Physical Agents (FIPA) [Chiariglione 97] are two attempts at standardizing agent technology. Agent communication is outside MASIF's scope and is instead left to CORBA; coordination is not standardized either. The implication is that agents of different types, implemented using different languages, will potentially need to coordinate with one another, each using its own control mechanisms. Assuming for the moment that agents can send messages to one another, there is also the question of the content of a message. On the one hand, it would be good to have a standardized coordination and communication protocol. However, this takes a great deal of time and money to put into place, and dedicated individuals to drive the process. The Mobile Agent List [MAL 99] indicates that there are at least

sixty-four different mobile agent frameworks, as of September 1999. Therefore, it would be difficult to get all of the associated designers to agree to an expensive update of their systems, even if they could agree on a reference model.

3.3. Temporal and Spatial Coupling

Assuming that agents have the ability to communicate with one another, developers must also consider the spatial and temporal coupling between the agents in their systems. Spatially coupled models require that agents share a common name space [Cabri et al., 1998]. Therefore, agents can communicate by explicitly naming the receiving agents. To support this ability, a naming or locating service is often used so that sending agents do not need explicit references to the receiving agents' locations. Spatially coupled agent systems allow agents to communicate in a peer-to-peer manner. However, this ease of communication comes at the cost of agreed communication protocols, locations and times. It also assumes that network connections and intermediate network nodes are stable and reliable. For mobile agent systems, this approach is not adequate: since mobile agents move frequently, locating them and messaging them directly are expensive operations that rely heavily on the stability of the network. Statically located agents, especially those located in the same environment, can benefit from spatially coupled models, since the protocols, locations and coordination times can be agreed upon a priori and network involvement is minimized.

Spatially uncoupled models allow agents to interact without having to name one another. With these models, the information an agent has to share, and its type, are more important than its name. Therefore, several different agents can serve the same purpose, making the system more redundant and less likely to fail. In addition, naming services are not required, resulting in fewer system components to manage.
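Spatially uncoupled interaction of this kind can be sketched as a Linda-style tuple space [Gelernter 85], in which agents match on content rather than on names. The class and method names below (TupleSpace, put, take) are illustrative assumptions for this chapter, not the API of any particular agent framework:

```java
import java.util.*;

// Hypothetical sketch of spatially uncoupled coordination: agents
// exchange data by type and content, never by naming each other.
class TupleSpace {
    private final List<Map<String, String>> tuples = new ArrayList<>();

    // An agent deposits information without knowing who will read it.
    synchronized void put(Map<String, String> tuple) { tuples.add(tuple); }

    // Any agent matching on content can retrieve it; the producer's
    // identity is irrelevant, so several agents can serve the same role.
    synchronized Map<String, String> take(String key, String value) {
        for (Iterator<Map<String, String>> it = tuples.iterator(); it.hasNext();) {
            Map<String, String> t = it.next();
            if (value.equals(t.get(key))) { it.remove(); return t; }
        }
        return null; // a real tuple space would block or retry here
    }
}

public class TupleSpaceDemo {
    public static void main(String[] args) {
        TupleSpace space = new TupleSpace();
        // A seller agent leaves an offer; it never names a buyer.
        space.put(Map.of("type", "offer", "item", "flight", "price", "300"));
        // Any buyer agent interested in offers can pick it up later
        // (temporal uncoupling: the seller may already have moved on).
        Map<String, String> offer = space.take("type", "offer");
        System.out.println(offer.get("price")); // 300
    }
}
```

Note how the buyer and seller are uncoupled both spatially (no names) and temporally (the put and the take need not overlap in time).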
Temporally coupled models imply that there is some form of synchronization between the agents. Temporal coupling is very important for spatially uncoupled models: agents still need to agree on what to share, when to share it, and how to share it. Spatially uncoupled models put less emphasis on direct agent-to-agent communication, as the environment is used more often as the medium for information exchange. This architectural decision increases the complexity of the environment and its computational requirements. Moreover, the model requires that the agents share a common knowledge representation and are aware of schedules and positions for information exchange. Temporally uncoupled models relax the synchronization requirement. Therefore, agents are no longer dependent on meeting and exchanging information with others at specific times, and they do not have to worry as much about other agents' schedules. However, this model increases the importance of the knowledge representation and of how the environment is used to transfer knowledge.

It is easy for two agents to communicate directly, but effective multi-agent communication (one agent with many other agents) increases the complexity of the agent. You want it to be easy for agents to exchange information. Direct agent-to-agent messaging seems to be the best approach: there are fewer synchronization problems, as an agent can simply send messages to another when it needs to. However, in the dynamic case, such as mobile agents, the implementation and execution costs of locating another agent and getting a message to it are high, and message reliability suffers when network problems occur. Even the static solution is not without its problems. For example, it would be impossible for a large number of agents to inhabit one environment and send messages to one another; the environment would not have the computational capacity.

So you try the spatially uncoupled approach. Even with this, agents have to agree where and when to meet, and doing so at the same place causes the same overhead problem as when static agents try to send messages to one another from the same place. In addition, the environment must now handle the information exchange. Therefore, you allow agents to come and go as they please and let them leave whatever information they want in the environment for other agents to read. However, now your environment is more complex, and getting the agents' knowledge representation correct can be difficult.

3.4. Problem Partitioning

An agent can partition a task among others in order to increase reliability, performance and accuracy. For example, rather than have one agent look for the best airfare, why not have three, or four, or one hundred? If part of the network fails, killing a few agents, you will still get answers from the others. Another benefit is that one answer may be better than the rest. In a situation where you also need to book a hotel, you could have other agents doing that task while the rest are finding the best airfare. This solution involves coordinating and collating the agents' results, which increases the overall computational effort of the global task, not only in the home environment, but also in those environments that agents visit. The trade-off one makes between reliability, performance, accuracy, complexity and computation depends on the problem. For simple problems the overhead of partitioning is usually too large. However, more complicated problems can benefit from task partitioning.

3.5. Failures

You can make two assumptions with respect to failures when developing coordination models: nothing ever fails, or entities like messages, connections and even agents will fail sometimes. The first assumption will often make the problem much easier to solve, resulting in protocols and solutions that are easy to understand.
The downside is that these solutions will never work in practice. The second assumption makes problems harder to solve, requiring more time to get them right and more complex solutions. The benefit is that you can use the solutions on a live network. We all want simple solutions to problems. We also want solutions that work with 'real' systems. Time constraints put on developers often make these two forces conflict.

3.6. Summary

Now that we have summarized the major forces acting on agent coordination, the remainder of this chapter covers particular patterns of agent coordination. The patterns we will cover are: Blackboard, Meeting, Market Maker, Master-Slave, and Negotiating Agents.

4. Blackboard Pattern

4.1. Context

You already have a set of agents that are specialized to perform a particular subtask of an overall task. You need to provide a medium that allows them to monitor each other and build on their mutual progress.

4.2. Problem

How do you ensure the cohesion of the agents?

4.3. Forces

Agents need to collaborate to perform complex tasks that extend beyond their individual capabilities. One way to enable collaboration is to hard-wire the relationships between agents, for example, through a list of acquaintances in each agent. However, this is difficult to achieve if the locations of the collaborating agents are not fixed, if some agents have not yet been created at the time an agent wants to interact with them, or if agents are mobile.

Agents may have been independently designed. In this case, it could be difficult to enforce a common way of representing relationships between agents that each agent has to implement. It would be more suitable if the agents did not make assumptions about which specific agents will be available to collaborate with, and were built with the notion that there will be some agents available, unknown to the agent at design time.

The coordination protocol needed for agent collaboration can be expressed using the data access mechanisms of the coordination medium. That means that the coordination logic is embedded in the agents. Although a logical separation between algorithmic and coordination issues would provide more flexibility, the cost of a more complex coordination medium is considered too high. Keeping the interface to the coordination medium small allows agents to be easily ported to another coordination medium.

4.4. Solution

The solution to the problem involves a Blackboard to which Specialists can add data and which allows them to subscribe for data changes in their area of interest. Specialists can also update data and erase it from the Blackboard. The Specialists continually monitor the Blackboard for changes and signal when they want to add, erase or update data. When multiple Specialists want to respond to a change, a Supervisor decides which Specialist may make a modification to the Blackboard.
The Supervisor also decides when the state of the Blackboard has sufficiently progressed and a solution to the complex task has been found. The Supervisor only acts as a scheduler for the Specialists, deciding when and whether to let a Specialist modify the Blackboard. It does not facilitate their interaction with each other. The structure and main interactions of this pattern are shown in Figure 1 and Figure 2. Since the Blackboard is a passive coordination medium, we chose not to represent it as an agent.
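As a minimal sketch, the solution above might look like the following. The data layout (a shared map) and the Supervisor's scheduling policy (pick the first interested Specialist) are illustrative assumptions, not choices prescribed by the pattern:

```java
import java.util.*;

// Minimal sketch of the Blackboard pattern's roles.
interface Specialist {
    boolean wantsToAct(Map<String, String> bb); // "signal" intent after a change
    void act(Map<String, String> bb);           // modification, if scheduled
}

class Supervisor {
    // The Supervisor only schedules: it picks one of the interested
    // Specialists; it does not mediate their interaction.
    Specialist schedule(List<Specialist> interested) {
        return interested.isEmpty() ? null : interested.get(0);
    }
}

public class BlackboardDemo {
    public static void main(String[] args) {
        Map<String, String> blackboard = new HashMap<>();
        List<Specialist> specialists = new ArrayList<>();

        // Specialist A: posts raw input once.
        specialists.add(new Specialist() {
            public boolean wantsToAct(Map<String, String> bb) { return !bb.containsKey("raw"); }
            public void act(Map<String, String> bb) { bb.put("raw", "hello"); }
        });
        // Specialist B: builds on A's contribution.
        specialists.add(new Specialist() {
            public boolean wantsToAct(Map<String, String> bb) {
                return bb.containsKey("raw") && !bb.containsKey("result");
            }
            public void act(Map<String, String> bb) {
                bb.put("result", bb.get("raw").toUpperCase());
            }
        });

        Supervisor supervisor = new Supervisor();
        // Control loop: collect signals, schedule one change at a time.
        while (true) {
            List<Specialist> interested = new ArrayList<>();
            for (Specialist s : specialists)
                if (s.wantsToAct(blackboard)) interested.add(s);
            Specialist chosen = supervisor.schedule(interested);
            if (chosen == null) break; // Supervisor decides the task has progressed enough
            chosen.act(blackboard);
        }
        System.out.println(blackboard.get("result")); // HELLO
    }
}
```

Note how the Specialists never reference each other: B builds on A's contribution purely through the shared data.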

Figure 1: Role diagram of the Blackboard pattern (one Supervisor controls the Blackboard; Specialists observe and signal the Blackboard, which notifies them)

Figure 2: Interactions between the participants of the Blackboard pattern (Specialists signal the Blackboard, which notifies the others; the Supervisor schedules and accepts changes from one Specialist)

4.5. Rationale

The Blackboard pattern decouples interacting agents from each other. Instead of communicating directly, agents interact through an intermediary. This intermediary provides both time and location transparency to the interacting agents. Transparency of time is achieved because agents who want to exchange data do not have to receive it when it is sent but can pick it up later. The locations of the receiving agents are transparent in that the sending agent does not need to address any other agent specifically by its name; the mediator forwards the data accordingly.

Time transparency cannot always be guaranteed when using this pattern. Shared state data may persist on the Blackboard for as long as it is needed. However, events (data of a transient type) will typically not persist, unless they are converted to persistent data. In particular, this implies that agents only get notified about events after they subscribe. We thus cannot uphold time transparency for event-style data when using the Blackboard pattern.

Mobility is also supported well by this pattern once a small extension is made to the protocol between Specialists and the Blackboard. Since the exchange of non-event data is asynchronous, agents can leave data for other agents on a Blackboard. Agents that arrive after the originator of the data has left the place can still read the data from the Blackboard. Upon arrival at a place that hosts a Blackboard, an agent subscribes to all areas of interest. In addition, the Blackboard should notify the agent about any data that was posted to the Blackboard before the agent subscribed.

The Blackboard is an example of a passive coordination medium. While it allows agents to share data, it does not specify how the agents are expected to react to the data they receive. In other words, all the real coordination knowledge remains hidden in the agents.
The reason for this is that coordination protocols need to be expressed using the data access interface to a Blackboard. If it is not possible to express a coordination protocol in this way, the agents are forced to implement the coordination protocol in their own code. Extensions of the Blackboard paradigm such as [Weiss 92] and [Omicini 99] avoid this, but we would then really consider them instances of the Market Maker pattern.

4.6. Known Uses

This pattern has been documented in many forms; we can only reproduce some of the pointers to the literature here. The blackboard concept as described here, including a control component, goes back to Hayes-Roth's BB1 system [Hayes-Roth 85]. Various agent-based systems have employed the blackboard pattern in their design (for example, [Talukdar 86] and [Weiss 92]). The tuple space concept originally introduced by Gelernter is another type of blackboard, although it lacks the blackboard's control features [Gelernter 85]. Recent extensions of tuple spaces to reactive tuple spaces make them much more powerful. For the reasons expressed above, we consider this type of blackboard a broker, as discussed in the Market Maker pattern (Section 14.5).

5. Meeting Pattern

5.1. Context

Agents in a system you are developing need to interact in order to coordinate their activities without explicitly naming those involved in the overall task. These agents may be statically located or mobile. Direct messaging between agents, and between environments and agents, is possible. However, the precise time and location for agents to coordinate their activities are not known a priori.

5.2. Problem

How do agents agree to coordinate their task and mediate their activities?

5.3. Forces

Messaging between agents located within the same environment is fast, secure and simple. By not involving the network and its connections, agents can communicate using the built-in facilities of the environment's infrastructure. Since these infrastructures are often constructed using languages that support message passing, such as Java, communication is reliable. However, building applications that force agents to reside permanently on the same machine forgoes potential gains in efficiency, by not allowing agents to execute partial tasks in other environments. Moreover, it is unlikely that all the information sources agents require can be located on the same machine. Remote agents, on the other hand, will require several messages and interactions with other agents across the network to perform a task. For example, buyer and seller agents negotiating the terms of a transaction from different locations incur a significant messaging overhead.
If the amount of information is large, this can impose severe loads on the remote environments providing the information and on the current environment handling the responses. Direct interaction is a technique that people understand well: although we may live in distant places, the phone and mail have helped us use this form of interaction. Direct interaction between remote agents is possible, but it requires a dedicated connection between both agents' environments. This approach is not feasible when the number of agents involved in the coordination task is large, since an environment can only support a small number of connections. The reliability of direct interaction is also dependent on the reliability of the network. Agents could send mail to one another, but this is slow and, because of unpredictable delays, requires a more complex synchronization strategy to coordinate a task.

It is easier to build applications in which agents interact (remotely or directly) with knowledge of one another's names. To send a message, all an agent has to do is use the messaging services provided by the application. Name servers help with the mobility issue. However, application developers must now consider failures and network delays. If agents are not required to have names, messaging the appropriate one seems difficult. On the other hand, it does allow an application to use many different agents for the same purpose, provided they support the coordination requirements. However, using different agents for the same

task raises security concerns. Is the agent helping with the task the right one for the job, and will it act in a rational manner? On the one hand, you want only the required agents, named or unnamed, to coordinate with one another. You want to minimize the number of messages passed between agents. You do not want to force them to be statically located, and you want your application to be secure. On the other hand, you do not want to suffer problems due to network reliability and unpredictability. You do not want coordination to be slow or complex, and you do not want to add to the load on the network.

5.4. Solution

Create a place in an environment for agents to meet. Let an agent call for a meeting at that meeting place. Permit interactions to occur in the context of the meeting that enable agents to communicate and synchronize with one another in order to coordinate their activities.

The first part of the solution involves the construction of a named meeting place in the context of an existing environment. Permit agents to move to this meeting place and allow them to send messages to a statically located, named agent at the meeting place, called the Meeting Manager. Consider a meeting as an event, and make the Meeting Manager responsible for notifying agents interested in a meeting when one is proposed. The Meeting Manager and the agents interested in the meeting are thus an instantiation of the Observer pattern [Gamma et al. 95]. The Meeting Manager accepts messages from remote or local agents wanting to register for notification of specific meetings. In their registration messages, agents must identify who they are and where they can be located. The Meeting Manager also accepts messages from remote or local agents interested in calling a meeting and informs registered agents of when the meeting will occur.
Since the Meeting Manager is located at one physical location, it can make meeting announcements and registered-agent information persistent, permitting it to recover its current state should the underlying node crash. The Meeting Manager has the additional responsibilities of controlling the meeting, registering agents as they arrive and deregistering them as they depart. The structure and main interactions of this pattern are shown in Figure 3 and Figure 4.
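The Meeting Manager's observer-style responsibilities, registration, notification, and tracking arrivals and departures, can be sketched as follows. This is a minimal illustration of the pattern, not the API of any particular agent framework; all class, method, and location names are assumptions made for the example.

```python
class MeetingManager:
    """Statically located, named agent that manages meetings at one
    meeting place. Plays the Subject role of the Observer pattern:
    it notifies registered agents when a meeting they are interested
    in is called."""

    def __init__(self, place):
        self.place = place
        self.registrations = {}   # meeting topic -> {agent name: location}
        self.attendees = {}       # meeting topic -> set of agent names

    def register(self, topic, agent_name, location):
        # In their registration messages, agents identify who they are
        # and where they can be located.
        self.registrations.setdefault(topic, {})[agent_name] = location

    def call_meeting(self, topic):
        # Notify every registered agent of the meeting; return the
        # notifications that would go out via the messaging service.
        notices = []
        for agent, location in self.registrations.get(topic, {}).items():
            notices.append((location, f"meeting '{topic}' called at {self.place}"))
        return notices

    def arrive(self, topic, agent_name):
        # Register agents as they arrive at the meeting place.
        self.attendees.setdefault(topic, set()).add(agent_name)

    def depart(self, topic, agent_name):
        # Deregister agents as they depart.
        self.attendees.get(topic, set()).discard(agent_name)
```

Note that the agents only need to know one well-known name, the MeetingManager's; the manager learns the other agents' names and locations from the registration messages themselves.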

Figure 3: The Meeting Role Diagram

Use the solution many times if several different meetings are required. The MeetingManager is structured to handle more than one meeting, and since a MeetingManager exists in each environment, meetings can occur at different places.

5.5. Rationale

The solution does not completely remove the use of named agents, but it does reduce the number of names that other agents must remember to one: the name of the MeetingManager, or of the environment where the MeetingManager is located. The act of calling or registering for a meeting requires that all agents know about the MeetingManager. And, while it is true that the MeetingManager must know the names of the other agents in order to notify them of meetings, it is only loosely coupled to their names, since agents inform the MeetingManager of their names (locations) when registering for a meeting. The MeetingManager does not have to know the names a priori.

Figure 4: Meeting Lifecycle Sequence Diagram

This solution does not restrict agents from telling others about a meeting, provided they know the names of the other agents, but it does not require it. Having agents go to one place for a meeting cuts down on the overhead of messaging between agents, since the network is not involved. Security is still an issue, but it can be localized as the responsibility of the environment where the meeting is held. In addition, coordination between agents is no longer susceptible to network failures or delays, since all agents meet at one location. If an environment fails, it can use its persistence services to reinitialize the agents and the meeting to the state preceding the crash. The solution also does not restrict how agents collaborate. Once a meeting begins, it is up to the agents to decide how to proceed, although they must still use messages to communicate with one another.

5.6. Known Uses

IBM's Aglets framework makes use of a meeting structure similar to the one shown here [Aridor and Lange 98]. Concordia [Wong et al. 97] uses the pattern's MeetingManager for its EventManager, which manages group-oriented events enabling collaborating agents to communicate. Place-oriented communication [Kitamura et al. 99] is another example, where a community of agents can coordinate and cooperate with one another by moving to a single place to exchange information.

6. Market Maker Pattern

6.1. Context

Your agent system evolves constantly. Instead of setting up relationships between agents upfront, you let the agents locate their transaction partners on the fly.

6.2. Problem

How do you match up service users (buyers) and providers (sellers)?

6.3. Forces

Agents need to collaborate to perform complex tasks that extend beyond their individual capabilities. As discussed for the Blackboard pattern, one could hardwire the relationships between the agents, but this is not always practical. Again, the agents may have been independently designed. In this case, it would be more suitable if the agents did not make assumptions about which specific other agents will be available for collaboration. Embedding the coordination logic into the agents would notably increase the agents' complexity. It would also affect the global application design, because coordination rules would now be distributed between the coordination medium and the agents. Although a logical separation between algorithmic and coordination issues increases the cost of the coordination mechanism, you need the flexibility to implement and modify coordination protocols while keeping any changes hidden from the collaborating agents.

6.4. Solution

The solution involves a Broker or Market Maker that accepts requests from Buyers for bids for goods (resources or services) and matches them up with Sellers. The Broker handles the coordination logic needed to advertise requests to Sellers, collect their bids, and introduce the selected Seller to the Buyer. To emphasize that agents may play the roles of Buyers and Sellers at the same time, we have introduced the role of a Trader that contains the Buyer and Seller roles. The structure of this pattern is shown in Figure 5.


Figure 5: Role diagram of the Market Maker pattern

Like a real-estate broker, the Broker does not simply select a matching Seller on behalf of a Buyer, but leaves the ultimate choice between multiple bids to the Buyer. The rationale for this is that the individual definitions of what Buyers consider a "good" fit may be different from one

Buyer to another, depending on what attributes of the requested good they care most about (such as cost or quality of service). The Broker, acting on behalf of the Buyer, sends requests for bids to all Sellers. The Sellers decide if they want to provide the good requested and submit bids, which may also spell out any attached conditions that constrain their ability or willingness to provide the good. For example, Sellers may ask for a minimum price to be paid for the good. The Broker, now acting on behalf of the Sellers, presents the list of bids to the Buyer, requiring it to select the "appropriate" bid. It then sets up a direct relationship between the chosen Seller and the Buyer, which lasts until the good has been delivered (for a service, that is the whole duration of providing the service). Figure 6 shows the main interactions of this pattern.


Figure 6: Interactions between the participants of the Market Maker pattern (1: requestForBids, Buyer to Broker; 2: requestForBids, Broker to Sellers; 3: bids, Sellers to Broker; 4: bids, Broker to Buyer; 5: bidAcceptance, Buyer to Broker; 6: bidAcceptance, Broker to Seller)

6.5. Rationale

The Broker at the core of this pattern is an example of a coordination medium that takes an active role in the coordination process. The Broker essentially enforces the house rules of agent interaction; the interacting agents therefore delegate their coordination duties to the Broker. Agent designers only need to define the application logic, not the coordination protocol. Once Buyers have located appropriate Sellers, they may still need to negotiate the terms of the transaction (such as the delivery conditions). This leads to the Negotiating Agents pattern.
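The bidding protocol at the heart of the pattern can be sketched in a few lines. The sketch assumes Sellers can be modelled as callables that return a bid or decline with None, and that the Buyer supplies its own preference function; these interfaces and names are illustrative assumptions, not the API of a real market framework.

```python
class Broker:
    """Market Maker sketch: the Broker advertises a Buyer's request to
    all Sellers, collects their bids, and hands the list back to the
    Buyer, which applies its own definition of a "good" fit."""

    def __init__(self):
        self.sellers = []          # Seller: callable(request) -> bid dict or None

    def add_seller(self, seller):
        self.sellers.append(seller)

    def request_for_bids(self, request):
        # Acting on behalf of the Buyer: advertise the request to every
        # Seller and collect bids; a Seller declines by returning None.
        bids = []
        for seller in self.sellers:
            bid = seller(request)
            if bid is not None:
                bids.append(bid)
        return bids


def buyer_choose(bids, prefer):
    # The Buyer, not the Broker, selects the "appropriate" bid.
    return min(bids, key=prefer)
```

A Buyer caring only about cost might pass `prefer=lambda b: b["price"]`, while another Buyer could rank the same bids by quality of service; the Broker's code is unchanged either way, which is exactly the point of leaving the choice to the Buyer.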

6.6. Known Uses

There are many accounts of the Market Maker pattern in the literature. One of the first examples is the Contract Net protocol pioneered by Smith [Smith 80]. Wellman's market-oriented programming framework supports a variety of auction types, each imposing a set of market rules on the agent interactions [Wurman 98]. Freeman [Freeman 99] describes a marketplace framework based on the JavaSpaces technology that allows producers and consumers to interact to find the best deal. The MAGNET market infrastructure [Steinmetz 99] provides support for a variety of transactions, from simple buying and selling to multi-contract negotiations. Our formulation of the pattern has also been influenced by the role-based agent modelling approach practiced at BT [Collis 99].

7. Master-Slave Pattern

7.1. Context

You decide to partition a task into subtasks to increase the reliability, performance, or accuracy of its execution.

7.2. Problem

How do you delegate subtasks and coordinate their execution?

7.3. Forces

An agent can partition a task and delegate the subtasks to other agents in order to improve the reliability, performance, or accuracy with which the task is performed. For example, while an agent has other agents working on subtasks, it can continue with its own work in parallel. In particular, an agent can move to a remote host to execute a subtask there and offload work from the client host. However, this solution also increases the overall computation effort of the global task; simple problems may not benefit from partitioning for this reason.

Whether or not a task is partitioned should remain transparent to clients. When the task is partitioned, the agent could return a set of partial results to the client. However, the client may not be capable of processing the separate results and synthesizing them into one solution. It does not want to be concerned with the internals of the computation. (As an aside, this is a principle that PC and operating system makers mostly seem to violate.)

7.4. Solution

The solution to these forces involves a Master who divides the task into subtasks, delegates the subtasks to Slaves, and computes a final result from the partial results returned. In the process, the Master creates a Slave for each subtask and dispatches it to a remote host. While the Slave computes the partial result for the subtask it has been assigned, the Master can continue its work. When the Slaves have all finished their work, the Master compiles the final result and returns it to the client. The structure of the pattern is shown in Figure 7.
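The Master's divide, delegate, and synthesize cycle can be sketched as below. Local threads stand in for Slaves dispatched to remote hosts, and the `work` and `combine` parameters are illustrative assumptions made for the example; the point is only that the client receives a single result, with the partitioning kept transparent.

```python
from concurrent.futures import ThreadPoolExecutor

def master(task, n_slaves, work, combine):
    """Master-Slave sketch: partition a task, delegate the subtasks to
    Slaves (threads here, mobile agents in the pattern proper), and
    synthesize one final result for the client."""
    # Divide the task into roughly equal subtasks.
    chunk = max(1, len(task) // n_slaves)
    subtasks = [task[i:i + chunk] for i in range(0, len(task), chunk)]
    # Each Slave computes a partial result in parallel; the Master
    # waits for all of them, then compiles the final result.
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        partials = list(pool.map(work, subtasks))
    return combine(partials)
```

A client calling `master(data, 3, work=sum, combine=sum)` sees exactly the same result as an unpartitioned `sum(data)`, which is the transparency the forces above demand.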


Figure 7: Role diagram of the Master-Slave pattern (a Master delegates to any number of Slaves; the Slaves send back their results)

In the basic pattern the Slaves are created by the Master and then move to a remote host to perform their tasks. Upon completion they send their results back to the Master. The Slaves then dispose of themselves. In a variation of the pattern, the Slaves exist permanently in remote locations and can be assigned new tasks once they have completed their current task. Figure 8 shows the main interactions of this pattern.


Figure 8: Interactions between the participants of the Master-Slave pattern (1: the Master on host A creates Slaves and dispatches them to hosts B, C, ...; 2: each Slave performs its task and returns its result)

A typical application of the Master-Slave pattern is parallel computation. A number of Slaves can be assigned to a work pool that is controlled by a Master. Each of the Slaves offers the same services. Clients send their requests to the Master, which handles these requests by dividing each one into subtasks and dispatching them to idle Slaves in the work pool.

7.5. Rationale

The Master-Slave pattern is an example where vertical coordination is used to coordinate the activity of two or more agents. Vertical coordination is an interaction in which one agent carries out a subtask for another agent, but this subtask is still logically part of the latter agent's task. The term was introduced by Collins [Collins 98] in the context of a new theory of action.

7.6. Known Uses

The Master-Slave pattern has its roots in parallel computation. Consult [Carriero 90] for a much more detailed account of various applications of this pattern. Descriptions in pattern format first appeared in [Buschmann 95] (using objects) and [Aridor 98] (using agents). The primary distinction between the object and the agent formulations of the pattern is that the agent pattern specifically accounts for mobility, whereas the object pattern does not.

8. Negotiating Agents Pattern

8.1. Context

In your application, agents interact as peers. That may cause them to give conflicting instructions to the environment.

8.2. Problem

How do you detect and resolve conflicting intentions between agents?

8.3. Forces

Agents need to align their actions because this can be useful, desirable, or even essential to the achievement of their individual goals. One possible solution is for each of the agents to implement the "ostrich algorithm", that is, to ignore the possibility of conflicts and respond to them only after they have occurred. The disadvantage of this approach is that it may not always be possible to roll back the agents' and the environment's states to where they were before the conflict happened. For example, an instruction to the environment may be to forward a call to a blocked number. By the time the conflict is detected, the phone might already have been picked up at the receiving end.

8.4. Solution

These forces drive a solution in which the agents make their intentions explicit. For example, they exchange constraints on what the other agents are allowed to do. In response to the intentions disclosed by other agents, the agents may replan their actions to avoid detected interactions with their own intentions. Replanning involves choosing among alternative courses of action. In selecting alternative courses of action, agents are guided by policies such as giving preference to one kind of action over another or maximizing the utility associated with their actions. The agents subsequently exchange their revised intentions with each other. The exchange of intentions and subsequent replanning proceeds until the agents manage to align their actions, or until they reach an impasse that makes further negotiation futile.

The solution involves an Initiator who starts a negotiation round by declaring its intention to its peers, which are all the other agents who must be consulted before the Initiator can go ahead with its action as intended. The peers take on the role of Critics in the negotiation. They test whether there is a conflict between the declared intention and their own intended course of action.
If there is none, a Critic accepts the proposed action; otherwise it makes a counter-proposal or rejects the action outright. Counter-proposals contain alternative actions that are acceptable to the Critic. Rejections indicate that there is an impasse that can only be handled outside the negotiation framework. Figure 9 shows the structure of this pattern.


Figure 9: Role diagram of the Negotiating Agents pattern (a Participant role contains the Initiator and Critic roles)

An agent can be an Initiator and a Critic in different negotiations at the same time. Therefore we create another role, that of a Participant, which contains the Initiator and Critic roles. Participants often act on behalf of other agents for whom they are negotiating. For instance, an Initiator may negotiate on behalf of a buyer agent about the terms of a transaction. Once the terms have been determined, the buyer pays the negotiated amount to the seller and receives the good. The main interactions of this pattern are shown in Figure 10.


Figure 10: Interactions between the participants of the Negotiating Agents pattern (1: propose, Initiator to Critic; 2: accept / counter-propose / reject, Critic to Initiator)

For purposes of illustration, consider that agents represent their alternative courses of action as an AND/OR tree, as in Figure 11. AND nodes designate a set (possibly a sequence) of actions that need to be performed together, or not at all. OR nodes indicate choices between alternative action trees. This model of actions is fairly generic, as the literature suggests (for example, [Barbuceanu 98] or the recent work on the theory of actions by Collins [Collins 99]), but it is by no means all-inclusive. This should be kept in mind when applying this solution.
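With those caveats, a Critic's side of the protocol can be sketched against such a tree. The `(op, children)` tuple representation, the `conflicts` predicate, and the convention that the first enumerated alternative is the Critic's current intention are all assumptions made for this illustration, not notation from the cited literature.

```python
def alternatives(tree):
    """Enumerate the alternative action sets an AND/OR tree allows.
    A leaf (a string) is a single action; an OR node offers a choice
    between its subtrees; an AND node requires all its children
    together, or not at all."""
    if isinstance(tree, str):                 # leaf action
        yield {tree}
        return
    op, children = tree
    if op == "OR":                            # choice between subtrees
        for child in children:
            yield from alternatives(child)
    else:                                     # "AND": combine all children
        combos = [set()]
        for child in children:
            combos = [c | a for c in combos for a in alternatives(child)]
        yield from combos

def critique(proposal, own_tree, conflicts):
    """Critic: accept if the current intention (taken here to be the
    first alternative) avoids conflict with the proposal; otherwise
    backtrack over the remaining alternatives and counter-propose the
    first conflict-free one; reject if there is none."""
    options = list(alternatives(own_tree))
    def clash(actions):
        return any(conflicts(p, a) for p in proposal for a in actions)
    if not clash(options[0]):
        return ("accept", options[0])
    for actions in options[1:]:               # backtracking search
        if not clash(actions):
            return ("counter-propose", actions)
    return ("reject", None)
```

For a Critic whose tree is `("OR", ["D", "E", "F"])` and whose action D conflicts with a proposed action A, `critique({"A"}, ...)` counter-proposes E, mirroring agent B's behaviour in the worked example below.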


Figure 11: Illustration of using the Negotiating Agents pattern

In the example, agent A, the Initiator, decides to perform action A. Before it goes ahead with its execution, however, it publishes its intention to do so to its peer, agent B, the Critic (propose A). Assume that B detects a conflict between the proposed action and its own action D that it was about to perform. B therefore searches for an alternative and, via backtracking, arrives at action E. B makes a corresponding counter-proposal to A. Agent A can now rule out action A and search for an alternative course of action itself, which it finds in B. Upon receiving the updated proposal, B agrees by accepting the action proposed by A. A and B now proceed by executing their actions B and E.

8.5. Rationale

The Negotiating Agents pattern deals with the situation where the interacting agents appear as peers to each other, but need to align their actions for some reason. Unlike the Master-Slave pattern, the tasks of the interacting agents are not instances of one agent carrying out another agent's subtask. We refer to this type of coordination as horizontal. Once you adopt the key idea in this pattern, that agents declare their intentions to each other, there is a multitude of protocols that the agents can follow in their negotiation. The example

given serves only as an illustration; many other protocols are possible. This suggests that there is a whole pattern language for agent negotiation waiting to be written, with this pattern only providing the root from which other, more specific patterns descend.

8.6. Known Uses

This pattern is popular in the telecommunications [Griffeth 93] and supply-chain [Barbuceanu 98] domains. In the first domain, an example may involve two agents controlling different parties in a phone call. The agents interact with the underlying switch by listening for events and independently proposing a next action in response to these events. The actions proposed by the agents may, however, conflict with each other. For example, the callee's agent proposes to forward the call to another destination, but the caller does not want to be connected to this destination. If, instead, the callee's agent declares its intention to forward the call, the caller's agent is able to reject that action. As a result, the call is not forwarded. The concept of market sessions in MAGNET [Steinmetz 99] also exemplifies this pattern. Sessions encapsulate a transaction on the market. Agents can play two roles with regard to a session: the agent that initiates a session is known as the session initiator, while the other participating agents are known as session clients. The supply chain role model in the ZEUS agent building toolkit introduces similar roles: negotiation initiator and partner [Collis 99].

9. Summary

We have provided only a small sampling of the possible coordination patterns. Since assembling related patterns and positioning them relative to one another forms a pattern language that ultimately increases the generative quality of the patterns, we encourage others to develop coordination patterns. We believe that agent pattern languages are essential for the future success of agent systems.
They are helpful not only to you, but also to all the software engineers who will use the patterns to develop their systems in the future.

10. References

Alexander, C., The Timeless Way of Building, Oxford University Press, 1979.

Aridor, Y., and Lange, D., Agent Design Patterns: Elements of Agent Application Design, Conference on Autonomous Agents, IEEE, 1998.

Barbuceanu, M., Coordination with Obligations, Conference on Autonomous Agents, IEEE, 62-69, 1998.

Bradshaw, J., Dutfield, S., et al., KAoS: Towards an Industrial-Strength Open Distributed Agent Architecture. In: Bradshaw, J. (ed.), Software Agents, AAAI/MIT Press, 375-418, 1997.

Buschmann, F., The Master-Slave Pattern. In: Coplien, J., and Schmidt, D., Pattern Languages of Program Design 1, Addison-Wesley, 133-142, 1995.

Carriero, N., and Gelernter, D., How to Write Parallel Programs: A First Course, The MIT Press, 1990.

Cabri, G., Leonardi, L., and Zambonelli, F., Reactive Tuple Spaces for Mobile Agent Coordination, Mobile Agents: Second International Workshop, LNCS 1477, Springer, 237-248, 1998.

Chiariglione, L., Foundation for Intelligent Physical Agents, http://drogo.cselt.stet.it/fipa/, 1997.

Collins, H., and Kusch, M., The Shape of Actions: What Humans and Machines Can Do, The MIT Press, 1998.

Collis, J., and Ndumu, D., The Role Modelling Guide. In: ZEUS Methodology Documentation, BT Laboratories, 1999, http://www.labs.bt.com/projects/agents/zeus/docs.htm.

Coplien, J., Software Patterns, SIGS Management Briefings Series, SIGS, 1996.

Deugo, D., and Weiss, M., A Case for Mobile Agent Patterns, Workshop on Mobile Agents in the Context of Competition and Cooperation (MAC3), Autonomous Agents, 19-23, 1999.

Freeman, E., Hupfer, S., and Arnold, K., JavaSpaces: Principles, Patterns, and Practice, The Jini Technology Series, Addison-Wesley, 1999.

Gamma, E., Helm, R., et al., Observer. In: Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 293-303, 1995.

Gelernter, D., Carriero, N., et al., Parallel Processing in Linda, International Conference on Parallel Processing, 255-263, 1985.

Griffeth, N., and Velthuijsen, H., Reasoning about Goals to Resolve Conflicts. In: International Conference on Intelligent Cooperating Information Systems, IEEE, 197-204, 1993.

Hayes-Roth, B., A Blackboard Architecture for Control, Artificial Intelligence, 251-321, 1985.

Kendall, E., Krishna, P., et al., Patterns of Intelligent and Mobile Agents, Autonomous Agents, 1998.

Kitamura, Y., Mawarimichi, Y., and Tatsumi, T., Mobile-Agent Mediated Place Oriented Communication, International Workshop on Cooperative Information Agents, LNAI 1652, Springer, 232-242, 1999.

Mobile Agent List, University of Stuttgart, Germany, September 1999, http://www.informatik.uni-stuttgart.de/ipvr/vs/projekte/mole/mal/mal.html.

Milojicic, D., Breugst, M., et al., MASIF: The OMG Mobile Agent System Interoperability Facility, International Workshop on Mobile Agents, 1998. Also in: Springer Journal on Personal Technologies, 2:117-129, 1998.

Meszaros, G., and Doble, J., A Pattern Language for Pattern Writing. In: Martin, R., et al., Pattern Languages of Program Design 3, Addison-Wesley, 529-574, 1998.

Minar, N., and Papaioannou, T., Workshop on Mobile Agents in the Context of Competition and Cooperation (MAC3), Autonomous Agents, 1999, http://mobility.lboro.ac.uk/MAC3.

Omicini, A., and Zambonelli, F., Tuple Centres for the Coordination of Internet Agents, ACM Symposium on Applied Computing, ACM, 1999.

Rodrigues Da Silva, A., Romão, A., Deugo, D., and Mira Da Silva, M., Towards a Reference Model for Surveying Mobile Agent Systems, Autonomous Agents and Multi-Agent Systems, 2001.
Smith, R., The Contract Net Protocol: High-Level Communication and Control in a Distributed Problem Solver, IEEE Transactions on Computers, 1104-1113, 1980.

Steinmetz, E., Collins, J., et al., Bid Evaluation and Selection in the MAGNET Automated Contracting System, Agent-Mediated Electronic Commerce, LNAI 1571, Springer, 1999.

Talukdar, S., Cordozo, E., and Leao, L., Toast: The Power System Operator's Assistant, IEEE Expert, 53-60, July 1986.

Weiss, G., Multiagent Systems, The MIT Press, 1999.

Weiss, M., and Stetter, F., A Hierarchical Blackboard Architecture for Distributed AI Systems, Conference on Software Engineering and Knowledge Engineering, IEEE, 349-355, 1992.

Wong, D., Paciorek, N., et al., Concordia: An Infrastructure for Collaborating Mobile Agents, First International Workshop on Mobile Agents, 1997.

Wurman, P., Wellman, M., and Walsh, W., The Michigan Internet AuctionBot: A Configurable Auction Server for Human and Software Agents, Autonomous Agents, IEEE, 301-308, 1998.