Chapter 1

Why Autonomy Makes the Agent

Sam Joseph
NeuroGrid Consulting
205 Royal Heights, 18-2 Kamiyama-cho, Shibuya-ku, Tokyo 150-0047, Japan
+81-90-1795-8393
[email protected]

Takahiro Kawamura
Computer & Network Systems Laboratory, Corporate Research & Development Center
Toshiba Corp., 1 Komukai Toshiba, Kawasaki 212-8582, Japan
+81-44-549-2237
[email protected]

Introduction

This chapter presents a philosophical position regarding the agent metaphor that defines an agent in terms of behavioural autonomy, while autonomy is defined in terms of agents modifying the way they achieve their objectives. Why might we want to use these definitions? We try to show that learning allows different approaches to the same objective to be critically assessed, and thus the most appropriate one selected. This idea is illustrated with examples from distributed mobile agent systems, but it is suggested that the same reasoning can be applied to communication issues amongst agents operating in a single location. The chapter is structured as follows. Section 2 looks at the fundamental metaphors of agents, objects and data, while section 3 moves on to consider more complex concepts such as autonomy and mobility. In section 4 the authors attempt to define what a mobile agent actually is, and how one might be used to conserve network resources is addressed in section 5. Finally we explore the relationship between autonomy and learning, and try to clear up some loose ends.

Agents, Objects & Data

This chapter works on the premise that the position stated by Jennings et al. (1998) is correct: specifically that, amongst other things, the agent metaphor is a useful extension of the object-oriented metaphor. Object-oriented (OO) programming (Stroustrup, 1991) is programming in which data abstraction is achieved by users defining their own data structures (see figure 1), or "objects". These objects encapsulate data and methods for operating on that data, and the OO framework allows new objects to be created that inherit the properties (both data and methods) of existing objects. This allows archetypal objects to be defined and then extended by different programmers, who needn't have a complete understanding of exactly how the underlying objects are implemented. While one might develop an agent architecture using an object-oriented framework, the OO metaphor itself has little to say about the behavioural autonomy of agents, i.e. their ability to control access to their methods. In OO the process of hiding data and associated methods from other objects, and other developers, is achieved by specifying access permissions on object-internal data elements and methods. Ideally, object-internal data is invisible from outside the object, which offers functionality through a number of public methods. The locus of control is placed upon external entities (users, other objects) that manipulate the object through its public methods. The agent-oriented (AO) approach encourages the developer to think about objects as agents that make requests of each other, and then grant those requests based upon who has made the request. Agent systems have been developed that rely purely on the inherited network of accessibility of OO systems (Binder, 2000), but ideally an AO programming environment would provide more fine-grained access/security control through an ACL (Agent Communication Language) interface (see figure 1).

Figure 1. Example specifications at each level of abstraction. 1) A user-created data structure; 2) methods are added to allow manipulation of the underlying data, giving us an object; 3) to create an agent the object(s) are wrapped in an ACL interface that specifies how to interact with the agent, in this case via a DTD (Document Type Definition).

Thus, the important aspect of the Agent-Oriented approach is that, in contrast to object method specification, an ACL interface requires that the communicating parties be declared, allowing the agent to control access to its internal methods, and thus its behaviour. This in itself means that the agent's objectives must be considered, even if only in terms of which other entities the agent will collaborate with. The AO framework thus supports objects with objectives, which leads us on to the subject of Autonomy.
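To illustrate this shift in the locus of control, the following minimal Python sketch (all names and the request format are hypothetical, not drawn from any particular agent platform) shows an object wrapped so that each request is granted or refused based on who made it, which is the kind of policy an ACL interface would let a developer declare.

```python
class Agent:
    """Wraps an object and grants requests based on the requester's identity."""

    def __init__(self, name, collaborators):
        self.name = name
        self.collaborators = set(collaborators)  # who this agent will work with
        self._data = {}                          # object-internal state stays hidden

    def request(self, sender, action, **params):
        # The agent, not the caller, decides whether the request is honoured.
        if sender not in self.collaborators:
            return {"status": "refused", "reason": f"{sender} is not a known collaborator"}
        handler = getattr(self, f"_do_{action}", None)
        if handler is None:
            return {"status": "refused", "reason": f"unknown action '{action}'"}
        return {"status": "ok", "result": handler(**params)}

    def _do_store(self, key, value):
        self._data[key] = value
        return key

    def _do_fetch(self, key):
        return self._data.get(key)


archive = Agent("archive", collaborators=["searcher"])
print(archive.request("searcher", "store", key="x", value=42))  # granted
print(archive.request("stranger", "fetch", key="x"))            # refused
```

The point of the sketch is simply that access is mediated by the agent's own policy about collaborators, rather than by externally visible public methods.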

Autonomy, Messages & Mobility

Autonomy is often thought of as the ability to act without the intervention of humans (Evans et al., 1992; Beale & Wood, 1994; Brown et al., 1998; Etzioni & Weld, 1995). Autonomy has also been more generally defined as an agent's possession of self-control, or self-motivation, which encompasses the broader concept of an autonomous agent being free from control by any other agent, including humans (Castelfranchi, 1995; Covrigaru & Lindsay, 1991; Jennings et al., 1998). This is all well and good, but what does it mean in functional terms? Autonomous behaviour is often thought of as goal-directed (Luck & D'Inverno, 1995), with autonomous agents acting in order to achieve their goals. Pro-activeness is also thought of as another fundamental quality of autonomous agents (Foner, 1993), in as much as an agent will periodically take the initiative, performing actions that will support future goal achievement. Barber & Martin (1999) define autonomy as the ability to pursue objectives without outside interference in the decision-making process being employed to achieve those objectives. Going further, they make the distinction that agents can have different degrees of autonomy with respect to different goals. For example, a thermostat autonomously carries out the goal to maintain a particular temperature range, but it does not autonomously determine its own set point. To be as concrete as possible, given that an objective may be specified (e.g. transfer data X from A to B), Autonomy can be thought of as the ability of an entity to revise that objective, or the method by which it will achieve that objective [1], and an Agent is an Object that possesses Autonomy.

[1] To save space, from now on, when we talk about modifying an objective we also mean modification of the way in which it is achieved.

We consider messaging as a pre-requisite of Autonomy, in that if an agent cannot interact with its environment then it has no relevant basis upon which to modify, or even achieve, its objectives. Perhaps we should say sensing/acting rather than messaging, but the exchange of information in computer networks is arguably closer to a messaging paradigm than a sensing/acting one. This relates to the previous discussion, in which we considered how an agent communication language (ACL) forces an agent developer to specify who to collaborate with, which is part of the process of specifying an objective. For the purposes of this chapter, let us take an objective to be a goal with a set of associated security constraints. One might argue that there is not a strong connection between being able to modify one's objectives and restrictions about who to communicate with. However, if one thinks of the different agents that one can communicate with as offering different functionalities, then the extent to which one can modify one's objective becomes dependent on what information and functionality we can gain from those around us. For example, for our hypothetical agent attempting to transfer data X from A to B, its ability to change its approach in the light of ongoing circumstances depends crucially on its continuing interaction with the drivers that support different message protocols, and with the agents it has received its objectives from. If transfer cannot be achieved by the available methods, then the agent will need to refer back to other agents to get permission to access alternate transport routes, or to receive new instructions.

This might all be seen as a needless change in perspective over existing object development frameworks, but before we can demonstrate the benefits of this approach we need to consider code mobility, or the ability to transfer code from one processor to another. If we start to ask questions about whether this means that a process running in one location gets suspended and continued in a new location, we head into dangerous territory. The actual advantage of mobile agent [2] techniques over other remote interaction frameworks such as Remote Procedure Calls (RPC), mobile code systems, process migration, Remote Method Invocation (RMI), etc., is still highly disputed (Milojicic, 1999). There are various studies that show advantages for mobile agents over other techniques under certain circumstances, but in general they appear to rely on assumptions about the degree of semantic compression that can be achieved by the mobile agent at a remote site (Strasser & Schwehm, 1997; Chia & Kannapan, 1997; Baldi & Picco, 1998; Ismail & Hagimont, 1999; Theilmann & Rothermel, 1999). In this context semantic compression refers to the ability of an agent to reduce the size of the results of an operation due to its additional understanding of what is and isn't required (e.g. disposing of copies of the same web page and further filtering them based on some user profile). However, it is difficult to predict the level of semantic compression a particular agent will be able to achieve in advance [3].

[2] Rest assured this term will soon be more concretely defined.
[3] Although there are examples in Network Management applications that avoid this problem, e.g. finding the machine with the most free memory (c.f. Baldi & Picco, 1998).

By moving into the area of mobile agents we encounter various disputes, particularly as regards the concept of a multi-hop agent: a mobile agent that moves to, and performs some activity at, a number of remote locations without returning to its starting location. Some researchers (Nwana & Ndumu, 1999) even go so far as to question the value of current mobile agent research. Nwana & Ndumu advocate that we should solve the problems associated with stationary agents before moving on to the more complex case of mobile agents. While there might be some truth in this, the authors of this chapter would like to suggest that it is in fact possible to gain insight into solutions that can be applied to stationary agents by investigating mobile agents. This seemingly backwards notion might become a little clearer if we allude to the possibility of constructing virtual locations within a single location.
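To make the working definition of autonomy concrete, here is a minimal, purely illustrative Python sketch (the transport functions and their failure modes are invented) of an agent whose objective is fixed, transfer data X from A to B, but which revises the method used to achieve it when one transport fails, and refers back to its principal only when every permitted method is exhausted.

```python
def transfer_via_ftp(data, dest):
    raise IOError("ftp route unavailable")      # simulate a failed transport

def transfer_via_http(data, dest):
    print(f"sent {len(data)} bytes to {dest} over http")
    return True

class TransferAgent:
    def __init__(self, permitted_methods):
        # The objective (move the data) is given; the method is the agent's to revise.
        self.permitted_methods = list(permitted_methods)

    def achieve(self, data, dest):
        for method in self.permitted_methods:
            try:
                if method(data, dest):
                    return True
            except IOError:
                continue                        # revise the approach: try the next transport
        # No permitted method worked: refer back for new permissions or instructions.
        raise RuntimeError("objective not achievable with permitted methods; "
                           "requesting new instructions")

agent = TransferAgent([transfer_via_ftp, transfer_via_http])
agent.achieve(b"payload X", "host B")
```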

Defining a Mobile Agent

For further clarity we shall have to dive into definitions of state, mobile code, and mobile agent; but once we have done so we hope to show the utility of all these definitions, specifically that they help us to think about the different types of techniques that can be used to help an agent or group of agents achieve an objective. In a distributed environment possible techniques include messaging between static agents, or multi-hop mobile agents, or combinations thereof. It will hopefully become clear to the reader that these approaches can be translated into virtual agent spaces, where we consider interactions between agents in a single location. We can perhaps rephrase the issue in terms of the question: is it more valuable to perform a serial operation (using multi-hop mobility) or a parallel operation (using messaging)? Or in other words, if we need to poll a number of knowledgeable entities in order to solve a problem, should we ask them all and potentially waste some of their time, or should we first calculate a ranking of their ability to help us and then ask them each in turn, finishing when we get the result we want? Or some combination of the two? This question is especially pertinent in the distributed network environment, since transferring information around can be highly expensive, but in the case that all our agents reside in the same place (potentially on a number of adjacent processors), the same issues arise, the same kinds of tools (chain messages, serial agents, parallel messages) are available as alternate strategies, and their respective utilities need to be evaluated on a case-by-case basis. So let's be specific and further define our terms:

- Message: a read-only data structure
- State: a read/write data structure
- Code: a set of static operations on data

Where an "operation" means something that can be used to convert one data structure into another. A data structure is taken to follow the C/C++ language idea of a data structure: a variable or set of variables that may be type-specified (e.g. float, int, hashtable, etc.) and that may be arbitrarily nested (e.g. a hashtable of hashtables of char arrays). State is often used to refer to the maintenance of information about the currently executing step in some code, which requires a read/write data structure. Given that we are transmitting something from one location to another, it is possible to imagine the transmission of any of the eight possible combinations of the three types defined above (e.g. message & code, message & state, etc.). Some of the possible combinations are functionally identical, since a read/write component (state) can replicate the functionality of a read-only component (message). We might have considered write-only components as well, but they would not appear to add anything to our current analysis. In summary we can distinguish four distinct entities:

- MESSAGE (implicitly parallel): message only
- CHAIN MESSAGE (serial): message & state
- MOBILE CODE (parallel): code only
- MOBILE OBJECT (serial): code & state
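The four entities can be captured in a few lines of Python (a hypothetical rendering, not any particular agent platform's types); the only point of the sketch is that a chain message adds writable state to a message, and a mobile object adds writable state to mobile code.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass(frozen=True)
class Message:
    """Read-only data structure, dispatched to each location in parallel."""
    query: str

@dataclass
class ChainMessage:
    """Message plus read/write state, passed serially from host to host."""
    query: str
    results_so_far: List[Any] = field(default_factory=list)

@dataclass(frozen=True)
class MobileCode:
    """Static operations shipped to a host; no state travels with them."""
    operation: Callable[[Any], Any]

@dataclass
class MobileObject:
    """Code and state together: can compare current results with earlier hops."""
    operation: Callable[[Any], Any]
    state: Dict[str, Any] = field(default_factory=dict)

chain = ChainMessage(query="find X")
chain.results_so_far.append("partial result from host A")  # state accumulates as it hops
```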

We can consider each of the above entities in terms of sending them to a number of network locations in either a serial or parallel fashion (see figures 2 & 3). While there are other possibilities, such as star-shaped itineraries (Tahara et al., 1999) or combinations of serial and parallel, we shall leave those for the moment. The important thing to note is that in a parallel operation state has little value, since any individual entity will only undergo a single hop (a one-step migration), while state becomes essential to take advantage of a serial operation, in order to maintain and compare the results of current processing with previous steps.

Figure 2. Serial Chain Message or Mobile Object framework. SA (Stationary Agent), MA (Mobile Agent). Arrows represent movement of object or message.

Basically we are considering the utility of each of these entities in terms of performing distributed computation or search. If the objective is merely to gather a number of remote data items in one location, then sending a request message to each remote location will probably be sufficient. If we want to run a number of different processes on different machines, mobile code becomes necessary, if not a mobile object. However, if we think an advantage can be gained by remotely comparing and discarding the results of some processing, then chain messages and mobile objects seem more appropriate (since they can maintain state in order to know what has been achieved so far, etc.).
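The following sketch (hypothetical hosts and toy data, Python used purely for illustration) contrasts the two styles: a parallel fan-out of read-only messages that simply gathers everything for local filtering, versus a serial chain that carries state so that each host can compare against, and discard in favour of, the best result seen so far.

```python
# Toy network: each "host" holds some scored items.
hosts = {
    "A": [3, 9, 1],
    "B": [7, 2],
    "C": [8, 8, 4],
}

def parallel_messaging(query_hosts):
    """Send a read-only request everywhere; all results come back for local filtering."""
    replies = {h: list(items) for h, items in query_hosts.items()}
    best = max(v for items in replies.values() for v in items)
    transferred = sum(len(items) for items in replies.values())
    return best, transferred

def serial_chain(query_hosts):
    """Pass a stateful chain from host to host; only the running best is carried forward."""
    state = {"best": None}
    hops = 0
    for host, items in query_hosts.items():
        local_best = max(items)
        if state["best"] is None or local_best > state["best"]:
            state["best"] = local_best          # remote comparison: discard the rest
        hops += 1                               # only the small state hops across the network
    return state["best"], hops

print(parallel_messaging(hosts))  # (9, 8)  -- 8 items moved back
print(serial_chain(hosts))        # (9, 3)  -- 3 hops, one running best carried each time
```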

Figure 3. Parallel Messaging or Mobile Code framework. SA (Stationary Agent), MA (Mobile Agent). Arrows represent movement of messages or objects.

Thus we define a mobile agent as a mobile object that possesses autonomy, where autonomy was previously defined as the ability to revise one's objective. In order to support autonomy in an entity we need some way of storing previous occurrences, i.e. state, which means that a message or a piece of mobile code cannot by itself support autonomy. We also require some kind of processing in order to make the decision to change an objective or the method of achieving it, which means that by itself a chain message cannot be autonomous, although by operating in tandem with the processing ability of multiple stationary agents, autonomous behaviour can be achieved. This leaves mobile objects, which carry all the components required to support autonomy. This by itself does not make a mobile object an autonomous entity, but given that it is set up with an objective and a framework for revising it [4], it may be made autonomous, and we would suggest that in this case it is worth breaking out a new term, i.e. Mobile Agent. So, just to be clear about the distinction we are making: in the serial itinerary of figure 2, a mobile object will visit all possible locations, while a mobile agent has the ability to stop and revise the locations it plans to visit at any point. While the reader might disagree with the use of these particular words, there does seem to be a need to distinguish between the two concepts, particularly since, as we shall discuss in the next section, the presence of autonomy enables a more efficient usage of network resources.

[4] In the simplest case this could be a while{} loop monitoring some environmental variable; its complexity is not at issue here.
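The distinction can be shown in a few lines; this is a hedged sketch with invented host names and a stubbed-out remote check, not a real migration framework.

```python
locations = ["host1", "host2", "host3", "host4"]

def look_for(item, location):
    # Stand-in for remote processing; pretend the item lives on host2.
    return location == "host2"

def mobile_object_tour(item):
    """A mobile object follows its fixed itinerary to the end, regardless of results."""
    visited = list(locations)                       # all four hops happen
    found_at = [loc for loc in visited if look_for(item, loc)]
    return visited, found_at

def mobile_agent_tour(item):
    """A mobile agent can revise its itinerary: it stops once the objective is met."""
    visited = []
    for loc in locations:
        visited.append(loc)
        if look_for(item, loc):
            break                                   # objective achieved; cut the tour short
    return visited

print(mobile_object_tour("X"))   # (['host1', 'host2', 'host3', 'host4'], ['host2'])
print(mobile_agent_tour("X"))    # ['host1', 'host2']
```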

Efficient Use of Network Resources

It might well be the case that there is no killer application for mobile agents, or indeed for non-mobile agents. Unlike previous "killer apps" such as spreadsheets or web browsers, which introduced users to a new way of using a computer, agents should perhaps instead be considered as a development methodology with no associated killer app. There is perhaps little disagreement that software should be easy to develop, maintain, upgrade, modify and re-use, and should fail gracefully. One might go so far as to suggest that these kinds of qualities are likely to be provided by systems based on independent autonomous modules, or indeed agents. The pertinent questions are what the associated cost of creating such a system is, and whether we will suffer a loss of efficiency as a consequence.

What is efficiency? When we employ a system to achieve some objective on our behalf, any number of resources may be consumed, such as our patience or emotional resolve, but more quantifiably things like time (operation, development, preparation, maintenance), CPU cycles, network bandwidth and heap memory usage. In determining whether a (mobile) agent system is helping us achieve our goals it is important to look at all the different resources that are consumed by its operation in comparison with alternate systems. Some are more difficult to measure than others, and different people and organisations put different premiums on different resources. The authors' research into mobile agents has focused on time and bandwidth consumption, since these are considered to currently be in short supply. If we can keep all of this in mind then we might be able to assess the agent-oriented metaphor with a greater degree of objectivity than previously. The OO metaphor has overheads in terms of specifying and maintaining object hierarchies and permissions, but it seems to have become widely accepted that these are outweighed by the greater maintainability and flexibility of the code developed in this fashion. If we can show that the costs of constructing more complex agent-oriented systems are outweighed by some similar advantage, then perhaps we can put some arguments about agents to rest.

Figure 4. Communicating across the network with RPC calls. Copyright General Magic 1996.

A key paper in the recent history of the mobile agent field is the Telescript white paper (White, 1996), in which some of the benefits of using mobile agents were introduced. There are two diagrams from this paper that have been reproduced, both graphically and logically, in many papers, talks and discussions on mobile agents. The first diagram shows the Remote Procedure Call (RPC) approach to communicating with a remote location (figure 4), while the second (figure 5) indicates how all the messy individual communication strands of the RPC can be avoided by sending out a mobile agent. The central idea is that the mobile agent can reduce the number of messages moving around the network, and that the start location (perhaps a user on their home computer or Personal Digital Assistant, PDA) can be disconnected from the network.

Figure 5. Communicating across the network with mobile agents. Copyright General Magic 1996.

The advantage of being able to disconnect from the network is tied up with the idea that one is paying for access to the network: connecting twice for twenty seconds, half an hour apart, will be a lot cheaper than being continuously connected for half an hour. While this might be the case for a lot of users connecting to the network through a phone-company stranglehold, it does not in fact work well as an argument for using mobile agents throughout the network. A TCP/IP based system will break the mobile agent up into packets in order to transmit it, so the real question becomes: "Is the agent larger than the sum of the sizes of the messages it is replacing?" Or, more generally, does encoding our communication in terms of a mobile agent gain any tangible efficiency improvement over encoding it as a sequence of messages? The problem is predicting which communication encoding will be more effective for a given task and network environment.
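The question in the text reduces to a simple comparison of bytes moved. The sketch below uses a crude cost model and invented numbers; nothing here is a measurement, it merely shows the form of the trade-off and why the expected semantic compression, which usually has to be estimated, dominates the answer.

```python
def favours_mobile_agent(agent_bytes, request_bytes, reply_bytes, n_locations,
                         expected_compression=1.0):
    """Return True if shipping an agent to each location is expected to move fewer
    bytes than plain request/reply messaging.

    expected_compression is the fraction of the raw replies the agent is expected
    to carry home after remote filtering (semantic compression); estimating it in
    advance is exactly the prediction problem discussed in the text.
    """
    messaging_cost = n_locations * (request_bytes + reply_bytes)
    agent_cost = (n_locations * agent_bytes
                  + expected_compression * n_locations * reply_bytes)
    return agent_cost < messaging_cost

# Invented numbers: a 20 KB agent replacing 1 KB requests that pull 50 KB replies.
print(favours_mobile_agent(20_000, 1_000, 50_000, n_locations=5,
                           expected_compression=0.1))   # True: heavy remote filtering pays
print(favours_mobile_agent(20_000, 1_000, 50_000, n_locations=5,
                           expected_compression=0.9))   # False: little filtering, agent loses
```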

Prediction Issue

The use of an agent-oriented development methodology helps in the design and maintenance of large software systems, at least as far as making them more comprehensible. But this does not automatically mean that mobile agents will have any advantage over a group of stationary agents communicating via messages. To illustrate the point, let us imagine an example application that is representative of those often used to advocate mobile agent advantages. Let us say that we are searching for a number of web pages from a variety of different search engines, a meta-search problem so to speak (see figure 6). Search engines currently available on the web allow us to submit a set of search terms, but will not host our mobile agents. In some future situation in which we could send mobile agents out to web search engines, or in some intranet enterprise environment where database wrappers can host mobile agents (Kawamura et al., 2000), we might be tempted to try to send out a single mobile agent rather than lots of separate queries.

Figure 6. MetaSearch through the Internet or Intranet.

Quite apart from whether we might benefit from a multi-hop mobile agent performing this search for us, we can ask whether we can gain anything from having our agent perform some local processing at a single remote search engine/database wrapper. For example perhaps we are keen not to receive more than ten results from each remote location; perhaps we are searching for documents that contain the word ``Microsoft'', but rather than just returning the top ten hits when we get more than ten, maybe we would like the search to be narrowed by the use of additional keywords; a sort of conditional increase in search specificity as shown by the flow-chart in figure 7.

Figure 7. Conditionally increasing search specificity

The flow chart summarises the kind of code that an agent might execute at a remote location as part of our meta-search. The main point is that if the number of matched documents is actually less than the threshold, then all the information apart from the first search term is not needed: sending out the code has just consumed bandwidth without delivering any benefit. Of course, you cry, sometimes the rest of the code will be used, just not on every occasion. Exactly, but what is the likelihood that we will need the extra code, or indeed the extra information? Clearly we need to hedge our bets: in a search where we expect large numbers of results to require some semantic compression at a remote location, we can happily send out lots of code and data, just in case, to make sure we don't take up too much bandwidth. However, we need to be more specific about the details of this trade-off. If we want to show any kind of non-situation-specific advantage of transferring code/agents over the network, we need to be able to predict the kinds of time/bandwidth efficiency savings they will create against the time/bandwidth their implementation consumes.
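The flow chart of figure 7 amounts to something like the following sketch (the search function, corpus and threshold are placeholders invented for illustration); the point is that the extra keywords, and the code that applies them, are only ever used on the branch where too many documents match.

```python
def remote_search(terms):
    # Placeholder for the engine-side query; returns matching document ids.
    corpus = {1: "microsoft windows", 2: "microsoft office", 3: "microsoft research agents"}
    return [doc for doc, text in corpus.items() if all(t in text for t in terms)]

def conditional_specificity(primary_term, extra_keywords, threshold=10, max_results=10):
    """Narrow the query with additional keywords only while too many documents match."""
    terms = [primary_term]
    matches = remote_search(terms)
    for kw in extra_keywords:
        if len(matches) <= threshold:
            break                      # under threshold: the extra keywords were never needed
        terms.append(kw)               # over threshold: spend the extra information we carried
        matches = remote_search(terms)
    return matches[:max_results]

print(conditional_specificity("microsoft", ["agents", "mobile"], threshold=2))  # [3]
```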

Joseph et al. (2001) work through a more specific example of an object search application, showing that the ability to roughly predict the location of an object allows efficient switching between two different search protocols. If we refer back to figure 3, the parallel diagram indicates either the Mobile Code or Message transfer paradigm, while the serial diagram in figure 2 indicates the Mobile Object or Chain Message paradigm. Let us re-emphasise what it means to change our mobile object into a mobile agent. In the serial diagram we can see that the presence of behavioural autonomy, the ability to adjust one's method of achieving a goal, would allow the entity being transferred around the system to return early if the desired item was found, the network environment changed, etc. The ability to adjust a plan, in this case to visit four locations in sequence, and to return to base at will, allows network resources to be conserved. For our single-hop agent performing meta-search, autonomy concerns controlling when to finish processing and return the results. All this requires is a while loop waiting for some change (achieving the goal), which can be achieved using a mobile object, you protest; but wait, the OO paradigm has nothing to say about whether or not that kind of framework should be set up. What the AO framework should provide is a way for these goals, and the circumstances under which they should be adjusted, to be easily specified (Wooldridge et al., 1999). The remaining issue is how an agent can make a decision to adjust its goals, or its method of achieving them, if it can't predict the effects of the change. It is the authors' humble opinion that in the absence of predictive ability, agents cannot effectively make decisions, except in relatively simple environments. If a problem is sufficiently well understood then the probabilities of any occurrence might well be known, but in all those really interesting problems where they aren't known in advance, we are forced to rely upon learning as we go along.

Learning

We define learning as the adjustment of a model of the environment in response to experience of the environment, with the implicit objective being that one is trying to create a model that accurately reflects the true nature of the environment, or perhaps more specifically those sections of the environment that influence the objectives an agent is trying to achieve. In these terms the simple updating of a location database to accurately reflect the contents of a network location can be considered learning, but really what additional characteristics are required? Where learning is mentioned one tends to think of the benefits gained through generalisation and analogy, which are in fact properties of the way in which the environment is represented in the memory of our learner. A more concrete example of this is provided in Joseph et al. (2000), which we summarise here: learning about the location of objects within a distributed network environment can lead to a more efficient use of resources. Essentially the particular learning algorithm used is not as important as the representation of the objects, although the learning algorithm needs to be able to output probabilistic estimates of an object's location (for a review of probabilistic learners see Buntine (1996)); Joseph et al. (2000) used a representation based on object type [5], after Segal (1993). To make a long story short, when a chain message or mobile agent is performing a serial search for an object, knowing the probability that it exists in each of the search locations allows one to estimate when the search will terminate. Even when using parallel messaging, the same estimates can be used to choose a subset of possible locations to make an initial inquiry. That information then allows the alternative methods of achieving the same objective to be quantitatively compared and the most efficient option selected. Of course, how can we be sure that our probability estimates are correct? We can't, but the reasoning of Etzioni (1991) seems sound: we should use the results of previous searches, or processing, in order to create future predictions. It might also be expedient to rank the different representational units in terms of their predictive ability, such as finding that knowing a file is a Word file allows us to predict its location with some accuracy, while knowing that something is a binary file is not so useful. In the meta-search example, an estimate of how many results will be generated in response to a particular set of search terms can perform the same function, effectively setting up a profile of which search engines are experts in which domains, so that the most appropriate subset can be contacted depending on our current query. This approach already has some currency, if InfoSeek's recent patent on a web-based meta-search technique is anything to go by (Schwartz, 1998).

[5] In fact, file type: executable, text, Word file, etc.
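A minimal sketch of the kind of comparison just described, with invented probabilities rather than learned ones: given an estimate of the probability that the object sits at each location, we can compute the expected number of hops a serial search will make (visiting the most likely locations first) and the number of parallel messages needed to cover most of the probability mass, and then choose between the two protocols.

```python
def expected_serial_hops(location_probs):
    """Expected number of hops for a serial search that visits locations in
    descending order of the estimated probability that each one holds the object.
    The probabilities are treated as a distribution over locations; any remainder
    is the chance the object is absent, in which case every location is visited."""
    probs = sorted(location_probs, reverse=True)
    expected = sum(i * p for i, p in enumerate(probs, start=1))
    expected += len(probs) * (1.0 - sum(probs))    # object not found anywhere
    return expected

def parallel_subset_size(location_probs, coverage=0.9):
    """Smallest number of parallel messages whose combined probability of hitting
    the object's location reaches the requested coverage."""
    probs = sorted(location_probs, reverse=True)
    cumulative = 0.0
    for i, p in enumerate(probs, start=1):
        cumulative += p
        if cumulative >= coverage:
            return i
    return len(probs)

probs = [0.6, 0.2, 0.1, 0.05, 0.05]                # learned estimates, invented here
print(expected_serial_hops(probs))                 # about 1.75 expected hops
print(parallel_subset_size(probs, coverage=0.85))  # 3 messages reach 85% coverage
```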

Discussion

The main point that has been brought up in this chapter is that the Agent-Oriented approach to software might have something to offer over and above the Object-Oriented approach. We can think of an Agent-Oriented approach as offering a developer an easy way to establish an ACL and a format for specifying the objectives of individual agents. This can be thought of as just a shift in terminology, but the authors of this chapter go further, to suggest that if this Agent-Oriented framework allows for the dynamic adjustment of agents' objectives, then functional differences can be achieved in system performance and efficiency. We have tried to present an argument that there is a quantifiable difference between those mobile objects that can decide to adjust their objectives en route and those that can't; and that if we want to take advantage of "mobile objects" they should be able to switch behaviours to suit circumstances, and make those decisions on the basis of predictions about the most effective course of action. There are of course many unanswered questions, such as what our Agent-Oriented programming languages should look like, and what functions they should provide in order to assist developers in creating agents with objectives that can be modified in the face of their ongoing experience; we hope to make these the subject of future publications. For an example of the kind of work going on in this area we refer readers to the work of Wooldridge et al. (2000).

Appendix

Now, on to some of those prickly issues. Firstly there is the question of how we specify an agent's objective, for example in terms of the Belief-Desire-Intention (BDI) framework (Rao & Georgeff, 1991). While this kind of framework is clearly very important in the long term, it would seem expedient in the short term at least to simply encourage agent developers to think about their agents' objectives. Insisting that agent system developers employ (and by implication learn) an unfamiliar new objective modelling language is likely to put many people off. In the short term the objectives of agents get specified implicitly by way of any number of restrictions on agent behaviour (security restrictions, temporal restrictions, etc.). For example, an agent may be trying to load balance, but be restricted in which resources can be used to balance the processing load over a number of machines, for security reasons or whatever; this creates a bounded objective, e.g. "Solve this problem, but don't use machine B to do it" (a minimal sketch of such a bounded objective is given in code below). It is likely to be only a matter of time before agreed-upon (or at least widely used) formats for these kinds of specifications arise, but in advance of them the authors believe it is useful to work towards some kind of philosophically consistent agent-oriented metaphor before working on a detailed specification, in much the same way that the object-oriented language specifications came after the philosophical development of the OO metaphor.

Next comes the problem of definitions and their value. Throughout this chapter we make a number of definitions, and the fall-out may be that, for example, some systems people would not like to think of as agents will be labelled as agents. An analogy could be drawn with trying to define a concept like "alive", which might be the quality an entity possesses if it matches a number of criteria such as growth, metabolism, energy use, nutrition, respiration, reproduction and response to stimuli (as one of the authors seems to remember from an old biology textbook). The point is that with any such definition there might be unfortunate side effects, such as a car-making factory being classified as more "alive" than a virus, or that kind of thing. While this might be regarded as a horrific consequence by some, it seems that rather than repeatedly modifying definitions to try and make them fit in with our "intuition" about what is meant by a particular term, we should focus on making definitions that draw a distinction of some value, e.g. whether or not a system can modify its stated objectives, and on gaining insight from the categorisations that follow.

There are various issues relating to messaging protocols, since autonomy is by its nature tied up with the ability to communicate. This might not be clear at first, but if autonomy is defined as an ability to modify one's objectives, there needs to be some basis upon which to make those decisions. In the absence of any interaction with an environment (whether or not it has any other autonomous entities in it), any such decision becomes of no consequence. In a relatively static environment we might want to talk about sensing instead of communicating, but when we think about computer networks, any sensing of the environment takes place in an active fashion; i.e. we might just be "sensing" the file system, but increasingly we are communicating with some file system agent or wrapper.
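Before moving on to protocols, here is the minimal sketch promised above of a bounded objective, in the spirit of the earlier working definition of an objective as a goal plus a set of constraints; the field names and constraint types are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class BoundedObjective:
    """An objective as used informally in the text: a goal plus the restrictions
    (security, temporal, resource) under which it must be achieved."""
    goal: str
    excluded_machines: Set[str] = field(default_factory=set)
    deadline_seconds: Optional[float] = None

    def permits(self, machine: str) -> bool:
        return machine not in self.excluded_machines

# "Solve this problem, but don't use machine B to do it."
objective = BoundedObjective(goal="balance processing load", excluded_machines={"B"})
candidates = ["A", "B", "C"]
print([m for m in candidates if objective.permits(m)])   # ['A', 'C']
```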

In order for any sensing or communication to be useful in the computer network environment, protocols are necessary; or perhaps we mean ontologies? The distinction becomes complex and a full investigation is beyond the scope of this chapter. In order to summarise the current convoluted state of affairs, let us describe four possible outcomes of current research:

1. Everyone spontaneously agrees on some communication framework/protocol (FIPA-ACL, KQML; Labrou et al., 1999).
2. Someone works out how to formally specify all the different ACL (Agent Communication Language) dialects within one overarching framework that includes lots of helpful ontology brokering services that make communication work (Sycara et al., 1998).
3. Someone figures out how to give agents enough wits to be able to infer the meanings of speech acts from the context in which they are communicated (Kirby, 1999).
4. Some combination of the above.

While this is not a trivial issue, it is possible to overlook it in a given agent system by assuming that all the agents subscribe to a single protocol, which is the case in most implemented agent systems.

There is also the "who does what for free?" issue. Jennings et al. (1998) summarise the difference between objects and agents in terms of the slogan "Objects do it for free; agents do it for money". However, due to possible semantic conflict with the saying "Professionals do it for money; amateurs do it for the love of it", we suggest the possible alternate slogan "Objects do it because they have to; agents do it because they want to", in order to directly capture the point that the agent-oriented approach advocates that software entities (i.e. agents) have a policy regarding their objectives: what they intend to achieve, and which objectives they are prepared to collaborate in achieving.

Finally we should look more closely at our definition of code. The issue is that any piece of information could be taken to represent an operation. We can get into complex epistemological questions about whether a meteor shower, or RNA protein manufacture, constitutes data processing. However, for the current purposes we define an operation as something that can be interpreted within our current system as an operation. For example, a simple list of letters (e.g. E, F, U, S) could be taken to represent a series of operations in a system set up to recognise that representation. In summary, we are tempted to think of code as a set of operations that can be interpreted within the system in question (a minimal sketch of such an interpretation appears at the end of this section).

One final note is that we could actually construct a read-only chain message by having each remote stationary agent check its own ID against the read-only destination IDs in the chain message, but this is not a general solution and would create security issues about untrusted hosts knowing the complete itinerary of the chain message. Also of note is that we have many more possibilities than just sending purely parallel or purely serial messages, but then our search space gets very big very quickly. Still, this possibility deserves further attention.
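As promised above, a minimal sketch of code as a set of operations interpretable within a system: the mapping from the letters E, F, U, S to operations is entirely invented, which is exactly the point, since outside a system prepared to recognise this representation the list is just data.

```python
# Hypothetical mapping: within this system the letters are interpreted as operations.
OPERATIONS = {
    "E": lambda s: s + ["expanded"],
    "F": lambda s: [x for x in s if x != "noise"],
    "U": lambda s: [str(x).upper() for x in s],
    "S": lambda s: sorted(s),
}

def interpret(program, data):
    """Treat a list of letters as code: a sequence of operations on a data structure."""
    for letter in program:
        data = OPERATIONS[letter](data)
    return data

print(interpret(["E", "F", "U", "S"], ["beta", "noise", "alpha"]))
# ['ALPHA', 'BETA', 'EXPANDED']
```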

Acknowledgements

We wish to thank Takeshi Aikawa, Leader of the Computer & Network Systems Laboratory, for allowing us the opportunity to conduct this research, and Shinichi Honiden & Akihiko Ohsuga for their input and support.

References

Baldi M. & Picco G. P. (1998) Evaluating the Tradeoffs of Mobile Code Design Paradigms in Network Management Applications. In Kemmerer R. and Futatsugi K. (Eds.), Proc. 20th Int. Conf. Soft. Eng. (ICSE'98), ACM Press, 146-155.
Barber S. K. & Martin C. (1999) Specification, Measurement, and Adjustment of Agent Autonomy: Theory and Implementation. Technical Report TR99-UTLIPS-AGENTS-04, University of Texas.
Beale R. & Wood A. (1994) Agent-based Interaction. Proc. People and Computers IX: Proceedings of HCI'94, Glasgow, UK, 239-245.
Binder W. (2000) Design and Implementation of the J-SEAL2 Mobile Agent Kernel. 6th ECOOP Workshop on Mobile Object Systems: Operating System Support, Security, and Programming Languages. http://cui.unige.ch/~ecoopws/ws00/index.html
Brown S. M., Santos Jr. E., Banks S. B. & Oxley M. E. (1998) Using Explicit Requirements and Metrics for Interface Agents User Model Correction. Proc. Second International Conference on Autonomous Agents, Minneapolis/St. Paul, MN, 1-7.
Buntine W. (1996) A guide to the literature on learning probabilistic networks from data. IEEE Trans. Knowl. & Data Eng., 8(2):195-210.
Carzaniga A., Picco G. P. & Vigna G. (1997) Designing distributed applications with mobile code paradigms. In Taylor R. (Ed.), Proc. 19th Int. Conf. Soft. Eng. (ICSE'97), ACM Press, 22-32.
Castelfranchi C. (1995) Guarantees for Autonomy in Cognitive Agent Architecture. In Wooldridge M. J. and Jennings N. R. (Eds.), Intelligent Agents: ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag, Berlin, 56-70.
Chia T. H. & Kannapan S. (1997) Strategically mobile agents. In Rothermel K. and Popescu-Zeletin R. (Eds.), Lecture Notes in Computer Science: Mobile Agents, Springer, 1219:174-185.
Covrigaru A. A. & Lindsay R. K. (1991) Deterministic Autonomous Systems. AI Magazine, vol. 12, 110-117.
Etzioni O. (1991) Embedding decision-analytic control in a learning architecture. Artificial Intelligence, 49:129-159.
Etzioni O. & Weld D. S. (1995) Intelligent Agents on the Internet: Fact, Fiction, and Forecast. IEEE Expert, 10(4), 44-49.
Evans M., Anderson J. & Crysdale G. (1992) Achieving Flexible Autonomy in Multi-Agent Systems Using Constraints. Applied Artificial Intelligence, vol. 6, 103-126.
Foner L. N. (1993) What's An Agent, Anyway? A Sociological Case Study. MIT Media Lab, Boston, Technical Report, Agents Memo 93-01.
Fuggetta A., Picco G. P. & Vigna G. (1998) Understanding code mobility. IEEE Trans. Soft. Eng., 24(5):342-361.
Ismail L. & Hagimont D. (1999) A performance evaluation of the mobile agent paradigm. OOPSLA, ACM SIGPLAN Notices, 34(10):306-313.
Jennings N. R., Sycara K. & Wooldridge M. (1998) A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems, 1:7-38.
Joseph S., Hattori M. & Kase N. (2001) Efficient Search Mechanisms for Learning Mobile Agent Systems. Concurrency: Practice and Experience. In press.
Kawamura T., Joseph S., Hasegawa T., Ohsuga A. & Honiden S. (2000) Submitted to ASA/MA 2000.
Kirby S. (1999) Syntax out of learning: the cultural evolution of structured communication in a population of induction algorithms. In Floreano D., Nicoud J.-D. and Mondada F. (Eds.), Advances in Artificial Life, Lecture Notes in Computer Science 1674.
Labrou Y., Finin T. & Peng Y. (1999) Agent communication languages: the current landscape. IEEE Intelligent Systems & their Applications, 14:45-52.
Luck M. & D'Inverno M. P. (1995) A Formal Framework for Agency and Autonomy. Proc. First International Conference on Multi-Agent Systems, San Francisco, CA, 254-260.
Milojicic D. (1999) Mobile agent applications. IEEE Concurrency, 80-90.
Nwana H. S. & Ndumu D. T. (1999) A perspective on software agents research. To appear in Knowledge Engineering Review.
Rao A. S. & Georgeff M. P. (1991) Modeling rational agents within a BDI architecture. In Fikes R. & Sandewall E. (Eds.), Proceedings of Knowledge Representation and Reasoning, Morgan Kaufmann, 473-484.
Schwartz C. (1998) Web search engines. Journal of the American Society for Information Science, 49(11):973-982.
Segal R. (1993) St. Bernard: the file retrieving softbot. Unpublished Technical Report FR-35, Washington University.
Strasser M. & Schwehm M. (1997) A performance model for mobile agent systems. In Arabnia H. (Ed.), Proc. Int. Conf. on Parallel and Distributed Processing Techniques and Applications (PDPTA'97), II:1132-1140.
Stroustrup B. (1991) What is "Object-Oriented Programming"? AT&T Bell Laboratories Technical Report.
Sycara K., Lu J. & Klusch M. (1998) Interoperability amongst heterogeneous software agents on the internet. Carnegie Mellon University, PA (USA), Technical Report CMU-RI-TR-98-22.
Tahara Y., Ohsuga A. & Honiden S. (1999) Agent system development method based on agent. Proc. ICSE, IEEE.
Theilmann W. & Rothermel K. (1999) Disseminating mobile agents for distributed information filtering. Proc. ASA/MA, IEEE Press, to appear.
White J. (1996) Mobile agents white paper. http://wwwiiuf.ch/~chantem/white_whitepaper/whitepaper.html
Wooldridge M., Jennings N. R. & Kinny D. (1999) A Methodology for Agent-Oriented Analysis and Design. Proc. Autonomous Agents 99, 69-76.
Wooldridge M., Jennings N. R. & Kinny D. (2000) The Gaia Methodology for Agent-Oriented Analysis and Design. Autonomous Agents and Multi-Agent Systems, 3:285-312.