Pluggability Issues in the Multi Protocol*

Simon Gray¹, Norbert Kajler², Paul S. Wang¹

¹ Kent State University, Kent, OH 44242-4001, USA
  {sgray, [email protected]}
² Ecole des Mines de Paris, 60 Bd. St-Michel, 75006 Paris, France
  [email protected]

Abstract. There are several advantages to providing communication links between independent scientific applications. Important problems to solve include data and control integration. Solving these problems separately is an essential aspect of the design and implementation of a protocol for mathematics. The Multi Protocol (MP) specification addresses the exchange of mathematical data by focusing only on the data encoding issues. In this way, MP can be plugged into various existing data transport mechanisms addressing control integration, or augmented by a higher control-related protocol layer. Our implementation of MP is independent of the data transport mechanism and can work with several devices. An application puts/gets data to/from MP buffers which communicate with the transport device through an abstract device interface. This paper describes the general design of the interface between MP and a transport device and the lessons we have learned during its implementation.

1 Introduction and Motivation

There has been increasing interest, over the last few years, in the ability to integrate independent software packages to work cooperatively on the solution to a problem. Indeed, the design of scientific computing packages is moving away from single-image monolithic systems and toward interconnected components that can run independently (see for instance [10, 20, 9, 11] and the proceedings of PASCO'94 [18]). Advantages to this approach include:

1. Researchers gain the freedom to choose applications most suited to their needs and, with a plug-and-play style of interoperability, can experiment with different applications to determine suitability.
2. Autonomous components can be developed and maintained separately, providing access to a wealth of software resources.
3. These components can run on different platforms for convenience, better performance, or to meet license restrictions.
4. The components can be reused independently of each other.

* Work reported herein has been supported in part by the National Science Foundation under Grant CCR-9503650.

Three dimensions to the problem of tool integration are readily identifiable: data, control, and user interface integration [22, 19]. Data integration involves the exchange of data between separate tools, including the definition of a mechanism allowing the tools to share a common format (and possibly a shared understanding of the meaning of the data). Control integration concerns the establishment, management, and coordination of inter-tool communications. Finally, the aim of user interface integration is to provide the user with a logical and consistent style of interaction with each of the components of the integrated system.

Furthermore, it is important within the context of integration to be able to support different computational paradigms. Especially promising areas for research are parallel symbolic computation [7, 10] and distributed problem solving environments (PSEs) [12, 21]. Figure 1 gives a sampling of the possibilities and further illustrates that these paradigms are not mutually exclusive.

The Multi Project at Kent State is part of an ongoing research effort into the integration of software tools for scientific computing. A key philosophy of the project has been to view the dimensions of tool integration as individual problems to be solved separately, recognizing that each of these problems may have multiple solutions.

Fig. 1. Computing paradigms to be supported: parallel computation (a master initiates the work, slaves work in parallel, the master collects the results over time), a software bus (a front end connecting a CAS, GUI, compute engine, and numeric and graphics servers), and point-to-point (input flowing through phases 1-3 to the results).

The focus of our first efforts has been on data integration and has produced the Multi Protocol (MP) [15], a specification for encoding mathematical expressions for efficient communication among scientific computing systems. A library of C routines based on the MP specification has been implemented and is publicly available. In implementing MP we had two goals in mind with respect to computing paradigms (we use "MP" here to refer to both the specification and the implementation):

1. That we be able to transmit MP-formatted data using a variety of transport devices, promoting plug-and-play at the communication level.
2. That the integration of MP and transport devices be as seamless as possible. That is, we wanted to hide details about the transport device.

While we have had good success with the first goal, our second goal has been more elusive, and our experiences have suggested not just changes in our implementation but also a rethinking of the goals themselves. This paper describes the lessons learned while trying to do control integration in a generic way through an abstract device interface contained in a separate layer of software.

2 MP Overview

The MP specification focuses on the efficient encoding of mathematical expressions and numerical data. The major design and implementation decisions made in MP are given in [14] and [15] (versions 0.5 and 1.0 of MP, respectively). In short, MP defines a set of basic types and a mechanism for constructing structured data. Numeric data (fixed and arbitrary precision floats and integers) are transmitted in a binary format (2's complement, IEEE float, etc.). Expressions are represented as linearized, annotated syntax trees (MP trees) and are transmitted as a sequence of node and annotation packets, where each node packet transmits a node from the expression's syntax tree. The node packet has fields giving the type of the data carried in the packet, the number of children (for operators) that follow, the number of annotations, some semantic information, and the data itself. Operators and constants which occur frequently have an optimized encoding and are known as "common". Annotations efficiently carry additional information which is either supplementary and can be safely ignored by the receiver, or essential to the proper decoding of the data. In either case, each annotation is tagged in such a way that the receiver always knows whether it can safely ignore the annotation content or not.

In a layer above the data exchange portion of the protocol, MP supports collections of definitions for annotations and mathematical symbols (operators and symbolic constants) in dictionaries. Dictionaries address the problem of application heterogeneity by supplying a standardized representation and semantics for mathematical objects. They are identified within packets through a dictionary tag field. Applications that communicate according to definitions provided in dictionaries do not need to have direct knowledge of each other, promoting a plug-and-play style of inter-operation at the application level. A dictionary for polynomials has been built on top of MP in this manner [3].

Applications send and receive messages containing one or more MP trees which are created by calling routines from the MP Application Programming Interface (API). Logically, the format and representation of the data are completely separate from the mechanism used to transmit it. We have maintained this separation in the implementation of MP and thus are able to support a diverse collection of delivery mechanisms.
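The node packet layout described at the start of this section can be pictured as a small C structure. This is a sketch only: the field names and widths are illustrative assumptions, not the actual MP wire format.

    #include <stddef.h>

    /* Illustrative layout of a node packet; names and sizes are assumptions. */
    typedef struct NodePacket {
        unsigned char   type;            /* basic type of the value in this node     */
        unsigned char   num_annotations; /* annotation packets that follow           */
        unsigned short  num_children;    /* child nodes that follow (for operators)  */
        unsigned short  dictionary_tag;  /* dictionary giving the symbol's semantics */
        unsigned char   semantics;       /* additional semantic information          */
        unsigned char  *data;            /* the value itself (operator name, number) */
        size_t          data_len;        /* length of data in bytes                  */
    } NodePacket;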

3 MP Links and Device Independence

Within MP, an application communicates with other applications through an MP link, which is simply an abstraction of an underlying data transport mechanism that is bound to the link at the time of its creation. The link sits on top of a transport device. Data is exchanged between the link and the transport through an abstract device interface. This section discusses the issues involved in maintaining device independence.

3.1 MP Links

A link is created within an MP environment. The environment contains a set of resources to be shared by a collection of links created within an application, including a list of the transport devices available to links and a pool of memory buffers. The programmer may customize the environment by setting options that reset the buffer size and determine how many buffers are initially in the pool. As illustrated in Fig. 2, a link has two layers: a transport layer and a buffer layer. The buffering layer lies between the MP API and the transport mechanism and is where messages are assembled and disassembled.

Fig. 2. Link connection organization: on each side, the application layer calls MP_Put()/MP_Get() on a link whose buffer layer (send and receive buffers) sits above a transport layer bound to the shared transport mechanism.

The programmer's view is that each put and get operation accesses a link connecting the application with one or more other applications. In reality, all accesses are to one of two buffers: puts access a link's send buffer and gets its receive buffer. These buffers are implemented as a linked list of buffer segments.

Information is exchanged between applications in the form of messages. A message is composed of one or more message fragments, where each fragment corresponds to the data in a single buffer segment. For some transport devices, the sending side has the option of building an entire message in the link's send buffer before actually sending it, or of sending each buffer segment as it becomes full; the receiving side has the analogous choice. This allows the sender and receiver to overlap I/O with processing.

A sending application builds a message through a series of puts. The sender marks the end of a message with a call to MP_EndMsg(), which sets a bit in the message segment header indicating that this is the last fragment in a complete message and then flushes the link's send buffer to the underlying transport mechanism. One or more MP trees may be contained within a single message. A receiving application parses the incoming message with a series of gets. The receiver must perform some message alignment by calling MP_SkipMsg() before reading in each message. It is the receiver's responsibility to ensure that the entire contents of a message are consumed before proceeding to the next message. This can be done with a call to MP_TestEofMsg(), which returns true if the end of the current message has been reached. A useful convention is to transmit a single expression per message. This provides a level of error recovery and allows the receiving side to efficiently skip to the end of an expression it cannot process.

A link is created and initialized with a call to MP_OpenLink(MP_Env_pt env, int argc, char *argv[]), which returns a pointer to an MP link structure on success and NULL on failure. Arguments passed in the argv vector identify the transport to use, what mode to open it in, and so on. These arguments may be given on the command line when the application is launched, hard coded within the application, or, for interactive applications, provided by user input.
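The calling sequence just described can be sketched as follows. Only MP_OpenLink(), MP_EndMsg(), MP_SkipMsg(), and MP_TestEofMsg() are taken from the text above; the mp.h header name, the MP_Link_pt type, the argument conventions of the message routines, and the put/get helpers are assumptions.

    #include "mp.h"   /* assumed header name for the MP library */

    /* Hypothetical helpers standing in for the type-specific MP put/get
     * routines, which are not enumerated in this paper. */
    extern void put_expression(MP_Link_pt link);
    extern void get_next_item(MP_Link_pt link);

    /* Sender (sketch): open a link, put one expression, end the message. */
    void send_one_expression(MP_Env_pt env, int argc, char *argv[])
    {
        MP_Link_pt link = MP_OpenLink(env, argc, argv); /* transport chosen via argv */
        if (link == NULL)
            return;                                     /* no transport could be bound */
        put_expression(link);    /* a series of MP puts builds the message */
        MP_EndMsg(link);         /* mark the last fragment and flush the send buffer */
    }

    /* Receiver (sketch): align on the next message and consume it completely. */
    void receive_one_message(MP_Link_pt link)
    {
        MP_SkipMsg(link);                 /* message alignment before each message */
        while (!MP_TestEofMsg(link))
            get_next_item(link);          /* a series of MP gets parses the message */
    }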

3.2 The Abstract Device Interface

Transmitting data generally consists of two steps: marshalling (or packing) the data, producing a linearized encoding of the data suitable for transmission, and sending the data. Receiving follows this process in reverse. Communication packages that can send and receive an array of bytes are candidates for transmitting MP trees. On the sending side, data from the MP send buffer is presented to the device as a byte array. Similarly, when receiving, the MP buffer software simply requests an array of bytes up to the limit of the MP buffer segment size. To maintain flexibility and support different communication packages as transport systems for MP, these operations are provided by the transport layer through an Abstract Device Interface (ADI). A transport device is represented by a generic transport device structure which contains fields common to all devices, including a device operations structure, and a pointer to an opaque structure that is specific to the transport device bound to the link.

The operations in the device interface are:

1. dev_open_connection() - Allocate memory for the device structure, assign the appropriate device operations structure, and perform device-specific open operations to make a connection.
2. dev_close_connection() - Break down the connection and release all memory allocated for the device structure.
3. dev_read() - Read a specified number of bytes from the device into the link's receive buffer. This routine is called by the buffering layer when the application has attempted to get a data item and the link's receive buffer is empty.
4. dev_write() - Write a specified number of bytes from the link's send buffer to the device. This routine is invoked by the buffering layer either to empty the link's send buffer to free it for another message fragment, or when the message has been completed and needs to be transmitted.
5. dev_get_status() - Determine the status of the device.

The intended behavior of this interface made three assumptions about the capabilities of the actual transport device:

1. The device provides buffering of the messages it delivers, allowing the MP buffering layer to read and write its messages in fragments.
2. The read/write routines in the ADI simply need to move data between the device's buffers and MP's buffers.
3. The source and destination information can be easily kept in the transport-specific device structure.

Under these assumptions, adding a new transport requires providing only the device-specific structure and the functions that implement the operations in the abstract device interface. Figure 3 shows the relationship between the MP API and the operations in the ADI. As we will see, the degree to which this can be done seamlessly varies with the characteristics of the device. Additional interface routines were sometimes necessary.
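A minimal C sketch of what such a generic device structure and its operations table might look like is given below; the type and field names are illustrative assumptions, not the actual MP declarations.

    /* Illustrative sketch of the generic transport device structure. */
    struct Device;   /* forward declaration */

    /* The five entry points of the Abstract Device Interface. */
    typedef struct DeviceOps {
        int  (*open_connection)(struct Device *dev, int argc, char *argv[]);
        int  (*close_connection)(struct Device *dev);
        long (*read)(struct Device *dev, char *buf, long len);        /* fill MP recv buffer  */
        long (*write)(struct Device *dev, const char *buf, long len); /* drain MP send buffer */
        int  (*get_status)(struct Device *dev, int status_request);
    } DeviceOps;

    /* Generic device: fields common to all transports plus an opaque pointer
     * to transport-specific state (a descriptor, PVM tids, ToolBus terms, ...). */
    typedef struct Device {
        const DeviceOps *ops;          /* device operations structure */
        void            *private_data; /* transport-specific structure */
    } Device;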

Fig. 3. MP API to device operations: MP_OpenLink() maps to dev_open_connection(), MP_CloseLink() to dev_close_connection(), MP_GetLinkStatus() to dev_get_status(), and MP_SkipMsg()/MP_Get() and MP_Put()/MP_EndMsg() reach dev_read() and dev_write() through the buffer layer.

4 Current Transports

This section briefly describes the transport interfaces currently supported by MP. These devices represent the range of computing paradigms illustrated in Fig. 1. The devices discussed here also provide contrasting features and requirements, and demonstrate that the ADI is no panacea: it cannot elegantly solve all the problems set out for it.

4.1 Files and TCP Sockets

The FILE and TCP transport devices are the simplest and adhere most closely to the blueprint for a generic transport device. Both rely on a Unix I/O descriptor for access to the "device", and use standard Unix system calls to open, close, read, and write on the descriptor. The read/write calls block, but it is possible to test for device readiness first through a call to the MP_GetLinkStatus() routine.

Discussion. These devices meet all the assumptions we made for the abstract device interface. The reason is not difficult to see: as a system that communicates via messages, the MP buffering layer fits most naturally with a stream-oriented device that provides its own buffering. We expect that a shared memory device would also fall into this category. The FILE device is useful for archiving and was also used in some early experimentation with systems that originally communicated only through files. The TCP device provides reliable, ordered delivery of data for point-to-point communication. The best throughput is gained when the MP environment's buffer option is used to reset the buffer segment size to that of the TCP packet size; a smaller segment size favors better response times. We have used these devices most extensively. Both were used in our experiments to connect Maxima with the graphing package IZIC [2] and to provide point-to-point communication between the packages Singular, Mathematica, and factory [3].
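For these descriptor-based devices, the ADI read and write routines reduce to thin wrappers around the Unix system calls. The sketch below assumes the hypothetical Device structure from Section 3.2, with the descriptor kept in the transport-specific state; the loop in the write routine covers partial writes.

    #include <errno.h>
    #include <unistd.h>

    /* Transport-specific state for the FILE and TCP devices: a Unix descriptor. */
    typedef struct FdState { int fd; } FdState;

    /* Write len bytes from the link's send buffer to the descriptor.
     * write() may transfer fewer bytes than asked, so loop until done. */
    static long fd_dev_write(struct Device *dev, const char *buf, long len)
    {
        FdState *st = (FdState *)dev->private_data;
        long done = 0;
        while (done < len) {
            ssize_t n = write(st->fd, buf + done, (size_t)(len - done));
            if (n < 0) {
                if (errno == EINTR) continue;   /* interrupted, retry */
                return -1;                      /* real error */
            }
            done += n;
        }
        return done;
    }

    /* Read up to len bytes from the descriptor into the link's receive buffer. */
    static long fd_dev_read(struct Device *dev, char *buf, long len)
    {
        FdState *st = (FdState *)dev->private_data;
        ssize_t n;
        do {
            n = read(st->fd, buf, (size_t)len);
        } while (n < 0 && errno == EINTR);
        return (long)n;   /* 0 means end-of-file, -1 an error */
    }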

4.2 PVM

The Parallel Virtual Machine (PVM) [13] is a message-passing system that makes a collection of computers (workstations and parallel machines) appear to the user as a single multiprocessor system. When used in conjunction with PVM, an MP link is an endpoint for communication with one or more other processes. In a typical PVM application, a single master spawns one or more identical slave tasks which are assigned work by, and return their results to, the master. Tasks are identified through PVM-assigned task identifiers (tids). The user's data is packed into the PVM send buffer and, when complete, is sent to one or more recipients who are identified in a list of tids. Messages may also be tagged so that the receiver can specify exactly which messages it is willing to accept. The receiver performs a receive operation to read data into the PVM receive buffers, from which the application may unpack the data as needed into the user's data structures. Arguments in the receive routine allow the receiver to specify the kind of message it will accept and from which source.

Because PVM is quite flexible with regard to the packing/unpacking of data, it fits fairly nicely into our abstract device model. PVM allows data to be packed in pieces; that is, there is no requirement that all the data be ready and packed into PVM buffers in a single call. Instead, the sender can pack data items individually until the message is complete and then issue a send command, which transmits the entire message. Moving the data from a link's send buffer to the PVM buffer is done with the pvm_pkbyte() routine, which treats its data simply as an array of bytes; this call amounts to a memcpy() from the MP buffer space to the PVM buffer space. On the receiving side, the transport read routine performs both the receive and unpack operations. A receive call reads an entire message into the PVM read buffer, and the unpack routine pvm_upkbyte() moves the requested number of bytes from the PVM buffer to the MP receive buffer.

Discussion. PVM's ability to do packing (unpacking) independently of sending (receiving) simplifies the interface and supplies the behavior expected by the buffering layer, allowing MP trees to be exchanged in fragments. However, several factors complicate fitting PVM seamlessly into the abstract device interface. First, typically a master spawns a set of slave processes and the connection between them is established by the PVM daemon, so there is little work for the device open and close routines to do. Second, PVM supports send/receive routines for point-to-point (one-to-one), multicast (one-to-many), and broadcast (one-to-all) communication, so the device interface write/read routines should be able to determine at execution time which is appropriate. Third, the PVM send routines take an argument specifying the intended receiver; similarly, the receiver may specify the source from which it expects data. These tids could be placed inside the PVM device structure by an auxiliary routine in the interface and accessed by the device write/read routines. Finally, recall that messages may be tagged; a tag could also be stored in the PVM device structure. But keeping this kind of information in the device structure is really only viable as long as it does not have to change often. If an application uses different sets of tids or message tags, having to constantly reset them in the device structure is awkward and, for regular PVM users, unnatural.
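A minimal sketch of the PVM device's write and read routines follows, assuming the hypothetical Device structure sketched earlier, a PvmState holding the peer tid and message tag, point-to-point communication only, and that each PVM message fits within one MP buffer segment (the actual routines also handle multicast and incremental unpacking).

    #include <pvm3.h>

    /* Hypothetical transport-specific state: the peer task id and message tag. */
    typedef struct PvmState { int peer_tid; int msgtag; } PvmState;

    /* Pack len bytes from the link's send buffer and ship them as one PVM message. */
    static long pvm_dev_write(struct Device *dev, const char *buf, long len)
    {
        PvmState *st = (PvmState *)dev->private_data;
        pvm_initsend(PvmDataRaw);              /* fresh PVM send buffer */
        pvm_pkbyte((char *)buf, (int)len, 1);  /* essentially a memcpy into PVM space */
        if (pvm_send(st->peer_tid, st->msgtag) < 0)
            return -1;
        return len;
    }

    /* Receive one PVM message and unpack at most len bytes into the MP buffer. */
    static long pvm_dev_read(struct Device *dev, char *buf, long len)
    {
        PvmState *st = (PvmState *)dev->private_data;
        int bufid, nbytes, msgtag, tid;
        bufid = pvm_recv(st->peer_tid, st->msgtag);   /* blocking receive */
        if (bufid < 0)
            return -1;
        pvm_bufinfo(bufid, &nbytes, &msgtag, &tid);   /* size of the arrived message */
        if (nbytes > len)
            nbytes = (int)len;                        /* simplification, see lead-in */
        pvm_upkbyte(buf, nbytes, 1);
        return nbytes;
    }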

4.3 ToolBus

ToolBus [4] is one of several new software bus architectures that can be used to glue independent software components together (either on a single machine or over a network). Like ToolTalk [24], ToolBus requires that all data travel through a central manager process, but it is unique in that it uses a script to completely define and control the set of interactions that can occur between tools connected by the bus. Data is exchanged between tools in the form of typed terms; among these is a binary type which treats the data as an array of bytes. Through patterns in a script, a tool specifies what kinds of service requests it can carry out and which types of terms it will accept. Tools open ports to the toolbus through which they exchange terms. Each port has a handler associated with it that is invoked whenever a term arrives on the corresponding port. The handler's job is to determine if the incoming request matches any of the services offered by the tool and, if so, to invoke the routine that provides the service. The service routine returns a (possibly NULL) term as its result, which the handler passes back to the toolbus (and from there possibly to other tools).

The user's data is packed into a term by TBmake(), which takes a scanf-like argument list containing a pattern describing the data to be packed in the term. The term is sent to the toolbus either by returning it as the value of the handler function or by making an explicit call to TBsend(). It is also possible to incrementally build a recursive list of terms stored within a single term; we use this feature to pack a link's send buffer fragments into a single toolbus term. Unpacking terms is done through TBmatch(), which takes the incoming term, a pattern against which to match the term, and the addresses of the program's variables where the data from the term will be unpacked (if the match succeeds).

ToolBus required more work to adapt to the ADI paradigm than the other transports we have used, but with the addition of some auxiliary routines it fit very nicely (we anticipate that other software bus packages such as ToolTalk [24] will present similar issues). The device write routine simply packs the link's send buffer fragments into a special term (write_term, a recursive list of binary type terms) stored in the device structure. When the sending tool is ready to send the completed message, an auxiliary routine provides access to write_term, which the sender can then return to the toolbus. This was simple to implement, but requires that the sender build a complete message (MP tree) in its memory before transmitting (so sender and receiver cannot overlap I/O and processing), and requires more memory copies.

The receiving side was slightly more complicated. To meet the requirements of the MP buffering layer, a simple buffering layer had to be incorporated into the device read routine to move the MP data packed in the incoming ToolBus term to a buffer fragment in the link's receive buffer. The auxiliary routine MP_SetTbTerm() places the incoming term into read_term (the MP link's reading-side complement to write_term). Recall that this term is a recursive list of binary type terms, each of which stores a single sender buffer fragment. The device read routine unpacks a single binary term (sender buffer fragment) from read_term into a buffer kept in the device structure. Requests for data made by the MP buffering layer are satisfied from this device buffer. When the device's buffer is empty, another binary term (sender buffer fragment) is unpacked from read_term.

Discussion. Our experience is that the auxiliary routines do not complicate the interface and that, despite the additional memory copies, performance is quite good. Certainly the cost of the extra copies does not outweigh the advantages gained by the connectivity that is provided.

4.4 Mail and MIME

MP trees can also be sent using the Multipurpose Internet Mail Extensions (MIME) standard [5], which allows textual and non-textual data to be transported together through the network by SMTP (Simple Mail Transfer Protocol) without loss of information. At first appearance, mail would not seem to fit any computing paradigm. However, the reader program need not simply be a display or conversion package; it could be associated with, for example, a computation package. In this scenario, the reader would queue the incoming requests, taking note of the address of the sender, launch an application (possibly one of a selection of computation tools) to service the request, and return an MP-formatted result in another MIME message. This model fits the point-to-point paradigm nicely, but it also fits the multicast paradigm if the request is sent to multiple servers.

Discussion. The MIME standard defines several new header fields, including Content-Type to specify the type and subtype of the data that follows, and Content-Transfer-Encoding to specify the encoding used to make the message contents acceptable to SMTP. In our use, we specify the application content type with an mp subtype and base64 as the encoding type. Our .mailcap customization file specifies a reader program for processing MP trees sent via email. Currently we simply use one of the MP test routines to write a human-readable version of the MP tree to a file; it is a simple matter to write a more elaborate display program to format the output on the screen. Also, the MP content received via email is readable directly by any MP-compliant compute engine.
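As an illustration only (the exact header values are not given in this paper; "application/mp" is our reading of "an mp subtype", and the reader program name in the mailcap entry is hypothetical), such a message would be framed roughly as:

    Content-Type: application/mp
    Content-Transfer-Encoding: base64

    ...base64-encoded MP message fragments...

and a .mailcap entry routing such messages to a reader program might read:

    application/mp; mpreader %s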

5 Related Work

MathLink [25] is a commercial mathematics protocol packaged with Mathematica. Although it has a set of Mathematica-specific routines, it is a general protocol that can be used independently of Mathematica. MathLink's notion of a link is very similar to MP's: at the time a link is created, a transport device is bound to it. Currently MathLink supports several built-in devices: TCP sockets (Macintosh and Unix), Unix pipes, the Macintosh System 7 program-to-program (PPC) communication mechanism, and, under Macintosh System 6, the AppleTalk Data Stream Protocol (ADSP). At this time MathLink is the only other implementation of a mathematics protocol with this kind of flexibility.

ASAP (A Simple ASCII Protocol) [8] is a public domain mathematics protocol. It uses a pair of TCP socket connections, called channels, to establish a point-to-point connection between two processes. One socket is dedicated to the data channel for transmitting expressions. The other is dedicated to the urgent channel for handling exceptional conditions and uses the Unix SIGIO signal to interrupt the receiving process.

The POlynomial System SOlver (PoSSo) project includes a protocol, PossoXDR [1], defining the external representation of the data types manipulated inside PoSSo processes. The encoding extends the XDR technology [23] with some new types and constructors, and by tagging each data element with its type. PossoXDR communicates through a TCP socket or a file.

6 Lessons Learned and Future Work

Clearly some devices fit into our device abstraction more neatly than others. But it is very important to remember that the MP put and get routines an application uses to send and receive MP data are completely indifferent to which transport device is used to communicate the data. Indeed, the development and testing of these devices was done with the same suite of test routines, and the only changes made were isolated to the few incompatibilities described for PVM and ToolBus. The clear advantage of this approach is that by focusing on the MP API, one can reuse most of the interface code developed for an application when plugging in a different transport, protecting the initial investment in the interface. This means, for example, that the interface we developed for Singular [16] may be largely reused when plugging it in as a specialized server (in a distributed PSE, or in a point-to-point connection to augment another system, as we did with Mathematica) or as a subsystem for parallel computation (as we want to do with PVM for parallel Gröbner bases computations).

Our initial goal was to be able to plug different transport devices into MP. Our perspective was that of a programmer who is familiar with MP and simply wants to use some specific device as a data transport mechanism. In such a case, we want to assume knowledge of MP and offer transparent use of the selected device. But there is a second perspective to consider, that of a programmer accustomed to the particulars of the transport device who wants to use MP for transmitting mathematical data. In this second case, we want to assume knowledge of the transport device and its interface, and little or no knowledge of MP. This usually requires extending the device's API with a series of functions built on top of the MP API (and trying to make these functions as coherent as possible with the functions in the device's API). For example, we have pvm_mp_pkList() and pvm_mp_pkPoly() to pack a list and a polynomial, respectively, for PVM-MP. But for systems such as ToolTalk that have no notion of data types and simply provide the delivery of messages marshalled using an independent mechanism, the issue of whether to provide an MP-style or package-style API is largely irrelevant. How best to resolve these issues is not immediately clear and is a subject of ongoing investigation.

The flexibility gained through our approach opens a wide range of possibilities. We are especially interested in pursuing two areas.

1. Parallel distributed symbolic computation. When done within the context of p4 [6], MPI [17], or PVM, this work is readily portable to tightly-coupled, shared memory machines such as the T3D, which have optimized implementations of both MPI and PVM. In this scenario, the applications typically know each other quite well, allowing data exchange performance optimizations.
2. Distributed problem solving environments. We want to explore using MP in conjunction with software buses such as ToolBus and ToolTalk to integrate symbolic, numeric, and graphics processing. Clearly the challenges here are greater: exchanging data is more complicated, as are the MP-application interfaces. Solving the problems inherent in this kind of integration should be useful for designing and implementing larger scale problem solving environments.

We continue to add new transport devices and to work on cross-platform communication. In particular, the point-to-point mechanism should translate well to other platforms (Macintosh PPC, Windows WinSocket). Work is underway on a shared memory device for Unix-based machines.

In our opinion (see [15]), commands providing inter-tool control should be kept outside the definition of a mathematics protocol. This includes being able to stop a remote computation, get the status of a remote computation, determine if a link is still alive, and so on. Clearly, such commands are not provided by MP. Instead, they are expected to be available from the device used in conjunction with MP, at least when such commands make sense (which is not always the case; see MIME, for instance). Still, we may provide such a capability as part of a separate, higher software layer, to be used with the core of MP when no other technology provides its own control mechanism. Along these lines we want to enhance the MP_GetLinkStatus() routine to accept requests for those control aspects that relate to the device itself and to have the device-specific dev_get_status() routine handle them as provided for by the device.

7 Conclusion

A key enabling technology for distributed and parallel mathematical computation is a standard protocol for exchanging mathematical objects. MP is our attempt to contribute to a standard, non-proprietary mathematics protocol. An essential feature of MP's design and implementation is its independence from the mechanism used to transmit the data. When designing MP, we focused on not interfering with communication-level issues in order to ensure the highest possible degree of "pluggability" for our protocol. So, instead of solving both data and control problems inside a single unique protocol, we decided to address the data encoding aspect only and to make sure that the resulting technology fit well inside a variety of communication and computing paradigms.

At the implementation level, MP attempts to provide this independence by communicating through an abstract device interface which is mapped to an actual device structure at the time a transport link is created. Some devices map well to this abstract interface, while others require auxiliary routines. We believe this approach allows a better separation of problems and leads to a highly pluggable protocol which can be used together with well-known communication and computing technologies such as files, TCP sockets, mail, ToolBus, or PVM, and hopefully others such as CORBA, ToolTalk, or the WWW.

8 Availability

The source for the MP library is available from ftp.mcs.kent.edu in /pub/MP. Also see http://SymbolicNet.mcs.kent.edu/areas/protocols/mp.html.

Acknowledgments

The authors would like to thank Olaf Bachmann and Hans Schönemann for their insightful comments on earlier drafts of this paper.

References

1. J. Abbott and C. Traverso. Specification of the POSSO External Data Representation. Technical report, September 1995.
2. R. Avitzur, O. Bachmann, and N. Kajler. From Honest to Intelligent Plotting. In A. H. M. Levelt, editor, Proc. of the International Symposium on Symbolic and Algebraic Computation (ISSAC'95), Montreal, Canada, pages 32-41. ACM Press, July 1995.
3. O. Bachmann, H. Schönemann, and S. Gray. A Framework for Distributed Polynomial Systems Based on MP. To appear in the Proceedings of ISSAC'96.
4. J. A. Bergstra and P. Klint. The Discrete Time ToolBus. Technical Report P9502, Programming Research Group, University of Amsterdam, 1995.
5. N. Borenstein and N. Freed. MIME (Multipurpose Internet Mail Extensions) Part One: Mechanisms for Specifying and Describing the Format of Internet Message Bodies. RFC 1521, September 1993.
6. R. Butler and E. Lusk. Monitors, messages, and clusters: the p4 parallel programming system. Parallel Computing, 1994.
7. G. Cooperman. STAR/MPI: Binding a Parallel Library to Interactive Symbolic Algebra Systems. In A. H. M. Levelt, editor, Proc. of the International Symposium on Symbolic and Algebraic Computation (ISSAC'95), Montreal, Canada, pages 126-132. ACM Press, July 1995.
8. S. Dalmas, M. Gaetano, and A. Sausse. ASAP: a protocol for symbolic computation systems. INRIA Technical Report 162, March 1994.
9. M. C. Dewar. Manipulating Fortran Code in AXIOM and the AXIOM-NAG Link. In H. Apiola, M. Laine, and E. Valkeila, editors, Proceedings of the Workshop on Symbolic and Numeric Computing, pages 1-12. University of Helsinki, Finland, 1994. Available as Technical Report B10, Rolf Nevanlinna Institute.
10. A. Diaz, E. Kaltofen, K. Schmitz, T. Valente, M. Hitz, A. Lobo, and P. Smyth. DSC: A System for Distributed Symbolic Computation. In S. M. Watt, editor, Proc. of the International Symposium on Symbolic and Algebraic Computation (ISSAC'91), Bonn, Germany, pages 323-332. ACM Press, July 1991.
11. Y. Doleh. SUI: A System Independent User Interface for an Integrated Scientific Computing Environment. PhD thesis, Kent State University, May 1995.
12. E. Gallopoulos, E. Houstis, and J. Rice. Computer as Thinker/Doer: Problem-Solving Environments for Computational Science. IEEE Computational Science and Engineering, pages 11-23, 1994.
13. A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. PVM3 User's Guide and Reference Manual. Technical Report ORNL/TM-12187, Oak Ridge National Laboratory, September 1994.
14. S. Gray, N. Kajler, and P. S. Wang. MP: A Protocol for Efficient Exchange of Mathematical Expressions. In M. Giesbrecht, editor, Proc. of the International Symposium on Symbolic and Algebraic Computation (ISSAC'94), Oxford, GB, pages 330-335. ACM Press, July 1994.
15. S. Gray, N. Kajler, and P. S. Wang. Design and Implementation of MP, a Protocol for Efficient Exchange of Mathematical Expressions. 1996. Forthcoming in Journal of Symbolic Computation.
16. G.-M. Greuel, G. Pfister, and H. Schönemann. Singular: A system for computation in algebraic geometry and singularity theory. University of Kaiserslautern, Dept. of Mathematics, 1995. Available via anonymous ftp from helios.mathematik.uni-kl.de.
17. W. Gropp, R. Lusk, and A. Skjellum. Using MPI. MIT Press, 1994.
18. H. Hong, editor. Proc. of the 1st Intl. Symp. on Parallel Symbolic Computation (PASCO'94), volume 5. World Scientific, September 1994.
19. N. Kajler. Building a Computer Algebra Environment by Composition of Collaborative Tools. In J. P. Fitch, editor, Proc. of DISCO'92, Bath, GB, volume 721 of LNCS, pages 85-94. Springer-Verlag, April 1992.
20. N. Kajler. CAS/PI: a Portable and Extensible Interface for Computer Algebra Systems. In P. S. Wang, editor, Proc. of the International Symposium on Symbolic and Algebraic Computation (ISSAC'92), Berkeley, USA, pages 376-386. ACM Press, July 1992.
21. J. Rice. Scalable Scientific Software Libraries and Problem Solving Environments. Technical Report CSD TR-96-001, Department of Computer Science, Purdue University, January 1996.
22. D. Schefström and G. van den Broek, editors. Tool Integration. Wiley Press, 1993.
23. Sun Microsystems, Inc., Mountain View, CA. Network Programming Guide (revision A), 1990. Part number 800-3850-10.
24. SunSoft Press. The ToolTalk Service: An Interoperability Solution. 1992.
25. Wolfram Research, Inc. MathLink Reference Guide (version 2.2). Mathematica Technical Report, 1993.

This article was processed using the LaTeX macro package with LLNCS style.