Algebraic Semantics of Coordination(*)
or, what is in a signature?

J.Fiadeiro‡ and A.Lopes
Department of Informatics, Faculty of Sciences, University of Lisbon,
Campo Grande, 1700 Lisboa, PORTUGAL
{llf,mal}@di.fc.ul.pt

Abstract. We propose an algebraic characterisation of the notion of coordination in the sense of recently proposed languages and computational models that provide a clear separation between the modelling of individual software components and their interaction in the overall software organisation. We show how this separation can be captured in Goguen’s categorical approach to General Systems Theory and borrow examples from specification logics, program design languages, mathematical models of behaviour, and coordination languages to illustrate the applicability of our algebraic characterisation.

1  Introduction

Several recently proposed languages and computational models, e.g. those discussed in [4], support the separation between what, in the definition of a system, is responsible for its computational aspects and what is concerned with coordinating the interaction between its different components. As explained in [12]: "(A) computation model allows programmers to build a single computational activity: a single-threaded, step-at-a-time computation; (a) coordination model is the glue that binds separate activities into an ensemble". The clean separation that is achieved between individual software components and their interaction in the overall software organisation makes large applications more tractable, supports global analysis, and enhances reuse of software. Hence, it is not surprising that the significance of this original work on “coordination languages” has now been recognised in areas of Software Engineering concerned with system configuration and architectural description languages [11].
In this paper, we show how the separation between computation and coordination can be captured in the framework of Goguen’s categorical approach to General Systems Theory [13]. We capitalise on our previous work on the formalisation of architectural principles in software design [7], which was based on a formal notion of “coordination” that we wish to revise, motivate, discuss and put forward in a more comprehensive way. Examples are drawn from specification logics, concurrency models, parallel program design languages and, of course, coordination languages.

(*) This work was partially supported by LMF-DI (PUC-Rio, Brazil) through the ARTS project, and through PRAXIS XXI contracts PCSH/OGE/1038/95 (MAGO), 2/2.1/TIT/1662/95 (SARA) and PCEX/P/MAT/46/96 (ACL). ‡ On leave at Laboratório de Métodos Formais, Departamento de Informática, Pontifícia Universidade Católica do Rio de Janeiro, Brazil, as a grantee of CNPq (Brazil) and ICCI (Portugal).


2  Coordination in the context of General Systems

We start by illustrating the categorical framework that we have been adopting for modelling the development of complex systems, and then motivate the formalisation of “coordination”.

2.1  The categorical approach to systems modelling – an example

The basic motto of the categorical approach to systems, as explained in [13], is that morphisms can be used to express interaction between components, so that "given a category of widgets, the operation of putting a system of widgets together to form some super-widget corresponds to taking the colimit of the diagram of widgets that shows how to interconnect them" [15]. As shown in [5,16], these categorical principles can be used to formalise process models for concurrent systems such as transition systems, synchronisation trees, event structures, etc. We shall illustrate the approach using a trace-based model.
A process alphabet is a finite set, and a process is a pair ⟨A,Λ⟩ where A is an alphabet and Λ⊆(2^A)^ω is the language of the process, (2^A)^ω denoting the set of infinite sequences over 2^A. The alphabet models the set of actions in which the process may involve itself. Each sequence of events in the language of the process captures a possible behaviour of the process, where each event consists of a set of actions that occur concurrently during that event. The empty set of actions models an event of the environment.
We take a morphism of process alphabets to be a total function, and a process morphism f:⟨A1,Λ1⟩→⟨A2,Λ2⟩ to be an alphabet morphism f:A1→A2 such that, for every λ∈Λ2, f⁻¹(λ)∈Λ1, where f⁻¹(λ)(i)=f⁻¹(λ(i)). The idea is that a morphism of processes captures the relationship that exists between a system (the target of the morphism) and any of its components (the source). That is, every morphism identifies a component within a system. Hence, an alphabet morphism identifies each action of a component with an action of the system. Each such morphism f defines a contravariant mapping f⁻¹:2^A2→2^A1 between the sets of events associated with each process. That is, each event in the life of the system is mapped to an event in the life of the component. The empty event arises when the component is not involved in that specific event of the system, which then acts as the environment for the component. Finally, each behaviour of the system is mapped to one of the possible behaviours of the component.
Diagrams express how a complex system is put together through the interconnection of simpler components. The colimit of such a configuration diagram returns the process that results from the interconnection. The simplest configuration diagram expresses the interconnection between two components via a third one:

               ⟨A,Λ⟩
           f1 ↙     ↘ f2
     ⟨A1,Λ1⟩           ⟨A2,Λ2⟩




The colimit (pushout) of this diagram is calculated as follows: the pushout (amalgamated sum) of the underlying diagram of process alphabets is calculated, returning

    A1            A2
       g1 ↘    ↙ g2
            A’

The alphabet A’ is obtained from the disjoint union of A1 and A2 through the quotient that results from the equivalence relation generated by the set of all pairs ⟨f1(a),f2(a)⟩ where a∈A. That is to say, each action of A establishes a synchronisation point for the component processes ⟨A1,Λ1⟩ and ⟨A2,Λ2⟩. The resulting process is then calculated over the alphabet thus computed by taking the intersection of the inverse images of the component behaviours:

    Λ’ = {λ∈(2^A’)^ω : g1⁻¹(λ)∈Λ1 and g2⁻¹(λ)∈Λ2}

That is to say, the system thus configured can execute all the actions that its components can, subject to the synchronisations specified by the interconnection, and exhibits the behaviours that are allowed by both its components.
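This amalgamated-sum construction is easy to make concrete. The following Python sketch (ours, not part of the formal model; it works with finite trace prefixes rather than the infinite sequences used above) computes the pushout alphabet A’ and checks the resulting language condition:

    # A hypothetical finite-prefix sketch of the pushout of processes.
    # Alphabets are Python sets; f1: A -> A1 and f2: A -> A2 are dicts.

    def pushout_alphabet(A, A1, A2, f1, f2):
        """Quotient of the disjoint union of A1 and A2 by the equivalence
        generated by the pairs (f1(a), f2(a)), a in A."""
        elems = [(1, x) for x in A1] + [(2, x) for x in A2]
        parent = {e: e for e in elems}

        def find(e):
            while parent[e] != e:
                parent[e] = parent[parent[e]]
                e = parent[e]
            return e

        for a in A:                                       # each action of A is a
            parent[find((1, f1[a]))] = find((2, f2[a]))   # synchronisation point

        g1 = {x: find((1, x)) for x in A1}                # injections into A'
        g2 = {x: find((2, x)) for x in A2}
        Aprime = set(g1.values()) | set(g2.values())
        return Aprime, g1, g2

    def inverse_image(g, event):
        """g^-1(e): the actions of the component that g maps into event e."""
        return frozenset(a for a in g if g[a] in event)

    def allowed(trace, g1, g2, in_L1, in_L2):
        """A finite trace over A' is allowed iff its inverse images are allowed
        by both components (in_L1 and in_L2 are predicates on finite traces)."""
        return (in_L1([inverse_image(g1, e) for e in trace]) and
                in_L2([inverse_image(g2, e) for e in trace]))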

2.2  Separating coordination

How can we talk about computation and coordination in the example above, and in what sense can they be separated? It seems intuitive to associate the “computational” part of the model with the set Λ of traces, in the sense that this set is what captures the local behaviour of the process. It seems also clear that interconnection between processes, the “coordination” part of the model, is achieved through the alphabets. Indeed, in the proposed model, processes interact by synchronising at designated actions identified via morphisms from what we could call channels (or points of rendez-vous). These channels correspond to the middle process that we used in the previous example.
It is not difficult to see that only alphabets are involved in interconnections. On the one hand, as we have seen, the alphabet of the process resulting from an interconnection is obtained from the pushout of the underlying diagram of alphabets. The behaviours of the processes involved do not interfere in this calculation. On the other hand, the behaviour of the middle process is not relevant for determining the behaviour of the resulting process: the interconnection is expressed, completely, in the alphabet morphisms that result from the pushout. This property is illustrated by the fact that every diagram of the form

          ⟨A,Λ⟩
       f1 ↙   ↘ f2
     P1         P2

admits the same pushouts as the diagram

       ⟨A,(2^A)^ω⟩
       f1 ↙   ↘ f2
     P1         P2

Hence, for the purposes of interconnecting processes, it is sufficient to use middle processes whose set of behaviours is the whole language. That is to say, we can identify a precise class of channels from which any interconnection can be built. Notice that the fact that such channels have the whole language for their set of behaviours means that they have no behaviour of their own, i.e. they are “passive” and just transmit signals between components. Hence, they can be identified with alphabets. In this sense, alphabets represent the coordination part of the model. They provide the interfaces over which interconnections between components are established. We can then say that coordination is separated from computation, and that the trace model we described is “coordinated over alphabets”.
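In the finite-prefix sketch given earlier this reads as follows (again our own illustration, not part of the formal model): a channel over an alphabet A is just a process that constrains nothing, so interconnecting through it imposes only the synchronisations expressed by the alphabet morphisms.

    # A channel over an alphabet A: the process <A, (2^A)^omega>, sketched as
    # the alphabet together with an "accepts every trace" predicate.

    def channel(A):
        return set(A), (lambda trace: True)

    # e.g. channel({"a", "b"}) can serve as the middle object of any
    # interconnection that synchronises on (images of) the actions a and b.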

2.3  Coordinated formalisms

Let us see how, from the example above, we can generalise a set of requirements for considering a category of systems, or abstractions of systems, to be “coordinated” over a given notion of interface. We shall take the separation between coordination and computation to be materialised through a functor mapping systems to interfaces. We require this functor to be faithful (injective over each hom-set), meaning that morphisms of systems cannot induce more relationships between systems than between their underlying interfaces.
Consider given a category SYS (of systems) and a category INT (of interfaces) together with a faithful functor int:SYS → INT. Which properties should we require of int that make SYS coordinated over INT? Basically, we have to capture the fact that any interconnection of systems is established via their interfaces. Part of this intuition can be expressed by the property that

    The coordination functor int lifts colimits.

That is to say, given any diagram dia:I → SYS and colimit (int(Si)→C)i:I of (dia;int), there exists a colimit (Si→S)i:I of dia such that int(Si→S)=(int(Si)→C). In other words, if we interconnect system components through a diagram, then any colimit of the underlying diagram of interfaces can be lifted to a colimit of the original diagram of system components.
There are two aspects in this requirement that should be noted. On the one hand, lifting means that if the configuration of interfaces is “viable”, in the sense that it has a colimit (i.e. gives rise to a system), then so is the corresponding configuration of components. Indeed, although the category of processes defined in 2.1 is cocomplete, meaning that every diagram admits a colimit and, hence, represents the configuration of a “real system”, not every category of systems needs to satisfy this property. The requirement that int lifts colimits means that the computations assigned to the components cannot interfere with the viability of the underlying configuration of interfaces. On the other hand, lifting of colimits also requires that the system that results from the colimit in SYS be mapped to the underlying colimit of interfaces. That is to say, the computations assigned to the components cannot restrict the interface of the resulting system, which is calculated through the colimit in INT.
The inverse property requires that every interconnection of system components is an interconnection of the underlying interfaces. In particular, it requires that the computations do not make viable a configuration of system components whose underlying configuration of interfaces is not viable. This property is verified when

    The coordination functor int preserves colimits.

That is to say, given any diagram dia:I → SYS and colimit (Si→S)i:I of dia, (int(Si)→int(S))i:I is a colimit of (dia;int). This property means, using the terminology of [1], that all colimits in SYS are concrete. These two properties together imply that any colimit in SYS can be computed by first translating the diagram to INT, then computing the colimit in INT, and finally lifting the result back to SYS.
Preservation and lifting of colimits are two properties that relate diagrams in INT and diagrams in SYS. We would now like that, similarly to what we showed for processes, interconnections of components can be achieved by using interfaces, or system components that “correspond” to interfaces (channels), as “middle objects”. The property that we have in mind is the existence of discrete structures for int as a concrete category in the sense of [1]:

    For every interface C:INT there exists s(C):SYS such that, for every morphism f:C→int(S), there is a morphism g:s(C)→S such that int(g)=f.

That is to say, every interface C has a “realisation” (a discrete lift) as a system component s(C) in the sense that using C to interconnect a component S, which is achieved through a morphism f:C→int(S), is tantamount to using s(C) through any g:s(C)→S such that int(g)=f. Notice that, because int is faithful, there is only one such g, which means that f and g are, essentially, the same. That is, sources of morphisms in diagrams in SYS are, essentially, interfaces. This property allows us to use graphical representations in which interfaces are used as connectors between components, a kind of “hybrid diagrams” that are more economical. For instance, in the previous section, discrete lifts are given by the pairs ⟨A,(2^A)^ω⟩. Indeed, morphisms between alphabets A and B are exactly the same as morphisms between ⟨A,(2^A)^ω⟩ and any process ⟨B,Λ⟩. Hence, we could have used

           A
       f1 ↙   ↘ f2
     P1         P2

to express the interconnection between P1 and P2.
Because int is faithful, the existence of discrete structures implies that int admits a left adjoint sys:INT → SYS such that (1) sys(C)=s(C) for every C:INT, and (2) sys;int=idINT. Hence, sys is a full embedding, which means that, as illustrated in the previous section, interfaces can be identified with a particular subclass of system components: the subcategory of channels.
In [7,8], we characterised coordinated formalisms precisely in terms of the existence of a full embedding that is a left adjoint for the forgetful functor int. We feel, however, that the additional properties of preservation and lifting of colimits are equally important. They are the ones that establish that colimits in INT and in SYS have the same expressive power as far as interconnections of components are concerned. The existence of discrete lifts allows us to simplify the way in which we interconnect components by limiting “middle objects” to channels. Another consequence of this fact is that, being faithful, int preserves colimits. Therefore, the characterisation of coordination can be reduced to lifting of colimits and existence of discrete structures.
There is an observation that sheds additional light on the nature of the formalisms that we have been characterising. The fact that int lifts colimits and has discrete structures implies that SYS is “almost” topological over INT. To be topological [1], int would have to lift colimits uniquely, which would make the concrete category amnestic, i.e. the fibres of interfaces would have to be partial orders. As far as the algebraic properties of the underlying formalism are concerned, this is not a problem because every concrete category can be modified to produce an amnestic, concretely equivalent version. However, and although the process category discussed in 2.1 is amnestic, we shall see two examples of concrete categories that are not topological but which we would still like to consider to be coordinated.

2.4  Summary

Definition: A functor int:SYS → INT is said to be coordinated, and SYS is said to be coordinated over INT, iff
• int is faithful;
• int lifts colimits;
• int has discrete structures.

Proposition: Let int:SYS → INT be coordinated. The following properties hold:
• int admits a left adjoint sys:INT → SYS which is a full embedding and satisfies sys;int=idINT;
• int preserves colimits;
• if INT is (finitely) cocomplete then so is SYS.

Proposition: Every topological category is coordinated.

This last property tells us that the class of coordinated categories has many “interesting” categories. It also includes many categories that are “interesting” in Computing:

Proposition: Let THE_I and PRE_I be the categories of theories and theory presentations of an institution I [14]. Both THE_I and PRE_I are coordinated over the underlying category of signatures.


The same result holds if we work with π-institutions [6]. Recall that, in both cases of institutions and π-institutions, the objects of THE_I consist of pairs ⟨Σ,Φ⟩ where Σ is a signature and Φ is a set of sentences over the language of Σ that is closed under consequence. Theory morphisms are signature morphisms that induce inclusions between the sets of theorems. Such categories are topological. However, if we take the usual definition of PRE_I as having for objects pairs ⟨Σ,Φ⟩ where Σ is a signature and Φ is a set of sentences over the language of Σ, not necessarily closed under consequence, and for morphisms all signature morphisms that induce theory morphisms between the presented theories (i.e. preserve theorems), then we obtain a coordinated category that is not topological. Indeed, the class of presentations over a given signature is not a partial order because any two presentations of the same theory are isomorphic but not necessarily identical (for instance, in classical propositional logic, {p, p⊃q} and {p∧q} present the same theory but are distinct presentations). In practical terms, this means that colimits are not lifted uniquely from signatures to presentations.
We can also find plenty of examples of coordinated categories among models of concurrency, of which the model presented in section 2.1 is a particularly simple case.

2.5  An example from coordination languages

In this section, we briefly discuss an example borrowed from coordination formalisms: the language Gamma [2], based on the chemical reaction paradigm. A Gamma program P consists of
• a signature Σ=⟨S,Ω,F⟩, where S is a set of sorts, Ω is a set of operation symbols and F is a set of function symbols, representing the data types that the program uses;
• a set of reactions, where a reaction R has the following structure:

      X, t1, ..., tn → t’1, ..., t’m ⇐ c

  where
  • X is a set (of variables); each variable is typed by a data sort in S;
  • t1, ..., tn → t’1, ..., t’m is the action of the reaction – a pair of sets of terms over X;
  • c is the reaction condition – a proposition over X.
An example of a Gamma program is the following producer of burgers and salads from, respectively, meat and vegetables:

PROD ≡  sorts      meat, veg, burger, salad
        ops        vprod: veg → salad, mprod: meat → burger
        reactions  m: meat, m → mprod(m)
                   v: veg, v → vprod(v)
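To make the execution model concrete, here is a small, hedged Python sketch of Gamma-style multiset rewriting. It encodes the PROD reactions directly as Python functions rather than as terms over the signature, so it illustrates the chemical-reaction reading of the program, not the algebraic definition above; all names in it are ours.

    import random
    from collections import Counter

    # Data elements are tagged tuples, e.g. ("meat", 3); a reaction consumes
    # one element satisfying its condition and adds what it produces.

    def mprod(m): return ("burger", m)          # meat -> burger
    def vprod(v): return ("salad", v)           # veg  -> salad

    REACTIONS = [                               # (condition, action) pairs
        (lambda x: x[0] == "meat", lambda x: [mprod(x)]),
        (lambda x: x[0] == "veg",  lambda x: [vprod(x)]),
    ]

    def step(pool):
        """Fire one enabled reaction, if any; the reaction and the element it
        consumes are chosen non-deterministically."""
        items = list(pool.elements())
        random.shuffle(items)
        for cond, act in REACTIONS:
            for x in items:
                if cond(x):
                    pool[x] -= 1
                    pool.update(act(x))
                    return True
        return False

    pool = Counter({("meat", 1): 2, ("veg", 1): 1})
    while step(pool):
        pass
    print(+pool)        # two burgers and one salad remain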

Parallel composition of Gamma programs, defined in [2], is a program consisting of all the reactions of the component programs. Its behaviour is obtained by executing the reactions of the component programs in any order, possibly in parallel. This leads us to the following notion of morphism. A morphism σ between Gamma programs P1 and P2 is a morphism between the underlying data signatures such that P1⊆σ(P2), i.e., P2 has, up to the translation induced by σ, all the reactions of P1. Concerning system configuration in Gamma, let us consider that we want to interconnect the producer with the following consumer:

CONS ≡  sorts      food, waste
        ops        cons: food → waste
        reactions  f: food, f → cons(f)

The interconnection of the two programs is based on the identification of the food the consumer consumes, that is, the interconnection is established between their data types. For instance, the coordination of the producer and the consumer based on meat is given by the following interconnection:

                  sorts s
     s ↦ burger ↙        ↘ s ↦ food
           PROD             CONS

Gamma is, indeed, coordinated over the category of data types:
• the forgetful functor dt from Gamma programs to data types is faithful;
• given any diagram in the category Gamma, a colimit (σi:dt(Pi)→Σ)i:I of the corresponding diagram in the category of data types is lifted to the colimit of programs (σi:Pi→P)i:I, where P is the program over Σ whose reactions are the σi-translations of the reactions of the Pi;
• the discrete lift of a data type is the program with the empty set of reactions.
Notice, however, that we have extended the way in which Gamma programs are traditionally put together. Gamma assumes a global data space whereas we have made it possible for Gamma programs to be developed separately and put together by matching the features that they are required to have in common. This localisation further enhances the reusability of coordinated programs.
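This colimit has a simple operational reading: rename the data sorts according to the interconnection and take the union of the translated reactions. The sketch below does exactly that for a deliberately naive representation of Gamma programs (sorts plus reactions as strings); the representation and the function names are ours.

    # A naive sketch of the lifted colimit: the composed program has the
    # pushout data signature and the union of the translated reactions.
    # A "program" is (set_of_sorts, set_of_reaction_strings); a morphism into
    # the shared signature is a sort-renaming dict applied to the reactions.

    def translate(reaction, renaming):
        for old, new in renaming.items():
            reaction = reaction.replace(old, new)
        return reaction

    def compose(p1, p2, r1, r2):
        sorts1, reactions1 = p1
        sorts2, reactions2 = p2
        sorts = {r1.get(s, s) for s in sorts1} | {r2.get(s, s) for s in sorts2}
        reactions = ({translate(r, r1) for r in reactions1} |
                     {translate(r, r2) for r in reactions2})
        return sorts, reactions

    PROD = ({"meat", "veg", "burger", "salad"},
            {"m:meat, m -> mprod(m)", "v:veg, v -> vprod(v)"})
    CONS = ({"food", "waste"}, {"f:food, f -> cons(f)"})

    # identify the sort burger of PROD with the sort food of CONS, as in the
    # interconnection above; the discrete lift of a data type is (sorts, set())
    print(compose(PROD, CONS, {}, {"food": "burger"}))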

3  An example from parallel program design

In order to consolidate the definitions put forward in the previous section we shall now discuss an example drawn from parallel program design. The language COMMUNITY [9] is similar to IP [10] and UNITY [3]. Its definition has evolved over the years through experience gained in using it in different contexts. It is precisely the changes that were required to make it coordinated that we shall illustrate in this section. On the one hand, we feel that these changes reveal more of our intuition of what it means to be “coordinated”. On the other hand, they reflect some of the typical hesitations that one faces when designing formalisms, and for which the need to establish coordinated frameworks helps to make a decision.


3.1  COMMUNITY

We assume a fixed algebraic specification ⟨Σ,Φ⟩ representing the data supported by the language. That is to say, Σ=⟨S,Ω⟩ is a signature in the usual algebraic sense and Φ is a set of (first-order) axioms over Σ defining the properties of the operations. Data types can be made local to each program, but assuming them to be fixed simplifies the presentation considerably. A COMMUNITY program P has the following structure:

P ≡  var   V
     read  R
     init  I
     do    []_{g∈Γ}  g: [B(g) → ||_{a∈D(g)} a:=F(g,a)]

where
• V is the set of local attributes (i.e. the program "variables"); each attribute is typed by a data sort in S;
• R is the set of read-only attributes used by the program (i.e. attributes that are to be instantiated with local attributes of other components in the environment); each attribute is typed by a data sort in S;
• Γ is the set of action names; each action name has an associated statement (see below) and can act as a rendez-vous point for program synchronisation;
• I is a condition on the attributes – the initialisation condition;
• for every action g∈Γ, D(g) is the set of attributes that g can change (its domain, or write frame); we also denote by D(a) the set of actions in Γ that have the attribute a in their write frame;
• for every action g∈Γ, B(g) is a condition on the attributes – its guard;
• for every action g∈Γ and attribute a∈D(g), F(g,a) is a term denoting the value that g assigns to a.

An example of a COMMUNITY program is the following vending machine:

VM ≡  var   ready, eat, drink: bool
      init  ¬ready∧(eat∨drink)
      do    coin : [¬ready∧(eat∨drink) → ready:=tt || eat:=ff || drink:=ff]
      []    cake : [ready∧¬(eat∨drink) → eat:=tt || drink:=ff]
      []    coke : [ready∧¬(eat∨drink) → drink:=tt || eat:=ff]
      []    reset: [ready∧(eat∨drink) → ready:=ff]
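As a quick sanity check on the intended behaviour (described next), here is a small Python simulation of VM; the encoding is ours and only approximates the model, firing one randomly chosen enabled action per step.

    import random

    # State: the three boolean attributes; each action: (guard, assignment).
    ACTIONS = {
        "coin":  (lambda s: not s["ready"] and (s["eat"] or s["drink"]),
                  {"ready": True, "eat": False, "drink": False}),
        "cake":  (lambda s: s["ready"] and not (s["eat"] or s["drink"]),
                  {"eat": True, "drink": False}),
        "coke":  (lambda s: s["ready"] and not (s["eat"] or s["drink"]),
                  {"drink": True, "eat": False}),
        "reset": (lambda s: s["ready"] and (s["eat"] or s["drink"]),
                  {"ready": False}),
    }

    state = {"ready": False, "eat": True, "drink": False}    # satisfies init
    for _ in range(6):
        enabled = [g for g, (guard, _) in ACTIONS.items() if guard(state)]
        g = random.choice(enabled)          # pick one enabled action
        state.update(ACTIONS[g][1])         # execute its parallel assignment
        print(g, state)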

The machine is initialised so as to accept only coins. Once it accepts a coin it can deliver either a cake or a coke (but not both). After delivering a cake or a coke it can only be reset, after which it is ready to accept more coins.
A morphism σ between COMMUNITY programs P1 and P2 consists of:
• a map σα:V1∪R1→V2∪R2;
• a map σγ:Γ1→Γ2
such that:
1) for every a∈V1∪R1, sort(a)=sort(σα(a));
2) for every a∈V1, σα(a)∈V2;
3) for every a∈V1, σγ(D1(a))=D2(σα(a));
4) Φ ⊢ (I2 ⊃ σ(I1));
5) for all g1∈Γ1 and a1∈D1(g1), Φ ⊢ (B2(σ(g1)) ⊃ (F2(σ(g1),σ(a1))=σ(F1(g1,a1))));
6) for every g1∈Γ1, Φ ⊢ (B2(σ(g1)) ⊃ σ(B1(g1)));
where ⊢ means consequence in the first-order sense, and σ is also used to denote the translation induced by the morphism over the language of the source signature.
Condition 1 indicates that morphisms have to respect the sorts of the attributes. Condition 2 means that local attributes of a component must also be local within the system. It also allows read attributes of a component to become local in the system: this is the typical situation when the attribute being read by the component is local to some other component within the same system. Condition 3 does not allow actions of the system that do not belong to the component to change the local attributes of the component. Condition 4 means that the initialisation condition of the component must be respected by the system. Condition 5 means that assignments made by the component are preserved. Condition 6 means that guards cannot be weakened. These conditions capture what in the literature is known as superposition [3].
An example of a morphism is the identity mapping the program below to the vending-machine defined above:

SW ≡  var   ready: bool
      init  ¬ready
      do    coin : [¬ready → ready:=tt]
      []    reset: [ready → ready:=ff]
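The purely syntactic conditions (1–3) of the definition above are mechanically checkable. The sketch below (an ad hoc encoding of just the data needed: attribute sorts, locality and write frames; all names are ours) verifies them for the identity inclusion of SW into VM. Conditions (4–6) involve first-order consequence and are not checked here.

    # Ad hoc encodings of the signatures of SW and VM: sorts of attributes,
    # which attributes are local, and the write frame D(g) of each action.
    SW = {"sort": {"ready": "bool"},
          "local": {"ready"},
          "D": {"coin": {"ready"}, "reset": {"ready"}}}

    VM = {"sort": {"ready": "bool", "eat": "bool", "drink": "bool"},
          "local": {"ready", "eat", "drink"},
          "D": {"coin": {"ready", "eat", "drink"},
                "cake": {"eat", "drink"},
                "coke": {"eat", "drink"},
                "reset": {"ready"}}}

    def D_of_attribute(sig, a):
        """D(a): the actions that have the attribute a in their write frame."""
        return {g for g, frame in sig["D"].items() if a in frame}

    def syntactic_conditions(P1, P2, s_attr, s_act):
        c1 = all(P1["sort"][a] == P2["sort"][s_attr[a]] for a in P1["sort"])
        c2 = all(s_attr[a] in P2["local"] for a in P1["local"])
        c3 = all({s_act[g] for g in D_of_attribute(P1, a)}
                 == D_of_attribute(P2, s_attr[a]) for a in P1["local"])
        return c1, c2, c3

    identity = ({"ready": "ready"}, {"coin": "coin", "reset": "reset"})
    print(syntactic_conditions(SW, VM, *identity))      # (True, True, True)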

The morphism identifies a component within the vending machine, namely the mechanism that sets and resets it. Notice how new actions can be introduced which use the old attributes in the guards but cannot update them. The guards of the old actions can be strengthened, and so can the initialisation condition. Is COMMUNITY coordinated? Over what notion of interface?

3.2  Lifting colimits

When one is defining a logic, or a model for concurrency, the nature of interfaces seems pretty obvious because there is a clear separation between “syntax”, i.e. the identification of the symbols over which the language is generated, and “semantics”, in the sense of what defines the “contents” of the individual components. In COMMUNITY, the choice is perhaps less clear. It seems obvious that a program signature will have to include the set of attributes (read and local) and the set of actions. But what about the other features?


One criterion for deciding what to place in a signature is the need to be able to lift colimits. Naturally, the more we put in signatures, the easier it is for colimits to be lifted. However, we want to put in signatures as little as possible, so that we end up with interfaces that are as simple as possible. As shown, for instance, in [8], this is important for facilitating the establishment of relationships, like adjunctions, between the interface categories of different coordinated formalisms.
It is not difficult to see that, if we consider program signatures to be triples ⟨V,R,Γ⟩, we are not able to lift colimits. For instance, we cannot interconnect the following programs P1 and P2 by synchronising actions g1 and g2 in P1:

P ≡   do   g1: [tt → skip]
      []   g2: [tt → skip]

P1 ≡  var  a: bool
      do   g1: [tt → a:=tt]
      []   g2: [tt → skip]

P2 ≡  do   g: [tt → skip]

where P is mapped to P1 through the identity and to P2 through the morphism that sends both g1 and g2 to g.

Such an interconnection of programs does not admit a colimit although the corresponding diagram of signatures clearly admits a pushout consisting of a local attribute a and an action g. This happens because the restriction on domains (condition 3), applied to g1, requires that the resulting synchronised action belongs to the write frame of a whereas, when applied to g2, it requires that it does not belong to the write frame of a. This example shows that, without including domains in signatures, it is not possible to lift colimits. More concretely, it shows that what is left on the computation side of programs interferes with the interconnections. Indeed, action domains enforce locality of attributes and, therefore, constrain the interference that can exist between programs. That is to say, action domains are part of what in COMMUNITY is responsible for coordination and, therefore, must be part of interfaces.
The suggestion, then, is that program signatures are triples ⟨V,R,Γ⟩ where Γ, rather than a set, is a 2^V-indexed family of sets (the index of a set is the domain of the actions in that set) and signature morphisms satisfy the equality of domains expressed in condition 3 of program morphisms. Notice that, in this case, the diagram of signatures obtained from the diagram above does not admit a colimit, meaning that the configuration is not viable. Indeed, we are attempting to synchronise two actions within a program, which may not be feasible due to conflicting types.
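The conflict can be detected directly on the write frames: all actions that are to be collapsed into a single action must agree on whether they write each shared attribute, otherwise condition 3 cannot hold for any candidate colimit. A minimal sketch of this check (our encoding):

    # The example above: synchronising g1 and g2 of P1 would force the
    # resulting action both to have the attribute a in its write frame
    # (because of g1) and not to have it (because of g2).

    def synchronisable(write_frames, actions, attribute):
        """The collapsed actions must agree on writing the attribute."""
        answers = {attribute in write_frames[g] for g in actions}
        return len(answers) == 1

    P1_frames = {"g1": {"a"}, "g2": set()}
    print(synchronisable(P1_frames, ["g1", "g2"], "a"))   # False: no colimit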

3.3  Existence of discrete structures

Consider now the need to define, for every program signature, its discrete lift, i.e. the program over that signature that can replace it when establishing interconnections with other programs. The condition that we discussed in section 2.3 basically means that such discrete lifts need to be “neutral” with respect to the computational aspects so as not to compromise the establishment of relationships (morphisms). A neutral initialisation condition is any tautology. The same holds for action guards.
Assignments raise a more interesting case. Let σ:θ→θ’ be a signature morphism, where θ’ is the signature of some program P’. For σ to be a morphism from the discrete lift of θ to P’, it is necessary that Φ ⊢ (B’(σ(g)) ⊃ (F’(σ(g),σ(a))=σ(F(g,a)))), for every a∈D(g). Clearly, given g∈Γ and a∈D(g), we cannot find a value F(g,a) that satisfies that property for any possible F’. That is to say, given an action and an attribute in its domain, it is not always possible to find a value for the assignment such that we can match any other assignment.
Does this mean that we should shift assignments into signatures? Shifting assignments into signatures would mean that they are one of the factors that restrict the kind of interconnections allowed in COMMUNITY. This is indeed the case. For instance, we cannot interconnect the following two programs

P1 ≡  var  a: {1,-1}
      do   g: [b1 → a:=1]

P2 ≡  var  a: {1,-1}
      do   g: [b2 → a:=-1]

by a middle object of the form

P ≡   var  a: {1,-1}
      do   g: [tt → a:=e]

in order to make them share attribute a and synchronise at action g. Indeed, the local assignments on a are conflicting, i.e. there is no term e that can be mapped to both 1 and -1. Hence, it is only natural that we recognise that assignments are one of the instruments of coordination. Notice that we can, however, interconnect the signatures of the two programs through int(P) so as to produce the desired synchronisation. The problem is that the middle signature, consisting of local attribute a and action g with domain {a}, cannot be lifted to a middle program.
On the other hand, recognising this fact may make us feel uncomfortable about the model of coordination that we have defined. For instance, we might feel that the interference between the assignments is only a problem if the synchronised action occurs. The practical effect of guarding the equality between assigned values in condition 5 of morphisms should be to forbid the execution of the system action whenever it is required to perform conflicting assignments. Hence, we might be interested in a model of coordination that would postpone the resolution of interfering assignments to execution time, allowing the configuration to be established. This means that we need to change our notion of program to allow for discrete lifts! This is exactly what happened between [9] and [7]. The solution we found was to introduce non-deterministic assignments. The assignment lifted from a signature is the universal one, i.e. it assigns the whole range of possible values, thus ensuring “neutrality”. In the case of the example above, we would use the program (channel)


P ≡   var  a: {1,-1}
      do   g: [tt → a:∈{1,-1}]

together with identity morphisms for the interconnections. The program resulting from the interconnection is (see the summary section for details on the construction)

P’ ≡  var  a: {1,-1}
      do   g: [b1∧b2 → a:∈Ø]

which is idle for as long as b1 or b2 is false, and deadlocks when both are true.

3.4  Summary

The resulting coordinated category can be defined as follows:

Definition: A program signature is a triple ⟨V,R,Γ⟩ where
• V and R are S-indexed families of sets, where S is the set of sorts;
• Γ is a 2^V-indexed family of sets; we denote by D(g) the type of each g in Γ.
All these sets of symbols are assumed to be finite and mutually disjoint.

Definition/Proposition: Given signatures θ1=⟨V1,R1,Γ1⟩ and θ2=⟨V2,R2,Γ2⟩, a signature morphism σ from θ1 to θ2 is a pair of functions ⟨σα:V1∪R1→V2∪R2, σγ:Γ1→Γ2⟩ such that
1) for every a∈V1∪R1, sort(a)=sort(σα(a));
2) for every a∈V1, σα(a)∈V2;
3) for every a∈V1, σγ(D1(a))=D2(σα(a)).
Program signatures and morphisms constitute a category SIG.

Definition: A program is a pair ⟨θ,∆⟩ where θ is a signature and ∆, the body of the program, is a triple ⟨I,F,B⟩ where
• I is a proposition over the local attributes (V);
• F assigns to every action g∈Γ a non-deterministic command, i.e. F maps every attribute a in D(g) to a set expression F(g,a);
• B assigns to every action g∈Γ a proposition over the attributes (V and R).

Definition/Proposition: A program morphism σ:⟨θ1,∆1⟩→⟨θ2,∆2⟩ is a signature morphism σ:θ1→θ2 such that
1) Φ ⊢ (I2 ⊃ σ(I1));
2) for all g1∈Γ1 and a1∈D1(g1), Φ ⊢ (F2(σ(g1),σ(a1))⊆σ(F1(g1,a1)));
3) for every g1∈Γ1, Φ ⊢ (B2(σ(g1)) ⊃ σ(B1(g1)));
where ⊢ means validity in the first-order sense. Programs and superposition morphisms constitute a category PRO.

Proposition: The forgetful functor sig mapping programs to the underlying signatures lifts colimits as follows: let dia:X → PRO be a diagram and (σi:sig(Si)→θ)i:X a colimit of (dia;sig); the colimit (σi:Si→S)i:X of dia lifted by sig is characterised by:


• the initialisation condition I is ∧{σi(Ii) | i:X};
• given any action g∈Γ, B(g) is ∧{σi(Bi(g’)) | σi(g’)=g, g’∈Γi, i:X};
• given any action g∈Γ and a∈D(g), F(g,a) is ∩{σi(Fi(g’,a’)) | σi(g’)=g, σi(a’)=a, i:X}.
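Applied to the interconnection of P1 and P2 in section 3.3, this computation yields the program P’ shown there (up to the redundant conjunct tt in the guard). A minimal sketch, with guards as strings and assignments as sets of values (the encoding is ours):

    # Lift the body for one shared action g and one attribute a: the guard is
    # the conjunction of the translated guards and the assignment is the
    # intersection of the translated assignment sets.

    def lift_action(components):
        """components: list of (guard, set_of_values) contributed for g and a."""
        guard = " ∧ ".join(g for g, _ in components)
        assignment = set.intersection(*(v for _, v in components))
        return guard, assignment

    # P1: g: [b1 -> a:=1],  P2: g: [b2 -> a:=-1],  channel: g: [tt -> a:∈{1,-1}]
    print(lift_action([("b1", {1}), ("b2", {-1}), ("tt", {1, -1})]))
    # ('b1 ∧ b2 ∧ tt', set())  --  i.e. guard b1∧b2 and the empty assignment a:∈Ø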

Proposition: The functor sig has discrete structures. The discrete lift for a signature ⟨V,R,Γ⟩ is the program defined by

      var   V
      read  R
      init  tt
      do    []_{g∈Γ}  g: [tt → ||_{a∈D(g)} a:∈ s_a]

where s_a is a term expression denoting the whole set of elements of the sort of a.

4  Concluding remarks

In this paper, we proposed a formalisation for the property according to which a framework for system design supports the separation between computation and coordination. We used Goguen’s categorical approach to systems design [13,15] as a platform for the formalisation.
The perceived advantages of the proposed notion of coordination are the following. On the one hand, it provides us with a way of checking whether a given formalism supports the separation between computation and coordination, which we take as being a good measure of the ability of the formalism to cope with the complexity of systems. In the paper, we borrowed examples from specification logics, mathematical models of behaviour, parallel program design languages and coordination languages to illustrate these points, which shows that “coordination” is more than a property of “programming languages”, i.e. it applies to other levels of specification and design. On the other hand, because such an algebraic characterisation of coordination is independent of specific languages and models, it provides us with a framework for the integration of different formalisms for software specification and design that is based on relationships between their interaction models rather than their computational paradigms (the latter being recognisably much harder to integrate).
For instance, the earlier work reported in [7] provides a formal account of some of the contributions of “coordination” to the architectural approach to software design [17]. It also suggests ways of extending the expressive power of current architectural description languages by supporting heterogeneous connectors, i.e. connectors in which the roles and the glue are not necessarily described in the same formalism. In a related context, the work reported in [8] shows that interconnections between programs can be synthesised from interconnections between their specifications in “coordinated” frameworks. It also characterises a stronger notion of compositionality in which implementation of computations is decoupled from coordination aspects. Work is now in progress towards studying the impact that coordination may have on the analysis of behavioural properties of systems, as well as on the characterisation and analysis of the behavioural properties of configurations of systems.

Acknowledgements Thanks are due to Tom Maibaum and Michel Wermelinger for many discussions that have helped to focus the proposed characterisation of coordination.

References
1. J.Adámek, H.Herrlich and G.Strecker, Abstract and Concrete Categories, John Wiley & Sons 1990.
2. J.P.Banâtre and D.Le Métayer, "Programming by Multiset Transformation", Communications ACM 16(1), 1993, 55-77.
3. K.Chandy and J.Misra, Parallel Program Design - A Foundation, Addison-Wesley 1988.
4. P.Ciancarini and C.Hankin, Coordination Languages and Models, LNCS 1061, Springer-Verlag 1996.
5. J.F.Costa, A.Sernadas, C.Sernadas and H.-D.Ehrich, "Object Interaction", Mathematical Foundations of Computer Science, LNCS 629, Springer-Verlag 1992, 200-208.
6. J.L.Fiadeiro and A.Sernadas, "Structuring Theories on Consequence", in D.Sannella and A.Tarlecki (eds), Recent Trends in Data Type Specification, LNCS 332, Springer-Verlag 1988, 44-72.
7. J.L.Fiadeiro and A.Lopes, "Semantics of Architectural Connectors", in M.Bidoit and M.Dauchet (eds), Theory and Practice of Software Development, LNCS 1214, Springer-Verlag 1997, 505-519.
8. J.L.Fiadeiro, A.Lopes and T.Maibaum, "Synthesising Interconnections", in R.Bird and L.Meertens (eds), Algorithmic Languages and Calculi, Chapman & Hall 1997, 240-264.
9. J.L.Fiadeiro and T.Maibaum, "Categorical Semantics of Parallel Program Design", Science of Computer Programming 28(2-3), 1997, 111-138.
10. N.Francez and I.Forman, Interacting Processes, Addison-Wesley 1996.
11. D.Garlan and D.Le Metayer, Coordination Languages and Models, LNCS 1282, Springer-Verlag 1997.
12. D.Gelernter and N.Carriero, "Coordination Languages and their Significance", Communications ACM 35(2), 1992, 97-107.
13. J.Goguen, "Categorical Foundations for General Systems Theory", in F.Pichler and R.Trappl (eds), Advances in Cybernetics and Systems Research, Transcripta Books 1973, 121-130.
14. J.Goguen and R.Burstall, "Institutions: Abstract Model Theory for Specification and Programming", Journal of the ACM 39(1), 1992, 95-146.
15. J.Goguen, "A Categorical Manifesto", Mathematical Structures in Computer Science 1(1), 1991, 49-67.
16. V.Sassone, M.Nielsen and G.Winskel, "A Classification of Models for Concurrency", in E.Best (ed), CONCUR'93, LNCS 715, Springer-Verlag 1993, 82-96.
17. M.Shaw and D.Garlan, Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall 1996.
