Methodology for Java Distributed and Parallel Programming Using Distributed Collections

Violeta Felea, Bernard Toursel*
LIFL (UPRESA CNRS 8022) - University of Science and Technology of Lille
59655 Villeneuve d'Ascq CEDEX - FRANCE
*École Universitaire d'Ingénieurs de Lille (EUDIL)

Abstract

Designing distributed and parallel applications is an important issue in the context of programming and execution environments. Designing applications as independently and as transparently of the distributed system as possible is not easy. At the same time, improving execution efficiency is considered difficult, especially for irregular applications executing on a heterogeneous environment. In ADAJ (Adaptive Distributed Applications in Java), the concept of distributed collections is used as a guide for the methodology of programming Java distributed and parallel applications. Distributed collections encapsulate data parallelism and make the use of threads transparent for the user. Asynchronous calls are also proposed in order to achieve task parallelism. The article shows the interest of using distributed collections and asynchronous calls, evaluating both the design and the execution of ADAJ applications.

1. Introduction

The last decades have witnessed the increasing demands that a number of applications place on elementary processors, in terms of processing power, memory storage and communication capacity. These applications (such as meteorology, data mining, bio-computing) ask for more than technological development alone can provide. Associating parallel and distributed technologies increasingly becomes a necessity in order to address the problems that these applications raise. Parallelism helps improve the performance of the execution of an application, while distribution is necessary in order to satisfy an application's demand for resources. A distributed and parallel model which can respond to such demands is offered by the Java language [12]. Moreover, Java is a convenient language, as it masks the heterogeneity of the platforms that are used through the Java Virtual Machine, thus offering portability of its applications.

The distributed model offers distribution of objects, named remote objects, over a distributed platform. Communications between objects placed in different address spaces are achieved through the RMI protocol [14], the equivalent of the message-passing model in an object-oriented environment. This protocol does not offer complete transparency to the user at either the programming or the execution level. In the distributed Java object model, the programmer has to adapt his applications to the client-server programming model, and has to specify in advance all remote objects the application will use.

In Java, parallelism is proposed through the use of threads. Even though this functionality exists, using threads is not necessarily easy for the user. Moreover, no transparency is offered for parallel processing. However, having these basic tools makes it possible to construct frameworks which support easy design of parallel and distributed programs.

In the context of a Java distributed environment, natural questions arise. The first is how it would be possible for programmers to design distributed and parallel applications in a simple way: as independently of the environment as possible (close to a standard local application), without having to deal with the location of remote objects (location transparency) or with their particular remote handles (remote access transparency). Transparent or semi-transparent tools for expressing parallelism are necessary for the programmer to design applications. Parallelism can be achieved both by parallel processing of different data (SPMD model) and by asynchronous calls (MIMD model). The decision of the programmer on the degree and the granularity of parallelism is not necessarily taken during the development phase of programs. Last but not least, the programmer should be allowed to design parallel programs without being conscious of the characteristics of the execution platform, without explicitly specifying the object distribution or the degree of parallelism.


The ADAJ (Adaptive Distributed Applications in Java) project proposes a methodology of programming based on the concept of distributed collection, which addresses the cited problems. The second problem concerns the deployment of applications, which should be automatic and should depend on the execution platform, in order to adapt object distribution to changes in the platform's configuration or to the evolution of the application's execution. In this paper, we present a solution to the issue of designing parallel and distributed Java applications, in terms of ease of design, transparent distribution and expression of parallelism.

2. A Java distributed environment - ADAJ

ADAJ is designed as an environment for developing parallel and distributed Java applications. Being a distributed Java model, ADAJ is based on the RMI distributed object model. Nevertheless, transparency, one of our aims, cannot be achieved in the context of the RMI model alone. What we use instead is the JavaParty distributed object model [11]. Also based on RMI, this model proposes distributed and remote objects with several features: remote creation, transparent remote access, migration, static part handling. In ADAJ, migration and object distribution are used in strategies for load balancing. In order to decide on the optimal choice, we want to extract certain information about the behavior of both the application (in terms of relations between objects [2]) and the execution platform (in terms of resource consumption [3]). Only some particular objects are taken into account by the mechanism that observes the relations between objects. These objects are called global, being remote (in the JavaParty sense) and observable. This paper presents aspects which use features of global objects, such as location transparency or remote access transparency, for the design of parallel and distributed applications; strategies for load balancing are not detailed here.

3. Parallelism in ADAJ

Designed over Java, the ADAJ environment inherits its features, among which is multithreading, expressed through threads. In this context, one can improve the execution efficiency of tasks that can be decomposed into independent sub-tasks. Java does not offer transparent mechanisms for expressing object parallelism. In ADAJ, this is achieved through the concept of distributed collection, presented next.

Another way of increasing efficiency is method parallelism. For global objects, ADAJ proposes asynchronous method calls, also based on the use of threads. In the next two sections we briefly present these two features for expressing parallelism proposed by ADAJ.

3.1. Object parallelism

Grouping objects is a common technique, currently integrated into the Java language as the collection framework [13]. Iterating over a group and applying the same task (method, in Java) is a very common operation, yet quite difficult to express when blocking remote methods are to be avoided or arguments are to be passed. A distributed collection is a group of objects, mostly intended to transparently activate parallel processing over distributed data. The distributed collection groups fragments, forming a two-level hierarchy. Figure 1 presents the structure of a distributed collection on two levels: the first one is composed of a fragmented object, fragObj, located on JVM 1, and the second level is composed of four fragments, frag1, frag2, frag3 and frag4 respectively, distributed on the Java virtual machines 2, 1 and 3. The fragments contain data (the objects the user intends to manipulate).

[Figure 1. The structure of a distributed collection: the fragmented object fragObj on JVM 1 references the fragments frag1 to frag4, which are spread over JVMs 1, 2 and 3 and hold the user data.]

The distributed collection is proposed in order to increase the granularity of parallelism or, in general, to offer the user the possibility of controlling this granularity. Thus, a parallel processing executes on every fragment instead of executing on every object. The distribute primitive, which consists of invoking parallel tasks, is attached to every distributed collection. The hierarchical structure of a distributed collection allows the distribution of fragment processing, following the fragment placement (see Figure 1). Taking into account that fragments can be processed in parallel, object parallelism (fragment parallelism) is achieved. More details concerning the organization of a distributed collection are presented in [6].
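To make this two-level structure concrete, the following is a minimal, single-JVM sketch in modern Java syntax, with hypothetical class names; it is not the actual ADAJ library (where fragments are remote JavaParty objects spread over several JVMs), but it shows why launching one task per fragment coarsens the granularity of parallelism:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical names, for illustration only; not the ADAJ classes.
class Fragment<T> {
    final List<T> data = new ArrayList<>(); // the user objects held by this fragment
}

class FragmentedObject<T> {
    final List<Fragment<T>> fragments = new ArrayList<>();

    // One thread per fragment: parallelism is per fragment, not per object,
    // which is how the collection controls the granularity of parallelism.
    void distribute(Consumer<Fragment<T>> task) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (Fragment<T> f : fragments) {
            Thread t = new Thread(() -> task.accept(f));
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) t.join(); // wait for all fragment tasks
    }
}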

0-7695-1573-8/02/$17.00 (C) 2002 IEEE

3.2. Method parallelism


In ADAJ, the expression of a conventional task parallelism is achieved through asynchronous calls on global objects. Asynchronous calls avoid blocking RMI calls and can be used to anticipate results. In terms of functionality, a distribute call can be seen as multiple asynchronous calls over an object containing a number of fragments. Combining the two facets of parallelism, object and method parallelism, ADAJ proposes its methodology of programming, offering tools to develop MIMD applications.

4. Programming in ADAJ

In this section, we show some extracts from simple ADAJ programs which express parallelism in its two forms. The major features of the ADAJ methodology of programming are presented next.

4.1. Distribute and asynchronous calls

Distributed collections are instances of the DistributedCollection class, while fragments are instances of the RemoteFragment class, being global objects. The library also proposes classes that let fragments be used as vectors or stacks. The hierarchy of the library classes and user classes for fragments is shown in Figure 2.

[Figure 2. Diagram of library classes and user classes for fragments: the library classes RemoteFragment, VectorFragment and StackFragment, the interfaces Fragment and VectorInterface, and a user class MyFragment. The classes are marked in rectangles and the interfaces they implement in ellipses. The "extends" link is continuous, while the "implements" link is dotted.]

• Distributed collection declaration
In order to declare a distributed collection, one first has to declare the type of the fragments it contains:

class MyFragment extends RemoteFragment {
    public void methodVoid(...) { ... }
    public Object methodRes(...) {
        ...; return ...;
    }
}

Afterwards, a distributed collection can be formed as:

DistributedCollection distrCol = new DistributedCollection("MyFragment");

• Invoking parallel tasks (distribute call)
Invoking a method methodRes (which returns results) on all fragments of a distributed collection, such as the one created previously, is done in the following manner:

Collector c = DistributedTask.distribute(distrCol, "methodRes", null);

The recovered collector, detailed further in [6], has the role of a future object and stores the results of the not-yet-finished invoked method.

• Asynchronous calls
The user can invoke asynchronous methods on every global object, including fragments. The following syntax,

Return r = Asynchronous.mReturnAsync(distrCol.get(i), "methodRes", {}+);

corresponds to the asynchronous invocation of the methodRes method on the fragment of index i of the previous distributed collection. A similar call can be constructed in order to invoke a parallel task that does not return any result.

The Return class is functionally similar to the Collector class (as a future object), gathering the result of the asynchronous method. The difference is that a Collector recovers one or more results (depending on the number of fragments), while a Return gathers a single result.
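Putting the pieces of this section together, the skeleton of an ADAJ program using both forms of parallelism reads as follows. This is only a sketch assembling exactly the calls shown above; the accessors for reading results out of a Collector or a Return are not shown in this paper, and passing null for an empty parameter list in the asynchronous call is our assumption:

class MyFragment extends RemoteFragment {
    public Object methodRes() { return "partial result"; } // per-fragment computation
}

// create a distributed collection whose fragments are MyFragment instances
DistributedCollection distrCol = new DistributedCollection("MyFragment");

// object parallelism: methodRes is invoked on every fragment in parallel
Collector c = DistributedTask.distribute(distrCol, "methodRes", null);

// method parallelism: an asynchronous call on the fragment of index i
// (null stands for an empty parameter list -- an assumption on our part)
Return r = Asynchronous.mReturnAsync(distrCol.get(i), "methodRes", null);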

4.2. Methodology of programming in ADAJ

The use of distributed collections in ADAJ induces a methodology of programming in the asynchronous SPMD (ASPMD) form, less constraining than the BSP (Bulk Synchronous Parallel) model, which imposes a synchronization phase after the computing phase. ADAJ enables:

• easily expressing object parallelism (in which fragments are processed in parallel), together with easy and asynchronous recovery of results. Distribution of objects by fragmentation and distribution of data structures is a source of parallelism (SPMD model). At the same time, exploiting asynchronism allows processing to proceed as soon as a result is available, which is another source of parallelism (MIMD model);


• implicitly increasing the granularity of parallelism through the activation of parallel methods over fragments, and not individually on every object or on globally scattered objects;

• flexibility over the explicit granularity and degree of parallelism. Thus, the programmer is not constrained to fix the degree or granularity of parallelism at design time, but can defer these choices to execution time, depending on the real data to be processed and on the characteristics of the execution platform. Moreover, the degree of parallelism may vary dynamically during execution, by adding or removing fragments;

• independence from explicit object placement and from possible object migration.

4.3. An illustration of the ADAJ methodology of programming

Distributed and parallel programming in ADAJ is facilitated by transparent thread launching, transparent parameter passing (avoiding explicit indirection of calls) and easy recovery of results. The following example, extracted from a distributed version of the computation of the longest growing sub-sequence (used in bio-computing algorithms), shows the interest of programming in ADAJ with distributed collections: only a few transformations have to be applied to a sequential program in order to obtain a distributed one, compared to a classical RMI or JavaParty version.

Sequential version:

class Sequence extends Vector {
    public Object[] max() {
        add(...);
    }
}

The distributed solution consists of dividing the sequence into sub-sequences, launching the computation of the maximal sub-sequence on every resulting sequence (through the max method) and centralizing the results in order to handle the possible continuation of a maximal sub-sequence into the next sub-sequence (the borders). The max method locally computes the first and the last growing sub-sequences and the longest growing sub-sequence, returning at the same time the elements which allow processing the borders.

JavaParty version:

remote class Sequence {
    Vector v;
    public Object[] max() {
        v.add(...); // use of v
    }
}

Taking into account the synchronism of remote calls, in the RMI case as in the JavaParty version, the user has to develop mechanisms which deal with asynchronous calls (thread declaration and activation) in order to improve efficiency, and also has to deal with the recovery of results:

class ThreadAsync extends Thread {
    Result r;
    Sequence seq;
    public ThreadAsync(Sequence seq, Result r) {
        this.seq = seq;
        this.r = r;
    }
    public void run() {
        r.add(seq.max());
    }
}

where Result gathers the result of the invoked method. The parallel RMI or JavaParty code would be:

Result r = ...;            // for the recovery of results
Sequence[] seq = ...;      // initialisation
for (i = 0; i < n; i++) {  // task creation
    Thread task = new ThreadAsync(seq[i], r);
    task.start();          // task spawn
}

As a divided sequence is organized like a distributed collection, independently computing the maximal growing sub-sequence can be a parallel task to be applied on every fragment. In ADAJ, this parallel computation can be expressed simply as:

ADAJ version using distributed collections:

remote class Sequence extends VectorFragment {
    public Object[] max() {
        add(...);
    }
}

DistributedCollection distrCol = new DistributedCollection("Sequence");
Collector c = DistributedTask.distribute(distrCol, "max", null);
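The paper does not show the border-processing code that centralizes the results. As an illustration only of what that step might look like, here is a hedged sequential sketch, assuming each fragment reports its local best run, the lengths of its growing prefix and suffix, its border elements, and whether it is growing end to end (all hypothetical names):

class PartialResult {
    int best;              // longest growing run inside the fragment
    int prefixLen;         // growing run starting at the fragment's first element
    int suffixLen;         // growing run ending at the fragment's last element
    int first, last;       // border elements of the fragment
    boolean fullyGrowing;  // the whole fragment is one growing run
}

static int mergeBorders(PartialResult[] parts) {
    int best = 0;
    int spanning = 0; // growing run ending at the current fragment's last element
    for (int k = 0; k < parts.length; k++) {
        PartialResult p = parts[k];
        best = Math.max(best, p.best);
        if (k > 0 && parts[k - 1].last < p.first) {
            // the run continues across the border: extend it with this prefix
            best = Math.max(best, spanning + p.prefixLen);
            spanning = p.fullyGrowing ? spanning + p.prefixLen : p.suffixLen;
        } else {
            spanning = p.suffixLen;
        }
    }
    return best;
}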

The methodology of programming in ADAJ is a compromise between the ease of programming parallel and distributed applications (in the way presented in section 4.2) and conserving the Java programming model. In the next section, we present some of the compromises imposed by the ADAJ methodology, together with solutions for improvement.


5. Improvements

5.1. Transparent asynchronous calls

Distribute calls as well as asynchronous calls provide useful functionality, but the way of expressing them has some drawbacks. Parameters of the invoked methods cannot be of primitive types; they can only be Objects. This forces the user to wrap primitive types into object types. Moreover, badly-formed calls (because of the method's name, or the type or number of its parameters) are only detected at runtime. We wish to detect this kind of error at compilation time, in order to avoid runtime exceptions, which are more difficult for the user to deal with. Taking into account that we wish to leave the user the option of invoking either a synchronous or an asynchronous call on a global object, we could offer another expression for an asynchronous call, like:

Return r = <Class>Async.<method>(obj, <params>);

where <Class> is the class of the obj object and <params> have the same types as the parameters of the declared method. It corresponds to a call of the same method on a static class, except for the Async keyword, concatenated to the name of the class to indicate an asynchronous call. Our intention is inspired by the Jacob platform [5] developed at Bordeaux, in which these invocations are called semi-transparent, because the user indicates the intention of making an asynchronous call. The implementation is however adapted to the features of the ADAJ environment.

Another improvement concerns thread use. In some cases, a pool of threads can improve performance. For example, in the case of a regular application which repeatedly processes the same amount of distributed data (as in the traveling salesman problem), holding a pool of threads is better than creating new processing threads each time. In this case, tasks are assigned to already created threads, and when a thread completes its task, it rejoins the pool, ready for another task. This improves performance because thread creation does have a significant overhead. In the case of distributed collections, a pool of threads can be efficient for a certain type of application. At the same time, it bounds the degree of parallelism, so pools of threads of variable size are to be used. The problem that arises is how the pool of threads is resized, to what extent and how often. Resizing too often implies the overhead associated with thread creation and destruction; at the same time, resizing too rarely may delay executions one expects to happen at once. Different libraries of pools of threads have already been proposed (in the Ninja project [4] or [8]) and, being implemented in Java, can easily be integrated into the ADAJ environment.

An asynchronous call can be achieved using threads in two ways: the thread dealing with the remote call is created either on the machine of the invocation, or on the machine of the method's execution. In the first case, as shown in Figure 3, a new thread is created locally, and it is passed a reference to the object making the call as well as references to the parameters needed for the method. The caller is thus not blocked and executes its next method, because it is the new thread which deals with the remote call. Moreover, as results may come later, every method returns a future object which is created locally (on the caller's machine) and passed to the thread.

[Figure 3. Client-side asynchronism: the caller spawns a local thread (newCaller) on its own JVM, which invokes m(param) on the callee's JVM; the result is returned into a Return object r created on the caller's side.]
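To make the first case concrete, here is a minimal, self-contained sketch of the mechanism (hypothetical names, not the ADAJ classes): the caller immediately gets back a future-like Return object, while a locally created thread performs the call and fills it in.

class Return {
    private Object result;
    private boolean done;
    synchronized void set(Object res) { result = res; done = true; notifyAll(); }
    synchronized Object get() throws InterruptedException {
        while (!done) wait(); // deferred wait until the result arrives
        return result;
    }
}

interface Callee { Object m(Object param); }

static Return asyncInvoke(Callee callee, Object param) {
    Return r = new Return();                          // created on the caller's side
    new Thread(() -> r.set(callee.m(param))).start(); // the caller is not blocked
    return r;
}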

In the second case, as shown in Figure 4, the new thread dealing with the remote call is created on the remote object's machine. This means that when a remote asynchronous call is made, the caller is blocked for the time of a thread creation and start. The result object is still created on the caller's machine and passed as a parameter to the remote method.

[Figure 4. Server-side asynchronism: the caller invokes m(param) across JVMs; the thread (newCallee) is created on the callee's JVM, and the result is returned into the Return object r on the caller's side.]

The two methods correspond to what is called, in the Jacob platform and previously in the GARF project [9], client-side and server-side asynchronism. Server-side asynchronism is not appropriate in our environment from two points of view. Firstly, the blocking time of the caller corresponds to a remote invocation which executes a thread creation and start, while in client-side asynchronism the caller is only blocked during a local thread creation and start.


The second point concerns the recovery of results. The Return object is not a global (remote) object, a choice we made in order to avoid distant calls and remote object placement decisions. In these circumstances, problems due to the semantics of local objects appear. The call being remote, in the case of server-side asynchronism the result object is copied on the server side: the update is performed on the server-side copy, while the deferred wait on the client side is done on the original object, so the two no longer refer to the same object. Consequently, we choose client-side asynchronism, which corresponds to packing the call in a local threaded object, that is, a local object which redirects remote calls so that they are executed in a thread. Thus, executing the <method> method of the RemoteClass class in an asynchronous way on the obj instance of the RemoteClass class is done as in the following code:

RemoteClassAsync.<method>(obj, <params>);

where RemoteClassAsync is the transcription of the RemoteClass class, obtained by rewriting every public method of the class into a new one that creates and starts a thread which deals with the remote method invocation. The thread executes:

obj.<method>(<params>);

Using this type of asynchronous call, the programmer can use primitive types in the signatures of the methods. Besides, if the call is badly-formed, a compile-time error is generated, as the methods of the remote class are eventually invoked. This solution has its own drawback, which is that the results returned by methods must be of Object type. The implementation consists of a parser which analyses user code and performs the transformations, using the classes of the java.nio and java.util.regex packages of the JDK 1.4 [15]. This tool is currently under development and testing. A similar mechanism of semi-transparent calls can be achieved for distribute calls. In this case, a distribute call is rewritten in the following way:

DistributedCollection distrCol = ...;
Collector c = SequenceDistr.max(distrCol);

where the SequenceDistr class is generated from the Sequence class. Making asynchronous and distribute calls semi-transparent regains the Java language's feature of being strongly typed, an aspect added to the methodology of programming proposed by ADAJ.
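As an illustration of the rewriting, a generated companion class might take the following shape (a hedged sketch reusing the Return future from the earlier sketch; the actual generated code is not shown in the paper):

// The user's remote class:
class Sequence extends VectorFragment {
    public Object[] max() { ... }
}

// Generated companion: one statically-typed method per public method,
// which starts a thread running the real call and returns the future.
class SequenceAsync {
    static Return max(Sequence obj) {
        Return r = new Return();
        new Thread(() -> r.set(obj.max())).start();
        return r;
    }
}

Since SequenceAsync.max is an ordinary statically-typed method, a wrong method name or wrong parameter types no longer compile, and primitive parameters need no wrapping.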

6. Related work

In this section we briefly describe other parallel environments and libraries for developing distributed Java applications.

The ProActive PDC [1] is a 100% Java library which provides asynchronous calls on "active objects". Active objects are single-threaded and distributed explicitly by the instantiation code. Asynchronism is systematically present, but in invocations between active objects only. The DPJ library [10] proposes containers in order to handle data distribution over the network. Object parallelism characterizes this type of structure. Distribution of data is not transparent, and communications are achieved through the MPI protocol. JavaParty [11] is an environment for executing Java applications on clusters of workstations. It offers powerful tools such as remote object creation, which is transparent under default or user-defined strategies, and migration of remote objects (for objects that are not currently executing any method). Parallelism remains an explicit concern for the user. Jacob [5] (Java Active Containers of Objects) is another distributed platform for dynamic asynchronous remote method invocations. The concept of active container enables objects to dynamically become remote, and methods invoked on them to become asynchronous (totally or partially, at both the server and the client side).

Compared to the cited projects, ADAJ offers both a programming and an execution environment for distributed and parallel applications. Location transparency is achieved, as in the JavaParty project, for global objects, but their location in ADAJ is based on an intelligent placement strategy that exploits information about the platform's features. Remote access transparency enables the use of global objects just like local objects. However, the user has to be conscious of the difference in semantics between global and local objects (just as in RMI between remote and local objects). A specific pattern, the distributed collection, is provided in order to transparently express parallelism over distributed objects and to easily recover results, which must be expressed explicitly in JavaParty (as shown in the example). Asynchronous calls are provided both at the level of the distributed collection and at the level of global objects. Inspired by the Jacob platform, a second version, which is under development, provides semi-transparent client-side asynchronism that avoids reflection, and so regains one important feature of the Java language, that of being strongly typed.

7. Conclusion and future work

Programmers nowadays have at their disposal various execution platforms for their applications, from powerful parallel computers to extended networks or clusters of workstations. What they mostly need is a powerful environment to develop applications in a parallel and distributed manner. In ADAJ, which is at the same time a development and an execution environment, the programmer has the possibility


to express parallelism through distributed collections or asynchronous calls, to transparently deploy remote Java objects across a cluster of machines and to transparently access them. These features are built on both Java's features and the JavaParty distributed object model. We presented an example of distribute and asynchronous calls in ADAJ which shows the facility that distributed collections introduce for expressing parallelism. Designing in ADAJ reduces the programmer's effort by implicitly invoking threads for a distribute or asynchronous call, and by offering tools to easily recover the results. For some (strongly parallel) applications, parallelism induces a considerable speedup compared to the sequential version. In ADAJ, this speedup is comparable to the speedups obtained using an RMI or a JavaParty version [7]. Tests on an application of a genetic algorithm to the traveling salesman problem, executing on a cluster of workstations (Pentium III, 128M, 256MHz, running Linux), showed a small overhead (from 0% to 2.33%). The overhead mainly comes from the reflection mechanism used in the implementation of the first version, which we hope to reduce considerably, both through the improvements presented here (currently under development in the second version) and by exploiting the observation mechanisms in load balancing strategies.

References

[1] F. Baude, D. Caromel, F. Huet, and J. Vayssière. Communicating Mobile Objects in Java. HPCN 2000, LNCS 1823:633-643, 2000.
[2] A. Bouchi, E. Leprêtre, and P. Lecouffe. Un mécanisme d'observation des objets distribués en Java. In Rencontres Francophones du Parallélisme des Architectures et des Systèmes, pages 171-176. RenPar'12, 2000.
[3] A. Bouchi, R. Olejnik, and B. Toursel. Java Tools for Measurements of the Machines Loads. In NATO Advanced Research Workshop Romania - Advanced Environments, Tools and Applications for Cluster Computing, September 2001.
[4] UC Berkeley Computer Science Division. Ninja Project - A Pool of Threads. http://ninja.cs.berkeley.edu/javadocs_ninja2/ninja2.core.io_core.thread_pool.ThreadPool.html, 1999.
[5] DS&O Research Team. Projet Jacob - Active Container of Object for Java. http://jccf.labri.u-bordeaux.fr/jodo/.
[6] V. Felea, N. Devesa, P. Lecouffe, and B. Toursel. Expressing Parallelism in Java Applications Distributed on Clusters. In NATO Advanced Research Workshop Romania - Advanced Environments, Tools and Applications for Cluster Computing, September 2001.
[7] V. Felea, B. Toursel, and N. Devesa. Les collections distribuées : un outil pour la conception d'applications Java parallèles. Submitted to Technique et Science Informatiques, 2001.
[8] Apache Software Foundation. The Jakarta Project - Turbine. http://jakarta.apache.org/turbine/turbine-2/projectmap.html, 1999-2001.

[9] B. Garbinato, R. Guerraoui, and K. Mazouni. Distributed Programming in GARF. In R. Guerraoui, O. Nierstrasz, and M. Riveill, editors, Object-Based Distributed Programming, LNCS volume 791, pages 225-239. Springer-Verlag, 1994.
[10] V. Ivannikov, S. Gaissaryan, M. Domrachev, V. Etch, and N. Shtaltovnaya. DPJ: Java class library for development of data-parallel programs. Institute for System Programming, Russian Academy of Sciences, 1997.
[11] M. Philippsen and M. Zenger. JavaParty - Transparent Remote Objects in Java. ACM 1997 Workshop on Java for Science and Engineering Computation, June 1997.
[12] Sun Products - JDK 1.2. API & Language Documentation. http://java.sun.com/products/jdk/1.2/docs/api/overview-summary.html.
[13] Sun Products - JDK 1.2. Collections. http://java.sun.com/products/jdk/1.2/docs/guide/collections/index.html.
[14] Sun Products - JDK 1.2. Remote Method Invocation. http://java.sun.com/products/jdk/1.2/docs/guide/rmi/index.html.
[15] Sun Products - JDK 1.4. New I/O APIs. http://java.sun.com/j2se/1.4/docs/guide/nio/index.html.
