Parallel CORBA Objects


  • Parallel CORBA Objects

    May 22nd, 2000, ARC « Couplage »

    Christophe René (IRISA/IFSIC)

  • Contents

    Introduction
    Parallel CORBA object concept
    Performance evaluation
    Encapsulation example
    Conclusion

  • Introduction

    Objective
    To design a Problem Solving Environment able to integrate a large number of codes that simulate a physical problem
    To perform multi-physics simulations (code coupling)

    Constraints
    Simulation codes may be located on different machines: distributed processing
    Simulation codes may require high-performance computers: parallel processing

    Approach
    Combine parallel and distributed technologies using a component approach (MPI + CORBA)

  • CORBA Generalities

    CORBA: Common Object Request Broker Architecture
    Open standard for distributed object computing, defined by the OMG
    Software bus, object oriented
    Remote invocation mechanism
    Hardware, operating system and programming language independence
    Vendor independence (interoperability)

    Problems to face
    Performance issues
    Poor integration of high-performance computing environments with CORBA

  • How does CORBA work?

    Interface Definition Language (IDL): describes the remote object
    IDL compiler: generates stub and skeleton code
    IDL stub (proxy): handles remote invocation
    IDL skeleton: links the object implementation to the ORB

    interface MatrixOperations {
      const long SIZE = 100;
      typedef double Vector[ SIZE ];
      typedef double Matrix[ SIZE ][ SIZE ];
      void multiply( in Matrix A, in Vector B, out Vector C );
    };

    [Diagram: the client invokes the object through the IDL stub; the Object Request Broker (ORB) carries the request to the IDL skeleton and object adapter (OA) on the server, which dispatch it to the object implementation]
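    To make the stub/skeleton picture concrete, here is a minimal C++ client sketch, assuming stubs generated from the MatrixOperations interface above with the standard IDL-to-C++ mapping; the generated header name and the stringified-IOR bootstrap are assumptions, not part of the original slides.

    // Minimal sequential client sketch for the MatrixOperations interface
    // above, using the standard IDL-to-C++ mapping. The generated header
    // name and the stringified IOR passed as argv[1] are assumptions.
    #include <iostream>
    #include "MatrixOperations.hh"   // produced by the IDL compiler (name varies)

    int main( int argc, char* argv[] )
    {
      // Initialize the ORB and turn the stringified reference into a proxy.
      CORBA::ORB_var orb = CORBA::ORB_init( argc, argv );
      CORBA::Object_var obj = orb->string_to_object( argv[ 1 ] );
      MatrixOperations_var ops = MatrixOperations::_narrow( obj );

      // IDL fixed-size arrays map to plain C++ arrays in the generated code.
      MatrixOperations::Matrix A;
      MatrixOperations::Vector B, C;
      for ( int i = 0; i < MatrixOperations::SIZE; i++ ) {
        B[ i ] = 1.0;
        for ( int j = 0; j < MatrixOperations::SIZE; j++ )
          A[ i ][ j ] = ( i == j ) ? 1.0 : 0.0;
      }

      // The stub marshals the arguments, the ORB carries the request to the
      // skeleton, and the object implementation fills the out parameter C.
      ops->multiply( A, B, C );
      std::cout << "C[ 0 ] = " << C[ 0 ] << std::endl;
      return 0;
    }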

  • Encapsulating MPI-based parallel codes into CORBA objects

    Master/slave approach
    One SPMD code acts as the master whereas the others act as slaves
    The master drives the execution of the slaves through message passing

    Drawbacks
    Lack of scalability when communicating through the ORB
    Requires modifications to the original MPI code

    Advantage
    Can be used with any CORBA implementation

  • Master/slave approach in detail

    The master has to
    select the method to invoke within the slave processes
    scatter data to the slave processes
    gather data from the slave processes

    Master process: CORBA + MPI initialization
    Slave processes: MPI initialization
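    A minimal MPI sketch of this master/slave scheme is given below; the method identifiers, the 64-element chunk size and the doubling computation are invented for the example.

    // Minimal MPI sketch of the master/slave scheme described above. The
    // master (rank 0) is the only process visible to the ORB; it forwards
    // each CORBA request to the slaves by broadcasting a method identifier
    // and scattering the data.
    #include <mpi.h>
    #include <vector>

    enum MethodId { SHUTDOWN = 0, MULTIPLY = 1 };
    const int CHUNK = 64;

    // Loop executed by every slave process.
    void run_slave()
    {
      int method;
      std::vector<double> local( CHUNK );
      for ( ;; ) {
        // Wait for the master to announce which operation to execute.
        MPI_Bcast( &method, 1, MPI_INT, 0, MPI_COMM_WORLD );
        if ( method == SHUTDOWN ) break;
        // Receive this slave's share of the data, compute, send it back.
        MPI_Scatter( NULL, 0, MPI_DOUBLE,
                     &local[ 0 ], CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD );
        for ( int i = 0; i < CHUNK; i++ ) local[ i ] *= 2.0;
        MPI_Gather( &local[ 0 ], CHUNK, MPI_DOUBLE, NULL, 0, MPI_DOUBLE,
                    0, MPI_COMM_WORLD );
      }
    }

    // Called by the master from inside the CORBA method implementation;
    // data is assumed to hold CHUNK elements per process (master included).
    void master_invoke( std::vector<double>& data )
    {
      int method = MULTIPLY;
      std::vector<double> local( CHUNK );
      MPI_Bcast( &method, 1, MPI_INT, 0, MPI_COMM_WORLD );
      MPI_Scatter( &data[ 0 ], CHUNK, MPI_DOUBLE,
                   &local[ 0 ], CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD );
      for ( int i = 0; i < CHUNK; i++ ) local[ i ] *= 2.0;
      MPI_Gather( &local[ 0 ], CHUNK, MPI_DOUBLE,
                  &data[ 0 ], CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD );
    }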

  • Parallel CORBA object concept

    A collection of identical CORBA objects
    Transparent to the client
    Parallel remote invocation
    Data distribution

    [Diagram: a sequential client's stub invokes a parallel CORBA object; each SPMD code of the parallel server sits behind its own skeleton and object adapter (OA), with an MPI communication layer between the SPMD codes and the CORBA ORB linking client and server]

  • Problems to face

    Communication between
    a sequential client and a parallel server
    a parallel client and a sequential server
    a parallel client and a parallel server

    Implementation constraint
    Do not modify the ORB core, to keep interoperability features

    Approach
    Modify the stub and skeleton code
    Extend the IDL compiler

  • Extended-IDL

    Collection specification
    Size specification: number of requests to send
    Shape specification: used to distribute arrays

    Data distribution specification
    Scatter and gather elements of an array

    Reduction operator specification
    Perform collective operations on request replies

  • Specifying the number of objects in a collection

    Several ways:
    an integer value
    an interval of integer values
    a mathematical function (power, exponential, multiple)
    the character *

    interface[ 4 ] Example1 { /* ... */ };
    interface[ 2 .. 8 ] Example2 { /* ... */ };
    interface[ 2 ^ n ] Example3 { /* ... */ };
    interface[ * ] Example4 { /* ... */ };

  • Shape of the object collection

    How can we organize 8 objects?

    The shape depends on the data distribution specification, but users may add special requirements

  • Shape of the object collection (cont'd)

    Specification of the shape
    Size of one dimension: integer value, mathematical function, multiple
    Dependences between dimensions

    interface[ 8: 2, 4 ] Example1 { /* ... */ };
    interface[ *: 2 ] Example2 { /* ... */ };
    interface[ *: *, 2 ] Example3 { /* ... */ };
    interface[ *: 2 * n ] Example4 { /* ... */ };
    interface[ x ^ 2: n, n ] Example5 { /* ... */ };

  • Inheritance mechanism

    Allowed only under some constraints:
    the numbers of processors must match
    the shapes of the virtual node arrays must match

    interface[ * ] Example1 { /* ... */ };
    interface[ 2 ^ n ] Example2 : Example1 { /* ... */ };    // inheritance not allowed

    interface[ 2 ^ n ] Example1 { /* ... */ };
    interface[ * ] Example2 : Example1 { /* ... */ };        // inheritance allowed

    interface[ * ] Example1 { /* ... */ };
    interface[ *: 2 ] Example2 : Example1 { /* ... */ };     // inheritance allowed

    interface[ *: 2 ] Example1 { /* ... */ };
    interface[ *: 3 ] Example2 : Example1 { /* ... */ };     // inheritance not allowed

  • Specifying data distribution

    New keyword: dist
    Only arrays and sequences may be distributed
    Available distribution modes: BLOCK, BLOCK( size ), CYCLIC, CYCLIC( size ), *

    interface[ * ] Example {
      typedef double Arr1[ 8 ];
      typedef Arr1 Arr2[ 8 ];
      typedef sequence< double > Seq;
      void Op1( in dist[ CYCLIC ] Arr1 A, in Arr1 B, out dist[ BLOCK ][ * ] Arr2 C );
      void Op2( in dist[ BLOCK ] Seq A, inout Seq B );
    };
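    To make the dist semantics concrete, the sketch below (illustrative only, not the code emitted by the Extended-IDL compiler) splits a sequence declared as in dist[ BLOCK ] Seq A into the contiguous chunk carried by the request sent to each of the n objects of the collection; the chunk size follows the BlockSize rule given on the next slide.

    // Illustrative only: how an argument declared "in dist[ BLOCK ] Seq A"
    // could be split into the contiguous chunk carried by each request sent
    // to an n-object collection. This mirrors the distribution semantics,
    // not the actual generated stub code.
    #include <vector>

    std::vector< std::vector<double> >
    block_chunks( const std::vector<double>& A, int n )
    {
      // BlockSize = ( ArrayLength + ProcNb - 1 ) / ProcNb (see next slide)
      const std::size_t block = ( A.size() + n - 1 ) / n;
      std::vector< std::vector<double> > chunks( n );
      for ( std::size_t i = 0; i < A.size(); ++i )
        chunks[ i / block ].push_back( A[ i ] );
      return chunks;
    }

    For a sequence of length 8 and a collection of 4 objects, each object receives two consecutive elements.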

  • Distribution examples on 2 processors

    Block = Block( BlockSize ), with BlockSize = ( ArrayLength + ProcNb - 1 ) / ProcNb
    Cyclic = Cyclic( 1 )

    [Figure: element-to-processor layouts for Block( 5 ) and Cyclic( 3 ) on 2 processors]
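    A small self-contained check of these rules, computing which of the 2 processors owns each element of a 10-element array under Block( 5 ) and Cyclic( 3 ); the array length and the printout are additions for the example.

    // Illustrative check of the layout rules above: which of the 2 processors
    // owns each element of a 10-element array under Block( 5 ) and Cyclic( 3 ).
    #include <cstdio>

    int block_owner( int i, int block_size )          { return i / block_size; }
    int cyclic_owner( int i, int size, int proc_nb )  { return ( i / size ) % proc_nb; }

    int main()
    {
      const int proc_nb = 2, array_length = 10;
      // Block = Block( BlockSize ) with BlockSize = ( ArrayLength + ProcNb - 1 ) / ProcNb
      const int block_size = ( array_length + proc_nb - 1 ) / proc_nb;   // = 5 here

      for ( int i = 0; i < array_length; ++i )
        std::printf( "element %d: Block(5) -> proc %d, Cyclic(3) -> proc %d\n",
                     i, block_owner( i, block_size ), cyclic_owner( i, 3, proc_nb ) );
      return 0;
    }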

  • Mapping

    Vector distribution on a processor matrix

  • Mapping (cont'd)

    typedef double Arr1[ 8 ];
    typedef Arr1 Arr2[ 8 ];

    interface[ * ] Example1 {
      void Op1( in dist[ * ][ CYCLIC ] Arr2 A );
      void Op2( in dist[ CYCLIC, 2 ][ * ] Arr2 A );
    };

    interface[ * ] Example2 {
      void Op1( in dist[ BLOCK, 2 ][ BLOCK, 1 ] Arr2 A,
                out dist[ BLOCK ][ BLOCK ] Arr2 B );
    };

    Specification allowed

  • Reduction operators

    Available reduction operators:
    min, max
    addition (sum), multiplication (prod)
    bitwise operations (and, or, xor)
    logical operations (and, or, xor)

    interface[ * ] Example1 {
      typedef double Arr[ 8 ];
      cland boolean Op1( in dist[ BLOCK ] Arr A, in double B );
      void Op2( in dist[ CYCLIC ] Arr A, inout cmin double B );
      void Op3( in dist[ CYCLIC( 3 ) ] Arr A, out csum double B );
    };
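    The reduction applies to the values returned by the members of the collection: for out csum double B, for instance, each object returns its partial result and the calling side adds them up. A minimal sketch of that combining step (illustrative, not the generated stub code):

    // Illustrative combining step applied by the calling side to the replies
    // returned by the objects of the collection (replies assumed non-empty).
    #include <vector>
    #include <algorithm>
    #include <numeric>

    // csum: add the partial results carried by each reply.
    double reduce_sum( const std::vector<double>& replies )
    {
      return std::accumulate( replies.begin(), replies.end(), 0.0 );
    }

    // cmin: keep the smallest partial result.
    double reduce_min( const std::vector<double>& replies )
    {
      return *std::min_element( replies.begin(), replies.end() );
    }

    // cland: logical AND of the boolean results returned by Op1.
    bool reduce_land( const std::vector<bool>& replies )
    {
      bool r = true;
      for ( std::size_t i = 0; i < replies.size(); ++i ) r = r && replies[ i ];
      return r;
    }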

  • Summary

    interface MatrixOperations {
      const long SIZE = 100;
      typedef double Vector[ SIZE ];
      typedef double Matrix[ SIZE ][ SIZE ];
      void multiply( in Matrix A, in Vector B, out Vector C );
      double minimum( in Vector A );
    };

  • Code generation problems

    New type for distributed parameters: the distributed array
    The amount of data to be sent to remote objects is known at runtime
    An extension of the CORBA sequence
    The data distribution specification is stored in the distributed array

    Skeleton code generation
    Provide access to the data distribution specification

    Stub code generation
    Scatter and gather data among remote objects
    Manage remote operation invocations
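    As a purely hypothetical illustration of this distributed-array idea (none of the names below come from the actual library), such a type bundles a sequence-like buffer with the distribution specification of each dimension:

    // Hypothetical sketch of a "distributed array": a resizable buffer (the
    // sequence-like part) plus the distribution specification that travels
    // with it. The names and fields are illustrative, not the library's API.
    #include <vector>
    #include <cstddef>

    enum DistMode { DIST_NONE, DIST_BLOCK, DIST_CYCLIC };

    struct DistSpec {
      DistMode    mode;   // BLOCK, CYCLIC or none ("*")
      std::size_t size;   // block / cycle size, 0 = default
    };

    template< typename T >
    struct DistributedArray {
      std::vector<T>           data;    // the elements held locally
      std::vector<DistSpec>    dist;    // one specification per dimension
      std::vector<std::size_t> global;  // global extent of each dimension

      std::size_t length() const { return data.size(); }
    };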

  • Stub code generation

    [Diagram: generated stub sitting between the client code and the CORBA ORB]

  • Parallel CORBA object as client

    Stub code generation when the client is parallel
    Assignment of remote object references to the stubs
    Use of distributed data types as operation parameters in the stubs
    Exchange of data through MPI by the stubs
    to build requests
    to propagate results

  • Parallel CORBA object as client (cont'd)

    With a sequential server, only one process does most of the work:
    gather distributed data from the other processes
    send the single request
    scatter distributed data to the other processes
    broadcast the value of non-distributed data

    [Diagram: parallel client whose SPMD codes each have a stub; the stubs exchange data through MPI, and one of them talks through the CORBA ORB to the sequential server's skeleton, object adapter (OA) and object implementation]
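    A minimal MPI sketch of this single-requester pattern; the chunk size, the dummy scalar and the invoke_remote placeholder (standing for the CORBA call made through the stub) are assumptions for the example.

    // Sketch of the single-requester pattern described above (illustrative).
    // Every process is assumed to own the same number of elements; rank 0
    // gathers them, makes the single remote call, and redistributes results.
    #include <mpi.h>
    #include <vector>

    // Placeholder for the actual CORBA invocation performed through the stub.
    void invoke_remote( std::vector<double>& inout ) { /* ops->op( ... ); */ }

    void call_from_parallel_client( std::vector<double>& local, MPI_Comm comm )
    {
      int rank, nprocs;
      MPI_Comm_rank( comm, &rank );
      MPI_Comm_size( comm, &nprocs );

      const int chunk = static_cast<int>( local.size() );
      std::vector<double> full( rank == 0 ? chunk * nprocs : 0 );

      // Gather the distributed data on the single requesting process.
      MPI_Gather( &local[ 0 ], chunk, MPI_DOUBLE,
                  rank == 0 ? &full[ 0 ] : NULL, chunk, MPI_DOUBLE, 0, comm );

      if ( rank == 0 )
        invoke_remote( full );   // the lone request to the sequential server

      // Scatter the (possibly updated) distributed result back ...
      MPI_Scatter( rank == 0 ? &full[ 0 ] : NULL, chunk, MPI_DOUBLE,
                   &local[ 0 ], chunk, MPI_DOUBLE, 0, comm );
      // ... and broadcast a non-distributed value (a dummy scalar here).
      double scalar_result = 0.0;
      MPI_Bcast( &scalar_result, 1, MPI_DOUBLE, 0, comm );
    }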

  • Parallel CORBA object as client (cont'd)

    p requests are dispatched among n objects (cyclic distribution)
    p < n: data distribution handled by the stub
    p > n: data distribution handled by the skeleton
    p = n: user's choice

    [Diagram: a parallel client (SPMD codes with stubs) invoking a parallel CORBA object (SPMD codes with skeletons and OAs) through the CORBA ORB, with an MPI communication layer on each side]

  • Naming Service

    Currently (as defined by the OMG):
    provides methods to access a remote object through a symbolic name
    associates a symbolic name with one and only one object reference

    Our needs:
    associate a symbolic name with a collection of object references

    Implementation constraint:
    the object reference to the standard Naming Service and to the parallel Naming Service must be the same:
    orb->resolve_initial_references( "NameService" );

    Our solution:
    add new methods to the Naming Service interface

  • Extension to the Naming Service

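    The content of this slide is not in the transcript. As an illustration only, a client might use the extended service as sketched below; resolve_initial_references and the CosNaming types are standard CORBA, whereas resolve_collection is an invented name standing for one of the new methods.

    // Client-side sketch. Only resolve_initial_references and the CosNaming
    // types are standard CORBA; "resolve_collection" is an invented name
    // standing for the new method added to the Naming Service interface.
    #include "CosNaming.hh"   // Naming Service stubs (header name varies)

    void lookup_collection( CORBA::ORB_ptr orb )
    {
      // Same bootstrap object as for the standard Naming Service.
      CORBA::Object_var obj = orb->resolve_initial_references( "NameService" );
      CosNaming::NamingContext_var nc = CosNaming::NamingContext::_narrow( obj );

      CosNaming::Name name;
      name.length( 1 );
      name[ 0 ].id   = CORBA::string_dup( "MatrixOperations" );
      name[ 0 ].kind = CORBA::string_dup( "" );

      // Standard CORBA: one symbolic name, one object reference.
      CORBA::Object_var single = nc->resolve( name );

      // Extended service (hypothetical signature): one symbolic name,
      // a whole collection of object references.
      // ObjectSeq_var members = extended_nc->resolve_collection( name );
    }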

  • Implementation

    Uses the MICO implementation of CORBA

    Library (not included in the ORB core)
    Parallel CORBA object base class
    Functions to handle distributed data
    Data redistribution library interface

    Extended-IDL compiler (an extension of the MICO IDL compiler)
    Parser
    Semantic analyzer
    Code generator

    Experimental platform
    Cluster of PCs
    Parallel machine (NEC Cenju)

  • Comparison between CORBA and MPI

    Benchmark: send / receive
    Platform: 2 dual-Pentium III 500 MHz nodes, Ethernet 100 Mb/s

    Latency:
    MPI: 0.35 ms
    CORBA: 0.52 ms

    Differences due to:
    protocol
    memory allocation

    interface Bench {
      typedef sequence< long > Vector;
      void sendrecv( in Vector in_a, out Vector out_a );
    };
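    For reference, the MPI side of such a send/receive benchmark is a standard ping-pong, sketched below; the message size and iteration count are arbitrary, not those of the experiment. Run it with exactly 2 processes.

    // Generic MPI ping-pong sketch of a send/receive latency benchmark.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main( int argc, char* argv[] )
    {
      MPI_Init( &argc, &argv );
      int rank;
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );

      const int len = 1, iters = 1000;
      std::vector<long> buf( len );
      MPI_Status st;

      MPI_Barrier( MPI_COMM_WORLD );
      double t0 = MPI_Wtime();
      for ( int i = 0; i < iters; ++i ) {
        if ( rank == 0 ) {
          MPI_Send( &buf[ 0 ], len, MPI_LONG, 1, 0, MPI_COMM_WORLD );
          MPI_Recv( &buf[ 0 ], len, MPI_LONG, 1, 0, MPI_COMM_WORLD, &st );
        } else if ( rank == 1 ) {
          MPI_Recv( &buf[ 0 ], len, MPI_LONG, 0, 0, MPI_COMM_WORLD, &st );
          MPI_Send( &buf[ 0 ], len, MPI_LONG, 0, 0, MPI_COMM_WORLD );
        }
      }
      double t1 = MPI_Wtime();

      if ( rank == 0 )
        std::printf( "one-way latency: %f ms\n",
                     ( t1 - t0 ) / iters / 2.0 * 1000.0 );

      MPI_Finalize();
      return 0;
    }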

  • Performance evaluation (cont'd)

    Four experiments:

  • Performance evaluation (cont'd)

    [Chart: execution time (ms) vs. number of objects in the collection (1, 2, 4, 8); matrix order = 256, element type = long]

  • Performance evaluation (cont'd)

    [Chart: results for matrix order = 512]

  • Encapsulation example

    Original code:

    int main( int argc, char* argv[] )
    {
      /* ... */
      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &id );
      MPI_Comm_size( MPI_COMM_WORLD, &size );
      /* ... */
      MPI_Send( ... );
      MPI_Recv( ... );
      /* ... */
      MPI_Finalize();
    }

  • Encapsulation example (cont'd)

  • Encapsulation example (cont'd)
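    The content of these two slides (the encapsulated version of the code) is not in the transcript. As an illustration only, the MPI code above would typically move into the method of a CORBA servant, with MPI and the ORB initialized once at server start-up; the interface name SimulationService and the generated POA_SimulationService base class are assumptions following the standard C++ mapping.

    // Illustrative encapsulation of the original MPI code into a CORBA
    // servant. "SimulationService" is an assumed interface name; the base
    // class POA_SimulationService would come from the generated skeleton.
    #include <mpi.h>
    #include "SimulationService.hh"   // generated from the (assumed) IDL

    class SimulationService_impl : public virtual POA_SimulationService
    {
    public:
      // The body of the original main() becomes the remotely callable method.
      void run()
      {
        int id, size;
        MPI_Comm_rank( MPI_COMM_WORLD, &id );
        MPI_Comm_size( MPI_COMM_WORLD, &size );
        /* ... original computation, MPI_Send( ... ); MPI_Recv( ... ); ... */
      }
    };

    int main( int argc, char* argv[] )
    {
      // MPI and the ORB are initialized once, when the server starts,
      // instead of inside the computation itself.
      MPI_Init( &argc, &argv );
      CORBA::ORB_var orb = CORBA::ORB_init( argc, argv );

      SimulationService_impl servant;
      /* ... activate the servant with the object adapter, publish its
             reference (e.g. via the Naming Service), then run the ORB ... */

      MPI_Finalize();
      return 0;
    }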

  • Conclusion

    We have shown that MPI and CORBA can be combined for distributed and parallel programming

    The implementation depends on the particular CORBA product used: a standardized API for the ORB is needed

    Response to the OMG RFI "Supporting Aggregated Computing in CORBA"