140. Library Example #2



The main program:

   int main(int argc, char **argv) 
   { 
     int ma, mb; 
     MPI_Group MPI_GROUP_WORLD, group_a, group_b; 
     MPI_Comm comm_a, comm_b; 
 
     static int list_a[] = {0, 1}; 
#if defined(EXAMPLE_2B) || defined(EXAMPLE_2C) 
     static int list_b[] = {0, 2, 3}; 
#else  /* EXAMPLE_2A */ 
     static int list_b[] = {0, 2}; 
#endif 
     int size_list_a = sizeof(list_a)/sizeof(int); 
     int size_list_b = sizeof(list_b)/sizeof(int); 
 
     ... 
     MPI_Init(&argc, &argv); 
     MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD); 
 
     MPI_Group_incl(MPI_GROUP_WORLD, size_list_a, list_a, &group_a); 
     MPI_Group_incl(MPI_GROUP_WORLD, size_list_b, list_b, &group_b); 
 
     MPI_Comm_create(MPI_COMM_WORLD, group_a, &comm_a); 
     MPI_Comm_create(MPI_COMM_WORLD, group_b, &comm_b); 
 
     if(comm_a != MPI_COMM_NULL) 
        MPI_Comm_rank(comm_a, &ma); 
     if(comm_b != MPI_COMM_NULL) 
        MPI_Comm_rank(comm_b, &mb); 
 
     if(comm_a != MPI_COMM_NULL) 
        lib_call(comm_a); 
 
     if(comm_b != MPI_COMM_NULL) 
     { 
       lib_call(comm_b); 
       lib_call(comm_b); 
     } 
 
     if(comm_a != MPI_COMM_NULL) 
       MPI_Comm_free(&comm_a); 
     if(comm_b != MPI_COMM_NULL) 
       MPI_Comm_free(&comm_b); 
     MPI_Group_free(&group_a); 
     MPI_Group_free(&group_b); 
     MPI_Group_free(&MPI_GROUP_WORLD); 
     MPI_Finalize(); 
     return 0; 
   } 

The library:

   void lib_call(MPI_Comm comm) 
   { 
     int me, done = 0; 
     MPI_Status status;  
     MPI_Comm_rank(comm, &me); 
     if(me == 0) 
        while(!done) 
        { 
           MPI_Recv(..., MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &status); 
           ... 
        } 
     else 
     { 
       /* work */ 
       MPI_Send(..., 0, ARBITRARY_TAG, comm); 
        ... 
     } 
#ifdef EXAMPLE_2C 
     /* include this barrier for safety; exclude it for no safety: */ 
     MPI_Barrier(comm); 
#endif 
   } 
The above example is really three examples, depending on whether or not rank 3 is included in list_b, and whether or not a synchronization is included in lib_call. It illustrates that, despite contexts, subsequent calls to lib_call with the same context need not be safe from one another (colloquially, "back-masking"): a wildcard receive posted in one call can match a message sent by a process that has already raced ahead into the next call. Safety is realized if the MPI_Barrier is added. What this demonstrates is that libraries have to be written carefully, even with contexts. When rank 3 is excluded, the synchronization is not needed for safety from back-masking, because rank 0 then has a single sender (rank 2), and pairwise message ordering alone keeps the two calls' messages in order.
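
To make the hazard concrete, here is a minimal runnable sketch of lib_call; the one-int payload, the fixed message count, and the tag value are assumptions added so the fragment compiles, and are not part of the example above. Because the receive uses MPI_ANY_SOURCE, the first call's loop can count a fast sender's next-call message while a slow sender's current-call message is still in flight; the leftover message is then wrongly matched by the next call. The final MPI_Barrier prevents any process from entering the next call before rank 0 has drained the current one.

   #include <mpi.h> 

   void lib_call(MPI_Comm comm) 
   { 
     int me, nprocs, received = 0, payload; 
     MPI_Status status; 
     MPI_Comm_rank(comm, &me); 
     MPI_Comm_size(comm, &nprocs); 
     if(me == 0) 
     { 
        /* wildcard receive: vulnerable to back-masking without 
           the barrier below */ 
        while(received < nprocs - 1) 
        { 
           MPI_Recv(&payload, 1, MPI_INT, MPI_ANY_SOURCE, 
                    MPI_ANY_TAG, comm, &status); 
           received++; 
        } 
     } 
     else 
     { 
        payload = me;  /* work */ 
        MPI_Send(&payload, 1, MPI_INT, 0, /* ARBITRARY_TAG */ 7, comm); 
     } 
     /* no process can enter the next lib_call before rank 0 has 
        received every current-call message: */ 
     MPI_Barrier(comm); 
   } 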

Algorithms like "reduce" and "allreduce" have strong enough source selectivity properties that they are inherently safe (no back-masking), provided that MPI delivers its basic guarantees. So are multiple calls to a typical tree-broadcast algorithm with the same root or different roots (see [45]). Here we rely on two guarantees of MPI: pairwise ordering of messages between processes in the same context, and source selectivity; deleting either feature removes the guarantee that back-masking cannot occur.
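
For illustration, the following is a minimal sketch of such a deterministic tree broadcast (a binomial tree rooted at rank 0); the fixed tag value and int payload are assumptions of the sketch. Every receive names its exact source and tag, so pairwise ordering between each parent/child pair keeps consecutive calls on the same communicator separate, with no barrier required.

   #include <mpi.h> 

   #define BCAST_TAG 99  /* any fixed, agreed-upon tag */ 

   void tree_bcast(int *buf, int count, MPI_Comm comm) 
   { 
     int me, nprocs, mask = 1; 
     MPI_Comm_rank(comm, &me); 
     MPI_Comm_size(comm, &nprocs); 

     /* receive from the unique parent (the root, rank 0, has none) */ 
     while(mask < nprocs) 
     { 
        if(me & mask) 
        { 
           MPI_Recv(buf, count, MPI_INT, me - mask, BCAST_TAG, 
                    comm, MPI_STATUS_IGNORE); 
           break; 
        } 
        mask <<= 1; 
     } 
     /* forward to children, largest subtree first */ 
     for(mask >>= 1; mask > 0; mask >>= 1) 
        if(me + mask < nprocs) 
           MPI_Send(buf, count, MPI_INT, me + mask, BCAST_TAG, comm); 
   } 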

Algorithms that try to do non-deterministic broadcasts, or other calls that include wildcard operations, will not in general have the good properties of the deterministic implementations of "reduce," "allreduce," and "broadcast." Such algorithms would have to use monotonically increasing tags (within a communicator scope) to keep things straight.
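
One way to obtain such tags is to cache a counter on the communicator using the MPI-2 attribute-caching interface. The sketch below is only an illustration: the keyval, the next_tag helper, and the malloc'd counter are assumptions, and a real library would free the counter in a delete callback and keep tags below the communicator's MPI_TAG_UB value. Every process of the communicator must call next_tag the same number of times so that all agree on the tag sequence.

   #include <mpi.h> 
   #include <stdlib.h> 

   static int tag_keyval = MPI_KEYVAL_INVALID; 

   /* return a fresh tag, distinct within this communicator's scope */ 
   int next_tag(MPI_Comm comm) 
   { 
     int flag, *counter; 
     if(tag_keyval == MPI_KEYVAL_INVALID) 
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, 
                               MPI_COMM_NULL_DELETE_FN, 
                               &tag_keyval, NULL); 
     MPI_Comm_get_attr(comm, tag_keyval, &counter, &flag); 
     if(!flag) 
     { 
        /* first use on this communicator */ 
        counter = (int *) malloc(sizeof(int)); 
        *counter = 0; 
        MPI_Comm_set_attr(comm, tag_keyval, counter); 
     } 
     return (*counter)++; 
   } 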

All of the foregoing assumes that "collective calls" are implemented with point-to-point operations. MPI implementations may or may not implement collective calls using point-to-point operations. These algorithms are used to illustrate the issues of correctness and safety, independent of how MPI implements its collective calls. See also Section Formalizing the Loosely Synchronous Model.


