178. Startup



One goal of MPI is to achieve source code portability. By this we mean that a program written using MPI and complying with the relevant language standards is portable as written, and must not require any source code changes when moved from one system to another. This explicitly does not say anything about how an MPI program is started or launched from the command line, nor what the user must do to set up the environment in which an MPI program will run. However, an implementation may require some setup to be performed before other MPI routines may be called. To provide for this, MPI includes an initialization routine MPI_INIT.

MPI_INIT()

int MPI_Init(int *argc, char ***argv)

MPI_INIT(IERROR)
INTEGER IERROR
void MPI::Init(int& argc, char**& argv)
void MPI::Init()

This routine must be called before any other MPI routine. It must be called at most once; subsequent calls are erroneous (see MPI_INITIALIZED).

All MPI programs must contain a call to MPI_INIT; this routine must be called before any other MPI routine (apart from MPI_GET_VERSION, MPI_INITIALIZED, and MPI_FINALIZED) is called. The version for ISO C accepts the argc and argv that are provided by the arguments to main:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* parse arguments */
    /* main program    */

    MPI_Finalize();     /* see below */
    return 0;
}
The Fortran version takes only IERROR.

Conforming implementations of MPI are required to allow applications to pass NULL for both the argc and argv arguments of main in C and C++. In C++, there is an alternative binding for MPI::Init that does not have these arguments at all.
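
For example, an application whose MPI initialization is performed without access to the command line can pass NULL for both arguments. A minimal sketch of this usage:

    #include <mpi.h>

    int main(void)
    {
        MPI_Init(NULL, NULL);   /* argc and argv are not supplied */

        /* main program */

        MPI_Finalize();
        return 0;
    }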


Rationale.

In some applications, libraries may be making the call to MPI_Init, and may not have access to argc and argv from main. It is anticipated that applications requiring special information about the environment or information supplied by mpiexec can get that information from environment variables. (End of rationale.)

MPI_FINALIZE()

int MPI_Finalize(void)

MPI_FINALIZE(IERROR)
INTEGER IERROR
void MPI::Finalize()

This routine cleans up all MPI state. Each process must call MPI_FINALIZE before it exits. Unless there has been a call to MPI_ABORT, each process must ensure that all pending non-blocking communications are (locally) complete before calling MPI_FINALIZE. Further, at the instant at which the last process calls MPI_FINALIZE, all pending sends must be matched by a receive, and all pending receives must be matched by a send.

For example, the following program is correct:

        Process 0                Process 1 
        ---------                --------- 
        MPI_Init();              MPI_Init(); 
        MPI_Send(dest=1);        MPI_Recv(src=0); 
        MPI_Finalize();          MPI_Finalize(); 
Without the matching receive, the program is erroneous:
        Process 0                Process 1 
        -----------              ----------- 
        MPI_Init();              MPI_Init(); 
        MPI_Send (dest=1); 
        MPI_Finalize();          MPI_Finalize(); 

A successful return from a blocking communication operation or from MPI_WAIT or MPI_TEST tells the user that the buffer can be reused and means that the communication is completed by the user, but does not guarantee that the local process has no more work to do. A successful return from MPI_REQUEST_FREE with a request handle generated by an MPI_ISEND nullifies the handle but provides no assurance of operation completion. The MPI_ISEND is complete only when it is known by some means that a matching receive has completed. MPI_FINALIZE guarantees that all local actions required by communications the user has completed will, in fact, occur before it returns.

MPI_FINALIZE guarantees nothing about pending communications that have not been completed (completion is assured only by MPI_WAIT, MPI_TEST, or MPI_REQUEST_FREE combined with some other verification of completion).


Example This program is correct:

rank 0                          rank 1 
===================================================== 
...                             ... 
MPI_Isend();                    MPI_Recv(); 
MPI_Request_free();             MPI_Barrier(); 
MPI_Barrier();                  MPI_Finalize(); 
MPI_Finalize();                 exit(); 
exit(); 


Example This program is erroneous and its behavior is undefined:

rank 0                          rank 1 
===================================================== 
...                             ... 
MPI_Isend();                    MPI_Recv(); 
MPI_Request_free();             MPI_Finalize(); 
MPI_Finalize();                 exit(); 
exit(); 

If no MPI_BUFFER_DETACH occurs between an MPI_BSEND (or other buffered send) and MPI_FINALIZE, the MPI_FINALIZE implicitly supplies the MPI_BUFFER_DETACH.


Example This program is correct, and after the MPI_Finalize, it is as if the buffer had been detached.

rank 0                          rank 1 
===================================================== 
...                             ... 
buffer = malloc(1000000);       MPI_Recv(); 
MPI_Buffer_attach();            MPI_Finalize(); 
MPI_Bsend();                    exit(); 
MPI_Finalize(); 
free(buffer); 
exit(); 


Example In this example, MPI_Iprobe() must return a FALSE flag. MPI_Test_cancelled() must return a TRUE flag, independent of the relative order of execution of MPI_Cancel() in process 0 and MPI_Finalize() in process 1.

The MPI_Iprobe() call is there to make sure the implementation knows that the "tag1" message exists at the destination, without being able to claim that the user knows about it.


rank 0                          rank 1 
======================================================== 
MPI_Init();                     MPI_Init(); 
MPI_Isend(tag1); 
MPI_Barrier();                  MPI_Barrier(); 
                                MPI_Iprobe(tag2); 
MPI_Barrier();                  MPI_Barrier(); 
                                MPI_Finalize(); 
                                exit(); 
MPI_Cancel(); 
MPI_Wait(); 
MPI_Test_cancelled(); 
MPI_Finalize(); 
exit(); 
 

Advice to implementors.

An implementation may need to delay the return from MPI_FINALIZE until all potential future message cancellations have been processed. One possible solution is to place a barrier inside MPI_FINALIZE. (End of advice to implementors.)

Once MPI_FINALIZE returns, no MPI routine (not even MPI_INIT) may be called, except for MPI_GET_VERSION, MPI_INITIALIZED, and MPI_FINALIZED. Each process must complete any pending communication it initiated before it calls MPI_FINALIZE. If the call returns, each process may continue local computations, or exit, without participating in further MPI communication with other processes. MPI_FINALIZE is collective over all connected processes. If no processes were spawned, accepted, or connected then this means over MPI_COMM_WORLD; otherwise it is collective over the union of all processes that have been and continue to be connected, as explained in Section Releasing Connections.
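
Because MPI_FINALIZED remains callable after MPI_FINALIZE, cleanup code that may run after the main program (for example, a handler registered with atexit) can use it to avoid making MPI calls once MPI has been finalized. A minimal sketch; the handler name and its registration are illustrative, not part of MPI:

    /* Hypothetical exit-time handler: make MPI calls only if
       MPI_FINALIZE has not yet completed. */
    void cleanup_handler(void)
    {
        int finalized;
        MPI_Finalized(&finalized);
        if (!finalized) {
            /* MPI may still be used here */
        }
    }
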
Advice to implementors.

Even though a process has completed all the communication it initiated, such communication may not yet be completed from the viewpoint of the underlying MPI system. E.g., a blocking send may have completed, even though the data is still buffered at the sender. The MPI implementation must ensure that a process has completed any involvement in MPI communication before MPI_FINALIZE returns. Thus, if a process exits after the call to MPI_FINALIZE, this will not cause an ongoing communication to fail. (End of advice to implementors.)

Although it is not required that all processes return from MPI_FINALIZE, it is required that at least process 0 in MPI_COMM_WORLD return, so that users can know that the MPI portion of the computation is over. In addition, in a POSIX environment, users may desire to supply an exit code for each process that returns from MPI_FINALIZE.


Example The following illustrates the use of the requirement that at least one process return, and that process 0 is known to be one of the processes that return. One wants code like the following to work no matter how many processes return.


    ... 
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank); 
    ... 
    MPI_Finalize(); 
    if (myrank == 0) { 
        resultfile = fopen("outfile","w"); 
        dump_results(resultfile); 
        fclose(resultfile); 
    } 
    exit(0); 

MPI_INITIALIZED( flag )
OUT flag    Flag is true if MPI_INIT has been called and false otherwise

int MPI_Initialized(int *flag)

MPI_INITIALIZED(FLAG, IERROR)
LOGICAL FLAG
INTEGER IERROR
bool MPI::Is_initialized()

This routine may be used to determine whether MPI_INIT has been called. MPI_INITIALIZED returns true if the calling process has called MPI_INIT. Whether MPI_FINALIZE has been called does not affect the behavior of MPI_INITIALIZED. It is one of the few routines that may be called before MPI_INIT is called.
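
For example, a library routine that may be called either before or after the application's own call to MPI_INIT can use MPI_INITIALIZED to decide whether it must initialize MPI itself. A minimal sketch; the routine name lib_init is hypothetical:

    /* Initialize MPI only if the application has not already done so. */
    void lib_init(void)
    {
        int flag;
        MPI_Initialized(&flag);
        if (!flag)
            MPI_Init(NULL, NULL);
    }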

MPI_ABORT( comm, errorcode )
IN comm         communicator of tasks to abort
IN errorcode    error code to return to invoking environment

int MPI_Abort(MPI_Comm comm, int errorcode)

MPI_ABORT(COMM, ERRORCODE, IERROR)
INTEGER COMM, ERRORCODE, IERROR
void MPI::Comm::Abort(int errorcode)

This routine makes a "best attempt" to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program. It may not be possible for an MPI implementation to abort only the processes represented by comm if this is a subset of the processes. In this case, the MPI implementation should attempt to abort all the connected processes but should not abort any unconnected processes. If no processes were spawned, accepted, or connected then this has the effect of aborting all the processes associated with MPI_COMM_WORLD.
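
For example, a process that encounters an unrecoverable local error can abort the whole job rather than simply exit, since exiting alone could leave the other processes blocked in communication. A minimal sketch; the file name is illustrative:

    /* A required input file is missing: bring down all tasks in
       MPI_COMM_WORLD and return error code 1 to the environment. */
    if ((infile = fopen("indata", "r")) == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);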


Rationale.

The communicator argument is provided to allow for future extensions of MPI to environments with, for example, dynamic process management. In particular, it allows but does not require an MPI implementation to abort a subset of MPI_COMM_WORLD. (End of rationale.)

Advice to users.

Whether the errorcode is returned from the executable or from the MPI process startup mechanism (e.g., mpiexec) is an aspect of quality of the MPI library but not mandatory. (End of advice to users.)

Advice to implementors.

Where possible, a high-quality implementation will try to return the errorcode from the MPI process startup mechanism (e.g., mpiexec or singleton init). (End of advice to implementors.)


