One goal of MPI is to achieve source code portability. By this we mean that a program written using MPI and complying with the relevant language standards is portable as written, and must not require any source code changes when moved from one system to another. This explicitly does not say anything about how an MPI program is started or launched from the command line, nor what the user must do to set up the environment in which an MPI program will run. However, an implementation may require some setup to be performed before other MPI routines may be called. To provide for this, MPI includes an initialization routine MPI_INIT.
int MPI_Init(int *argc, char ***argv)
MPI_Init(ierror)
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_INIT(IERROR)
INTEGER IERROR
All MPI programs must contain exactly one call to an MPI initialization routine:
MPI_INIT or MPI_INIT_THREAD. Subsequent calls to any
initialization routines are erroneous. The only MPI functions that may be invoked
before the MPI initialization routines are called are MPI_GET_VERSION, MPI_GET_LIBRARY_VERSION,
MPI_INITIALIZED, MPI_FINALIZED, and any function
with the prefix MPI_T_ (within the constraints for functions with this prefix listed in Section Initialization and Finalization). The version for ISO C
accepts the argc and argv that are provided by the arguments to
main or NULL:
int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* parse arguments */
    /* main program */

    MPI_Finalize();    /* see below */
    return 0;
}

The Fortran version takes only IERROR.
Conforming implementations of MPI are required to allow applications to pass NULL for both the argc and argv arguments of main in C.
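For example, an application that does not use the command-line arguments might initialize MPI as follows (a minimal sketch):

#include <mpi.h>

int main(void)
{
    /* argc and argv are not needed, so NULL is passed for both */
    MPI_Init(NULL, NULL);

    /* main program */

    MPI_Finalize();
    return 0;
}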
After MPI is initialized, the application can access information about the execution environment by querying the predefined info object MPI_INFO_ENV. The following keys are predefined for this object, corresponding to the arguments of MPI_COMM_SPAWN or of mpiexec:

command        name of program executed
argv           space separated arguments to command
maxprocs       maximum number of MPI processes to start
soft           allowed values for number of processors
host           hostname
arch           architecture name
wdir           working directory of the MPI process
file           value is the name of a file in which additional information is specified
thread_level   requested level of thread support, if requested before the program was started
The info object MPI_INFO_ENV need not contain a (key,value) pair for each of these predefined keys; the set of (key,value) pairs provided is implementation-dependent. Implementations may provide additional, implementation specific, (key,value) pairs.
If the MPI processes were started with MPI_COMM_SPAWN_MULTIPLE or, equivalently, with a startup mechanism that supports multiple process specifications, then the values stored in the info object MPI_INFO_ENV at a process are those values that affect the local MPI process.
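For instance, a process might query MPI_INFO_ENV as in the following sketch; because the set of (key,value) pairs provided is implementation-dependent, the flag returned by MPI_Info_get must be checked before the value is used:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char value[MPI_MAX_INFO_VAL + 1];
    int  flag;

    MPI_Init(&argc, &argv);

    /* the (key,value) pair may be absent, so test flag first */
    MPI_Info_get(MPI_INFO_ENV, "command", MPI_MAX_INFO_VAL, value, &flag);
    if (flag)
        printf("started as: %s\n", value);

    MPI_Finalize();
    return 0;
}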
Example
If MPI is started with a call to
mpiexec -n 5 -arch sun ocean : -n 10 -arch rs6000 atmos

then the first 5 processes will have in their MPI_INFO_ENV object the pairs (command, ocean), (maxprocs, 5), and (arch, sun). The next 10 processes will have in MPI_INFO_ENV the pairs (command, atmos), (maxprocs, 10), and (arch, rs6000).
Advice to users.
The values passed in MPI_INFO_ENV are the values of the
arguments passed to the mechanism that started the MPI execution ---
not the actual value provided. Thus, the value associated with
maxprocs is the number of MPI processes requested; it can
be larger than the actual number of processes obtained, if the
soft option was used.
( End of advice to users.)
Advice
to implementors.
High-quality implementations will provide a (key,value) pair for each
parameter that can be passed to the command that starts an MPI
program.
( End of advice to implementors.)
int MPI_Finalize(void)
MPI_Finalize(ierror)
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_FINALIZE(IERROR)
INTEGER IERROR
This routine cleans up all MPI state. If an MPI program terminates normally (i.e., not due to a call to MPI_ABORT or an unrecoverable error) then each process must call MPI_FINALIZE before it exits.
Before an MPI process invokes MPI_FINALIZE, the process must perform all MPI calls needed to complete its involvement in MPI communications: It must locally complete all MPI operations that it initiated and must execute matching calls needed to complete MPI communications initiated by other processes. For example, if the process executed a nonblocking send, it must eventually call MPI_WAIT, MPI_TEST, MPI_REQUEST_FREE, or any derived function; if the process is the target of a send, then it must post the matching receive; if it is part of a group executing a collective operation, then it must have completed its participation in the operation.
The call to MPI_FINALIZE does not free objects created by MPI calls; these objects are freed using MPI_XXX_FREE calls.
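For instance, a derived datatype should be freed explicitly before finalization, as in this sketch:

MPI_Datatype rowtype;

MPI_Type_contiguous(100, MPI_DOUBLE, &rowtype);
MPI_Type_commit(&rowtype);
/* ... communication using rowtype ... */
MPI_Type_free(&rowtype);   /* MPI_FINALIZE does not free rowtype */
MPI_Finalize();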
MPI_FINALIZE is collective over all connected processes. If no processes were spawned, accepted or connected then this means over MPI_COMM_WORLD; otherwise it is collective over the union of all processes that have been and continue to be connected, as explained in Section Releasing Connections.
The following examples illustrate these rules.
Example
The following code is correct:
Process 0                Process 1
---------                ---------
MPI_Init();              MPI_Init();
MPI_Send(dest=1);        MPI_Recv(src=0);
MPI_Finalize();          MPI_Finalize();
Example
Without a matching receive, the program is erroneous:
Process 0                Process 1
---------                ---------
MPI_Init();              MPI_Init();
MPI_Send(dest=1);        MPI_Finalize();
MPI_Finalize();
Example
This program is correct: Process 0 calls
MPI_Finalize after it has executed
the MPI calls that complete the
send operation. Likewise, process 1 executes the MPI call
that completes the matching receive operation before it calls MPI_Finalize.
Process 0                Process 1
---------                ---------
MPI_Init();              MPI_Init();
MPI_Isend(dest=1);       MPI_Recv(src=0);
MPI_Request_free();      MPI_Finalize();
MPI_Finalize();          exit();
exit();
Example
This program is correct. The attached buffer is a resource
allocated by the user, not by MPI; it is available to the user
after MPI is finalized.
Process 0                    Process 1
---------                    ---------
MPI_Init();                  MPI_Init();
buffer = malloc(1000000);    MPI_Recv(src=0);
MPI_Buffer_attach();         MPI_Finalize();
MPI_Send(dest=1);            exit();
MPI_Finalize();
free(buffer);
exit();
Example
This program is correct. The cancel operation must succeed,
since the send cannot complete normally. The wait operation, after
the call to MPI_Cancel, is
local --- no matching MPI call is required on process 1.
Process 0                Process 1
---------                ---------
MPI_Issend(dest=1);      MPI_Finalize();
MPI_Cancel();
MPI_Wait();
MPI_Finalize();
Advice
to implementors.
Even though a process has
executed all MPI calls needed to complete the communications
it is involved with, such
communication may not yet be completed from the viewpoint of the underlying
MPI system. For example, a blocking send may have returned, even though the data
is still buffered at the sender in an MPI
buffer; an MPI process may receive a cancel request for a
message it has completed receiving. The MPI implementation must ensure that a
process has completed any involvement in MPI communication before
MPI_FINALIZE returns. Thus, if a process exits after the call to
MPI_FINALIZE, this will not cause an ongoing communication to
fail.
The MPI implementation should also complete freeing all
objects marked for deletion by MPI calls that freed them.
( End of advice to implementors.)
Once MPI_FINALIZE returns, no MPI routine (not even MPI_INIT) may
be called, except for
MPI_GET_VERSION, MPI_GET_LIBRARY_VERSION,
MPI_INITIALIZED,
MPI_FINALIZED, and any function
with the prefix MPI_T_ (within the constraints for functions with this prefix listed in Section Initialization and Finalization).
Although it is not required that all processes return from MPI_FINALIZE, it is required that at least process 0 in MPI_COMM_WORLD return, so that users can know that the MPI portion of the computation is over. In addition, in a POSIX environment, users may desire to supply an exit code for each process that returns from MPI_FINALIZE.
Example
The following illustrates the use of requiring that at least one
process return and that it be known that process 0 is one of the processes
that return. One wants code like the following to work no matter how many
processes return.
...
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
...
MPI_Finalize();
if (myrank == 0) {
    resultfile = fopen("outfile", "w");
    dump_results(resultfile);
    fclose(resultfile);
}
exit(0);
MPI_INITIALIZED(flag)
  OUT  flag    true if MPI_INIT has been called and false otherwise
int MPI_Initialized(int *flag)
MPI_Initialized(flag, ierror)
LOGICAL, INTENT(OUT) :: flag
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_INITIALIZED(FLAG, IERROR)
LOGICAL FLAG
INTEGER IERROR
This routine may be used to determine whether MPI_INIT has been called. MPI_INITIALIZED returns true if the calling process has called MPI_INIT. Whether MPI_FINALIZE has been called does not affect the behavior of MPI_INITIALIZED. It is one of the few routines that may be called before MPI_INIT is called. This function must always be thread-safe, as defined in Section MPI and Threads .
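For instance, a library routine (the name mylib_init below is hypothetical) can use MPI_INITIALIZED to initialize MPI only when the application has not already done so:

#include <mpi.h>

/* hypothetical library entry point: safe whether or not the
   application has already called MPI_Init */
void mylib_init(void)
{
    int initialized;

    MPI_Initialized(&initialized);
    if (!initialized)
        MPI_Init(NULL, NULL);
}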
MPI_ABORT(comm, errorcode)
  IN   comm         communicator of tasks to abort
  IN   errorcode    error code to return to invoking environment
int MPI_Abort(MPI_Comm comm, int errorcode)
MPI_Abort(comm, errorcode, ierror)
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, INTENT(IN) :: errorcode
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_ABORT(COMM, ERRORCODE, IERROR)
INTEGER COMM, ERRORCODE, IERROR
This routine makes a "best attempt" to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program.
It may not be possible for an MPI implementation to abort only the processes represented by comm if this is a subset of the processes. In this case, the MPI implementation should attempt to abort all the connected processes but should not abort any unconnected processes. If no processes were spawned, accepted, or connected then this has the effect of aborting all the processes associated with MPI_COMM_WORLD.
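For instance, a process that detects an unrecoverable error during setup might abort all tasks as in the following sketch (the file name and error code are arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    FILE *config;

    MPI_Init(&argc, &argv);

    config = fopen("app.conf", "r");   /* hypothetical input file */
    if (config == NULL) {
        fprintf(stderr, "cannot open app.conf; aborting\n");
        MPI_Abort(MPI_COMM_WORLD, 1);  /* best attempt to abort all tasks */
    }

    /* ... main program ... */
    fclose(config);
    MPI_Finalize();
    return 0;
}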
Rationale.
The communicator argument is provided to allow for future extensions of MPI to
environments with, for example, dynamic process management. In particular, it
allows but does not require an MPI implementation to abort a subset of
MPI_COMM_WORLD.
( End of rationale.)
Advice to users.
Whether the errorcode is returned from the executable or from the MPI process startup mechanism (e.g., mpiexec) is an aspect of quality of the MPI library, but it is not mandatory.
( End of advice to users.)
Advice
to implementors.
Where possible, a high-quality implementation will try to return the
errorcode from the MPI process startup mechanism
(e.g., mpiexec or singleton init).
( End of advice to implementors.)