The following are all local operations.
MPI_COMM_SIZE(comm, size)

  IN   comm    communicator (handle)
  OUT  size    number of MPI processes in the group of comm (integer)
Rationale.
This function is equivalent to accessing the communicator's group with
MPI_COMM_GROUP (see above), computing the size using
MPI_GROUP_SIZE,
and then freeing the temporary group via MPI_GROUP_FREE. However,
this functionality is so commonly used that this shortcut was introduced.
(End of rationale.)
Advice to users.
This function indicates the number of MPI processes involved in a communicator. For MPI_COMM_WORLD, it indicates the total number of MPI processes available unless the number of MPI processes has been changed by using the functions described in Chapter Process Initialization, Creation, and Management; note that the number of MPI processes in MPI_COMM_WORLD does not change during the life of an MPI program.
This call is often used with the next call to determine the amount of
concurrency available for a specific library or program. The following
call, MPI_COMM_RANK, indicates the rank of the MPI process
that calls it, in the range 0, ..., size-1, where size
is the return value of MPI_COMM_SIZE. (End of advice to users.)
MPI_COMM_RANK(comm, rank)

  IN   comm    communicator (handle)
  OUT  rank    rank of the calling MPI process in group of comm (integer)
Rationale.
This function is equivalent to accessing the communicator's group with
MPI_COMM_GROUP (see above), computing the rank using
MPI_GROUP_RANK,
and then freeing the temporary group via MPI_GROUP_FREE. However,
this functionality is so commonly used that this shortcut was introduced.
(End of rationale.)
Advice to users.
This function gives the rank of the MPI process in the particular communicator's group. It is useful, as noted above, in conjunction with MPI_COMM_SIZE.
Many programs will follow the supervisor/executor or manager/worker model, where one MPI process
plays the supervisory role while the other
MPI processes act as executors. In this framework, the two preceding
calls are useful for determining the roles of the various MPI processes of a
communicator.
(End of advice to users.)
MPI_COMM_COMPARE(comm1, comm2, result)

  IN   comm1   first communicator (handle)
  IN   comm2   second communicator (handle)
  OUT  result  result (integer)
MPI_IDENT results if and only if comm1 and comm2 are handles for the same object (identical groups and same contexts).
MPI_CONGRUENT results if the underlying groups are identical in constituents and rank order; these communicators differ only by context.
MPI_SIMILAR results if the group members of both communicators are the same but the rank order differs.
MPI_UNEQUAL results otherwise.