This section introduces five blocking inter-communicator operations. MPI_INTERCOMM_CREATE is used to bind two intra-communicators into an inter-communicator; the function MPI_INTERCOMM_CREATE_FROM_GROUPS constructs an inter-communicator from two previously defined disjoint groups; the function MPI_INTERCOMM_MERGE creates an intra-communicator by merging the local and remote groups of an inter-communicator. The functions MPI_COMM_DUP and MPI_COMM_FREE, introduced previously, duplicate and free an inter-communicator, respectively.
Overlap of local and remote groups that are bound into an inter-communicator is prohibited. If there is overlap, then the program is erroneous and is likely to deadlock.
The function MPI_INTERCOMM_CREATE can be used to create an inter-communicator from two existing intra-communicators, in the following situation: At least one selected member from each group (the ``group leader'') has the ability to communicate with the selected member from the other group; that is, a ``peer'' communicator exists to which both leaders belong, and each leader knows the rank of the other leader in this peer communicator. Furthermore, members of each group know the rank of their leader.
Construction of an inter-communicator from two intra-communicators requires separate collective operations in the local group and in the remote group, as well as a point-to-point communication between an MPI process in the local group and an MPI process in the remote group.
When using the World Model (Section The World Model), the MPI_COMM_WORLD communicator (or preferably a dedicated duplicate thereof) can be this peer communicator. For applications that use the Sessions Model, or the spawn or join operations, it may be necessary to first create an intra-communicator to be used as the peer communicator.
The application topology functions described in Chapter Virtual Topologies for MPI Processes do not apply to inter-communicators. Users that require this capability should utilize MPI_INTERCOMM_MERGE to build an intra-communicator, then apply the graph or cartesian topology capabilities to that intra-communicator, creating an appropriate topology-oriented intra-communicator. Alternatively, it may be reasonable to devise one's own application topology mechanisms for this case, without loss of generality.
MPI_INTERCOMM_CREATE(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm) | |
IN local_comm | local intra-communicator (handle) |
IN local_leader | rank of local group leader in local_comm (integer) |
IN peer_comm | ``peer'' communicator; significant only at the local_leader (handle) |
IN remote_leader | rank of remote group leader in peer_comm; significant only at the local_leader (integer) |
IN tag | tag (integer) |
OUT newintercomm | new inter-communicator (handle) |
This call creates an inter-communicator. It is collective over the union of the local and remote groups. MPI processes should provide identical local_comm and local_leader arguments within each group. Wildcards are not permitted for remote_leader, local_leader, and tag.
MPI_INTERCOMM_CREATE_FROM_GROUPS(local_group, local_leader, remote_group, remote_leader, stringtag, info, errhandler, newintercomm) | |
IN local_group | local group (handle) |
IN local_leader | rank of local group leader in local_group (integer) |
IN remote_group | remote group, significant only at local_leader (handle) |
IN remote_leader | rank of remote group leader in remote_group, significant only at local_leader (integer) |
IN stringtag | unique identifier for this operation (string) |
IN info | info object (handle) |
IN errhandler | error handler to be attached to new inter-communicator (handle) |
OUT newintercomm | new inter-communicator (handle) |
This call creates an inter-communicator. Unlike MPI_INTERCOMM_CREATE, this function uses as input previously defined, disjoint local and remote groups. The calling MPI process must be a member of the local group. The call is collective over the union of the local and remote groups. All involved MPI processes shall provide an identical value for the stringtag argument. Within each group, all MPI processes shall provide identical local_group and local_leader arguments. Wildcards are not permitted for the remote_leader or local_leader arguments. The stringtag argument serves the same purpose as the stringtag used in the MPI_COMM_CREATE_FROM_GROUP function; it differentiates concurrent calls in a multithreaded environment. The stringtag shall not exceed MPI_MAX_STRINGTAG_LEN characters in length. For C, this includes space for a null terminating character. MPI_MAX_STRINGTAG_LEN shall have a value of at least 63. In the event that MPI_GROUP_EMPTY is supplied as the local_group, the remote_group, or both, then the call is a local operation and MPI_COMM_NULL is returned as the newintercomm.
The errhandler argument specifies an error handler to be attached to the new inter-communicator. Section Error Handling specifies the error handler to be invoked if an error is encountered during the invocation of MPI_INTERCOMM_CREATE_FROM_GROUPS.
The info argument provides hints and assertions, possibly MPI implementation dependent, which indicate desired characteristics and guide communicator creation.
MPI_INTERCOMM_MERGE(intercomm, high, newintracomm) | |
IN intercomm | inter-communicator (handle) |
IN high | ordering of the local and remote groups in the new intra-communicator (logical) |
OUT newintracomm | new intra-communicator (handle) |
This function creates an intra-communicator from the union of the two groups that are associated with intercomm. All MPI processes should provide the same high value within each of the two groups. If MPI processes in one group provided the value high = false and MPI processes in the other group provided the value high = true then the union orders the ``low'' group before the ``high'' group. If all MPI processes provided the same high argument then the order of the union is arbitrary. This call is blocking and collective within the union of the two groups.
The error handler on the new intra-communicator in each MPI process is inherited from the communicator that contributes the local group. Note that this can result in different MPI processes in the same communicator having different error handlers.
Advice to implementors. The implementations of MPI_INTERCOMM_MERGE, MPI_COMM_FREE, and MPI_COMM_DUP are similar to the implementation of MPI_INTERCOMM_CREATE, except that contexts private to the input inter-communicator are used for communication between group leaders rather than contexts inside a bridge communicator. (End of advice to implementors.)