93. Applying Collective Operations to Intercommunicators
To understand how collective operations apply to intercommunicators,
we can view most MPI intracommunicator
collective operations as fitting one of the following categories (see, for
instance,
[56]):
- All-To-All: All processes contribute to the result. All processes
  receive the result.
  - MPI_ALLGATHER, MPI_IALLGATHER, MPI_ALLGATHERV, MPI_IALLGATHERV
  - MPI_ALLTOALL, MPI_IALLTOALL, MPI_ALLTOALLV, MPI_IALLTOALLV,
    MPI_ALLTOALLW, MPI_IALLTOALLW
  - MPI_ALLREDUCE, MPI_IALLREDUCE, MPI_REDUCE_SCATTER_BLOCK,
    MPI_IREDUCE_SCATTER_BLOCK, MPI_REDUCE_SCATTER, MPI_IREDUCE_SCATTER
  - MPI_BARRIER, MPI_IBARRIER
- All-To-One: All processes contribute to the result. One process
  receives the result.
  - MPI_GATHER, MPI_IGATHER, MPI_GATHERV, MPI_IGATHERV
  - MPI_REDUCE, MPI_IREDUCE
- One-To-All: One process contributes to the result. All processes
  receive the result.
  - MPI_BCAST, MPI_IBCAST
  - MPI_SCATTER, MPI_ISCATTER, MPI_SCATTERV, MPI_ISCATTERV
- Other: Collective operations that do not fit into one of the above
  categories.
  - MPI_SCAN, MPI_ISCAN, MPI_EXSCAN, MPI_IEXSCAN
The data movement patterns of MPI_SCAN, MPI_ISCAN,
MPI_EXSCAN, and MPI_IEXSCAN
do not fit this taxonomy.
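The prefix pattern of the scan operations is easy to see in code. The
following minimal C sketch, which assumes only an ordinary
intracommunicator (MPI_COMM_WORLD), uses MPI_SCAN to compute an
inclusive prefix sum: process i receives the reduction of the
contributions of processes 0 through i, so the set of contributors
differs from rank to rank, which is why the scan operations fall
outside the three categories above.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch: prefix reduction with MPI_Scan on an intracommunicator.
 * Process i receives the sum of ranks 0..i, so neither "all processes
 * receive the result" nor "one process receives the result" applies.   */
int main(int argc, char **argv)
{
    int rank, prefix;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes its own rank; MPI_Scan delivers the
     * inclusive prefix sum to each process.                            */
    MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: prefix sum = %d\n", rank, prefix);
    MPI_Finalize();
    return 0;
}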
The application of collective communication to intercommunicators is
best described in terms of two groups. For example, an all-to-all
MPI_ALLGATHER operation can be described as collecting data from all
members of one group with the result appearing in all members of the
other group (see Figure 2). As another example, a one-to-all
MPI_BCAST operation sends data from one member of one group to all
members of the other group. Collective computation operations such as
MPI_REDUCE_SCATTER have a similar interpretation (see Figure 3).
For intracommunicators, these two groups are the same; for
intercommunicators, they are distinct. Each all-to-all operation is
described in two phases, so that it has a symmetric, full-duplex
behavior.
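As a concrete illustration of the rooted (one-to-all) case, the sketch
below splits MPI_COMm_WORLD into two halves, builds an
intercommunicator with MPI_INTERCOMM_CREATE, and broadcasts from rank
0 of group A to every process of group B. The variable names
(in_group_a, local_comm, intercomm) are illustrative only; the root
argument conventions (MPI_ROOT at the root, MPI_PROC_NULL at the other
processes of the root's group, and the root's rank within the remote
group at the receivers) are those MPI defines for rooted
intercommunicator collective operations.

#include <mpi.h>
#include <stdio.h>

/* Sketch: broadcast over an intercommunicator from rank 0 of group A
 * to every process of group B.  Requires at least 2 processes.        */
int main(int argc, char **argv)
{
    int world_rank, world_size, value = 0;
    MPI_Comm local_comm, intercomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Group A: first half of the ranks; group B: second half.         */
    int in_group_a = (world_rank < world_size / 2);
    MPI_Comm_split(MPI_COMM_WORLD, in_group_a ? 0 : 1, world_rank, &local_comm);

    /* The two local leaders (local rank 0 of each half) are world
     * ranks 0 and world_size/2; they bridge the groups through
     * MPI_COMM_WORLD.                                                  */
    int remote_leader = in_group_a ? world_size / 2 : 0;
    MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                         0, &intercomm);

    if (in_group_a) {
        int local_rank;
        MPI_Comm_rank(local_comm, &local_rank);
        if (local_rank == 0) {
            value = 42;   /* only the root has the data                 */
            MPI_Bcast(&value, 1, MPI_INT, MPI_ROOT, intercomm);
        } else {
            /* remaining processes of the root's group take no part in
             * the data movement but still make the call               */
            MPI_Bcast(&value, 1, MPI_INT, MPI_PROC_NULL, intercomm);
        }
    } else {
        /* group B: every process receives; the root is named by its
         * rank within the remote group (group A)                      */
        MPI_Bcast(&value, 1, MPI_INT, 0, intercomm);
        printf("group B process (world rank %d) received %d\n",
               world_rank, value);
    }

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&local_comm);
    MPI_Finalize();
    return 0;
}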
The following collective operations also apply to intercommunicators:
- MPI_BARRIER, MPI_IBARRIER
- MPI_BCAST, MPI_IBCAST
- MPI_GATHER, MPI_IGATHER, MPI_GATHERV, MPI_IGATHERV
- MPI_SCATTER, MPI_ISCATTER, MPI_SCATTERV, MPI_ISCATTERV
- MPI_ALLGATHER, MPI_IALLGATHER, MPI_ALLGATHERV, MPI_IALLGATHERV
- MPI_ALLTOALL, MPI_IALLTOALL, MPI_ALLTOALLV, MPI_IALLTOALLV,
  MPI_ALLTOALLW, MPI_IALLTOALLW
- MPI_ALLREDUCE, MPI_IALLREDUCE, MPI_REDUCE, MPI_IREDUCE
- MPI_REDUCE_SCATTER_BLOCK, MPI_IREDUCE_SCATTER_BLOCK,
  MPI_REDUCE_SCATTER, MPI_IREDUCE_SCATTER
Figure 2: Intercommunicator allgather. The focus of data to one process is
represented, not mandated by the semantics.
The two phases do allgathers in both directions.
Figure 3: Intercommunicator reduce-scatter. The focus of data to one process
is represented, not mandated by the semantics.
The two phases do reduce-scatters in both directions.
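To make the full-duplex behavior of Figure 2 concrete, the following C
helper is a sketch that assumes an already constructed
intercommunicator (for instance the one built in the broadcast sketch
above). It performs an intercommunicator MPI_ALLGATHER: every process
contributes one integer, and every process receives one integer from
each member of the other group; a single call covers both phases.

#include <stdlib.h>
#include <mpi.h>

/* Sketch: full-duplex intercommunicator allgather.  Each process
 * contributes its rank within its own group; on return the caller
 * owns an array with one entry per member of the other group, in
 * that group's rank order.                                            */
int *gather_remote_ranks(MPI_Comm intercomm, int *remote_n)
{
    int my_local_rank;
    MPI_Comm_rank(intercomm, &my_local_rank);    /* rank in local group  */
    MPI_Comm_remote_size(intercomm, remote_n);   /* size of remote group */

    int *recvbuf = (int *) malloc((size_t) *remote_n * sizeof(int));

    /* One call performs both phases of Figure 2: the local group's
     * data is gathered into every remote process, and the remote
     * group's data into every local process.                          */
    MPI_Allgather(&my_local_rank, 1, MPI_INT,
                  recvbuf, 1, MPI_INT, intercomm);

    return recvbuf;   /* recvbuf[i] is the contribution of remote rank i */
}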