MPI_REDUCE_SCATTER extends the functionality of MPI_REDUCE_SCATTER_BLOCK in that the scattered blocks can vary in size. Block sizes are determined by the recvcounts array: the i-th block contains recvcounts[i] elements.
MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype, op, comm)

  IN   sendbuf      starting address of send buffer (choice)
  OUT  recvbuf      starting address of receive buffer (choice)
  IN   recvcounts   non-negative integer array (of length group size) specifying the number of elements of the result distributed to each process
  IN   datatype     data type of elements of send and receive buffers (handle)
  IN   op           operation (handle)
  IN   comm         communicator (handle)

int MPI_Reduce_scatter(void* sendbuf, void* recvbuf, int *recvcounts, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR

{ void MPI::Comm::Reduce_scatter(const void* sendbuf, void* recvbuf, int recvcounts[], const MPI::Datatype& datatype, const MPI::Op& op) const = 0 (binding deprecated, see Section ``Deprecated since MPI-2.2'') }
If comm is an intracommunicator, MPI_REDUCE_SCATTER first performs a global, element-wise reduction on vectors of count = Σ_{i=0}^{n-1} recvcounts[i] elements in the send buffers defined by sendbuf, count and datatype, using the operation op, where n is the number of processes in the group of comm. The routine is called by all group members using the same arguments for recvcounts, datatype, op and comm. The resulting vector is treated as n consecutive blocks, where the number of elements of the i-th block is recvcounts[i]. The blocks are scattered to the processes of the group: the i-th block is sent to process i and stored in the receive buffer defined by recvbuf, recvcounts[i] and datatype.
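To make the intracommunicator semantics concrete, here is a minimal C sketch (not part of the standard); the choice of block sizes, data values, and MPI_SUM is purely illustrative:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, n;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    /* Illustrative block sizes: process i receives i+1 elements,
       so count = 1 + 2 + ... + n. */
    int *recvcounts = malloc(n * sizeof(int));
    int count = 0;
    for (int i = 0; i < n; i++) {
        recvcounts[i] = i + 1;
        count += recvcounts[i];
    }

    /* Every process supplies a full input vector of count elements. */
    int *sendbuf = malloc(count * sizeof(int));
    for (int j = 0; j < count; j++)
        sendbuf[j] = j;                  /* same data on every rank */

    int *recvbuf = malloc(recvcounts[rank] * sizeof(int));

    /* Element-wise sum over all n processes, then scatter: rank i
       keeps the recvcounts[i] elements of its block of the result. */
    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts,
                       MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Element j of the reduced vector equals n*j, so recvbuf[0] is
       n times the starting offset of this rank's block. */
    printf("rank %d: %d elements, first = %d\n",
           rank, recvcounts[rank], recvbuf[0]);

    free(sendbuf); free(recvbuf); free(recvcounts);
    MPI_Finalize();
    return 0;
}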
Advice to implementors. The MPI_REDUCE_SCATTER routine is functionally equivalent to an MPI_REDUCE collective operation with count equal to the sum of recvcounts[i], followed by an MPI_SCATTERV with sendcounts equal to recvcounts. However, a direct implementation may run faster. (End of advice to implementors.)
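The equivalence stated in the advice can be written out directly. The sketch below is one possible rendering, fixed to MPI_INT data for brevity; the choice of rank 0 as root and the temporary buffer are illustrative, not mandated:

#include <mpi.h>
#include <stdlib.h>

/* Functional equivalent of MPI_Reduce_scatter for MPI_INT data:
   MPI_Reduce with count = sum of recvcounts[i], then MPI_Scatterv
   with sendcounts equal to recvcounts. */
void reduce_scatter_equiv(void *sendbuf, void *recvbuf,
                          int *recvcounts, MPI_Op op, MPI_Comm comm)
{
    int rank, n, count = 0;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &n);

    int *displs = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) {
        displs[i] = count;               /* block i starts here */
        count += recvcounts[i];
    }

    /* Step 1: element-wise reduction of the full vectors at a root. */
    int *tmp = (rank == 0) ? malloc(count * sizeof(int)) : NULL;
    MPI_Reduce(sendbuf, tmp, count, MPI_INT, op, 0, comm);

    /* Step 2: send block i (recvcounts[i] elements) to process i. */
    MPI_Scatterv(tmp, recvcounts, displs, MPI_INT,
                 recvbuf, recvcounts[rank], MPI_INT, 0, comm);

    free(tmp);                           /* free(NULL) is a no-op */
    free(displs);
}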
The ``in place'' option for intracommunicators is specified by passing MPI_IN_PLACE in the sendbuf argument. In this case, the input data is taken from the receive buffer. It is not required to specify the ``in place'' option on all processes, since the processes for which recvcounts[i] == 0 may not have allocated a receive buffer.
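Continuing the first sketch above (same count, recvcounts, and communicator), an ``in place'' call might look as follows; note the assumption that recvbuf is allocated to hold the full input vector of count elements:

/* In-place variant of the earlier sketch: sendbuf is replaced by
   MPI_IN_PLACE and the count-element input vector is placed in
   recvbuf itself. After the call, the first recvcounts[rank]
   elements of recvbuf hold this process's block of the result. */
int *recvbuf = malloc(count * sizeof(int));
for (int j = 0; j < count; j++)
    recvbuf[j] = j;                      /* input vector, as before */

MPI_Reduce_scatter(MPI_IN_PLACE, recvbuf, recvcounts,
                   MPI_INT, MPI_SUM, MPI_COMM_WORLD);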
If comm is an intercommunicator, then the result of the reduction of the data provided by processes in one group (group A) is scattered among processes in the other group (group B), and vice versa. Within each group, all processes provide the same recvcounts argument, and provide input vectors of count = Σ_{i=0}^{n-1} recvcounts[i] elements stored in the send buffers, where n is the size of the group. The resulting vector from the other group is scattered in blocks of recvcounts[i] elements among the processes in the group. The number of elements count must be the same for the two groups.
Rationale. The last restriction is needed so that the length of the send buffer can be determined by the sum of the local recvcounts entries. Otherwise, a communication is needed to figure out how many elements are reduced. (End of rationale.)
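For the intercommunicator case, the following hedged C sketch splits MPI_COMM_WORLD into two equal groups (assuming an even number of processes) and joins them with MPI_Intercomm_create; the group layout, tag 0, and data values are illustrative:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int wrank, color, rank, n, result;
    MPI_Comm local, inter;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Group A = even world ranks, group B = odd world ranks. */
    color = wrank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);

    /* Remote leader is the lowest world rank of the other group. */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, 1 - color, 0, &inter);

    MPI_Comm_rank(local, &rank);
    MPI_Comm_size(local, &n);            /* both groups have size n */

    /* recvcounts[i] = 1 for all i, so count = n is the same in both
       groups, as required. Group A contributes vectors of 1s and
       group B vectors of 2s. */
    int *recvcounts = malloc(n * sizeof(int));
    int *sendbuf = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) {
        recvcounts[i] = 1;
        sendbuf[i] = color + 1;
    }

    /* Each process receives one element of the reduction of the
       OTHER group's data: n*2 in group A, n*1 in group B. */
    MPI_Reduce_scatter(sendbuf, &result, recvcounts,
                       MPI_INT, MPI_SUM, inter);

    printf("group %c, rank %d: result = %d\n",
           color ? 'B' : 'A', rank, result);

    free(sendbuf); free(recvcounts);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}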