MPI_REDUCE_SCATTER_BLOCK( sendbuf, recvbuf, recvcount, datatype, op, comm)

IN    sendbuf      starting address of send buffer (choice)
OUT   recvbuf      starting address of receive buffer (choice)
IN    recvcount    element count per block (non-negative integer)
IN    datatype     data type of elements of send and receive buffers (handle)
IN    op           operation (handle)
IN    comm         communicator (handle)

int MPI_Reduce_scatter_block(void* sendbuf, void* recvbuf, int recvcount, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

MPI_REDUCE_SCATTER_BLOCK(SENDBUF, RECVBUF, RECVCOUNT, DATATYPE, OP, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER RECVCOUNT, DATATYPE, OP, COMM, IERROR
{ void MPI::Comm::Reduce_scatter_block(const void* sendbuf, void* recvbuf, int recvcount, const MPI::Datatype& datatype, const MPI::Op& op) const = 0 (binding deprecated, see Section "Deprecated since MPI-2.2") }
If comm is an intracommunicator, MPI_REDUCE_SCATTER_BLOCK first performs a global, element-wise reduction on vectors of count = n*recvcount elements in the send buffers defined by sendbuf, count and datatype, using the operation op, where n is the size of the group of comm. The routine is called by all group members using the same arguments for recvcount, datatype, op and comm. The resulting vector is treated as n consecutive blocks of recvcount elements; the i-th block is sent to process i and stored in its receive buffer defined by recvbuf, recvcount and datatype.
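For illustration only (not part of the standard text), a minimal C sketch of the intracommunicator case; the block size of 4 and the buffer contents are arbitrary choices for the example:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, n;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    const int recvcount = 4;                 /* block size per destination rank */
    int *sendbuf = malloc((size_t)n * recvcount * sizeof(int));
    int recvbuf[4];

    /* The block destined for rank i occupies offsets [i*recvcount, (i+1)*recvcount). */
    for (int i = 0; i < n * recvcount; ++i)
        sendbuf[i] = rank + i;

    MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, MPI_INT, MPI_SUM,
                             MPI_COMM_WORLD);

    /* recvbuf now holds the element-wise MPI_SUM of the block with index `rank`
       taken from every process's send buffer. */
    printf("rank %d: recvbuf[0] = %d\n", rank, recvbuf[0]);

    free(sendbuf);
    MPI_Finalize();
    return 0;
}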
Advice to implementors. The MPI_REDUCE_SCATTER_BLOCK routine is functionally equivalent to: an MPI_REDUCE collective operation with count equal to recvcount*n, followed by an MPI_SCATTER with sendcount equal to recvcount. However, a direct implementation may run faster. ( End of advice to implementors.)
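A sketch of that equivalence for the intracommunicator case is shown below. The function name reduce_scatter_block_naive and the scratch buffer tmp are illustrative only, the sketch assumes a datatype with zero lower bound, and, as noted above, a direct implementation may be faster:

#include <mpi.h>
#include <stdlib.h>

/* Illustrative only: reduce the full n*recvcount-element vectors onto rank 0,
   then scatter the reduced vector in blocks of recvcount elements. */
int reduce_scatter_block_naive(void *sendbuf, void *recvbuf, int recvcount,
                               MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
{
    int n, rank;
    MPI_Aint lb, extent;
    MPI_Comm_size(comm, &n);
    MPI_Comm_rank(comm, &rank);
    MPI_Type_get_extent(datatype, &lb, &extent);

    /* Scratch buffer for the reduced vector, needed only at the root. */
    void *tmp = (rank == 0) ? malloc((size_t)n * recvcount * extent) : NULL;

    /* Step 1: MPI_REDUCE with count equal to recvcount*n. */
    MPI_Reduce(sendbuf, tmp, n * recvcount, datatype, op, 0, comm);

    /* Step 2: MPI_SCATTER with sendcount equal to recvcount. */
    MPI_Scatter(tmp, recvcount, datatype, recvbuf, recvcount, datatype, 0, comm);

    free(tmp);
    return MPI_SUCCESS;
}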
If comm is an intercommunicator, then the result of the reduction of the data provided by processes in one group (group A) is scattered among processes in the other group (group B), and vice versa. Within each group, all processes provide the same value for the recvcount argument, and provide input vectors of count = n*recvcount elements stored in the send buffers, where n is the size of the group. The number of elements count must be the same for the two groups. The resulting vector from the other group is scattered in blocks of recvcount elements among the processes in the group.
Rationale. The last restriction is needed so that the length of the send buffer of one group can be determined by the local recvcount argument of the other group. Otherwise, a communication is needed to figure out how many elements are reduced.
( End of rationale.)
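For illustration only, a sketch of the intercommunicator case, assuming an even number of processes split into two equal halves with MPI_Comm_split and joined by MPI_Intercomm_create; the tag value 99 and the block size of 2 are arbitrary, and with equal group sizes the counts of the two groups match as required:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int wrank, wsize;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);   /* assumed even for this sketch */

    /* Group A is the lower half of MPI_COMM_WORLD, group B the upper half. */
    int color = (wrank < wsize / 2) ? 0 : 1;
    MPI_Comm local;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);

    /* Each group's leader is its local rank 0; the remote leader is given
       as a rank in MPI_COMM_WORLD. */
    int remote_leader = (color == 0) ? wsize / 2 : 0;
    MPI_Comm inter;
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader, 99, &inter);

    int n;                                    /* size of this process's group */
    MPI_Comm_size(local, &n);

    const int recvcount = 2;
    /* Each process provides an input vector of n*recvcount elements. */
    int *sendbuf = malloc((size_t)n * recvcount * sizeof(int));
    int recvbuf[2];
    for (int i = 0; i < n * recvcount; ++i)
        sendbuf[i] = i;

    /* The reduction of the other group's data is scattered over this group,
       recvcount elements per process. */
    MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, MPI_INT, MPI_SUM, inter);

    printf("world rank %d (group %c): recvbuf = { %d, %d }\n",
           wrank, color == 0 ? 'A' : 'B', recvbuf[0], recvbuf[1]);

    free(sendbuf);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}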
The ``in place'' option for intracommunicators is specified by passing MPI_IN_PLACE in the sendbuf argument on all processes. In this case, the input data is taken from the receive buffer.
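A minimal sketch of the in-place variant, assuming (as for MPI_REDUCE_SCATTER) that the receive buffer is sized to hold the full n*recvcount-element input vector and that this rank's result block is written at its start:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, n;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n);

    const int recvcount = 4;
    /* With MPI_IN_PLACE the receive buffer doubles as the input vector:
       n blocks of recvcount elements. */
    int *recvbuf = malloc((size_t)n * recvcount * sizeof(int));
    for (int i = 0; i < n * recvcount; ++i)
        recvbuf[i] = rank;

    MPI_Reduce_scatter_block(MPI_IN_PLACE, recvbuf, recvcount, MPI_INT, MPI_SUM,
                             MPI_COMM_WORLD);

    /* recvbuf[0..recvcount-1] now holds this rank's block of the reduced vector;
       here every element equals 0 + 1 + ... + (n-1). */

    free(recvbuf);
    MPI_Finalize();
    return 0;
}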