MPI_REDUCE(sendbuf, recvbuf, count, datatype, op, root, comm)
IN sendbuf | address of send buffer (choice) |
OUT recvbuf | address of receive buffer (choice, significant only at root) |
IN count | number of elements in send buffer (integer) |
IN datatype | data type of elements of send buffer (handle) |
IN op | reduce operation (handle) |
IN root | rank of root process (integer) |
IN comm | communicator (handle) |
void MPI::Comm::Reduce(const void* sendbuf, void* recvbuf, int count, const MPI::Datatype& datatype, const MPI::Op& op, int root) const = 0
The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at the root. In such a case, the input data is taken at the root from the receive buffer, where it will be replaced by the output data.
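The following minimal C sketch illustrates the in-place option on an intracommunicator with root 0; the buffer name vals, the count, and the choice of MPI_SUM are illustrative, not taken from the standard text.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double vals[4] = {1.0, 2.0, 3.0, 4.0};   /* each process's contribution */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Root: the input is read from the receive buffer and overwritten
           by the element-wise sum over all processes. */
        MPI_Reduce(MPI_IN_PLACE, vals, 4, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
    } else {
        /* Non-root processes pass their send buffer as usual; the receive
           buffer argument is not significant here. */
        MPI_Reduce(vals, NULL, 4, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}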
If comm is an intercommunicator, then the call involves all processes in the intercommunicator, but with one group (group A) defining the root process. All processes in the other group (group B) pass the same value in argument root, which is the rank of the root in group A. The root passes the value MPI_ROOT in root. All other processes in group A pass the value MPI_PROC_NULL in root. Only send buffer arguments are significant in group B and only receive buffer arguments are significant at the root.
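The role of each group can be seen in the C sketch below. It assumes an intercommunicator intercomm already exists (e.g. created with MPI_Intercomm_create); the helper name and its parameters (in_group_a, am_root, root_rank_in_A) are illustrative and not part of the standard.

#include <mpi.h>

/* Hypothetical helper: reduce the contributions of group B at one root
   process of group A over an existing intercommunicator. */
void rooted_intercomm_sum(MPI_Comm intercomm, int in_group_a, int am_root,
                          int root_rank_in_A, double *contrib,
                          double *result, int count)
{
    if (in_group_a) {
        if (am_root) {
            /* The root passes MPI_ROOT; only its receive buffer is used. */
            MPI_Reduce(NULL, result, count, MPI_DOUBLE, MPI_SUM,
                       MPI_ROOT, intercomm);
        } else {
            /* The remaining processes of group A pass MPI_PROC_NULL. */
            MPI_Reduce(NULL, NULL, count, MPI_DOUBLE, MPI_SUM,
                       MPI_PROC_NULL, intercomm);
        }
    } else {
        /* Group B: pass the root's rank within group A; only the send
           buffer is significant. */
        MPI_Reduce(contrib, NULL, count, MPI_DOUBLE, MPI_SUM,
                   root_rank_in_A, intercomm);
    }
}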
MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm)
IN sendbuf | starting address of send buffer (choice) |
OUT recvbuf | starting address of receive buffer (choice) |
IN count | number of elements in send buffer (integer) |
IN datatype | data type of elements of send buffer (handle) |
IN op | operation (handle) |
IN comm | communicator (handle) |
void MPI::Comm::Allreduce(const void* sendbuf, void* recvbuf, int count, const MPI::Datatype& datatype, const MPI::Op& op) const = 0
The ``in place'' option for intracommunicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all processes. In this case, the input data is taken at each process from the receive buffer, where it will be replaced by the output data.
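For example, in the C sketch below every process passes MPI_IN_PLACE; the buffer name local_max and the use of MPI_MAX are illustrative.

#include <mpi.h>

int main(int argc, char **argv)
{
    double local_max[3] = {0.5, 1.5, 2.5};   /* this process's input */

    MPI_Init(&argc, &argv);

    /* The input is read from the receive buffer, which on return holds
       the element-wise maximum over all processes. */
    MPI_Allreduce(MPI_IN_PLACE, local_max, 3, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}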
If comm is an intercommunicator, then the result of the reduction of the data provided by processes in group A is stored at each process in group B, and vice versa. Both groups should provide the same count value.
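The C sketch below illustrates this exchange. The way the intercommunicator is built here (splitting MPI_COMM_WORLD by rank parity, leaders at world ranks 0 and 1, tag 0) is only one possibility and is not prescribed by the standard; run with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int wrank, color;
    double mine, other_group_sum;
    MPI_Comm intracomm, intercomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Group A: even world ranks; group B: odd world ranks. */
    color = wrank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &intracomm);
    MPI_Intercomm_create(intracomm, 0, MPI_COMM_WORLD, 1 - color, 0,
                         &intercomm);

    mine = (double)(wrank + 1);
    /* Both groups pass the same count (1).  Each process receives the sum
       of the contributions made by the other group. */
    MPI_Allreduce(&mine, &other_group_sum, 1, MPI_DOUBLE, MPI_SUM, intercomm);
    printf("world rank %d: sum of other group = %g\n", wrank, other_group_sum);

    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&intracomm);
    MPI_Finalize();
    return 0;
}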
MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype, op, comm)
IN sendbuf | starting address of send buffer (choice) |
OUT recvbuf | starting address of receive buffer (choice) |
IN recvcounts | integer array specifying the number of elements in result distributed to each process. Array must be identical on all calling processes. |
IN datatype | data type of elements of input buffer (handle) |
IN op | operation (handle) |
IN comm | communicator (handle) |
void MPI::Comm::Reduce_scatter(const void* sendbuf, void* recvbuf, int recvcounts[], const MPI::Datatype& datatype, const MPI::Op& op) const = 0
The ``in place'' option for intracommunicators is specified by passing MPI_IN_PLACE in the sendbuf argument. In this case, the input data is taken from the top of the receive buffer. Note that the area occupied by the input data may be either longer or shorter than the data filled by the output data.
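For example, the C sketch below (written for exactly two processes; the buffer contents are illustrative) reads the full four-element input vector from the receive buffer and leaves each process's two-element share of the result at its top.

#include <mpi.h>

int main(int argc, char **argv)
{
    int recvcounts[2] = {2, 2};              /* identical on both processes */
    double buf[4] = {1.0, 2.0, 3.0, 4.0};    /* full input vector */

    MPI_Init(&argc, &argv);

    /* Input is taken from the top of the receive buffer; afterwards the
       first recvcounts[rank] elements of buf hold this process's share of
       the element-wise sum (rank 0: reduced elements 0-1, rank 1: 2-3). */
    MPI_Reduce_scatter(MPI_IN_PLACE, buf, recvcounts, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}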
If comm is an intercommunicator, then the result of the reduction of the data provided by processes in group A is scattered among processes in group B, and vice versa. Within each group, all processes provide the same recvcounts argument, and the sum of the recvcounts entries should be the same for the two groups.
Rationale.
The last restriction is needed so that the length of the send buffer can be determined by the sum of the local recvcounts entries. Otherwise, a communication is needed to figure out how many elements are reduced.
(End of rationale.)
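As an illustration of this constraint, the C sketch below assumes an existing intercommunicator with two processes in group A and four in group B (these sizes, the recvcounts values, and the helper name are illustrative). Both groups distribute a four-element result, so every process supplies a four-element send buffer.

#include <mpi.h>

/* Hypothetical helper: reduce-scatter over an existing intercommunicator
   with 2 processes in group A and 4 in group B. */
void intercomm_reduce_scatter(MPI_Comm intercomm, int in_group_a)
{
    int counts_a[2] = {3, 1};            /* distribution within group A */
    int counts_b[4] = {1, 1, 1, 1};      /* distribution within group B */
    double send[4] = {1.0, 2.0, 3.0, 4.0};
    double recv[3];                      /* room for the largest share */

    /* The sums of the recvcounts match (3+1 = 1+1+1+1 = 4), so the send
       buffer length is 4 on every process.  Group A's data is reduced and
       scattered over group B according to counts_b, and vice versa. */
    MPI_Reduce_scatter(send, recv, in_group_a ? counts_a : counts_b,
                       MPI_DOUBLE, MPI_SUM, intercomm);
}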