7.12.7. Nonblocking Reduce
MPI_IREDUCE(sendbuf, recvbuf, count, datatype, op, root, comm, request)

IN   sendbuf    address of send buffer (choice)
OUT  recvbuf    address of receive buffer (choice, significant only at root)
IN   count      number of elements in send buffer (non-negative integer)
IN   datatype   datatype of elements of send buffer (handle)
IN   op         reduce operation (handle)
IN   root       rank of the root (integer)
IN   comm       communicator (handle)
OUT  request    communication request (handle)
C binding
int MPI_Ireduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm, MPI_Request *request)
int MPI_Ireduce_c(const void *sendbuf, void *recvbuf, MPI_Count count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm, MPI_Request *request)
Fortran 2008 binding
MPI_Ireduce(sendbuf, recvbuf, count, datatype, op, root, comm, request, ierror)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
INTEGER, INTENT(IN) :: count, root
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Op), INTENT(IN) :: op
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_Ireduce(sendbuf, recvbuf, count, datatype, op, root, comm, request, ierror) !(_c)
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Op), INTENT(IN) :: op
INTEGER, INTENT(IN) :: root
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_IREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, REQUEST, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER COUNT, DATATYPE, OP, ROOT, COMM, REQUEST, IERROR
This call starts a nonblocking variant of MPI_REDUCE (see
Section Reduce).
Advice to implementors.
The implementation is explicitly allowed to use different algorithms for
blocking and nonblocking reduction operations that might change the
order of evaluation of the operations. However, as for
MPI_REDUCE, it is strongly recommended that
MPI_IREDUCE be implemented so that the same result be
obtained whenever the function is applied to the same arguments,
appearing in the same order. Note that this may prevent optimizations
that take advantage of the physical location of MPI processes.
(End of advice to implementors.)
Advice to users.
For operations that are not truly associative, the result delivered
upon completion of the nonblocking reduction may not exactly equal the
result delivered by the blocking reduction, even when specifying the
same arguments in the same order.
(End of advice to users.)
(Unofficial) MPI-4.1 of November 2, 2023