MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)
IN sendbuf | starting address of send buffer (choice) |
IN sendcount | number of elements in send buffer (non-negative integer) |
IN sendtype | datatype of send buffer elements (handle) |
OUT recvbuf | address of receive buffer (choice) |
IN recvcount | number of elements received from any MPI process (non-negative integer) |
IN recvtype | datatype of receive buffer elements (handle) |
IN comm | communicator (handle) |
MPI_ALLGATHER can be thought of as MPI_GATHER, but where all MPI processes receive the result, instead of just the root. The block of data sent from the j-th MPI process is received by every MPI process and placed in the j-th block of the buffer recvbuf.
The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at any other MPI process.
If comm is an intra-communicator, the outcome of a call to MPI_ALLGATHER(...) is as if all MPI processes executed n calls to MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm), for root = 0, ..., n-1, where n is the size of the group of comm. The rules for correct usage of MPI_ALLGATHER can be found in the corresponding rules for MPI_GATHER (see Section Gather).
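As an illustration, consider the following minimal C sketch (not taken from the standard text; the use of MPI_COMM_WORLD and the one-integer-per-process layout are choices made for the example). Every process contributes its rank, and after the call every process holds the full rank table, with the contribution of process j in block j of recvbuf:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes one integer: its own rank. */
    int myval = rank;
    int *recvbuf = malloc(size * sizeof(int));

    /* After the call, recvbuf on every process holds 0, 1, ..., size-1. */
    MPI_Allgather(&myval, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}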
The ``in place'' option for intra-communicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all MPI processes. In this case, sendcount and sendtype are ignored, and the input data of each MPI process is assumed to be in the area where that MPI process would receive its own contribution to the receive buffer.
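A fragment sketching the ``in place'' variant, continuing from the previous example (rank and size as before): each process first writes its own contribution into block rank of recvbuf and then passes MPI_IN_PLACE. The sendcount and sendtype values shown are placeholders, since both arguments are ignored in this case:

int *recvbuf = malloc(size * sizeof(int));
recvbuf[rank] = rank;   /* own contribution already in block `rank' */
MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
              recvbuf, 1, MPI_INT, MPI_COMM_WORLD);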
If comm is an inter-communicator, then each MPI process of one group (group A) contributes sendcount data items; these data are concatenated and the result is stored at each MPI process in the other group (group B). Conversely the concatenation of the contributions of the MPI processes in group B is stored at each MPI process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.
Advice to users.
In the inter-communicator case, the communication pattern of MPI_ALLGATHER need not be symmetric. The number of items sent by MPI processes in group A (as specified by the arguments sendcount, sendtype in group A and the arguments recvcount, recvtype in group B) need not equal the number of items sent by MPI processes in group B (as specified by the arguments sendcount, sendtype in group B and the arguments recvcount, recvtype in group A). In particular, one can move data in only one direction by specifying sendcount = 0 for the communication in the reverse direction.
(End of advice to users.)
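The one-directional case can be sketched as follows. The even/odd split of MPI_COMM_WORLD and the leader choices are assumptions made for this example (at least two processes are required), not part of the standard text. Group B contributes one integer per process while group A contributes nothing (sendcount = 0 in A, recvcount = 0 in B):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Split into group A (even world ranks) and group B (odd world ranks). */
    int color = wrank % 2;               /* 0 = A, 1 = B */
    MPI_Comm local, inter;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &local);

    /* The other group's leader is world rank 1 for A and world rank 0 for B. */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, 1 - color, 0, &inter);

    int rsize;
    MPI_Comm_remote_size(inter, &rsize);

    int myval = wrank, dummy;
    if (color == 1) {
        /* Group B sends one int each and receives nothing (recvcount = 0). */
        MPI_Allgather(&myval, 1, MPI_INT, &dummy, 0, MPI_INT, inter);
    } else {
        /* Group A sends nothing (sendcount = 0) and receives one int
           from each process in group B. */
        int *recvbuf = malloc(rsize * sizeof(int));
        MPI_Allgather(&myval, 0, MPI_INT, recvbuf, 1, MPI_INT, inter);
        free(recvbuf);
    }

    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}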
MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)
IN sendbuf | starting address of send buffer (choice) |
IN sendcount | number of elements in send buffer (non-negative integer) |
IN sendtype | datatype of send buffer elements (handle) |
OUT recvbuf | address of receive buffer (choice) |
IN recvcounts | non-negative integer array (of length group size) containing the number of elements that are received from each MPI process |
IN displs | integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from MPI process i |
IN recvtype | datatype of receive buffer elements (handle) |
IN comm | communicator (handle) |
MPI_ALLGATHERV can be thought of as MPI_GATHERV, but where all MPI processes receive the result, instead of just the root. The block of data sent from the j-th MPI process is received by every MPI process and placed in the j-th block of the buffer recvbuf. These blocks need not all be the same size.
The type signature associated with sendcount, sendtype at MPI process j must be equal to the type signature associated with recvcounts[j], recvtype at any other MPI process.
If comm is an intra-communicator, the outcome is as if all MPI processes executed n calls to MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm), for root = 0, ..., n-1, where n is the size of the group of comm. The rules for correct usage of MPI_ALLGATHERV can be found in the corresponding rules for MPI_GATHERV (see Section Gather).
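A minimal sketch of the variable-count case (the sizes chosen, rank + 1 integers per process, are an assumption for illustration). Because every process can compute every contribution size here, the recvcounts and displs arrays are filled in locally:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes rank + 1 integers. */
    int sendcount = rank + 1;
    int *sendbuf = malloc(sendcount * sizeof(int));
    for (int i = 0; i < sendcount; i++)
        sendbuf[i] = rank;

    /* recvcounts[i] is the contribution size of process i; displs is
       its running prefix sum, so block i starts at offset displs[i]. */
    int *recvcounts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    int total = 0;
    for (int i = 0; i < size; i++) {
        recvcounts[i] = i + 1;
        displs[i] = total;
        total += recvcounts[i];
    }

    int *recvbuf = malloc(total * sizeof(int));
    MPI_Allgatherv(sendbuf, sendcount, MPI_INT,
                   recvbuf, recvcounts, displs, MPI_INT, MPI_COMM_WORLD);

    /* recvbuf now holds one 0, two 1s, three 2s, ... on every process. */
    free(sendbuf); free(recvcounts); free(displs); free(recvbuf);
    MPI_Finalize();
    return 0;
}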
The ``in place'' option for intra-communicators is specified by passing the value MPI_IN_PLACE to the argument sendbuf at all MPI processes. In such a case, sendcount and sendtype are ignored, and the input data of each MPI process is assumed to be in the area where that MPI process would receive its own contribution to the receive buffer.
If comm is an inter-communicator, then each MPI process of one group (group A) contributes sendcount data items; these data are concatenated and the result is stored at each MPI process in the other group (group B). Conversely the concatenation of the contributions of the MPI processes in group B is stored at each MPI process in group A. The send buffer arguments in group A must be consistent with the receive buffer arguments in group B, and vice versa.