In the neighborhood gather operation, each MPI process i gathers data items from each MPI process j if an edge (j,i) exists in the topology graph, and each MPI process i sends the same data items to all MPI processes j where an edge (i,j) exists. The send buffer is sent to each neighboring MPI process and the l-th block in the receive buffer is received from the l-th neighbor.
MPI_NEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)

| | Argument | Description |
|---|---|---|
| IN | sendbuf | starting address of send buffer (choice) |
| IN | sendcount | number of elements sent to each neighbor (non-negative integer) |
| IN | sendtype | datatype of send buffer elements (handle) |
| OUT | recvbuf | starting address of receive buffer (choice) |
| IN | recvcount | number of elements received from each neighbor (non-negative integer) |
| IN | recvtype | datatype of receive buffer elements (handle) |
| IN | comm | communicator with associated virtual topology (handle) |
The MPI_NEIGHBOR_ALLGATHER procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:
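The fragment below is an illustrative, non-normative sketch of this equivalence in C. It is not a complete program: tag values, error handling, and request bookkeeping are simplified, and recvbuf is assumed to be addressable as char * so that byte offsets can be computed from the extent of recvtype.

```c
/* Illustrative sketch only; assumes <mpi.h> and <stdlib.h> are included. */
int indegree, outdegree, weighted;
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

int *srcs = (int *) malloc(indegree  * sizeof(int));
int *dsts = (int *) malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                         outdegree, dsts, MPI_UNWEIGHTED);

MPI_Aint lb, extent;
MPI_Type_get_extent(recvtype, &lb, &extent);

MPI_Request *reqs =
    (MPI_Request *) malloc((indegree + outdegree) * sizeof(MPI_Request));

/* the same send buffer is sent to every outgoing neighbor */
for (int k = 0; k < outdegree; ++k)
  MPI_Isend(sendbuf, sendcount, sendtype, dsts[k], 0, comm, &reqs[k]);

/* the l-th block of the receive buffer is received from the l-th incoming neighbor */
for (int l = 0; l < indegree; ++l)
  MPI_Irecv((char *) recvbuf + (MPI_Aint) l * recvcount * extent,
            recvcount, recvtype, srcs[l], 0, comm, &reqs[outdegree + l]);

MPI_Waitall(indegree + outdegree, reqs, MPI_STATUSES_IGNORE);
free(srcs); free(dsts); free(reqs);
```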
Figure Neighborhood Gather shows the neighborhood gather communication of one MPI process with outgoing neighbors d0... d3 and incoming neighbors s0... s5. The MPI process will send its sendbuf to all four destinations (outgoing neighbors) and it will receive the contribution from all six sources (incoming neighbors) into separate locations of its receive buffer.
Figure: Neighborhood gather communication example.
All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.
The type signature associated with sendcount, sendtype at an MPI process must be equal to the type signature associated with recvcount, recvtype at all other MPI processes. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed.
Rationale. For optimization reasons, the same type signature is required independently of whether the topology graph is connected or not. (End of rationale.)
The "in place" option is not meaningful for this operation.
Example
Buffer usage of MPI_NEIGHBOR_ALLGATHER in the case of a Cartesian virtual topology.
On a Cartesian virtual topology, the buffer usage in a given direction d, with dims[d]=3 and dims[d]=1 respectively as specified during creation of the communicator, is shown in Figure 22.
The figure may apply to any (or multiple) directions in the Cartesian topology. The grey buffers are required in all cases, but they are only accessed if, during creation of the communicator, periods[d] was defined as nonzero (in C) or .TRUE. (in Fortran).
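As a concrete, non-normative illustration of this buffer layout, the following sketch creates a one-dimensional periodic Cartesian communicator and gathers one double from each of the two neighbors in that direction; names such as cart_comm are local to the example and not part of the MPI interface.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int dims[1]    = {0};   /* let MPI_Dims_create choose the extent */
  int periods[1] = {1};   /* periodic, so both neighbors always exist */
  int nprocs;
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
  MPI_Dims_create(nprocs, 1, dims);

  MPI_Comm cart_comm;
  MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart_comm);

  int rank;
  MPI_Comm_rank(cart_comm, &rank);

  double sendbuf = (double) rank;  /* one element sent to both neighbors */
  double recvbuf[2];               /* block 0: -1 direction, block 1: +1 direction */

  MPI_Neighbor_allgather(&sendbuf, 1, MPI_DOUBLE,
                         recvbuf, 1, MPI_DOUBLE, cart_comm);

  printf("rank %d received %g and %g\n", rank, recvbuf[0], recvbuf[1]);

  MPI_Comm_free(&cart_comm);
  MPI_Finalize();
  return 0;
}
```

If the direction were created with periods[d]=0 instead, the boundary processes would have MPI_PROC_NULL neighbors; the corresponding blocks of recvbuf must still be provided but are left untouched, which is what the grey buffers in the figure indicate.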
The vector variant of MPI_NEIGHBOR_ALLGATHER allows one to gather different numbers of elements from each neighbor.
MPI_NEIGHBOR_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)

| | Argument | Description |
|---|---|---|
| IN | sendbuf | starting address of send buffer (choice) |
| IN | sendcount | number of elements sent to each neighbor (non-negative integer) |
| IN | sendtype | datatype of send buffer elements (handle) |
| OUT | recvbuf | starting address of receive buffer (choice) |
| IN | recvcounts | non-negative integer array (of length indegree) containing the number of elements that are received from each neighbor |
| IN | displs | integer array (of length indegree); entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from neighbor i |
| IN | recvtype | datatype of receive buffer elements (handle) |
| IN | comm | communicator with associated virtual topology (handle) |
The MPI_NEIGHBOR_ALLGATHERV procedure supports Cartesian communicators, graph communicators, and distributed graph communicators as described in Section Neighborhood Collective Communication on Virtual Topologies. If comm is a distributed graph communicator, the outcome is as if each MPI process executed sends to each of its outgoing neighbors and receives from each of its incoming neighbors:
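As for MPI_NEIGHBOR_ALLGATHER, the fragment below is an illustrative, non-normative sketch of this equivalence in C; tags and request bookkeeping are simplified, and recvbuf is assumed to be addressable as char *.

```c
/* Illustrative sketch only; assumes <mpi.h> and <stdlib.h> are included. */
int indegree, outdegree, weighted;
MPI_Dist_graph_neighbors_count(comm, &indegree, &outdegree, &weighted);

int *srcs = (int *) malloc(indegree  * sizeof(int));
int *dsts = (int *) malloc(outdegree * sizeof(int));
MPI_Dist_graph_neighbors(comm, indegree, srcs, MPI_UNWEIGHTED,
                         outdegree, dsts, MPI_UNWEIGHTED);

MPI_Aint lb, extent;
MPI_Type_get_extent(recvtype, &lb, &extent);

MPI_Request *reqs =
    (MPI_Request *) malloc((indegree + outdegree) * sizeof(MPI_Request));

/* the same send buffer is sent to every outgoing neighbor */
for (int k = 0; k < outdegree; ++k)
  MPI_Isend(sendbuf, sendcount, sendtype, dsts[k], 0, comm, &reqs[k]);

/* neighbor l contributes recvcounts[l] elements, placed at offset displs[l] */
for (int l = 0; l < indegree; ++l)
  MPI_Irecv((char *) recvbuf + (MPI_Aint) displs[l] * extent,
            recvcounts[l], recvtype, srcs[l], 0, comm, &reqs[outdegree + l]);

MPI_Waitall(indegree + outdegree, reqs, MPI_STATUSES_IGNORE);
free(srcs); free(dsts); free(reqs);
```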
The type signature associated with sendcount, sendtype at MPI process j must be equal to the type signature associated with recvcounts[l], recvtype at any other MPI process with srcs[l]=j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of communicating MPI processes. Distinct type maps between sender and receiver are still allowed. The data received from the l-th neighbor is placed into recvbuf beginning at offset displs[l] elements (in terms of the recvtype).
The "in place" option is not meaningful for this operation.
All arguments are significant on all MPI processes and the argument comm must have identical values on all MPI processes.
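The following non-normative sketch shows one possible use of the vector variant: a directed ring built with MPI_Dist_graph_create_adjacent in which each process sends a rank-dependent number of integers, so the single incoming neighbor contributes a different count on each process. The names ring, left, and right are local to the example.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* directed ring: one incoming neighbor (left), one outgoing neighbor (right) */
  int left  = (rank - 1 + size) % size;
  int right = (rank + 1) % size;

  MPI_Comm ring;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                 1, &left,  MPI_UNWEIGHTED,
                                 1, &right, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0, &ring);

  /* each process contributes a different number of elements (1, 2, or 3) */
  int sendcount = rank % 3 + 1;
  int sendbuf[3];
  for (int i = 0; i < sendcount; ++i) sendbuf[i] = rank;

  int recvcounts[1] = { left % 3 + 1 };  /* what the single in-neighbor sends */
  int displs[1]     = { 0 };
  int recvbuf[3];

  MPI_Neighbor_allgatherv(sendbuf, sendcount, MPI_INT,
                          recvbuf, recvcounts, displs, MPI_INT, ring);

  printf("rank %d received %d element(s) from rank %d\n",
         rank, recvcounts[0], left);

  MPI_Comm_free(&ring);
  MPI_Finalize();
  return 0;
}
```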