The functions
MPI_GRAPH_CREATE, MPI_DIST_GRAPH_CREATE_ADJACENT,
and MPI_DIST_GRAPH_CREATE are used to create general (graph) virtual
topologies, while MPI_CART_CREATE is used to create Cartesian
topologies. These topology creation functions are
collective. As with other collective calls, the program must be
written to work correctly, whether the call synchronizes or not.
The topology creation functions take as input an existing communicator
comm_old,
which defines the set of processes on which the topology is to be
mapped.
For MPI_GRAPH_CREATE and MPI_CART_CREATE,
all input arguments must have identical
values on all processes of the group of comm_old. For
MPI_DIST_GRAPH_CREATE_ADJACENT and
MPI_DIST_GRAPH_CREATE the input
communication graph is distributed across the calling
processes. Therefore, the processes provide different values for the
arguments specifying the graph. However, all processes must give the
same values for the reorder and info arguments. In all cases, a
new communicator comm_topol is created that
carries the topological structure as cached information (see
Chapter Groups, Contexts, Communicators, and Caching). In analogy to function
MPI_COMM_CREATE, no cached information propagates from
comm_old to comm_topol.
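As a non-normative illustration, the following C sketch creates a ring topology with MPI_DIST_GRAPH_CREATE_ADJACENT; each process specifies only its own neighbors, while the reorder and info arguments are identical on all processes, as required above. The ring pattern itself is an assumption made for the example.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm ring_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process lists only its own ring neighbors: the graph
       specification is distributed across the calling processes. */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    int sources[2]      = { left, right };   /* processes that send to me */
    int destinations[2] = { left, right };   /* processes I send to       */

    /* reorder and info must have the same values on all processes. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, sources, MPI_UNWEIGHTED,
                                   2, destinations, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 1 /* reorder */,
                                   &ring_comm);

    /* ... communicate on ring_comm ... */

    MPI_Comm_free(&ring_comm);
    MPI_Finalize();
    return 0;
}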
MPI_CART_CREATE can be used to describe Cartesian structures of
arbitrary dimension. For each coordinate direction one specifies whether the
process structure is periodic or not.
Note that an n-dimensional hypercube is
an n-dimensional torus with 2 processes per coordinate direction. Thus,
special support for hypercube structures is not necessary. The local
auxiliary function MPI_DIMS_CREATE can be used to compute a balanced
distribution of processes among a given number of dimensions.
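As a sketch (an illustration, not part of the standard text), the fragment below lets MPI_DIMS_CREATE choose a balanced two-dimensional factorization of the number of processes and then builds a periodic Cartesian (torus) communicator with MPI_CART_CREATE; the choice of two dimensions and full periodicity is an assumption made for the example.

#include <mpi.h>

int main(int argc, char **argv)
{
    int size;
    int dims[2]    = { 0, 0 };   /* zeros: let MPI_Dims_create choose both extents */
    int periods[2] = { 1, 1 };   /* periodic in both directions: a 2-D torus */
    MPI_Comm cart_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Compute a balanced distribution of size processes over 2 dimensions. */
    MPI_Dims_create(size, 2, dims);

    /* All processes pass identical arguments; reorder = 1 allows the
       implementation to renumber ranks to match the physical machine. */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);

    /* ... use cart_comm ... */

    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}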
Two additional functions, MPI_GRAPH_MAP and
MPI_CART_MAP, are presented in the last section. In general, these
functions are not called by the user directly. However, together with the
communicator manipulation functions presented in Chapter Groups, Contexts, Communicators, and Caching,
they are sufficient to implement all other topology functions.
Section Low-Level Topology Functions
outlines such an implementation.
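As a rough, non-normative sketch of that layering, a Cartesian constructor could be built from MPI_CART_MAP and MPI_COMM_SPLIT along the following lines; the function name my_cart_create is hypothetical, and a complete implementation would also have to attach the topology information to the new communicator via the caching mechanism.

#include <mpi.h>

/* Hypothetical constructor layered on MPI_Cart_map and MPI_Comm_split. */
int my_cart_create(MPI_Comm comm_old, int ndims, int dims[], int periods[],
                   int reorder, MPI_Comm *comm_cart)
{
    int oldrank, newrank;
    MPI_Comm_rank(comm_old, &oldrank);

    /* Ask the implementation for a good placement of this process. */
    MPI_Cart_map(comm_old, ndims, dims, periods, &newrank);

    /* Processes that do not fit into the grid receive MPI_UNDEFINED and
       drop out of the new communicator; the key determines the rank order. */
    int color = (newrank == MPI_UNDEFINED) ? MPI_UNDEFINED : 0;
    int key   = reorder ? newrank : oldrank;
    MPI_Comm_split(comm_old, color, key, comm_cart);

    /* A full implementation would also cache the Cartesian topology
       information on *comm_cart. */
    return MPI_SUCCESS;
}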
Rationale.
Similar functions are contained in
EXPRESS [12] and PARMACS. (End of rationale.)
The function MPI_TOPO_TEST can be used to inquire about the
topology associated with a communicator. The topological information can be
extracted from the communicator using the functions
MPI_GRAPHDIMS_GET and MPI_GRAPH_GET, for general
graphs, and MPI_CARTDIM_GET and MPI_CART_GET, for
Cartesian topologies. Several additional functions are provided to manipulate
Cartesian topologies: the functions MPI_CART_RANK and
MPI_CART_COORDS translate Cartesian coordinates into a group rank,
and vice-versa; the function MPI_CART_SUB can be used to extract a
Cartesian subspace (analogous to MPI_COMM_SPLIT). The function
MPI_CART_SHIFT provides the information needed to communicate with
neighbors in a Cartesian dimension. The two functions
MPI_GRAPH_NEIGHBORS_COUNT and MPI_GRAPH_NEIGHBORS can
be used to extract the neighbors of a node in a graph.
For distributed graphs, the functions MPI_DIST_GRAPH_NEIGHBORS_COUNT and
MPI_DIST_GRAPH_NEIGHBORS can be used to extract the neighbors of the calling node.
The function
MPI_CART_SUB is collective over the input communicator's group;
all other functions are local.
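As a non-normative illustration of the Cartesian inquiry functions, the sketch below uses MPI_CART_SHIFT to find the neighbors along one dimension of a Cartesian communicator (such as the one created earlier) and exchanges a value with them; the helper name exchange_along_dim0 and the use of MPI_SENDRECV are assumptions made for the example.

#include <mpi.h>

/* Exchange one double with the neighbors along dimension 0 of cart_comm. */
void exchange_along_dim0(MPI_Comm cart_comm, double *sendval, double *recvval)
{
    int src, dst;

    /* Ranks of the neighbors one step "down" (src) and "up" (dst) along
       dimension 0; at a non-periodic boundary MPI_PROC_NULL is returned,
       which makes the communication on that side a no-op. */
    MPI_Cart_shift(cart_comm, 0 /* direction */, 1 /* displacement */, &src, &dst);

    MPI_Sendrecv(sendval, 1, MPI_DOUBLE, dst, 0,
                 recvval, 1, MPI_DOUBLE, src, 0,
                 cart_comm, MPI_STATUS_IGNORE);
}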