The general graph constructor assumes that each process passes the full (global) communication graph to the call. This limits the scalability of this constructor. With the distributed graph interface, the communication graph is specified in a fully distributed fashion. Each process specifies only the part of the communication graph of which it is aware. Typically, this could be the set of processes from which the process will eventually receive or get data, or the set of processes to which the process will send or put data, or some combination of such edges. Two different interfaces can be used to create a distributed graph topology. MPI_DIST_GRAPH_CREATE_ADJACENT creates a distributed graph communicator with each process specifying all of its incoming and outgoing (adjacent) edges in the logical communication graph and thus requires minimal communication during creation. MPI_DIST_GRAPH_CREATE provides full flexibility: any process can indicate that communication will occur between any pair of processes in the graph.
To provide better possibilities for optimization by the MPI library, the distributed graph constructors permit weighted communication edges and take an info argument that can further influence process reordering or other optimizations performed by the MPI library. For example, hints can be provided on how edge weights are to be interpreted, the quality of the reordering, and/or the time permitted for the MPI library to process the graph.
MPI_DIST_GRAPH_CREATE_ADJACENT(comm_old, indegree, sources, sourceweights, outdegree, destinations, destweights, info, reorder, comm_dist_graph) | |
IN comm_old | input communicator (handle) |
IN indegree | size of sources and sourceweights arrays (non-negative integer) |
IN sources | ranks of processes for which the calling process is a destination (array of non-negative integers) |
IN sourceweights | weights of the edges into the calling process (array of non-negative integers) |
IN outdegree | size of destinations and destweights arrays (non-negative integer) |
IN destinations | ranks of processes for which the calling process is a source (array of non-negative integers) |
IN destweights | weights of the edges out of the calling process (array of non-negative integers) |
IN info | hints on optimization and interpretation of weights (handle) |
IN reorder | the ranks may be reordered (true) or not (false) (logical) |
OUT comm_dist_graph | communicator with distributed graph topology (handle) |
int MPI_Dist_graph_create_adjacent(MPI_Comm comm_old, int indegree, int sources[], int sourceweights[], int outdegree, int destinations[], int destweights[], MPI_Info info, int reorder, MPI_Comm *comm_dist_graph)
MPI_DIST_GRAPH_CREATE_ADJACENT(COMM_OLD, INDEGREE, SOURCES, SOURCEWEIGHTS, OUTDEGREE, DESTINATIONS, DESTWEIGHTS, INFO, REORDER, COMM_DIST_GRAPH, IERROR)
INTEGER COMM_OLD, INDEGREE, SOURCES(*), SOURCEWEIGHTS(*), OUTDEGREE,
DESTINATIONS(*), DESTWEIGHTS(*), INFO, COMM_DIST_GRAPH, IERROR
LOGICAL REORDER
{ MPI::Distgraphcomm MPI::Intracomm::Dist_graph_create_adjacent(int indegree, const int sources[], const int sourceweights[], int outdegree, const int destinations[], const int destweights[], const MPI::Info& info, bool reorder) const (binding deprecated, see Section Deprecated since MPI-2.2) }
{ MPI::Distgraphcomm MPI::Intracomm::Dist_graph_create_adjacent(int indegree, const int sources[], int outdegree, const int destinations[], const MPI::Info& info, bool reorder) const (binding deprecated, see Section Deprecated since MPI-2.2) }
MPI_DIST_GRAPH_CREATE_ADJACENT returns a handle to a new
communicator to which the distributed graph topology information is
attached. Each process passes all information about its incoming and
outgoing edges in the virtual distributed graph topology. The calling
processes must ensure that each edge of the graph is described in the
source and in the destination process with the same weights. If there
are multiple edges for a given (source,dest) pair, then the sequence
of the weights of these edges does not matter. The complete
communication topology is the combination of all edges shown in the
sources arrays of all processes in comm_old, which must be
identical to the combination of all edges shown in the
destinations arrays. Source and destination ranks must be
process ranks of comm_old. This allows a fully distributed
specification of the communication graph. Isolated processes (i.e.,
processes with no outgoing or incoming edges, that is, processes that
have specified indegree and outdegree as zero and that
thus do not occur as source or destination rank in the graph
specification) are allowed.
The call creates a new communicator comm_dist_graph of
distributed graph topology type to which topology information has been
attached. The number of processes in comm_dist_graph is
identical to the number of processes in comm_old. The call to
MPI_DIST_GRAPH_CREATE_ADJACENT is collective.
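As an illustration only (not part of the standard text), the following fragment creates a distributed graph communicator for a directed ring in which every process receives from its left neighbor and sends to its right neighbor. The variable names, the weight value 1, and the choice of MPI_INFO_NULL and reorder = false are arbitrary choices for this sketch.
int rank, size;
int sources[1], sourceweights[1], destinations[1], destweights[1];
MPI_Comm ring_comm;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

/* each process receives from its left neighbor and sends to its right
   neighbor; both edges carry the (arbitrary) weight 1, so each edge is
   described with the same weight at its source and its destination */
sources[0]      = (rank - 1 + size) % size;  sourceweights[0] = 1;
destinations[0] = (rank + 1) % size;         destweights[0]   = 1;

MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 1, sources, sourceweights,
                               1, destinations, destweights,
                               MPI_INFO_NULL, 0 /* reorder = false */,
                               &ring_comm);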
Weights are specified as non-negative integers and can be used to
influence the process remapping strategy and other internal MPI
optimizations. For instance, approximate count arguments of later
communication calls along specific edges could be used as their edge
weights. Multiplicity of edges can likewise indicate more intense
communication between pairs of processes. However, the exact meaning
of edge weights is not specified by the MPI standard and is left to
the implementation. In C or Fortran, an application can supply the
special value MPI_UNWEIGHTED for the weight array to indicate
that all edges have the same (effectively no) weight. In C++, this
constant does not exist and the weight arguments may be omitted from
the argument list. It is erroneous to supply MPI_UNWEIGHTED, or
in C++ omit the weight arrays, for some but not all processes of
comm_old. Note that MPI_UNWEIGHTED is not a special weight
value; rather it is a special value for the total array argument. In
C, one would expect it to be NULL. In Fortran,
MPI_UNWEIGHTED is an object like MPI_BOTTOM (not usable
for initialization or assignment).
See Section Named Constants.
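For example (an illustrative fragment reusing the ring variables from the sketch above), a process could declare all of its edges unweighted by passing MPI_UNWEIGHTED in place of both weight arrays:
/* illustrative fragment: same ring neighbors as above, but with no edge
   weights; MPI_UNWEIGHTED replaces both weight arrays */
MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 1, sources, MPI_UNWEIGHTED,
                               1, destinations, MPI_UNWEIGHTED,
                               MPI_INFO_NULL, 0, &ring_comm);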
The meaning of the info and reorder arguments is defined
in the description of the following routine.
MPI_DIST_GRAPH_CREATE(comm_old, n, sources, degrees, destinations, weights, info, reorder, comm_dist_graph) | |
IN comm_old | input communicator (handle) |
IN n | number of source nodes for which this process specifies edges (non-negative integer) |
IN sources | array containing the n source nodes for which this process specifies edges (array of non-negative integers) |
IN degrees | array specifying the number of destinations for each source node in the sources array (array of non-negative integers) |
IN destinations | destination nodes for the source nodes in the source node array (array of non-negative integers) |
IN weights | weights for source to destination edges (array of non-negative integers) |
IN info | hints on optimization and interpretation of weights (handle) |
IN reorder | the ranks may be reordered (true) or not (false) (logical) |
OUT comm_dist_graph | communicator with distributed graph topology added (handle) |
int MPI_Dist_graph_create(MPI_Comm comm_old, int n, int sources[], int degrees[], int destinations[], int weights[], MPI_Info info, int reorder, MPI_Comm *comm_dist_graph)
MPI_DIST_GRAPH_CREATE(COMM_OLD, N, SOURCES, DEGREES, DESTINATIONS, WEIGHTS, INFO, REORDER, COMM_DIST_GRAPH, IERROR)
INTEGER COMM_OLD, N, SOURCES(*), DEGREES(*), DESTINATIONS(*), WEIGHTS(*), INFO, COMM_DIST_GRAPH, IERROR
LOGICAL REORDER
{ MPI::Distgraphcomm MPI::Intracomm::Dist_graph_create(int n, const int sources[], const int degrees[], const int destinations[], const int weights[], const MPI::Info& info, bool reorder) const (binding deprecated, see Section Deprecated since MPI-2.2) }
{ MPI::Distgraphcomm MPI::Intracomm::Dist_graph_create(int n, const int sources[], const int degrees[], const int destinations[], const MPI::Info& info, bool reorder) const (binding deprecated, see Section Deprecated since MPI-2.2) }
MPI_DIST_GRAPH_CREATE returns a handle to a new
communicator to which the distributed graph topology information is
attached. Concretely, each process calls the constructor with a set of
directed (source,destination) communication edges as
described below. Every process passes an array of n source
nodes in the sources array. For each source node, a
non-negative number of destination nodes is specified in the
degrees array. The destination nodes are stored in the
corresponding consecutive segment of the destinations
array. More precisely, if the i-th node in sources is
s, this specifies degrees[i] edges (s,d)
with d of the j-th such edge stored in
destinations[degrees[0]+...+degrees[i-1]+j]. The weight of
this edge is stored in
weights[degrees[0]+...+degrees[i-1]+j]. Both the
sources and the destinations arrays may contain the
same node more than once, and the order in which nodes are listed as
destinations or sources is not significant. Similarly, different
processes may specify edges with the same source and destination
nodes. Source and destination nodes must be process ranks of
comm_old. Different processes may specify different numbers
of source and destination nodes, as well as different source to
destination edges. This allows a fully distributed specification of
the communication graph. Isolated processes (i.e., processes with no
outgoing or incoming edges, that is, processes that do not occur as
source or destination node in the graph specification) are allowed.
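The packing of these arrays can be made concrete with a small fragment (illustrative only; the edges, weights, and the existing communicator comm_old are assumptions of this sketch). A process that wants to declare the three directed edges (0,1), (0,3), and (2,3) would pass:
/* illustrative fragment: one process declares the edges (0,1), (0,3), (2,3).
   destinations/weights are packed per source node: the entries with offsets
   degrees[0]+...+degrees[i-1] through degrees[0]+...+degrees[i]-1 belong to
   sources[i] */
int n = 2;
int sources[2]      = { 0, 2 };
int degrees[2]      = { 2, 1 };     /* node 0 has 2 destinations, node 2 has 1 */
int destinations[3] = { 1, 3, 3 };  /* {1,3} for source 0, {3} for source 2 */
int weights[3]      = { 1, 1, 1 };  /* arbitrary uniform weights */
MPI_Comm comm_dist_graph;

MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations,
                      weights, MPI_INFO_NULL, 0, &comm_dist_graph);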
The call creates a new communicator comm_dist_graph of
distributed graph topology type to which topology information has been
attached. The number of processes in comm_dist_graph is
identical to the number of processes in comm_old. The call to
MPI_DIST_GRAPH_CREATE is collective.
If reorder = false, all processes will have the same rank in
comm_dist_graph as in comm_old. If
reorder = true then the MPI library is free to remap to
other processes (of comm_old) in order to improve
communication on the edges of the communication graph. The weight
associated with each edge is a hint to the MPI library about the
amount or intensity of communication on that edge, and may be used to
compute a "best" reordering.
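Note (an illustrative fragment, reusing the names from the sketch above but assuming the reorder argument was passed as true): because the ranks may have been remapped, code should re-query its rank in the new communicator rather than assume it matches the rank in comm_old.
/* after creation with reorder = true, the calling process may have been
   assigned a different rank in the new communicator */
int old_rank, new_rank;
MPI_Comm_rank(comm_old, &old_rank);
MPI_Comm_rank(comm_dist_graph, &new_rank);  /* may differ from old_rank */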
Weights are specified as non-negative integers and can be used to
influence the process remapping strategy and other internal MPI
optimizations. For instance, approximate count arguments of later
communication calls along specific edges could be used as their edge
weights. Multiplicity of edges can likewise indicate more intense
communication between pairs of processes. However, the exact meaning
of edge weights is not specified by the MPI standard and is left to
the implementation. In C or Fortran, an application can supply the
special value MPI_UNWEIGHTED for the weight array to indicate
that all edges have the same (effectively no) weight. In C++, this
constant does not exist and the weights argument may be omitted from
the argument list. It is erroneous to supply MPI_UNWEIGHTED, or
in C++ omit the weight arrays, for some but not all processes of
comm_old. Note that MPI_UNWEIGHTED is not a special weight
value; rather it is a special value for the total array argument. In
C, one would expect it to be NULL. In Fortran,
MPI_UNWEIGHTED is an object like MPI_BOTTOM (not usable
for initialization or assignment).
See Section Named Constants.
The meaning of the weights argument can be influenced by the
info argument. Info arguments can be used to guide the
mapping; possible options include minimizing the maximum number of
edges between processes on different SMP nodes, or minimizing the sum
of all such edges. An MPI implementation is not obliged to follow
specific hints, and it is valid for an MPI implementation not to do
any reordering. An MPI implementation may specify more info
key-value pairs. All processes must specify the same set of key-value
info pairs.
MPI implementations must document any additionally supported key-value
info pairs. MPI_INFO_NULL is always valid, and may
indicate the default creation of the distributed graph topology to the
MPI library.
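As a sketch only: the standard does not define any specific info keys for these constructors, so the key and value used below are purely hypothetical and a real application must instead use whatever keys its MPI implementation documents. The fragment reuses the variables from the earlier MPI_DIST_GRAPH_CREATE fragment.
/* hypothetical example: pass an implementation-defined hint to the graph
   constructor; the key "graph_opt_level" is invented for this sketch */
MPI_Info info;
MPI_Info_create(&info);
MPI_Info_set(info, "graph_opt_level", "high");  /* hypothetical key/value */

MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations,
                      weights, info, 1, &comm_dist_graph);
MPI_Info_free(&info);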
Advice to implementors.
An implementation does not explicitly need to construct the topology
from its distributed parts. However, all processes can construct the
full topology from the distributed specification and use this in a
call to MPI_GRAPH_CREATE to create the topology. This may serve
as a reference implementation of the functionality, and may be
acceptable for small communicators. However, a scalable high-quality
implementation would save the topology graph in a distributed way.
(End of advice to implementors.)
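The reference approach mentioned in the advice can be sketched as follows (an illustration, not normative text): every process contributes its locally known directed edges as (source, destination) integer pairs, the complete edge list is gathered on all processes, and the combined graph is handed to MPI_GRAPH_CREATE. Edge weights and info hints are ignored in this sketch, and the function name and argument convention are invented for the example.
#include <mpi.h>
#include <stdlib.h>

/* non-scalable reference sketch: gather all distributed edges everywhere
   and build the global index/edges arrays expected by MPI_Graph_create */
MPI_Comm graph_from_distributed_edges(MPI_Comm comm_old,
                                      int *local_edges,  /* 2*local_nedges ints */
                                      int local_nedges,
                                      int reorder)
{
    int size, i, e, acc = 0;
    MPI_Comm comm_graph;
    MPI_Comm_size(comm_old, &size);

    /* gather how many integers (2 per edge) each process contributes */
    int local_count = 2 * local_nedges;
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    MPI_Allgather(&local_count, 1, MPI_INT, counts, 1, MPI_INT, comm_old);

    int total = 0;
    for (i = 0; i < size; i++) { displs[i] = total; total += counts[i]; }

    /* gather the full (src,dst) list on every process */
    int *all_edges = malloc((total > 0 ? total : 1) * sizeof(int));
    MPI_Allgatherv(local_edges, local_count, MPI_INT,
                   all_edges, counts, displs, MPI_INT, comm_old);

    /* convert to the index/edges format of MPI_GRAPH_CREATE:
       index[i] is the cumulative neighbor count of nodes 0..i */
    int nedges = total / 2;
    int *degree = calloc(size, sizeof(int));
    int *fill   = calloc(size, sizeof(int));
    int *index  = malloc(size * sizeof(int));
    int *edges  = malloc((nedges > 0 ? nedges : 1) * sizeof(int));

    for (e = 0; e < nedges; e++) degree[all_edges[2*e]]++;
    for (i = 0; i < size; i++) { acc += degree[i]; index[i] = acc; }
    for (e = 0; e < nedges; e++) {
        int s = all_edges[2*e], d = all_edges[2*e+1];
        int base = (s == 0) ? 0 : index[s-1];
        edges[base + fill[s]++] = d;   /* group destinations by source node */
    }

    MPI_Graph_create(comm_old, size, index, edges, reorder, &comm_graph);

    free(counts); free(displs); free(all_edges);
    free(degree); free(fill); free(index); free(edges);
    return comm_graph;
}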
Example
As for Example General (Graph) Constructor, assume there are four
processes 0, 1, 2, 3 with the following adjacency matrix and unit edge
weights:
process   neighbors
0         1, 3
1         0
2         3
3         0, 2
With MPI_DIST_GRAPH_CREATE, this graph could be constructed in many
different ways. One way would be that each process specifies its
outgoing edges. The arguments per process would be:
process   n   sources   degrees   destinations   weights
0         1   0         2         1,3            1,1
1         1   1         1         0              1
2         1   2         1         3              1
3         1   3         2         0,2            1,1
Another way would be to pass the whole graph on process 0, which could
be done with the following arguments per process:
process   n   sources   degrees   destinations   weights
0         4   0,1,2,3   2,1,1,2   1,3,0,3,0,2    1,1,1,1,1,1
1         0   -         -         -              -
2         0   -         -         -              -
3         0   -         -         -              -
In both cases above, the application could supply MPI_UNWEIGHTED
instead of explicitly providing identical weights.
MPI_DIST_GRAPH_CREATE_ADJACENT could be used to specify this graph
using the following arguments:
process   indegree   sources   sourceweights   outdegree   destinations   destweights
0         2          1,3       1,1             2           1,3            1,1
1         1          0         1               1           0              1
2         1          3         1               1           3              1
3         2          0,2       1,1             2           0,2            1,1
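For instance (an illustrative fragment, not part of the standard text), process 0 of this graph would issue the adjacent call with the arguments below; comm_old is assumed to be the communicator containing the four processes.
/* illustrative fragment: the MPI_DIST_GRAPH_CREATE_ADJACENT call as made by
   process 0 of the example graph (edges to and from processes 1 and 3,
   all with weight 1) */
int sources[2]       = { 1, 3 };
int sourceweights[2] = { 1, 1 };
int destinations[2]  = { 1, 3 };
int destweights[2]   = { 1, 1 };
MPI_Comm comm_dist_graph;

MPI_Dist_graph_create_adjacent(comm_old, 2, sources, sourceweights,
                               2, destinations, destweights,
                               MPI_INFO_NULL, 0, &comm_dist_graph);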
Example
A two-dimensional PxQ torus where all processes communicate along the
dimensions and along the diagonal edges. This cannot be modelled with
Cartesian topologies, but can easily be captured with
MPI_DIST_GRAPH_CREATE as shown in the following code. In this
example, the communication along the dimensions is twice as heavy as
the communication along the diagonals:
/*
Input: dimensions P, Q
Condition: number of processes equal to P*Q; otherwise only
ranks smaller than P*Q participate
*/
int rank, x, y;
MPI_Comm comm_dist_graph;
int sources[1], degrees[1];
int destinations[8], weights[8];
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
/* get x and y dimension */
y=rank/P; x=rank%P;
/* get my communication partners along x dimension */
destinations[0] = P*y+(x+1)%P; weights[0] = 2;
destinations[1] = P*y+(P+x-1)%P; weights[1] = 2;
/* get my communication partners along y dimension */
destinations[2] = P*((y+1)%Q)+x; weights[2] = 2;
destinations[3] = P*((Q+y-1)%Q)+x; weights[3] = 2;
/* get my communication partners along diagonals */
destinations[4] = P*((y+1)%Q)+(x+1)%P; weights[4] = 1;
destinations[5] = P*((Q+y-1)%Q)+(x+1)%P; weights[5] = 1;
destinations[6] = P*((y+1)%Q)+(P+x-1)%P; weights[6] = 1;
destinations[7] = P*((Q+y-1)%Q)+(P+x-1)%P; weights[7] = 1;
sources[0] = rank;
degrees[0] = 8;
MPI_Dist_graph_create(MPI_COMM_WORLD, 1, sources, degrees, destinations,
                      weights, MPI_INFO_NULL, 1, &comm_dist_graph);
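As a follow-up (an illustrative fragment): once the communicator exists, each process can verify its own neighborhood with the inquiry function MPI_DIST_GRAPH_NEIGHBORS_COUNT. In this torus example every process has eight incoming and eight outgoing edges, and weighted is true because explicit weights were supplied.
/* illustrative fragment: query the local neighborhood in the new
   communicator; here indegree and outdegree are both 8 */
int indegree, outdegree, weighted;
MPI_Dist_graph_neighbors_count(comm_dist_graph, &indegree, &outdegree,
                               &weighted);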