This section describes functions that establish communication between
two sets of MPI processes that do not share a communicator.
Some situations in which these functions are useful are:
1. Two parts of an application that are started independently need to
communicate.
2. A visualization tool wants to attach to a running process.
3. A server wants to accept connections from multiple clients. Both
clients and server may be parallel programs.
In each of these situations, MPI must establish communication
channels where none existed before, and there is no parent/child
relationship.
The routines described in this section establish communication between
the two sets of processes by creating an MPI intercommunicator whose
two groups are the original sets of processes.
Establishing contact between two groups of processes that do not share an
existing communicator is a collective but asymmetric process. One group of
processes indicates its willingness to accept connections from other groups of
processes. We will call this group the (parallel) server, even if this
is not a client/server type of application. The other group connects to the
server; we will call it the client.
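As a non-normative illustration of how the routines of this section fit together (the routines themselves are defined in the following sections), a server might open a port and accept a connection while a client connects using the port name obtained out of band; in this sketch the port name is assumed to be passed to the client on its command line, and error checking is omitted.

    /* Server side (sketch).  The port name is significant only at the
       root of the accepting communicator. */
    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        char      port_name[MPI_MAX_PORT_NAME];
        MPI_Comm  client;   /* intercommunicator; remote group = the clients */

        MPI_Init(&argc, &argv);
        MPI_Open_port(MPI_INFO_NULL, port_name);  /* system selects the port */
        printf("server available at %s\n", port_name);

        /* Collective over the server's communicator; blocks until a
           client group connects to this port. */
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);

        /* ... communicate with the client group via the intercommunicator ... */

        MPI_Comm_disconnect(&client);
        MPI_Close_port(port_name);
        MPI_Finalize();
        return 0;
    }

    /* Client side (sketch): the port name is assumed to be argv[1]. */
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        MPI_Comm  server;   /* intercommunicator; remote group = the server */

        MPI_Init(&argc, &argv);
        /* Collective over the client's communicator. */
        MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);

        /* ... communicate with the server group via the intercommunicator ... */

        MPI_Comm_disconnect(&server);
        MPI_Finalize();
        return 0;
    }

In both programs the call to MPI_COMM_ACCEPT or MPI_COMM_CONNECT returns an intercommunicator whose remote group is the other side; the two groups can then communicate with ordinary MPI operations and later separate with MPI_COMM_DISCONNECT.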
Advice to users.
While the names client and server are used throughout
this section, MPI does not guarantee the traditional robustness
of client/server systems. The functionality described in this
section is intended to allow two cooperating parts of the
same application to communicate with one another. For instance,
a client that gets a segmentation fault and dies, or one that
does not participate in a collective operation, may cause a
server to crash or hang.
(End of advice to users.)