In a thread-compliant implementation, an MPI process is a process that may be multithreaded. Each thread can issue MPI calls; however, threads are not separately addressable: the rank argument in a send or receive call identifies an MPI process, not a thread. A message sent to an MPI process can be received by any thread in this MPI process.
Rationale. This model corresponds to the POSIX model of interprocess communication: the fact that a process is multithreaded, rather than single-threaded, does not affect the external interface of this process. MPI implementations in which MPI "processes" are POSIX threads inside a single POSIX process are not thread-compliant by this definition (indeed, their "processes" are single-threaded). (End of rationale.)
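As a concrete illustration of the addressing model in the opening paragraph, consider the following sketch (not part of the standard text; it assumes an implementation that provides MPI_THREAD_MULTIPLE and a run with at least two processes). Rank 0 sends a message addressed to rank 1; in rank 1 the matching receive happens to be posted by a spawned thread rather than the main thread, and the sender neither knows nor cares which thread receives it.

/* Sketch only: a message addressed to rank 1 (a process) is received by
 * whichever thread of that process posts the matching receive. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

static void *recv_thread(void *arg)
{
    int value;
    /* The envelope names the process (rank 1), not a thread; this spawned
     * thread is free to post the matching receive. */
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank 1: received %d in a spawned thread\n", value);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* dest is a rank, not a thread */
    } else if (rank == 1) {
        pthread_t t;
        pthread_create(&t, NULL, recv_thread, NULL);
        pthread_join(t, NULL);  /* the main thread could equally well have posted the receive */
    }
    MPI_Finalize();
    return 0;
}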
Advice to users. It is the user's responsibility to prevent races when threads within the same application post conflicting communication calls. The user can ensure that two threads in the same process do not issue conflicting communication calls by using a distinct communicator in each thread. (End of advice to users.)
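A minimal sketch of the approach suggested in the advice above (illustrative only; it assumes MPI_THREAD_MULTIPLE is provided): each process duplicates MPI_COMM_WORLD once per thread, with the duplications performed by the main thread so the collective calls are issued in the same order on all processes, and each worker thread confines its traffic to its own duplicate. The two-thread layout and the ring exchange are illustrative choices, not requirements.

/* Sketch only: one private communicator per thread, so the two threads'
 * point-to-point calls cannot conflict with each other. */
#include <mpi.h>
#include <pthread.h>

typedef struct {
    MPI_Comm comm;   /* this thread's private duplicate of MPI_COMM_WORLD */
    int right, left; /* ring neighbours */
} thread_arg;

static void *worker(void *p)
{
    thread_arg *a = (thread_arg *)p;
    int send_val = 0, recv_val = -1;
    /* Matching is per communicator: messages on this duplicate can never be
     * received by the other thread, which uses a different duplicate. */
    MPI_Sendrecv(&send_val, 1, MPI_INT, a->right, 0,
                 &recv_val, 1, MPI_INT, a->left, 0,
                 a->comm, MPI_STATUS_IGNORE);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    thread_arg a[2];
    pthread_t t[2];
    for (int i = 0; i < 2; i++) {
        MPI_Comm_dup(MPI_COMM_WORLD, &a[i].comm);  /* collective, same order on every process */
        a[i].right = (rank + 1) % size;
        a[i].left  = (rank + size - 1) % size;
        pthread_create(&t[i], NULL, worker, &a[i]);
    }
    for (int i = 0; i < 2; i++) {
        pthread_join(t[i], NULL);
        MPI_Comm_free(&a[i].comm);
    }
    MPI_Finalize();
    return 0;
}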
The two main requirements for a thread-compliant implementation are listed below.

1. All MPI calls are thread-safe: two concurrently running threads may make MPI calls, and the outcome will be as if the calls executed in some order, even if their execution is interleaved.

2. Blocking MPI calls block the calling thread only, allowing other threads of the process to execute and, in particular, to make the MPI calls that let the blocked call complete.
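Requirement 2 is what makes patterns like the following sketch legal (illustrative only; it assumes the implementation provides MPI_THREAD_MULTIPLE): the main thread blocks in a receive from its own rank, and a second thread in the same process issues the matching send. The program completes only because the blocked receive does not prevent the sender thread from running.

/* Sketch only: a blocking MPI_Recv blocks its own thread, not the process,
 * so another thread can issue the send that completes it. */
#include <mpi.h>
#include <pthread.h>

static void *sender(void *unused)
{
    int rank, value = 7;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Send(&value, 1, MPI_INT, rank, 0, MPI_COMM_WORLD);  /* send to own process */
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, value;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        MPI_Abort(MPI_COMM_WORLD, 1);  /* this sketch needs full thread support */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t t;
    pthread_create(&t, NULL, sender, NULL);
    /* Blocks this thread only; requirement 2 guarantees the sender thread
     * can still execute and complete its MPI_Send. */
    MPI_Recv(&value, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    pthread_join(t, NULL);
    MPI_Finalize();
    return 0;
}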
Advice to implementors. MPI calls can be made thread-safe by executing only one at a time, e.g., by protecting MPI code with one process-global lock. However, blocked operations cannot hold the lock, as this would prevent progress of other threads in the process. The lock is held only for the duration of an atomic, locally-completing suboperation such as posting a send or completing a send, and is released in between. Finer locks can provide more concurrency, at the expense of higher locking overheads. Concurrency can also be achieved by having some of the MPI protocol executed by separate server threads. (End of advice to implementors.)
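The coarse-grained scheme in the advice above can be pictured with the following toy model. All names here are hypothetical stand-ins for an implementation's internals, not part of any real MPI library: a single process-global mutex protects the library state, a "blocking" wait acquires it only for short, locally-completing polls and releases it in between, and another thread, playing the role of the rest of the library or of a server thread, is therefore able to enter and complete the pending operation.

/* Toy, self-contained sketch of the locking discipline described above. */
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mpi_global_lock = PTHREAD_MUTEX_INITIALIZER;  /* hypothetical global lock */

typedef struct { bool complete; } toy_request;

/* A "blocking" call: the lock is held only for the short poll and released
 * in between, so other threads of the process can enter the library. */
static void toy_wait(toy_request *req)
{
    for (;;) {
        pthread_mutex_lock(&mpi_global_lock);
        bool done = req->complete;
        pthread_mutex_unlock(&mpi_global_lock);
        if (done)
            return;
        sched_yield();  /* the blocked thread does not hold the lock while waiting */
    }
}

/* Another thread (for example a separate server thread running part of the
 * protocol) can make progress because the lock is free most of the time. */
static void toy_complete(toy_request *req)
{
    pthread_mutex_lock(&mpi_global_lock);
    req->complete = true;
    pthread_mutex_unlock(&mpi_global_lock);
}

static toy_request req = { false };

static void *server(void *unused) { toy_complete(&req); return NULL; }

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, server, NULL);
    toy_wait(&req);        /* would never finish if the wait kept the lock */
    pthread_join(t, NULL);
    puts("request completed while the waiting thread held no lock");
    return 0;
}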