Locks are used to protect accesses to the locked target window effected by RMA calls issued between the lock and unlock calls, and to protect load/store accesses to a locked local or shared memory window executed between the lock and unlock calls. Accesses that are protected by an exclusive lock (acquired using MPI_LOCK_EXCLUSIVE) will not be concurrent at the window site with other accesses to the same window that are lock protected. Accesses that are protected by a shared lock (acquired using MPI_LOCK_SHARED) will not be concurrent at the window site with accesses protected by an exclusive lock to the same window.
MPI_WIN_LOCK(lock_type, rank, assert, win)
IN lock_type | either MPI_LOCK_EXCLUSIVE or MPI_LOCK_SHARED (state) |
IN rank | rank of locked window (non-negative integer) |
IN assert | program assertion (integer) |
IN win | window object (handle) |
Opens an RMA access epoch. The window at the MPI process with a rank of rank in the group of win can be accessed by RMA operations on win during that epoch. Multiple RMA access epochs (with calls to MPI_WIN_LOCK) can occur simultaneously; however, each access epoch must target a different MPI process.
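For illustration, a minimal passive-target epoch might look as follows. This is a sketch: the window win is assumed to have been created elsewhere (e.g., with MPI_WIN_ALLOCATE), and the buffer, count, and target rank are illustrative.

```c
/* Sketch: update one element of rank 1's window from the calling process.
   "win" and the buffer/count values are illustrative assumptions. */
int value = 42;
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, /* rank  */ 1, /* assert */ 0, win);
MPI_Put(&value, 1, MPI_INT, /* target rank */ 1,
        /* target disp */ 0, 1, MPI_INT, win);
MPI_Win_unlock(1, win);  /* on return, the put has completed at origin and target */
```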
MPI_WIN_LOCK_ALL(assert, win)
IN assert | program assertion (integer) |
IN win | window object (handle) |
Opens an RMA access epoch to all MPI processes in win, with a lock type of MPI_LOCK_SHARED. During the epoch, the calling MPI process can access the window memory on all MPI processes in win by using RMA operations. A window locked with MPI_WIN_LOCK_ALL must be unlocked with MPI_WIN_UNLOCK_ALL. This routine is not collective---the ALL refers to a lock on all members of the group of the window.
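A sketch of such an epoch follows; the window win, the array result, and the group size nprocs are illustrative assumptions.

```c
/* Sketch: read one integer from every process in win's group under a
   shared lock on all processes. Names are illustrative. */
MPI_Win_lock_all(0, win);                 /* not collective */
for (int r = 0; r < nprocs; r++)
    MPI_Get(&result[r], 1, MPI_INT, r, 0, 1, MPI_INT, win);
MPI_Win_unlock_all(win);                  /* all gets complete on return */
```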
Advice to users.
There may be additional overheads associated with using
MPI_WIN_LOCK and MPI_WIN_LOCK_ALL concurrently
on the same window. These overheads could be avoided by specifying the
assertion MPI_MODE_NOCHECK when possible (see
Section Assertions).
(End of advice to users.)
MPI_WIN_UNLOCK(rank, win)
IN rank | rank of window (non-negative integer) |
IN win | window object (handle) |
Closes an RMA access epoch opened by a call to MPI_WIN_LOCK on window win. RMA operations issued during this period will have completed both at the origin and at the target when the call returns.
MPI_WIN_UNLOCK_ALL(win)
IN win | window object (handle) |
Closes a shared RMA access epoch opened by a call to MPI_WIN_LOCK_ALL on window win. RMA operations issued during this epoch will have completed both at the origin and at the target when the call returns.
It is erroneous to have a window locked and exposed (in an exposure epoch) concurrently. For example, an MPI process may not call MPI_WIN_LOCK to lock a target window if the target process has called MPI_WIN_POST and has not yet called MPI_WIN_WAIT; it is erroneous to call MPI_WIN_POST while the local window is locked.
Rationale.
An alternative is to require MPI to enforce mutual exclusion between exposure epochs and locking periods. But this would entail additional overheads even when locks and active target synchronization do not interact, in order to support the rare interactions between the two mechanisms. The programming style that we encourage here is that a set of windows is used with only one synchronization mechanism at a time, with shifts from one mechanism to another being rare and involving global synchronization.
(End of rationale.)
Advice to users.
Users need to use explicit synchronization code in order to enforce
mutual exclusion between locking periods and exposure epochs on a
window.
(End of advice to users.)
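One common way to provide such synchronization is a barrier between the two phases. The sketch below assumes illustrative names (win, comm, group, target, buf, n); the barrier guarantees that every locking period has ended before any process opens an exposure epoch.

```c
/* Sketch: a passive-target phase, then an active-target (post/wait) phase,
   separated by a barrier so that no window is locked and exposed at once. */
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
MPI_Put(buf, n, MPI_INT, target, 0, n, MPI_INT, win);
MPI_Win_unlock(target, win);

MPI_Barrier(comm);             /* all locking periods are over */

MPI_Win_post(group, 0, win);   /* now safe to open an exposure epoch */
MPI_Win_wait(win);
```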
Implementors may restrict the use of RMA communication that is
synchronized by lock calls to windows in memory allocated by
MPI_ALLOC_MEM
(Section Memory Allocation),
MPI_WIN_ALLOCATE (Section Window That Allocates Memory),
MPI_WIN_ALLOCATE_SHARED (Section Window That Allocates Shared Memory), or attached with
MPI_WIN_ATTACH (Section Window of Dynamically Attached Memory).
Locks can be used portably only in such memory.
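To stay within the portable subset, window memory can be obtained from MPI itself. The sketch below assumes illustrative names (comm, count); it allocates the window memory with MPI_WIN_ALLOCATE rather than exposing user-allocated memory with MPI_WIN_CREATE.

```c
/* Sketch: allocate window memory through MPI so that lock-synchronized
   RMA to this window is portable. "comm" and "count" are illustrative. */
int *base;
MPI_Win win;
MPI_Win_allocate((MPI_Aint)(count * sizeof(int)), sizeof(int),
                 MPI_INFO_NULL, comm, &base, &win);
/* ... passive-target RMA epochs on win ... */
MPI_Win_free(&win);
```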
Rationale.
The implementation of passive target communication between processes in different shared memory domains may require an asynchronous software agent. Such an agent can be implemented more easily, and can achieve better performance, if restricted to specially allocated memory. It can be avoided altogether if shared memory is used. It seems natural to impose restrictions that allow the use of shared memory for RMA communication in shared memory machines.
(End of rationale.)
Consider the sequence of calls in the example below.
Example: Use of MPI_WIN_LOCK and MPI_WIN_UNLOCK.
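A representative sequence of calls (the buffer outbuf, the count n, and the assert value are illustrative assumptions):

```c
/* Sketch: an exclusive-lock epoch containing a single put. */
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, assert, win);
MPI_Put(outbuf, n, MPI_INT, rank, 0, n, MPI_INT, win);
MPI_Win_unlock(rank, win);
```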
The call to MPI_WIN_UNLOCK will not return until the put transfer has completed at the origin and at the target.
Advice to implementors.
The semantics described above still leave much freedom to implementors.
Return from the call to
MPI_WIN_LOCK may be delayed until an
exclusive lock on the window is acquired; or, the first
two calls may return immediately, while return from MPI_WIN_UNLOCK is delayed until
a lock is acquired---the update of the target window is then
postponed until the call to MPI_WIN_UNLOCK occurs.
However, if the call to MPI_WIN_LOCK is used to lock a
window accessible via load/store accesses (i.e., a local window or a window at an MPI process
for which a pointer to shared memory can be obtained via MPI_WIN_SHARED_QUERY),
then the call must not return before the lock is acquired,
since the lock may protect load/store accesses to the window issued
after the lock call returns.
(End of advice to implementors.)
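From the user's side, such a lock-protected load/store access might look as follows. The sketch assumes win is a shared-memory window (e.g., from MPI_WIN_ALLOCATE_SHARED) and that peer is a rank in the same shared memory domain; all names are illustrative.

```c
/* Sketch: direct store to a neighbor's shared-memory window under an
   exclusive lock. "win" and "peer" are illustrative assumptions. */
int *peer_base;
MPI_Aint size;
int disp_unit;
MPI_Win_shared_query(win, peer, &size, &disp_unit, &peer_base);

MPI_Win_lock(MPI_LOCK_EXCLUSIVE, peer, 0, win);
/* the lock call must not return before the lock is acquired, since the
   store below relies on its protection */
peer_base[0] = 7;
MPI_Win_unlock(peer, win);
```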
Advice to users.
To ensure a portable, deadlock-free program, a user must assume that MPI_WIN_LOCK may delay its return until the desired lock on the window has been acquired.
(End of advice to users.)