MPI_WIN_LOCK(lock_type, rank, assert, win) | |
IN lock_type | either MPI_LOCK_EXCLUSIVE or MPI_LOCK_SHARED (state) |
IN rank | rank of locked window (non-negative integer) |
IN assert | program assertion (integer) |
IN win | window object (handle) |
int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)
MPI_Win_lock(lock_type, rank, assert, win, ierror)
INTEGER, INTENT(IN) :: lock_type, rank, assert
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_LOCK(LOCK_TYPE, RANK, ASSERT, WIN, IERROR)
INTEGER LOCK_TYPE, RANK, ASSERT, WIN, IERROR
Starts an RMA access epoch. The window at the process with rank rank can be accessed by RMA operations on win during that epoch. Multiple RMA access epochs (with calls to MPI_WIN_LOCK) can occur simultaneously; however, each access epoch must target a different process.
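As an illustration of simultaneous access epochs, the following C sketch (not part of the binding text) assumes a window win created elsewhere and origin buffers buf0 and buf1 of count doubles each; it opens two shared-lock access epochs, each targeting a different rank:

#include <mpi.h>

/* Illustrative sketch: two concurrent passive-target access epochs,
   each targeting a different rank (window and buffers assumed). */
void update_two_targets(MPI_Win win, double *buf0, double *buf1, int count)
{
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);   /* epoch targeting rank 0 */
    MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);   /* epoch targeting rank 1 */

    MPI_Put(buf0, count, MPI_DOUBLE, 0, 0, count, MPI_DOUBLE, win);
    MPI_Put(buf1, count, MPI_DOUBLE, 1, 0, count, MPI_DOUBLE, win);

    MPI_Win_unlock(0, win);   /* completes the epoch targeting rank 0 */
    MPI_Win_unlock(1, win);   /* completes the epoch targeting rank 1 */
}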
MPI_WIN_LOCK_ALL(assert, win) | |
IN assert | program assertion (integer) |
IN win | window object (handle) |
int MPI_Win_lock_all(int assert, MPI_Win win)
MPI_Win_lock_all(assert, win, ierror)
INTEGER, INTENT(IN) :: assert
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_LOCK_ALL(ASSERT, WIN, IERROR)
INTEGER ASSERT, WIN, IERROR
Starts an RMA access epoch to all processes in win, with a lock type of MPI_LOCK_SHARED. During the epoch, the calling process can access the window memory on all processes in win by using RMA operations. A window locked with MPI_WIN_LOCK_ALL must be unlocked with MPI_WIN_UNLOCK_ALL. This routine is not collective --- the ALL refers to a lock on all members of the group of the window.
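The following C sketch is illustrative only; it assumes a window win and an array results with at least nprocs elements, where nprocs is the size of the group of win. It opens a shared access epoch to all processes, reads one value from each target, and completes all operations with MPI_WIN_UNLOCK_ALL (defined below):

#include <mpi.h>

/* Illustrative sketch: shared access epoch to all processes in win. */
void gather_one_from_each(MPI_Win win, double *results, int nprocs)
{
    MPI_Win_lock_all(0, win);              /* not collective */

    for (int target = 0; target < nprocs; target++)
        MPI_Get(&results[target], 1, MPI_DOUBLE,
                target, 0, 1, MPI_DOUBLE, win);

    MPI_Win_unlock_all(win);               /* completes all RMA operations */
}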
Advice to users.
There may be additional overheads associated with using MPI_WIN_LOCK and MPI_WIN_LOCK_ALL concurrently on the same window. These overheads could be avoided by specifying the assertion MPI_MODE_NOCHECK when possible (see Section Assertions).
(End of advice to users.)
MPI_WIN_UNLOCK(rank, win) | |
IN rank | rank of window (non-negative integer) |
IN win | window object (handle) |
int MPI_Win_unlock(int rank, MPI_Win win)
MPI_Win_unlock(rank, win, ierror)
INTEGER, INTENT(IN) :: rank
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_UNLOCK(RANK, WIN, IERROR)
INTEGER RANK, WIN, IERROR
Completes an RMA access epoch started by a call to MPI_WIN_LOCK on window win. RMA operations issued during this period will have completed both at the origin and at the target when the call returns.
MPI_WIN_UNLOCK_ALL(win) | |
IN win | window object (handle) |
int MPI_Win_unlock_all(MPI_Win win)
MPI_Win_unlock_all(win, ierror)
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_UNLOCK_ALL(WIN, IERROR)
INTEGER WIN, IERROR
Completes a shared RMA access epoch started by a call to MPI_WIN_LOCK_ALL on window win. RMA operations issued during this epoch will have completed both at the origin and at the target when the call returns.
Locks are used to protect accesses to the locked target window effected by RMA calls issued between the lock and unlock calls, and to protect load/store accesses to a locked local or shared memory window executed between the lock and unlock calls. Accesses that are protected by an exclusive lock will not be concurrent at the window site with other accesses to the same window that are lock protected. Accesses that are protected by a shared lock will not be concurrent at the window site with accesses protected by an exclusive lock to the same window.
It is erroneous to have a window locked and exposed (in an exposure epoch) concurrently. For example, a process may not call MPI_WIN_LOCK to lock a target window if the target process has called MPI_WIN_POST and has not yet called MPI_WIN_WAIT; it is erroneous to call MPI_WIN_POST while the local window is locked.
Rationale.
An alternative is to require MPI to enforce mutual exclusion between exposure epochs and locking periods. But this would entail additional overheads when locks or active target synchronization do not interact, in order to support those rare interactions between the two mechanisms. The programming style that we encourage here is that a set of windows is used with only one synchronization mechanism at a time, with shifts from one mechanism to another being rare and involving global synchronization.
(End of rationale.)
Advice to users.
Users need to use explicit synchronization code in order to enforce mutual exclusion between locking periods and exposure epochs on a window.
(End of advice to users.)
Implementors may restrict the use of RMA communication that is synchronized by lock calls to windows in memory allocated by MPI_ALLOC_MEM (Section Memory Allocation), MPI_WIN_ALLOCATE (Section Window That Allocates Memory), or attached with MPI_WIN_ATTACH (Section Window of Dynamically Attached Memory). Locks can be used portably only in such memory.
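To stay within this restriction, window memory can be allocated by MPI itself. The following C sketch (the one-double window size is an arbitrary illustration) creates a window with MPI_WIN_ALLOCATE, so that lock-based RMA on it is portable:

#include <mpi.h>

int main(int argc, char **argv)
{
    double *base;
    MPI_Win  win;

    MPI_Init(&argc, &argv);

    /* Memory allocated by MPI; passive target locks on this window
       can be used portably according to the restriction above. */
    MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);

    /* ... passive target communication using MPI_Win_lock/unlock ... */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}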
Rationale.
The implementation of passive target communication when memory is not shared may require an asynchronous software agent. Such an agent can be implemented more easily, and can achieve better performance, if restricted to specially allocated memory. It can be avoided altogether if shared memory is used. It seems natural to impose restrictions that allow one to use shared memory for third party communication in shared memory machines.
(End of rationale.)
Consider the sequence of calls in the example below.
Example
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, assert, win);
MPI_Put(..., rank, ..., win);
MPI_Win_unlock(rank, win);
The call to MPI_WIN_UNLOCK will not return until the put transfer has completed at the origin and at the target. This still leaves much freedom to implementors. The call to MPI_WIN_LOCK may block until an exclusive lock on the window is acquired; or, the first two calls may not block, while MPI_WIN_UNLOCK blocks until a lock is acquired --- the update of the target window is then postponed until the call to MPI_WIN_UNLOCK occurs. However, if the call to MPI_WIN_LOCK is used to lock a local window, then the call must block until the lock is acquired, since the lock may protect local load/store accesses to the window issued after the lock call returns.
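For example, in the following C sketch (the window win and its locally allocated base address local_base are assumed to exist), the lock targets the calling process itself, so the call to MPI_WIN_LOCK must block until the lock is acquired before the load/store access is performed:

#include <mpi.h>

/* Illustrative sketch: locking the local window to protect a direct
   load/store access (window and local base address assumed). */
void increment_local(MPI_Win win, double *local_base, int myrank)
{
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, myrank, 0, win);  /* blocks until acquired */

    local_base[0] += 1.0;   /* load/store access to the locked local window */

    MPI_Win_unlock(myrank, win);
}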