The functions MPI_WAIT and MPI_TEST are used to complete a nonblocking communication. The completion of a send operation indicates that the sender is now free to update the locations in the send buffer (the send operation itself leaves the content of the send buffer unchanged). It does not indicate that the message has been received; rather, the message may have been buffered by the communication subsystem. However, if a synchronous mode send was used, the completion of the send operation indicates that a matching receive was initiated, and that the message will eventually be received by this matching receive.
The completion of a receive operation indicates that the receive buffer contains the received message, that the receiver is now free to access it, and that the status object is set. It does not indicate that the matching send operation has completed (but indicates, of course, that the send was initiated).
We shall use the following terminology: A null handle is a handle with value MPI_REQUEST_NULL. A persistent request and the handle to it are inactive if the request is not associated with any ongoing communication (see Section Persistent Communication Requests). A handle is active if it is neither null nor inactive. An empty status is a status which is set to return tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE, error = MPI_SUCCESS, and is also internally configured so that calls to MPI_GET_COUNT, MPI_GET_ELEMENTS, and MPI_GET_ELEMENTS_X return count = 0 and MPI_TEST_CANCELLED returns false. We set a status variable to empty when the value returned by it is not significant. Status is set in this way so as to prevent errors due to accesses of stale information.
The fields in a status object returned by a call to MPI_WAIT, MPI_TEST, or any of the other derived functions (MPI_{TEST|WAIT}{ALL|SOME|ANY}), where the request corresponds to a send call, are undefined, with two exceptions: The error status field will contain valid information if the wait or test call returned with MPI_ERR_IN_STATUS; and the returned status can be queried by the call MPI_TEST_CANCELLED.
Error codes belonging to the error class MPI_ERR_IN_STATUS should be returned only by the MPI completion functions that take arrays of MPI_Status. For the functions MPI_TEST, MPI_TESTANY, MPI_WAIT, and MPI_WAITANY, which return a single MPI_Status value, the normal MPI error return process should be used (not the MPI_ERROR field in the MPI_Status argument).
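As an illustration of this rule, a hedged Fortran sketch of checking per-request errors after MPI_WAITALL (it assumes the communicator's error handler has been set to MPI_ERRORS_RETURN so that errors are returned rather than treated as fatal; the setup of the two requests is elided):

```fortran
INTEGER reqs(2), ierr, i
INTEGER stats(MPI_STATUS_SIZE, 2)
! ... two nonblocking operations started, handles stored in reqs(1:2) ...
CALL MPI_WAITALL(2, reqs, stats, ierr)
IF (ierr .EQ. MPI_ERR_IN_STATUS) THEN
   ! only the array completion functions report errors this way
   DO i = 1, 2
      IF (stats(MPI_ERROR, i) .NE. MPI_SUCCESS) THEN
         ! request i failed; stats(MPI_ERROR, i) holds its error code
      END IF
   END DO
END IF
```

By contrast, a failing MPI_WAIT or MPI_TEST reports its error through the normal return path (the function result in C, ierror in Fortran), and leaves the MPI_ERROR field of its single status argument alone.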
MPI_WAIT(request, status) | |
INOUT request | request (handle) |
OUT status | status object (Status) |
int MPI_Wait(MPI_Request *request, MPI_Status *status)
MPI_Wait(request, status, ierror)
TYPE(MPI_Request), INTENT(INOUT) :: request
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WAIT(REQUEST, STATUS, IERROR)
INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
A call to MPI_WAIT returns when the operation identified by request is complete. If the request is an active persistent request, it is marked inactive. Any other type of request is deallocated and the request handle is set to MPI_REQUEST_NULL. MPI_WAIT is a non-local operation.
The call returns, in status, information on the completed operation. The content of the status object for a receive operation can be accessed as described in Section Return Status. The status object for a send operation may be queried by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).
One is allowed to call MPI_WAIT with a null or inactive request argument. In this case the operation returns immediately with empty status.
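For illustration, a minimal fragment exercising this rule; the handle is null, so the call returns at once and status is empty as defined above:

```fortran
INTEGER request, ierr
INTEGER status(MPI_STATUS_SIZE)
request = MPI_REQUEST_NULL
CALL MPI_WAIT(request, status, ierr)
! returns immediately with an empty status:
!   status(MPI_SOURCE) = MPI_ANY_SOURCE
!   status(MPI_TAG)    = MPI_ANY_TAG
! and MPI_GET_COUNT on this status returns count = 0
```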
Advice to users.
Successful return of MPI_WAIT after an MPI_IBSEND implies that the user send buffer can be reused; i.e., data has been sent out or copied into a buffer attached with MPI_BUFFER_ATTACH. Note that, at this point, we can no longer cancel the send (see Section Probe and Cancel). If a matching receive is never posted, then the buffer cannot be freed. This runs somewhat counter to the stated goal of MPI_CANCEL (always being able to free program space that was committed to the communication subsystem).
(End of advice to users.)
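A sketch of the pattern discussed in the advice above (the buffer size, message count, and destination rank are illustrative; proper sizing via MPI_PACK_SIZE is elided):

```fortran
INTEGER request, ierr, bufsize
INTEGER status(MPI_STATUS_SIZE)
CHARACTER buffer(1000 + MPI_BSEND_OVERHEAD)   ! illustrative size
bufsize = 1000 + MPI_BSEND_OVERHEAD
CALL MPI_BUFFER_ATTACH(buffer, bufsize, ierr)
CALL MPI_IBSEND(a(1), 10, MPI_REAL, 1, tag, comm, request, ierr)
CALL MPI_WAIT(request, status, ierr)
! a(1:10) has been sent or copied into the attached buffer;
! the application may now overwrite a, but can no longer cancel the send
CALL MPI_BUFFER_DETACH(buffer, bufsize, ierr)  ! may block until buffered data is sent
```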
Advice to implementors.
In a multithreaded environment, a call to MPI_WAIT should block only the calling thread, allowing the thread scheduler to schedule another thread for execution.
(End of advice to implementors.)
MPI_TEST(request, flag, status) | |
INOUT request | communication request (handle) |
OUT flag | true if operation completed (logical) |
OUT status | status object (Status) |
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
MPI_Test(request, flag, status, ierror)
TYPE(MPI_Request), INTENT(INOUT) :: request
LOGICAL, INTENT(OUT) :: flag
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
LOGICAL FLAG
INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
A call to MPI_TEST returns flag = true if the operation identified by request is complete. In such a case, the status object is set to contain information on the completed operation. If the request is an active persistent request, it is marked as inactive. Any other type of request is deallocated and the request handle is set to MPI_REQUEST_NULL. The call returns flag = false if the operation identified by request is not complete. In this case, the value of the status object is undefined. MPI_TEST is a local operation.
The return status object for a receive operation carries information that can be accessed as described in Section Return Status. The status object for a send operation carries information that can be accessed by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).
One is allowed to call MPI_TEST with a null or inactive request argument. In such a case the operation returns with flag = true and empty status.
The functions MPI_WAIT and MPI_TEST can be used to complete both sends and receives.
Advice to users.
The use of the nonblocking MPI_TEST call allows the user to schedule alternative activities within a single thread of execution. An event-driven thread scheduler can be emulated with periodic calls to MPI_TEST.
(End of advice to users.)
Example
Simple usage of nonblocking operations and MPI_WAIT.
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_ISEND(a(1), 10, MPI_REAL, 1, tag, comm, request, ierr)
    **** do some computation to mask latency ****
    CALL MPI_WAIT(request, status, ierr)
ELSE IF (rank.EQ.1) THEN
    CALL MPI_IRECV(a(1), 15, MPI_REAL, 0, tag, comm, request, ierr)
    **** do some computation to mask latency ****
    CALL MPI_WAIT(request, status, ierr)
END IF
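The polling pattern described in the advice to users above can be sketched as follows on the receiving side (do_other_work is a hypothetical user routine standing in for the alternative activities):

```fortran
LOGICAL flag
CALL MPI_IRECV(a(1), 15, MPI_REAL, 0, tag, comm, request, ierr)
flag = .FALSE.
DO WHILE (.NOT. flag)
   CALL do_other_work()                        ! hypothetical: overlap computation
   CALL MPI_TEST(request, flag, status, ierr)  ! local call; never blocks
END DO
! flag is now .TRUE.; the message is in a(1:15) and status is set
```

Because MPI_TEST is local, each iteration of the loop returns promptly whether or not the message has arrived, so the thread stays free for other work between polls.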
A request object can be deallocated without waiting for the associated communication to complete, by using the following operation.
MPI_REQUEST_FREE(request) | |
INOUT request | communication request (handle) |
int MPI_Request_free(MPI_Request *request)
MPI_Request_free(request, ierror)
TYPE(MPI_Request), INTENT(INOUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_REQUEST_FREE(REQUEST, IERROR)
INTEGER REQUEST, IERROR
Mark the request object for deallocation and set request to MPI_REQUEST_NULL. An ongoing communication that is associated with the request will be allowed to complete. The request will be deallocated only after its completion.
Rationale.
The MPI_REQUEST_FREE mechanism is provided for reasons of performance and convenience on the sending side.
(End of rationale.)
Advice to users.
Once a request is freed by a call to MPI_REQUEST_FREE, it is not possible to check for the successful completion of the associated communication with calls to MPI_WAIT or MPI_TEST. Also, if an error occurs subsequently during the communication, an error code cannot be returned to the user; such an error must be treated as fatal. An active receive request should never be freed, as the receiver will have no way to verify that the receive has completed and the receive buffer can be reused.
(End of advice to users.)
Example
An example using MPI_REQUEST_FREE.
CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
IF (rank.EQ.0) THEN
    DO i=1, n
        CALL MPI_ISEND(outval, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_REQUEST_FREE(req, ierr)
        CALL MPI_IRECV(inval, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_WAIT(req, status, ierr)
    END DO
ELSE IF (rank.EQ.1) THEN
    CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
    CALL MPI_WAIT(req, status, ierr)
    DO i=1, n-1
        CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_REQUEST_FREE(req, ierr)
        CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_WAIT(req, status, ierr)
    END DO
    CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
    CALL MPI_WAIT(req, status, ierr)
END IF