The functions MPI_WAIT and MPI_TEST are used to complete a nonblocking communication. The completion of a send operation indicates that the sender is now free to update the locations in the send buffer (the send operation itself leaves the content of the send buffer unchanged). It does not indicate that the message has been received, rather, it may have been buffered by the communication subsystem. However, if a synchronous mode send was used, the completion of the send operation indicates that a matching receive was initiated, and that the message will eventually be received by this matching receive.
The completion of a receive operation indicates that the receive buffer contains the received message, the receiver is now free to access it, and that the status object is set. It does not indicate that the matching send operation has completed (but indicates, of course, that the send was initiated).
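The distinction can be made concrete with the two nonblocking send modes. The following C fragment is a minimal sketch, not taken from the standard text; the communicator comm, the destination rank dest, the tags, and the buffer size are illustrative assumptions.

#include <mpi.h>

/* Sketch: what completion of a nonblocking send does and does not imply. */
void send_completion_sketch(MPI_Comm comm, int dest)
{
    double buf[10] = {0.0};
    MPI_Request req;
    MPI_Status  status;

    /* Standard-mode nonblocking send: completion only guarantees that buf
       may be reused; the message may still be buffered by the communication
       subsystem and need not have been received yet. */
    MPI_Isend(buf, 10, MPI_DOUBLE, dest, 0, comm, &req);
    MPI_Wait(&req, &status);

    /* Synchronous-mode nonblocking send: completion additionally guarantees
       that a matching receive has been initiated. */
    MPI_Issend(buf, 10, MPI_DOUBLE, dest, 1, comm, &req);
    MPI_Wait(&req, &status);
}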
We shall use the following terminology: A null handle is a handle with value MPI_REQUEST_NULL. A persistent request and the handle to it are inactive if the request is not associated with any ongoing communication (see Section Persistent Communication Requests). A handle is active if it is neither null nor inactive. An empty status is a status which is set to return tag = MPI_ANY_TAG, source = MPI_ANY_SOURCE, error = MPI_SUCCESS, and is also internally configured so that calls to MPI_GET_COUNT and MPI_GET_ELEMENTS return count = 0 and MPI_TEST_CANCELLED returns false.
We set a status variable to empty when the value returned by it is not significant. Status is set in this way so as to prevent errors due to accesses of stale information.
The fields in a status object returned by a call to MPI_WAIT, MPI_TEST, or any of the other derived functions (MPI_{TEST|WAIT}{ALL|SOME|ANY}), where the request corresponds to a send call, are undefined, with two exceptions: The error status field will contain valid information if the wait or test call returned with MPI_ERR_IN_STATUS; and the returned status can be queried by the call MPI_TEST_CANCELLED.
Error codes belonging to the error class MPI_ERR_IN_STATUS should be returned only by the MPI completion functions that take arrays of MPI_STATUS. For the functions MPI_TEST, MPI_TESTANY, MPI_WAIT, and MPI_WAITANY, which return a single MPI_STATUS value, the normal MPI error return process should be used (not the MPI_ERROR field in the MPI_STATUS argument).
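As an illustration of this distinction, the following C sketch (not part of the standard) assumes the error handler of the relevant communicator has been set to MPI_ERRORS_RETURN, so that error codes are returned to the caller rather than aborting; the bound MAXREQ and the routine name are illustrative.

#include <mpi.h>
#include <stdio.h>

#define MAXREQ 16   /* assumed upper bound on n, for the status array */

void check_completion(MPI_Request reqs[], int n, MPI_Request *single)
{
    MPI_Status statuses[MAXREQ];
    MPI_Status status;
    int i, err;

    /* Array completion: a return code of MPI_ERR_IN_STATUS means the
       MPI_ERROR field of each returned status must be examined. */
    err = MPI_Waitall(n, reqs, statuses);
    if (err == MPI_ERR_IN_STATUS) {
        for (i = 0; i < n; i++)
            if (statuses[i].MPI_ERROR != MPI_SUCCESS)
                fprintf(stderr, "request %d failed\n", i);
    }

    /* Single completion: the error comes back as the return code;
       the MPI_ERROR field of status is not used for this purpose. */
    err = MPI_Wait(single, &status);
    if (err != MPI_SUCCESS)
        fprintf(stderr, "wait failed\n");
}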
MPI_WAIT(request, status)
    INOUT  request     request (handle)
    OUT    status      status object (Status)

int MPI_Wait(MPI_Request *request, MPI_Status *status)

MPI_WAIT(REQUEST, STATUS, IERROR)
    INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

{ void MPI::Request::Wait(MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2) }

{ void MPI::Request::Wait() (binding deprecated, see Section Deprecated since MPI-2.2) }
A call to MPI_WAIT returns when the operation
identified by request is complete. If the communication object
associated with this request was created by a nonblocking send or
receive call, then the object is deallocated by the call to MPI_WAIT
and the request handle is set to MPI_REQUEST_NULL.
MPI_WAIT is a non-local operation.
The call returns, in status, information on the completed operation. The content of the status object for a receive operation can be accessed as described in Section Return Status. The status object for a send operation may be queried by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).
One is allowed to call MPI_WAIT with a null or inactive
request argument.
In this case the operation returns immediately with empty status.
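A minimal C sketch of this behavior (illustrative only, not from the standard text):

#include <mpi.h>

void wait_on_null_request(void)
{
    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Status  status;
    int count, cancelled;

    MPI_Wait(&req, &status);                   /* returns immediately       */
    MPI_Get_count(&status, MPI_INT, &count);   /* empty status: count == 0  */
    MPI_Test_cancelled(&status, &cancelled);   /* empty status: false       */
}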
Advice to users.
Successful return of MPI_WAIT after an MPI_IBSEND implies that the user send buffer can be reused --- i.e., data has been sent out or copied into a buffer attached with MPI_BUFFER_ATTACH. Note that, at this point, we can no longer cancel the send (see Section Probe and Cancel). If a matching receive is never posted, then the buffer cannot be freed. This runs somewhat counter to the stated goal of MPI_CANCEL (always being able to free program space that was committed to the communication subsystem). A sketch of this buffered case follows the advice.
( End of advice to users.)
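The buffered case can be sketched in C as follows; the message length, tag, and peer rank are illustrative assumptions, not part of the standard text.

#include <mpi.h>
#include <stdlib.h>

void ibsend_sketch(MPI_Comm comm, int dest)
{
    int msg[100] = {0};
    int bufsize = 100 * sizeof(int) + MPI_BSEND_OVERHEAD;
    void *buffer = malloc(bufsize);
    MPI_Request req;
    MPI_Status  status;

    MPI_Buffer_attach(buffer, bufsize);
    MPI_Ibsend(msg, 100, MPI_INT, dest, 0, comm, &req);
    MPI_Wait(&req, &status);   /* msg may now be reused; the send can no
                                  longer be cancelled                       */
    /* MPI_Buffer_detach blocks until all buffered messages have been
       delivered; if no matching receive is ever posted, it cannot return. */
    MPI_Buffer_detach(&buffer, &bufsize);
    free(buffer);
}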
Advice to implementors.
In a multi-threaded environment, a call to MPI_WAIT should block only the calling thread, allowing the thread scheduler to schedule another thread for execution.
( End of advice to implementors.)
MPI_TEST(request, flag, status)
    INOUT  request     communication request (handle)
    OUT    flag        true if operation completed (logical)
    OUT    status      status object (Status)

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)

MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
    LOGICAL FLAG
    INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

{ bool MPI::Request::Test(MPI::Status& status) (binding deprecated, see Section Deprecated since MPI-2.2) }

{ bool MPI::Request::Test() (binding deprecated, see Section Deprecated since MPI-2.2) }
A call to MPI_TEST returns flag = true if the
operation identified by request is complete. In such a case, the
status object is set to contain information on the completed
operation; if the communication object was created by a nonblocking
send or receive, then it is deallocated and the request handle is set to
MPI_REQUEST_NULL. The call returns
flag = false, otherwise. In this case, the value
of the status object is undefined.
MPI_TEST is a local operation.
The return status object for a receive operation carries information that can be accessed as described in Section Return Status. The status object for a send operation carries information that can be accessed by a call to MPI_TEST_CANCELLED (see Section Probe and Cancel).
One
is allowed to call MPI_TEST with a null or inactive request
argument. In such a case the operation returns with flag = true and
empty status.
The functions MPI_WAIT and MPI_TEST can be used to
complete both sends and receives.
Advice to users.
The use of the nonblocking MPI_TEST call allows the user to schedule alternative activities within a single thread of execution. An event-driven thread scheduler can be emulated with periodic calls to MPI_TEST, as sketched below.
( End of advice to users.)
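A C sketch of this polling pattern (illustrative only; do_some_work stands for arbitrary user computation and is not defined by the standard):

#include <mpi.h>

extern void do_some_work(void);   /* placeholder for user computation */

void poll_receive(MPI_Comm comm, int src, int *inbuf, int count)
{
    MPI_Request req;
    MPI_Status  status;
    int done = 0;

    MPI_Irecv(inbuf, count, MPI_INT, src, MPI_ANY_TAG, comm, &req);
    while (!done) {
        do_some_work();                   /* alternative activity     */
        MPI_Test(&req, &done, &status);   /* local call, never blocks */
    }
    /* At this point inbuf contains the message and status describes it. */
}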
A request object can be deallocated without waiting for the associated
communication to complete, by using the following operation.
MPI_REQUEST_FREE(request)
    INOUT  request     communication request (handle)

int MPI_Request_free(MPI_Request *request)

MPI_REQUEST_FREE(REQUEST, IERROR)
    INTEGER REQUEST, IERROR

{ void MPI::Request::Free() (binding deprecated, see Section Deprecated since MPI-2.2) }

Mark the request object for deallocation and set request to MPI_REQUEST_NULL. An ongoing communication that is associated with the request will be allowed to complete. The request will be deallocated only after its completion.
Rationale.
The MPI_REQUEST_FREE mechanism is provided for reasons of performance and convenience on the sending side.
( End of rationale.)
Advice to users.
Once a request is freed by a call to MPI_REQUEST_FREE, it is not possible to check for the successful completion of the associated communication with calls to MPI_WAIT or MPI_TEST. Also, if an error occurs subsequently during the communication, an error code cannot be returned to the user --- such an error must be treated as fatal. An active receive request should never be freed, as the receiver will then have no way to verify that the receive has completed and that the receive buffer can be reused.
( End of advice to users.)
Example
Simple usage of nonblocking operations and MPI_WAIT.
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_ISEND(a(1), 10, MPI_REAL, 1, tag, comm, request, ierr)
    **** do some computation to mask latency ****
    CALL MPI_WAIT(request, status, ierr)
ELSE IF (rank.EQ.1) THEN
    CALL MPI_IRECV(a(1), 15, MPI_REAL, 0, tag, comm, request, ierr)
    **** do some computation to mask latency ****
    CALL MPI_WAIT(request, status, ierr)
END IF
Example
An example using MPI_REQUEST_FREE.
CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
IF (rank.EQ.0) THEN
    DO i=1, n
        CALL MPI_ISEND(outval, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_REQUEST_FREE(req, ierr)
        CALL MPI_IRECV(inval, 1, MPI_REAL, 1, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_WAIT(req, status, ierr)
    END DO
ELSE IF (rank.EQ.1) THEN
    CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
    CALL MPI_WAIT(req, status, ierr)
    DO i=1, n-1
        CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_REQUEST_FREE(req, ierr)
        CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
        CALL MPI_WAIT(req, status, ierr)
    END DO
    CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, MPI_COMM_WORLD, req, ierr)
    CALL MPI_WAIT(req, status, ierr)
END IF