The functions MPI_MRECV and MPI_IMRECV receive messages that have been previously matched by a matching probe (Section Matching Probe).
MPI_MRECV(buf, count, datatype, message, status)
OUT buf | initial address of receive buffer (choice)
IN count | number of elements in receive buffer (non-negative integer)
IN datatype | datatype of each receive buffer element (handle)
INOUT message | message (handle)
OUT status | status object (Status)
int MPI_Mrecv(void* buf, int count, MPI_Datatype datatype, MPI_Message *message, MPI_Status *status)
MPI_Mrecv(buf, count, datatype, message, status, ierror)
TYPE(*), DIMENSION(..) :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Message), INTENT(INOUT) :: message
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_MRECV(BUF, COUNT, DATATYPE, MESSAGE, STATUS, IERROR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, MESSAGE, STATUS(MPI_STATUS_SIZE), IERROR
This call receives a message matched by a matching probe operation (Section Matching Probe).
The receive buffer consists of the storage containing count consecutive elements of the type specified by datatype, starting at address buf. The length of the received message must be less than or equal to the length of the receive buffer. An overflow error occurs if all incoming data does not fit, without truncation, into the receive buffer.
If the message is shorter than the receive buffer, then only those locations corresponding to the (shorter) message are modified.
On return from this function, the message handle is set to MPI_MESSAGE_NULL. All errors that occur during the execution of this operation are handled according to the error handler set for the communicator used in the matching probe call that produced the message handle.
If MPI_MRECV is called with MPI_MESSAGE_NO_PROC as the message argument, the call returns immediately with the status object set to source = MPI_PROC_NULL, tag = MPI_ANY_TAG, and count = 0, as if a receive from MPI_PROC_NULL was issued (see Section Null Processes). A call to MPI_MRECV with MPI_MESSAGE_NULL is erroneous.
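The following C sketch (illustrative only, not part of the normative text) combines MPI_MPROBE with MPI_MRECV to receive a message whose length is not known in advance; the source rank 0, tag 99, and element type MPI_INT are arbitrary example values.

#include <mpi.h>
#include <stdlib.h>

void recv_unknown_length(MPI_Comm comm)
{
    MPI_Message message;
    MPI_Status  status;
    int         count;

    /* Match the message; after the matching probe it can only be
       received through the returned message handle. */
    MPI_Mprobe(0, 99, comm, &message, &status);
    MPI_Get_count(&status, MPI_INT, &count);

    int *buf = malloc(count * sizeof(int));

    /* Receive exactly the matched message; on return the handle is
       set to MPI_MESSAGE_NULL. */
    MPI_Mrecv(buf, count, MPI_INT, &message, &status);

    /* ... use buf ... */
    free(buf);
}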
MPI_IMRECV(buf, count, datatype, message, request)
OUT buf | initial address of receive buffer (choice)
IN count | number of elements in receive buffer (non-negative integer)
IN datatype | datatype of each receive buffer element (handle)
INOUT message | message (handle)
OUT request | communication request (handle)
int MPI_Imrecv(void* buf, int count, MPI_Datatype datatype, MPI_Message *message, MPI_Request *request)
MPI_Imrecv(buf, count, datatype, message, request, ierror)
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Message), INTENT(INOUT) :: message
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_IMRECV(BUF, COUNT, DATATYPE, MESSAGE, REQUEST, IERROR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, MESSAGE, REQUEST, IERROR
MPI_IMRECV is the nonblocking variant of MPI_MRECV and starts a nonblocking receive of a matched message. Completion semantics are similar to MPI_IRECV, as described in Section Communication Initiation. On return from this function, the message handle is set to MPI_MESSAGE_NULL.
If MPI_IMRECV is called with MPI_MESSAGE_NO_PROC as the message argument, the call returns immediately with a request object which, when completed, will yield a status object set to source = MPI_PROC_NULL, tag = MPI_ANY_TAG, and count = 0, as if a receive from MPI_PROC_NULL was issued (see Section Null Processes). A call to MPI_IMRECV with MPI_MESSAGE_NULL is erroneous.
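The following C sketch (illustrative only, not part of the normative text) polls with MPI_IMPROBE, starts the receive of the matched message with MPI_IMRECV, and completes it with MPI_WAIT; the tag 7, element type MPI_INT, and buffer capacity maxcount are arbitrary example values.

#include <mpi.h>

void imrecv_overlap(MPI_Comm comm, int *buf, int maxcount)
{
    MPI_Message message;
    MPI_Status  status;
    MPI_Request request;
    int         flag = 0;

    /* Poll until a message with tag 7 from any source has been matched. */
    while (!flag)
        MPI_Improbe(MPI_ANY_SOURCE, 7, comm, &flag, &message, &status);

    /* Start the nonblocking receive of the matched message; on return
       the message handle is set to MPI_MESSAGE_NULL. */
    MPI_Imrecv(buf, maxcount, MPI_INT, &message, &request);

    /* ... computation not touching buf can overlap with the receive ... */

    MPI_Wait(&request, &status);
}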
Advice to implementors. If reception of a matched message is started with MPI_IMRECV, then it is possible to cancel the returned request with MPI_CANCEL. If MPI_CANCEL succeeds, the matched message must be found by a subsequent message probe (MPI_PROBE, MPI_IPROBE, MPI_MPROBE, or MPI_IMPROBE), received by a subsequent receive operation, or cancelled by the sender. See Section Cancel for details about MPI_CANCEL. The cancellation of operations initiated with MPI_IMRECV may fail. (End of advice to implementors.)
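For illustration only, a user-level C sketch of attempting such a cancellation might look as follows; the tag 5, element type MPI_INT, and buffer capacity maxcount are arbitrary example values, and the cancellation may fail, in which case the receive completes normally.

#include <mpi.h>

void try_cancel_imrecv(MPI_Comm comm, int *buf, int maxcount)
{
    MPI_Message message;
    MPI_Request request;
    MPI_Status  status;
    int         cancelled;

    /* Match a message and start receiving it. */
    MPI_Mprobe(MPI_ANY_SOURCE, 5, comm, &message, &status);
    MPI_Imrecv(buf, maxcount, MPI_INT, &message, &request);

    /* Attempt to cancel; the request must still be completed. */
    MPI_Cancel(&request);
    MPI_Wait(&request, &status);
    MPI_Test_cancelled(&status, &cancelled);

    if (cancelled) {
        /* The matched message is again available to subsequent probes
           and receive operations on the communicator. */
    }
}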