It is convenient to be able to wait for the completion of any, some, or all the operations in a list, rather than having to wait for a specific message. A call to MPI_WAITANY or MPI_TESTANY can be used to wait for the completion of one out of several operations. A call to MPI_WAITALL or MPI_TESTALL can be used to wait for all pending operations in a list. A call to MPI_WAITSOME or MPI_TESTSOME can be used to complete all enabled operations in a list.
| MPI_WAITANY (count, array_of_requests, index, status) | |
| IN count | list length (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT index | index of handle for operation that completed (integer) | 
| OUT status | status object (Status) | 
 
  int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index, MPI_Status *status)

  MPI_WAITANY(COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)
      INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX, STATUS(MPI_STATUS_SIZE), IERROR

  { static int MPI::Request::Waitany(int count, MPI::Request array_of_requests[], MPI::Status& status)  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static int MPI::Request::Waitany(int count, MPI::Request array_of_requests[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

Blocks until one of the operations associated with the active  
requests in the array has completed.  
If more than one operation is enabled and can terminate, one is arbitrarily chosen.
Returns in  index the index  
of that request in the array and returns in  status the status of the  
completing communication.  
(The array is indexed from zero in C, and from one in Fortran.)  
If the request was allocated by a nonblocking communication operation, then it  
is deallocated and the request handle is set to   MPI_REQUEST_NULL.  
  
  
The  array_of_requests list may contain null or inactive  
handles.  
If the list contains no active handles (list has length zero or all  
entries are null or inactive),  
then the call  returns immediately with  index =  
MPI_UNDEFINED, and an empty status.
  
The execution of  MPI_WAITANY(count, array_of_requests, index,  
status) has the same effect as the execution of   
 MPI_WAIT(&array_of_requests[i], status),   
where  i is the value  
returned by  index (unless the value of  index  
is   MPI_UNDEFINED).  
 MPI_WAITANY with an array containing one active entry  
is equivalent to  MPI_WAIT.  
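
As an illustration (not part of the standard text), a minimal Fortran sketch of waiting for whichever of two nonblocking receives finishes first; the buffers, source ranks, and the requests, status, index, and ierr variables are assumed to be declared and initialized by the caller:

   CALL MPI_IRECV(buf1, n, MPI_REAL, src1, tag, comm, requests(1), ierr)
   CALL MPI_IRECV(buf2, n, MPI_REAL, src2, tag, comm, requests(2), ierr)
   CALL MPI_WAITANY(2, requests, index, status, ierr)
   ! index is 1 or 2 (Fortran arrays are indexed from one), and the
   ! completed entry of requests has been set to MPI_REQUEST_NULL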
  
  
  
| MPI_TESTANY (count, array_of_requests, index, flag, status) | |
| IN count | list length (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT index | index of operation that completed, or MPI_UNDEFINED if none completed (integer) | 
| OUT flag | true if one of the operations is complete (logical) | 
| OUT status | status object (Status) | 

  int MPI_Testany(int count, MPI_Request *array_of_requests, int *index, int *flag, MPI_Status *status)

  MPI_TESTANY(COUNT, ARRAY_OF_REQUESTS, INDEX, FLAG, STATUS, IERROR)
      LOGICAL FLAG
      INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX, STATUS(MPI_STATUS_SIZE), IERROR

  { static bool MPI::Request::Testany(int count, MPI::Request array_of_requests[], int& index, MPI::Status& status)  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static bool MPI::Request::Testany(int count, MPI::Request array_of_requests[], int& index)  (binding deprecated, see Section Deprecated since MPI-2.2) }

Tests for completion of  
either one or none of the operations associated with active handles.  
In the former case, it returns  flag = true,  
returns in  index the index of this request in the array,  
and returns in  status the status of that operation; if the request was  
allocated by a nonblocking communication call then the request is deallocated  
and the handle is set to   MPI_REQUEST_NULL.  
(The array is indexed from zero in C, and from one in Fortran.)  
In the latter case (no operation completed), it returns  flag =  
false, returns a value  
of   MPI_UNDEFINED in  index and  status is  
undefined.  
  
  
The array may contain null or inactive handles.  
If the  
array contains no active handles then the call returns  
immediately with  flag = true,  
 index =   MPI_UNDEFINED, and an empty  status.  
  
If the array of requests contains active handles then  
  
the execution of  MPI_TESTANY(count, array_of_requests,  
index, status) has the same effect as the execution of  
 MPI_TEST( &array_of_requests[i], flag, status),  
for  i=0, 1 ,..., count-1,  
in some arbitrary order, until one call returns  flag = true, or  
all fail.  In the former case,  index is set to the last value of  i,  
and in the latter case, it is set to   MPI_UNDEFINED.  
 MPI_TESTANY with an array containing one active entry  
is equivalent to  MPI_TEST.  
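
As an illustration (not part of the standard text), a typical polling loop with MPI_TESTANY; requests(1:2) are assumed to hold active requests, done is a LOGICAL, and COMPUTE_SOMETHING is a hypothetical user routine:

   done = .FALSE.
   DO WHILE (.NOT. done)
      CALL MPI_TESTANY(2, requests, index, done, status, ierr)
      ! overlap computation with communication until one request completes
      IF (.NOT. done) CALL COMPUTE_SOMETHING()
   END DO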
  
  
  
  
  
  
| MPI_WAITALL (count, array_of_requests, array_of_statuses) | |
| IN count | list length (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT array_of_statuses | array of status objects (array of Status) | 

  int MPI_Waitall(int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses)

  MPI_WAITALL(COUNT, ARRAY_OF_REQUESTS, ARRAY_OF_STATUSES, IERROR)
      INTEGER COUNT, ARRAY_OF_REQUESTS(*)
      INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR

  { static void MPI::Request::Waitall(int count, MPI::Request array_of_requests[], MPI::Status array_of_statuses[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static void MPI::Request::Waitall(int count, MPI::Request array_of_requests[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

Blocks until all communication operations associated with active handles in the list complete, and returns the status of all these operations
(this includes the case where no handle in the list is active).  
Both arrays have the same number of valid entries.  The  i-th entry in  
 array_of_statuses is set to the return status of the  
 i-th operation.  
Requests that were created by nonblocking communication operations are  
deallocated and the corresponding handles in the array are set to  
  MPI_REQUEST_NULL.  
  
The list may contain null or inactive handles.  
The call sets to empty the status of each such entry.  
  
The error-free execution of MPI_WAITALL(count, array_of_requests, array_of_statuses) has the same effect as the execution of MPI_WAIT(&array_of_requests[i], &array_of_statuses[i]), for i = 0, ..., count-1, in some arbitrary order. MPI_WAITALL with an array of length one is equivalent to MPI_WAIT.
  
When one or more of the communications completed by a call to MPI_WAITALL fail, it is desirable to return specific information on each communication. The function MPI_WAITALL will return in such a case the error code MPI_ERR_IN_STATUS and will set the
error field of each status to a specific error code.  This code will be  
  MPI_SUCCESS, if the specific communication completed; it will  
be another specific error code, if it failed;  
or it can be   MPI_ERR_PENDING if it has neither failed nor completed.  
The function  MPI_WAITALL will return   MPI_SUCCESS if no request  
had an error,  
or will return another error code if it failed  
for other reasons (such as invalid arguments).  In such cases, it will  
not update the error fields of the statuses.  
  
 
  
Rationale.
This design streamlines error handling in the application.
The application code need only test the (single) function result to  
determine if an error has occurred.  It needs to check each individual  
status only  when an error occurred.  
 ( End of rationale.)   
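
As an illustration (not part of the standard text), a sketch of this error-handling pattern in Fortran; it assumes the communicator's error handler has been set to MPI_ERRORS_RETURN, that requests holds count request handles, and that statuses is declared as STATUSES(MPI_STATUS_SIZE, count):

   CALL MPI_WAITALL(count, requests, statuses, ierr)
   IF (ierr .EQ. MPI_ERR_IN_STATUS) THEN
      DO i = 1, count
         ! the error field is MPI_SUCCESS, MPI_ERR_PENDING, or a
         ! specific error code for a communication that failed
         errcode = statuses(MPI_ERROR, i)
         IF (errcode .NE. MPI_SUCCESS) THEN
            ! handle or record the failed/pending i-th communication
         END IF
      END DO
   END IF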
  
| MPI_TESTALL (count, array_of_requests, flag, array_of_statuses) | |
| IN count | list length (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT flag | (logical) | 
| OUT array_of_statuses | array of status objects (array of Status) | 

  int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag, MPI_Status *array_of_statuses)

  MPI_TESTALL(COUNT, ARRAY_OF_REQUESTS, FLAG, ARRAY_OF_STATUSES, IERROR)
      LOGICAL FLAG
      INTEGER COUNT, ARRAY_OF_REQUESTS(*), ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR

  { static bool MPI::Request::Testall(int count, MPI::Request array_of_requests[], MPI::Status array_of_statuses[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static bool MPI::Request::Testall(int count, MPI::Request array_of_requests[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

Returns  flag = true  
if all communications associated  
with active handles in the array have completed (this includes the  
case where no handle in the list is active).  
In this case, each status entry that corresponds to an active handle  
request  
is set to the status of the corresponding communication; if the request was  
allocated by a nonblocking communication call then it is deallocated, and  
the handle is set to   MPI_REQUEST_NULL.  
  
Each status entry that corresponds to a null or inactive  
handle is set to empty.  
  
  
Otherwise,  
 flag = false is returned, no request is modified  
and the values of the status entries are undefined.  
This is a local operation.  
  
  
Errors that occurred during the execution of  MPI_TESTALL  
are handled as errors in  MPI_WAITALL.  
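
As an illustration (not part of the standard text), MPI_TESTALL can be used to overlap local work with the completion of a whole group of requests; nreq, requests, statuses, done, and the hypothetical routine DO_LOCAL_WORK are assumed to be supplied by the caller:

   done = .FALSE.
   DO WHILE (.NOT. done)
      CALL MPI_TESTALL(nreq, requests, done, statuses, ierr)
      ! keep computing until every request in the group has completed
      IF (.NOT. done) CALL DO_LOCAL_WORK()
   END DO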
  
  
  
| MPI_WAITSOME (incount, array_of_requests, outcount, array_of_indices, array_of_statuses) | |
| IN incount | length of array_of_requests (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT outcount | number of completed requests (integer) | 
| OUT array_of_indices | array of indices of operations that completed (array of integers) | 
| OUT array_of_statuses | array of status objects for operations that completed (array of Status) | 

  int MPI_Waitsome(int incount, MPI_Request *array_of_requests, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)

  MPI_WAITSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUSES, IERROR)
      INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT, ARRAY_OF_INDICES(*), ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR

  { static int MPI::Request::Waitsome(int incount, MPI::Request array_of_requests[], int array_of_indices[], MPI::Status array_of_statuses[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static int MPI::Request::Waitsome(int incount, MPI::Request array_of_requests[], int array_of_indices[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

Waits until at least one of the operations associated with active handles in the list has completed. Returns in outcount the number of requests from the list array_of_requests that have completed. Returns in the first outcount locations of the array array_of_indices the indices of these operations (index within the array array_of_requests; the array is indexed from zero in C and from one in Fortran). Returns in the first outcount locations of the array array_of_statuses the status for these completed operations. If a request that completed was allocated by a nonblocking communication call, then it is deallocated, and the associated handle is set to MPI_REQUEST_NULL.
  
If the list contains no active handles, then the  
call returns immediately with  outcount =   MPI_UNDEFINED.  
  
When one or more of the communications completed by MPI_WAITSOME fail, it is desirable to return specific information on each communication.
The arguments  outcount,  
 array_of_indices and  array_of_statuses will be  
adjusted to indicate completion of all communications that have  
succeeded or failed.  The call will return the error code  
  MPI_ERR_IN_STATUS and the error field of each status  
returned will be set to indicate success or to indicate the specific error  
that occurred.  The call will return   MPI_SUCCESS if no request  
resulted in an error,  
and will return another error code if it failed  
for other reasons (such as invalid arguments).  In such cases, it will  
not update the error fields of the statuses.  
  
  
  
| MPI_TESTSOME (incount, array_of_requests, outcount, array_of_indices, array_of_statuses) | |
| IN incount | length of array_of_requests (non-negative integer) | 
| INOUT array_of_requests | array of requests (array of handles) | 
| OUT outcount | number of completed requests (integer) | 
| OUT array_of_indices | array of indices of operations that completed (array of integers) | 
| OUT array_of_statuses | array of status objects for operations that completed (array of Status) | 

  int MPI_Testsome(int incount, MPI_Request *array_of_requests, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)

  MPI_TESTSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES, ARRAY_OF_STATUSES, IERROR)
      INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT, ARRAY_OF_INDICES(*), ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR

  { static int MPI::Request::Testsome(int incount, MPI::Request array_of_requests[], int array_of_indices[], MPI::Status array_of_statuses[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

  { static int MPI::Request::Testsome(int incount, MPI::Request array_of_requests[], int array_of_indices[])  (binding deprecated, see Section Deprecated since MPI-2.2) }

Behaves like MPI_WAITSOME, except that it returns immediately. If no operation has completed it returns outcount = 0. If there is no active handle in the list it returns outcount = MPI_UNDEFINED.

 MPI_TESTSOME is a local operation, which returns  
immediately, whereas  
 MPI_WAITSOME will   
block until a communication completes, if it was  
passed a list that contains at least one active handle.  Both calls fulfill a  
 fairness requirement:  If a request for a receive repeatedly  
appears in a list of requests passed to  MPI_WAITSOME or  
 MPI_TESTSOME, and a matching send has been posted, then the receive  
will eventually succeed, unless the send is satisfied by another receive; and  
similarly for send requests.  
  
  
Errors that occur during the execution of MPI_TESTSOME are handled as for MPI_WAITSOME.
 
  
Advice to users.
The use of MPI_TESTSOME is likely to be more efficient than the use of MPI_TESTANY. The former returns information on all completed communications; with the latter, a new call is required for each communication that completes.
  
A server with multiple clients can use  MPI_WAITSOME so as not to  
starve any client.   Clients send messages to the server with service  
requests. The server calls  MPI_WAITSOME with one receive request  
for each client, and then handles all receives that completed.  
If a call to  MPI_WAITANY is used instead, then one client  
could starve while requests from another client always sneak in first.  
 ( End of advice to users.)   
Advice to implementors.
MPI_TESTSOME should complete as many pending communications as possible.
 ( End of advice to implementors.)  
   
  
  
  
 
 
Example.  Client-server code (starvation can occur).
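
(The code for this first example did not survive in this rendering. The following is a reconstruction rather than the standard's verbatim text, assuming it matches the MPI_WAITSOME version below except that the server waits with MPI_WAITANY.)

CALL MPI_COMM_SIZE(comm, size, ierr)
CALL MPI_COMM_RANK(comm, rank, ierr)
IF(rank .GT. 0) THEN         ! client code
    DO WHILE(.TRUE.)
       CALL MPI_ISEND(a, n, MPI_REAL, 0, tag, comm, request, ierr)
       CALL MPI_WAIT(request, status, ierr)
    END DO
ELSE         ! rank=0 -- server code
    DO i=1, size-1
       CALL MPI_IRECV(a(1,i), n, MPI_REAL, i, tag,
                      comm, request_list(i), ierr)
    END DO
    DO WHILE(.TRUE.)
       CALL MPI_WAITANY(size-1, request_list, index, status, ierr)
       CALL DO_SERVICE(a(1,index))        ! handle one message
       CALL MPI_IRECV(a(1, index), n, MPI_REAL, index, tag,
                      comm, request_list(index), ierr)
    END DO
END IF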

Example.  Same code, using MPI_WAITSOME.
 
CALL MPI_COMM_SIZE(comm, size, ierr) 
CALL MPI_COMM_RANK(comm, rank, ierr) 
IF(rank .GT. 0) THEN         ! client code 
    DO WHILE(.TRUE.) 
       CALL MPI_ISEND(a, n, MPI_REAL, 0, tag, comm, request, ierr) 
       CALL MPI_WAIT(request, status, ierr) 
    END DO 
ELSE         ! rank=0 -- server code 
    DO i=1, size-1 
       CALL MPI_IRECV(a(1,i), n, MPI_REAL, i, tag, 
                      comm, request_list(i), ierr) 
    END DO 
    DO WHILE(.TRUE.) 
       CALL MPI_WAITSOME(size-1, request_list, numdone, 
                        indices, statuses, ierr) 
       DO i=1, numdone 
          CALL DO_SERVICE(a(1, indices(i))) 
          CALL MPI_IRECV(a(1, indices(i)), n, MPI_REAL, indices(i), tag, 
                       comm, request_list(indices(i)), ierr) 
       END DO 
    END DO 
END IF 
 
   
  