52. Buffer Allocation and Usage

Up: Contents Next: Model Implementation of Buffered Mode Previous: Semantics of Point-to-Point Communication

A user may specify a buffer to be used for buffering messages sent in buffered mode. Buffering is done by the sender.

MPI_BUFFER_ATTACH(buffer, size)
IN buffer    initial buffer address (choice)
IN size      buffer size, in bytes (non-negative integer)

int MPI_Buffer_attach(void* buffer, int size)

MPI_BUFFER_ATTACH(BUFFER, SIZE, IERROR)
    <type> BUFFER(*)
    INTEGER SIZE, IERROR
void MPI::Attach_buffer(void* buffer, int size)
Provides to MPI a buffer in the user's memory to be used for buffering outgoing messages. The buffer is used only by messages sent in buffered mode. Only one buffer can be attached to a process at a time.

MPI_BUFFER_DETACH(buffer_addr, size)
OUT buffer_addr  initial buffer address (choice)
OUT size         buffer size, in bytes (non-negative integer)

int MPI_Buffer_detach(void* buffer_addr, int* size)

MPI_BUFFER_DETACH(BUFFER_ADDR, SIZE, IERROR)
    <type> BUFFER_ADDR(*)
    INTEGER SIZE, IERROR
int MPI::Detach_buffer(void*& buffer)
Detach the buffer currently associated with MPI. The call returns the address and the size of the detached buffer. This operation blocks until all messages currently in the buffer have been transmitted. Upon return of this function, the user may reuse or deallocate the space taken by the buffer.

Example. Calls to attach and detach buffers.

#define BUFFSIZE 10000 
int size; 
char *buff; 
MPI_Buffer_attach( malloc(BUFFSIZE), BUFFSIZE); 
/* a buffer of 10000 bytes can now be used by MPI_Bsend */ 
MPI_Buffer_detach( &buff, &size); 
/* Buffer size reduced to zero */ 
MPI_Buffer_attach( buff, size); 
/* Buffer of 10000 bytes available again */ 

Advice to users.

Even though the C functions MPI_Buffer_attach and MPI_Buffer_detach both have a first argument of type void*, these arguments are used differently: A pointer to the buffer is passed to MPI_Buffer_attach; the address of the pointer is passed to MPI_Buffer_detach, so that this call can return the pointer value. ( End of advice to users.)


Rationale.

Both arguments are defined to be of type void* (rather than void* and void**, respectively), so as to avoid complex type casts. E.g., in the last example, &buff, which is of type char**, can be passed as argument to MPI_Buffer_detach without type casting. If the formal parameter had type void** then we would need a type cast before and after the call. ( End of rationale.)

The statements made in this section describe the behavior of MPI for buffered-mode sends. When no buffer is currently associated, MPI behaves as if a zero-sized buffer is associated with the process.

MPI must provide as much buffering for outgoing messages as if outgoing message data were buffered by the sending process, in the specified buffer space, using a circular, contiguous-space allocation policy. We outline below a model implementation that defines this policy. MPI may provide more buffering, and may use a better buffer allocation algorithm than described below. On the other hand, MPI may signal an error whenever the simple buffering allocator described below would run out of space. In particular, if no buffer is explicitly associated with the process, then any buffered send may cause an error.

MPI does not provide mechanisms for querying or controlling buffering done by standard mode sends. It is expected that vendors will provide such information for their implementations.


Rationale.

There is a wide spectrum of possible implementations of buffered communication: buffering can be done at sender, at receiver, or both; buffers can be dedicated to one sender-receiver pair, or be shared by all communications; buffering can be done in real or in virtual memory; it can use dedicated memory, or memory shared by other processes; buffer space may be allocated statically or be changed dynamically; etc. It does not seem feasible to provide a portable mechanism for querying or controlling buffering that would be compatible with all these choices, yet provide meaningful information. ( End of rationale.)



MPI-2.0 of July 1, 2008
HTML Generated on July 6, 2008