MPI_WIN_ALLOCATE_SHARED(size, disp_unit, info, comm, baseptr, win) | |
IN size | size of local window in bytes (non-negative integer) |
IN disp_unit | local unit size for displacements, in bytes (positive integer) |
IN info | info argument (handle) |
IN comm | intra-communicator (handle) |
OUT baseptr | address of local allocated window segment (choice) |
OUT win | window object returned by the call (handle) |
int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, void *baseptr, MPI_Win *win)
MPI_Win_allocate_shared(size, disp_unit, info, comm, baseptr, win, ierror)
USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_PTR
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: size
INTEGER, INTENT(IN) :: disp_unit
TYPE(MPI_Info), INTENT(IN) :: info
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(C_PTR), INTENT(OUT) :: baseptr
TYPE(MPI_Win), INTENT(OUT) :: win
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_ALLOCATE_SHARED(SIZE, DISP_UNIT, INFO, COMM, BASEPTR, WIN, IERROR)
INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR
INTEGER(KIND=MPI_ADDRESS_KIND) SIZE, BASEPTR
This is a collective call executed by all processes in the group of comm. On each process, it allocates memory of at least size bytes that is shared among all processes in comm, and returns a pointer to the locally allocated segment in baseptr that can be used for load/store accesses on the calling process. The locally allocated memory can be the target of load/store accesses by remote processes; the base pointers for other processes can be queried using the function MPI_WIN_SHARED_QUERY. The call also returns a window object that can be used by all processes in comm to perform RMA operations.

The size argument may be different at each process and size = 0 is valid. It is the user's responsibility to ensure that the communicator comm represents a group of processes that can create a shared memory segment that can be accessed by all processes in the group.

The discussions of rationales for MPI_ALLOC_MEM and MPI_FREE_MEM in Section Memory Allocation also apply to MPI_WIN_ALLOCATE_SHARED; in particular, see the rationale in Section Memory Allocation for an explanation of the type used for baseptr.

The allocated memory is contiguous across process ranks unless the info key alloc_shared_noncontig is specified. Contiguous across process ranks means that the first address in the memory segment of process i is consecutive with the last address in the memory segment of process i-1. This may enable the user to calculate remote address offsets with local information only.
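Under the default contiguous layout, a process can compute the base address of any peer's segment from local information alone: rank r's segment starts at rank 0's base address plus the sum of the sizes of all lower ranks. A minimal sketch of this arithmetic, with no MPI calls; the function name and the sizes array are illustrative stand-ins for values the application would already know (e.g., gathered at allocation time):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: under the default contiguous layout, rank i's segment starts
 * immediately after rank i-1's. Given the base address of rank 0's segment
 * and the per-rank sizes (in bytes), compute rank r's base address.
 * Illustrative helper, not part of MPI. */
static char *segment_base(char *rank0_base, const size_t *sizes, int r)
{
    ptrdiff_t offset = 0;
    for (int i = 0; i < r; i++)
        offset += (ptrdiff_t)sizes[i];   /* sum of preceding segments */
    return rank0_base + offset;
}
```

Note that a rank passing size = 0 contributes nothing to the offsets, so its segment base coincides with that of the next rank.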
If the Fortran compiler provides TYPE(C_PTR), then the following generic interface must be provided in the mpi module and should be provided in mpif.h through overloading, i.e., with the same routine name as the routine with INTEGER(KIND=MPI_ADDRESS_KIND) BASEPTR, but with a different specific procedure name:
INTERFACE MPI_WIN_ALLOCATE_SHARED
    SUBROUTINE MPI_WIN_ALLOCATE_SHARED(SIZE, DISP_UNIT, INFO, COMM, &
                BASEPTR, WIN, IERROR)
        IMPORT :: MPI_ADDRESS_KIND
        INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR
        INTEGER(KIND=MPI_ADDRESS_KIND) SIZE, BASEPTR
    END SUBROUTINE
    SUBROUTINE MPI_WIN_ALLOCATE_SHARED_CPTR(SIZE, DISP_UNIT, INFO, COMM, &
                BASEPTR, WIN, IERROR)
        USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_PTR
        IMPORT :: MPI_ADDRESS_KIND
        INTEGER :: DISP_UNIT, INFO, COMM, WIN, IERROR
        INTEGER(KIND=MPI_ADDRESS_KIND) :: SIZE
        TYPE(C_PTR) :: BASEPTR
    END SUBROUTINE
END INTERFACE

The base procedure name of this overloaded function is MPI_WIN_ALLOCATE_SHARED_CPTR. The implied specific procedure names are described in Section Interface Specifications, Procedure Names, and the Profiling Interface.
The info argument can be used to specify hints similar to the info argument for MPI_WIN_CREATE, MPI_WIN_ALLOCATE, and MPI_ALLOC_MEM. The additional info key alloc_shared_noncontig allows the library to optimize the layout of the shared memory segments in memory.
Advice to users. If the info key alloc_shared_noncontig is not set to true, the allocation strategy is to allocate contiguous memory across process ranks. This may limit the performance on some architectures because it does not allow the implementation to modify the data layout (e.g., padding to reduce access latency). (End of advice to users.)
Advice to implementors. If the user sets the info key alloc_shared_noncontig to true, the implementation can allocate the memory requested by each process in a location that is close to this process. This can be achieved by padding or allocating memory in special memory segments. Both techniques may make the address space across consecutive ranks noncontiguous. (End of advice to implementors.)
The consistency of load/store accesses from/to the shared memory as observed by the user program depends on the architecture. A consistent view can be created in the unified memory model (see Section Memory Model) by utilizing the window synchronization functions (see Section Synchronization Calls) or explicitly completing outstanding store accesses (e.g., by calling MPI_WIN_FLUSH). MPI does not define semantics for accessing shared memory windows in the separate memory model.
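The allocation, query, and synchronization calls above can be combined as follows. This is a minimal sketch, assuming an MPI-3 implementation, the unified memory model, and that all ranks of MPI_COMM_WORLD run on one node (otherwise the communicator should first be split with MPI_Comm_split_type and MPI_COMM_TYPE_SHARED); it requires an MPI environment (compile with mpicc, run with mpirun):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: each rank allocates one double in a shared window, stores to its
 * own element, and after a fence reads its left neighbor's element directly
 * through the shared mapping returned by MPI_Win_shared_query. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *mine;
    MPI_Win win;
    /* Assumes MPI_COMM_WORLD can map a shared segment (single node). */
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &mine, &win);

    *mine = (double)rank;       /* local store into my segment */
    MPI_Win_fence(0, win);      /* synchronize so all stores are visible */

    int left = (rank + nprocs - 1) % nprocs;
    MPI_Aint seg_size;
    int seg_disp;
    double *leftptr;
    MPI_Win_shared_query(win, left, &seg_size, &seg_disp, &leftptr);
    printf("rank %d sees neighbor value %.0f\n", rank, *leftptr); /* load */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

The fence is what makes the neighbor's store visible here; without it (or an equivalent synchronization such as MPI_WIN_FLUSH plus an ordering mechanism), the load from leftptr is not guaranteed to observe the updated value.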
MPI_WIN_SHARED_QUERY(win, rank, size, disp_unit, baseptr) | |
IN win | shared memory window object (handle) |
IN rank | rank in the group of window win (non-negative integer) or MPI_PROC_NULL |
OUT size | size of the window segment (non-negative integer) |
OUT disp_unit | local unit size for displacements, in bytes (positive integer) |
OUT baseptr | address for load/store access to window segment (choice) |
int MPI_Win_shared_query(MPI_Win win, int rank, MPI_Aint *size, int *disp_unit, void *baseptr)
MPI_Win_shared_query(win, rank, size, disp_unit, baseptr, ierror)
USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_PTR
TYPE(MPI_Win), INTENT(IN) :: win
INTEGER, INTENT(IN) :: rank
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT) :: size
INTEGER, INTENT(OUT) :: disp_unit
TYPE(C_PTR), INTENT(OUT) :: baseptr
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_WIN_SHARED_QUERY(WIN, RANK, SIZE, DISP_UNIT, BASEPTR, IERROR)
INTEGER WIN, RANK, DISP_UNIT, IERROR
INTEGER (KIND=MPI_ADDRESS_KIND) SIZE, BASEPTR
This function queries the process-local address for remote memory segments created with MPI_WIN_ALLOCATE_SHARED. This function can return different process-local addresses for the same physical memory on different processes. The returned memory can be used for load/store accesses subject to the constraints defined in Section Semantics and Correctness.

This function can only be called with windows of flavor MPI_WIN_FLAVOR_SHARED. If the passed window is not of flavor MPI_WIN_FLAVOR_SHARED, the error MPI_ERR_RMA_FLAVOR is raised. When rank is MPI_PROC_NULL, the pointer, disp_unit, and size returned are the pointer, disp_unit, and size of the memory segment belonging to the lowest rank that specified size > 0. If all processes in the group attached to the window specified size = 0, then the call returns size = 0 and a baseptr as if MPI_ALLOC_MEM was called with size = 0.
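The MPI_PROC_NULL rule above amounts to a simple selection over the per-rank sizes: pick the lowest rank whose segment is non-empty, or fall back to the size = 0 result when there is none. A small sketch of just that rule; the helper is hypothetical and not an MPI function:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the MPI_PROC_NULL rule for MPI_WIN_SHARED_QUERY: the result
 * describes the segment of the lowest rank whose size is greater than zero.
 * Returns that rank, or -1 when every rank specified size = 0 (in which
 * case the real call returns size = 0 and a baseptr as if MPI_ALLOC_MEM
 * had been called with size = 0). Illustrative helper, not part of MPI. */
static int lowest_nonempty_rank(const size_t *sizes, int nprocs)
{
    for (int r = 0; r < nprocs; r++)
        if (sizes[r] > 0)
            return r;       /* first rank with a non-empty segment */
    return -1;              /* all ranks passed size = 0 */
}
```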
If the Fortran compiler provides TYPE(C_PTR), then the following generic interface must be provided in the mpi module and should be provided in mpif.h through overloading, i.e., with the same routine name as the routine with INTEGER(KIND=MPI_ADDRESS_KIND) BASEPTR, but with a different specific procedure name:
INTERFACE MPI_WIN_SHARED_QUERY
    SUBROUTINE MPI_WIN_SHARED_QUERY(WIN, RANK, SIZE, DISP_UNIT, &
                BASEPTR, IERROR)
        IMPORT :: MPI_ADDRESS_KIND
        INTEGER WIN, RANK, DISP_UNIT, IERROR
        INTEGER(KIND=MPI_ADDRESS_KIND) SIZE, BASEPTR
    END SUBROUTINE
    SUBROUTINE MPI_WIN_SHARED_QUERY_CPTR(WIN, RANK, SIZE, DISP_UNIT, &
                BASEPTR, IERROR)
        USE, INTRINSIC :: ISO_C_BINDING, ONLY : C_PTR
        IMPORT :: MPI_ADDRESS_KIND
        INTEGER :: WIN, RANK, DISP_UNIT, IERROR
        INTEGER(KIND=MPI_ADDRESS_KIND) :: SIZE
        TYPE(C_PTR) :: BASEPTR
    END SUBROUTINE
END INTERFACE

The base procedure name of this overloaded function is MPI_WIN_SHARED_QUERY_CPTR. The implied specific procedure names are described in Section Interface Specifications, Procedure Names, and the Profiling Interface.