15.4.4. Data Access with Shared File Pointers
MPI maintains exactly one shared file pointer per collective MPI_FILE_OPEN (shared among the processes in the communicator group). The current value of this pointer implicitly specifies the offset in the data access routines described in this section. These routines use and update only the shared file pointer maintained by MPI; the individual file pointers are neither used nor updated. The shared file pointer routines have the same semantics as the data access with explicit offset routines described in Section Data Access with Explicit Offsets, with the following modifications:
- the offset is defined to be the current value
of the MPI-maintained shared file pointer,
- the effect of multiple calls to shared file pointer routines
is defined to behave as if the calls were serialized, and
- the use of shared file pointer routines is erroneous unless
all processes use the same file view.
For the noncollective shared file pointer routines, the serialization ordering is not deterministic. If a program requires a specific order, the user must enforce it with other synchronization, e.g., by message passing (see the token-passing sketch below).
After a shared file pointer operation is initiated, the shared file pointer is updated to point to the next etype after the last one that will be accessed. The file pointer is updated relative to the current view of the file.
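The following is a minimal sketch (the file name "log.txt" and the message format are illustrative assumptions, not part of the standard) of enforcing rank order on noncollective shared-pointer writes by circulating a token: each process writes only after receiving the token from the previous rank.

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char line[64];
    int rank, size, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_File_open(MPI_COMM_WORLD, "log.txt",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Wait for the token from the previous rank before writing,
       so the otherwise nondeterministic serialization follows rank order. */
    if (rank > 0)
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    snprintf(line, sizeof(line), "rank %d reporting\n", rank);
    MPI_File_write_shared(fh, line, (int)strlen(line), MPI_CHAR,
                          MPI_STATUS_IGNORE);

    /* Pass the token on; our write has completed, so the next rank's
       data lands after ours. */
    if (rank < size - 1)
        MPI_Send(&token, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}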
15.4.4.1. Noncollective Operations
MPI_FILE_READ_SHARED(fh, buf, count, datatype, status)
  INOUT  fh         file handle (handle)
  OUT    buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    status     status object (status)
C binding
int MPI_File_read_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_read_shared_c(MPI_File fh, void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Status *status)
Fortran 2008 binding
MPI_File_read_shared(fh, buf, count, datatype, status, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..) :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_read_shared(fh, buf, count, datatype, status, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..) :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_READ_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
<type> BUF(*)
MPI_FILE_READ_SHARED reads a file
using the shared file pointer.
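A common use of the shared pointer is a self-scheduling work queue. The sketch below (the file name and record size are illustrative assumptions) has each process repeatedly pull the next 100-integer record; the shared pointer distributes records among the ranks first come, first served.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Status status;
    int record[100], nread;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "records.dat", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);

    /* Each call hands this process the next unread record; concurrent
       calls from other ranks are serialized by the shared pointer. */
    for (;;) {
        MPI_File_read_shared(fh, record, 100, MPI_INT, &status);
        MPI_Get_count(&status, MPI_INT, &nread);
        if (nread < 100)        /* short read: end of file reached */
            break;
        /* ... process the record ... */
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}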
MPI_FILE_WRITE_SHARED(fh, buf, count, datatype, status)
  INOUT  fh         file handle (handle)
  IN     buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    status     status object (status)
C binding
int MPI_File_write_shared(MPI_File fh, const void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_shared_c(MPI_File fh, const void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Status *status)
Fortran 2008 binding
MPI_File_write_shared(fh, buf, count, datatype, status, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN) :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_write_shared(fh, buf, count, datatype, status, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN) :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_WRITE_SHARED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
<type> BUF(*)
MPI_FILE_WRITE_SHARED writes a file
using the shared file pointer.
MPI_FILE_IREAD_SHARED(fh, buf, count, datatype, request)
  INOUT  fh         file handle (handle)
  OUT    buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    request    request object (handle)
C binding
int MPI_File_iread_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iread_shared_c(MPI_File fh, void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Request *request)
Fortran 2008 binding
MPI_File_iread_shared(fh, buf, count, datatype, request, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_iread_shared(fh, buf, count, datatype, request, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_IREAD_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
<type> BUF(*)
MPI_FILE_IREAD_SHARED is a nonblocking version
of MPI_FILE_READ_SHARED.
MPI_FILE_IWRITE_SHARED(fh, buf, count, datatype, request)
  INOUT  fh         file handle (handle)
  IN     buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    request    request object (handle)
C binding
int MPI_File_iwrite_shared(MPI_File fh, const void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iwrite_shared_c(MPI_File fh, const void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Request *request)
Fortran 2008 binding
MPI_File_iwrite_shared(fh, buf, count, datatype, request, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_iwrite_shared(fh, buf, count, datatype, request, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Request), INTENT(OUT) :: request
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_IWRITE_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
<type> BUF(*)
MPI_FILE_IWRITE_SHARED is a nonblocking version
of the MPI_FILE_WRITE_SHARED interface.
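The sketch below (the file name and buffer contents are illustrative assumptions) starts a shared-pointer write, overlaps it with computation, and completes it with MPI_WAIT. MPI_FILE_IREAD_SHARED is used the same way, except the buffer must not be read until the request completes.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Request request;
    double chunk[1024] = {0.0};

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Start the shared-pointer write; chunk must not be modified
       until the request completes. */
    MPI_File_iwrite_shared(fh, chunk, 1024, MPI_DOUBLE, &request);

    /* ... computation that does not touch chunk ... */

    MPI_Wait(&request, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}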
15.4.4.2. Collective Operations
The semantics of a collective access using a shared file pointer are that the accesses to the file will be in the order determined by the ranks of the processes within the group. For each process, the location in the file at which data is accessed is the position at which the shared file pointer would be after all processes whose ranks within the group are less than that of this process had accessed their data. In addition, in order to prevent subsequent shared offset accesses by the same processes from interfering with this collective access, the call might return only after all the processes within the group have initiated their accesses. When the call returns, the shared file pointer points to the next etype accessible, according to the file view used by all processes, after the last etype requested.
Advice to users. There may be some programs in which all processes in the group need to access the file using the shared file pointer, but the program may not require that data be accessed in order of process rank. In such programs, using the shared ordered routines (e.g., MPI_FILE_WRITE_ORDERED rather than MPI_FILE_WRITE_SHARED) may enable an implementation to optimize access, improving performance. (End of advice to users.)
Advice to implementors. Accesses to the data requested by all processes do not have to be serialized. Once all processes have issued their requests, locations within the file for all accesses can be computed, and accesses can proceed independently from each other, possibly in parallel. (End of advice to implementors.)
MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status)
  INOUT  fh         file handle (handle)
  OUT    buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    status     status object (status)
C binding
int MPI_File_read_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_read_ordered_c(MPI_File fh, void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Status *status)
Fortran 2008 binding
MPI_File_read_ordered(fh, buf, count, datatype, status, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..) :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_read_ordered(fh, buf, count, datatype, status, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..) :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_READ_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
<type> BUF(*)
MPI_FILE_READ_ORDERED is a collective version of the
MPI_FILE_READ_SHARED interface.
MPI_FILE_WRITE_ORDERED(fh, buf, count, datatype, status)
  INOUT  fh         file handle (handle)
  IN     buf        initial address of buffer (choice)
  IN     count      number of elements in buffer (integer)
  IN     datatype   datatype of each buffer element (handle)
  OUT    status     status object (status)
C binding
int MPI_File_write_ordered(MPI_File fh, const void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_ordered_c(MPI_File fh, const void *buf, MPI_Count count, MPI_Datatype datatype, MPI_Status *status)
Fortran 2008 binding
MPI_File_write_ordered(fh, buf, count, datatype, status, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN) :: buf
INTEGER, INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
MPI_File_write_ordered(fh, buf, count, datatype, status, ierror) !(_c)
TYPE(MPI_File), INTENT(IN) :: fh
TYPE(*), DIMENSION(..), INTENT(IN) :: buf
INTEGER(KIND=MPI_COUNT_KIND), INTENT(IN) :: count
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_WRITE_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
<type> BUF(*)
MPI_FILE_WRITE_ORDERED is a collective version of the
MPI_FILE_WRITE_SHARED interface.
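The sketch below (the file name and per-rank counts are illustrative assumptions) shows the ordered semantics: even though each rank writes a different number of elements, rank 0's data lands first in the file, then rank 1's, and so on.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    int rank, i, count, data[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes a different number of elements. */
    count = (rank % 4) + 1;
    for (i = 0; i < count; i++)
        data[i] = rank;

    MPI_File_open(MPI_COMM_WORLD, "ordered.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective: each rank's elements are placed after those of all
       lower-ranked processes, even though the counts differ. */
    MPI_File_write_ordered(fh, data, count, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}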
15.4.4.3. Seek
If MPI_MODE_SEQUENTIAL mode was specified when the file was opened, it is erroneous to call the following two routines (MPI_FILE_SEEK_SHARED and MPI_FILE_GET_POSITION_SHARED).
MPI_FILE_SEEK_SHARED(fh, offset, whence)
  INOUT  fh       file handle (handle)
  IN     offset   file offset (integer)
  IN     whence   update mode (state)
C binding
int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence)
Fortran 2008 binding
MPI_File_seek_shared(fh, offset, whence, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN) :: offset
INTEGER, INTENT(IN) :: whence
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_SEEK_SHARED(FH, OFFSET, WHENCE, IERROR)
INTEGER FH, WHENCE, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
MPI_FILE_SEEK_SHARED updates the shared file pointer according to whence, which has the following possible values:
- MPI_SEEK_SET: the pointer is set to offset,
- MPI_SEEK_CUR: the pointer is set to the current pointer position plus offset, and
- MPI_SEEK_END: the pointer is set to the end of file plus offset.
MPI_FILE_SEEK_SHARED is collective; all the processes in the communicator group associated with the file handle fh must call MPI_FILE_SEEK_SHARED with the same values for offset and whence.
The offset can be negative, which allows seeking backwards.
It is erroneous to seek to a negative position in the view.
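The sketch below (the file name is an illustrative assumption) collectively rewinds the shared file pointer to the start of the view; all ranks pass the same offset and whence.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);

    /* ... shared-pointer reads that advance the pointer ... */

    /* Collective rewind: the shared pointer now points to the first
       etype of the current view on every process. */
    MPI_File_seek_shared(fh, 0, MPI_SEEK_SET);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}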
MPI_FILE_GET_POSITION_SHARED(fh, offset)
  IN   fh       file handle (handle)
  OUT  offset   offset of shared pointer (integer)
C binding
int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset)
Fortran 2008 binding
MPI_File_get_position_shared(fh, offset, ierror)
TYPE(MPI_File), INTENT(IN) :: fh
INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT) :: offset
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
Fortran binding
MPI_FILE_GET_POSITION_SHARED(FH, OFFSET, IERROR)
INTEGER FH, IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
MPI_FILE_GET_POSITION_SHARED returns, in offset, the current position of the shared file pointer in etype units relative to the current view.
Advice to users. The offset can be used in a future call to MPI_FILE_SEEK_SHARED using whence = MPI_SEEK_SET to return to the current position. To set the displacement to the current file pointer position, first convert offset into an absolute byte position using MPI_FILE_GET_BYTE_OFFSET, then call MPI_FILE_SET_VIEW with the resulting displacement. (End of advice to users.)
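The sketch below (the file name and the view's etype and filetype are illustrative assumptions) follows the advice above: it captures the shared pointer position, converts it to an absolute byte displacement with MPI_FILE_GET_BYTE_OFFSET, and passes that displacement to MPI_FILE_SET_VIEW.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Offset pos, disp;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDWR,
                  MPI_INFO_NULL, &fh);

    /* Capture the shared pointer (in etype units of the current view),
       convert it to an absolute byte displacement, and re-declare the
       view so that it starts at that byte. */
    MPI_File_get_position_shared(fh, &pos);
    MPI_File_get_byte_offset(fh, pos, &disp);
    MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}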