20.1.6. MPI for Different Fortran Standard Versions
Up: Support for Fortran
Next: Requirements on Fortran Compilers
Previous: Interface Specifications, Procedure Names, and the Profiling Interface
This section describes which Fortran interface functionality
can be provided for different versions of the Fortran standard.
- For Fortran 77 with some extensions:
- MPI identifiers may be up to 30 characters (31 with the
profiling interface).
- MPI identifiers may contain underscores after the first character.
- An MPI subroutine with a choice argument may be
called with different argument types.
- Although not required by the MPI standard, the INCLUDE statement
should be available for including mpif.h into the
user application source code.
Only MPI-1.1, MPI-1.2, and MPI-1.3 can be implemented.
The use of absolute addresses from MPI_ADDRESS
and MPI_BOTTOM may cause problems if an address does not
fit into the memory space provided by an INTEGER.
(In MPI-2.0 this problem is solved with MPI_GET_ADDRESS,
but not for Fortran 77.)
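The Fortran 77 usage described above can be sketched as follows (a fixed-form fragment; the buffer and variable names are illustrative). Note how MPI_ADDRESS returns the absolute address in a default INTEGER, which is the overflow limitation mentioned above:

```fortran
      PROGRAM F77MPI
      INCLUDE 'mpif.h'
      INTEGER IERR, RANK, IADDR
      DOUBLE PRECISION BUF(100)
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
C     MPI_ADDRESS returns the absolute address of BUF in a default
C     INTEGER; on systems with a 64-bit address space this value may
C     not fit (solved in MPI-2.0 by MPI_GET_ADDRESS, but that routine
C     is not available to Fortran 77).
      CALL MPI_ADDRESS(BUF, IADDR, IERR)
      CALL MPI_FINALIZE(IERR)
      END
```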
- For Fortran 90:
The major additional features that are needed from Fortran 90 are:
- The MODULE and INTERFACE concept.
- The KIND= and SELECTED_XXX_KIND concept.
- Fortran derived TYPEs and the SEQUENCE attribute.
- The OPTIONAL attribute for dummy arguments.
- Cray pointers, which are a nonstandard compiler extension,
are needed for the use of MPI_ALLOC_MEM.
With these features, MPI-1.1 -- MPI-2.2 can be implemented
without restrictions.
MPI-3.0 and later can be implemented with some restrictions.
The Fortran support methods are abbreviated with
S1 = the mpi_f08 module,
S2 = the mpi module, and
S3 = the mpif.h include file.
If not stated otherwise, restrictions exist for each method
that prevent implementing the complete semantics of MPI.
- MPI_SUBARRAYS_SUPPORTED equals .FALSE.,
i.e., subscript triplets and noncontiguous subarrays cannot be used
as buffers in nonblocking routines, RMA, or split-collective I/O.
- S1, S2, and S3 can be implemented,
but for S1, only a preliminary implementation is possible.
- In this preliminary interface of S1, the following changes are
necessary:
- TYPE(*), DIMENSION(..) is substituted by
nonstandardized extensions like !$PRAGMA IGNORE_TKR.
- The ASYNCHRONOUS attribute is omitted.
- PROCEDURE(...) callback declarations are
substituted by EXTERNAL.
- The specific procedure names are specified in Section Interface Specifications, Procedure Names, and the Profiling Interface.
- Due to the rules specified in Section Interface Specifications, Procedure Names, and the Profiling Interface,
choice buffer declarations should be implemented only with
nonstandardized extensions like !$PRAGMA IGNORE_TKR
(as long as F2008 with TS 29113 or Fortran 2018 is not available).
In S2 and S3:
Without such extensions, routines with choice buffers
should be provided with an implicit interface,
instead of overloading with a different MPI function
for each possible buffer type (as mentioned in
Section Problems Due to Strong Typing).
Such overloading would also imply restrictions
for passing Fortran derived types as choice buffer, see also
Section Fortran Derived Types.
Only in S1:
The implicit interfaces for routines with choice buffer arguments imply
that the ierror argument cannot be defined as OPTIONAL.
For this reason, it is recommended not to provide the mpi_f08
module if such an extension is not available.
- The ASYNCHRONOUS attribute cannot be used in applications
to protect buffers in nonblocking MPI calls (S1--S3).
- The TYPE(C_PTR) binding of the
MPI_ALLOC_MEM and MPI_WIN_ALLOCATE routines
is not available.
- In S1 and S2,
the definition of the handle types (e.g., TYPE(MPI_Comm))
and the status type TYPE(MPI_Status)
must be modified: the SEQUENCE attribute must be used
instead of BIND(C) (which is not available in Fortran 90/95).
This restriction implies that the application must be fully recompiled
if one switches to an MPI library for Fortran 2003 and later
because the internal memory size of the handles may have changed.
For this reason, an implementor may choose not to provide
the mpi_f08 module for Fortran 90 compilers.
In this case, the mpi_f08 handle types and all routines, constants and
types related to TYPE(MPI_Status)
(see Section Status)
are also not available in the mpi module and mpif.h.
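The Fortran 90 substitution described above can be sketched for the communicator handle (the INTEGER component MPI_VAL follows the mpi_f08 convention; the exact internal layout is implementation dependent):

```fortran
! Fortran 2003 and later (BIND(C) not available in Fortran 90/95):
!   TYPE, BIND(C) :: MPI_Comm
!     INTEGER :: MPI_VAL
!   END TYPE MPI_Comm

! Fortran 90 substitute using SEQUENCE, as described above; the
! memory layout may differ, so applications must be fully recompiled
! when switching to a library built for Fortran 2003 and later:
TYPE MPI_Comm
  SEQUENCE
  INTEGER :: MPI_VAL
END TYPE MPI_Comm
```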
- For Fortran 95:
The quality of the MPI interface and the restrictions
are the same as with Fortran 90.
- For Fortran 2003:
The major features that are needed from Fortran 2003 are:
- Interoperability with C, i.e.,
- BIND(C) derived types.
- The ISO_C_BINDING intrinsic type C_PTR
and routine C_F_POINTER.
- The ability to define an ABSTRACT INTERFACE
and to use it for PROCEDURE dummy arguments.
- The ability to overload the operators .EQ. and
.NE. to allow the comparison of derived types (used in
MPI-3.0 and later for MPI handles).
- The ASYNCHRONOUS attribute is available to protect
Fortran asynchronous I/O.
This feature is not yet used by MPI, but it is the basis
for the enhancement for MPI communication in TS 29113.
With these features (but still without the features of TS 29113),
MPI-1.1 -- MPI-2.2 can be implemented
without restrictions, but with one enhancement:
- The user application can use TYPE(C_PTR) together with
MPI_ALLOC_MEM as long as MPI_ALLOC_MEM
is defined with an implicit interface because a
C_PTR and an INTEGER(KIND=MPI_ADDRESS_KIND)
argument must both map to a void * argument.
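The enhancement described above can be sketched as follows, assuming MPI_ALLOC_MEM has an implicit interface so that a TYPE(C_PTR) actual argument maps to the void * formal (the array size and names are illustrative):

```fortran
! Fragment of a procedure using the mpi module (or mpif.h) together
! with ISO_C_BINDING from Fortran 2003:
USE, INTRINSIC :: ISO_C_BINDING
TYPE(C_PTR) :: cptr
REAL, POINTER :: array(:)
INTEGER :: ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size

! Request memory for 100 default REALs (size in bytes, illustrative):
size = 100 * 4
CALL MPI_ALLOC_MEM(size, MPI_INFO_NULL, cptr, ierr)
! Convert the returned C pointer into a usable Fortran POINTER:
CALL C_F_POINTER(cptr, array, (/ 100 /))
```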
MPI-3.0 and later can be implemented with the following restrictions:
- MPI_SUBARRAYS_SUPPORTED equals .FALSE..
- For S1, only a preliminary implementation is possible.
The following changes are necessary:
- TYPE(*), DIMENSION(..) is substituted by
nonstandardized extensions like !$PRAGMA IGNORE_TKR.
- The specific procedure names are specified in Section Interface Specifications, Procedure Names, and the Profiling Interface.
- With S1, the ASYNCHRONOUS attribute is required as specified in
the corresponding Fortran interfaces.
With S2 and S3, the implementation can also add
this attribute if explicit interfaces are used.
- The ASYNCHRONOUS Fortran attribute can be used in applications
to try to protect buffers in nonblocking MPI calls,
but the protection can work only if the compiler
is able to protect asynchronous Fortran I/O and makes no distinction
between such asynchronous Fortran I/O and MPI communication.
- The TYPE(C_PTR) binding of the
MPI_ALLOC_MEM, MPI_WIN_ALLOCATE,
MPI_WIN_ALLOCATE_SHARED, and
MPI_WIN_SHARED_QUERY routines
can be used only for Fortran types that are C compatible.
- The same restriction as for Fortran 90 applies if
nonstandardized extensions like !$PRAGMA IGNORE_TKR
are not available.
- For Fortran 2008 with TS 29113 and later, and
for Fortran 2003 with TS 29113:
The major features that are needed from TS 29113 are:
- TYPE(*), DIMENSION(..) is available.
- The ASYNCHRONOUS attribute is extended to protect also
nonblocking MPI communication.
- The array dummy argument of the ISO_C_BINDING intrinsic
C_F_POINTER is not restricted to Fortran types for which
a corresponding type in C exists.
Using these features, MPI-3.0 and later can be implemented without any restrictions.
- With S1,
MPI_SUBARRAYS_SUPPORTED equals .TRUE..
The ASYNCHRONOUS attribute can be used
to protect buffers in nonblocking MPI calls.
The TYPE(C_PTR) binding of the
MPI_ALLOC_MEM, MPI_WIN_ALLOCATE,
MPI_WIN_ALLOCATE_SHARED, and
MPI_WIN_SHARED_QUERY routines
can be used for any Fortran type.
- With S2 and S3,
the value of MPI_SUBARRAYS_SUPPORTED is implementation dependent.
A high quality implementation will also provide
MPI_SUBARRAYS_SUPPORTED set to .TRUE.
and will use the ASYNCHRONOUS
attribute in the same way as in S1.
- If nonstandardized extensions like !$PRAGMA IGNORE_TKR
are not available, then S2 must be implemented with
TYPE(*), DIMENSION(..).
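With TS 29113, a choice buffer dummy argument no longer needs compiler directives; a sketch of such an interface (the routine name and argument list are illustrative of the pattern, not an actual mpi_f08 interface):

```fortran
INTERFACE
  ! Illustrative choice-buffer interface under TS 29113; the real
  ! mpi_f08 interfaces carry further arguments and attributes.
  SUBROUTINE example_isend(buf, count)
    ! TYPE(*), DIMENSION(..) accepts any type, kind, and rank,
    ! including subscript triplets and noncontiguous subarrays;
    ! ASYNCHRONOUS protects the buffer across the nonblocking call.
    TYPE(*), DIMENSION(..), ASYNCHRONOUS :: buf
    INTEGER, INTENT(IN) :: count
  END SUBROUTINE example_isend
END INTERFACE
```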
Advice to implementors.
If MPI_SUBARRAYS_SUPPORTED equals .FALSE.,
the choice argument may be implemented with an
explicit interface using compiler directives, for example:
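A sketch of such a directive-based interface (the routine name is illustrative; the directive spellings are vendor specific, and each compiler honors only its own while treating the others as comments):

```fortran
INTERFACE
  SUBROUTINE example_send(buf, count)
    ! Vendor-specific directives instructing the compiler to skip
    ! type, kind, and rank (TKR) checking for buf:
    !DEC$ ATTRIBUTES NO_ARG_CHECK :: buf
    !$PRAGMA IGNORE_TKR buf
    !DIR$ IGNORE_TKR buf
    !IBM* IGNORE_TKR buf
    REAL, DIMENSION(*) :: buf
    INTEGER, INTENT(IN) :: count
  END SUBROUTINE example_send
END INTERFACE
```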
( End of advice to implementors.)
(Unofficial) MPI-4.1 of November 2, 2023
HTML Generated on November 19, 2023