MPI: A Message-Passing Interface Standard
Version 2.1
Message Passing Interface Forum
This document describes the Message-Passing Interface (MPI)
standard, version 2.1.
The MPI standard includes point-to-point message-passing,
collective communications, group and communicator concepts,
process topologies, environmental management,
process creation and management, one-sided communications,
extended collective operations, external interfaces, I/O,
some miscellaneous topics, and a profiling interface.
Language bindings for C, C++ and Fortran are defined.
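As a brief, non-normative illustration of the C binding and the point-to-point
message-passing mentioned above, the following minimal sketch (assuming the
program is started with at least two processes) has process 0 send one integer
to process 1:

    #include <stdio.h>
    #include <mpi.h>

    /* Illustrative sketch only, not part of the standard text.
       Process 0 sends one integer to process 1 using blocking
       point-to-point communication. Run with at least two processes. */
    int main(int argc, char *argv[])
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }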
Technically, this version of the standard is based on
``MPI: A Message-Passing Interface Standard, June 12,
1995'' (MPI-1.1) from the MPI-1 Forum,
on ``MPI-2: Extensions to the Message-Passing Interface,
July, 1997'' (MPI-1.2 and MPI-2.0) from the MPI-2 Forum,
and on errata documents from the MPI Forum.
Historically, the standard evolved from
MPI-1.0 (June 1994) to MPI-1.1 (June 12, 1995), to
MPI-1.2 (July 18, 1997), which contains several clarifications and
additions and was published as part of the MPI-2 document, to
MPI-2.0 (July 18, 1997), which adds new functionality, to
MPI-1.3 (May 30, 2008), which for historical reasons combines the
MPI-1.1 and MPI-1.2 documents and several errata documents into a
single document, and finally to this document, MPI-2.1, which
combines all of the previous documents.
Additional clarifications and errata corrections
to MPI-2.0 are also included.
(c) 1993, 1994, 1995, 1996, 1997,
2008
University of Tennessee, Knoxville, Tennessee.
Permission to copy without fee all or part of this material is
granted, provided the University of Tennessee copyright notice and the
title of this document appear, and notice is given that copying is by
permission of the University of Tennessee.
Version 2.1: Mon Aug 4 09:37:31 2008. This document combines the previous documents
MPI-1.3 (May 30, 2008)
and MPI-2.0 (July 18, 1997).
Certain parts of MPI-2.0, such as some sections of Chapter 4,
Miscellany, and of Chapter 7, Extended Collective Operations, have
been merged into the chapters of MPI-1.3. Additional errata and
clarifications collected by the MPI Forum are also included in
this document.
Version 1.3: May 30, 2008. This document combines the previous
documents MPI-1.1 (June 12, 1995) and the MPI-1.2 Chapter in MPI-2
(July 18, 1997). Additional errata collected by the MPI Forum
referring to MPI-1.1 and MPI-1.2 are also included in this
document.
Version 2.0: July 18, 1997. Beginning after the release of
MPI-1.1, the MPI Forum began meeting to consider corrections and
extensions. MPI-2 focuses on process creation and
management, one-sided communications, extended collective
communications, external interfaces, and parallel I/O. A miscellany
chapter discusses items that do not fit elsewhere, in particular
language interoperability.
Version 1.2: July 18, 1997. The MPI-2 Forum introduced MPI-1.2 as
Chapter 3
in the standard ``MPI-2: Extensions to the Message-Passing
Interface'', July 18, 1997. This section contains clarifications
and minor corrections to Version 1.1 of the MPI Standard. The only
new function in MPI-1.2 is one for identifying to which version of
the MPI Standard the implementation conforms. There are small
differences between MPI-1 and MPI-1.1. There are very few differences
between MPI-1.1 and MPI-1.2, but large differences between MPI-1.2
and MPI-2.
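The version-inquiry function referred to here is MPI_GET_VERSION. As a brief,
non-normative sketch of its C binding, the following program reports the
version of the MPI Standard to which an implementation conforms; the constants
MPI_VERSION and MPI_SUBVERSION provide the same information at compile time.

    #include <stdio.h>
    #include <mpi.h>

    /* Illustrative sketch only: query the version of the MPI Standard
       to which the implementation conforms.  MPI_Get_version may be
       called before MPI_Init and after MPI_Finalize. */
    int main(void)
    {
        int version, subversion;

        MPI_Get_version(&version, &subversion);
        printf("Implementation conforms to MPI %d.%d\n",
               version, subversion);
        return 0;
    }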
Version 1.1: June, 1995. Beginning in March, 1995, the Message-Passing Interface Forum reconvened to
correct errors and make clarifications in the MPI document of May 5, 1994,
referred to below as Version 1.0. These discussions resulted in Version 1.1,
which is this document. The changes from Version 1.0 are minor. A version
of this document with all changes marked is available. This paragraph is an
example of a change.
Version 1.0: May, 1994. The Message-Passing Interface Forum (MPIF), with participation from over
40 organizations, has been meeting since January 1993 to discuss and
define a set of library interface standards for message
passing.
MPIF is not sanctioned or supported by any official
standards organization.
The goal of the Message-Passing Interface, simply stated, is to
develop a widely used
standard for writing message-passing programs.
As such the interface should
establish a practical, portable, efficient, and flexible standard
for message-passing.
This is the final report, Version 1.0, of
the Message-Passing Interface Forum. This document contains all the
technical features proposed for the interface. This copy of the draft
was processed by LaTeX on
May 5, 1994.
Please send comments on MPI to
mpi-comments@mpi-forum.org.
Your comment will be forwarded to
MPI Forum
committee members who will
attempt to respond.
Acknowledgments
This document represents the work of many people who have served on
the MPI Forum. The meetings have been attended by dozens of people
from many parts of the world. It is the hard and dedicated work of
this group that has led to the MPI standard.
The technical development was carried out by subgroups, whose work was
reviewed by the full committee. During the period of development of
the Message-Passing Interface (MPI), many people helped with this
effort.
Those who served as primary coordinators in MPI-1.0 and MPI-1.1 are:
- Jack Dongarra, David Walker, Conveners and Meeting Chairs
- Ewing Lusk, Bob Knighten, Minutes
- Marc Snir, William Gropp, Ewing Lusk, Point-to-Point Communications
- Al Geist, Marc Snir, Steve Otto, Collective Communications
- Steve Otto, Editor
- Rolf Hempel, Process Topologies
- Ewing Lusk, Language Binding
- William Gropp, Environmental Management
- James Cownie, Profiling
- Tony Skjellum, Lyndon Clarke, Marc Snir, Richard Littlefield, Mark Sears,
Groups, Contexts, and Communicators
- Steven Huss-Lederman, Initial Implementation Subset
The following list includes some of the active participants in
the MPI-1.0 and MPI-1.1 process not mentioned above.
Ed Anderson, Robert Babb, Joe Baron, Eric Barszcz, Scott Berryman, Rob Bjornson,
Nathan Doss, Anne Elster, Jim Feeney, Vince Fernando, Sam Fineberg, Jon Flower,
Daniel Frye, Ian Glendinning, Adam Greenberg, Robert Harrison, Leslie Hart,
Tom Haupt, Don Heller, Tom Henderson, Alex Ho, C.T. Howard Ho, Gary Howell,
John Kapenga, James Kohl, Susan Krauss, Bob Leary, Arthur Maccabe, Peter Madams,
Alan Mainwaring, Oliver McBryan, Phil McKinley, Charles Mosher, Dan Nessett,
Peter Pacheco, Howard Palmer, Paul Pierce, Sanjay Ranka, Peter Rigsbee,
Arch Robison, Erich Schikuta, Ambuj Singh, Alan Sussman, Robert Tomlinson,
Robert G. Voigt, Dennis Weeks, Stephen Wheat, Steve Zenith
The University of Tennessee and Oak Ridge National Laboratory
made the draft available by anonymous FTP and mail servers
and were instrumental in distributing the document.
The work on the MPI-1 standard was supported in part by ARPA and NSF
under grant ASC-9310330,
the National Science Foundation Science and
Technology Center Cooperative Agreement No. CCR-8809615,
and by the Commission of the European Community through Esprit project
P6643 (PPPE).
Introduction to MPI
Overview and Goals
Background of MPI-1.0
Background of MPI-1.1, MPI-1.2, and MPI-2.0
Background of MPI-1.3 and MPI-2.1
Who Should Use This Standard?
What Platforms Are Targets For Implementation?
What Is Included In The Standard?
What Is Not Included In The Standard?
Organization of this Document
MPI Terms and Conventions
Document Notation
Naming Conventions
Procedure Specification
Semantic Terms
Data Types
Opaque Objects
Array Arguments
State
Named Constants
Choice
Addresses
File Offsets
Language Binding
Deprecated Names and Functions
Fortran Binding Issues
C Binding Issues
C++ Binding Issues
Functions and Macros
Processes
Error Handling
Implementation Issues
Independence of Basic Runtime Routines
Interaction with Signals
Examples
Point-to-Point Communication
Introduction
Blocking Send and Receive Operations
Blocking Send
Message Data
Message Envelope
Blocking Receive
Return Status
Passing MPI_STATUS_IGNORE for Status
Data Type Matching and Data Conversion
Type Matching Rules
Type MPI_CHARACTER
Data Conversion
Communication Modes
Semantics of Point-to-Point Communication
Buffer Allocation and Usage
Model Implementation of Buffered Mode
Nonblocking Communication
Communication Request Objects
Communication Initiation
Communication Completion
Semantics of Nonblocking Communications
Multiple Completions
Non-destructive Test of status
Probe and Cancel
Persistent Communication Requests
Send-Receive
Null Processes
Datatypes
Derived Datatypes
Type Constructors with Explicit Addresses
Datatype Constructors
Subarray Datatype Constructor
Distributed Array Datatype Constructor
Address and Size Functions
Lower-Bound and Upper-Bound Markers
Extent and Bounds of Datatypes
True Extent of Datatypes
Commit and Free
Duplicating a Datatype
Use of General Datatypes in Communication
Correct Use of Addresses
Decoding a Datatype
Examples
Pack and Unpack
Canonical MPI_PACK and MPI_UNPACK
Collective Communication
Introduction and Overview
Communicator Argument
Specifics for Intracommunicator Collective Operations
Applying Collective Operations to Intercommunicators
Specifics for Intercommunicator Collective Operations
Barrier Synchronization
Broadcast
Example using MPI_BCAST
Gather
Examples using MPI_GATHER, MPI_GATHERV
Scatter
Examples using MPI_SCATTER, MPI_SCATTERV
Gather-to-all
Examples using MPI_ALLGATHER, MPI_ALLGATHERV
All-to-All Scatter/Gather
Global Reduction Operations
Reduce
Predefined Reduction Operations
Signed Characters and Reductions
MINLOC and MAXLOC
User-Defined Reduction Operations
Example of User-defined Reduce
All-Reduce
Reduce-Scatter
Scan
Inclusive Scan
Exclusive Scan
Example using MPI_SCAN
Correctness
Groups, Contexts, Communicators, and Caching
Introduction
Features Needed to Support Libraries
MPI's Support for Libraries
Basic Concepts
Groups
Contexts
Intra-Communicators
Predefined Intra-Communicators
Group Management
Group Accessors
Group Constructors
Group Destructors
Communicator Management
Communicator Accessors
Communicator Constructors
Communicator Destructors
Motivating Examples
Current Practice #1
Current Practice #2
(Approximate) Current Practice #3
Example #4
Library Example #1
Library Example #2
Inter-Communication
Inter-communicator Accessors
Inter-communicator Operations
Inter-Communication Examples
Example 1: Three-Group ``Pipeline''
Example 2: Three-Group ``Ring''
Example 3: Building Name Service for Intercommunication
Caching
Functionality
Communicators
Windows
Datatypes
Error Class for Invalid Keyval
Attributes Example
Naming Objects
Formalizing the Loosely Synchronous Model
Basic Statements
Models of Execution
Static communicator allocation
Dynamic communicator allocation
The General case
Process Topologies
Introduction
Virtual Topologies
Embedding in MPI
Overview of the Functions
Topology Constructors
Cartesian Constructor
Cartesian Convenience Function: MPI_DIMS_CREATE
General (Graph) Constructor
Topology Inquiry Functions
Cartesian Shift Coordinates
Partitioning of Cartesian structures
Low-Level Topology Functions
An Application Example
MPI Environmental Management
Implementation Information
Version Inquiries
Environmental Inquiries
Tag Values
Host Rank
IO Rank
Clock Synchronization
Memory Allocation
Error Handling
Error Handlers for Communicators
Error Handlers for Windows
Error Handlers for Files
Freeing Errorhandlers and Retrieving Error Strings
Error Codes and Classes
Error Classes, Error Codes, and Error Handlers
Timers and Synchronization
Startup
Allowing User Functions at Process Termination
Determining Whether MPI Has Finished
Portable MPI Process Startup
The Info Object
Process Creation and Management
Introduction
The Dynamic Process Model
Starting Processes
The Runtime Environment
Process Manager Interface
Processes in MPI
Starting Processes and Establishing Communication
Starting Multiple Executables and Establishing Communication
Reserved Keys
Spawn Example
Manager-worker Example, Using MPI_COMM_SPAWN.
Establishing Communication
Names, Addresses, Ports, and All That
Server Routines
Client Routines
Name Publishing
Reserved Key Values
Client/Server Examples
Simplest Example --- Completely Portable.
Ocean/Atmosphere - Relies on Name Publishing
Simple Client-Server Example.
Other Functionality
Universe Size
Singleton MPI_INIT
MPI_APPNUM
Releasing Connections
Another Way to Establish MPI Communication
One-Sided Communications
Introduction
Initialization
Window Creation
Window Attributes
Communication Calls
Put
Get
Examples
Accumulate Functions
Synchronization Calls
Fence
General Active Target Synchronization
Lock
Assertions
Miscellaneous Clarifications
Examples
Error Handling
Error Handlers
Error Classes
Semantics and Correctness
Atomicity
Progress
Registers and Compiler Optimizations
External Interfaces
Introduction
Generalized Requests
Examples
Associating Information with Status
MPI and Threads
General
Clarifications
Initialization
I/O
Introduction
Definitions
File Manipulation
Opening a File
Closing a File
Deleting a File
Resizing a File
Preallocating Space for a File
Querying the Size of a File
Querying File Parameters
File Info
Reserved File Hints
File Views
Data Access
Data Access Routines
Positioning
Synchronism
Coordination
Data Access Conventions
Data Access with Explicit Offsets
Data Access with Individual File Pointers
Data Access with Shared File Pointers
Noncollective Operations
Collective Operations
Seek
Split Collective Data Access Routines
File Interoperability
Datatypes for File Interoperability
External Data Representation: ``external32''
User-Defined Data Representations
Extent Callback
Datarep Conversion Functions
Matching Data Representations
Consistency and Semantics
File Consistency
Random Access vs. Sequential Files
Progress
Collective File Operations
Type Matching
Miscellaneous Clarifications
MPI_Offset Type
Logical vs. Physical File Layout
File Size
Examples
Asynchronous I/O
I/O Error Handling
I/O Error Classes
Examples
Double Buffering with Split Collective I/O
Subarray Filetype Constructor
Profiling Interface
Requirements
Discussion
Logic of the Design
Miscellaneous Control of Profiling
Examples
Profiler Implementation
MPI Library Implementation
Systems with Weak Symbols
Systems Without Weak Symbols
Complications
Multiple Counting
Linker Oddities
Multiple Levels of Interception
Deprecated Functions
Deprecated since MPI-2.0
Language Bindings
C++
Overview
Design
C++ Classes for MPI
Class Member Functions for MPI
Semantics
C++ Datatypes
Communicators
Exceptions
Mixed-Language Operability
Profiling
Fortran Support
Overview
Problems With Fortran Bindings for MPI
Problems Due to Strong Typing
Problems Due to Data Copying and Sequence Association
Special Constants
Fortran 90 Derived Types
A Problem with Register Optimization
Basic Fortran Support
Extended Fortran Support
The mpi Module
No Type Mismatch Problems for Subroutines with Choice Arguments
Additional Support for Fortran Numeric Intrinsic Types
Parameterized Datatypes with Specified Precision and Exponent Range
Support for Size-specific MPI Datatypes
Communication With Size-specific Types
Language Interoperability
Introduction
Assumptions
Initialization
Transfer of Handles
Status
MPI Opaque Objects
Datatypes
Callback Functions
Error Handlers
Reduce Operations
Addresses
Attributes
Extra State
Constants
Interlanguage Communication
Language Bindings Summary
Defined Values and Handles
Defined Constants
Types
Prototype definitions
Deprecated prototype definitions
Info Keys
Info Values
C Bindings
Point-to-Point Communication C Bindings
Datatypes C Bindings
Collective Communication C Bindings
Groups, Contexts, Communicators, and Caching C Bindings
Process Topologies C Bindings
MPI Environmental Management C Bindings
The Info Object C Bindings
Process Creation and Management C Bindings
One-Sided Communications C Bindings
External Interfaces C Bindings
I/O C Bindings
Language Bindings C Bindings
Profiling Interface C Bindings
Deprecated C Bindings
Fortran Bindings
Point-to-Point Communication Fortran Bindings
Datatypes Fortran Bindings
Collective Communication Fortran Bindings
Groups, Contexts, Communicators, and Caching Fortran Bindings
Process Topologies Fortran Bindings
MPI Environmental Management Fortran Bindings
The Info Object Fortran Bindings
Process Creation and Management Fortran Bindings
One-Sided Communications Fortran Bindings
External Interfaces Fortran Bindings
I/O Fortran Bindings
Language Bindings Fortran Bindings
Profiling Interface Fortran Bindings
Deprecated Fortran Bindings
C++ Bindings
Point-to-Point Communication C++ Bindings
Datatypes C++ Bindings
Collective Communication C++ Bindings
Groups, Contexts, Communicators, and Caching C++ Bindings
Process Topologies C++ Bindings
MPI Environmental Management C++ Bindings
The Info Object C++ Bindings
Process Creation and Management C++ Bindings
One-Sided Communications C++ Bindings
External Interfaces C++ Bindings
I/O C++ Bindings
Language Bindings C++ Bindings
Profiling Interface C++ Bindings
Deprecated C++ Bindings
C++ Bindings on all MPI Classes
Construction / Destruction
Copy / Assignment
Comparison
Inter-language Operability
Change-Log
Changes from Version 2.0 to Version 2.1
Bibliography
Index