Tony Hey: "MPI: Past, Present and Future."

Wednesday October 3rd, 9:15

Abstract. This talk will trace the origins of MPI from the early message-passing, distributed-memory parallel computers of the 1980s to today's parallel supercomputers. In those early days, parallel computing companies implemented proprietary message-passing libraries to support distributed-memory parallel programs. The market was also highly unstable: parallel computing companies could come and go, and each demise took with it a great deal of effort invested in parallel programs written to that company's specific call specifications. In January 1992, Geoffrey Fox and Ken Kennedy initiated a community effort around Fortran D, a precursor to High Performance Fortran and a high-level data-parallel language that compiled down to a distributed-memory architecture. In the event, HPF proved an over-ambitious goal: what was clearly achievable was a message-passing standard that enabled portability across a variety of distributed-memory message-passing machines. In Europe there was enthusiasm for the PARMACS libraries; in the US, PVM was gaining adherents for distributed computing. For these reasons, in November 1992 Jack Dongarra and David Walker from the USA and Rolf Hempel and Tony Hey from Europe wrote an initial draft of the MPI standard. After a Birds of a Feather session at the 1992 Supercomputing Conference, Bill Gropp and Rusty Lusk from Argonne volunteered to create an open-source implementation of the emerging MPI standard. This proved crucial in accelerating uptake of the community-based standard, as did support from IBM, Intel and Meiko. Because the MPI standardization process needed to converge to agreement in a little over a year, the final agreed version of MPI contains more communication calls than most users now require. A later standardization process extended the functionality of MPI as MPI-2.

Where are we now? It is clear that MPI provides effective portability for data-parallel, distributed-memory message-passing programs. Moreover, such MPI programs can scale to large numbers of processors. MPI therefore retains its appeal for closely coupled distributed computing, and the rise of HPC clusters as a local resource has made MPI ubiquitous among serious parallel programmers. However, there are two trends that may limit the usefulness of MPI in the future. The first is the rise of the Web and of Web Services as a paradigm for distributed, service-oriented computing: in principle, Web Service protocols that expose functionality as a service offer the prospect of building more robust software for distributed systems. The second trend is the move towards multi-core processors, as semiconductor manufacturers are finding that they can no longer increase the clock speed as the feature size continues to shrink. Thus, although Moore's Law will continue for perhaps a decade or more, in the sense that the feature size will keep shrinking, the accompanying increases in clock speed will no longer be possible. For this reason, 2-, 4- and 8-core chips, in which the processor is replicated several times, are already becoming commonplace. This means that any significant performance increase will depend entirely on the programmer's ability to exploit parallelism in their applications. The talk will end by reviewing these trends and examining the applicability of MPI in the future.
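By way of illustration (not part of the original abstract), the following is a minimal sketch in C of the message-passing style that MPI standardized: every process calls MPI_Init and MPI_Finalize, discovers its rank within a communicator, and exchanges data through explicit MPI_Send and MPI_Recv calls. It assumes a standard MPI installation (compile with mpicc, launch with mpiexec); all routines used are standard MPI calls.

    /* Minimal MPI sketch: rank 0 sends a message that every
       other rank receives and prints.
       Compile: mpicc hello_mpi.c -o hello_mpi
       Run:     mpiexec -n 4 ./hello_mpi                        */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char msg[64];

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count   */

        if (rank == 0) {
            strcpy(msg, "hello from rank 0");
            for (int dest = 1; dest < size; dest++)
                MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR,
                         dest, 0, MPI_COMM_WORLD);
        } else {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank %d of %d received: %s\n", rank, size, msg);
        }

        MPI_Finalize();                       /* shut down cleanly */
        return 0;
    }

The same source runs unchanged on any conforming MPI implementation, which is precisely the portability across distributed-memory machines that the standard set out to provide.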

About the author. (will be updated soon)

 
