Michael Ray, Yves C. Martin
Proceedings of SPIE - The International Society for Optical Engineering
The Blue Gene®/L (BG/L) supercomputer, with 65,536 dual-processor compute nodes, was designed from the ground up to support efficient execution of massively parallel message-passing programs. Part of this support is an optimized implementation of the Message Passing Interface (MPI), which leverages the hardware features of BG/L. MPI for BG/L is implemented on top of a more basic message-passing infrastructure called the message layer. The message layer can serve as the foundation for other higher-level libraries and can also be used directly by applications. MPI and the message layer are used in the two BG/L modes of operation: the coprocessor mode and the virtual node mode. Performance measurements show that our message-passing services deliver performance close to the hardware limits of the machine. They also show that dedicating one of the processors of a node to communication functions (coprocessor mode) greatly improves the message-passing bandwidth, whereas running two processes per compute node (virtual node mode) can have a positive impact on application performance. © Copyright 2005 by International Business Machines Corporation.
M.F. Cowlishaw
IBM Systems Journal
Rajiv Ramaswami, Kumar N. Sivarajan
IEEE/ACM Transactions on Networking
S.F. Fan, W.B. Yun, et al.
Proceedings of SPIE 1989