TY - JOUR
T1 - Adaptive-CoMPI: Enhancing MPI-based applications performance and scalability by using adaptive compression.
JF - International Journal of High Performance Computing Applications
PB - SAGE
Y1 - 2010
A1 - Rosa Filgueira
A1 - David E. Singh
A1 - Alejandro Calderón
A1 - Félix García Carballeira
A1 - Jesús Carretero
AB - This paper presents an optimization of MPI communication, called Adaptive-CoMPI, based on runtime compression of MPI messages exchanged by applications. The technique developed can be used for any application, because its implementation is transparent for the user, and integrates different compression algorithms for both MPI collective and point-to-point primitives. Furthermore, compression is turned on and off and the most appropriate compression algorithms are selected at runtime, depending on the characteristics of each message, the network behavior, and compression algorithm behavior, following a runtime adaptive strategy. Our system can be optimized for a specific application, through a guided strategy, to reduce the runtime strategy overhead. Adaptive-CoMPI has been validated using several MPI benchmarks and real HPC applications. Results show that, in most cases, by using adaptive compression, communication time is reduced, enhancing application performance and scalability.
VL - 25
IS - 3
ER -
TY - JOUR
T1 - Dynamic-CoMPI: Dynamic optimization techniques for MPI parallel applications.
JF - The Journal of Supercomputing
Y1 - 2010
A1 - Rosa Filgueira
A1 - Jesús Carretero
A1 - David E. Singh
A1 - Alejandro Calderón
A1 - Alberto Núñez
KW - Adaptive systems
KW - Cluster architectures
KW - Collective I/O
KW - Compression algorithms
KW - Heuristics
KW - MPI library
KW - Parallel techniques
AB - This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techniques in order to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other. The first technique is an optimization of the Two-Phase collective I/O technique from ROMIO, called Locality-Aware strategy for Two-Phase I/O (LA-Two-Phase I/O). In order to increase the locality of the file accesses, LA-Two-Phase I/O employs the Linear Assignment Problem (LAP) to find an optimal I/O data communication schedule. The main purpose of this technique is to reduce the number of communications involved in the collective I/O operation. The second technique, called Adaptive-CoMPI, is based on run-time compression of the MPI messages exchanged by applications. Both techniques can be applied to any application, because both of them are transparent to the users. Dynamic-CoMPI has been validated using several MPI benchmarks and real HPC applications. The results show that, for many of the considered scenarios, significant reductions in the execution time are achieved by reducing the size and the number of the messages. Additional benefits of our approach are the reduction of the total communication time and of network contention, thus enhancing not only performance but also scalability.
PB - Springer
ER -
TY - CONF
T1 - CoMPI: Enhancing MPI-Based Applications Performance and Scalability Using Run-Time Compression.
T2 - EuroPVM/MPI 2009, Espoo, Finland, September 2009
Y1 - 2009
A1 - Rosa Filgueira
A1 - David E. Singh
A1 - Alejandro Calderón
A1 - Jesús Carretero
AB - This paper presents an optimization of MPI communications, called CoMPI, based on run-time compression of the MPI messages exchanged by applications. A broad set of compression algorithms has been fully implemented and tested for both MPI collective and point-to-point primitives. In addition, this paper presents a study of several compression algorithms that can be used for run-time compression, based on the datatypes used by applications. This study has been validated using several MPI benchmarks and real HPC applications. The results show that, in most cases, using compression reduces the application communication time, enhancing application performance and scalability. In this way, CoMPI obtains important improvements in the overall execution time for many of the considered scenarios.
PB - Springer
CY - Espoo, Finland
VL - 5759/2009
ER -