Frustrated with MPI+Threads? Try MPIxThreads!

Bibliographic Details
Title: Frustrated with MPI+Threads? Try MPIxThreads!
Authors: Zhou, Hui, Raffenetti, Ken, Zhang, Junchao, Guo, Yanfei, Thakur, Rajeev
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Distributed, Parallel, and Cluster Computing
Description: MPI+Threads, embodied by the MPI/OpenMP hybrid programming model, is a parallel programming paradigm where threads are used for on-node shared-memory parallelization and MPI is used for multi-node distributed-memory parallelization. OpenMP provides an incremental approach to parallelize code, while MPI, with its isolated address space and explicit messaging API, affords straightforward paths to obtain good parallel performance. However, MPI+Threads is not an ideal solution. Since MPI is unaware of the thread context, it cannot be used for interthread communication. This results in duplicated efforts to create separate and sometimes nested solutions for similar parallel tasks. In addition, because the MPI library is required to obey message-ordering semantics, mixing threads and MPI via MPI_THREAD_MULTIPLE can easily result in miserable performance due to accidental serializations. We propose a new MPI extension, MPIX Thread Communicator (threadcomm), that allows threads to be assigned distinct MPI ranks within thread parallel regions. The threadcomm extension combines both MPI processes and OpenMP threads to form a unified parallel environment. We show that this MPIxThreads (MPI Multiply Threads) paradigm allows OpenMP and MPI to work together in a complementary way to achieve both cleaner codes and better performance.
Document Type: Working Paper
DOI: 10.1145/3615318.3615320
Access URL: http://arxiv.org/abs/2401.16551
Accession Number: edsarx.2401.16551
Database: arXiv
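
The abstract describes threadcomm at a high level. Below is a minimal usage sketch, assuming the MPIX_Threadcomm_init/start/finish/free calls provided by MPICH's implementation of the extension; exact names and signatures may differ between releases, so treat it as an illustration of the pattern rather than a definitive API reference.

/*
 * Sketch of the MPIxThreads pattern: each OpenMP thread gets its own
 * MPI rank inside a thread communicator. Assumes an MPICH build that
 * ships the MPIX_Threadcomm_* extension.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* Requesting MPI_THREAD_MULTIPLE is the conservative choice here;
     * the required thread level may vary with the implementation. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int nthreads = 4;
    MPI_Comm threadcomm;
    /* Each MPI process contributes `nthreads` ranks to the thread communicator. */
    MPIX_Threadcomm_init(MPI_COMM_WORLD, nthreads, &threadcomm);

    #pragma omp parallel num_threads(nthreads)
    {
        /* Activate the communicator on this thread; from here on the
         * thread has a distinct rank within `threadcomm`. */
        MPIX_Threadcomm_start(threadcomm);

        int rank, size;
        MPI_Comm_rank(threadcomm, &rank);
        MPI_Comm_size(threadcomm, &size);
        printf("Hello from thread-rank %d of %d\n", rank, size);

        /* Ordinary MPI calls (point-to-point, collectives) on threadcomm
         * now cover both inter-thread and inter-process communication. */

        MPIX_Threadcomm_finish(threadcomm);
    }

    MPIX_Threadcomm_free(&threadcomm);
    MPI_Finalize();
    return 0;
}

Built against such an MPICH installation (e.g. mpicc -fopenmp), a run with 2 processes and 4 threads per process should report 8 distinct ranks in the thread communicator, matching the unified parallel environment described in the abstract.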