Scalability of 3D-DFT by block tensor-matrix multiplication on the JUWELS Cluster

Bibliographic Details
Title: Scalability of 3D-DFT by block tensor-matrix multiplication on the JUWELS Cluster
Authors: Malapally, Nitin; Bolnykh, Viacheslav; Suarez, Estela; Carloni, Paolo; Lippert, Thomas; Mandelli, Davide
Publication Year: 2023
Collection: Computer Science; Physics (Other)
Subject Terms: Physics - Computational Physics; Computer Science - Distributed, Parallel, and Cluster Computing
Description: The 3D Discrete Fourier Transform (DFT) is a technique used to solve problems in disparate fields. Nowadays, the commonly adopted implementation of the 3D-DFT is derived from the Fast Fourier Transform (FFT) algorithm. However, evidence indicates that the distributed-memory 3D-FFT algorithm does not scale well because of its reliance on all-to-all communication. Here, building on the work of Sedukhin et al. [Proceedings of the 30th International Conference on Computers and Their Applications, CATA 2015, pp. 193-200 (2015)], we revisit the possibility of improving the scaling of the 3D-DFT with an alternative approach that uses point-to-point communication, albeit at a higher arithmetic complexity. The new algorithm exploits tensor-matrix multiplications on a volumetrically decomposed domain via three specially adapted variants of Cannon's algorithm. It has been implemented as a C++ library called S3DFT and tested on the JUWELS Cluster at the Jülich Supercomputing Center. Our implementation of the shared-memory tensor-matrix multiplication attained 88% of the theoretical single-node peak performance. One variant of the distributed-memory tensor-matrix multiplication shows excellent scaling, while the other two show poorer performance, which can be attributed to their intrinsic communication patterns. A comparison of S3DFT with the Intel MKL and FFTW3 libraries indicates that Intel MKL currently performs best overall, followed in order by FFTW3 and S3DFT. This picture might change with further improvements to the algorithm and/or when running on clusters whose network connections have higher latency, e.g. cloud platforms.
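To make the described approach concrete, the following is a minimal serial sketch (illustrative only, not the S3DFT API; all names are assumptions) showing how a 3D DFT of an N x N x N tensor can be computed as three successive tensor-matrix multiplications, one per mode, with the dense N x N DFT matrix. This is the higher-arithmetic-complexity formulation the abstract refers to: O(N^4) work instead of the O(N^3 log N) of an FFT-based transform.

// Hypothetical serial sketch: 3D DFT via three mode-wise tensor-matrix products.
// Not the S3DFT interface; names and layout are illustrative assumptions.
#include <complex>
#include <cmath>
#include <cstddef>
#include <vector>

using cd = std::complex<double>;
using Tensor = std::vector<cd>;  // flattened, index (i,j,k) -> (i*N + j)*N + k

// Dense N x N DFT matrix, F[k][n] = exp(-2*pi*i*k*n/N).
std::vector<cd> dft_matrix(std::size_t N) {
    std::vector<cd> F(N * N);
    const double w = -2.0 * std::acos(-1.0) / static_cast<double>(N);
    for (std::size_t k = 0; k < N; ++k)
        for (std::size_t n = 0; n < N; ++n)
            F[k * N + n] = std::polar(1.0, w * static_cast<double>(k * n));
    return F;
}

// Multiply the tensor by F along one mode (0, 1 or 2): O(N^4) per mode.
Tensor mode_multiply(const Tensor& X, const std::vector<cd>& F,
                     std::size_t N, int mode) {
    Tensor Y(N * N * N, cd{0.0, 0.0});
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j)
            for (std::size_t k = 0; k < N; ++k)
                for (std::size_t n = 0; n < N; ++n) {
                    // Contract index n of F against the 'mode'-th index of X.
                    const std::size_t a = (mode == 0) ? n : i;
                    const std::size_t b = (mode == 1) ? n : j;
                    const std::size_t c = (mode == 2) ? n : k;
                    const std::size_t m = (mode == 0) ? i : (mode == 1) ? j : k;
                    Y[(i * N + j) * N + k] += F[m * N + n] * X[(a * N + b) * N + c];
                }
    return Y;
}

// Full 3D DFT: by separability, apply F along each of the three modes in turn.
Tensor dft3d(Tensor X, std::size_t N) {
    const auto F = dft_matrix(N);
    for (int mode = 0; mode < 3; ++mode)
        X = mode_multiply(X, F, N, mode);
    return X;
}

Each mode product is a dense contraction over a single tensor index, which is what allows a distributed version to rely on Cannon-style point-to-point block shifts over the volumetric decomposition rather than the all-to-all transposes of a distributed FFT.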
Comment: 18 pages, 8 figures
Document Type: Working Paper
DOI: 10.1016/j.jpdc.2024.104945
Access URL: http://arxiv.org/abs/2303.13337
Accession Number: edsarx.2303.13337
Database: arXiv