FPGA-accelerated machine learning inference as a service for particle physics computing

Bibliographic Details
Title: FPGA-accelerated machine learning inference as a service for particle physics computing
Authors: Duarte, Javier; Harris, Philip; Hauck, Scott; Holzman, Burt; Hsu, Shih-Chieh; Jindariani, Sergo; Khan, Suffian; Kreis, Benjamin; Lee, Brian; Liu, Mia; Lončar, Vladimir; Ngadiuba, Jennifer; Pedro, Kevin; Perez, Brandon; Pierini, Maurizio; Rankin, Dylan; Tran, Nhan; Trahms, Matthew; Tsaris, Aristeidis; Versteeg, Colin; Way, Ted W.; Werran, Dustin; Wu, Zhenbin
Source: Comput Softw Big Sci (2019) 3: 13
Publication Year: 2019
Collection: High Energy Physics - Experiment; Physics (Other)
Subject Terms: Physics - Data Analysis, Statistics and Probability; High Energy Physics - Experiment; Physics - Computational Physics; Physics - Instrumentation and Detectors
Description: New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600-700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.
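The service pattern the abstract describes, many CPU worker processes each sending single-image (batch-of-one) requests to one remote FPGA inference service and timing the round trip, can be sketched with a minimal client. This is an illustrative assumption, not the authors' software: the endpoint SERVICE_URL, the JSON payload layout, and the response fields are hypothetical stand-ins for whatever protocol a real service would expose.

# Minimal sketch of "inference as a service": a CPU-side client ships one
# preprocessed image per request to a remote accelerator service and measures
# the round-trip latency. Endpoint, payload, and response are hypothetical.
import time

import numpy as np
import requests

SERVICE_URL = "http://fpga-service.example.org:8000/predict"  # hypothetical endpoint

def classify(image: np.ndarray) -> dict:
    """Send a single 224x224x3 image (batch of one) and return the response."""
    payload = {
        "shape": list(image.shape),
        "data": image.astype("float32").ravel().tolist(),
    }
    start = time.perf_counter()
    response = requests.post(SERVICE_URL, json=payload, timeout=5.0)
    response.raise_for_status()
    result = response.json()
    result["latency_ms"] = (time.perf_counter() - start) * 1e3
    return result

if __name__ == "__main__":
    jet_image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed jet image
    print(classify(jet_image))

Because each request carries a batch of one, throughput in this model comes from many such clients sharing the single accelerator, which is the source of the 600-700 inferences per second figure quoted above.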
Comment: 16 pages, 14 figures, 2 tables
Document Type: Working Paper
DOI: 10.1007/s41781-019-0027-2
Access URL: http://arxiv.org/abs/1904.08986
Accession Number: edsarx.1904.08986
Database: arXiv