Designing Efficient LLM Accelerators for Edge Devices

Bibliographic Details
Title: Designing Efficient LLM Accelerators for Edge Devices
Authors: Haris, Jude; Saha, Rappy; Hu, Wenhao; Cano, José
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Hardware Architecture, Computer Science - Machine Learning
Description: The increase in open-source availability of Large Language Models (LLMs) has enabled users to deploy them on more and more resource-constrained edge devices to reduce reliance on network connections and provide more privacy. However, the high computation and memory demands of LLMs make their execution on resource-constrained edge devices challenging and inefficient. To address this issue, designing new and efficient edge accelerators for LLM inference is crucial. FPGA-based accelerators are ideal for LLM acceleration due to their reconfigurability, as they enable model-specific optimizations and higher performance per watt. However, creating and integrating FPGA-based accelerators for LLMs (particularly on edge devices) has proven challenging, mainly due to the limited hardware design flows for LLMs in existing FPGA platforms. To tackle this issue, in this paper we first propose a new design platform, named SECDA-LLM, that utilizes the SECDA methodology to streamline the process of designing, integrating, and deploying efficient FPGA-based LLM accelerators for the llama.cpp inference framework. We then demonstrate, through a case study, the potential benefits of SECDA-LLM by creating a new MatMul accelerator that supports block floating point quantized operations for LLMs. Our initial accelerator design, deployed on the PYNQ-Z1 board, reduces latency by 11x (1.7 seconds per token, or ~2 seconds per word) over the dual-core Arm NEON-based CPU execution for the TinyLlama model.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.00462
Accession Number: edsarx.2408.00462
Database: arXiv
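
The abstract describes a MatMul accelerator for block floating point quantized operations within the llama.cpp framework. As a rough software illustration of the general idea (not the paper's actual accelerator design), the C++ sketch below quantizes a block of weights into a simplified Q4_0-like format, where 32 values share one scale, and then computes a dequantized block dot product, i.e. the inner-loop work such a MatMul unit performs. The block size, scale rule, and nibble packing are assumptions borrowed loosely from llama.cpp's Q4_0 layout, not taken from the paper.

```cpp
// Minimal illustrative sketch of block floating point (Q4_0-like) quantization
// and a block dot product. Assumed format: 32 weights per block, one shared
// float scale, two signed 4-bit values packed per byte.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kBlockSize = 32;  // assumed block size (as in llama.cpp Q4_0)

struct BlockQ4 {
    float scale;                 // shared per-block scale
    uint8_t q[kBlockSize / 2];   // two 4-bit weights packed per byte
};

// Quantize kBlockSize floats into one block: each value is mapped to a
// signed 4-bit integer in [-8, 7] relative to the shared scale.
BlockQ4 quantize_block(const float* x) {
    float amax = 0.0f;
    for (int i = 0; i < kBlockSize; ++i) amax = std::max(amax, std::fabs(x[i]));
    BlockQ4 b{};
    b.scale = amax / 7.0f;
    const float inv = (b.scale != 0.0f) ? 1.0f / b.scale : 0.0f;
    for (int i = 0; i < kBlockSize; i += 2) {
        int q0 = static_cast<int>(std::lround(x[i] * inv)) + 8;      // bias to unsigned nibble
        int q1 = static_cast<int>(std::lround(x[i + 1] * inv)) + 8;
        q0 = std::min(std::max(q0, 0), 15);
        q1 = std::min(std::max(q1, 0), 15);
        b.q[i / 2] = static_cast<uint8_t>(q0 | (q1 << 4));
    }
    return b;
}

// Dot product of one quantized weight block with a float activation vector:
// the per-block work a BFP MatMul accelerator would perform in hardware.
float block_dot(const BlockQ4& b, const float* act) {
    float acc = 0.0f;
    for (int i = 0; i < kBlockSize; i += 2) {
        const int q0 = (b.q[i / 2] & 0x0F) - 8;  // undo the nibble bias
        const int q1 = (b.q[i / 2] >> 4) - 8;
        acc += (q0 * act[i] + q1 * act[i + 1]) * b.scale;
    }
    return acc;
}

int main() {
    std::vector<float> w(kBlockSize), a(kBlockSize);
    for (int i = 0; i < kBlockSize; ++i) { w[i] = 0.01f * (i - 16); a[i] = 1.0f; }
    const BlockQ4 b = quantize_block(w.data());
    std::printf("approx block dot = %f\n", block_dot(b, a.data()));
    return 0;
}
```

Sharing one scale across a block keeps storage near 4 bits per weight plus a small per-block overhead, which is the kind of memory reduction that makes buffering LLM weights feasible on small FPGA boards such as the PYNQ-Z1.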