Report
Towards Greener LLMs: Bringing Energy-Efficiency to the Forefront of LLM Inference
| Field | Value |
|---|---|
| Title | Towards Greener LLMs: Bringing Energy-Efficiency to the Forefront of LLM Inference |
| Authors | Stojkovic, Jovan; Choukse, Esha; Zhang, Chaojie; Goiri, Inigo; Torrellas, Josep |
| Publication Year | 2024 |
| Collection | Computer Science |
| Subject Terms | Computer Science - Artificial Intelligence; Computer Science - Hardware Architecture; Computer Science - Distributed, Parallel, and Cluster Computing; C.0; I.2 |
| Description | With the ubiquitous use of modern large language models (LLMs) across industries, inference serving for these models is ever expanding. Given the high compute and memory requirements of modern LLMs, more and more top-of-the-line GPUs are being deployed to serve them. Energy availability has come to the forefront as the biggest challenge for data center expansion to serve these models. In this paper, we present the trade-offs that arise when energy efficiency is made the primary goal of LLM serving under performance SLOs. We show that, depending on the inputs, the model, and the service-level agreements, the LLM inference provider has several knobs available for improving energy efficiency. We characterize the impact of these knobs on latency, throughput, and energy consumption. By exploring these trade-offs, we offer valuable insights into optimizing energy usage without compromising performance, thereby paving the way for sustainable and cost-effective LLM deployment in data center environments. Comment: 6 pages, 15 figures |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2403.20306 |
| Accession Number | edsarx.2403.20306 |
| Database | arXiv |