MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs

Bibliographic Details
Title: MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs
Authors: Nguyen, Quang H., Hoang, Duy C., Decugis, Juliette, Manchanda, Saurav, Chawla, Nitesh V., Doan, Khoa D.
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
Description: The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas. These LLMs come with different abilities and costs in terms of computation or pricing. Since the demand for each query can vary, e.g., because of the queried domain or its complexity, defaulting to one LLM in an application is not usually the best choice, whether it is the biggest, priciest, or even the one with the best average test performance. Consequently, picking the right LLM that is both accurate and cost-effective for an application remains a challenge. In this paper, we introduce MetaLLM, a framework that dynamically and intelligently routes each query to the optimal LLM (among several available LLMs) for classification tasks, achieving significantly improved accuracy and cost-effectiveness. By framing the selection problem as a multi-armed bandit, MetaLLM balances prediction accuracy and cost efficiency under uncertainty. Our experiments, conducted on popular LLM platforms such as OpenAI's GPT models, Amazon's Titan, Anthropic's Claude, and Meta's LLaMa, showcase MetaLLM's efficacy in real-world scenarios, laying the groundwork for future extensions beyond classification tasks.
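The abstract frames LLM selection as a multi-armed bandit that trades off accuracy against cost. A minimal sketch of that idea is an epsilon-greedy bandit whose arms are candidate LLMs and whose reward penalizes cost; the class name, the reward form `accuracy - cost_weight * cost`, and all parameters below are illustrative assumptions, not MetaLLM's actual formulation.

```python
import random

class BanditRouter:
    """Epsilon-greedy multi-armed bandit over candidate LLMs (illustrative sketch).

    Reward trades off correctness against query cost:
        reward = accuracy - cost_weight * cost
    This reward form is an assumption; the paper's formulation may differ.
    """

    def __init__(self, llm_names, epsilon=0.1, cost_weight=0.5):
        self.llm_names = list(llm_names)
        self.epsilon = epsilon
        self.cost_weight = cost_weight
        self.counts = {name: 0 for name in self.llm_names}
        self.values = {name: 0.0 for name in self.llm_names}  # running mean reward

    def select(self):
        # Explore a random LLM with probability epsilon, otherwise
        # exploit the one with the best estimated reward so far.
        if random.random() < self.epsilon:
            return random.choice(self.llm_names)
        return max(self.llm_names, key=lambda n: self.values[n])

    def update(self, name, accuracy, cost):
        # Fold the observed reward into the running mean for this arm.
        reward = accuracy - self.cost_weight * cost
        self.counts[name] += 1
        self.values[name] += (reward - self.values[name]) / self.counts[name]
```

With `epsilon=0`, the router purely exploits: after observing one accurate-but-expensive model and one slightly-less-accurate-but-cheap model, it routes to whichever has the higher cost-adjusted reward estimate.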
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.10834
Accession Number: edsarx.2407.10834
Database: arXiv