CascadeServe: Unlocking Model Cascades for Inference Serving

Bibliographic Details
Title: CascadeServe: Unlocking Model Cascades for Inference Serving
Authors: Kossmann, Ferdi; Wu, Ziniu; Turk, Alex; Tatbul, Nesime; Cao, Lei; Madden, Samuel
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Machine Learning
Description: Machine learning (ML) models are increasingly deployed to production, calling for efficient inference serving systems. Efficient inference serving is complicated by two challenges: (i) ML models incur high computational costs, and (ii) the request arrival rates of practical applications have frequent, high, and sudden variations, which makes it hard to provision hardware correctly. Model cascades are positioned to tackle both of these challenges, as they (i) save work while maintaining accuracy, and (ii) expose a high-resolution trade-off between work and accuracy, allowing fine-grained adaptation to request arrival rates. Despite their potential, model cascades have not been used inside an online serving system, which poses its own set of challenges, including workload adaptation, model replication onto hardware, inference scheduling, request batching, and more. In this work, we propose CascadeServe, which automates and optimizes end-to-end inference serving with cascades. CascadeServe operates in an offline phase and an online phase. In the offline phase, the system pre-computes a gear plan that specifies how to serve inferences online. In the online phase, the gear plan allows the system to serve inferences while making near-optimal adaptations to the query load at negligible decision overhead. We find that CascadeServe saves 2-3x in cost across a wide spectrum of the latency-accuracy space when compared to state-of-the-art baselines on different workloads.
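The cascade mechanism the description relies on can be illustrated with a minimal sketch: route each query through a cheap model first and escalate to an expensive model only when the cheap model is unsure. Everything below is a hypothetical stand-in for illustration, not CascadeServe's code or interface; the random linear classifiers, the softmax confidence measure, and the threshold value are assumptions, with the threshold standing in for the kind of work/accuracy knob a gear plan would tune.

    """Minimal two-stage model cascade (illustrative sketch, not CascadeServe)."""
    import numpy as np

    rng = np.random.default_rng(0)
    NUM_CLASSES = 10

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Hypothetical stand-in models: random linear classifiers of different cost.
    W_small = rng.standard_normal((16, NUM_CLASSES))  # cheap, less accurate
    W_large = rng.standard_normal((16, NUM_CLASSES))  # expensive, more accurate

    def cascade_predict(x, threshold=0.8):
        """Return (predicted class, stage that answered) for one feature vector."""
        probs = softmax(x @ W_small)      # stage 1: run the cheap model
        if probs.max() >= threshold:      # confident enough: stop early, save work
            return int(probs.argmax()), "small"
        probs = softmax(x @ W_large)      # stage 2: escalate to the expensive model
        return int(probs.argmax()), "large"

    if __name__ == "__main__":
        queries = rng.standard_normal((1000, 16))
        stages = [cascade_predict(q)[1] for q in queries]
        print("fraction served by the cheap model:",
              stages.count("small") / len(stages))

Lowering the threshold makes more queries stop at the cheap model, shedding work at some accuracy cost; raising it does the opposite. This per-query knob is what gives cascades the high-resolution work/accuracy trade-off the abstract highlights.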
Comment: 17 pages, 13 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.14424
Accession Number: edsarx.2406.14424
Database: arXiv