Status | Forthcoming
UELLM: A Unified and Efficient Approach for Large Language Model Inference Serving
He, Yiyuan1,2; Xu, Minxian1; Wu, Jingfeng1; Zheng, Wanyi3; Ye, Kejiang1; Xu, Chengzhong4
2025
Conference Name | 22nd International Conference on Service-Oriented Computing, ICSOC 2024 |
Source Publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume | 15404 LNCS |
Pages | 218-235 |
Conference Date | 3 December 2024 to 6 December 2024 |
Conference Place | Tunis, Tunisia
Publisher | Springer Science and Business Media Deutschland GmbH |
Abstract | In the context of Machine Learning as a Service (MLaaS) clouds, the extensive use of Large Language Models (LLMs) often requires efficient management of significant query loads. When providing real-time inference services, several challenges arise. First, increasing the number of GPUs may decrease inference speed due to heightened communication overhead, while an inadequate number of GPUs can lead to out-of-memory errors. Second, different deployment strategies need to be evaluated to guarantee optimal utilization and minimal inference latency. Finally, inefficient orchestration of inference queries can easily lead to significant Service Level Objective (SLO) violations. To address these challenges, we propose a Unified and Efficient approach for Large Language Model inference serving (UELLM), which consists of three main components: 1) resource profiler, 2) batch scheduler, and 3) LLM deployer. The resource profiler characterizes the resource usage of inference queries by predicting resource demands with a fine-tuned LLM. The batch scheduler batches the profiled queries using batching algorithms that aim to decrease inference latency while meeting SLOs and enabling efficient batch processing. The LLM deployer deploys LLMs efficiently by considering the current cluster hardware topology and LLM characteristics, enhancing resource utilization and reducing resource overhead. UELLM minimizes resource overhead, reduces inference latency, and lowers SLO violation rates. Compared with state-of-the-art (SOTA) techniques, UELLM reduces inference latency by 72.3% to 90.3%, enhances GPU utilization by 1.2× to 4.1×, and increases throughput by 1.92× to 4.98×, while serving without violating the inference latency SLO. |
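The batch scheduler described in the abstract groups profiled queries so that batching gains do not push any query past its latency SLO. The paper's actual algorithms are not reproduced in this record; the Python sketch below is only a rough illustration of that idea, under assumed names (ProfiledQuery, form_batches) and an assumed linear latency cost model, neither of which comes from the paper.

```python
# Hypothetical sketch (not UELLM's actual algorithm): a greedy SLO-aware
# batcher that packs profiled queries while the estimated batch latency
# still satisfies every member's SLO.
from dataclasses import dataclass
from typing import List


@dataclass
class ProfiledQuery:
    query_id: str
    est_output_tokens: int   # demand predicted by a resource profiler (assumed)
    slo_ms: float            # end-to-end latency objective for this query


def estimate_batch_latency_ms(queries: List[ProfiledQuery],
                              per_token_ms: float = 2.0,
                              per_batch_overhead_ms: float = 15.0) -> float:
    # Assumed cost model: decoding time is bounded by the longest sequence in the batch.
    longest = max(q.est_output_tokens for q in queries)
    return per_batch_overhead_ms + longest * per_token_ms


def form_batches(queries: List[ProfiledQuery],
                 max_batch_size: int = 8) -> List[List[ProfiledQuery]]:
    """Greedily pack queries (tightest SLO first) while the estimated
    latency of the batch still meets every member's SLO."""
    pending = sorted(queries, key=lambda q: q.slo_ms)
    batches: List[List[ProfiledQuery]] = []
    while pending:
        batch = [pending.pop(0)]
        i = 0
        while i < len(pending) and len(batch) < max_batch_size:
            candidate = batch + [pending[i]]
            if all(estimate_batch_latency_ms(candidate) <= q.slo_ms for q in candidate):
                batch.append(pending.pop(i))
            else:
                i += 1
        batches.append(batch)
    return batches


if __name__ == "__main__":
    demo = [ProfiledQuery("q1", 64, 300.0),
            ProfiledQuery("q2", 512, 2000.0),
            ProfiledQuery("q3", 128, 400.0)]
    for b in form_batches(demo):
        print([q.query_id for q in b], f"~{estimate_batch_latency_ms(b):.0f} ms")
```

Sorting by the tightest SLO first keeps short-deadline queries from being delayed behind long-running batchmates; UELLM's own scheduler may well use a different policy and cost model.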
Keyword | Cloud Computing; Large Language Model Inference; Resource Management; Scheduling Algorithm
DOI | 10.1007/978-981-96-0805-8_16 |
Language | English
Scopus ID | 2-s2.0-85212921991 |
Document Type | Conference paper |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Affiliation | 1. Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; 2. Southern University of Science and Technology, Shenzhen, China; 3. Shenzhen University of Advanced Technology, Shenzhen, China; 4. State Key Lab of IoTSC, University of Macau, Macao
Recommended Citation GB/T 7714 | He, Yiyuan, Xu, Minxian, Wu, Jingfeng, et al. UELLM: A Unified and Efficient Approach for Large Language Model Inference Serving[C]. Springer Science and Business Media Deutschland GmbH, 2025: 218-235.
APA | He, Y., Xu, M., Wu, J., Zheng, W., Ye, K., & Xu, C. (2025). UELLM: A Unified and Efficient Approach for Large Language Model Inference Serving. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 15404 LNCS, 218-235.