Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Inference Acceleration for Large Language Models on CPUs

Version 1 : Received: 28 February 2024 / Approved: 29 February 2024 / Online: 29 February 2024 (08:21:32 CET)

How to cite: PS, D.; VG, J. Inference Acceleration for Large Language Models on CPUs. Preprints 2024, 2024021702. https://doi.org/10.20944/preprints202402.1702.v1

Abstract

In recent years, large language models have demonstrated remarkable performance across various natural language processing (NLP) tasks. However, deploying these models in real-world applications often requires efficient inference solutions to handle the computational demands. In this paper, we explore the use of CPUs to accelerate the inference of large language models. Specifically, we introduce a parallelized approach that enhances throughput by (1) exploiting the parallel processing capabilities of modern CPU architectures and (2) batching inference requests. Our evaluation shows that the accelerated inference engine delivers an 18-22x improvement in generated tokens per second, with larger gains for longer sequences and larger models. In addition, multiple workers can be run on the same machine with NUMA node isolation to further improve tokens/s; as shown in Table 2, we obtain an additional 4x improvement with 4 workers. This would also make Gen-AI-based products and companies more environmentally friendly: our estimates show that using CPUs for inference could reduce the power consumption of LLMs by 48.9% (1252 W for an A100 with AMD EPYC 7V13 vs. 613 W for an Intel® Xeon® Gold 6538N) while providing production-ready throughput and latency.
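To illustrate the multi-worker NUMA-isolation setup mentioned above, the following is a minimal sketch that launches one CPU inference worker per NUMA node via `numactl`, so each worker's threads and memory allocations stay local to its node. The worker command (`serve_llm.py`), the port scheme, and the node count are hypothetical placeholders and not the paper's actual tooling.

```python
# Hypothetical sketch: one inference worker per NUMA node, pinned with numactl
# so CPU threads and memory allocations remain node-local and avoid
# cross-socket memory traffic that degrades token throughput.
import subprocess

NUM_NUMA_NODES = 4                        # assumed node count for this example
WORKER_CMD = ["python", "serve_llm.py"]   # hypothetical inference worker script

procs = []
for node in range(NUM_NUMA_NODES):
    # --cpunodebind / --membind restrict the worker to a single NUMA node.
    cmd = [
        "numactl",
        f"--cpunodebind={node}",
        f"--membind={node}",
        *WORKER_CMD,
        "--port", str(8000 + node),       # one endpoint per worker
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```

Requests can then be load-balanced across the per-node endpoints, which is one way the additional multi-worker throughput reported in Table 2 could be realized in practice.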

Keywords

LLM; Inference; CPU optimization; Intel Xeon

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
