Preprint
Article

This version is not peer-reviewed.

Small Language Models: Architecture, Evolution, and the Future of Artificial Intelligence

Submitted: 12 January 2026

Posted: 13 January 2026


Abstract
Large language models (LLMs) have significantly advanced artificial intelligence, yet their high computational, energy, and privacy costs pose substantial challenges. In contrast, Small Language Models (SLMs), typically with fewer than 15 billion parameters, have emerged as efficient alternatives. This survey provides a comprehensive analysis of the SLM landscape, tracing their evolution and examining architectural innovations that enhance efficiency. A novel multi-axis taxonomy is introduced to classify SLMs by genesis, architecture, and optimization goals, offering a structured framework for this field. Performance benchmarks are reviewed exhaustively, demonstrating that while LLMs excel in broad knowledge tasks, state-of-the-art SLMs match or exceed larger models in domains such as mathematical reasoning and code generation. The analysis concludes that the future of AI lies in hybrid ecosystems, where specialized SLMs manage most tasks locally, escalating complex queries to cloud-based LLMs. This tiered approach promises scalability, privacy, and the democratization of AI.
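The tiered SLM-to-LLM escalation described above can be sketched as a simple confidence-gated router. This is a minimal illustration under stated assumptions, not the paper's implementation: the classes `LocalSLM` and `CloudLLM`, the function `route_query`, and the threshold `CONFIDENCE_THRESHOLD` are all hypothetical names, and the SLM's confidence heuristic is a stand-in for real model inference.

```python
# Hypothetical sketch of tiered SLM/LLM routing: the local small model
# answers first, and low-confidence answers are escalated to a cloud LLM.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for escalation


@dataclass
class Answer:
    text: str
    confidence: float  # model's self-assessed confidence in [0, 1]
    source: str        # "local-slm" or "cloud-llm"


class LocalSLM:
    """Stand-in for an on-device small language model."""

    def generate(self, query: str) -> Answer:
        # A real SLM would run inference here; this toy heuristic
        # treats short queries as easy and long ones as hard.
        conf = 0.95 if len(query.split()) <= 8 else 0.4
        return Answer(f"[SLM] reply to: {query}", conf, "local-slm")


class CloudLLM:
    """Stand-in for a remote large-language-model endpoint."""

    def generate(self, query: str) -> Answer:
        return Answer(f"[LLM] reply to: {query}", 0.99, "cloud-llm")


def route_query(query: str, slm: LocalSLM, llm: CloudLLM) -> Answer:
    """Try the local SLM first; escalate to the cloud LLM when the
    SLM's confidence falls below CONFIDENCE_THRESHOLD."""
    local = slm.generate(query)
    if local.confidence >= CONFIDENCE_THRESHOLD:
        return local  # handled locally: cheap and private
    return llm.generate(query)  # escalated: complex query


if __name__ == "__main__":
    slm, llm = LocalSLM(), CloudLLM()
    print(route_query("What is 2 + 2?", slm, llm).source)
    print(route_query(
        "Prove that the sum of the reciprocals of the primes diverges, "
        "with full epsilon-delta rigor.", slm, llm).source)
```

In a production system the gating signal would come from calibrated model uncertainty, a learned router, or task-type classification rather than query length; the point here is only the control flow of the hybrid, privacy-preserving tier.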
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
