Submitted: 25 April 2025
Posted: 25 April 2025
Abstract
Keywords:
1. Introduction
2. Literature Review
2.1. Physics-Based Electrochemical Models
2.2. Reduced-Order Models: Equivalent Circuit Models (ECMs)

2.3. Data-Driven Modeling Approaches
- NARX (Nonlinear Auto-Regressive with eXogenous input) networks use tapped delays of past voltages and currents to regress future voltage [10]. These models can effectively capture dynamics and have been used in voltage prediction and SOC estimation [11,12]. However, they struggle with long-term memory and transients unless hybridized or trained with special loss functions to avoid error accumulation in multi-step predictions.
- Koopman Operator-based methods lift nonlinear dynamics into a latent space with approximately linear evolution. Deep Koopman networks employ autoencoders to discover such coordinates, offering a structured model with interpretable linear dynamics [5,6]. This duality—nonlinear mapping to and from a latent linear system—makes them attractive for both prediction and control [15].
- Hybrid LSTM-NARX approaches combine the autoregressive formulation of the NARX architecture [16] with the memory handling of LSTM (Long Short-Term Memory) networks, a class of gated recurrent neural networks capable of learning long-range dependencies [17]. These hybrids have shown promise in SOC estimation and direct voltage-emulation tasks under dynamic drive cycles [7,8]. LSTMs tend to require large datasets, however, and their limited interpretability makes them black-box models from a control perspective. Wei et al. [9,10] showed that such hybrids outperform standalone models by shortening gradient-propagation paths and better capturing battery behavior, especially under dynamic conditions.
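The tapped-delay regression at the heart of the NARX formulation can be illustrated with a minimal sketch. A toy linear ARX system and ordinary least squares stand in for the paper's neural architecture here; all names, lag counts, and coefficients are illustrative, not the authors' setup:

```python
import numpy as np

# NARX-style one-step prediction sketch: regress v[k] on tapped delays
# of past voltages and currents, v[k] ~ f(v[k-1], v[k-2], i[k-1], i[k-2]).
rng = np.random.default_rng(0)
n = 500
i_in = rng.standard_normal(n)            # synthetic current input
v = np.zeros(n)
for k in range(2, n):                    # stable linear ARX "ground truth"
    v[k] = 0.8 * v[k-1] - 0.1 * v[k-2] + 0.5 * i_in[k-1] + 0.2 * i_in[k-2]

# Regression matrix built from tapped delays (plus a bias column)
X = np.column_stack([v[1:-1], v[:-2], i_in[1:-1], i_in[:-2], np.ones(n - 2)])
y = v[2:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # effectively zero here, since the toy system is exactly ARX
```

In the paper's setting the linear regression is replaced by a neural network, and the error-accumulation issue noted above arises when the model's own predictions are fed back into the delay taps over many steps.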
2.4. Focus of This Work
3. Battery Dynamical System
4. NARX-RNN Model
5. Deep Koopman Model
6. LSTM-NARX Model
7. Data Generation and Model Architectures
7.1. Data Generation for Training and Testing Datasets
7.2. Training and Testing Datasets
7.3. Model Implementation and Evaluation
7.4. Performance Evaluation Metrics
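As a hedged sketch of the error metrics used in the results tables (RMSE, MAE, NRMSE), the definitions below follow common conventions; in particular, normalizing NRMSE by the range of the measured voltage is an assumption, since conventions vary:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the true signal (assumed convention)."""
    return rmse(y_true, y_pred) / float(y_true.max() - y_true.min())

# Illustrative voltage trace (values are made up)
y_true = np.array([3.0, 3.2, 3.5, 3.7, 4.0])
y_pred = np.array([3.1, 3.2, 3.4, 3.8, 4.0])
print(rmse(y_true, y_pred), mae(y_true, y_pred), nrmse(y_true, y_pred))
```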
8. Results and Discussion
- Steady-State Accuracy: The NARX-RNN model achieved the highest accuracy in steady-state regions, followed closely by Deep Koopman and LSTM-NARX.
- Transient Accuracy: LSTM-NARX outperformed the others during transient conditions, with Deep Koopman ranking second and NARX-RNN trailing.
- Model Complexity: NARX-RNN had the lowest architectural complexity, followed by Deep Koopman. LSTM-NARX was the most complex due to its recurrent layers and memory mechanisms.
- Interpretability: Deep Koopman provided the most interpretable dynamics through its linear latent space, whereas NARX-RNN and LSTM-NARX behaved as black-box models.
- Prediction Stability: LSTM-NARX and Deep Koopman produced more stable long-horizon predictions, while NARX-RNN exhibited occasional divergence or drift.
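The interpretability point for Deep Koopman rests on lifting nonlinear dynamics into coordinates where they evolve linearly. A classic toy illustration, a discrete analogue of the example popularized by Lusch et al. [6] (the specific constants here are illustrative), shows a nonlinear system becoming exactly linear after lifting:

```python
import numpy as np

# Nonlinear system:  x1' = mu*x1,  x2' = lam*x2 + c*x1**2.
# Lifting to z = [x1, x2, x1**2] makes the dynamics exactly linear: z' = K z.
mu, lam, c = 0.9, 0.5, 1.0
K = np.array([[mu,  0.0, 0.0],
              [0.0, lam, c  ],
              [0.0, 0.0, mu**2]])     # linear Koopman matrix on the lifted state

x = np.array([0.7, -0.3])
z = np.array([x[0], x[1], x[0]**2])   # encoder: hand-crafted lifting
for _ in range(20):
    x = np.array([mu * x[0], lam * x[1] + c * x[0]**2])  # nonlinear step
    z = K @ z                                            # linear lifted step

print(np.allclose(z, [x[0], x[1], x[0]**2]))  # True
```

A deep Koopman network replaces the hand-crafted lifting with a learned autoencoder, but the payoff is the same: a latent matrix K whose eigenvalues can be inspected and used for linear prediction and control.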
9. Conclusions
- NARX-RNN demonstrated superior accuracy in steady-state conditions, making it well-suited for applications involving slow dynamics and computational constraints.
- LSTM-NARX delivered the best overall performance, particularly excelling in transient regions due to its ability to capture long-term dependencies.
- Deep Koopman offered a favorable balance between accuracy and interpretability by learning latent linear dynamics, making it attractive for control-informed applications.
Abbreviations
| BMS | Battery Management System |
| CNN | Convolutional Neural Network |
| ECM | Equivalent Circuit Model |
| HPPC | Hybrid Pulse Power Characterization |
| LSTM | Long Short-Term Memory |
| Li-ion | Lithium-Ion |
| MAE | Mean Absolute Error |
| ML | Machine Learning |
| MPC | Model Predictive Control |
| NARX | Nonlinear AutoRegressive with eXogenous input |
| NRMSE | Normalized Root Mean Square Error |
| OCV | Open-Circuit Voltage |
| RNN | Recurrent Neural Network |
| RMSE | Root Mean Square Error |
| SOC | State of Charge |
| SOH | State of Health |
| UDDS | Urban Dynamometer Driving Schedule |
References
- Chaturvedi, N.; Yang, R.; Qin, Y.; Krüger, M. Algorithms for advanced battery-management systems. IEEE Control Systems Magazine 2010, 30, 49–68.
- Doyle, M.; Fuller, T.F.; Newman, J. Modeling of Galvanostatic Charge and Discharge of the Lithium/Polymer/Insertion Cell. Journal of The Electrochemical Society 1993, 140, 1526–1533.
- Fotouhi, A.; Auger, D.; Propp, K.; Longo, S.; Foster, M. A Review on Electric Vehicle Battery Modelling: From Lithium-Ion toward Lithium–Sulphur. Renewable and Sustainable Energy Reviews 2016, 56, 1008–1021.
- Chen, M.; Rincón-Mora, G.A. Accurate electrical battery model capable of predicting runtime and I-V performance. IEEE Transactions on Energy Conversion 2006, 21, 504–511.
- Choi, H.; McClintock, R.G.; Subramanian, V.R. Koopman Operator-Based Surrogate Modeling for Lithium-Ion Battery Systems. Journal of The Electrochemical Society 2023, 170, 020541.
- Lusch, B.; Kutz, J.N.; Brunton, S.L. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications 2018, 9, 4950.
- Song, Z.; He, H.; Zhang, J.; Ji, C. A hybrid CNN–LSTM model for state of charge estimation of lithium-ion batteries. Journal of Power Sources 2019, 449, 227452.
- Oka, Y.; et al. Battery Emulator Using LSTM for Real-Time Voltage Prediction in EVs. IEEE Transactions on Vehicular Technology 2024, 73, 151–162.
- Wei, Y.; Huang, Y.; Li, J.; Zhang, D. A Hybrid NARX–LSTM Model for Accurate State of Charge Estimation of Lithium-Ion Batteries. Journal of Power Sources 2020, 448, 227400.
- Wei, Y.; Wang, W.; Liu, X. A Hybrid Deep-State Attention Network for Joint SOC and SOH Estimation of Lithium-Ion Batteries. Applied Energy 2023, 345, 120346.
- Plett, G.L. Extended Kalman Filtering for Battery Management Systems of LiPB-Based HEV Battery Packs Part 1. Background. Journal of Power Sources 2004, 134, 252–261.
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Computation 1997, 9, 1735–1780.
- Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Transactions on Neural Networks and Learning Systems 2017, 28, 2222–2232.
| Model | Architecture Type | Neurons / Layers |
|---|---|---|
| NARX-RNN | RNN + Fully Connected | 32 units, 2 layers |
| Deep Koopman | Encoder–Latent–Decoder (Linear) | 32–4 latent dim–32 |
| LSTM-NARX | LSTM + Residual Skip Connection | 16 units, 1 layer |
Steady-state region:

| Model | RMSE (V) | MAE (V) | NRMSE | R² |
|---|---|---|---|---|
| NARX-RNN | 0.00690 | 0.00531 | 0.0205 | 0.9941 |
| Deep Koopman | 0.01730 | 0.01423 | 0.0515 | 0.9631 |
| LSTM-NARX | 0.01805 | 0.01734 | 0.0537 | 0.9599 |
Transient region:

| Model | RMSE (V) | MAE (V) | NRMSE | R² |
|---|---|---|---|---|
| NARX-RNN | 0.04897 | 0.03703 | 0.1515 | 0.5398 |
| Deep Koopman | 0.03101 | 0.02321 | 0.0959 | 0.8155 |
| LSTM-NARX | 0.02225 | 0.01846 | 0.0688 | 0.9050 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).