Submitted: 02 April 2025
Posted: 07 April 2025
Abstract
Keywords:
1. Introduction
2. Materials and Methods
2.1. Deep Belief Networks (DBNs) and Restricted Boltzmann Machines (RBMs)
Restricted Boltzmann Machines (RBMs)
An RBM consists of:

- A visible layer of $m$ units, $\mathbf{v} \in \{0,1\}^m$, representing the input data. The notation $\mathbf{v} \in \{0,1\}^m$ means that $\mathbf{v}$ is a binary vector of dimension $m$, that is, a set of $m$ elements where each element can be either 0 or 1.
- A hidden layer of $n$ units, $\mathbf{h} \in \{0,1\}^n$, representing the learned representations of the input data.
- $v_i$ and $h_j$ are the activations (0 or 1) of the visible and hidden units. In an RBM there are no connections between units within the same layer; the units of the visible and hidden layers are connected by a weight matrix $\mathbf{W}$.

The corresponding energy-based formulation is sketched below.
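Using visible biases $a_i$, hidden biases $b_j$, and weights $w_{ij}$ (this notation is assumed here, since the original symbols are not reproduced above), the standard RBM formulation that the list describes is:

```latex
% Energy of a joint configuration (v, h) and the induced probability
\[
E(\mathbf{v},\mathbf{h}) = -\sum_{i=1}^{m} a_i v_i - \sum_{j=1}^{n} b_j h_j - \sum_{i=1}^{m}\sum_{j=1}^{n} v_i\, w_{ij}\, h_j,
\qquad
p(\mathbf{v},\mathbf{h}) = \frac{e^{-E(\mathbf{v},\mathbf{h})}}{Z}.
\]

% No intra-layer connections, so the conditionals factorize into logistic units
\[
p(h_j = 1 \mid \mathbf{v}) = \sigma\Big(b_j + \sum_{i=1}^{m} v_i w_{ij}\Big),
\qquad
p(v_i = 1 \mid \mathbf{h}) = \sigma\Big(a_i + \sum_{j=1}^{n} w_{ij} h_j\Big).
\]
```

Here $Z$ is the partition function and $\sigma$ is the logistic sigmoid; these factorized conditionals are what contrastive-divergence training samples from.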
3. Results
- Ambient data: Class 1, comprising 14864 measurements recorded before retrofitting, and Class 2, consisting of 16272 measurements taken afterward.
- Train data: Class 1, comprising 4232 measurements recorded before retrofitting, and Class 2, consisting of 4968 measurements taken afterward.

A minimal sketch of how the two classes of a dataset can be assembled into a labeled feature matrix is given below.
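In this sketch only the per-class measurement counts come from the description above; the feature dimension and the random placeholder arrays are assumptions, and the train-passage data would be handled identically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder arrays standing in for the real feature matrices (one row per
# measurement, e.g., a PSD feature vector). The feature dimension 50 is assumed.
# Class 1 = before retrofitting, Class 2 = after retrofitting.
ambient_before = rng.random((14864, 50))
ambient_after = rng.random((16272, 50))

# Stack both classes and build the binary label vector used by the classifiers.
X_ambient = np.vstack([ambient_before, ambient_after])
y_ambient = np.concatenate([
    np.zeros(len(ambient_before), dtype=int),   # Class 1 -> label 0
    np.ones(len(ambient_after), dtype=int),     # Class 2 -> label 1
])
```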
3.1. Discriminatory Frequencies for Ambient and Train Data
3.2. Unsupervised PCA Analysis and t-SNE
3.3. Unsupervised K-Means Classifier for Train Data
3.4. Random Forest Classifier
3.5. Deep Belief Network (DBN) Classifier
3.5.1. Data Loading and Preprocessing
3.5.2. Hyperparameter Search
- Number of Hidden Units: The number of hidden units (neurons) in the RBM determines the capacity of the model to learn complex features from the input data. Using too few hidden units may prevent the model from capturing the full complexity of the data. On the other hand, using too many hidden units can lead to overfitting and make the training process slower. Therefore, it is necessary to strike a balance between the complexity of the model and the fitting of the data.
- Learning Rate: This controls the step size at each iteration while updating the weights, i.e., how much the model adjusts its weights based on the gradient. If the learning rate is too high, the training may become unstable or diverge. On the other hand, if the learning rate is too low, it can result in slow convergence or cause the model to get stuck in local minima. Typically, a small value is used, although the appropriate choice varies with the specific problem.
- Number of Training Epochs: This defines how many times the algorithm will iterate over the entire dataset. Using too few epochs can lead to underfitting, where the model fails to learn sufficient patterns from the data. Conversely, using too many epochs can cause overfitting, where the model memorizes the data instead of generalizing effectively.
- Batch Size: This refers to the number of samples processed in each training step. Smaller batches can provide a more precise gradient estimate, but they make the training process noisier. Larger batches offer smoother gradients but can result in slower convergence and require more memory. The adopted solution consists of using cross-validation to evaluate how well the model generalizes to unseen data.
The search space covered the following settings:

- Model complexity
  - Number of layers:
  - Number of units per layer:
- Learning rate:
- Number of training epochs
  - Epochs per RBM layer:
  - Epochs per DBN layer:
- Batch size:

Cross-validation experiments were conducted over several data partitions. This analysis showed that the optimal configuration was (an illustrative training sketch follows the list):

- Number of layers: 4
- Number of units per layer:
- Learning rate:
- Epochs per RBM layer: 10
- Epochs per DBN layer: 100
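The following is a minimal sketch of greedy layer-wise pretraining followed by a supervised read-out, built from scikit-learn's BernoulliRBM. It is not the authors' implementation: the hidden-layer sizes and learning rate below are placeholders (those values are not given above), and only the layer count and the per-RBM epochs follow the reported optimal configuration.

```python
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

HIDDEN_UNITS = [256, 128, 64, 32]   # four hidden layers; sizes are assumed placeholders
LEARNING_RATE = 0.01                # assumed placeholder
EPOCHS_PER_RBM = 10                 # matches the reported optimal configuration

def build_dbn_pipeline():
    """Stack RBMs greedily (each one trains on the previous layer's activations),
    then attach a logistic-regression classifier as the supervised stage."""
    steps = [(f"rbm{i}", BernoulliRBM(n_components=n,
                                      learning_rate=LEARNING_RATE,
                                      n_iter=EPOCHS_PER_RBM,
                                      random_state=0))
             for i, n in enumerate(HIDDEN_UNITS)]
    steps.append(("clf", LogisticRegression(max_iter=1000)))
    return Pipeline(steps)

# X scaled to [0, 1], y in {0, 1}; five-fold cross-validated accuracy:
# scores = cross_val_score(build_dbn_pipeline(), X, y, cv=5, scoring="accuracy")
```

The reported 100 DBN epochs have no direct counterpart in this sketch; in a full DBN they would correspond to supervised fine-tuning of the stacked network with backpropagation, which is approximated here by the logistic-regression read-out.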
3.5.3. Cross-Validation Results with Ambient and Train Data Using DBNs
The cross-validation experiments used the following training and testing sizes (a per-fold evaluation sketch is given after the list):

- Ambient data:
  - Training size:
  - Testing size:
- Train data:
  - Training size:
  - Testing size:
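As a minimal sketch of how fold-wise accuracies like those tabulated for this section could be obtained (the stratification, shuffling, and seeding choices here are assumptions, not the authors' protocol):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def per_fold_accuracies(make_model, X, y, n_splits=5, seed=0):
    """Train a fresh model on each fold and return the test accuracy of every fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in skf.split(X, y):
        model = make_model()          # e.g., build_dbn_pipeline() from the sketch above
        model.fit(X[train_idx], y[train_idx])
        y_pred = model.predict(X[test_idx])
        accuracies.append(accuracy_score(y[test_idx], y_pred))
    return np.array(accuracies)
```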
3.5.4. Robustness to Noise
- Both graphs show a decreasing trend in accuracy as the noise level increases, indicating that the model's performance deteriorates under noisier conditions. This decline is expected, as higher noise levels introduce more uncertainty, making it harder for the model to maintain high accuracy.
- The Ambient data accuracy fluctuates slightly, with a noticeable dip over an intermediate range of noise levels, followed by a small increase before continuing to decline.
- The Train data appears more robust to noise, with a smoother and more gradual decline in accuracy. Even at higher noise levels, the performance does not degrade as sharply as for the Ambient data.
- The accuracy of the Train data remains relatively stable at first, with a slight fluctuation at intermediate noise levels, where it briefly increases before gradually declining. The decrease is steady, but a more noticeable drop occurs at higher noise levels, after which the accuracy stabilizes slightly.
- The Ambient data graph (left) exhibits a steeper decline in accuracy than the Train data graph (right). At the highest noise level, the accuracy for the Ambient data drops to its lowest value, whereas the Train data maintains a slightly higher accuracy. An illustrative noise-injection evaluation is sketched after this list.
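A sketch of one possible noise-robustness evaluation, under the assumption of additive Gaussian noise on the test features (the exact noise model and noise levels used in the study are not specified in the text above):

```python
import numpy as np

def accuracy_under_noise(model, X_test, y_test, noise_levels, seed=0):
    """Evaluate a fitted classifier on test data corrupted by additive Gaussian noise.

    noise_levels are standard deviations on the scale of the (normalized) features;
    returns a dict mapping noise level -> accuracy."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in noise_levels:
        X_noisy = X_test + rng.normal(0.0, sigma, size=X_test.shape)
        results[sigma] = float((model.predict(X_noisy) == y_test).mean())
    return results

# Example usage (model, X_test, y_test assumed to exist):
# results = accuracy_under_noise(model, X_test, y_test,
#                                noise_levels=np.linspace(0.0, 0.5, 11))
```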
4. Discussion
5. Conclusions
Author Contributions
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| SHM | Structural Health Monitoring |
| DBN | Deep Belief Network |
| RBM | Restricted Boltzmann Machine |
| PCA | Principal Component Analysis |
| t-SNE | t-Distributed Stochastic Neighbor Embedding |
| PSD | Power Spectral Density |
| ROC | Receiver Operating Characteristic |
| CV | Cross-Validation |
| RF | Random Forest |
| SNR | Signal-to-Noise Ratio |
| ML | Machine Learning |
| AI | Artificial Intelligence |
| DL | Deep Learning |
References
| Fold | Ambient Data Accuracy | Train Data Accuracy |
|---|---|---|
| Fold 1 | 0.7834 | 0.7766 |
| Fold 2 | 0.7965 | 0.7717 |
| Fold 3 | 0.8670 | 0.7707 |
| Fold 4 | 0.8381 | 0.7723 |
| Fold 5 | 0.8169 | 0.7832 |
| Fold | Ambient Data Accuracy | Train Data Accuracy |
|---|---|---|
| Fold 1 | 0.9878 | 0.9668 |
| Fold 2 | 0.9746 | 0.9707 |
| Fold 3 | 0.9883 | 0.9712 |
| Fold 4 | 0.9606 | 0.9707 |
| Fold 5 | 0.9905 | 0.9685 |
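Averaged over the five folds (values computed directly from the tables above), the first table corresponds to mean accuracies of approximately 0.820 for the Ambient data and 0.775 for the Train data, while the second corresponds to approximately 0.980 and 0.970, respectively.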
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).