Submitted: 22 May 2024
Posted: 24 May 2024
Abstract
Keywords:
1. Introduction
2. Identification of Rotorcraft Flight Parameters and Data for MLP Training
2.1. Input and Target Parameters of MLP Model
2.2. Identification of Abnormal Data and Construction of Training and Inference Data
3. MLP Model Structure and Training
1. The output of the previous layer (the input p for the first layer) is used as the layer's input;
2. The inputs of the layer are multiplied by the weights;
3. Biases are added to the results of step 2;
4. The results of step 3 become the inputs to the activation functions;
5. The outputs of the activation functions become the outputs of the layer;
6. The outputs of the last layer become the outputs of the network.
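The layer-by-layer propagation described above can be sketched in a few lines of NumPy. The layer sizes, random weights, and tanh hidden activation here are illustrative placeholders only, not the trained model (which, per the training setup below, uses SELU hidden layers and a linear output layer):

```python
import numpy as np

def forward(p, weights, biases):
    """Propagate input p through an MLP, following the six steps above:
    each layer multiplies its input by the weights, adds the biases,
    applies the activation, and passes the result to the next layer."""
    a = p
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b                 # steps 2-3: weighted sum plus bias
        if i == len(weights) - 1:
            a = z                     # linear output layer
        else:
            a = np.tanh(z)            # illustrative hidden activation
    return a

# Tiny 2-3-1 network with random weights, for illustration only
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
bs = [np.zeros(3), np.zeros(1)]
y = forward(np.array([0.5, -0.2]), Ws, bs)
```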
4. Performance Evaluation of Trained MLP Model
4.1. Performance Evaluation Using Test Data
4.2. Performance Evaluation Using Normal Inference Data
4.3. Performance Evaluation Using Abnormal Inference Data
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kim, O.C.; Kim, J.W.; Lee, J.H.; Jo, S.H.; Huh, S.J.; Shin, S.M. Health and Usage Monitoring System Development for Rotorcraft. KSAS 2016 Fall Conference, Jeju, Republic of Korea, 16-18 Nov. 2016.
- Lee, S.M.; Hwang, J.S. A Way to Perform a Helicopter PFAT by KUH Case Study. Journal of The Korean Society for Aeronautical and Space Sciences 2013, 41, 994-1001. [CrossRef]
- Astridge, D.G. Helicopter Transmissions – Design for Safety and Reliability. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 1989, 203, 123-138. [CrossRef]
- Jung, H.S. A Study for Helicopter's Health and Usage Monitoring System Efficient Operation - Focusing on the KUH-1. Master's Thesis, Kongju National University, Gongju-si, Republic of Korea, Feb. 2012.
- Kim, O.C.; Ryu, J.B. Application of HUMS System for KHP. Autumn Annual Conference of The Institute of Electronics and Information Engineers, Cheongju-si, Republic of Korea, 3 Nov. 2006.
- Sarafanov, M.; Nikitin, N.O.; Kalyuzhnaya, A.V. Automated Data-driven Approach for Gap Filling in the Time Series using Evolutionary Learning. 16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021), Bilbao, Spain, 22-24 Sep. 2021. [CrossRef]
- Lepot, M.; Aubin, J.-B.; Clemens, F.H.L.R. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment. Water 2017, 9, 796. [CrossRef]
- Ding, Z.; Mei, G.; Cuomo, S.; Li, Y.; Xu, N. Comparison of Estimating Missing Values in IoT Time Series Data Using Different Interpolation Algorithms. Int J Parallel Prog 2020, 48, 534-548. [CrossRef]
- Rodrigues, R. Filling in the Gap: A General Method Using Neural Networks. 2010 Computing in Cardiology, Belfast, Northern Ireland, 26-29 Sep. 2010.
- Pascual-Granado, J.; Garrido, R.; Suárez, J.C. MIARMA: A Minimal-Loss Information Method for Filling Gaps in Time Series. A&A 2015, 575, A78 1-8. [CrossRef]
- Coutinho, E.R.C.; da Silva, R.M.; Madeira, J.G.F.; Coutinho, P.R.O.S.; Boloy, R.A.M.; Delgado, A.R.S. Application of Artificial Neural Networks (ANNs) in the Gap Filling of Meteorological Time Series. Revista Brasileira de Meteorologia 2018, 33, 317-328. [CrossRef]
- Bustami, R.; Bessaih, N.; Bong, C.; Suhaili, S. Artificial Neural Network for Precipitation and Water Level Predictions of Bedup River. IAENG International Journal of Computer Science 2007, 34.
- Maqsood, I.; Khan, M.R.; Abraham, A. An Ensemble of Neural Networks for Weather Forecasting. Neural Comput & Applic 2004, 13, 112-122. [CrossRef]
- Olcese, L.E.; Palancar, G.G.; Toselli, B.M. A Method to Estimate Missing AERONET AOD Values based on Artificial Neural Networks. Atmospheric Environment 2015, 113, 140-150. [CrossRef]
- Hagan, M.T.; Demuth, H.B.; Beale, M.H.; Jesús, O.D. Neural Network Design, 2nd ed.; Martin Hagan: Stillwater, U.S., 2014.
- Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd ed.; O’Reilly Media, Inc.: Sebastopol, U.S., 2019.
- Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13-15 May 2010.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7-13 Dec. 2015. [CrossRef]
- Lecun, Y.A.; Bottou, L.; Orr, G.B.; Müller, K.-R. Efficient BackProp. Neural Networks: Tricks of the Trade; Montavon, G., Müller, K.-R., Orr, G.B., Eds.; Springer: Berlin, Germany, 2002; Volume 7700, pp. 9-48. [CrossRef]
- Bengio, Y.; Lecun, Y. Scaling Learning Algorithms toward AI. Large-Scale Kernel Machines; Bottou, L., Chapelle, O., DeCoste, D., Weston, J., Eds.; The MIT Press: Cambridge, U.S., 2007. [CrossRef]
- Kingma, D.P.; Ba, J.L. ADAM: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [CrossRef]
- Jeong, S.H.; Park, E.G.; Cho, J.Y.; Kim, J.H. Development of Automatic Hard Landing Detection Model using Autoencoder. International Journal of Aeronautical and Space Sciences 2023, 24, 778-791. [CrossRef]
- Jeong, S.H.; Lee, K.B.; Ham, J.H.; Kim, J.H.; Cho, J.Y. Estimation of Maximum Strains and Loads in Aircraft Landing using Artificial Neural Network. International Journal of Aeronautical and Space Sciences 2019, 21, 117-132. [CrossRef]

| Type of Data | | Sampling Rate | | | |
|---|---|---|---|---|---|
| | | 1 Hz | 2 Hz | 4 Hz | 8 Hz |
| Discrete | INT | 40 | 3 | 7 | 36 |
| | HEX | 10 | 18 | 11 | 53 |
| Continuous | INT | 21 | 2 | 7 | 11 |
| | FLOAT | 16 | 16 | 22 | 52 |
| Total | | 87 | 39 | 47 | 152 |

| Initialization | Activation functions |
|---|---|
| Glorot | None, tanh, logistic, softmax |
| He | ReLU and variants |
| LeCun | SELU |
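The pairings in the table follow from each scheme's prescribed weight variance: Glorot uses 2/(fan_in + fan_out), He uses 2/fan_in, and LeCun uses 1/fan_in. A minimal NumPy sketch of the normal-distribution variants (the function name and interface here are illustrative, not a library API):

```python
import numpy as np

def init_weights(fan_in, fan_out, scheme, rng):
    """Draw a weight matrix with the variance prescribed by each scheme."""
    var = {
        "glorot": 2.0 / (fan_in + fan_out),  # none, tanh, logistic, softmax
        "he":     2.0 / fan_in,              # ReLU and variants
        "lecun":  1.0 / fan_in,              # SELU
    }[scheme]
    return rng.normal(0.0, np.sqrt(var), size=(fan_out, fan_in))

rng = np.random.default_rng(42)
W = init_weights(80, 80, "lecun", rng)  # e.g., an 80-neuron hidden layer
```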
| Model & Hyperparameters | | Value |
|---|---|---|
| Number of layers | | 5 to 8 |
| Number of neurons in hidden layers | | 30 to 80 |
| Loss function | | MSE |
| Normalization | | StandardScaler |
| Regularization | | Early stopping |
| Patience | | 20 |
| Epochs | | 1,000 |
| Batch size | | 128 |
| Activation functions | Hidden layers | SELU |
| | Output layer | Linear |
| Kernel initializers | Hidden layers | LeCun |
| | Output layer | Glorot |
| Adam optimizer | beta1 | 0.9 |
| | beta2 | 0.999 |
| | epsilon | 1e-7 |
| | Learning rate | 0.001 |
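The Adam settings listed above (beta1 = 0.9, beta2 = 0.999, epsilon = 1e-7, learning rate 0.001) correspond to the standard update rule of Kingma and Ba. A minimal sketch of one parameter update under those settings:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update using the hyperparameters from the table above."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# A single scalar update for illustration
w, m, v = adam_step(np.array(1.0), np.array(0.5), 0.0, 0.0, t=1)
```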
| Layers | Neurons | Number of Model Parameters | EG1 OP | EG1 OT | EG2 OP | EG2 OT | MGB OP | MGB OT | IGB OT | TGB OT | N1 HYD TEMP | N2 HYD TEMP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5 | 30 | 4,960 | 0.986 | 0.982 | 0.984 | 0.984 | 0.911 | 0.962 | 0.955 | 0.931 | 0.959 | 0.955 |
| 6 | 30 | 5,890 | 0.986 | 0.982 | 0.986 | 0.985 | 0.916 | 0.966 | 0.960 | 0.937 | 0.963 | 0.960 |
| 7 | 30 | 6,820 | 0.987 | 0.982 | 0.985 | 0.984 | 0.916 | 0.966 | 0.959 | 0.937 | 0.963 | 0.959 |
| 8 | 30 | 7,750 | 0.987 | 0.984 | 0.984 | 0.986 | 0.922 | 0.968 | 0.963 | 0.943 | 0.966 | 0.963 |
| 5 | 40 | 7,810 | 0.988 | 0.984 | 0.986 | 0.986 | 0.920 | 0.968 | 0.962 | 0.942 | 0.965 | 0.962 |
| 6 | 40 | 9,450 | 0.988 | 0.985 | 0.987 | 0.988 | 0.926 | 0.970 | 0.964 | 0.947 | 0.968 | 0.965 |
| 7 | 40 | 11,090 | 0.988 | 0.985 | 0.988 | 0.987 | 0.931 | 0.972 | 0.967 | 0.950 | 0.970 | 0.967 |
| 5 | 50 | 11,260 | 0.989 | 0.985 | 0.988 | 0.987 | 0.929 | 0.971 | 0.966 | 0.949 | 0.969 | 0.965 |
| 8 | 40 | 12,730 | 0.988 | 0.985 | 0.987 | 0.987 | 0.931 | 0.971 | 0.968 | 0.951 | 0.970 | 0.967 |
| 6 | 50 | 13,810 | 0.989 | 0.986 | 0.989 | 0.988 | 0.935 | 0.973 | 0.969 | 0.953 | 0.972 | 0.969 |
| 5 | 60 | 15,310 | 0.989 | 0.986 | 0.989 | 0.988 | 0.934 | 0.973 | 0.969 | 0.953 | 0.972 | 0.969 |
| 7 | 50 | 16,360 | 0.988 | 0.986 | 0.987 | 0.988 | 0.937 | 0.975 | 0.971 | 0.956 | 0.974 | 0.970 |
| 8 | 50 | 18,910 | 0.989 | 0.987 | 0.989 | 0.989 | 0.942 | 0.976 | 0.973 | 0.958 | 0.975 | 0.972 |
| 6 | 60 | 18,970 | 0.989 | 0.987 | 0.988 | 0.989 | 0.942 | 0.976 | 0.973 | 0.959 | 0.975 | 0.972 |
| 5 | 70 | 19,960 | 0.991 | 0.987 | 0.989 | 0.990 | 0.944 | 0.977 | 0.974 | 0.961 | 0.977 | 0.974 |
| 7 | 60 | 22,630 | 0.989 | 0.988 | 0.990 | 0.990 | 0.946 | 0.977 | 0.975 | 0.962 | 0.978 | 0.975 |
| 6 | 70 | 24,930 | 0.990 | 0.988 | 0.990 | 0.989 | 0.945 | 0.976 | 0.975 | 0.962 | 0.977 | 0.973 |
| 5 | 80 | 25,210 | 0.991 | 0.988 | 0.989 | 0.990 | 0.944 | 0.976 | 0.975 | 0.962 | 0.977 | 0.974 |
| 8 | 60 | 26,290 | 0.990 | 0.988 | 0.990 | 0.989 | 0.945 | 0.977 | 0.975 | 0.964 | 0.979 | 0.976 |
| 7 | 70 | 29,900 | 0.990 | 0.988 | 0.990 | 0.989 | 0.949 | 0.979 | 0.977 | 0.965 | 0.979 | 0.976 |
| 6 | 80 | 31,690 | 0.991 | 0.988 | 0.990 | 0.990 | 0.949 | 0.979 | 0.978 | 0.967 | 0.980 | 0.977 |
| 8 | 70 | 34,870 | 0.990 | 0.988 | 0.988 | 0.990 | 0.950 | 0.979 | 0.977 | 0.967 | 0.980 | 0.978 |
| 7 | 80 | 38,170 | 0.991 | 0.989 | 0.989 | 0.991 | 0.950 | 0.980 | 0.978 | 0.969 | 0.981 | 0.979 |
| 8 | 80 | 44,650 | 0.991 | 0.989 | 0.991 | 0.991 | 0.954 | 0.981 | 0.980 | 0.970 | 0.982 | 0.980 |
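The parameter counts in the table are consistent with a fully connected network feeding the 10 target parameters in the column headers from 61 inputs, where "Layers" counts the weighted (dense) layers. The 61-input figure is inferred from the counts rather than stated in the table, so treat it as an assumption in this sketch:

```python
def mlp_param_count(n_layers, n_hidden, n_in=61, n_out=10):
    """Weights + biases of a dense MLP with n_layers weighted layers,
    each hidden layer having n_hidden neurons."""
    total = (n_in + 1) * n_hidden                        # input -> first hidden
    total += (n_layers - 2) * (n_hidden + 1) * n_hidden  # hidden -> hidden
    total += (n_hidden + 1) * n_out                      # last hidden -> output
    return total

print(mlp_param_count(5, 30))  # -> 4960, matching the table's first row
```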
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).