Submitted: 08 August 2024
Posted: 09 August 2024
Abstract

Keywords:
1. Introduction
2. Problem of Eigenvalues
- z1 ≈ 0.064
- z2 ≈ 1.870
- z3 ≈ 3.193
- z4 ≈ 4.320
- z5 ≈ 5.393
- z6 ≈ 6.443
- z7 ≈ 7.484
- z8 ≈ 8.520
- z9 ≈ 9.553
- z10 ≈ 10.585
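The values z1, …, z10 above are the first ten roots of P30(z), framed here as an eigenvalue problem. In general, the roots of a monic polynomial coincide with the eigenvalues of its companion matrix; a minimal sketch of that connection, using a small example polynomial rather than P30(z) itself (whose coefficients are not reproduced here):

```python
import numpy as np

# The roots of a monic polynomial equal the eigenvalues of its companion matrix.
# Small example: p(z) = z^3 - 6 z^2 + 11 z - 6, whose roots are 1, 2, 3.
a = np.array([-6.0, 11.0, -6.0])   # coefficients a0, a1, a2 of the monic polynomial

n = len(a)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
C[:, -1] = -a                      # last column holds -a0, -a1, -a2

eigs = np.sort(np.linalg.eigvals(C).real)
print(eigs)                        # approximately [1. 2. 3.]
```

The same construction turns any polynomial root-finding problem, including P30(z), into a matrix eigenvalue problem that iterative methods can attack.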
3. Traditional Iterative Methods for Symmetric Matrices
3.1. Algorithm (Power Method)
3.2. Description of the Problem, Algorithm, and Application:
4. Result for the First Ten Eigenvalues Rounded to 20 Decimal Places Applying This Algorithm:
[Figure: eigenvalue estimates (y-axis) versus algorithm iterations (x-axis).]

[Figure: approximation error (y-axis) versus algorithm iterations (x-axis).]
5. General Overview
- Power Method: This algorithm iteratively approximates the eigenvalues and corresponding eigenvectors of the matrix. The working vector is normalized in each iteration to avoid overflow and underflow errors, and the dominant eigenvalue, the one with the largest modulus, increasingly dominates the result as the iterations proceed.
- Convergence: The graphs show the convergence process of the algorithm, demonstrating how the iterative power method successfully finds the dominant eigenvalues of the matrix P30(z).
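The normalized power iteration described above can be sketched as follows. This is a generic illustration, not the authors' exact implementation; the example matrix, tolerance, and iteration cap are placeholders:

```python
import numpy as np

def power_method(A, tol=1e-12, max_iter=10_000):
    """Return the dominant eigenvalue (largest modulus) of A and its eigenvector."""
    x = np.ones(A.shape[0])            # initial vector
    lam_old = 0.0
    for _ in range(max_iter):
        y = A @ x
        x = y / np.linalg.norm(y)      # normalize each iteration to avoid over/underflow
        lam = x @ A @ x                # Rayleigh-quotient estimate of the eigenvalue
        if abs(lam - lam_old) < tol:   # stop once successive estimates agree
            break
        lam_old = lam
    return lam, x

# Example on a small symmetric matrix; eigenvalues are (5 ± sqrt(5)) / 2
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
```

Because only the direction of the vector matters, the per-iteration normalization changes nothing mathematically while keeping the entries in a safe floating-point range.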
6. Newton-Raphson Method
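As a reminder of the basic scheme, each step replaces the current estimate with the root of the tangent line, z_{k+1} = z_k − f(z_k)/f′(z_k). A generic sketch, with an illustrative polynomial rather than P30(z):

```python
def newton_raphson(f, df, z0, tol=1e-12, max_iter=100):
    """Iterate z <- z - f(z)/f'(z) until the step size drops below tol."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Example: positive root of p(z) = z^2 - 2, starting from z0 = 1
root = newton_raphson(lambda z: z * z - 2, lambda z: 2 * z, 1.0)
```

Convergence is quadratic near a simple root, but the method finds only one root per starting point, which motivates the simultaneous-iteration variant below.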
6.1. Durand-Kerner Method
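The Durand-Kerner (Weierstrass) method iterates on all n root estimates at once, updating each by z_i ← z_i − p(z_i)/∏_{j≠i}(z_i − z_j). A minimal sketch for a monic polynomial, using the customary starting guesses (powers of a non-real point) and a small example polynomial as a stand-in for P30(z):

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Find all roots of a monic polynomial simultaneously.

    coeffs: coefficients with the leading 1 first, e.g. [1, c_{n-1}, ..., c_0].
    """
    n = len(coeffs) - 1
    p = lambda z: np.polyval(coeffs, z)
    z = (0.4 + 0.9j) ** np.arange(n)   # standard distinct, non-real starting points
    for _ in range(max_iter):
        new = z.copy()
        for i in range(n):
            # Weierstrass correction: divide p(z_i) by the product of gaps to the others
            denom = np.prod([new[i] - new[j] for j in range(n) if j != i])
            new[i] = new[i] - p(new[i]) / denom
        if np.max(np.abs(new - z)) < tol:
            z = new
            break
        z = new
    return z

# Example: z^3 - 6 z^2 + 11 z - 6 has roots 1, 2, 3
roots = durand_kerner([1, -6, 11, -6])
```

Updating the estimates in place (Gauss-Seidel style, as above) typically accelerates convergence compared with updating them all from the previous sweep.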

7. Hybrid Approaches for Eigenvalues and Eigenvectors Using Neural Networks
7.1. Activation Function and Mathematical Model
7.2. Mathematical Model of the Hybrid Approach
- z is the input (e.g., a complex number z),
- W1 is the weight matrix,
- b1 is the bias vector,
- σ1, σ2, σ3 are the activation functions applied to the outputs of the successive neural layers.
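A minimal forward pass consistent with this notation is sketched below, assuming a three-layer network h_k = σ_k(W_k h_{k−1} + b_k). Only W1, b1, and σ1, σ2, σ3 are named in the text; the layer sizes, the additional weights W2, W3 and biases b2, b3, and the specific activations (tanh and sigmoid) are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(z, params):
    """Apply h = sigma_k(W_k h + b_k) for each layer in turn."""
    h = z
    for W, b, sigma in params:
        h = sigma(W @ h + b)
    return h

rng = np.random.default_rng(0)
# Hypothetical layer sizes: input 2 (Re z, Im z) -> 8 -> 8 -> 1
params = [
    (rng.standard_normal((8, 2)), np.zeros(8), np.tanh),     # layer 1, sigma_1
    (rng.standard_normal((8, 8)), np.zeros(8), np.tanh),     # layer 2, sigma_2
    (rng.standard_normal((1, 8)), np.zeros(1), sigmoid),     # layer 3, sigma_3
]
z = np.array([0.064, 0.0])   # a complex input encoded as (Re z, Im z)
out = forward(z, params)
```

In the hybrid approach, such a network would be trained so that its output refines or initializes the iterative root estimates; the sketch shows only the forward computation implied by the listed symbols.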

8. Conclusions
References
- Laverde, J.G.T. On the Newton-Raphson method and its modifications. Cienc. En Desarro. 2023, 14, 75–80.
- Menini, L.; Possieri, C.; Tornambè, A. A locally convergent continuous-time algorithm to find all the roots of a time-varying polynomial. Automatica 2021, 131, 109681.
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
- Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.-R.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N.; et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97.
- Galić, D.; Stojanović, Z.; Čajić, E. Application of Neural Networks and Machine Learning in Image Recognition. Tech. Gaz. 2024, 31, 316–323.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Schmidhuber, J. Deep Learning in Neural Networks. Neural Netw. 2015, 61, 85–117.
- Kreyszig, E. Advanced Engineering Mathematics; John Wiley & Sons: Canada, 2011.
- Alekseev, V.B. Abel's Theorem in Problems and Solutions; Springer: USA, 2004.
- Saad, Y.; van der Vorst, H.A. Iterative solution of linear systems in the 20th century. J. Comput. Appl. Math. 2000, 123, 1–33.
- Han, J.; Lu, J.; Zhou, M. Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach. J. Comput. Phys. 2020, 423, 109792.
- Urban, S. Neural Network Architectures and Activation Functions: A Gaussian Process Approach. Doctoral Dissertation, Technische Universität München, 2017.
- Čajić, E.; Ibrišimović, I.; Šehanović, A.; Bajrić, D.; Šćekić, J. Fuzzy Logic and Neural Networks for Disease Detection and Simulation in Matlab. CS IT Conf. Proc. 2023, 13, 5.
- Srivastava, N.; Hinton, G.E.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Barić, T.; Boras, V.; Galić, R. Substitutional model of the soil based on artificial neural networks. J. Energy: Energ. 2007, 56, 96–113.
- Bengio, Y.; Simard, P.; Frasconi, P. Learning Long-Term Dependencies with Recurrent Neural Networks. IEEE Trans. Neural Netw. 1994, 5, 239–246.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Ajić, E.; Rešić, S.; Elezaj, M.R. Development of efficient models of artificial intelligence for autonomous decision making in dynamic information systems. J. Math. Tech. Comput. Math. 2024, 3.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).