Submitted: 09 January 2026
Posted: 09 January 2026
Abstract
Keywords:
I. Introduction
II. Related Work
III. Research on the Personalized Mining Algorithm for Grassroots Network Data Based on Deep Learning
A. Overall Design of the Mining Process
B. Data Preparation Stage
C. Construction and Training of the Fuzzy Neural Network
- 1) Construction of the Fuzzy Neural Network
- 2) Training of the Fuzzy Neural Network
D. Network Pruning and Rule Extraction Stage
IV. Experimental Results and Analysis
A. Comparison of Overall Mining Efficiency
B. Comparison of Mining Accuracy



References
- Jamali, M.; Ester, M. A matrix factorization technique with trust propagation for recommendation in social networks. Proc. RecSys, 2010; pp. 135–142. [Google Scholar]
- Palla, G.; Barabási, A. L.; Vicsek, T. Quantifying social group evolution. Nature 2007, vol. 446(no. 7136), 664–667. [Google Scholar] [CrossRef]
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Proc. NIPS, 2017; pp. 1024–1034. [Google Scholar]
- Miorandi, D.; Sicari, S.; De Pellegrini, F.; Chlamtac, I. Internet of Things: Vision, applications and research challenges. Ad Hoc Networks 2012, vol. 10(no. 7), 1497–1516. [Google Scholar] [CrossRef]
- Heckerman, D.; Geiger, D.; Chickering, D. M. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning 1995, vol. 20(no. 3), 197–243. [Google Scholar] [CrossRef]
- Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques; Morgan Kaufmann, 2011. [Google Scholar]
- Baker, R. S.; Inventado, P. S. Educational data mining and learning analytics. In Learning Analytics; Springer: New York, NY, 2014; pp. 61–75. [Google Scholar]
- Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory; Wiley, 1949. [Google Scholar]
- Moustapha, A. I.; Selmic, R. R. Wireless sensor network modeling using modified recurrent neural networks: Application to fault detection. IEEE Transactions on Instrumentation and Measurement 2008, vol. 57(no. 5), 981–988. [Google Scholar] [CrossRef]
- Chen, C.; Wu, Y.; Dai, Q.; Zhou, H. Y.; Xu, M.; Yang, S.; …; Yu, Y. A survey on graph neural networks and graph transformers in computer vision: A task-oriented perspective. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024. [Google Scholar]
- Peng, S.; Zhang, X.; Zhou, L.; Wang, P. YOLO-CBD: Classroom Behavior Detection Method Based on Behavior Feature Extraction and Aggregation. Sensors 2025, vol. 25(no. 10), 3073. [Google Scholar] [CrossRef]
- Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
- Zhang, X.; Wang, Q. EEG anomaly detection using temporal graph attention for clinical applications. Journal of Computer Technology and Software 2025, vol. 4(no. 7). [Google Scholar]
- Sehgal, U.; Kaur, K.; Kumar, P. Notice of violation of IEEE publication principles: The anatomy of a large-scale hyper textual web search engine. Proc. 2009 Second Int. Conf. Computer and Electrical Engineering 2009, vol. 2, 491–495. [Google Scholar]
- Perozzi, B.; Al-Rfou, R.; Skiena, S. Deepwalk: Online learning of social representations. Proc. 20th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, Aug. 2014; pp. 701–710. [Google Scholar]
- Khedri, K.; Rawassizadeh, R.; Wen, Q.; Hosseinzadeh, M. Pruning and quantization impact on graph neural networks. arXiv 2025, arXiv:2510.22058. [Google Scholar] [CrossRef]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 2014, vol. 15(no. 1), 1929–1958. [Google Scholar]
- Donoho, D. L.; Johnstone, I. M. Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, vol. 81(no. 3), 425–455. [Google Scholar] [CrossRef]
- Xie, A.; Chang, W. C. Deep learning approach for clinical risk identification using transformer modeling of heterogeneous EHR data. arXiv 2025, arXiv:2511.04158. [Google Scholar] [CrossRef]
- Wu, Y.; Qin, Y.; Su, X.; Lin, Y. Transformer-based risk monitoring for anti-money laundering with transaction graph integration. Proc. 2025 2nd Int. Conf. Digital Economy, Blockchain and Artificial Intelligence, June 2025; pp. 388–393. [Google Scholar]
- Hu, C.; Cheng, Z.; Wu, D.; Wang, Y.; Liu, F.; Qiu, Z. Structural generalization for microservice routing using graph neural networks. arXiv 2025, arXiv:2510.15210. [Google Scholar] [CrossRef]
- Pan, S.; Wu, D. Trustworthy summarization via uncertainty quantification and risk awareness in large language models. arXiv 2025, arXiv:2510.01231. [Google Scholar]
- Guan, T.; Sun, S.; Chen, B. Faithfulness-aware multi-objective context ranking for retrieval-augmented generation. 2025. [Google Scholar]
- Wang, C.; Yuan, T.; Hua, C.; Chang, L.; Yang, X.; Qiu, Z. Integrating large language models with cloud-native observability for automated root cause analysis and remediation. (preprint). 2025. [Google Scholar] [CrossRef]
- Cao, K.; Zhao, Y.; Chen, H.; Liang, X.; Zheng, Y.; Huang, S. Multi-hop relational modeling for credit fraud detection via graph neural networks. (preprint). 2025. [Google Scholar] [CrossRef]
- Rama Moorthy, H.; Avinash, N. J.; Krishnaraj Rao, N. S.; Raghunandan, K. R.; Dodmane, R.; Blum, J. J.; Gabralla, L. A. Dual stream graph augmented transformer model integrating BERT and GNNs for context aware fake news detection. Scientific Reports 2025, vol. 15(no. 1), 25436. [Google Scholar]
- Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems 2017, vol. 30. [Google Scholar]
- Kipf, T. N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Jamali, M.; Ester, M. A matrix factorization technique with trust propagation for recommendation in social networks. Proc. Fourth ACM Conf. Recommender Systems, Sept. 2010; pp. 135–142. [Google Scholar]
- Huang, J.; Li, S. Z.; Zhang, Y. Learning Bayesian networks from data: An efficient approach. IEEE Trans. Syst., Man, Cybern. 2000, vol. 30(no. 6), 263–274. [Google Scholar]
- Li, J.; Gan, Q.; Wu, R.; Chen, C.; Fang, R.; Lai, J. Causal representation learning for robust and interpretable audit risk identification in financial systems. 2025. [Google Scholar] [CrossRef]
- Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems 2021, vol. 33(no. 12), 6999–7019. [Google Scholar] [CrossRef]
- Bishop, C. M.; Nasrabadi, N. M. Pattern Recognition and Machine Learning; Springer, 2006. [Google Scholar]
- Brin, S.; Page, L. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems 1998, vol. 30(no. 1-7), 107–117. [Google Scholar] [CrossRef]
- Baker, R. S.; Inventado, P. S. Educational data mining and learning analytics. In Learning Analytics; Springer, 2014; pp. 61–75. [Google Scholar]
- Kim, Y. Convolutional neural networks for sentence classification. arXiv 2014, arXiv:1408.5882. [Google Scholar] [CrossRef]
- Lyu, N.; Wang, Y.; Chen, F.; Zhang, Q. Advancing text classification with large language models and neural attention mechanisms. arXiv 2025, arXiv:2512.09444. [Google Scholar] [CrossRef]
- A self-supervised learning framework for robust anomaly detection in imbalanced and heterogeneous time-series data. 2025.
- Hu, X.; Kang, Y.; Yao, G.; Kang, T.; Wang, M.; Liu, H. Dynamic prompt fusion for multi-task and cross-domain adaptation in LLMs. arXiv 2025, arXiv:2509.18113. [Google Scholar]
- Lyu, N.; Chen, F.; Zhang, C.; Shao, C.; Jiang, J. Deep temporal convolutional neural networks with attention mechanisms for resource contention classification in cloud computing. 2025. [Google Scholar] [CrossRef]




| Algorithm | Dataset | Precision P (%) | Recall R (%) | Similarity D (%) |
|---|---|---|---|---|
| Proposed Algorithm | Dataset A | 98.5 | 97.86 | 95.47 |
| Proposed Algorithm | Dataset B | 94.83 | 92.11 | 90.03 |
| Method in [5] | Dataset A | 93.46 | 91.27 | 89.06 |
| Method in [5] | Dataset B | 84.8 | 80.44 | 79.33 |
| Method in [6] | Dataset A | 87.48 | 82.7 | 84.33 |
| Method in [6] | Dataset B | 82.77 | 73.73 | 76.28 |
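The precision (P) and recall (R) columns in the table above follow the standard definitions for evaluating mined results. As a minimal illustrative sketch (not code from the paper, and using hypothetical counts), they can be computed from true-positive, false-positive, and false-negative tallies:

```python
# Illustrative sketch: how precision (P) and recall (R), reported as
# percentages in the table above, are conventionally computed.
# The counts below are hypothetical, not taken from the experiments.

def precision(tp: int, fp: int) -> float:
    """Percentage of mined records that are actually relevant."""
    return 100.0 * tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Percentage of relevant records that were successfully mined."""
    return 100.0 * tp / (tp + fn)

# Hypothetical tallies for one mining run on one dataset:
tp, fp, fn = 985, 15, 22  # true positives, false positives, false negatives
print(f"P = {precision(tp, fp):.2f}%")  # 98.50%
print(f"R = {recall(tp, fn):.2f}%")     # 97.82%
```

A higher P indicates fewer irrelevant records in the mined output, while a higher R indicates fewer relevant records missed; the similarity measure D is defined separately in the paper.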
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.