Submitted: 13 March 2025
Posted: 13 March 2025
Abstract
Keywords:
1. Introduction
- A new early forest fire detection model, YOLO-UFS: we propose a detection model designed for drone-based early forest fire and smoke detection that addresses low computational cost, low latency, complex background interference, and the coexistence of smoke and fire.
- Self-built dataset: a custom dataset was constructed with three sample types (small flames only, smoke only, and small flames with smoke), and experiments compared the model's performance against classical algorithms.
- Model improvements: the C3 module is replaced with C3-MNV4 to reduce parameters and improve feature extraction; the AF-IoU loss function improves detection accuracy, especially for small targets; the NAM attention module concentrates responses on the target region; and ObjectBox together with BiFPN improves detail retention and generalization. Together, these changes make YOLO-UFS more accurate and efficient for early forest fire detection.
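For readers unfamiliar with NAM (Liu et al., 2021, listed in the references), the following is a minimal NumPy sketch of its normalization-based channel attention idea; the function name, shapes, and toy values are illustrative and are not the paper's implementation.

```python
import numpy as np

def nam_channel_attention(x, gamma):
    """Sketch of NAM-style channel attention: batch-norm scale factors
    (gamma) measure each channel's variance contribution, so channels
    with larger |gamma| receive larger attention weights.
    x: feature map (C, H, W); gamma: per-channel BN scale factors (C,)."""
    w = np.abs(gamma) / np.abs(gamma).sum()     # normalized channel weights
    scaled = x * w[:, None, None]               # reweight each channel
    return x * (1.0 / (1.0 + np.exp(-scaled)))  # sigmoid gate on the input

# Toy example: 3 channels, 2x2 spatial grid, uniform activations.
x = np.ones((3, 2, 2))
gamma = np.array([0.1, 0.6, 0.3])       # channel 1 has the largest BN scale
out = nam_channel_attention(x, gamma)   # channel 1 receives the largest gate
```

In the full module the gating is applied after batch normalization inside the network; this sketch only shows how the BN scale factors are turned into channel weights.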
2. Materials and Methods
2.1. Early Forest Fire Image Acquisition and Selection
- a. Target Identification
- b. Interference with Detection Targets
- c. Position of Detection Targets
2.1.1. Data Augmentation Processing
2.1.2. Dataset Construction
2.2. The YOLO-UFS Model
2.2.1. Replacement of the C3 Module
2.2.2. Introduction of the Attention Mechanism NAM
2.2.3. Bidirectional Feature Pyramid Network (BiFPN)
2.2.4. ObjectBox Detector
2.2.5. Optimization of the Loss Function
3. Experiments and Analysis of Results
3.1. Test Conditions and Indicators
3.2. Comparative Experiments
3.3. Ablation Experiments
3.4. Generalization Experiment
3.4.1. Generalization Comparison Experiments
3.4.2. Generalized Ablation Experiments
4. Visual Analysis and Discussion
4.1. Visual Analysis and Shortcomings
4.2. Discussion of Future Work
5. Conclusion
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Cui, R. K., Qian, L. H., & Wang, Q. H. (2023). Research progress on fire protection function evaluation of forest road network. World Forestry Research, 36(6), 32-37.
- National Bureau of Statistics. (2024). Statistical bulletin of the People's Republic of China on national economic and social development for 2023. Available online: https://www.stats.gov.cn/sj/zxfb/202402/t20240228_1947915.html.
- Abid, F. (2021). A survey of machine learning algorithms based forest fires prediction and detection systems. Fire Technology, 57, 559-590.
- Alkhatib, A. A. A. (2014). A review on forest fire detection techniques. International Journal of Distributed Sensor Networks, 10, 597368.
- Chun, B., Jie, C., Qun, H., et al. (2023). Dual-YOLO architecture from infrared and visible images for object detection. Sensors, 23(6), 2934.
- Zhang, S., Gao, D., Lin, H., et al. (2019). Wildfire detection using sound spectrum analysis based on the internet of things. Sensors, 19, 5093.
- Lai, X. L. (2015). Research and design of forest fire monitoring system based on data fusion and Iridium communication [D]. Beijing: Beijing Forestry University.
- Nan, Y. L., Zhang, H. C., Zheng, J. Q., et al. (2021). Application of deep learning to forestry. World Forestry Research, 34(5), 87-90.
- Yuan, C., Zhang, Y. M., & Liu, Z. X. (2015). A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Canadian Journal of Forest Research, 45(7), 783-792.
- Han, Z. S., Fan, X. Q., Fu, Q., et al. (2024). Multi-source information fusion target detection from the perspective of unmanned aerial vehicle. Systems Engineering and Electronic Technology. Available online: https://link.cnki.net/urlid/11.2422.tn.20240430.1210.003.
- Panagiotis, B., Periklis, P., Kosmas, D., et al. (2020). A review on early forest fire detection systems using optical remote sensing. Sensors, 20(22), 6442.
- Li, D. (2023). The research on early forest fire detection algorithm based on deep learning [D]. Changsha: Central South University of Forestry & Technology.
- Xue, Z., Lin, H., & Wang, F. (2022). A small target forest fire detection model based on YOLOv5 improvement. Forests, 13(8), 1332.
- Zhao, L., Zhi, L., Zhao, C., et al. (2022). Fire-YOLO: A small target object detection method for fire inspection. Sustainability, 14(9), 4930.
- Zu, X. P. (2023). Research on forest fire smoke recognition method based on deep learning [D]. Harbin: Northeast Forestry University.
- Su, X. D., Hu, J. X., Chenlin, Z. T., et al. (2023). Fire image detection algorithm for UAV based on improved YOLOv5. Computer Measurement & Control, 31(5), 41-47.
- Pi, J., Liu, Y. H., & Li, J. H. (2023). Research on lightweight forest fire detection algorithm based on YOLOv5s. Journal of Graphics, 44(1), 26-32.
- Zhang, Y., Li, Q., & Wang, H. (2024). YOLOv8-FFD: An enhanced deformable convolution network for forest fire detection. IEEE Transactions on Geoscience and Remote Sensing, 62, 5602715.
- Wang, X., Chen, Z., & Liu, Y. (2024). Visible-infrared cross-modality forest fire detection using dynamic feature fusion. Remote Sensing, 16(3), 521.
- Liu, J., Zhang, W., & Zhou, L. (2024). 3D-YOLOv5: A height-aware network for aerial forest fire detection. ISPRS Journal of Photogrammetry and Remote Sensing, 210, 1-15.
- Chen, K., Xu, M., & Huang, S. (2024). Swin-YOLO: Transformer-based adaptive sparse attention for forest fire detection. Expert Systems with Applications, 238, 121731.
- Zhou, T., Wang, Q., & Li, X. (2025). 3D localization of forest fires using neural radiance fields and deep learning. Fire Technology, 61(2), 789-806.
- Gupta, A., Sharma, P., & Kumar, R. (2025). EdgeFireNet: Neural architecture search for real-time forest fire detection on edge devices. IEEE Internet of Things Journal, 12(5), 8012-8023.
- Sun, X., Sun, L., & Huang, Y. (2020). Forest fire smoke recognition based on convolutional neural network. Journal of Forestry Research, 32(5), 1-7.
- Lu, K. J., Huang, J. W., Li, J. H., et al. (2022). MTL-FFDET: A multi-task learning-based model for forest fire detection. Forests, 13(9), 1448.
- Prema, C. E., Vinsley, S. S., & Suresh, S. (2016). Multi feature analysis of smoke in YUV color space for early forest fire detection. Fire Technology, 52, 1319-1342.
- Xu, Y. Q., Li, J. M., & Zhang, F. Q. (2022). A UAV-based forest fire patrol path planning strategy. Forests, 13, 1952.
- Zu, X. P., & Li, D. (2022). Forest fire smoke recognition method based on UAV images and improved YOLOv3-SPP algorithm. Journal of Forestry Engineering, 7(5), 142-149. [CrossRef]
- Zhang, Q. A., Liu, Y. Q., Gong, C. Y., et al. (2020). Applications of deep learning for dense scenes analysis in agriculture: A review. Sensors, 20(5), 1520. [CrossRef]
- Bu, H. L., Fang, X. J., & Yang, G. M. (2022). Object detection algorithm for remote sensing images based on multi-dimensional information interaction. Journal of Heilongjiang Institute of Technology (Comprehensive Edition), 22(10), 58-65. [CrossRef]
- Qin, D. F., Leichner, C., Delakis, M., et al. (2024). MobileNetV4: Universal models for the mobile ecosystem. arXiv preprint arXiv:2404.10518.
- Sandler, M., Howard, A., Zhu, M. L., et al. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 4510-4520).
- Yang, Y. (2022). Research on image data augmentation method based on generative adversarial network [D]. Zhengzhou: Strategic Support Force Information Engineering University. [CrossRef]
- Xiu, Y., Zheng, X. Y., Sun, L. L., et al. (2022). FreMix: Frequency-based mixup for data augmentation. Wireless Communications and Mobile Computing, 2022, 1-8. [CrossRef]
- Liu, Y. C., Shao, Z. R., Teng, Y. Y., et al. (2021). NAM: Normalization-based attention module. arXiv. Available online: https://arxiv.org/abs/2111.12419.
- Yao, X. K. (2022). Research on air-tightness inspection method for closed containers based on depthwise separable convolutional neural network [D]. Xi'an: Xijing University. [CrossRef]
- Tan, M. X., Pang, R. M., & Le, Q. V. (2020). EfficientDet: Scalable and efficient object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA (pp. 10781-10790).
- Liu, S., Qi, L., Qin, H. F., et al. (2018). Path aggregation network for instance segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA (pp. 8759-8768).
- Zand, M., Etemad, A., & Greenspan, M. (2022). ObjectBox: From centers to boxes for anchor-free object detection. In Lecture Notes in Computer Science (pp. 390-406). Cham: Springer Nature Switzerland. [CrossRef]
- Tong, Z. J., Chen, Y. H., Xu, Z. W., et al. (2023). Wise-IoU: Bounding box regression loss with dynamic focusing mechanism. arXiv preprint.
- Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
- Su, X. D., Hu, J. X., Chenlin, Z. T., et al. (2023). Fire image detection algorithm for UAV based on improved YOLOv5. Computer Measurement & Control, 31(5), 41-47.
| Dataset | Sample type | Number of samples | Total |
|---|---|---|---|
| Training set | Positive: flames only | 2720 | |
| | Positive: smoke only | 1776 | |
| | Positive: flames and smoke | 5980 | 10476 |
| Validation set | Positive: flames only | 2181 | |
| | Positive: smoke only | 1141 | |
| | Positive: flames and smoke | 4098 | 7685 |
| Negative samples | | 639 | 639 |
| Total | | | 18800 |
| Method | mAP/% | FLOPs/G | Parameters |
|---|---|---|---|
| YOLOv3-Tiny | 79.7 | 13.2 | 8849182 |
| YOLOv5s | 87.7 | 15.8 | 7015519 |
| YOLOv8 | 89.2 | 28.4 | 11126358 |
| YOLOXs | 86.9 | 9.6 | 2975226 |
| YOLO-UFS | 91.3 | 4.0 | 1525465 |
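The savings of YOLO-UFS relative to the YOLOv5s baseline it modifies follow directly from the table above; this is a throwaway calculation on the table's numbers, not a claim from the paper.

```python
# Values copied from the comparison table above
# (YOLOv5s is the baseline that YOLO-UFS modifies).
base_params, ufs_params = 7015519, 1525465  # parameter counts
base_flops, ufs_flops = 15.8, 4.0           # FLOPs in G

param_cut = 1 - ufs_params / base_params    # ~0.78: ~78% fewer parameters
flop_cut = 1 - ufs_flops / base_flops       # ~0.75: ~75% fewer FLOPs
```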
| Number | ObjectBox | BiFPN | NAM | AF-IoU | C3-MNV4 | Weight/MB | Precision/% | Recall/% | mAP/% | FLOPs/G | Parameters |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | | | | | | 14.2 | 85.3 | 80.4 | 88.4 | 15.8 | 7015519 |
| 2 | √ | | | | | 14.0 | 85.7 | 80.7 | 88.6 | 3.5 | 1630157 |
| 3 | | √ | | | | 14.1 | 85.6 | 80.6 | 88.7 | 4.6 | 1685145 |
| 4 | | | √ | | | 14.1 | 85.4 | 80.5 | 88.6 | 4.3 | 1944973 |
| 5 | | | | √ | | 14.0 | 86.2 | 81.3 | 88.8 | 5.3 | 5015519 |
| 6 | | | | | √ | 14.0 | 87.8 | 81.2 | 89.2 | 4.4 | 1014517 |
| 7 | √ | √ | 14.2 | 85.4 | 80.7 | 88.4 | 3.8 | 1447897 | |||
| 8 | √ | √ | 14.0 | 87.1 | 80.6 | 88.6 | 4.5 | 1944973 | |||
| 9 | √ | √ | 14.1 | 88.3 | 82.3 | 90.4 | 4.3 | 3499378 | |||
| 10 | √ | √ | √ | 14.1 | 88.4 | 81.6 | 90.6 | 3.9 | 1447897 | ||
| 11 | √ | √ | √ | 14.1 | 87.8 | 81.4 | 90.3 | 4 | 1525465 | ||
| 12 | √ | √ | √ | 14.2 | 88.2 | 82.6 | 91.4 | 4 | 1477634 | ||
| 13 | √ | √ | √ | √ | √ | 14.0 | 88.6 | 83.7 | 91.3 | 4.0 | 1525465 |
| Method | Weight/MB | Input size | Precision/% | Recall/% | mAP/% | F1/% | Recognition rate/(frame·s⁻¹) |
|---|---|---|---|---|---|---|---|
| YOLOv3 | 120.5 | 640 | 70.9 | 59.9 | 63.1 | 64.9 | 39.7 |
| YOLOv4 | 18.1 | 640 | 73.2 | 59.3 | 64.4 | 65.5 | 71.0 |
| YOLOv5s | 14.2 | 640 | 75.4 | 57.3 | 62.3 | 65.1 | 85.6 |
| YOLOv7 | 135.0 | 640 | 76.2 | 52.9 | 79.4 | 62.4 | 35.3 |
| YOLOX | 15.5 | 640 | 74.2 | 53.5 | 77.4 | 62.2 | 169.5 |
| YOLO-UFS | 14.0 | 640 | 74.9 | 58.7 | 82.3 | 65.8 | 172.4 |
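The F1 column above is the harmonic mean of precision and recall; as a quick check, the values for two rows of the table (copied verbatim) reproduce the reported F1 scores.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (both given in %)."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall) pairs copied from the table above
rows = {"YOLOv5s": (75.4, 57.3), "YOLO-UFS": (74.9, 58.7)}
scores = {name: round(f1(p, r), 1) for name, (p, r) in rows.items()}
# scores reproduces the table's F1 column:
# {'YOLOv5s': 65.1, 'YOLO-UFS': 65.8}
```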
| Model | PA-S | PA-M | PA-L | RA-S | RA-M | RA-L |
|---|---|---|---|---|---|---|
| YOLOv5s | 18.3 | 27.0 | 22.1 | 30.7 | 43.8 | 27.6 |
| YOLO-UFS | 23.8 | 29.8 | 27.6 | 34.2 | 47.2 | 37.2 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).