Version 1
: Received: 29 February 2024 / Approved: 29 February 2024 / Online: 29 February 2024 (12:56:51 CET)
How to cite:
Son, E.; Ha, U. R.; Seong, H.; Park, Y.; Song, J.; Kim, H. ResNet152-U-Net: Transfer Learning for the Semantic Segmentation of Forest Restoration Site Using Aerial Images. Preprints 2024, 2024021732. https://doi.org/10.20944/preprints202402.1732.v1
APA Style
Son, E., Ha, U. R., Seong, H., Park, Y., Song, J., & Kim, H. (2024). ResNet152-U-Net: Transfer Learning for the Semantic Segmentation of Forest Restoration Site Using Aerial Images. Preprints. https://doi.org/10.20944/preprints202402.1732.v1
Chicago/Turabian Style
Son, E., U. R. Ha, H. Seong, Y. Park, J. Song, and H. Kim. 2024. "ResNet152-U-Net: Transfer Learning for the Semantic Segmentation of Forest Restoration Site Using Aerial Images." Preprints. https://doi.org/10.20944/preprints202402.1732.v1
Abstract
Accurate detection of forest restoration sites using deep learning is crucial for effective ecosystem management, optimizing reforestation efforts, and ensuring environmental sustainability. Although various deep learning-based systems have been developed for this purpose, the influence of different neural network architectures on model performance in identifying forest restoration sites remains underexplored. This study addresses that gap by exploring an optimal methodology for extracting and classifying candidate sites for forest restoration based on convolutional neural networks (CNNs), which are specialized for image recognition. Four categories (arable land, road and barren, quarry, and forest) were defined as candidate sites for forest restoration. A dataset of 17,043 samples was split into training (11,929) and validation (5,114) sets at a 7:3 ratio for model training. Model accuracy was evaluated using pixel accuracy (PA) and mean intersection over union (mean IoU). The ResNet152-U-Net model achieved a pixel accuracy of 95.2% and a mean IoU of 61.3%, demonstrating excellent performance in extracting candidate sites for forest restoration. This approach offers spatial and temporal advantages over conventional field surveys or aerial image-based assessments and could serve as valuable input for selecting future forest restoration sites.
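The two evaluation metrics named in the abstract, pixel accuracy (PA) and mean intersection over union (mean IoU), can be computed from a pixel-level confusion matrix. The sketch below illustrates this for the four-class task described above; it is a minimal illustration, not the authors' code, and the integer label encoding (0 = arable land, 1 = road and barren, 2 = quarry, 3 = forest) is an assumption for the example.

```python
import numpy as np

NUM_CLASSES = 4  # assumed encoding: arable land, road and barren, quarry, forest

def confusion_matrix(y_true, y_pred, num_classes=NUM_CLASSES):
    """Accumulate a pixel-level confusion matrix from flat label arrays."""
    mask = (y_true >= 0) & (y_true < num_classes)
    idx = num_classes * y_true[mask].astype(int) + y_pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def pixel_accuracy(cm):
    """PA: fraction of pixels whose predicted class matches the ground truth."""
    return np.diag(cm).sum() / cm.sum()

def mean_iou(cm):
    """Mean IoU: per-class intersection over union, averaged over classes."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against empty classes
    return iou.mean()

# Toy example: 8 pixels, 2 of them misclassified.
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 3, 3, 3])
cm = confusion_matrix(y_true, y_pred)
print(pixel_accuracy(cm))  # 0.75
print(mean_iou(cm))        # ~0.5833
```

Note that mean IoU penalizes per-class errors that PA averages away, which is why the abstract's mean IoU (61.3%) is much lower than its PA (95.2%) despite both describing the same predictions.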
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.