Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Mapping Single Palm-Trees Species in Forest Environments with a Deep Convolutional Neural Network

Version 1: Received: 5 March 2021 / Approved: 8 March 2021 / Online: 8 March 2021 (13:37:58 CET)

How to cite: Arce, L.S.D.; Arruda, M.D.S.D.; Furuya, D.E.G.; Osco, L.P.; Marques Ramos, A.P.; Aoki, C.; Pott, A.; Fatholahi, S.; Li, J.; Gonçalves, W.N.; Marcato Junior, J. Mapping Single Palm-Trees Species in Forest Environments with a Deep Convolutional Neural Network. Preprints 2021, 2021030220 (doi: 10.20944/preprints202103.0220.v1).

Abstract

Accurately mapping individual tree species in densely forested environments is crucial for forest inventory. When only RGB images are considered, this is a challenging task for many automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes, which hinders most automatic methods. State-of-the-art deep learning methods can identify tree species in RGB images with attractive cost, accuracy, and computational load. This paper presents a deep learning-based approach to detect an important multi-use palm species (Mauritia flexuosa, known as Buriti) in aerial RGB imagery. In South America, this palm tree is essential to many indigenous and local communities. The species is also a valuable indicator of water resources, which makes mapping its location beneficial. The method is based on a Convolutional Neural Network (CNN) that identifies and geolocates single tree species in a high-complexity forest environment, estimating the likelihood of every pixel in the image belonging to a tree through a confidence-map feature extraction. This study compares the performance of the proposed method against state-of-the-art object detection networks. For this, a dataset of 1,394 airborne scenes, in which 5,334 palm trees were manually labeled, was used. The results returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%. These results are better than those of both Faster R-CNN and RetinaNet under equal experimental conditions. The proposed network detected palm trees quickly, with a per-image detection time of 0.073 seconds (standard deviation of 0.002) on the GPU. In conclusion, the presented method handles high-density forest scenarios efficiently, accurately maps the location of single species such as the M. flexuosa palm tree, and may be useful for future frameworks.
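The abstract describes extracting individual trees from a per-pixel confidence map and scoring the detections with MAE and F1. The following is a minimal sketch of that post-processing and evaluation step, assuming the confidence map has already been predicted by the CNN; the function names, thresholds, and greedy matching scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: turn a per-pixel confidence map into tree detections
# and score them with count MAE and F1. Names and thresholds are assumptions.
import numpy as np
from scipy.ndimage import maximum_filter

def find_tree_peaks(conf_map, conf_threshold=0.5, window=15):
    """Return (row, col) coordinates of local confidence maxima above a threshold."""
    local_max = maximum_filter(conf_map, size=window) == conf_map
    return np.argwhere(local_max & (conf_map >= conf_threshold))

def score_detections(pred_points, gt_points, match_radius=10.0):
    """Greedily match predictions to ground-truth points within a pixel radius."""
    pred_points = np.asarray(pred_points, dtype=float)
    gt_points = np.asarray(gt_points, dtype=float).reshape(-1, 2)
    gt_free = np.ones(len(gt_points), dtype=bool)
    tp = 0
    for p in pred_points:
        if not gt_free.any():
            break
        d = np.linalg.norm(gt_points - p, axis=1)
        d[~gt_free] = np.inf
        j = int(np.argmin(d))
        if d[j] <= match_radius:
            gt_free[j] = False
            tp += 1
    fp = len(pred_points) - tp
    fn = len(gt_points) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    mae = abs(len(pred_points) - len(gt_points))  # per-image count error
    return {"precision": precision, "recall": recall, "f1": f1, "mae": mae}
```

Peak extraction with a maximum filter and greedy distance-based matching are common choices for point-based tree detection; the paper's actual extraction and matching criteria may differ.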

Subject Areas

Convolutional Neural Network; Deep Learning; Environmental Monitoring
