Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Mapping Single Palm-Trees Species in Forest Environments with a Deep Convolutional Neural Network

Version 1: Received: 5 March 2021 / Approved: 8 March 2021 / Online: 8 March 2021 (13:37:58 CET)

A peer-reviewed article of this Preprint also exists.

Arce, L.S.D., Osco, L.P., Arruda, M.d.S.d. et al. Mauritia flexuosa palm trees airborne mapping with deep convolutional neural network. Sci Rep 11, 19619 (2021). https://doi.org/10.1038/s41598-021-98522-7

Abstract

Accurately mapping individual tree species in densely forested environments is crucial for forest inventory. When only RGB images are available, this is a challenging task for most automatic photogrammetry processes, mainly because of the spectral similarity between species in RGB scenes. State-of-the-art deep learning methods can identify tree species in RGB images at an attractive cost, accuracy, and computational load. This paper presents a deep learning-based approach to detect an important multi-use palm species (Mauritia flexuosa, known as Buriti) in aerial RGB imagery. In South America, this palm is essential to many indigenous and local communities, and it is also a valuable indicator of water resources, which makes mapping its location especially useful. The method is based on a Convolutional Neural Network (CNN) that identifies and geolocates single tree species in a highly complex forest environment by estimating, for every pixel in the image, the likelihood that it belongs to a tree and extracting detections from the resulting confidence map. The performance of the proposed method is compared against state-of-the-art object detection networks on a dataset of 1,394 airborne scenes in which 5,334 palm trees were manually labeled. The method returned a mean absolute error (MAE) of 0.75 trees and an F1-measure of 86.9%, outperforming both Faster R-CNN and RetinaNet under equal experimental conditions. The network is also fast, with a per-image detection time of 0.073 seconds (standard deviation of 0.002 s) on the GPU. In conclusion, the presented method handles high-density forest scenarios efficiently, can accurately map the location of single species such as the M. flexuosa palm tree, and may be useful for future frameworks.
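
The abstract describes the detector only at a high level: a CNN assigns each pixel a confidence of belonging to a tree, detections are extracted from that confidence map, and results are scored with an F1-measure and a tree-count MAE. The sketch below is a minimal illustration of that general idea, not the paper's implementation: the `confidence` array is assumed to come from some CNN not shown here, and the threshold, window size, and matching tolerance are hypothetical values chosen only for the example.

```python
import numpy as np
from scipy.ndimage import maximum_filter


def peaks_from_confidence(confidence, threshold=0.5, window=15):
    """Turn a per-pixel confidence map into (row, col) tree detections.

    A pixel is kept when it is the maximum of its `window`-sized
    neighbourhood and its confidence exceeds `threshold`.  Both values
    are illustrative assumptions, not settings from the paper.
    """
    local_max = maximum_filter(confidence, size=window) == confidence
    keep = local_max & (confidence >= threshold)
    return np.argwhere(keep)  # one (row, col) pair per detected tree


def f1_and_count_error(pred, truth, tol=20):
    """Greedily match predicted peaks to labeled tree positions.

    A prediction counts as a true positive when a still-unmatched
    ground-truth tree lies within `tol` pixels.  Returns the F1-measure
    and the absolute tree-count error for one image; averaging the count
    error over a test set gives an MAE in the spirit of the abstract.
    """
    n_pred, n_true = len(pred), len(truth)
    remaining = [tuple(t) for t in truth]
    tp = 0
    for p in pred:
        if not remaining:
            break
        dists = [np.hypot(p[0] - t[0], p[1] - t[1]) for t in remaining]
        j = int(np.argmin(dists))
        if dists[j] <= tol:
            tp += 1
            remaining.pop(j)  # each labeled tree can match only once
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_true if n_true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return f1, abs(n_pred - n_true)
```

Treating detection as per-pixel confidence estimation followed by peak extraction is one plausible way such a network can avoid anchor-box tuning for small, round canopies, which may help explain the comparison against box-based detectors such as Faster R-CNN and RetinaNet; the actual architecture and post-processing are described in the paper itself.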

Keywords

Convolutional Neural Network; Deep Learning; Environmental Monitoring

Subject

Environmental and Earth Sciences, Atmospheric Science and Meteorology
