Submitted:
16 April 2024
Posted:
16 April 2024
Abstract
Keywords:
1. Introduction
- To our knowledge, this is the first fully unsupervised method for flood area segmentation in color images captured by UAVs. The present work addresses, for the first time, the flood segmentation problem using parameter-free computed masks and unsupervised image analysis techniques.
- The flood areas are given as solutions of a probability optimization problem based on the evolution of an isocontour starting from the high confidence areas and gradually growing according to the hysteresis thresholding method.
- The proposed formulation yields a robust unsupervised algorithm that is simple and effective for the flood segmentation problem.
- The proposed framework is suitable for on-board execution on UAVs, enabling real-time data processing and decision making during flight: the processing time per image is about 0.5 s, without requiring substantial computational resources or specialized GPU capabilities.
2. Materials
3. Methodology
3.1. System Overview
3.2. RGB Vegetation Index Mask
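Following Bendig et al. (cited in the references), the RGB Vegetation Index is computed directly from the three color channels as RGBVI = (G² − B·R)/(G² + B·R). The sketch below illustrates how such a mask could be computed; the R, G, B channel ordering and float-normalized pixel values are assumptions for illustration:

```python
import numpy as np

def rgbvi(img):
    """RGB Vegetation Index (Bendig et al.): (G^2 - B*R) / (G^2 + B*R).

    `img` is an H x W x 3 float array, channels assumed ordered R, G, B.
    Vegetation pixels score close to 1; guarding against a zero
    denominator keeps the map defined on dark pixels.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    num = g ** 2 - b * r
    den = g ** 2 + b * r
    return np.where(den > 0, num / den, 0.0)

# A greenish pixel scores high; a neutral gray pixel scores zero.
sample = np.array([[[0.2, 0.8, 0.1]],
                   [[0.5, 0.5, 0.5]]])
idx = rgbvi(sample)
```

Thresholding such an index map is one way to exclude vegetated regions before estimating the flood-dominant color.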
3.3. LAB Components Masks
3.4. Flood Dominant Color Estimation
3.5. Hysteresis Thresholding
- We adapt the hysteresis thresholding process for region growing. First, the high threshold is applied to the entire probability map to identify pixels with high flood probability. These pixels belong to flood areas with high confidence, so they serve as seeds for the region-growing process described below.
- Next, a connectivity-based approach is used to track the flood. Starting from the seed pixels identified in the first step, the algorithm examines neighboring pixels: if a neighbor's probability value exceeds the low threshold, it is considered part of the flood.
- This process continues recursively until no more connected pixels above the low threshold are found.
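The steps above can be sketched with a connected-component formulation, which is equivalent to the recursive growing; 8-connectivity and a NumPy probability map in [0, 1] are assumed here:

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(prob, low, high):
    """Segment a probability map via hysteresis thresholding.

    Pixels above `high` act as high-confidence seeds; any pixel above
    `low` that is connected to a seed is absorbed into the flood mask.
    Labeling the candidate mask and keeping seed-containing components
    gives the same result as recursive region growing.
    """
    seeds = prob > high                      # high-confidence flood pixels
    candidates = prob > low                  # pixels eligible for growing
    # Label 8-connected components of the candidate mask.
    labels, _ = ndimage.label(candidates, structure=np.ones((3, 3)))
    # Keep only components containing at least one seed pixel.
    keep = np.unique(labels[seeds])
    keep = keep[keep != 0]
    return np.isin(labels, keep)

# Toy probability map: one strong seed grows along connected pixels.
prob = np.array([[0.1, 0.4, 0.9, 0.5, 0.1],
                 [0.1, 0.1, 0.1, 0.1, 0.6]])
mask = hysteresis_threshold(prob, low=0.3, high=0.8)
```

With these toy thresholds the seed at 0.9 pulls in the diagonally connected 0.6 pixel, while the isolated 0.1 background stays out.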
3.6. Final Segmentation
4. Results and Discussion
4.1. Evaluation Metrics
4.2. Implementation
4.3. Flood Area Dataset: Experimental Results and Discussion
4.4. Exploring the Impact of Environmental Zones and Camera Rotation Angle
1. Environmental zone:
- (a) Rural: predominantly featuring fields, hills, rugged mountainsides, scattered housing structures reminiscent of villages or rural settlements, and sparse roads. It comprises 87 of the 290 images in the dataset.
- (b) Urban and peri-urban: distinctly showcasing urban landscapes characterized by well-defined infrastructure that conforms to urban planning guidelines, a dense road network, and high population density reflected in numerous buildings and structures. It comprises 203 images.
2. Camera rotation angle:
- (a) No sky (almost top-down view, low camera rotation angle): distinguished by the absence of any sky elements; these images entirely lack any portion of sky or clouds within their composition. It comprises 182 images of the dataset.
- (b) With sky (bird's-eye view, high camera rotation angle): elements of the sky, such as clouds or open sky expanses, are visibly present within the image composition. It comprises the remaining 103 images.
4.5. Ablation Study
| Method | Accuracy | Precision | Recall | F1-score | |
|---|---|---|---|---|---|
| UFS-HT-REM | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
| UFS-HT-REM (weights by Eq. 9) | 84.7% | 79.2% | 78.5% | 78.8% | 77.0% |
| UFS-HT-REM (weights by Eq. 8) | 84.2% | 78.6% | 78.7% | 78.6% | 76.7% |
| UFS-HT | 83.4% | 76.3% | 79.4% | 77.8% | 75.9% |
| UFS-REM | 82.5% | 75.7% | 79.1% | 77.3% | 74.9% |
| UFS-HT-REM - L | 79.9% | 68.9% | 82.0% | 74.8% | 72.8% |
| UFS | 78.6% | 68.9% | 81.1% | 74.5% | 71.6% |
| Mask | 76.0% | 64.9% | 82.7% | 72.7% | 69.8% |
| UFS-HT-REM - B | 74.6% | 66.3% | 76.1% | 70.8% | 67.8% |
| UFS-HT-REM - A | 68.6% | 56.7% | 88.0% | 68.9% | 66.0% |
| UFS(Otsu)-HT-REM | 72.6% | 66.9% | 36.8% | 47.5% | 43.5% |
- UFS-HT-REM-L: variant of UFS-HT-REM computed without the L component mask (Section 3.3)
- UFS-HT-REM-A: variant of UFS-HT-REM computed without the A component mask
- UFS-HT-REM-B: variant of UFS-HT-REM computed without the B component mask
4.6. Comparison with DL Approaches
4.7. Flood Semantic Segmentation Dataset: Experimental Results and Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| ADNet | Attentive Decoder Network |
| AI | Artificial Intelligence |
| APLS | Average Path Length Similarity |
| BCNN | Bayesian Convolutional Neural Network |
| BiT | Bitemporal image Transformer |
| CNN | Convolutional Neural Network |
| CV | Computer Vision |
| DCFAM | Densely Connected Feature Aggregation Module |
| DELTA | Deep Earth Learning, Tools, and Analysis |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DSM | Digital Surface Model |
| EDN | Encoder Decoder Network |
| EDR | Encoder Decoder Residual |
| EHAAC | Encoder-Decoder High-Accuracy Activation Cropping |
| ENet | Efficient Neural Network |
| HR | High Resolution |
| ISPRS | International Society for Photogrammetry and Remote Sensing |
| LSTM | Long Short-Term Memory |
| ML | Machine Learning |
| MRF | Markov Random Field |
| NDWI | Normalized Difference Water Index |
| PFA | Potential Flood Area |
| PSPNet | Pyramid Scene Parsing Network |
| RGBVI | RGB Vegetation Index |
| ResNet | Residual Network |
| SAM | Segment Anything Model |
| SAR | Synthetic Aperture Radar |
| UAV | Unmanned Aerial Vehicle |
| VH | Vertical-Horizontal |
| VV | Vertical-Vertical |
| WSSS | Weakly Supervised Semantic Segmentation |
References
- Algiriyage, N.; Prasanna, R.; Stock, K.; Doyle, E.E.; Johnston, D. Multi-source multimodal data and deep learning for disaster response: a systematic review. SN Computer Science 2022, 3, 1–29. [Google Scholar] [CrossRef]
- Linardos, V.; Drakaki, M.; Tzionas, P.; Karnavas, Y.L. Machine learning in disaster management: recent developments in methods and applications. Machine Learning and Knowledge Extraction 2022, 4. [Google Scholar] [CrossRef]
- Bentivoglio, R.; Isufi, E.; Jonkman, S.N.; Taormina, R. Deep learning methods for flood mapping: a review of existing applications and future research directions. Hydrology and Earth System Sciences 2022, 26, 4345–4378. [Google Scholar] [CrossRef]
- Kumar, V.; Azamathulla, H.M.; Sharma, K.V.; Mehta, D.J.; Maharaj, K.T. The state of the art in deep learning applications, challenges, and future prospects: A comprehensive review of flood forecasting and management. Sustainability 2023, 15, 10543. [Google Scholar] [CrossRef]
- Chouhan, A.; Chutia, D.; Aggarwal, S.P. Attentive decoder network for flood analysis using Sentinel-1 images. 2023 International Conference on Communication, Circuits, and Systems (IC3S). IEEE, 2023, pp. 1–5.
- Drakonakis, G.I.; Tsagkatakis, G.; Fotiadou, K.; Tsakalides, P. OmbriaNet—supervised flood mapping via convolutional neural networks using multitemporal sentinel-1 and sentinel-2 data fusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2022, 15, 2341–2356. [Google Scholar] [CrossRef]
- Dong, Z.; Liang, Z.; Wang, G.; Amankwah, S.O.Y.; Feng, D.; Wei, X.; Duan, Z. Mapping inundation extents in Poyang Lake area using Sentinel-1 data and transformer-based change detection method. Journal of Hydrology 2023, 620, 129455. [Google Scholar] [CrossRef]
- Hänsch, R.; Arndt, J.; Lunga, D.; Gibb, M.; Pedelose, T.; Boedihardjo, A.; Petrie, D.; Bacastow, T.M. SpaceNet 8: The detection of flooded roads and buildings. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 1472–1480.
- Hänsch, R.; Arndt, J.; Lunga, D.; Pedelose, T.; Boedihardjo, A.; Pfefferkorn, J.; Petrie, D.; Bacastow, T.M. SpaceNet 8: Winning Approaches to Multi-Class Feature Segmentation from Satellite Imagery for Flood Disasters. IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023, pp. 1241–1244.
- He, Y.; Wang, J.; Zhang, Y.; Liao, C. An efficient urban flood mapping framework towards disaster response driven by weakly supervised semantic segmentation with decoupled training samples. ISPRS Journal of Photogrammetry and Remote Sensing 2024, 207, 338–358. [Google Scholar] [CrossRef]
- Hernández, D.; Cecilia, J.M.; Cano, J.C.; Calafate, C.T. Flood detection using real-time image segmentation from unmanned aerial vehicles on edge-computing platform. Remote Sensing 2022, 14, 223. [Google Scholar] [CrossRef]
- Hertel, V.; Chow, C.; Wani, O.; Wieland, M.; Martinis, S. Probabilistic SAR-based water segmentation with adapted Bayesian convolutional neural network. Remote Sensing of Environment 2023, 285, 113388. [Google Scholar] [CrossRef]
- Ibrahim, N.; Sharun, S.; Osman, M.; Mohamed, S.; Abdullah, S. The application of UAV images in flood detection using image segmentation techniques. Indones. J. Electr. Eng. Comput. Sci 2021, 23, 1219. [Google Scholar] [CrossRef]
- Inthizami, N.S.; Ma’sum, M.A.; Alhamidi, M.R.; Gamal, A.; Ardhianto, R.; Jatmiko, W.; et al. Flood video segmentation on remotely sensed UAV using improved Efficient Neural Network. ICT Express 2022, 8, 347–351. [Google Scholar] [CrossRef]
- Li, Z.; Demir, I. U-net-based semantic classification for flood extent extraction using SAR imagery and GEE platform: A case study for 2019 central US flooding. Science of The Total Environment 2023, 869, 161757. [Google Scholar] [CrossRef] [PubMed]
- Lo, S.W.; Wu, J.H.; Lin, F.P.; Hsu, C.H. Cyber surveillance for flood disasters. Sensors 2015, 15, 2369–2387. [Google Scholar] [CrossRef] [PubMed]
- Munawar, H.S.; Ullah, F.; Qayyum, S.; Heravi, A. Application of deep learning on uav-based aerial images for flood detection. Smart Cities 2021, 4, 1220–1242. [Google Scholar] [CrossRef]
- Park, J.C.; Kim, D.G.; Yang, J.R.; Kang, K.S. Transformer-Based Flood Detection Using Multiclass Segmentation. 2023 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2023, pp. 291–292.
- Rahnemoonfar, M.; Chowdhury, T.; Sarkar, A.; Varshney, D.; Yari, M.; Murphy, R.R. Floodnet: A high resolution aerial imagery dataset for post flood scene understanding. IEEE Access 2021, 9, 89644–89654. [Google Scholar] [CrossRef]
- Şener, A.; Doğan, G.; Ergen, B. A novel convolutional neural network model with hybrid attentional atrous convolution module for detecting the areas affected by the flood. Earth Science Informatics 2024, 17, 193–209. [Google Scholar] [CrossRef]
- Shastry, A.; Carter, E.; Coltin, B.; Sleeter, R.; McMichael, S.; Eggleston, J. Mapping floods from remote sensing data and quantifying the effects of surface obstruction by clouds and vegetation. Remote Sensing of Environment 2023, 291, 113556. [Google Scholar] [CrossRef]
- Wang, L.; Li, R.; Duan, C.; Zhang, C.; Meng, X.; Fang, S. A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images. IEEE Geoscience and Remote Sensing Letters 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Wieland, M.; Martinis, S.; Kiefl, R.; Gstaiger, V. Semantic segmentation of water bodies in very high-resolution satellite and aerial images. Remote Sensing of Environment 2023, 287, 113452. [Google Scholar] [CrossRef]
- Bauer-Marschallinger, B.; Cao, S.; Tupas, M.E.; Roth, F.; Navacchi, C.; Melzer, T.; Freeman, V.; Wagner, W. Satellite-Based Flood Mapping through Bayesian Inference from a Sentinel-1 SAR Datacube. Remote Sensing 2022, 14, 3673. [Google Scholar] [CrossRef]
- Filonenko, A.; Hernández, D.C.; Seo, D.; Jo, K.H.; et al. Real-time flood detection for video surveillance. IECON 2015 - 41st Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2015, pp. 004082–004085.
- Landuyt, L.; Verhoest, N.E.; Van Coillie, F.M. Flood mapping in vegetated areas using an unsupervised clustering approach on sentinel-1 and-2 imagery. Remote Sensing 2020, 12, 3611. [Google Scholar] [CrossRef]
- Li, J.; Meng, Y.; Li, Y.; Cui, Q.; Yang, X.; Tao, C.; Wang, Z.; Li, L.; Zhang, W. Accurate water extraction using remote sensing imagery based on normalized difference water index and unsupervised deep learning. Journal of Hydrology 2022, 612, 128202. [Google Scholar] [CrossRef]
- McCormack, T.; Campanyà, J.; Naughton, O. A methodology for mapping annual flood extent using multi-temporal Sentinel-1 imagery. Remote Sensing of Environment 2022, 282, 113273. [Google Scholar] [CrossRef]
- Trombini, M.; Solarna, D.; Moser, G.; Dellepiane, S. A goal-driven unsupervised image segmentation method combining graph-based processing and Markov random fields. Pattern Recognition 2023, 134, 109082. [Google Scholar] [CrossRef]
- Yu, F.; Koltun, V. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv 2016, arXiv:1511.07122.
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; Dollár, P.; Girshick, R. Segment Anything. arXiv 2023, arXiv:2304.02643.
- Tarpanelli, A.; Mondini, A.C.; Camici, S. Effectiveness of Sentinel-1 and Sentinel-2 for flood detection assessment in Europe. Natural Hazards and Earth System Sciences 2022, 22, 2473–2489. [Google Scholar] [CrossRef]
- Guo, Z.; Wu, L.; Huang, Y.; Guo, Z.; Zhao, J.; Li, N. Water-body segmentation for SAR images: past, current, and future. Remote Sensing 2022, 14, 1752. [Google Scholar] [CrossRef]
- O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Volume 1 1. Springer, 2020, pp. 128–144.
- Karim, F.; Sharma, K.; Barman, N.R. Flood Area Segmentation. https://www.kaggle.com/datasets/faizalkarim/flood-area-segmentation. Accessed on 20 November 2023.
- Yang, L. Flood Semantic Segmentation Dataset. https://www.kaggle.com/datasets/lihuayang111265/flood-semantic-segmentation-dataset. Accessed on 20 April 2024.
- Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. International Journal of Applied Earth Observation and Geoinformation 2015, 39, 79–87. [Google Scholar] [CrossRef]
- Markaki, S.; Panagiotakis, C. Unsupervised Tree Detection and Counting via Region-Based Circle Fitting. ICPRAM, 2023, pp. 95–106.
- Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A comparative study of RGB and multispectral sensor-based cotton canopy cover modelling using multi-temporal UAS data. Remote Sensing 2019, 11, 2757. [Google Scholar] [CrossRef]
- Chavolla, E.; Zaldivar, D.; Cuevas, E.; Perez, M.A. Color spaces advantages and disadvantages in image color clustering segmentation. In Advances in Soft Computing and Machine Learning in Image Processing; Springer, 2018.
- Hernandez-Lopez, J.J.; Quintanilla-Olvera, A.L.; López-Ramírez, J.L.; Rangel-Butanda, F.J.; Ibarra-Manzano, M.A.; Almanza-Ojeda, D.L. Detecting objects using color and depth segmentation with Kinect sensor. Procedia Technology 2012, 3, 196–204. [Google Scholar] [CrossRef]
- Canny, J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 1986, PAMI-8, 679–698.
- Fabbri, R.; Costa, L.D.F.; Torelli, J.C.; Bruno, O.M. 2D Euclidean distance transform algorithms: A comparative survey. ACM Computing Surveys (CSUR) 2008, 40, 1–44. [Google Scholar] [CrossRef]
- Xu, X.; Xu, S.; Jin, L.; Song, E. Characteristic analysis of Otsu threshold and its applications. Pattern Recognition Letters 2011, 32, 956–961. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III. Springer, 2015, pp. 234–241.
- Sun, K.; Xiao, B.; Liu, D.; Wang, J. Deep high-resolution representation learning for human pose estimation. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 5693–5703.
- Grinias, I.; Panagiotakis, C.; Tziritas, G. MRF-based segmentation and unsupervised classification for building and road detection in peri-urban areas of high-resolution satellite images. ISPRS journal of photogrammetry and remote sensing 2016, 122, 145–166. [Google Scholar] [CrossRef]
| Authors | Year | Approach | Imagery | Method |
|---|---|---|---|---|
| Chouhan, A. et al. [5] | 2023 | Supervised | Sentinel-1 | Multi-scale ADNet |
| Drakonakis, G.I. et al. [6] | 2022 | Supervised | Sentinel-1, 2 | CNN change detection |
| Dong, Z. et al. [7] | 2023 | Supervised | Sentinel-1 | STANets, SNUNet, BiT |
| Hänsch, R. et al. [8,9] | 2022 | Supervised | HR satellite RGB | U-Net |
| He, Y. et al. [10] | 2024 | Weakly-supervised | HR aerial RGB | End-to-end WSSS framework with structure constraints and self-distillation |
| Hernández, D. et al. [11] | 2021 | Supervised | UAV RGB | Optimized DNN |
| Hertel, V. et al. [12] | 2023 | Supervised | SAR | BCNN |
| Ibrahim, N. et al. [13] | 2021 | Semi-supervised | UAV RGB | RGB and HSI color models, k-means clustering, region growing |
| Inthizami, N.S. et al. [14] | 2022 | Supervised | UAV video | Improved ENet |
| Li, Z. et al. [15] | 2023 | Supervised | Sentinel-1 | U-Net |
| Lo, S.W. et al. [16] | 2015 | Semi-supervised | RGB (surveillance camera) | HSV color model, seeded region growing |
| Munawar, H.S. et al. [17] | 2021 | Supervised | UAV RGB | Landmark-based feature selection, CNN hybrid |
| Park, J.C. et al. [18] | 2023 | Supervised | HR satellite RGB | Swin transformer in a Siamese-UNet |
| Rahnemoonfar, M. et al. [19] | 2021 | Supervised | UAV RGB | InceptionNetv3, ResNet50, XceptionNet, PSPNet, ENet, DeepLabv3+ |
| Şener, A. et al. [20] | 2024 | Supervised | UAV RGB | ED network with EDR block and atrous convolutions (FASegNet) |
| Shastry, A. et al. [21] | 2023 | Supervised | WorldView 2, 3 multispectral | CNN with atrous convolutions |
| Wang, L. et al. [22] | 2022 | Supervised | True orthophoto (near-infrared), DSM | Swin transformer and DCFAM |
| Wieland, M. et al. [23] | 2023 | Supervised | Satellite and aerial | U-Net with MobileNet-V3 backbone pre-trained on ImageNet |
| Bauer-Marschallinger, B. et al. [24] | 2022 | Unsupervised | SAR | Datacube, time series-based detection, Bayes classifier |
| Filonenko, A. et al. [25] | 2015 | Unsupervised | RGB (surveillance camera) | Change detection, color probability calculation |
| Landuyt, L. et al. [26] | 2020 | Unsupervised | Sentinel-1, 2 | K-means clustering, region growing |
| Li, J. et al. [27] | 2022 | Unsupervised | Sentinel-2, Landsat | NDWI, U-Net |
| McCormack, T. et al. [28] | 2022 | Unsupervised | Sentinel-1 | Histogram thresholding, multi-temporal and contextual filters |
| Trombini, M. et al. [29] | 2023 | Unsupervised | SAR | Graph-based MRF segmentation |
| Category | Accuracy | Precision | Recall | F1-score | |
|---|---|---|---|---|---|
| 1.(a) Rural | 83.6% | 82.3% | 78.4% | 80.3% | 78.2% |
| 1.(b) Urban/peri-urban | 85.4% | 78.4% | 78.7% | 78.5% | 76.9% |
| 2.(a) No sky | 85.2% | 81.6% | 78.0% | 79.8% | 77.9% |
| 2.(b) With sky | 84.4% | 76.1% | 79.5% | 77.7% | 76.1% |
| All | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
| Method | Accuracy | Precision | Recall | F1-score | Tr. Par. |
|---|---|---|---|---|---|
| FASegNet | 91.5% | 91.4% | 90.3% | 90.9% | 0.64 M |
| UNet | 90.7% | 90.0% | 90.1% | 90.0% | 31.05 M |
| HRNet | 88.6% | 84.8% | 92.0% | 88.3% | 28.60 M |
| Ours | 84.9% | 79.5% | 78.6% | 79.1% | 0 M |
| Dataset | Images | Accuracy | Precision | Recall | F1-score | |
|---|---|---|---|---|---|---|
| FSSD | 663 | 88.5% | 79.8% | 83.7% | 81.7% | 79.4% |
| FAD | 290 | 84.9% | 79.5% | 78.6% | 79.1% | 77.3% |
| FSSD ∪ FAD | 953 | 87.4% | 79.7% | 82.2% | 80.9% | 78.8% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
