Sentinel-1 and Sentinel-2 Data Fusion for Mapping and Monitoring Wetlands

Wetland benefits include, but are not limited to, the ability to store floodwaters and improve water quality, the provision of wildlife habitat and support for biodiversity, and aesthetic value. Over the past few decades, remote sensing and geographical information technologies have proven to be useful and frequently applied tools for monitoring and mapping wetlands. Combining optical and microwave satellite data can provide significant information about the biophysical characteristics of wetlands and wetland vegetation, and fusing data from different sensors, such as radar and optical remote sensing data, can increase wetland classification accuracy. In this paper, we investigate the fusion of two fine spatial resolution satellite datasets, the optical Sentinel-2 and the Synthetic Aperture Radar (SAR) Sentinel-1, for mapping wetlands. The Balikdami wetland, located in the Anatolian part of Turkey, was selected as the study area. Both the Sentinel-1 and Sentinel-2 images require pre-processing before use. After pre-processing, several vegetation indices calculated from the Sentinel-2 bands were included in the dataset, and an object-based classification was performed. For the accuracy assessment of the obtained results, a number of random points were distributed over the study area. In addition, the results were compared with data from an Unmanned Aerial Vehicle (UAV) collected on the same date as the Sentinel-2 overpass, three days before the Sentinel-1 overpass. The accuracy assessment showed significant and satisfying results for wetland classification using both multispectral and microwave data. The statistical results of the fusion of the optical and radar data showed high wetland mapping accuracy, with an overall classification accuracy of approximately 90% in the object-based classification.
Compared with the high-resolution UAV data, the classification gives promising results for mapping and monitoring not only wetlands, but also the sub-classes within the study area. For future research, the use of multi-temporal imagery and additional terrain data collection are recommended.


Introduction
Wetlands provide a number of environmental and socioeconomic benefits, such as the ability to store floodwaters and improve water quality, the provision of wildlife habitat and support for biodiversity, and aesthetic value. The loss of wetlands, estimated at more than 50% since 1900, has gained considerable attention over the past years. A major cause of wetland loss is the conversion to agricultural land driven by economic and population growth [1], together with threats such as global climate change. Mapping wetlands has always been of great importance, since societies depend on natural resources. Wetlands include a range of habitats, from permanently flooded areas to seasonally wet areas, both covered with a portion of vegetation. The wetter a wetland area is, the easier it is to identify, both on the ground and through remote sensing methods.
Remote sensing data, such as aerial photo interpretation, satellite imagery, or other geospatial data, have proven to be useful and frequently applied in monitoring and mapping wetlands. In the past, aerial photographs were traditionally used for mapping wetlands, but in the past two decades, multispectral and SAR (Synthetic Aperture Radar) satellite remote sensing data have been used effectively for mapping and monitoring wetlands. Multispectral data have generally been used for classifying wetlands through indices such as the Normalized Difference Vegetation Index (NDVI) [2], the Land Surface Water Index (LSWI) [3], the Normalized Difference Water Index (NDWI) [4], and the Soil and Atmosphere Resistant Vegetation Index (SARVI) [5]. SAR data, which are considerably different from optical data, are collected by active sensors that operate at longer wavelengths and provide different information. C-band, operating at 3.75 to 7.5 cm wavelength, has been widely used in wetland mapping [6,7]. The use of C-band SAR data has provided overall accuracies of 59% to 86% [7], while optical sensors (Landsat TM) have had difficulties separating upper salt marsh from upland forest [8]. Thus, the combination/fusion of both sensor types can provide sufficient information for accurately separating wetlands from other land covers [9].
Sentinel-2A and Sentinel-2B are part of the European Copernicus program created by the European Space Agency (ESA) [10]. The Sentinel-2 Multispectral Instrument (MSI) is considered the follow-up mission to the Landsat instruments, intended to provide continuity of remote sensing products [11]. In comparison with the latest Landsat OLI/TIRS, Sentinel-2 has better spatial resolution, better spectral resolution in the near-infrared region, and three Vegetation Red Edge bands with 20-m spatial resolution, but it offers neither thermal data nor a panchromatic band. Compared to existing satellite sensors, the Sentinel-2 MSI requires adjustment to allow the extension of existing time series [12]. Sentinel-2 offers satellite images with resolutions from 10 to 60 m [13]. The visible and NIR bands have 10-m spatial resolution, the four Vegetation Red Edge and two SWIR bands have 20-m spatial resolution, while the Coastal aerosol, Water vapour, and Cirrus bands have 60-m spatial resolution. However, from the four fine spatial resolution bands, a panchromatic band can be produced and used in Sentinel-2 image fusion to obtain ten fine spatial resolution bands [14].
Sentinel-1 is an imaging radar mission at C-band (~5.7 cm wavelength) consisting of a constellation of two satellites, Sentinel-1A and Sentinel-1B, also part of the European Copernicus program created by ESA. Their main applications are: monitoring sea ice zones and the Arctic environment; surveillance of the marine environment; monitoring land surface motion risks; mapping of land surfaces (forest, water and soil, agriculture); and mapping in support of humanitarian aid in crisis situations [15,16].
In this study, a fusion of Sentinel-1 and Sentinel-2 satellite images was performed for wetland classification. The Balikdami wetland, located in the Anatolian part of Turkey, was chosen as the study area. The Anatolian part of Turkey has a number of wetland areas, and all of the wetland areas within the common area between the Sentinel-1 and Sentinel-2 scenes were also taken into consideration. For that purpose, one Sentinel-1 and one Sentinel-2 dataset were downloaded from the Copernicus Open Access Hub. Before fusing the images from the different sensors, both the Sentinel-1 and Sentinel-2 images were pre-processed. The pre-processing included atmospheric correction and an increase of the spatial resolution of the Sentinel-2 red-edge and shortwave infrared bands from 20 m to 10 m, as well as radiometric calibration, speckle reduction, and terrain correction of the Sentinel-1 SAR image. Furthermore, object-based classification was applied to the common area of the images.
Several statistical analyses were used in the accuracy assessment of the obtained results. In addition, the classification results were compared with data obtained from an Unmanned Aerial Vehicle and field measurements collected on the same date as the Sentinel-2 overpass.

Study Area
The Sakarya river is the third longest river in Turkey, with a length of 824 km. Balikdami is one of the wetlands formed along the Sakarya riverbed. Located in the Anatolian part of Turkey, Balikdami is a unique wetland containing rich flora and fauna and more than 235 bird species. The study area in this paper contains four other wetland areas that were taken into consideration. The image used for classification covers an area of approximately 2,200 km² (Figure 1). It is known that this area has been losing its value since the 1980s.

Data and Methods
For the classification, both Sentinel-1 and Sentinel-2 data were used. The images were downloaded from the Copernicus Open Access Hub. They were acquired in the summer period, when the vegetation in the wetland areas is dense and green, which makes it difficult to separate from other vegetated areas. The Sentinel-1 image was acquired on 13 August 2017, while the Sentinel-2 image was acquired on 10 August 2017.
Since the structure of the wetland is complex and it contains different wetland classes, high-resolution data from an Unmanned Aerial Vehicle (UAV) were additionally collected over the Balikdami wetland on 10 August 2017. The UAV data were used in the accuracy assessment for comparison with the classification results. A flowchart of the methodology used in this paper is given in Figure 2.

Pre-processing
Sentinel-1 images need pre-processing before their application. After downloading the image, radiometric calibration, terrain correction, and speckle reduction were performed. The product was filtered with a Lee Sigma filter with a 5x5 window size. For the terrain correction, Range Doppler Terrain Correction with a 30-m digital elevation model was used. The pre-processing was performed in ESA's SNAP software using the Sentinel-1 toolbox. The digital number values were converted into backscatter values on the decibel (dB) scale following Equation 1.
σ°(dB) = 10 × log₁₀(σ°)  (1)

where σ° is the calibrated digital number value of the image, and σ°(dB) is the backscatter value in dB.
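A minimal sketch of this dB conversion in Python, assuming a NumPy array of calibrated linear backscatter values (the sample values below are hypothetical, not taken from the study's data):

```python
import numpy as np

def linear_to_db(sigma0_linear):
    """Convert calibrated linear backscatter (sigma-nought) to decibels.

    Non-positive values are masked as NaN to avoid log of invalid numbers.
    """
    sigma0_linear = np.asarray(sigma0_linear, dtype=np.float64)
    out = np.full(sigma0_linear.shape, np.nan)
    valid = sigma0_linear > 0
    out[valid] = 10.0 * np.log10(sigma0_linear[valid])
    return out

# Hypothetical calibrated VV backscatter patch
patch = np.array([[0.05, 0.10],
                  [0.20, 0.01]])
db = linear_to_db(patch)  # e.g. 0.10 -> -10 dB, 0.01 -> -20 dB
```

In SNAP this step corresponds to the "linear to/from dB" conversion applied after calibration.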
The pre-processing of the Sentinel-2 product includes atmospheric correction and increasing the spatial resolution of the 20-m bands to 10 m. In order to increase the spatial resolution of the Vegetation Red Edge and shortwave infrared bands, pan-sharpening techniques must be applied. However, the main pan-sharpening approaches were originally developed for image fusion with a single fine band [17]. Sentinel-2 provides four 10-m bands that are highly correlated with the 20-m bands. In this study, a single panchromatic band was produced by averaging all fine multispectral bands [14,17]. For the pan-sharpening, a hybrid fusion technique, Wavelet Principal Component (WPC), was used. For the quantitative analysis of the pan-sharpened image, Wald's protocol, the most widely used protocol for the validation of pan-sharpening methods, was followed [18]. For the quantitative analysis, four indices were used: the correlation coefficient (CC), which measures the correlation between the fused and the reference image; the Universal Image Quality Index (UIQI), which uses the covariance, variance, and means of the fused and reference images [19]; the Relative Average Spectral Error (RASE) [20]; and the Spectral Angle Mapper (SAM), which is crucial for the case under consideration [21].
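Two of these quality indices can be sketched in Python with NumPy; this is a simplified illustration of CC and SAM, not the exact implementation used in the study:

```python
import numpy as np

def correlation_coefficient(fused, reference):
    """Pearson correlation coefficient between a fused band and its
    reference band (ideal value: 1)."""
    f = fused.ravel().astype(np.float64)
    r = reference.ravel().astype(np.float64)
    return np.corrcoef(f, r)[0, 1]

def spectral_angle_mapper(fused, reference):
    """Mean spectral angle (radians) between per-pixel spectra of two
    (rows, cols, bands) image cubes (ideal value: 0)."""
    f = fused.reshape(-1, fused.shape[-1]).astype(np.float64)
    r = reference.reshape(-1, reference.shape[-1]).astype(np.float64)
    dot = np.sum(f * r, axis=1)
    norms = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1)
    # Clip guards against numerical noise slightly outside [-1, 1]
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

Under Wald's protocol, the fused image is degraded back to the coarse resolution and these indices are computed against the original 20-m bands as the reference.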
UAV platforms are nowadays a valuable source of data for surveillance, inspection, mapping, and 3D modeling. New applications in the short- and close-range domain have been introduced, with UAVs serving as a low-cost alternative to classical manned aerial photogrammetry [22]. Today, UAV images and 3D data are used in many different fields, such as forestry and agriculture [23], archaeology and cultural heritage, environmental surveying, traffic monitoring [24], and 3D reconstruction.
In this study, a UAV was used for collecting data over the Balikdami wetland area. The flight was performed on 10 August 2017. The steps followed for obtaining the UAV data were: (i) flight planning; (ii) taking aerial images; (iii) data processing; and (iv) results. The UAV and the integrated camera, as well as their characteristics, are given below.

Methods
Radar image backscatter values give valuable information about land cover. Both pre-processed Sentinel-1 polarizations, VV and VH, were included in the dataset, as well as combinations of them, such as their average value.
Using the Sentinel-2 bands, several vegetation indices were calculated: NDVI, NDWI, the Sentinel-2 Red-Edge Position Index (S2REP) [25], and the Modified Soil Adjusted Vegetation Index (MSAVI). All of the calculated indices were included in the dataset.

NDVI = (B8 − B4) / (B8 + B4)  (2)

MSAVI = (B8 − B4)(1 + L) / (B8 + B4 + L)  (3)

where the soil adjustment value L = 0.5. The indices were calculated using the pan-sharpened Sentinel-2 bands with a spatial resolution of 10 m. The 60-m Sentinel-2 bands were not included in the dataset. The dataset contains 17 bands that were stacked into a single image [26]. The object-based classification started with multiresolution segmentation, in which pixels are grouped into objects [27]. In this study, importance was given to the VH, NIR, and SWIR bands, since these areas of the electromagnetic spectrum are sensitive to wetness. The scale parameter determines the maximum possible change of heterogeneity and is indirectly related to the size of the created objects. Compactness describes the closeness of the pixels clustered in an object by comparing it to a circle. The parameters used in this study are given in Table 1. Afterwards, samples of nine classes were collected using the Sentinel-2 image as a reference: wetlands, representing low vegetated wetlands; vegetated wetlands; dense vegetated wetlands, representing marsh with high vegetation; agricultural fields 1, representing high vegetated fields; agricultural fields 2, representing low vegetated fields; sedimentary rocks; barren land; and bare land. The collected samples were also identified in high-resolution imagery using Google Earth.
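Equations (2) and (3) are computed per pixel; a minimal NumPy sketch follows, assuming the arrays b8 and b4 hold surface reflectance values (the soil-adjusted formulation mirrors the L = 0.5 adjustment stated above):

```python
import numpy as np

def ndvi(b8, b4):
    """NDVI = (B8 - B4) / (B8 + B4), Equation (2)."""
    b8 = np.asarray(b8, dtype=np.float64)
    b4 = np.asarray(b4, dtype=np.float64)
    # Small floor on the denominator avoids division by zero over dark pixels
    return (b8 - b4) / np.maximum(b8 + b4, 1e-12)

def msavi(b8, b4, L=0.5):
    """Soil-adjusted index with adjustment value L = 0.5, Equation (3)."""
    b8 = np.asarray(b8, dtype=np.float64)
    b4 = np.asarray(b4, dtype=np.float64)
    return (b8 - b4) * (1.0 + L) / np.maximum(b8 + b4 + L, 1e-12)
```

Applied band-wise to the pan-sharpened 10-m B8 (NIR) and B4 (red) rasters, each call yields one index layer for the stacked dataset.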
The classification accuracy assessment was based on 129 random points, which were used for calculating user's and producer's accuracy, overall accuracy, and kappa statistics. Furthermore, the classification results were visually compared with the UAV data.
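Overall accuracy and the kappa coefficient follow directly from the confusion matrix; a minimal sketch is given below (the example matrix is hypothetical, not the study's Table A5):

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix.

    Rows: reference (ground truth) classes; columns: mapped classes.
    """
    cm = np.asarray(confusion, dtype=np.float64)
    n = cm.sum()
    observed = np.trace(cm) / n                              # overall accuracy
    # Chance agreement: product of marginal totals per class
    expected = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 2-class example with 100 reference points
cm = [[40, 10],
      [5, 45]]
oa, k = accuracy_and_kappa(cm)  # oa = 0.85, k = 0.7
```

Producer's accuracy for a class is its diagonal element divided by the row total, and user's accuracy is the diagonal element divided by the column total.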

Sentinel-2 pan-sharpening
The results of the pan-sharpening of the 20-m Sentinel-2 bands are presented in Figure 3 and Table 2. Both the qualitative and quantitative analyses gave satisfactory results for the pan-sharpening performed using the WPC method. It can easily be noticed from Table 2 that all of the calculated quantitative indices were close to their ideal values.

UAV Results
As a result of every flight, a Digital Surface Model (DSM) and an orthophoto were created, as well as a quality report for the data processing. The details of every flight are given in Table 3, and the orthophoto and DSM mosaic of all flights are given in Figure 4.

Classification Results

After the classification, a visual inspection was made, and it was concluded that some agricultural fields were not classified into either of the two assigned agricultural classes. Thus, taking advantage of the geometry of the objects, an additional condition of a Rectangular Fit of 0.6 was set and a new class of agricultural fields was created. The results for the full study area are presented in Figure A7, the results for the Balikdami wetland area are presented in Figure 5, and the statistical results are presented in Table 4. The overall accuracy was estimated to be more than 89%, while the kappa coefficient was 0.88. All of the wetland classes had both producer's and user's accuracy between 85% and 92.3%. The confusion matrix and more detailed information about the accuracy assessment are given in the Appendix, Table A5.

Discussion
In this paper, an object-based classification of a combined Sentinel-1/Sentinel-2 dataset was performed. Both images were pre-processed before their use. The time difference between the overpasses of the two satellites is three days. Additionally, on the day of the Sentinel-2 overpass, both field measurements and UAV data collection were carried out. Since some areas of the wetland were not easily accessible, the UAV data were used for visual comparison of the classification results.
The pan-sharpening of the 20-m Sentinel-2 bands has been investigated in several studies [17,21,28]. Although all of the studies showed similar results, a recent comparison of the methodologies showed that averaging the high-resolution bands to produce the missing panchromatic band gives the best results. As for the pan-sharpening methodology, the WPC method has given both quantitative and qualitative values close to the ideal values [21].
The object-based classification was performed on an area that contains different land covers, such as various agricultural fields, barren lands, bare lands, open water areas, and a number of wetlands. As is known, wetlands are difficult to separate from other, similar land covers. The combination of Sentinel-1 and Sentinel-2 has been used in different studies; it has been used for soil moisture mapping [29], as well as for land cover mapping in the lower Magdalena region, Colombia [26]. Similar to the methodology applied in this paper, those authors applied object-based classification and achieved successful results. However, taking into consideration the differences in land cover, an appropriate comparison cannot be made, since no wetlands were present in the mentioned study area. The highest statistical errors can be noticed in the agricultural and dense vegetation wetland classes, as in some cases reeds are misclassified as similar agricultural areas. The data obtained from the UAV were used for visual evaluation of the Balikdami wetland classification. Even though both the spatial and the spectral characteristics of the sensors are different, the results from the object-based classification overlapped the UAV data well, as seen in Figure 6.

Conclusions
The complex structure of wetlands makes it difficult to classify them using remote sensing data. Both multispectral and radar data have advantages and disadvantages in wetland mapping and monitoring. Combining these different sensors and exploiting their advantages, in this paper we fused Sentinel-1 and Sentinel-2 data and achieved an overall accuracy of more than 89%. Still, some of the wetland areas were mistakenly classified as agricultural areas, which could be freshly watered fields. However, this assumption needs to be confirmed with ground control points.

Figure 2 .
Figure 2. Flowchart of the methodology used

Figure 4 .
Figure 4. Orthophoto of the study area Balikdami

Figure 6 .
Figure 6. Dense vegetation class compared with UAV data.

Table 2 .
Quantitative analyses of the Sentinel-2 Pan-sharpening

Table 3 .
Details of the flight results

Table 4 .
Classification accuracy assessment