Estimating Dredge-Induced Turbidity using Drone Imagery

While maintenance dredging of port access channels is often required to maintain navigability, it can result in increased turbidity, sediment plumes, and associated reductions in water quality. Unoccupied aircraft systems (UAS, or drones) are increasingly applied to study water quality due to their high spatial and temporal resolutions. In this study, we investigated the use of drone imagery to monitor turbidity in the Morehead City Harbor, North Carolina, USA, during channel maintenance by hopper dredge. Drone flights were conducted concurrently with in-situ sampling during active dredging and post-dredging. Multispectral drone images were radiometrically calibrated, converted to reflectance and then turbidity using two separate processing methods and a single-band (red; 620-700 nm) generic turbidity retrieval algorithm, and then compared to in-situ measurements. Using the average reflectance to retrieve a single turbidity measurement per drone image produced results that agreed well with the in-situ measurements (R² = 0.84). This method was then used to generate turbidity maps and extract surface plumes. While this could be considered a limited validation, the results indicate that realistic values can be obtained from drone imagery across low and high turbidity levels (1-72 FNU), making drones a viable option for monitoring surface turbidity associated with dredging.


Introduction
Marine dredging is a worldwide industry that involves removing sediment and depositing it in a new location. The uses are numerous, including navigation, maintenance of beaches, reclamation, and remediation. Most major port access channels require periodic maintenance dredging, the removal of accumulated sediments, to ensure channels are deep enough for large vessels to reach ports. While the removed sediment can be used to renourish beaches in some areas, the act of dredging and relocating the material can result in changes to coastal morphology, destruction of habitat, increased risk to fauna, and reduction of water quality [1].
A common impact of dredging is an increase in turbidity and the creation of sediment plumes. Sediment plumes can extend the impact of dredging over large areas, and although these effects are often short-lived, dredging-related increases in turbidity can exceed natural levels and vary in timing, impacting some organisms' ability to survive [2]. Monitoring the concentration and extent of sediment plumes can help to understand the impact of dredging, comply with water quality regulations, and inform future operations to avoid sensitive areas and times. Turbidity is a relative index of water clarity and can be correlated with total suspended sediment (TSS) concentrations, light availability for photosynthesis, and sedimentation [3]. As such, monitoring turbidity can lead to a greater understanding of the spatial and temporal characteristics of suspended sediment plumes, and therefore dredging impacts.
While conventional in-situ methods provide accurate, fine-scale, and real-time data, they can fail to quantify sediment plume dynamics due to limitations in temporal and spatial sampling [4]. Variables related to water clarity have been associated with reflectance at distinct wavelengths, and reflectance is expected to increase with increasing turbidity levels [5][6][7], making remote sensing a viable option to monitor turbidity. Good correlation between turbidity and reflectance in bands located in the red part of the spectrum is found for waters of low to moderate turbidity. Nechad et al. (2009) [8] developed a generic one-band algorithm to calculate turbidity as a function of reflectance in coastal waters. This algorithm is applicable to any optical sensor with a spectral range of 520-885 nm and showed a best fit in the red band for turbidity values from 0.6-83 FNU. The use of satellite data for routine monitoring of turbidity has proven successful in many areas [9][10][11] but faces limitations in spatial resolution and revisit time. Therefore, satellite imagery is poorly suited to monitoring turbidity from specific events, such as those caused by dredging, particularly in smaller areas. Unoccupied aircraft systems (UAS, or drones) provide a unique opportunity as an intermediate scale for measuring turbidity and sediment plumes. UAS are rapidly deployable, and therefore highly applicable for monitoring specific turbidity events, and can cover a spatial extent necessary for decision-making [12]. Additionally, by flying at lower elevations, drones can significantly increase spatial resolution. Applications for UAS in water quality monitoring are growing accordingly.

Drone Flights and In-Situ Sampling
Drone and in-situ sampling were conducted concurrently on six different days during active dredging of the Morehead City Harbor. Sampling in Range A seaward of Station 110+00 was conducted on July 17 and sampling in Range B was conducted on July 22, July 23, July 24, July 27, and July 28 (Figure 1). Two additional days of flights and in-situ sampling were conducted after the completion of dredging (Appendix A). All but one day of flights were completed with the senseFly eBee Plus, a commercially available fixed-wing drone, and the Parrot Sequoia multispectral sensor. The Sequoia sensor collects imagery in four bands (green, red, red edge, and near-infrared) and has a resolution of 1.2 MP and an image size of 1280x960 pixels. Flights were conducted at an average altitude of 90 meters, resulting in an average image footprint of 109x81 meters and an average resolution of 10 cm/pixel. Before each flight, images of the calibration target were collected for each band. The radiometric calibration target, along with the Sequoia's attached daylight sensor, is used to calibrate and correct the images' reflectance by helping normalize imagery taken in different conditions (sun angle and cloudiness). The spectral specifications of the Parrot Sequoia sensor are outlined in Table 1. The eBee flights were conducted autonomously through the flight planning software eMotion, where flight paths were set as parallel lines with minimal overlap. The lowest overlap between flight lines was used because the raw images were intended to be processed individually instead of photogrammetrically, which requires a higher overlap between images. Flights on July 17, 2020, used the multi-rotor DJI Phantom 4 Advanced, which was engineered in-house to integrate the Parrot Sequoia as the sensor. The DJI Phantom was used because flights were conducted off the in-situ sampling boat, while all the other flights were conducted from the shoreline.
Flights on July 17 had to be conducted from a boat to reach the extent of Range A while maintaining visual line of sight. These flights were completed in Range A Station 110+00 seaward, while all other flights completed during active dredging were in Range B. An example of drone imagery during active dredging is seen in Figure 2 for Range A and Figure 3 for Range B. The two days of flights completed post dredging were in the Outer Harbor section of Range A. The flight area extents are outlined in Figure 1. All drone flight paths were configured around the in-situ sampling with the assumption that most drone images would overlap in-situ samples in both time and location.
In-situ sampling utilized two separate survey designs to sample water quality: 1) a stratified random sampling (SRS) survey design where sampling occurred at stations within three strata (within the channel, within 500m of the channel, and between 500m and 1km of the channel) and 2) an ad hoc, repeated measures design whereby we nonrandomly sampled SRS stations based on the area actively being dredged, along with the immediately adjacent areas up to ~800m from the channel. Sampling was conducted until the hopper dredge filled its hold and departed to the disposal site (ranging from 40 to 110 minutes), then all the sampled stations were sampled a second time while the hopper dredge was inactive. Range A in-situ sampling on July 17 and August 11 utilized the repeated measures design while sampling on August 12 used the SRS design. Range B sampling on July 22 and July 24 used the repeated measures design and sampling on July 23, July 27, and July 28 used the SRS design (Appendix B). Water quality was monitored with a YSI 6600 vr2 sonde outfitted with temperature, conductivity, depth, dissolved oxygen, and turbidity sensors. Sondes were set to collect and record measurements at a two-second frequency. At each station, the sonde was lowered into the water and held at a depth of 1m for 10 seconds to allow for equilibration and to remove any artifacts from lowering the sonde into the water (e.g., bubbles that can interfere with optical sensors such as turbidity). The sonde was then lowered at 1-meter intervals to within 1 meter of the bottom. The sonde was maintained at each meter interval for 3 seconds to allow for water quality measurements. Sondes were checked against calibration standards before and after every survey, and a full sonde calibration was performed approximately every 3 weeks following the National Estuarine Research Reserve's System-Wide Monitoring Program protocols (NERRS, 2021). 
Sampling, both drone flights and in-situ, was conducted at different times in the tidal cycle, and the corresponding tidal currents are found in Appendix C.

Drone Image Processing
Drone image processing was completed with the Micasense Red Edge Python library, with added functionality to enable Sequoia image processing, and in-house scripts to convert to turbidity and georeference. To derive turbidity from the drone images, the raw imagery must first be radiometrically corrected. This process converts the raw information expressed in Digital Numbers (DN) to water leaving reflectance. Water leaving reflectance is an optical property: because the light has travelled through the water column, it carries information on bio-physical parameters like turbidity [21]. Water leaving reflectance is defined as:

\rho_w = \pi L_w / E_d    (1)

where \rho_w is the water leaving reflectance, E_d is the downwelling irradiance, and L_w is the water leaving radiance. The downwelling irradiance is obtained from the sunshine sensor on the Sequoia, which captures changing light conditions. The Sequoia sensor captures total radiance (L_t) instead of water leaving radiance (L_w). To get water leaving radiance, the surface reflected radiance must be subtracted from the total radiance:

L_w = L_t - \rho L_{sky}    (2)

where \rho L_{sky} (the surface reflected light) is a function of the air-water interface reflection coefficient \rho and the sky radiance L_{sky}. The extended equation for water leaving reflectance is then:

\rho_w = \pi (L_t - \rho L_{sky}) / E_d    (3)

where E_d is the downwelling irradiance, L_t is the total upwelling radiance (from the air-sea interface), and L_{sky} is the sky radiance.
\rho is the air-water interface reflection coefficient for radiance, which is dependent on sea state, sky conditions, and viewing geometry, and varies strongly with wind speed during clear sky conditions. This has been modelled by [22] to show that, for clear days:

\rho = 0.0256 + 0.00039 W + 0.000034 W^2    (4)

where W is the wind speed (m/s), and \rho = 0.0256 for cloudy days. The 0.0256 value is related to the ratio of sky radiance and downwelling irradiance and was calculated in simulations by [23].
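The wind dependence of the reflection coefficient can be wrapped in a small helper. A minimal sketch, assuming the commonly used fit \rho = 0.0256 + 0.00039W + 0.000034W² for clear skies and the constant 0.0256 for overcast skies (the function name is ours):

```python
def surface_reflection_coefficient(wind_speed_ms, clear_sky=True):
    """Air-water interface reflection coefficient rho.

    Clear skies: rho grows with wind speed W (m/s).
    Overcast skies: the constant 0.0256 is used instead.
    """
    if not clear_sky:
        return 0.0256
    w = wind_speed_ms
    return 0.0256 + 0.00039 * w + 0.000034 * w ** 2
```

For the roughly 4-5 m/s winds reported during these flights, this fit keeps \rho close to 0.028.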
To calculate water leaving reflectance, the raw image is first converted to irradiance. This step includes a vignetting correction, where a vignette map is multiplied by the raw DN values to reverse the darkening at the image corners. The raw image is then converted to irradiance through the following equation:

I = f^2 (p - B) / (A \varepsilon \gamma + C)    (5)

where I is the Sequoia irradiance, f is the f-number, p is the pixel value, \gamma is the gain, \varepsilon is the exposure time, and A, B, and C are calibration coefficients measured in production. All these variables are found within the image metadata. The sunshine irradiance is then calculated, which is used to normalize the images in a dataset according to variations in the incoming solar radiation. Sunshine irradiance (E_s) is calculated with the CH0 count (v), relative gain factor (g), and the exposure time (\tau):

E_s = v / (g \tau)    (6)

These parameters are found within the image metadata. The calibration coefficient K must then be determined to relate the ratio of Sequoia sensor irradiance to sunshine sensor irradiance. This is done by using a calibration panel with known reflectance (\rho_{panel}) in the band of interest and determining I and E_s of the calibration panel with Equations 5 and 6, respectively. K is then calculated:

K = \rho_{panel} E_s / I    (7)

Once the scale factor K is determined, it can be multiplied by the ratio of the Sequoia irradiance to the sunshine irradiance of the flight images to determine reflectance R:

R = K I / E_s    (8)

This, however, does not consider the sky radiance or the air-water interface indicated in Equation 3. Equation 8 alone is suitable for terrestrial mapping, but further steps are required for marine remote sensing. Before using the irradiance to reflectance scale factor K to determine reflectance of flight images, the sky radiance and air-water interface reflection coefficient for radiance must be determined. The total upwelling radiance is assumed to be the Sequoia irradiance calculated in Equation 5.
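The chain from raw pixel values to terrestrial reflectance (Equations 5-8) can be sketched as a few one-line helpers. This is an illustrative sketch: the function names are ours, and all inputs (f-number, exposure, gain, the A, B, C coefficients, and the sunshine sensor's CH0 count and relative gain) are assumed to have already been read from the image metadata.

```python
def sequoia_irradiance(p, f_number, exposure_s, gain, A, B, C):
    # Equation 5: vignette-corrected pixel value to sensor irradiance.
    return f_number ** 2 * (p - B) / (A * exposure_s * gain + C)

def sunshine_irradiance(ch0_count, rel_gain, exposure_s):
    # Equation 6: sunshine-sensor CH0 count to downwelling irradiance.
    return ch0_count / (rel_gain * exposure_s)

def reflectance_scale_factor(panel_reflectance, I_panel, Es_panel):
    # Equation 7: K ties the Sequoia/sunshine irradiance ratio to a
    # known panel reflectance.
    return panel_reflectance * Es_panel / I_panel

def reflectance(I, Es, K):
    # Equation 8: terrestrial reflectance (no sky or air-water correction).
    return K * I / Es
```

By construction, applying Equation 8 back to the calibration panel's own irradiances recovers the panel's known reflectance, which is a useful sanity check on an implementation.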
The sky radiance term is obtained by flipping the sensor over and taking an image of the sky pointing away from the sun, and then using Equation 5 again to determine the corresponding sky irradiance (I_{sky}). \rho is calculated with Equation 4 and multiplied by this sky irradiance; the result is then subtracted from the Sequoia irradiance to get the water leaving signal. For this specific sensor, multiplying by \pi is not necessary: L_w as defined by Equations 1 and 3 is the water leaving radiance, and multiplying this value by \pi results in a value with the same units as water leaving irradiance. The Sequoia sensor collects data in irradiance units already, so this conversion is not necessary. The water leaving irradiance is then divided by the sunshine sensor irradiance and multiplied by the irradiance to reflectance scale factor to get water leaving reflectance:

\rho_w = K (I - \rho I_{sky}) / E_s    (9)

For each flight, the irradiance to reflectance scale factor was calculated for the red band. The red band was selected for this project as a strong correlation is found between reflectance in the red part of the spectrum and low to moderate turbidity values [24]. The majority of Morehead City Harbor is >90% sand, which is heavier and falls out of suspension more quickly, resulting in less turbid waters. Because of this, turbidity levels were expected to be moderate, which would be most accurately reflected in the red part of the spectrum. All raw images in the red band were converted to water leaving reflectance using the above method (Equation 9), and the top 10% of brightest reflectance pixels were then removed from each image (Figure 4). This threshold was used as it effectively removes pixels that are outliers due to sun glint, waves, and man-made objects, such as boats.
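The marine correction (Equation 9) and the bright-pixel filtering can be sketched as follows. This is an illustrative implementation assuming the per-image irradiances have already been computed; NumPy's quantile is used to drop the brightest 10% of pixels.

```python
import numpy as np

def water_leaving_reflectance(I_image, I_sky, Es, K, rho):
    # Equation 9: subtract the surface-reflected sky signal, normalise
    # by the sunshine irradiance, and scale to reflectance with K.
    return K * (I_image - rho * I_sky) / Es

def remove_bright_outliers(refl, fraction=0.10):
    # Mask the brightest `fraction` of pixels (sun glint, waves, boats)
    # by replacing them with NaN so later statistics can skip them.
    cutoff = np.quantile(refl, 1.0 - fraction)
    return np.where(refl < cutoff, refl, np.nan)
```

Masking with NaN (rather than deleting pixels) keeps the image shape intact for the later per-image and zonal statistics.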

Pixel-by-Pixel Reflectance-Based Turbidity Retrieval
The reflectance images with bright outliers removed were then converted into turbidity images following the Nechad et al. (2009) [8] semi-analytical turbidity algorithm:

T = A^T \rho_w / (1 - \rho_w / C)    (10)

where \rho_w is the water leaving reflectance and A^T and C are two wavelength-dependent calibration coefficients.
This was a pixel-by-pixel conversion, where the above equation was run with the water leaving reflectance (\rho_w) value in each pixel. This resulted in a "full turbidity image" where each pixel contained a turbidity measurement (Figure 5). Both calibration coefficients for the red band of the Sequoia sensor (at 660 nm) were obtained from Nechad et al. (2009) [8].
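The pixel-by-pixel conversion can be sketched with the one-band relation applied element-wise to a reflectance array. The coefficient values in the usage line are placeholders only, not the tabulated red-band values, which should be taken from the Nechad et al. (2009) [8] tables.

```python
import numpy as np

def nechad_turbidity(rho_w, A_T, C):
    """One-band semi-analytical turbidity retrieval,
    T = A_T * rho_w / (1 - rho_w / C), applied element-wise."""
    rho_w = np.asarray(rho_w, dtype=float)
    return A_T * rho_w / (1.0 - rho_w / C)

# Placeholder coefficients for illustration only.
full_turbidity_image = nechad_turbidity(np.array([[0.01, 0.05]]), A_T=250.0, C=0.17)
```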

Average Reflectance-Based Turbidity Retrieval
An average turbidity point feature was also generated for each flight image. The average reflectance for each image was calculated by determining the mean of all pixels in the reflectance image with bright outliers removed. Turbidity for each image was determined from Equation 10 using the single average reflectance value as input. These "average" turbidity values were also georeferenced to the center latitude and longitude of their respective images and exported as shapefiles containing mean spectra, mean turbidity, time the image was taken, and latitude and longitude.
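The image mean retrieval can be sketched in a few lines: average the filtered reflectance first, then convert the single mean value to turbidity and attach the image-centre attributes. The field names and coefficient/coordinate values are illustrative.

```python
import numpy as np

def image_mean_turbidity(refl_image, A_T, C, lat, lon, timestamp):
    """Average the (outlier-filtered) reflectance image, then convert
    the single mean value to turbidity and tag the image centre."""
    mean_rho = float(np.nanmean(refl_image))           # NaNs = removed outliers
    turbidity = A_T * mean_rho / (1.0 - mean_rho / C)  # one-band conversion
    return {"turbidity": turbidity, "mean_reflectance": mean_rho,
            "lat": lat, "lon": lon, "time": timestamp}
```

Averaging reflectance before the turbidity conversion is what distinguishes this method from taking the mean of a pixel-by-pixel turbidity image.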

Correlation to In-Situ Data
The turbidity derived from drone data is expressed in Formazin Nephelometric Units (FNU), defined by the International Organization for Standardization ISO 7027 [25] as the 90° side-scattering of light at 860 nm with respect to Formazin, a chemical standard. These units are slightly different from the turbidity extracted from in-situ sonde measurements, which is reported in Nephelometric Turbidity Units (NTU), but the two can be intercompared. For instance, the standard solution used for calibrating in-situ turbidity sensors should measure 126 FNU and 124 NTU, a small difference within the range of turbidity values observed in this project. Surface turbidity within the top 2 meters of the water column from the sonde measurements was used for comparison with drone derived turbidity.
The pixel-by-pixel reflectance-based turbidity images and the average reflectance-based turbidity point features, hereafter referred to as "zonal mean turbidity" and "image mean turbidity" respectively, were exported to ArcGIS Pro (version 2.3.0). The in-situ data was imported as point features, containing attributes for surface turbidity, time, and latitude and longitude.

Image Mean Turbidity Derived from Average Reflectance
The image mean turbidity features were spatially joined to the in-situ features where the two occurred within 100 meters of one another, with an average distance of 28 ± 17 meters. This distance was chosen as it is closest to the average width of the images used to generate the drone features, so an in-situ point in this range would likely fall within the image footprint. After the spatial join, all image mean turbidity features with a time difference of over 30 minutes from the joined in-situ point were removed, leaving an average time difference of 14 ± 9 minutes. This was done separately for each flight.

Zonal Mean Turbidity Derived from Pixel-by-Pixel Reflectance
To compare different methods for extracting turbidity from drone imagery, the pixel-by-pixel reflectance-based full turbidity images were also compared to the in-situ data. For this method, the images within the 100-meter distance (average 28 ± 17 meters) and 30-minute time difference (average 14 ± 9 minutes) of in-situ points were taken from the above spatial join. Zonal statistics were run on each image, using a 10 m buffer radius around each in-situ point, to generate a mean turbidity value from the image pixels within the buffer circle. The 10 m buffer was chosen to account for any drifting of the in-situ sampling boat. These zonal mean turbidity values were joined to the above image mean turbidity feature tables.
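The zonal statistics step can be approximated outside ArcGIS with a simple radial mask. A sketch assuming a north-up image with square pixels of known size, where the in-situ point has already been converted to a row/column position in the image:

```python
import numpy as np

def zonal_mean_turbidity(turb_image, px_size_m, center_rc, radius_m=10.0):
    """Mean of the turbidity pixels whose centres fall within `radius_m`
    of the buffered in-situ location (row/col), ignoring NaN pixels."""
    rows, cols = np.indices(turb_image.shape)
    r0, c0 = center_rc
    dist = np.hypot(rows - r0, cols - c0) * px_size_m
    return float(np.nanmean(turb_image[dist <= radius_m]))
```

At the 10 cm/pixel resolution reported for these flights, a 10 m buffer corresponds to a circle roughly 100 pixels in radius.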
Some in-situ points did not match up in time or distance to be compared to the drone derived data. Additionally, as some in-situ points were close enough in distance or time to more than one drone image, there are multiple drone derived values for both methods compared to the same in-situ value. For these multiple values, the two that were closest to the in-situ point by both time and space were kept.
To estimate the correlation between drone derived turbidity from both image mean and zonal mean to in-situ data, we used data from five flights during dredging on July 17 (Range A), nine from July 22 (Range B), one from July 23 (Range B), two from July 27 (Range B), and one from July 28 (Range B). Additionally, we used data from three flights concurrent with in-situ sampling from August 11 (Range A) and three flights from August 12 (Range A) after dredging was completed. Six drone flights were not used for correlation, as the flights did not overlap with in-situ sampling by both time and space.

Turbidity Maps
After it was determined that the drone derived data could be accurately correlated to the in-situ data (see 3.1 below), turbidity maps were generated for each flight and day. For each flight, the image mean turbidity features were used. It was determined that this method for turbidity extraction was more accurate than zonal mean turbidity, as there is less noise when using the average reflectance compared to calculating turbidity on a pixel-by-pixel basis. The number of points from each flight used to generate the turbidity maps is the same as the number of images collected during each flight, which is outlined in Appendix A.
Spline with barriers was run for each flight using the image mean turbidity features. This ArcGIS tool uses a minimum curvature method that moves from an initial coarse grid through a series of finer grids using a one-directional multigrid technique. At each grid level, a convergent linear iterative deformation operator is applied repeatedly at each node. The deformation is calculated on the basis of a molecular summation [26]. This results in an approximation of a minimum curvature surface that honors both the input point data and discontinuities encoded in the barriers, where each cell is the result of the weighted summation of 12 neighboring cells. The barriers used were simply the flight extent. Extract by mask was then run on the resulting rasters to clip the turbidity maps to the flight extents. The resulting turbidity maps were exported as map layouts with the location of in-situ points and the dredge path from that day. The flights that were during active dredging also included the location of the active dredge.
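The ArcGIS Spline with barriers tool is proprietary; purely to illustrate the gridding step, the sketch below uses inverse distance weighting, a different (and much simpler) interpolator that likewise honors the input points exactly. It is a stand-in, not the method used in this study.

```python
import numpy as np

def idw_surface(xs, ys, vals, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted surface on a regular grid; exact at
    sample points, smoothly varying elsewhere."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(gx, dtype=float)
    den = np.zeros_like(gx, dtype=float)
    exact = np.full(gx.shape, np.nan)  # values at coincident grid nodes
    for x, y, v in zip(xs, ys, vals):
        d = np.hypot(gx - x, gy - y)
        exact[d == 0] = v
        w = 1.0 / np.maximum(d, 1e-12) ** power
        num += w * v
        den += w
    return np.where(np.isnan(exact), num / den, exact)
```

Unlike Spline with barriers, plain IDW has no notion of barriers, so the clipping to the flight extent would have to be applied afterwards, as the extract-by-mask step does here.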
To compare the drone turbidity maps with the in-situ maps, four separate layouts were created. Using the in-situ dataset containing 10 points from July 17 during active dredging (T1), 10 points were selected from the corresponding drone dataset that most closely aligned with each in-situ point by both time and space. This process was repeated for the second in-situ dataset from July 17 (T2), which also contained 10 points in the same locations as T1 but collected 25 minutes later as the dredge was transiting to the disposal site. The single in-situ dataset from July 27 was separated into two datasets for active dredging and 30 minutes post dredging, also referred to as T1 and T2, respectively. Again, a single drone point that most closely aligned with an in-situ point by both time and space in both datasets was selected. For all four layouts, Spline with barriers was run separately on the in-situ points and selected drone points.
Additionally, to highlight the ability of a drone to detect and quantify the extent of surface turbidity, the six flights from July 17 in Range A and two from July 28 in Range B were combined to generate full flight turbidity maps.

Plume Extraction
As a proof of concept, turbidity surface plumes were extracted from the flights in Range A on July 17 and Range B on July 27. A baseline turbidity map of the Range A and B area was used, which was based on in-situ surface turbidity collected before and after hopper dredging occurred. No drone flights were conducted in Range B or this area of Range A on a non-dredging day, so a background map of drone derived turbidity in this area could not be generated. The baseline in-situ maps were subtracted from the turbidity map for drone flights on July 17 and July 27 to create a new map that outlined where turbidity was elevated from background levels during days of active dredging. For visual comparison, an in-situ map was generated from surface turbidity measurements on July 17 and July 27 and the baseline in-situ maps were subtracted to outline where turbidity levels were elevated based solely on sonde measurements.
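The baseline subtraction can be sketched as a raster difference plus an area tally, assuming the drone turbidity map and the baseline map have been resampled to a common grid of known cell size.

```python
import numpy as np

def extract_plume(turbidity_map, baseline_map, cell_area_m2, threshold=0.0):
    """Difference a turbidity map from its background baseline; cells with
    a positive difference (above `threshold`) count as elevated turbidity.
    Returns the difference map and the elevated area in square metres."""
    diff = turbidity_map - baseline_map
    elevated = diff > threshold
    return diff, float(elevated.sum()) * cell_area_m2
```

A threshold above zero could be used to ignore differences smaller than the sonde's measurement noise, though the text applies a plain subtraction.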

Turbidity Validation with In-Situ Measurements
The correlation between drone derived image mean turbidity and zonal mean turbidity to in-situ turbidity resulted in a total of 212 paired drone derived and in-situ turbidity values. Separate correlations were run for the image mean and zonal mean methods. Figure 6 provides an example of an image illustrating the location of an in-situ sampling point and the correlated image mean and zonal mean turbidity used to assess correlation between drone derived and in-situ estimates of turbidity. The exact derived measurements are found in Table 2. The correlation between image mean turbidity measurements and the in-situ data on linear and logarithmic scales is shown in Figure 7. The linear regression coefficients for the drone derived image mean turbidity yield a slope of 0.76, R² of 0.84, root mean square error (RMSE) of 3.39, and weighted mean absolute percentage error (wMAPE) of 36.73%. The log plot indicates scattering across the sampling spectrum, particularly for lower turbidity values. This is the expected outcome, as low turbidity waters have low water leaving reflectance values, making the signal detected by the sensor more subject to influences of the atmosphere [24]. The correlation between zonal mean turbidity measurements and the in-situ data on linear and logarithmic scales is shown in Figure 8; this method produced a much weaker fit (R² = 0.16). Drone image mean turbidity therefore proved to be a better metric than zonal mean turbidity (Figure 7). Hereafter, "drone turbidity maps" refers to those generated from image mean turbidity values unless otherwise specified. For the drone flights on July 17 in Range A, the turbidity maps were compared to turbidity maps generated from the in-situ data. The in-situ data was collected in the same location during two separate times, one during active dredging (T1) and one while the dredge was away from the channel disposing of sediment (T2).
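The regression statistics reported above (slope, R², RMSE, and weighted MAPE) can be reproduced with a short helper, assuming wMAPE is defined as the sum of absolute errors divided by the sum of in-situ values, one common formulation.

```python
import numpy as np

def regression_metrics(drone, insitu):
    """Slope, R^2, RMSE and weighted MAPE (%) between drone-derived and
    in-situ turbidity values."""
    drone = np.asarray(drone, dtype=float)
    insitu = np.asarray(insitu, dtype=float)
    slope, intercept = np.polyfit(insitu, drone, 1)      # least-squares line
    pred = slope * insitu + intercept
    ss_res = np.sum((drone - pred) ** 2)
    ss_tot = np.sum((drone - drone.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((drone - insitu) ** 2)))
    wmape = 100.0 * np.sum(np.abs(drone - insitu)) / np.sum(np.abs(insitu))
    return float(slope), float(r2), rmse, float(wmape)
```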
The drone points proximal to the in-situ points in both time (<30 min) and distance (<100 m) were used to generate the drone turbidity maps. The compared turbidity maps are shown in Figure 9 for active dredging and Figure 10 for 25 minutes after dredging had halted in this area. The mean turbidity value of the differences for the active dredging dataset was -0.77 (SD=8.48) and 0.34 (SD=0.88) for post-dredging. The drone data from July 27 in Range B was also compared to the in-situ data collected during active dredging (T1) and while the dredge was away from the channel disposing of sediment (T2). We split the in-situ dataset into active dredging and 30 minutes post dredging, so the points used to generate maps in T1 and T2 are not in the same location. The drone points used for these turbidity maps were proximal to the in-situ data by both time and distance. There is one in-situ point that did not have a corresponding drone point in both datasets. This is seen in the area further away from the shoreline, where the drone could not reach due to Visual Line of Sight rules. The compared turbidity maps are shown in Figure 11 for active dredging and Figure 12 for post dredging. The mean value of the differences for the active dredging dataset (T1) was -1.47 (SD=1.59) and -0.83 (SD=1.45) for post dredging.

Drone Flight Maps
Generating image mean drone turbidity maps resulted in a total of 22 maps, with 8 overlapping with active dredging, 6 from days where flights and active dredging did not overlap in space or time, and 8 from days after dredging was completed. The total area mapped was 18.25 km². An example of the turbidity map generated from July 17 during active dredging is seen in Figure 13 and another example from July 28 is seen in Figure 14. As outlined in Appendix A, six flights were conducted in Range A on July 17, each approximately 15 minutes long. This was a combined flight time of 90 minutes, conducted in a 2.5-hour timeframe. From these six flights, the drone captured an area of 1.5 km², taking photos 5 seconds apart. In comparison, the in-situ sampling covered an area of 0.24 km² in 75 minutes with approximately five minutes in between samples. The drone captured an area 6.25x greater than the in-situ sampling area in half the time. The two 20-minute flights on July 28 resulted in a combined flight time of 40 minutes conducted in a 60-minute timeframe. These combined flights captured an area of 1.8 km². The in-situ sampling was conducted in a 3.5-hour timeframe with approximately 10 minutes in between samples, covering a total area of 1.5 km². The drone captured an area 1.2x greater than the in-situ sampling area in a third of the time.

Turbidity Observations
The highest observed drone image mean turbidity was seen on July 17 (Range A) during active dredging (>72 FNU), while most turbidity measurements in Range B on all flight days were below 10 FNU. Figures 10 and 12 both show turbidity maps derived from data collected around 30 minutes after dredging had halted, with turbidity less than 5 FNU in Range A and less than 10 FNU in Range B. The 25-minute period after dredging had stopped and before the second round of drone flights was conducted on July 17 in Range A occurred during ebb tide with wind speeds around 4 m/s. Turbidity appeared to increase slightly (within 5 turbidity units) 30 minutes post dredging on July 27 in Range B in one area, but turbidity levels remained steady in other areas. The active dredging turbidity map was generated from data collected during slack tide and the post dredging data was from flood tide, with wind speeds staying around 5 m/s. However, for both Range A and Range B, it is unclear how much impact surface dynamics had on turbidity levels.

Plume Extraction
Subtracting the background in-situ turbidity values from before and after hopper dredging from the drone derived turbidity values during a dredging day indicated that this could be a viable method to determine where turbidity may be elevated. The results of this are shown in Figure 15 (Range A) and Figure 16 (Range B), compared to the in-situ data from the same day and time. For Range A, turbidity was elevated in an area of 0.23 km², which was 96% of the area mapped. Range B showed turbidity elevated in an area of 4.12 km², or 89% of the total area mapped. Furthermore, the maps from both ranges visually appear similar between drone and in-situ sampling, with slight differences in areas where drone imagery was not collected.

Discussion
This study effectively highlights how a standard multispectral drone sensor provides useful data for water quality applications associated with localized turbidity events during marine dredging. While monitoring turbidity plumes using traditional methods (i.e., on-the-water sampling) has proven effective in numerous situations, the high-resolution aerial view from a drone can increase the ability to detect patchily distributed plumes, quantify their extent, and track how a plume may move or dissipate over time [17]. The drone-based automated workflow presented in this study was successful in measuring turbidity ranging from 1-72 FNU, and the larger-scale view from the drone highlighted how quickly surface turbidity plumes dissipated. The steep decrease in turbidity from the active dredging map shown in Figure 9 suggests that while dredging did significantly increase turbidity, these surface plumes dissipated quickly. It is unclear from the drone imagery whether surface turbidity decreased due to settlement of sediment, lateral dispersion, or diffusion, and it is possible that the high turbidity plume in the top of Figure 9 dispersed out of the map frame rather than dissipating. It is also worth noting that while the surface level plumes may dissipate relatively quickly, that does not mean the entire dredge-induced sediment plume also dissipates quickly, because sub-surface plumes persist longer. For example, sub-surface plumes in the Outer Entrance Channel persisted for the full duration of dredging, up to 60 days [19]. Yet, this study highlighted the ability of a drone to take more turbidity samples over a larger area in a shorter timeframe than in-situ sampling, making it more likely to detect dispersion.
Additionally, using drones to monitor dredging operations provides many benefits over other remote sensing methods such as satellite imagery, including higher temporal and spatial resolutions that allow better opportunities to monitor short-term plumes in specific areas. While satellite imagery cannot be collected on demand, drones are rapidly deployable, presenting a unique opportunity to capture specific turbidity events that may form and dissipate over the course of a few hours. In-situ point measurements are often collected at a fine scale (1 m²), while satellite imagery is often limited to a 10 m resolution or greater [12]. Drone imagery is much more representative of the in-situ scale; for example, the Parrot Sequoia multispectral sensor used in this study, at an average altitude of 90 meters, collected imagery at an average resolution of 10 cm/pixel. As drone technology continues to advance, the imagery is proving to be an important intermediate scale between satellite and in-situ data, particularly for water quality monitoring [27][28][29].
While drones show certain advantages, their application is not without challenges, particularly when mapping over water. The optical properties of water, including sun glint and sky glint, can skew results and make it difficult to extract meaningful information on the bio-physical properties of the water [30]. Waves and white caps can also affect the signal captured by the sensor. Furthermore, water is a strongly absorbing medium with low reflectance, so the drone sensor used must be able to capture low signals, and noise can become a prominent issue (low signal-to-noise ratio). To estimate reflectance from drone imagery without complex modeling, several assumptions must be made. Because drones fly at low altitude, atmospheric radiance is assumed to be negligible. It is also assumed that the sensor is tilted slightly to avoid sun glint, so this component is neglected in the radiative transfer formula [31]. The sky radiance component in this project was calculated from images taken on a different day; while the radiance values are assumed to be similar because lighting conditions were similar, they most likely differ to some degree. Without in-situ measurements of reflectance, it is difficult to test the validity of these assumptions. The turbidity formula was not calibrated or fine-tuned to the in-situ measurements: the two wavelength-dependent calibration coefficients were taken directly from the tabulated values in Nechad et al. (2009) [8] and have not been fine-tuned to the Sequoia sensor. This fine-tuning could be done with more in-situ measurements in future work, although it is unclear how many paired observations would be required.
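For reference, the single-band retrieval has the semi-empirical form T = A·ρw / (1 − ρw/C), where ρw is water-leaving reflectance and A and C are the two wavelength-dependent calibration coefficients discussed above. The sketch below uses illustrative red-band placeholder values for A and C; the actual coefficients should be taken from the tables in Nechad et al. (2009) for the band in use.

```python
import numpy as np

# Illustrative placeholder coefficients for a red band -- replace with the
# tabulated values from Nechad et al. (2009) for the actual sensor band.
A_RED = 228.1   # FNU
C_RED = 0.164   # dimensionless reflectance saturation coefficient

def turbidity_from_reflectance(rho_w, a=A_RED, c=C_RED):
    """Single-band semi-empirical retrieval: T = a * rho_w / (1 - rho_w / c)."""
    rho_w = np.asarray(rho_w, dtype=float)
    return a * rho_w / (1.0 - rho_w / c)
```

The relation is nearly linear at low reflectance and rises steeply as ρw approaches C, which is one reason errors made in the reflectance conversion propagate strongly into the derived turbidity at the high end of the range.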
Using the average reflectance to generate a single turbidity value per drone image (i.e., image mean turbidity) proved more accurate than applying the retrieval to the entire reflectance image and calculating turbidity pixel-by-pixel (i.e., zonal mean turbidity). Discrepancies in the method used to georeference the drone images most likely contributed to the low R² of the zonal mean turbidity retrieval method (R² = 0.16). Georeferencing discrepancies are a more prominent issue for the zonal mean method because the buffered area is much smaller (10 m²) than the image mean method area (109 × 81 m). The in-house script did not account for the roll, pitch, or yaw of the drone when georeferencing, so the drone image locations may be slightly off-center. Additionally, the in-situ sampling boat is susceptible to drifting, so these samples are not collected in a stationary state. The ability to compare drone data to in-situ points also depends on the accuracy of the drone and in-situ GPS receivers, and errors in location will propagate to derived measurements. These factors could contribute to the low accuracy of the zonal mean method because the images do not perfectly align with the in-situ points. The zonal mean method is also much more susceptible to noise, as an extraneous value in any one pixel contributes directly to the final mean turbidity value. If the above factors can be accounted for and tested, the pixel-by-pixel reflectance-based turbidity retrieval (i.e., zonal mean) method may prove more accurate. For simplicity, the average reflectance-based turbidity retrieval (i.e., image mean) method is recommended for future work. Good correlation was found between this method and the in-situ measurements (R² = 0.84), with a slope close to 1 (0.76) and a weighted MAPE of 36.73%. These results indicate that this method can be used to derive turbidity values from drone imagery when using the single-band approach in Nechad et al. (2009) [8], although more paired in-situ and drone measurements in the Range A Station 110+00 seaward channel, where observed turbidity values were highest, would provide more values in the moderate-to-high turbidity range for comparison and strengthen this conclusion.
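The distinction between the two aggregation orders can be sketched as follows. Because the retrieval is nonlinear, averaging reflectance first and then retrieving turbidity (image mean) is not equivalent to retrieving per pixel and then averaging within a zone (zonal mean). The `retrieval` callable and the arrays below are hypothetical stand-ins, not the study's actual data or code.

```python
import numpy as np

def image_mean_turbidity(reflectance, retrieval):
    """Image mean method: average reflectance over the whole scene,
    then retrieve a single turbidity value from that average."""
    return retrieval(np.nanmean(reflectance))

def zonal_mean_turbidity(reflectance, zone_mask, retrieval):
    """Zonal mean method: retrieve turbidity pixel-by-pixel, then
    average within the buffered zone around the in-situ point."""
    return float(np.nanmean(retrieval(reflectance[zone_mask])))
```

Because the retrieval grows faster than linearly with reflectance, a few anomalously bright pixels (e.g., glint or foam) inflate the zonal mean more than the image mean, which is consistent with the noise sensitivity noted above.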
Although visual comparison of the drone-derived turbidity maps and the in-situ surface turbidity maps may suggest the products differ, the differences are not practically significant. As outlined in the post-dredging maps and the active dredging map in Range B, the differences are all less than 5 turbidity units, which is negligible given the temporal and spatial offsets between the sampling being compared. Although the turbidity differences were much larger when comparing the active dredging maps in Range A, the measured turbidity in this area was significantly higher, so a difference of 10 turbidity units may also be considered negligible. These differences may stem from the fact that the drone points do not perfectly align with the in-situ measurements in either time or space, or from the different measuring techniques. Turbidity was measured in-situ with a hand-held device reporting Nephelometric Turbidity Units (NTU), while the semi-empirical relation used in the Nechad et al. (2009) [8] algorithm to derive turbidity from drone images is based on in-situ measurements with an instrument that reports Formazin Nephelometric Units (FNU). Furthermore, using the single-band algorithm means that errors made in the reflectance conversion will propagate into the end products. It is also important to note that the in-situ values used are the average of turbidity in the upper 2 meters of the water column, while the drone-derived turbidity is directly related to how far the sensor can penetrate the water column. While drone imagery only captures information in the top layer of the water column, it still provides data on the formation and dispersal of the sediment plume at the surface and can be combined with in-situ measurements at different depths for comprehensive sediment modelling [18].
Given the above difficulties in comparing drone-derived turbidity to in-situ turbidity, plume extraction may be a more representative use of the drone data. By generating a background turbidity map from drone data collected on days when no dredging occurred, the drone-derived turbidity from active dredging days can be compared directly against it to see where turbidity is elevated. Additionally, turbidity in the Range B and Range A to Station 110+00 channel is not expected to increase significantly during dredging, as these areas contain sediment with >90% sand. Considering this, it may be more accurate to measure where turbidity is elevated above background levels as a means of plume detection, as opposed to measuring turbidity directly. Future iterations of this study should focus on plume tracking: a baseline turbidity map in both Range A and Range B should be generated from drone data collected before dredging, and drone flights during dredging should then focus on capturing the extent of surface plumes over time.
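As a minimal sketch of this background-differencing idea, a surface plume mask can be derived by thresholding how far a dredging-day turbidity map rises above the baseline map. The arrays, the assumption of co-registered grids, and the 5 FNU threshold are all illustrative choices, not values from the study.

```python
import numpy as np

def extract_plume_mask(turbidity_active, turbidity_background, threshold_fnu=5.0):
    """Flag pixels where dredging-day turbidity exceeds the pre-dredging
    baseline by more than a threshold. Both maps are assumed to be
    co-registered on the same grid; the 5 FNU default is illustrative."""
    elevation = np.asarray(turbidity_active) - np.asarray(turbidity_background)
    return elevation > threshold_fnu
```

Thresholding the difference rather than the raw turbidity makes the detection insensitive to any systematic bias shared by both maps, such as a consistent offset introduced in the reflectance conversion.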
The results of this study provide important considerations for risk-based management of dredging operations in Morehead City Harbor and are applicable to other localized turbidity events. Drones can collect data over a much larger area in a shorter time than in-situ sampling. As turbidity plumes can dissipate rapidly during marine dredging, it is important to collect data in a short timeframe to detect these plumes accurately and study how they move and change over time. For hopper dredging, the turbidity sources are the drag head on the bottom and overflow from the ship. The turbidity signal from the drag head most likely does not rise to the surface in this water column, so it is assumed that the drone is measuring elevated turbidity from overflow only. While surface turbidity measured by the drone does not show the extent of a sediment plume throughout the water column, it can be measured easily and rapidly for quick management response. Measuring turbidity through the outlined drone method should not replace in-situ sampling, but the results of this study show that drone-derived turbidity can be a valuable additional dataset for water quality monitoring.