Preprint
Article

This version is not peer-reviewed.

Quantification of Opercular Pigmentation Changes in Farmed Atlantic Salmon: A Novel Application for Computer Vision in Fish Welfare Assessment

Submitted: 09 April 2026
Posted: 15 April 2026


Abstract
Intensive salmon farming is associated with high mortality rates, highlighting the need for new welfare indicators that can detect adverse conditions earlier and less invasively than many current approaches. Existing animal-based indicators used in the industry typically depend on subjective scoring and provide information mostly after welfare problems have already developed, such as emaciation, wounds, or scale loss. Preliminary data and ongoing investigation suggest that melanin-based skin pigmentation may change dynamically with stress and condition in salmonid fishes. In this study, we present a semi-automated methodology for assessing changes in the grayscale intensity of melanin-based skin spots within the operculum region of adult Atlantic salmon (Salmo salar) kept in seawater. The pipeline combines computer vision models to detect the operculum, segment individual spots, and extract grayscale-based features for spot-level analysis over time. The method was applied to out-of-water images collected before and after exposure to a confinement episode. The results showed an overall shift in grayscale intensity away from black, consistent with pigmentation fading, after the challenge, although responses varied among individuals. These findings indicate that the proposed methodology can detect temporal changes in opercular melanin-based spots under applied experimental conditions. We therefore present this work as proof of principle for using computer vision to quantify changes in melanin-based skin spots as a potentially useful, non-invasive indicator of stress and welfare in Atlantic salmon.

1. Introduction

The aquaculture industry is facing persistently high mortality rates in farmed Atlantic salmon, highlighting the need for improved welfare assessment tools and management practices. In Norway, approximately 70 million salmon were lost during the sea phase of production in 2023, of which 62.8 million were recorded as dead, corresponding to a sea-phase mortality of 16.7%, the highest level reported in recent years (Sommerset et al., 2023; Tvete et al., 2023). These losses are associated with multiple health and welfare challenges in aquaculture production, including delousing-related injuries, complex gill disease, winter ulcers, and stress associated with management procedures such as delousing and handling (Overton et al., 2019; Rey et al., 2019; Stien et al., 2020; Keihani et al., 2024). This context has stimulated considerable scholarly interest in the development of advanced welfare indicators applicable across diverse production systems (Adams et al., 2007; Stien et al., 2020; Volpato et al., 2007; Keihani et al., 2024).
Many welfare indicators currently used in salmon farming are invasive, labor-intensive, or become informative only after a welfare problem has already developed. External indicators such as emaciation, wounds, and scale loss are useful, but they are retrospective in nature and may not provide an early warning signal. Physiological indicators, including plasma cortisol, mucus cortisol, and fecal cortisol metabolites, may offer additional insight, but their collection and interpretation remain challenging in practical aquaculture settings (Keihani et al., 2024). Even when sample collection is considered minimally invasive, handling itself may influence the stress response, and the temporal dynamics of cortisol-related measures can complicate interpretation (Cao et al., 2017). These limitations motivate the search for alternative indicators that are less invasive and more easily integrated into imaging-based monitoring approaches.
Melanin-based skin pigmentation may represent one such alternative. In Atlantic salmon and other salmonids, dark spots are formed by the aggregation of chromatophores such as melanophores that store and produce eumelanin and are responsible for their black appearance (Bagnara et al., 2006). Eumelanin has been associated with a wide range of ecological and physiological functions, including photoprotection, camouflage, and communication in vertebrates (Leclercq et al., 2010; Cesarini et al., 1996; Riley et al., 1997; Mackintosh et al., 2001; Roulin et al., 2004; Hoekstra et al., 2006). Previous studies in salmonids have further suggested that melanin-based spot patterns may be associated with individual physiological and behavioral differences. For example, in strains of Atlantic salmon (Salmo salar) and rainbow trout (Oncorhynchus mykiss), individuals with a higher number of spots have been reported to show reduced cortisol responsiveness, faster recovery of feeding after transfer to novel environments, and lower ectoparasitic sea lice burdens (Kittilsen et al., 2009, 2012; Khan et al., 2016).
More broadly, body coloration in fish can reflect both long-term and short-term physiological states and changes associated with behavior and environment. Coloration has been linked to social status, reproductive signaling, and agonistic interactions, and can also change in response to the background environment for camouflage (Hoglund et al., 2000; Maan et al., 2004; Yasir et al., 2009). It is known that centrifugal dispersion of melanosomes, regulated by melanocyte-stimulating hormone (MSH), makes the fish appear darker; conversely, centripetal aggregation, regulated by melanin-concentrating hormone (MCH), makes the appearance paler or lighter (Logan et al., 2006; Mills et al., 2009). Higher MCH and cortisol levels have been reported in salmon infected with pathogens. Additionally, body coloration changes have been observed, specifically, fish showing greater visual distinctness against black backgrounds and lower visual distinctness against white backgrounds (Yi et al., 2021). However, the physiological pathways involved in color changes of skin-based spots under stress (as indicated by high cortisol levels) are not yet well understood.
At the same time, advances in computer vision (Voulodimos et al., 2018) provide new opportunities for quantitative, image-based assessment of external phenotypes in fish. Existing applications in aquaculture have focused primarily on disease detection, lice monitoring, wound identification, fish counting, and individual re-identification based on body patterns (Ahmed et al., 2022; Zhang et al., 2024; Gupta et al., 2022; Banno et al., 2022; Cisar et al., 2021). These studies demonstrate the utility of automated and semi-automated image analysis for detecting visual features in fish, but relatively little work has explored whether similar approaches can be used to quantify temporal changes in melanin-based pigmentation features relevant to welfare research.
In the present study, we investigate whether changes in the grayscale appearance of melanin-based skin spots on the operculum of Atlantic salmon can be quantified from images collected before and after exposure to a confinement episode. To do this, we developed a semi-automated computer vision pipeline that combines operculum detection, spot segmentation, and grayscale-based feature extraction. The aim of the study is not to establish a validated stress biomarker, but rather to examine whether this imaging-based methodology can detect measurable temporal changes in opercular spots under the applied experimental conditions. We therefore present this work as a proof-of-principle study that may support future research on non-invasive, image-based welfare assessment in Atlantic salmon.
The main contributions of our work are as follows:
  • We apply a computer vision-based methodology to quantify melanin-based skin spots on the operculum of Atlantic salmon (Salmo salar).
  • We propose a semi-automated methodological pipeline for operculum detection, spot segmentation, and grayscale-based quantification of temporal changes in spot appearance.
  • We describe an imaging and annotation workflow that may support future studies of opercular pigmentation dynamics in salmon.
The rest of the paper is organized into four main sections: Section 2 reviews related work, Section 3 describes data collection and the methodology, Section 4 presents the results, and Section 5 summarizes the main findings of the conducted study.

3. Materials and Methods

In this section, we describe the experimental setup for data collection, the annotation process for identifying operculum regions and spots, and the image augmentation techniques used to expand these annotations. We then describe how the models are fine-tuned on these annotations for operculum and spot segmentation, followed by the inference process using the fine-tuned models to assess visual changes in the spots. The methodology pipeline can be seen in Figure 1.

3.1. Experimental Setup and Dataset

A total of 130 Atlantic salmon from a commercial breeding programme (Aquagen AS, Trondheim, Norway), with a mean age of approximately 1.5 years and body mass ranging between 2 and 10 kg, with equal sex representation (65 males and 65 females), were used in this experiment. The fish were kept in a circular tank (7 m diameter) with dark green coloration and a continuous supply of UV-light-illuminated seawater pumped from a depth of 90 m, maintained at a constant temperature of 8.9 °C, at the Matre Research Station, Institute of Marine Research (IMR), Matredal, Norway.
On the first day, unstressed control fish (n = 8) were captured individually by netting and administered a lethal dose of anesthetic (MS-222; 1 g/L). They were photographed on both sides of the body, and close-up images of the head were also captured under ambient (natural sunlight) lighting using an Olympus TG-6 camera with automatic settings. The remaining fish (n = 122) were sedated and photographed in the same manner as the control fish. Following photography, these fish were transferred to a new tank and subjected to confinement stress overnight (approximately 18 h) by lowering the water level. During confinement, the water depth was approximately 10–15 cm at the edge of the tank. In most fish, the dorsal fins were above the water surface, and the largest individuals were unable to remain upright when attempting to swim near the tank wall.
After overnight confinement stress, fish were captured individually using a net, and groups of five to six fish at a time were placed in a 1 m³ container with seawater, after which they were administered a lethal dose of anesthetic and photographed. A total of 1,040 images were captured, comprising eight images per salmon, and were further categorized into head and body images. In this study, only the 520 head images were utilized, as these clearly captured the operculum region at close range and allowed spots to be distinctly visible. Only operculum spots were selected due to their prominence and suitability for the semi-automated methodology.

3.2. Image Annotations and Augmentations

The training of the pretrained models for detection and segmentation of operculum regions and spots was supported by annotating the operculum (Figure 3 (right)) and spot regions (Figure 3 (left)) in 275 close-up head images using Roboflow’s annotation toolkit (Dwyer et al., 2025). Spots were annotated using an instance segmentation strategy to handle variations in shape, size, and texture, and to facilitate both detection and semantic segmentation of spots (Hafiz et al., 2020). The operculum region was annotated using the standard polygon tool; similarly, spots on the operculum were annotated with both the standard and smart polygon tools (the latter based on the Segment Anything Model (SAM) (Kirillov et al., 2023)). A total of 9000 spots (32 spots per fish) and 275 operculum regions were annotated. The annotated operculum regions were split into 70% training (193 images), 20% validation (55 images), and 10% testing (27 images) sets. Augmentations based on geometric and pixel intensity transformations (Xu et al., 2023) available in Roboflow were applied to the annotated spot images. The motivation behind applying augmentation was to build a dataset with increased variation in the shape, size, and texture of spots. The augmentation strategies used (Dwyer et al., 2024) are explained as follows:
Figure 2. (Top-row) Close-up images of the left side of a salmon specimen taken before (left) and after (right) confinement stress. (Bottom-row) Corresponding images of the right side.
Figure 3. Spots (left) and operculum (right) region in the close-up image annotated with smart and standard polygon tools in Roboflow.

3.2.1. Geometric Transformations

  • Reflection: Clockwise, counterclockwise, and upside-down reflections were generated with their respective transformation matrices using the following equation:

$$\mathrm{Image}'_{x,y} = \mathrm{center} + \left( \mathrm{Image}_{i,j} - \mathrm{center} \right) M_{cw|ccw|updw}$$

where $\mathrm{center} = \left[\frac{\mathrm{width}}{2}, \frac{\mathrm{height}}{2}\right]$, the pixel indices run over $0 \le i < \mathrm{width}$, $0 \le j < \mathrm{height}$, and

$$M_{cw} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad M_{ccw} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \quad M_{updw} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

are the clockwise, counterclockwise, and upside-down reflection transformation matrices, respectively; the image spatial resolution is width = 1080, height = 1080.
  • Rotation: Images were rotated using angles (θ) randomly sampled from the range $[-15°, 15°]$ using the following transformation equation:

$$\mathrm{Image}'_{x,y} = \mathrm{center} + \left( \mathrm{Image}_{i,j} - \mathrm{center} \right) M_{rot}$$

where $M_{rot} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ is the rotation transformation matrix.
  • Shear: Shearing was applied in both horizontal and vertical directions using angles (θ) randomly sampled from the range $[-10°, 10°]$.
  • Crops: Crops were generated with a randomly sampled zoom factor percentage ($zf_{perc}$) within the range [0, 10%] using the following transformation:

$$\mathrm{idx} = \mathrm{INT}\left(\frac{w \cdot h \cdot zf_{perc}}{w}\right)$$

$$\mathrm{Crop} = I[\mathrm{idx} : w + \mathrm{idx},\; \mathrm{idx} : h + \mathrm{idx}]$$

where idx is the zoom factor index along which pixels in the horizontal (w) and vertical (h) directions of the image (I) are sampled for the crop.
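As a self-contained illustration, the geometric transformations above can be sketched in NumPy; the matrix signs and the inverse-mapping sampling scheme are our reconstruction (in practice Roboflow applies these transformations internally):

```python
import numpy as np

# Reflection matrices (assumed standard forms; signs are our reconstruction)
M_CW   = np.array([[0, 1], [-1, 0]])   # clockwise 90-degree turn
M_CCW  = np.array([[0, -1], [1, 0]])   # counterclockwise 90-degree turn
M_UPDW = np.array([[1, 0], [0, -1]])   # upside-down (vertical flip)

def transform_image(image, M):
    """Apply a 2x2 transform about the image center via inverse mapping:
    each output pixel (x, y) samples the input at center + M^-1 ((x, y) - center)."""
    h, w = image.shape[:2]
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1) - center
    src = np.rint(coords @ np.linalg.inv(M).T + center).astype(int)
    sx = np.clip(src[:, 0], 0, w - 1)
    sy = np.clip(src[:, 1], 0, h - 1)
    return image[sy, sx].reshape(image.shape)

def rotation_matrix(theta_deg):
    """Rotation matrix for an angle sampled from [-15, 15] degrees."""
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def random_crop(image, zf_perc):
    """Zoom-style crop with a zoom factor percentage in [0, 0.10]."""
    h, w = image.shape[:2]
    idx = int(h * zf_perc)
    return image[idx:h - idx, idx:w - idx]
```

For example, `transform_image(img, M_UPDW)` flips an image vertically, and `transform_image(img, rotation_matrix(theta))` rotates it about its center.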

3.2.2. Pixel Intensity Transformations

  • Exposure: Image exposure in both directions was randomly adjusted by sampling a threshold from the range [−2%, 2%] using the following transformation steps:

$$I_{LAB} \leftarrow \mathrm{cvtColor}_{LAB}(I_{RGB})$$
$$I_L;\; I_A;\; I_B \leftarrow \mathrm{split}(I_{LAB})$$
$$\mathrm{exposure} = \mathrm{threshold} \times 255$$
$$I_{L\_adjusted} = I_L \pm \mathrm{exposure}$$
$$I_{LAB} \leftarrow \mathrm{merge}(I_{L\_adjusted};\; I_A;\; I_B)$$
$$I_{RGB} \leftarrow \mathrm{cvtColor}(I_{LAB})$$

where cvtColor, split, and merge are the color-space conversion, channel-wise splitting, and channel-wise merging OpenCV (Bradski et al., 2000) functions; $I_{RGB}$ is the image in RGB color space and $I_{LAB}$ is the variant of the same image in LAB color space (Busin et al., 2008).
  • Brightness: Brightness adjustments were analogous to the exposure adjustments, except that instead of the LAB color space, the RGB images were converted to HSV color space (Busin et al., 2008) and the value channel of the converted images was adjusted.
  • Blur and Noise: Blur was introduced using a Gaussian filter (Bradski et al., 2000; Nelson et al., 2020), while salt-and-pepper noise (Bradski et al., 2000; Azzeh et al., 2018) was used for noise.
The augmentations (Figure 4) were applied randomly to the training images. Geometric transformations such as reflection, rotation, and shearing render the model invariant to camera-orientation-related variations, while cropping improves the model's resilience to variations in the size and positioning of the spots and operculum regions. Pixel intensity transformations such as exposure, brightness, and blur were used to improve model robustness to varying lighting conditions and resilience to camera focus. Lastly, noise was added to help the model against adversarial attacks (Goodfellow et al., 2014). The augmented spots training set consisted of 1065 images (representing approximately 35,000 spots), while the validation and test sets were left un-augmented. Overall, these augmentations were applied to improve model robustness to variation in spot characteristics (size, pixel intensities, structure, texture, etc.).
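As an illustration of the exposure transformation described above, a minimal NumPy sketch of the L-channel offset is given below; the color-space conversion itself (e.g., OpenCV's cv2.cvtColor) is assumed to have been applied beforehand, and the clipping step is our addition to keep values in the valid uint8 range:

```python
import numpy as np

def adjust_exposure(image_lab, threshold):
    """Exposure adjustment on an LAB image: offset the luminance (L)
    channel by threshold * 255 and leave the A and B channels untouched.

    `image_lab` is an (H, W, 3) uint8 array already converted to LAB;
    `threshold` is sampled from [-0.02, 0.02].
    """
    L, A, B = image_lab[..., 0], image_lab[..., 1], image_lab[..., 2]
    exposure = threshold * 255.0
    L_adj = np.clip(L.astype(float) + exposure, 0, 255).astype(np.uint8)
    return np.stack([L_adj, A, B], axis=-1)
```

Brightness adjustment follows the same pattern, operating on the value channel of an HSV-converted image instead of the L channel.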

3.3. Detection and Segmentation of the Operculum Region and Spots

YOLOv8 (Sohan et al., 2024) and Segment Anything 2 (SAM 2) (Ravi et al., 2024) object detection and segmentation models were used for segmentation of operculum regions and of the spots within those regions, respectively. YOLOv8 is an object detector comprising three primary components: i) a backbone, ii) a neck, and iii) a head. i) The backbone includes convolutional blocks, C2f blocks, and a spatial pyramid pooling block. A convolutional block consists of a convolutional layer (Wu et al., 2017), a batch normalization layer (Ramachandran et al., 2017), and a SiLU activation layer (Yoon et al., 2025). A C2f block is composed of a convolutional block, channel-wise feature map splitting, bottleneck blocks, feature map concatenation, and finally a convolutional block. The bottleneck block is made up of convolutional blocks with residual connections (He et al., 2016). The spatial pyramid pooling block (He et al., 2015) consists of a convolutional block, max-pooling layers (Wu et al., 2017), residual connections, and finally feature map concatenation followed by a convolutional block. ii) The neck is composed of convolutional blocks, C2f blocks, concatenation layers, and transposed convolutional layers (Zeiler et al., 2010) for upsampling the spatial resolution of the feature maps. iii) The head is made up of convolutional blocks and convolutional layers. C2f blocks enable residual learning, which helps mitigate the vanishing gradient problem (He et al., 2016), and are also responsible for learning a hierarchical representation of the data that captures both textural and semantic information. The spatial pyramid pooling block is responsible for learning multi-scale features, which renders the model invariant to objects of different sizes and scales. The neck is responsible for further refining the multi-scale features for better detections.
Finally, the head component is responsible for predicting bounding box coordinates, segmentation masks, and class labels for the detections (Ren et al., 2024). The state-of-the-art segmentation model Segment Anything 2 (SAM 2) is composed of the following architectural components: i) an image encoder, ii) memory attention, iii) a prompt encoder, iv) a mask decoder, v) a memory encoder, and vi) a memory bank. i) The image encoder is based on the Hiera image encoder (Ryali et al., 2023), mainly composed of four hierarchical vision transformer (ViT) (Dosovitskiy et al., 2020) stages. Feature maps from two of those stages are fused using a feature pyramid network (Lin et al., 2017) to produce embeddings for an input image. ii) The memory attention block is made up of a stack of transformers (Turner et al., 2023) responsible for self-attention (Shaw et al., 2018) and follow-up cross-attention (Gheini et al., 2021) with memory embeddings. iii) The prompt encoder follows the encoder of (Kirillov et al., 2023), enabling prompts through clicks, bounding boxes, and masks. iv) The mask decoder architecture also largely follows the decoder of (Kirillov et al., 2023), with its bi-directional transformer blocks applying prompt self-attention and cross-attention between prompt and image embeddings and vice versa. v) The memory encoder reuses the image embedding generated by the image encoder and fuses it with a downsampled version of the previously generated mask to generate a memory. vi) The memory bank is a first-in-first-out (FIFO) queue of previously generated memories (including images (frames) and prompts) for previously predicted objects. The image encoder generates image embeddings for a given image, which are conditioned on previous frames and their respective predictions by the memory attention block. The mask decoder takes prompt information along with information provided by the memory attention block to generate a prediction for a given image.

3.3.1. YOLOv8 and SAM2.1 Training

A YOLOv8 model pretrained on COCO (Lin et al., 2014) was retrained for operculum region segmentation on the non-augmented operculum image dataset using the Ultralytics (Jocher et al., 2020) framework. During retraining, a spatial resolution of 1080×1080 was adopted with a batch size of 4. Training was set to 100 epochs with 25 early-stopping rounds to mitigate overfitting (Li et al., 2021), along with a dropout rate of 20%. The Adam optimizer (Kingma et al., 2014) with a learning rate of 0.002 and momentum of 0.99 was used for fine-tuning the model. The default data augmentations of the Albumentations library (Buslaev et al., 2020), such as Gaussian blur, CLAHE, grayscale conversion, and an 8×8 tile-size adjustment, were applied during fine-tuning. The comprehensive list of other data augmentations applied during training can be seen in Table 1. The model was evaluated on a set-aside validation set with metrics such as recall, precision, and mean average precision (mAP) (Padilla et al., 2021). The training and validation losses of the model can be seen in Figure 5, with the segmentation loss exhibiting underfitting, which could be a consequence of the 20% dropout rate. SAM 2.1 (pretrained Hiera-B+ (Ryali et al., 2023)) was fine-tuned on the augmented spots dataset using scripts provided by Roboflow (Gallagher, 2020) for 40 epochs. The precision, recall, and mean average precision metrics of YOLOv8 are reported in Table 2. The binary cross-entropy, intersection-over-union, and mask losses of the SAM model, averaged over 40 epochs, are reported in Table 3. Figure 6 shows the different training losses of the SAM model.
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$

$$\mathrm{mAP} = \frac{1}{C} \sum_{i=1}^{C} \sum_{t=1}^{T} \left( \mathrm{Recall}_t - \mathrm{Recall}_{t-1} \right) \mathrm{Precision}_t$$

where TP, FP, and FN are the true positives, false positives, and false negatives; C is the total number of classes (in our case 1, the region of interest (operculum)); and T indexes the thresholds at which recall and precision are computed (typically set to 0.25).
$$\mathrm{Binary\ Cross\text{-}entropy\ (BCE)\ Loss} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]$$

$$\mathrm{Intersection\text{-}over\text{-}Union\ (IoU)\ Loss} = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{\left| Bbox_{gt}^{\,i} \cap Bbox_{pred}^{\,i} \right|}{\left| Bbox_{gt}^{\,i} \cup Bbox_{pred}^{\,i} \right|} \right)$$

where $y$ and $p$ are the ground-truth mask class labels and the predicted probabilities of observations belonging to a class, N is the total number of observations, and $Bbox_{pred}$, $Bbox_{gt}$ are the predicted and ground-truth bounding boxes.
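The two losses can be written as a small NumPy sketch; the axis-aligned (x1, y1, x2, y2) box format and the epsilon clamp in the BCE term are our assumptions:

```python
import numpy as np

def bce_loss(y, p, eps=1e-7):
    """Binary cross-entropy averaged over N observations; probabilities
    are clamped away from 0 and 1 to keep the logarithms finite."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def iou_loss(boxes_gt, boxes_pred):
    """IoU loss averaged over N axis-aligned boxes given as (x1, y1, x2, y2)."""
    losses = []
    for (gx1, gy1, gx2, gy2), (px1, py1, px2, py2) in zip(boxes_gt, boxes_pred):
        ix1, iy1 = max(gx1, px1), max(gy1, py1)   # intersection corners
        ix2, iy2 = min(gx2, px2), min(gy2, py2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((gx2 - gx1) * (gy2 - gy1)
                 + (px2 - px1) * (py2 - py1) - inter)
        losses.append(1.0 - inter / union)
    return float(np.mean(losses))
```

For a perfectly predicted box the IoU loss is 0; a prediction with half-overlapping area yields an IoU of 1/3 and hence a loss of 2/3.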

3.4. Inference

At inference, an unseen dataset of 84 images was used, collected from 21 stressed (treated) salmon specimens, each photographed four times (left side: pre- and post-stress; right side: pre- and post-stress). Operculum regions were extracted from these images through binary masks obtained from the trained YOLOv8 polygon coordinates (Figure 7).

3.4.1. Operculum Regions Registration

The extracted operculum regions were sorted into pre- and post-stress groups and registered to ensure some degree of 1-to-1 correspondence between them for the later analysis of pre- and post-stress spot visual changes. The image registration pipeline is based on the OpenCV library (Bradski et al., 2000): i) pre- and post-stress regions for each side (left and right) are converted to grayscale; ii) the scale-invariant feature transform (SIFT) (Lowe et al., 2004) detector is used to detect feature keypoints and compute their descriptors in the pre- and post-stress grayscale regions; iii) a fast library for approximate nearest neighbors (FLANN) (Muja et al., 2009) based matcher is used to match the feature descriptors; iv) Lowe's ratio test (Lowe et al., 2004) is applied to select the best matches for corresponding keypoint extraction; v) the extracted keypoints are used with RANSAC (Fischler et al., 1981) to compute the homography matrix (Dubrofsky et al., 2009); vi) the matrix is applied to the pre-stress region to align it with the post-stress region. The different stages of the registration pipeline can be seen in Figure 8.
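Steps ii)–v) map onto standard OpenCV calls (cv2.SIFT_create, cv2.FlannBasedMatcher, and cv2.findHomography with cv2.RANSAC). As a dependency-light illustration of the homography estimation in step v), the sketch below solves for the matrix with a direct linear transform (DLT), assuming exact, outlier-free correspondences; the RANSAC-based OpenCV routine additionally rejects outlier matches:

```python
import numpy as np

def homography_dlt(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src -> dst from >= 4 point
    correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

Applying the estimated matrix to the pre-stress region (step vi) then corresponds to warping every pixel coordinate in this way, which OpenCV performs with cv2.warpPerspective.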

3.4.2. Operculum Regions Normalization

The unseen dataset was photographed under ambient lighting conditions similar to the model training and evaluation datasets. The registered regions were normalized with the white patch retinex algorithm (Ramanath et al., 2014): i) the eye and ID-tag regions of the salmon specimens were manually annotated with the smart polygon tool available in Roboflow (Dwyer et al., 2025); ii) black pixels in the segmented eye regions were sorted in ascending order, and the darkest 20% of pixels were sampled, averaged, and used as reference black pixels; iii) white pixels in the segmented ID-tag regions were sorted in descending order, and the brightest 20% of pixels were sampled, averaged, and used as reference white pixels; iv) each RGB operculum region image was converted to LAB color space, the luminance (L) channel was extracted, bias correction was applied using the averaged black reference pixels, and contrast was adjusted with the averaged white reference pixels; v) the pixels in the L channel were clipped between 0 and 255; vi) finally, the normalized L channel was merged back with the remaining channels (i.e., A and B) and converted back to RGB color space. The normalization algorithm output can be seen in Figure 9.
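Steps ii)–v) can be sketched as follows; the linear bias/contrast rescaling formula is our assumption, since the text specifies only that bias is corrected with the black reference and contrast is adjusted with the white reference:

```python
import numpy as np

def normalize_L_channel(L, eye_pixels, tag_pixels, frac=0.20):
    """Normalize the LAB luminance channel using a reference black level
    (mean of the darkest `frac` of eye pixels) and a reference white level
    (mean of the brightest `frac` of ID-tag pixels), then clip to [0, 255]."""
    eye_sorted = np.sort(np.asarray(eye_pixels, dtype=float).ravel())
    tag_sorted = np.sort(np.asarray(tag_pixels, dtype=float).ravel())[::-1]
    k_eye = max(1, int(len(eye_sorted) * frac))
    k_tag = max(1, int(len(tag_sorted) * frac))
    black_ref = eye_sorted[:k_eye].mean()
    white_ref = tag_sorted[:k_tag].mean()
    # Assumed linear rescaling: black_ref -> 0, white_ref -> 255
    L_norm = (np.asarray(L, dtype=float) - black_ref) / (white_ref - black_ref) * 255.0
    return np.clip(L_norm, 0, 255).astype(np.uint8)
```

The returned channel would then be merged back with the A and B channels (step vi) before conversion to RGB.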

3.4.3. Operculum Region Spots Segmentation

Registered and normalized operculum region images were used to obtain spot localization information (bounding boxes, masks) from the fine-tuned SAM model. Spots (Figure 10) exhibiting specular highlights, complete occlusion due to mucus, or partial occlusion due to water droplets were removed from both pre- and post-stress operculum regions in Roboflow. Moreover, misaligned spot contours and missed spots were manually corrected and annotated, respectively. The 676 uniquely identified spots fit for grayscale-intensity-based spot-wise analysis were matched between the pre- and post-stress images using the following steps (Figure 11): i) the center point of each bounding box localizing a spot was computed; ii) Euclidean distances between pre-stress and post-stress spots were computed (in a 1-to-many association); iii) the minimum distances between pairs of pre- and post-stress spots were computed and their corresponding labels (segmentation coordinates) were sorted accordingly; iv) spots with matching inaccuracies were handled manually.
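Steps i)–iii) amount to a nearest-neighbor assignment between bounding-box centers; a minimal sketch, assuming the centers have already been computed:

```python
import numpy as np

def match_spots(pre_centers, post_centers):
    """Match each pre-stress spot to its nearest post-stress spot by the
    Euclidean distance between bounding-box centers. Returns, for each
    pre-stress spot index, the index of the closest post-stress spot.
    Ambiguous matches were resolved manually in the pipeline."""
    pre = np.asarray(pre_centers, dtype=float)
    post = np.asarray(post_centers, dtype=float)
    # 1-to-many distance matrix of shape (n_pre, n_post)
    d = np.linalg.norm(pre[:, None, :] - post[None, :, :], axis=-1)
    return d.argmin(axis=1)
```

For example, with pre-stress centers at (0, 0) and (10, 10) and post-stress centers at (9, 9) and (1, 1), the matching is [1, 0].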

3.4.4. Feature Extraction

Matched and corrected spot segmentation polygon coordinates were used to construct binary spot masks, which were then used in feature extraction through the following steps: i) operculum images were converted to grayscale; ii) the segmentation coordinates corresponding to spots in the operculum were used to construct binary spot masks; iii) the grayscale pixels localized by the binary masks were sorted into 10 bins of size 0.1; iv) individual spot pixel intensities in their respective bins, as well as intensity means computed from all bins, were associated with the respective salmon specimens.
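The steps above can be sketched for a single spot as follows; normalizing intensities to [0, 1] before binning is our assumption, consistent with the stated bin size of 0.1:

```python
import numpy as np

def spot_features(gray, mask, n_bins=10):
    """Extract grayscale features for one spot: intensities inside the
    binary mask are normalized to [0, 1], sorted into 10 bins of width 0.1,
    and summarized by their mean.

    `gray` is a uint8 grayscale image; `mask` is a boolean array of the
    same shape localizing the spot."""
    vals = gray[mask].astype(float) / 255.0
    counts, _ = np.histogram(vals, bins=n_bins, range=(0.0, 1.0))
    return counts, float(vals.mean())
```

Per-spot bin counts and means would then be aggregated per salmon specimen, side, and day for the downstream analysis.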

3.4.5. Statistical Analysis (Grayscale Intensity)

Grayscale pixel intensity was analyzed as a repeated-measures outcome in individual fish, with measurements taken on two days and on both the left and right sides. Since grayscale intensity is a continuous variable bounded between 0 and 1, and because repeated observations within fish are not independent, the data were modeled using a beta mixed-effects model fit by maximum likelihood. The model included Day, Side, and their interaction as fixed effects, with fish identity as a random intercept to account for within-fish correlation. The mean grayscale intensity increased from Day 1 to Day 2 on both the left (0.178 to 0.266) and right (0.189 to 0.267) sides. Consistent with this pattern, there was a significant effect of Day (β = 0.514, SE = 0.101, z = 5.15, p = 2.66E−07 (p < 0.001), 95% CI 0.320–0.714), indicating higher grayscale intensity at Day 2, whereas neither Side (β = 0.0585, p = 0.582) nor the Day × Side interaction (β = 0.0653, p = 0.644) was statistically significant. Model-based estimated marginal means similarly showed higher grayscale intensity at Day 2 for both sides. Across paired observations, the mean absolute increase in grayscale intensity was 0.083 (95% CI 0.061–0.105), corresponding to a mean relative increase of 50.5% (95% CI 36.5–64.4). Together, these results indicate a clear temporal increase in grayscale intensity, with no evidence that the magnitude of change differed between the left and right sides. Although the likelihood-based mixed-model framework is appropriate for bounded repeated-measures data, the modest sample size and the variability in individual-level change should be considered when interpreting the strength and generalizability of these findings. The individual trajectories and the boxplots of the grayscale intensities grouped by side and day can be seen in Figure 12 and Figure 13, respectively.

4. Results

4.1. Effect of Treatment on the Spot Pixel Intensities

The grayscale pixel intensities of the spots on Day 1 and Day 2 were averaged separately, and the change in mean pre-stress and post-stress spot pixel intensity in the operculum region was calculated using the following equation:
$$\mathrm{Change} = \frac{\mathrm{Mean\;Pixel\;Intensity}(\mathrm{Day\;2}) - \mathrm{Mean\;Pixel\;Intensity}(\mathrm{Day\;1})}{\mathrm{Mean\;Pixel\;Intensity}(\mathrm{Day\;1})}$$
As shown in Table 4, the response to treatment was heterogeneous across individuals. Several fish showed moderate increases in spot pixel intensity, whereas others showed very large increases, and a few showed negative changes, indicating reduced post-stress intensity relative to pre-stress values. The mean percentage change was positive on both sides, suggesting an overall treatment-associated increase in spot pixel intensity, but the wide spread of values points to considerable inter-individual variation.
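For illustration, the per-fish change score follows directly from the equation above and can be computed with a trivial helper; an increase from a mean intensity of 0.2 to 0.3, for example, corresponds to a change of +0.5 (i.e., +50%):

```python
def relative_change(mean_day1, mean_day2):
    """Relative (fractional) change in mean spot pixel intensity from
    pre-stress (Day 1) to post-stress (Day 2); negative values indicate
    a post-stress decrease."""
    return (mean_day2 - mean_day1) / mean_day1
```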

4.2. Grayscale Intensities of Treated and Control Groups

The spots sampled from 8 control fish and 21 treated fish were analyzed for outlier removal. The lower and upper bounds of the extracted features were computed using the following steps: i) the interquartile range (IQR) was computed using Q1 (25th percentile) and Q3 (75th percentile) of the data; ii) the lower (Q1 − factor × IQR) and upper (Q3 + factor × IQR) bounds were computed, with a factor of 1.5; iii) feature values falling below the lower bound or above the upper bound were identified and removed; iv) this process was repeated for 10 iterations to remove the remaining outliers from the grayscale intensities. After the removal of outliers, the remaining grayscale intensities were plotted. We can observe in Figure 14 that the distributions of the control and pre-stress spots exhibited a similar spread, with an interquartile range (IQR) of approximately 0.12. In contrast, increased variation is observed in the post-stress spots, indicating greater dispersion in grayscale intensity values. We can also observe that the mean value on Day 2 is higher than on Day 1, indicating that, on average, the spots became brighter on Day 2.
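The iterative outlier removal can be sketched as follows; the early exit when a pass removes nothing is our addition, since further passes would leave the data unchanged:

```python
import numpy as np

def iqr_outlier_removal(values, factor=1.5, iterations=10):
    """Iterative IQR-based outlier removal: in each pass, drop values
    below Q1 - factor * IQR or above Q3 + factor * IQR, repeated for a
    fixed number of iterations (10 in the analysis)."""
    vals = np.asarray(values, dtype=float)
    for _ in range(iterations):
        q1, q3 = np.percentile(vals, [25, 75])
        iqr = q3 - q1
        lower, upper = q1 - factor * iqr, q3 + factor * iqr
        kept = vals[(vals >= lower) & (vals <= upper)]
        if len(kept) == len(vals):
            break  # no outliers left; further passes change nothing
        vals = kept
    return vals
```

Because the bounds are recomputed on the surviving values in each pass, repeated iterations can remove points that only become outlying after more extreme values are discarded.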

4.3. Neighborhood Based Grayscale Intensity Analysis

Grayscale pixel intensities of each spot and its neighbors were analyzed on Day 1 (pre-stress) and Day 2 (post-stress) using the following steps: i) the center points of the bounding boxes localizing spots were used to compute the Euclidean distances between a spot (source) and the remaining spots (targets) in each operculum region with respect to side and day (1: pre-stress, 2: post-stress); ii) distances between source and target spots were sorted in ascending order and the 4 nearest neighbors per spot were extracted; iii) coordinates of the source and neighboring spots were used to construct directed graph visualizations (Figure 15(a), see the supplementary material) for each operculum-day-side combination, following the tutorials provided by (Hagberg et al., 2008); iv) the differences in pixel intensities between source and target spots for each operculum-day combination were arranged into source-wise 1-D vectors; v) normalized L1 distances (0: similar, 1: completely different) (Levy et al., 2024) between source vectors on Day 1 and Day 2 were computed. These normalized L1 distances were used to construct boxplots (Figure 15(b), see the supplementary material) for each salmon specimen. The variation observed within each specimen reflects inter-individual differences in how spot intensities changed relative to their neighboring spots on Day 2.
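The neighborhood analysis above can be sketched as follows. All function names are hypothetical, the same neighbor set is reused for both days for brevity (assuming spots are matched across days), and the normalized L1 distance is one common convention (absolute-difference sum divided by the sum of absolute magnitudes), which may differ in detail from the measure used in the study:

```python
import numpy as np

def four_nearest(centers):
    """Indices of the 4 nearest neighbors of each spot center (steps i-ii)."""
    c = np.asarray(centers, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a spot is not its own neighbor
    return np.argsort(d, axis=1)[:, :4]

def intensity_diff_vectors(intensities, neighbors):
    """Source-wise 1-D vectors of source-minus-neighbor differences (step iv)."""
    i = np.asarray(intensities, dtype=float)
    return i[:, None] - i[neighbors]

def normalized_l1(u, v):
    """Normalized L1 distance in [0, 1]; one assumed convention (step v)."""
    den = (np.abs(u) + np.abs(v)).sum()
    return 0.0 if den == 0 else np.abs(u - v).sum() / den

# Toy example: five spot centers with pre- and post-stress intensities.
centers = [(12, 40), (30, 42), (55, 38), (60, 70), (20, 75)]
day1 = [0.32, 0.45, 0.40, 0.51, 0.38]
day2 = [0.41, 0.60, 0.44, 0.55, 0.52]
nbrs = four_nearest(centers)
v1 = intensity_diff_vectors(day1, nbrs)
v2 = intensity_diff_vectors(day2, nbrs)
scores = [normalized_l1(v1[i], v2[i]) for i in range(len(centers))]
print(scores)
```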

4.4. Manual Scoring of Spots on a Grayscale by Observers

A composite-image generation pipeline was developed to visualize annotated spot regions across matched pre- and post-stress image sets. For each fish sample, four grayscale images (left day 1 and 2, right day 1 and 2) were loaded together with their polygon-based spot annotations. A parameter (nspots) was defined to include all annotated spots from each grayscale image, such that the full set of available spot regions in each image was incorporated into mask generation. Spot annotation coordinates, originally represented in normalized form, were transformed into pixel-space coordinates using the dimensions (height, width) of the corresponding source image. These polygons were subsequently rasterized to produce binary masks for each of the four images. To ensure adequate spot-region representation, the resulting masks were evaluated against a minimum pixel-area criterion before inclusion in the final visualizations. The four grayscale images and their corresponding binary masks were then assembled into a 2×2 stitched image. From these composites, a stitched green-background spot-placeholder image was rendered, with all annotated spots from the four images for each fish sample represented individually. The pre-stress spots on a green background can be seen in Figure 16.
The stitched images were printed on A4 sheets and two observers manually scored the spots on a grayscale (Figure 17). As shown in Table 4 (see the supplementary material) and Table 5, manual and algorithm-driven grayscale change scores showed similar central tendencies on the left side, with mean values of 54.83 and 52.69 for the two observers and 54.5 for the algorithm. Agreement between the algorithm and manual scoring was moderate for left-sided spots, indicating that the automated method broadly captured the direction and magnitude of observer-assessed changes in the pre- and post-stress spots. In contrast, right-sided scores were substantially more variable, with wider ranges of values in the manual assessments. Although the algorithmic mean right-side value (46.5) did not differ greatly from the two manual means (52.52 and 65.15), concordance with manual scoring was weak. The algorithm appeared to compress the range of scores, tending to overestimate low values and underestimate extreme values relative to human observers.
Furthermore, to reduce inter-observer variability, grayscale absolute values from the left and right sides were averaged for each observer and compared both between observers and against the machine-derived grayscale values averaged across sides using Pearson correlation analysis.
Pearson correlation analysis showed a strong positive association between Observer E and Observer H measurements (r = 0.864, p < 0.001), indicating strong inter-observer agreement. An even stronger positive association was observed between the averaged observer measurements and the machine-derived values (r = 0.966, p < 0.001), suggesting that the machine measurements closely matched the consensus manual assessment. These relationships were consistent across Day 1 and Day 2. The scatterplots for both analyses can be seen in Figure 18.
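The two correlation analyses can be reproduced in a few lines of NumPy. The scores below are hypothetical illustration values (the study's actual per-fish values are reported in Table 5):

```python
import numpy as np

# Hypothetical side-averaged grayscale scores for six fish.
observer_e = np.array([80.3, 70.7, 27.7, 20.0, 64.2, 8.3])
observer_h = np.array([69.4, 38.1, 12.8, 28.3, 60.1, 21.3])
machine    = np.array([76.0, 52.0, 18.5, 25.1, 63.0, 14.0])

# Inter-observer agreement.
r_obs = np.corrcoef(observer_e, observer_h)[0, 1]

# Consensus of the two observers vs. machine-derived values.
consensus = (observer_e + observer_h) / 2.0
r_mach = np.corrcoef(consensus, machine)[0, 1]

print(f"observer E vs H: r = {r_obs:.3f}; consensus vs machine: r = {r_mach:.3f}")
```

For p-values alongside r, `scipy.stats.pearsonr` returns both in a single call.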
Table 5. Percentage change in grayscale pixel intensity for each fish and side, as assessed manually by the observers.
| Fish Sample | Observer (E) change left (%) | Observer (E) change right (%) | Observer (H) change left (%) | Observer (H) change right (%) |
|---|---|---|---|---|
| 10 | 4.5 | 156.07 | 13.44 | 125.44 |
| 11 | 45.14 | 96.34 | 16.80 | 59.44 |
| 12 | 23.12 | 32.25 | -5.66 | 31.19 |
| 13 | 36.5 | 3.41 | 65.47 | -8.90 |
| 14 | 86.5 | 41.95 | 32.01 | 37.03 |
| 15 | 11.37 | 5.22 | 41.66 | 0.87 |
| 16 | 12.8 | 43.08 | 18.32 | 4.26 |
| 17 | 59.5 | 51.11 | 76 | 67.64 |
| 18 | 118.7 | 132.83 | 75 | 104.71 |
| 19 | 76.9 | 0 | 127.27 | 0 |
| 20 | 66.36 | 14.47 | 37.85 | 0 |
| 21 | 65.16 | 91.30 | 133.54 | 128.08 |
| 22 | 41.24 | 164.72 | 23.32 | 97.76 |
| 23 | N/A | N/A | N/A | N/A |
| 24 | 57.47 | 61.29 | 25.31 | 94.66 |
| 25 | N/A | N/A | N/A | N/A |
| 26 | 40.84 | 96.55 | 65.76 | 72 |
| 27 | 86.82 | 235.08 | 56.52 | 110 |
| 28 | 36.58 | -4.54 | 36.60 | 63.20 |
| 29 | 59.01 | 0 | 28.04 | 0 |
| 30 | 113 | 16.66 | 133.91 | 9.68 |
| Average | 54.83 | 65.15 | 52.69 | 52.52 |

5. Discussion and Future Research

The main objective of this study was to investigate whether changes in the grayscale intensity of melanin-based skin spots on the operculum of Atlantic salmon could be detected using a semi-automated computer vision pipeline. The results indicate that such changes can be quantified from out-of-water images collected before and after the applied confinement episode. Across the analyzed fish, the mean grayscale intensity of opercular spots increased from Day 1 to Day 2, and the mixed-effects analysis supported a significant temporal effect. At the same time, responses varied substantially among individuals, indicating considerable heterogeneity in the magnitude and direction of change.
The findings suggest that the proposed methodology is sensitive to changes in spot appearance under the present experimental conditions. The confinement challenge was intended to elicit a stress response, but no independent physiological stress markers such as plasma, mucus, or fecal cortisol were reported in this study to validate the relationship between spot appearance and stress status (Keihani et al., 2024; Cao et al., 2017). In addition, the applied challenge may have involved confounding factors that could also have influenced pigmentation, including altered brightness conditions, body positioning, and context-dependent color responses. Fish coloration is known to vary for multiple reasons, including background adaptation, social signaling, and physiological state (Hoglund et al., 2000; Yasir et al., 2009; Leclercq et al., 2010; Yi et al., 2021). The present design does not fully separate these possible drivers. For this reason, the changes observed in this study should be interpreted as treatment-associated rather than as definitive evidence of a validated stress-specific response. Another limitation concerns the control group. Although control fish were included, the sample size was small relative to the treated group, and the comparison between the two groups should therefore be interpreted cautiously. The present study is therefore best understood as a proof-of-principle investigation showing that the appearance of opercular melanin-based spots can change measurably across repeated imaging points under the applied confinement conditions.
Methodologically, the study demonstrates that semi-automated segmentation of the operculum and spot regions is feasible with high accuracy. The trained YOLOv8 model performed strongly for operculum segmentation, and the SAM-based approach enabled detailed spot extraction within the opercular region. These components were essential for constructing a pipeline capable of spot-level quantification. Similar developments in computer vision have already shown considerable promise in aquaculture applications, including wound detection, lice detection, fish monitoring, and pattern-based identification (Gupta et al., 2022; Zhang et al., 2024; Banno et al., 2022; Cisar et al., 2021; Ahmed et al., 2022; Zhou et al., 2022). The manual correction steps required during inference nevertheless highlight that the workflow is not yet fully automated. Occlusion caused by mucus, water droplets, and reflections still required manual intervention, which limits current scalability and real-time applicability.
Another important consideration is the sensitivity of the entire workflow to imaging conditions. Because the analysis relies on grayscale intensity, any variation in lighting, exposure, focus, viewing angle, or reflective artifacts may influence the extracted features. To reduce this problem, we applied image normalization and visually screened spots for suitability prior to analysis. We also restricted the study to opercular spots, as these were generally more prominent and more consistently visible than spots in other body regions. Despite these precautions, residual variation related to image acquisition cannot be excluded. The weaker right-side agreement suggests that some aspects of image registration, segmentation consistency, or visual scoring remain insufficiently controlled. At the same time, the positive correlations between the two observers, and between the observers and the machine-derived grayscale values, suggest that the algorithm is generally able to track the direction of change and shows overall agreement with manual assessment.
The neighborhood-based analysis provided additional descriptive information by examining changes in spot intensity relative to neighboring spots within the same operculum. The variability observed within and among fish indicates that the response was not spatially uniform. This may reflect local pigmentation dynamics, technical variability in image acquisition and matching, or a combination of both. From a biological perspective, this interpretation remains tentative. Previous work has shown that melanin-based pigmentation in salmonids may be associated with differences in stress responsiveness and other aspects of individual phenotype (Kittilsen et al., 2009, 2012; Khan et al., 2016), but the mechanisms underlying short-term visual changes in discrete opercular spots are still not completely understood. Although the observed spatial heterogeneity is intriguing, the present study was not designed to resolve the biological basis of localized spot change.
Overall, the results support the view that image-based quantification of opercular spot appearance may be useful for welfare-related research in Atlantic salmon. The study does not establish opercular spot intensity as a validated biomarker for stress, but it does show that measurable changes in melanin-based spot appearance can be captured using computer vision methods under a defined experimental setup. In that sense, the work provides a methodological foundation for further studies on pigmentation dynamics in salmon.
In the future, we will focus on experimental validation and rigorous control of image acquisition conditions. This includes standardized lighting, camera settings, camera-to-subject distance, and fish positioning. Future experiments will also include larger and more balanced control groups, along with confinement conditions that minimize confounding effects from background adaptation and other coloration responses. In addition, the biological relevance of spot changes should be validated against established physiological stress indicators, such as cortisol and other endocrine or neurochemical markers. Finally, the method should be tested on more diverse datasets spanning multiple stocks, environments, and pigmentation phenotypes to assess its generalizability and robustness.
In summary, this study should be viewed as a proof-of-principle demonstration that melanin-based opercular spots in Atlantic salmon can be detected, segmented, and quantitatively analyzed using computer vision, and that their grayscale appearance may change following exposure to a confinement episode. While the findings are promising, further validation is required before such changes can be interpreted confidently as a stress-specific welfare indicator or translated into practical monitoring applications.

6. Conclusion

This study provides a proof-of-principle demonstration that computer vision can be used to detect, segment, and quantify melanin-based opercular spots and treatment-associated changes in their grayscale intensities in Atlantic salmon. The observed changes in grayscale intensity between pre- and post-confinement images show that the method can capture temporal changes in spot appearance under the present experimental conditions. However, the findings should be interpreted cautiously, as stress was not independently validated and alternative causes of color change were not excluded in this study. Our work establishes a methodological basis for future studies on opercular pigmentation as a potential non-invasive welfare-related indicator.

Funding

The study is funded by the Research Council of Norway, NFR VISSIGN (project number 324571); the NMBU Category 3 scholarship constitutes NMBU co-funding as per the contract with NFR.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors would like to thank the Institute of Marine Research (IMR), Bergen, Norway, for providing the dataset that made this study possible, and SINTEF (Applied Research, Technology and Innovation) for their supervisory interest in the project. In addition, generative AI tools such as ChatGPT were utilized during the preparation of this manuscript to assist with paraphrasing, grammar correction, and improving overall readability; they were also used for code debugging. The content was reviewed and edited as necessary to ensure accuracy and alignment with the final published article.

Ethics Statement

The experiment was conducted in accordance with current local legislation governing the use of live animals in research and was approved by the Norwegian Food Safety Authority under FOTS application ID30246.

Conflicts of Interest

Author Evelina Andrea Losneslokken Green was employed by the company Stingray Marine Solutions. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Sommerset, I., Wiik-Nielsen, J., Moldal, T., Oliveira, V. H. S., Svendsen, J. C., Haukaas, A., & Brun, E. (2024). Fish health report 2023. Norwegian Veterinary Institute: Ås, Norway.
  2. Tvete, I.F.; Aldrin, M.; Jensen, B.B. Towards better survival: Modeling drivers for daily mortality in Norwegian Atlantic salmon farming. Prev. Veter- Med. 2022, 210, 105798. [CrossRef]
  3. Overton, K.; Dempster, T.; Oppedal, F.; Kristiansen, T.S.; Gismervik, K.; Stien, L.H. Salmon lice treatments and salmon mortality in Norwegian aquaculture: a review. Rev. Aquac. 2018, 11, 1398–1417. [CrossRef]
  5. Stien, L.H.; Tørud, B.; Gismervik, K.; Lien, M.E.; Medaas, C.; Osmundsen, T.; Kristiansen, T.S.; Størkersen, K.V. Governing the welfare of Norwegian farmed salmon: Three conflict cases. Mar. Policy 2020, 117. [CrossRef]
  6. Keihani, R.; Gomes, A.S.; Balseiro, P.; Handeland, S.O.; Gorissen, M.; Arukwe, A. Evaluation of stress in farmed Atlantic salmon (Salmo salar) using different biological matrices. Comp. Biochem. Physiol. Part A: Mol. Integr. Physiol. 2024, 298, 111743. [CrossRef]
  7. Adams, C.E.; Turnbull, J.F.; Bell, A.; Bron, J.E.; Huntingford, F.A. Multiple determinants of welfare in farmed fish: stocking density, disturbance, and aggression in Atlantic salmon (Salmo salar). Can. J. Fish. Aquat. Sci. 2007, 64, 336–344. [CrossRef]
  8. Volpato, G.L.; Gonçalves-De-Freitas, E.; Fernandes-De-Castilho, M. Insights into the concept of fish welfare. Dis. Aquat. Org. 2007, 75, 165–171. [CrossRef]
  9. Cao, Y.; Tveten, A.-K.; Stene, A. Establishment of a non-invasive method for stress evaluation in farmed salmon based on direct fecal corticoid metabolites measurement. Fish Shellfish. Immunol. 2017, 66, 317–324. [CrossRef]
  10. Leclercq, E.; Taylor, J.F.; Migaud, H. Morphological skin colour changes in teleosts. Fish Fish. 2010, 11, 159–193. [CrossRef]
  11. Césarini, J.P. Melanins and their possible roles through biological evolution. Adv. Space Res. 1996, 18, 35–40. [CrossRef]
  12. Riley, P. A. (1997). Melanin. The international journal of biochemistry & cell biology, 29(11), 1235-1239.
  13. Mackintosh, J.A. The Antimicrobial Properties of Melanocytes, Melanosomes and Melanin and the Evolution of Black Skin. J. Theor. Biol. 2001, 211, 101–113. [CrossRef]
  14. Roulin, A. The evolution, maintenance and adaptive function of genetic colour polymorphism in birds. Biol. Rev. 2004, 79, 815–848. [CrossRef]
  15. Hoekstra, H.E. Genetics, development and evolution of adaptive pigmentation in vertebrates. Heredity 2006, 97, 222–234. [CrossRef]
  16. Kittilsen, S.; Schjolden, J.; Beitnes-Johansen, I.; Shaw, J.; Pottinger, T.; Sørensen, C.; Braastad, B.; Bakken, M.; Øverli, Ø. Melanin-based skin spots reflect stress responsiveness in salmonid fish. Horm. Behav. 2009, 56, 292–298. [CrossRef]
  17. Kittilsen, S.; Johansen, I.B.; Braastad, B.O.; Øverli, Ø. Pigments, Parasites and Personality: Towards a Unifying Role for Steroid Hormones? PLOS ONE 2012, 7, e34281. [CrossRef]
  18. Khan, U.W.; Øverli, Ø.; Hinkle, P.M.; Pasha, F.A.; Johansen, I.B.; Berget, I.; Silva, P.I.M.; Kittilsen, S.; Höglund, E.; Omholt, S.W.; et al. A novel role for pigment genes in the stress response in rainbow trout (Oncorhynchus mykiss). Sci. Rep. 2016, 6, 28969. [CrossRef]
  19. Yi, M.; Lu, H.; Du, Y.; Sun, G.; Shi, C.; Li, X.; Tian, H.; Liu, Y. The color change and stress response of Atlantic salmon (Salmo salar L.) infected with Aeromonas salmonicida. Aquac. Rep. 2021, 20. [CrossRef]
  20. Milinski, M. (1990). Parasites and host decision-making.
  21. Höglund, E.; Balm, P.H.M.; Winberg, S. Skin Darkening, A Potential Social Signal in Subordinate Arctic Charr (Salvelinus Alpinus): The Regulatory Role of Brain Monoamines and Pro-Opiomelanocortin-Derived Peptides. J. Exp. Biol. 2000, 203, 1711–1721. [CrossRef]
  22. Maan, M.E.; Seehausen, O.; Söderberg, L.; Johnson, L.; Ripmeester, E.A.P.; Mrosso, H.D.J.; Taylor, M.I.; van Dooren, T.J.M.; van Alphen, J.J.M. Intraspecific sexual selection on a speciation trait, male coloration, in the Lake Victoria cichlidPundamilia nyererei. Proc. R. Soc. B: Biol. Sci. 2004, 271, 2445–2452. [CrossRef]
  23. Yasir, I.; Qin, J.G. Impact of Background on Color Performance of False Clownfish, Amphiprion ocellaris, Cuvier. J. World Aquac. Soc. 2009, 40, 724–734. [CrossRef]
  24. Logan, D. W., Burn, S. F., & Jackson, I. J. (2006). Regulation of pigmentation in zebrafish melanophores. Pigment Cell Research, 19(3), 206-213.
  26. Mills, M.G.; Patterson, L.B. Not just black and white: Pigment pattern development and evolution in vertebrates. Semin. Cell Dev. Biol. 2009, 20, 72–81. [CrossRef]
  27. Ludwig, D.S.; Mountjoy, K.G.; Tatro, J.B.; Gillette, J.A.; Frederich, R.C.; Flier, J.S.; Maratos-Flier, E. Melanin-concentrating hormone: a functional melanocortin antagonist in the hypothalamus. Am. J. Physiol. Metab. 1998, 274, E627–E633. [CrossRef]
  28. Willard, D.H.; Bodnar, W.; Harris, C.; Kiefer, L.; Nichols, J.S.; Blanchard, S.; Hoffman, C.; Moyer, M.; Burkhart, W. Agouti structure and function: characterization of a potent.alpha.-melanocyte stimulating hormone receptor antagonist. Biochemistry 1995, 34, 12341–12346. [CrossRef]
  29. Kawauchi, H. Functions of melanin-concentrating hormone in fish. J. Exp. Zoöl. Part A: Comp. Exp. Biol. 2006, 305A, 751–760. [CrossRef]
  30. Gröneveld, D.; Balm, P.H.; Bonga, S.E.W. Biphasic effect of MCH on α-MSH release from the tilapia (Oreochromis mossambicus) pituitary. Peptides 1995, 16, 945–949. [CrossRef]
  31. Shimomura, Y.; Mori, M.; Sugo, T.; Ishibashi, Y.; Abe, M.; Kurokawa, T.; Onda, H.; Nishimura, O.; Sumino, Y.; Fujino, M. Isolation and Identification of Melanin-Concentrating Hormone as the Endogenous Ligand of the SLC-1 Receptor. Biochem. Biophys. Res. Commun. 1999, 261, 622–626. [CrossRef]
  32. Lu, D.; Willard, D.; Patel, I.R.; Kadwell, S.; Overton, L.; Kost, T.; Luther, M.; Chen, W.; Woychik, R.P.; Wilkison, W.O.; et al. Agouti protein is an antagonist of the melanocyte-stimulating-hormone receptor. Nature 1994, 371, 799–802. [CrossRef]
  33. Cerdá-Reverter, J.M.; Agulleiro, M.J.; R, R.G.; Sánchez, E.; Ceinos, R.; Rotllant, J. Fish melanocortin system. Eur. J. Pharmacol. 2011, 660, 53–60. [CrossRef]
  34. Cerdá-Reverter, J.M.; Haitina, T.; Schiöth, H.B.; Peter, R.E. Gene Structure of the Goldfish Agouti-Signaling Protein: A Putative Role in the Dorsal-Ventral Pigment Pattern of Fish. Endocrinology 2005, 146, 1597–1610. [CrossRef]
  35. Chaki, J.; Dey, N. A Beginner's Guide to Image Preprocessing Techniques; Taylor & Francis: London, United Kingdom, 2018.
  36. Mohanaiah, P., Sathyanarayana, P., & GuruKumar, L. (2013). Image texture feature extraction using GLCM approach. International journal of scientific and research publications, 3(5), 1-5.
  37. Meena, K.S.; Suriya, S. A Survey on Supervised and Unsupervised Learning Techniques. International Conference on Artificial Intelligence, Smart Grid and Smart City Applications. Cham: Springer International Publishing; pp. 627–644.
  38. Ahmed, M. S., Aurpa, T. T., & Azad, M. A. K. (2022). Fish disease detection using image based machine learning technique in aquaculture. Journal of King Saud University-Computer and Information Sciences, 34(8), 5170-5182.
  39. Shetty, A. K., Saha, I., Sanghvi, R. M., Save, S. A., & Patel, Y. J. (2021, April). A review: Object detection models. In 2021 6th International Conference for Convergence in Technology (I2CT) (pp. 1-8). IEEE.
  40. Zhang, C.; Bracke, M.; Torres, R.d.S.; Gansel, L.C. Rapid detection of salmon louse larvae in seawater based on machine learning. Aquaculture 2024, 592. [CrossRef]
  41. Wu, J. (2017). Introduction to convolutional neural networks. National Key Lab for Novel Software Technology. Nanjing University. China, 5(23), 495.
  42. Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  43. Gupta, A.; Bringsdal, E.; Knausgård, K.M.; Goodwin, M. Accurate Wound and Lice Detection in Atlantic Salmon Fish Using a Convolutional Neural Network. Fishes 2022, 7, 345. [CrossRef]
  44. Liang, X.; Hu, P.; Zhang, L.; Sun, J.; Yin, G. MCFNet: Multi-Layer Concatenation Fusion Network for Medical Images Fusion. IEEE Sensors J. 2019, 19, 7107–7119. [CrossRef]
  45. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N.,... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
  46. Huang, G. B., Liang, N. Y., Rong, H. J., Saratchandran, P., & Sundararajan, N. (2005). On-line sequential extreme learning machine. Computational Intelligence, 2005, 232-237.
  47. Huang, Y.-P.; Khabusi, S.P. A CNN-OSELM Multi-Layer Fusion Network With Attention Mechanism for Fish Disease Recognition in Aquaculture. IEEE Access 2023, 11, 58729–58744. [CrossRef]
  48. Bochkovskiy, A., Wang, C. Y., & Liao, H. Y. M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
  49. Banno, K.; Kaland, H.; Crescitelli, A.; Tuene, S.; Aas, G.; Gansel, L. A novel approach for wild fish monitoring at aquaculture sites: wild fish presence analysis using computer vision. Aquac. Environ. Interactions 2022, 14, 97–112. [CrossRef]
  50. Mogdans, J.; Bleckmann, H. Coping with flow: behavior, neurophysiology and modeling of the fish lateral line system. Biol. Cybern. 2012, 106, 627–642. [CrossRef]
  51. Jocher, Glenn, et al. (2020). “ultralytics/yolov5: v3. 0.” Zenodo.
  52. Khanam, R., & Hussain, M. (2024). What is YOLOv5: A deep look into the internal features of the popular object detector. arXiv preprint arXiv:2407.20892.
  53. Yu, H.; Wang, Z.; Qin, H.; Chen, Y. An Automatic Detection and Counting Method for Fish Lateral Line Scales of Underwater Fish Based on Improved YOLOv5. IEEE Access 2023, 11, 143616–143627. [CrossRef]
  54. Ellis, T.; Berrill, I.; Lines, J.; Turnbull, J.F.; Knowles, T.G. Mortality and fish welfare. Fish Physiol. Biochem. 2011, 38, 189–199. [CrossRef]
  55. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An Overview on Edge Computing Research. IEEE Access 2020, 8, 85714–85728. [CrossRef]
  56. Li, S., Xu, L. D., & Zhao, S. (2015). The internet of things: a survey. Information systems frontiers, 17(2), 243-259.
  57. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, BC, Canada, 17-24 June 2023; pp. 7464–7475.
  58. Ranjan, R.; Sharrer, K.; Tsukuda, S.; Good, C. MortCam: An Artificial Intelligence-aided fish mortality detection and alert system for recirculating aquaculture. Aquac. Eng. 2023, 102. [CrossRef]
  59. Farnoush, R., & ZAR, P. B. (2008). Image segmentation using Gaussian mixture model.
  60. He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask r-cnn. In Proceedings of the IEEE international conference on computer vision (pp. 2961-2969).
  61. Ding, S., Zhang, J., Xu, X., & Zhang, Y. (2016). A wavelet extreme learning machine. Neural Computing and Applications, 27(4), 1033-1040.
  62. Al Duhayyim, M.; Alshahrani, H.M.; Al-Wesabi, F.N.; Alamgeer, M.; Hilal, A.M.; Hamza, M.A. Intelligent Deep Learning Based Automated Fish Detection Model for UWSN. Comput. Mater. Contin. 2022, 70, 5871–5887. [CrossRef]
  63. O’shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458.
  64. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [CrossRef]
  65. Cisar, P.; Bekkozhayeva, D.; Movchan, O.; Saberioon, M.; Schraml, R. Computer vision based individual fish identification using skin dot pattern. Sci. Rep. 2021, 11, 1–12. [CrossRef]
  66. Bekkozhayeva, D.; Cisar, P. Image-Based Automatic Individual Identification of Fish without Obvious Patterns on the Body (Scale Pattern). Appl. Sci. 2022, 12, 5401. [CrossRef]
  67. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [CrossRef]
  68. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L. (2014). Microsoft Coco: Common Objects in Context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014; pp. 740–755. [CrossRef]
  69. Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J.,... & Ferrari, V. (2020). The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. International journal of computer vision, 128(7), 1956-1981.
  70. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016; arXiv:1512.03385.
  71. Levy, A.; Shalom, B.R.; Chalamish, M. A guide to similarity measures and their data science applications. J. Big Data 2025, 12, 1–57. [CrossRef]
  72. Zhou, Z.; Hitt, N.P.; Letcher, B.H.; Shi, W.; Li, S. Pigmentation-based Visual Learning for Salvelinus fontinalis Individual Re-identification. 2022 IEEE International Conference on Big Data (Big Data); pp. 6850–6852.
  73. Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020, November). A simple framework for contrastive learning of visual representations. In International conference on machine learning (pp. 1597-1607). PmLR.
  74. Gehring, J.; Auli, M.; Grangier, D.; Dauphin, Y. A Convolutional Encoder Model for Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 123–135. [CrossRef]
  75. Shi, W.; Zhou, Z.; Letcher, B.H.; Hitt, N.; Kanno, Y.; Futamura, R.; Kishida, O.; Morita, K.; Li, S. Aging Contrast: A Contrastive Learning Framework for Fish Re-identification Across Seasons and Years. Australasian Joint Conference on Artificial Intelligence. Singapore: Springer Nature Singapore; pp. 252–264.
  76. Dwyer, B., Nelson, J., Hansen, T., et al. (2025). Roboflow (Version 1.0) [Software]. Available from https://roboflow.com. Computer vision.
  77. Hafiz, A.M.; Bhat, G.M. A survey on instance segmentation: state of the art. Int. J. Multimedia Inf. Retr. 2020, 9, 171–189. [CrossRef]
  78. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L.,... & Girshick, R. (2023). Segment anything. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4015-4026).
  79. Xu, M.; Yoon, S.; Fuentes, A.; Park, D.S. A Comprehensive Survey of Image Augmentation Techniques for Deep Learning. Pattern Recognit. 2023, 137. [CrossRef]
  80. OpenCV. Open Source Computer Vision Library, 2015. Available online: https://opencv.org (accessed on 14 December 2022).
  81. Busin, L., Vandenbroucke, N., & Macaire, L. (2008). Color spaces and image segmentation. Advances in imaging and electron physics, 151(1), 1.
  82. Nelson, J. (2020). The Importance of Blur as an Image Augmentation Technique. Roboflow Blog: https://blog.roboflow.com/using-blurin- computer-vision-preprocessing/.
  83. Azzeh, J.; Zahran, B.; Alqadi, Z. Salt and Pepper Noise: Effects and Removal. JOIV : Int. J. Informatics Vis. 2018, 2, 252–256. [CrossRef]
  84. Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  85. Dwyer, B., Nelson, J., Hansen, T., et al. (2025). Image Augmentation. Roboflow Documentation. https://docs.roboflow.com/datasets/datasetversions/ image-augmentation/.
  86. Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). pmlr.
  87. Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 7(1), 5.
  88. Zeiler, M. D., Krishnan, D., Taylor, G. W., & Fergus, R. (2010, June). Deconvolutional networks. In 2010 IEEE Computer Society Conference on computer vision and pattern recognition (pp. 2528-2535). IEEE.
  89. Roodschild, M.; Sardiñas, J.G.; Will, A. A new approach for the vanishing gradient problem on sigmoid activation. Prog. Artif. Intell. 2020, 9, 351–360. [CrossRef]
  90. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [CrossRef]
  91. Ren, J., Bi, Z., Niu, Q., Liu, J., Peng, B., Zhang, S.,... & Liu, M. (2024). Deep Learning and Machine Learning--Object Detection and Semantic Segmentation: From Theory to Applications. arXiv preprint arXiv:2410.15584.
  92. Ravi, N., Gabeur, V., Hu, Y. T., Hu, R., Ryali, C., Ma, T., ... & Feichtenhofer, C. (2024). SAM 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714.
  93. Ryali, C., Hu, Y. T., Bolya, D., Wei, C., Fan, H., Huang, P. Y., ... & Feichtenhofer, C. (2023, July). Hiera: A hierarchical vision transformer without the bells-and-whistles. In International conference on machine learning (pp. 29441-29454). PMLR.
  94. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T.,... & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
  95. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [CrossRef]
  96. Turner, R. E. (2023). An introduction to transformers. arXiv preprint arXiv:2304.10557.
  97. Shaw, P.; Uszkoreit, J.; Vaswani, A. Self-Attention with Relative Position Representations. Assoc. Comput. Linguist. 2018, 2, 464–468. [CrossRef]
  98. Gheini, M.; Ren, X.; May, J. Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation. arXiv preprint arXiv:2104.08771, 2021.
  99. Li, Q.; Yan, M.; Xu, J. Optimizing Convolutional Neural Network Performance by Mitigating Underfitting and Overfitting. 2021 IEEE/ACIS 19th International Conference on Computer and Information Science (ICIS). IEEE; pp. 126–131.
  100. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  101. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L. (2014). Microsoft COCO: Common Objects in Context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014; pp. 740–755. [CrossRef]
  102. Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast and Flexible Image Augmentations. Information 2020, 11, 125. [CrossRef]
  103. Padilla, R.; Passos, W.L.; Dias, T.L.B.; Netto, S.L.; da Silva, E.A.B. A Comparative Analysis of Object Detection Metrics with a Companion Open-Source Toolkit. Electronics 2021, 10, 279. [CrossRef]
  104. Gallagher, J. (2024). How to Fine-Tune SAM-2.1 on a Custom Dataset. Roboflow Blog: https://blog.roboflow.com/fine-tune-sam-2-1/.
  105. Muja, M.; Lowe, D.G. Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration. International Conference on Computer Vision Theory and Applications, 2009; pp. 331–340.
  106. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [CrossRef]
  107. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. In Readings in Computer Vision; Fischler, M.A., Firschein, O., Eds.; Morgan Kaufmann: San Francisco, CA, 1987; pp. 726–740. ISBN 978-0-08-051581-6.
  108. Dubrofsky, E. (2009). Homography estimation. Master's thesis. University of British Columbia, Vancouver.
  109. Haralick, R. M., Shanmugam, K., & Dinstein, I. H. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6), 610-621.
  110. Van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., ... & Yu, T. (2014). scikit-image: image processing in Python. PeerJ, 2, e453.
  111. Hagberg, A.A.; Schult, D.A.; Swart, P.J. Exploring Network Structure, Dynamics, and Function using NetworkX. In Proceedings of the 7th Python in Science Conference (SciPy 2008); pp. 11–15.
  112. Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3-4), 591-611.
  113. Marden, J.I. Positions and QQ Plots. Stat. Sci. 2004, 19, 606–614. [CrossRef]
  114. Hotelling, H. (1960). Contributions to probability and statistics: essays in honor of Harold Hotelling. Stanford University Press.
  115. Abdi, H. (2010). The Greenhouse–Geisser correction. Encyclopedia of research design, 1(1), 544-548.
  116. Ducrest, A.; Keller, L.; Roulin, A. Pleiotropy in the melanocortin system, coloration and behavioural syndromes. Trends Ecol. Evol. 2008, 23, 502–510. [CrossRef]
  117. Järvi, T.; Bakken, M. The function of the variation in the breast stripe of the great tit (Parus major). Anim. Behav. 1984, 32, 590–596. [CrossRef]
  118. Bagnara, J.T.; Matsumoto, J. (2006). Comparative anatomy and physiology of pigment cells in nonmammalian tissues. The pigmentary system: physiology and pathophysiology, 11-59.
  119. Sohan, M., Sai Ram, T., & Rami Reddy, C. V. (2024). A review on YOLOv8 and its advancements. In International Conference on Data Intelligence and Cognitive Informatics (pp. 529–545). Springer, Singapore.
  120. Yoon, K.; Lim, C. LayerAct: Advanced Activation Mechanism for Robust Inference of CNNs. Proc. AAAI Conf. Artif. Intell. 2025, 39, 22200–22207. [CrossRef]
  121. Ramanath, R., & Drew, M. S. (2014). White balance. In Computer Vision (pp. 885-888). Springer, Boston, MA.
  122. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [CrossRef]
Figure 1. Methodology pipeline for quantifying visual changes in melanin-based skin spots under stress in Atlantic salmon (Salmo salar).
Figure 4. (a) Original image of a salmon specimen; (b–e) geometrically transformed augmentations (reflection, crop, rotation, shear); (f–i) intensity-based augmentations (brightness, exposure, blur, noise).
Figure 5. Training and validation loss plots for (a) bounding-box loss, (b) segmentation loss, (c) classification loss, and (d) distributed focal loss. Across all loss curves the model exhibits underfitting, most notably in the segmentation loss.
Figure 6. SAM 2.1 intersection-over-union (IoU), binary cross-entropy, and mask segmentation loss plots.
Figure 7. (a) Input image of a salmon specimen; (b) operculum region detected by YOLOv8-seg; (c) operculum binary mask generated from the polygon enclosing the region; (d) masked and extracted operculum region from the input image.
Figure 8. (a,b) Pre-stress and post-stress left opercula; (c,d) their grayscale versions; (e,f) pre-stress and post-stress left opercula with detected SIFT keypoints; (g) best matches after applying Lowe’s ratio test to the detected keypoints; (h) aligned pre-stress region (left) with the reference post-stress region (right).
Figure 9. (a) Annotated eye (green polygon) and ID tag (red rectangle) of a salmon specimen (pre-stress, left side); (b) operculum region with unnormalized pixel intensities; (c) luminance (L) channel extracted after converting (b) from RGB to LAB color space; (d) normalized luminance (L) channel after applying the normalization algorithm; (e) normalized operculum region in RGB color space.
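The luminance normalization illustrated in Figure 9 panels (c)–(e) can be approximated with scikit-image as below. The shift-to-a-common-mean scheme and the target value of L = 60 are assumptions for illustration; the paper’s exact normalization algorithm may differ.

```python
import numpy as np
from skimage import color

def normalize_luminance(rgb, target_l=60.0):
    """Bring the operculum crop to a common mean brightness.

    Converts RGB (floats in [0, 1]) to CIELAB, shifts the L channel so
    its mean equals target_l (clipped to the valid [0, 100] range), and
    converts back to RGB. Hypothetical scheme, for illustration only.
    """
    lab = color.rgb2lab(rgb)
    lab[..., 0] = np.clip(lab[..., 0] + (target_l - lab[..., 0].mean()), 0, 100)
    return color.lab2rgb(lab)
```

Operating on the L channel only leaves the a/b chroma components untouched, which is the usual motivation for LAB-based brightness normalization.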
Figure 10. Salmon specimen with spots exhibiting specular highlights (1), partial occlusion (2, 3, 4, 6), and complete occlusion (5) due to mucus and water droplets.
Figure 11. (a,b) Pre- and post-stress spot segmentations, including some false positives (highlighted operculum regions and portions of background); (c,d) manually curated and matched spots.
Figure 12. Individual trajectories of within-fish grayscale changes across days.
Figure 13. Boxplot distribution of grayscale intensity by day and side.
Figure 14. Distribution and variation of spot grayscale values in control (left) and stressed fish (right). Measures of central tendency (medians and means) are indicated.
Figure 15. (a) For each spot in each fish, on each side and day, the four nearest neighbors were identified and directed graphs were constructed. Grayscale differences between each spot and its neighbors were calculated for days 1 and 2 and used to compute L1 distances (0 = identical, 1 = completely different). (b) Boxplots show fish-wise variation in L1 values, with a median trendline indicating central tendency.
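The neighbor-graph construction and L1 comparison described in the Figure 15 caption can be sketched with SciPy and NetworkX (the graph library cited in the references). The function names and the particular [0, 1] normalization below are illustrative assumptions; the caption only specifies that 0 means identical and 1 completely different.

```python
import networkx as nx
import numpy as np
from scipy.spatial import cKDTree

def neighbor_difference_profile(coords, gray, k=4):
    """Directed graph linking each spot to its k nearest neighbors.

    Each edge carries 'diff' = grayscale(spot) - grayscale(neighbor),
    i.e. the local contrast around that spot.
    """
    # Query k+1 neighbors because the nearest neighbor of a point is itself.
    _, idx = cKDTree(coords).query(coords, k=k + 1)
    G = nx.DiGraph()
    for i, nbrs in enumerate(idx):
        for j in nbrs[1:]:
            G.add_edge(i, int(j), diff=gray[i] - gray[int(j)])
    return G

def l1_change(G1, G2):
    """Normalized L1 distance between two days' edge-difference profiles.

    Assumes the same matched spots (hence identical edge sets) on both
    days. Returns 0 for identical local contrast; the denominator is one
    plausible way to bound the value by 1.
    """
    d1 = np.array([G1.edges[e]["diff"] for e in sorted(G1.edges)])
    d2 = np.array([G2.edges[e]["diff"] for e in sorted(G2.edges)])
    return float(np.abs(d1 - d2).sum()
                 / (np.abs(d1).sum() + np.abs(d2).sum() + 1e-12))
```

Comparing spots to their local neighborhood rather than in isolation makes the measure less sensitive to global illumination differences between imaging sessions.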
Figure 16. Green-background image with all spots included for a sample fish (pre-stress, left side).
Figure 17. Grayscale reference scale with intervals between 0 (black) and 1 (white).
Figure 18. Scatter plots with fitted regression lines showing the correlation of Observer E, Observer H, and the averaged observer grayscale scores with algorithm-derived grayscale scores; Pearson’s r and corresponding p-values are indicated.
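Observer-versus-algorithm agreement of the kind shown in Figure 18 can be computed with `scipy.stats.pearsonr`; the helper below is a generic sketch, not the paper’s analysis script.

```python
import numpy as np
from scipy.stats import pearsonr

def observer_agreement(observer_scores, algorithm_scores):
    """Pearson correlation (r) and two-sided p-value between manual
    grayscale scores and algorithm-derived grayscale values for the
    same set of spots."""
    r, p = pearsonr(np.asarray(observer_scores), np.asarray(algorithm_scores))
    return r, p
```

A high r with a small p-value, as reported in the figure, indicates that the algorithm reproduces the observers’ ranking of spot darkness.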
Table 1. Data augmentations applied during training.
Augmentation Type Upper Limit Value
Translate—Translates the image horizontally and vertically by a fraction of the image size [0.0–1.0] 0.015
Scaling—Scales the image by a gain factor [0–1] 0.15
BGR channel alteration—Flips the image channels from RGB to BGR with the specified probability [0.0–1.0] 0.1
Image mosaic—Combines four training images into one with the specified probability [0.0–1.0] 0.3
Flip up–down—Flips the image upside down with the specified probability [0.0–1.0] 0.5
Flip left–right—Flips the image left to right with the specified probability [0.0–1.0] 0.5
CutMix—Combines portions of two images with the specified probability [0.0–1.0] 0.015
Copy-paste—Copies and pastes objects across images to increase object instances, with the specified probability [0.0–1.0] 0.0
Shearing—Shears the image randomly within [0°–180°] 3°
Degrees—Rotates the image randomly within [0°–180°] 5°
Hue—Shifts the hue of the image randomly within [0.0–1.0] 0.01
Saturation—Shifts the saturation of the image randomly within [0.0–1.0] 0.5
Value—Shifts the brightness of the image randomly within [0.0–1.0] 0.4
Table 2. Salmon operculum segmentation model training and validation metrics.
Model Precision (Bbox) Recall (Bbox) Precision (Mask) Recall (Mask) mAP50 (Mask) mAP50-95 (Mask)
Training 0.95 0.97 0.95 0.97 0.995 0.796
Validation 0.998 1.00 0.998 1.00 0.99 0.76
Notes: YOLOv8 precision, recall, and mean average precision over the intersection-over-union (IoU) threshold range [50, 95] are computed from the bounding boxes and masks of the ground truths and predictions. Only one class (roi) is annotated.
Table 3. SAM 2.1 Averaged Losses.
Mode BCE Loss IoU Loss Mask Loss
Training 0.008 0.1665 0.0025
Notes: All losses are normalized to the range [0–1] and averaged over the total number of training epochs.
Table 4. Percentage change in grayscale pixel intensities per fish and side.
Fish Sample Change Left (%) Change Right (%)
10 16.01 135.41
11 33.11 67.19
12 20 20.87
13 48.98 5.19
14 53.33 38.88
15 -8.07 -0.8
16 15.41 24.56
17 34.70 6.6
18 83.16 60.33
19 111.18 -11.52
20 85.51 0.76
21 76.57 25.90
22 63.79 133.33
23 45.07 64.98
24 59.48 18.70
25 86.50 173.17
26 57.14 -7.69
27 76.53 85.89
28 13.88 -3.14
29 67.92 128.64
30 103.54 8.37
Average 54.46 46.46
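The per-fish values in Table 4 are percentage changes of this general form; the helper below is a hypothetical illustration of the computation on the 0 (black) to 1 (white) grayscale scale, not the authors’ exact code.

```python
def grayscale_percent_change(pre_mean, post_mean):
    """Percentage change in mean grayscale intensity from pre- to
    post-stress. On the 0 = black, 1 = white scale, positive values
    indicate pigmentation fading (lighter spots) and negative values
    indicate darkening."""
    return 100.0 * (post_mean - pre_mean) / pre_mean
```

For example, a spot set whose mean intensity rises from 0.30 pre-stress to 0.45 post-stress shows a +50% change, comparable in magnitude to several entries in the table.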
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.