Preprint Review. This version is not peer-reviewed.

State-of-the-Art of Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues

Submitted: 25 September 2024
Posted: 25 September 2024


Abstract
Microscopic image segmentation (MIS) plays a pivotal role in various fields such as medical imaging and biology. With the advent of deep learning (DL), numerous methods have emerged for automating and improving the accuracy of this crucial image analysis task. This systematic literature review (SLR) aims to provide an exhaustive overview of the state-of-the-art DL methods employed for the segmentation of microscopic images. In this review, we analyze a diverse array of studies published in the last five years, highlighting their contributions, methodologies, datasets, and performance evaluations. We explore the evolution of DL techniques and their adaptation to specific segmentation challenges, from cell and nucleus segmentation to tissue analysis. This paper, through the integration of existing knowledge, provides valuable perspectives for researchers involved in the field of microscopic image segmentation.

1. Introduction

Microscopic imaging serves as a fundamental tool in both research and diagnosis, particularly within fields like medical science and biology. It offers unparalleled insights into the intricate structures and processes at the cellular and subcellular levels [1]. Segmentation involves assigning each image pixel to its respective class; the objective is to partition an image into meaningful groups of pixels [2]. Accurate segmentation of microscopic images is an essential component in the analysis of these images, enabling quantification, classification, and a deeper understanding of the underlying biological components [2]. Additionally, microscopic image segmentation (MIS) not only serves as a fundamental tool in scientific research but also plays a pivotal role in clinical diagnostics. It allows researchers and medical professionals to uncover the intricate world of cells, tissues, and subcellular structures, aiding in the diagnosis of diseases and the advancement of scientific knowledge [3].
Numerous manual segmentation techniques have been proposed, including methods involving feature extraction and region growing [4]. However, relying on manual segmentation, although traditional, presents significant challenges. This method is not only labor-intensive but also susceptible to human bias and inconsistencies. The process is time-consuming, and the precision of outcomes is heavily reliant on the proficiency of the annotators [4]. The infusion of artificial intelligence (AI) holds the potential to mitigate these challenges while augmenting the consistency of segmentation tasks.
Machine learning (ML) has driven rapid advancements, particularly within the biomedical domain and notably in image segmentation. Deep learning (DL), a subset of ML with a primary focus on artificial neural networks (ANNs), acts as a driving force behind the growing expansion of research in imaging sciences and computational pathology [5]. With the rise of DL, particularly the utilization of convolutional neural networks (CNNs), the realm of image segmentation has undergone a profound and transformative evolution. DL algorithms have showcased exceptional capabilities in automating this process, often surpassing traditional methods in terms of precision and efficiency. These advances have the potential not only to speed up the analysis of microscopic images but also to make the results more accurate [5].
In the field of MIS, significant strides have been made through the evolution of DL architectures. The inception of Fully Convolutional Networks (FCNs) marked a crucial evolution in the field, as it introduced a paradigm shift towards end-to-end pixel-wise segmentation. Its predominant applications were in the segmentation of mitochondria [6] and microvasculature [7]. Building upon this foundation, the subsequent evolution of segmentation methodologies witnessed the emergence and widespread adoption of U-Net architecture [8]. U-Net, characterized by its unique encoder-decoder structure, has demonstrated superior performance in preserving fine-grained details crucial for microscopic imagery. Beginning with U-Net, numerous variants and networks were created for MIS, accompanied by the development of various tools and software tailored for achieving the overarching objective. To the best of our knowledge, current surveys focus on specific domains such as cells, nuclei, or tissues, or they provide summaries of existing tools [3].
Inspired by these premises, in this paper, we present a systematic literature review (SLR) providing a comprehensive survey of the state-of-the-art in DL methods for MIS. In this review, we delve into the evolution of DL techniques tailored to specific segmentation challenges, including cell and nucleus segmentation and tissue analysis. Our objective is to consolidate the knowledge accumulated over the last five years, categorizing contributions, methodologies, datasets, and performance evaluations.
To construct this review, we examined a total of 72 recent research articles published between 2018 and 2023, gathered from four article repositories: SpringerLink, IEEE Xplore, ScienceDirect, and PubMed. The subsequent sections provide comprehensive insights into the methodologies employed and their applications through an SLR approach.
The subsequent sections of this article are structured as follows: Section II provides a background on prominent approaches proposed for image segmentation. Section III delineates the applied research methodology employed for synthesizing studies. Section IV presents an exposition of the reviewed works addressing the topic of MIS and ultimately, Section V delves into the discussion.

2. Background

Image segmentation involves partitioning an image into distinct regions based on specific properties of interest. Traditional segmentation techniques encompass approaches like edge detection, threshold processing, region growing, texture analysis, watershed algorithms, and others.
Nonetheless, each of these techniques comes with its own limitations. DL has emerged as a widely adopted approach for image segmentation across various domains. Image segmentation can be categorized into two main types: semantic-level segmentation and instance-level segmentation. Semantic segmentation assigns each pixel in an image to a class, such as foreground or background [3]. Instance-level segmentation builds upon target detection: it identifies and outlines individual objects within an image, providing a unique label for each instance [3].
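To make this distinction concrete, the following illustrative Python sketch (our own example, not drawn from any reviewed study) derives instance labels from a binary semantic mask using connected-component labeling; this simple conversion only works when objects do not touch, which is precisely why dedicated instance segmentation methods are needed for adherent cells.

```python
import numpy as np
from scipy import ndimage

# A toy binary semantic mask: 1 = cell (foreground), 0 = background.
semantic_mask = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0],
], dtype=np.uint8)

# Instance segmentation additionally separates the foreground into
# individual objects. For non-touching objects, connected-component
# labeling suffices; touching or overlapping cells are why dedicated
# instance methods (e.g., Mask R-CNN) are needed.
instance_mask, n_objects = ndimage.label(semantic_mask)
print(n_objects)      # -> 3 (three separate cells)
print(instance_mask)  # pixels of each object carry labels 1, 2, 3
```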
In the dynamic field of DL for MIS, several architectural paradigms have emerged as powerful tools. This section spotlights three influential architectures: U-Net, Region-based Convolutional Neural Networks (R-CNN), and Generative Adversarial Networks (GANs).

2.1. U-Net

U-Net, introduced by Ronneberger et al. in 2015 [8], stands as a seminal architecture in semantic segmentation, particularly in biomedical image analysis. Its U-shaped design, with skip connections, facilitates detailed feature extraction, making it well-suited for tasks like cell segmentation in microscopic images. Studies employing U-Net have significantly contributed to the precision and efficiency of segmentation outcomes.
The U-Net architecture consists of a U-shaped channel incorporating skip connections. The encoder comprises four submodules, each housing two convolutional layers, and after each submodule, downsampling is achieved through max pooling. The decoder, with four submodules, progressively increases resolution through upsampling, ultimately providing pixel-wise predictions. Illustrated in Figure 1, the network takes a 572 × 572 input and produces a 388 × 388 output. A distinctive feature is the utilization of skip connections, linking the output of a submodule in the encoder with the input of the corresponding submodule in the decoder, promoting seamless information transfer across network layers. In extending the capabilities of U-Net for more complex tasks, variations such as 3D U-Net and V-Net have emerged, showcasing adaptability to three-dimensional image data.
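As a concrete illustration of the encoder-decoder structure with skip connections described above, the following minimal PyTorch sketch implements a two-level toy U-Net; the original network uses four submodules per side and unpadded convolutions (hence the 572 × 572 input and 388 × 388 output), whereas this simplified version uses padded convolutions so that input and output sizes match.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions per submodule, as in the original U-Net.
    # padding=1 keeps spatial size; the original paper used unpadded
    # convolutions (hence 572x572 in, 388x388 out).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net sketch (the original uses four levels)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)   # 256 = 128 (up) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)    # 128 = 64 (up) + 64 (skip)
        self.head = nn.Conv2d(64, n_classes, 1)  # pixel-wise class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections concatenate encoder features with upsampled ones.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = MiniUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 2, 128, 128)
```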

2.2. R-CNN

Introduced by Girshick et al. in 2014 [9], Region-based Convolutional Neural Networks (R-CNN) have revolutionized the field of object detection and segmentation. Through a two-step process involving region proposal generation and subsequent CNN-based feature extraction, R-CNN and its variants demonstrate exceptional accuracy in localizing objects. Their applications extend to MIS, showcasing their effectiveness.
The R-CNN architecture, as presented in [9], generates candidate bounding boxes using a selective search process. These region proposals are warped to fixed-size squares and fed into a CNN that produces a feature vector for each region. These features are then passed to a classification algorithm to label the objects within each proposal. Additionally, the algorithm regresses offset values to refine the precision of each proposal's bounding box. The sequential processes performed in the R-CNN architecture are visually depicted in Figure 2. Three major variations of the R-CNN model have been introduced in the literature: Fast R-CNN, Faster R-CNN (which replaces selective search with a learned region proposal network), and Mask R-CNN (which adds a per-instance mask prediction branch). This progression signifies a refinement in object detection and segmentation methodologies.
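As an illustration of how the Mask R-CNN end of this family is typically used in practice, the hedged sketch below runs inference with torchvision's off-the-shelf implementation; the COCO-pretrained weights serve only as a placeholder, since applying the model to microscopy would require fine-tuning on an annotated cell or nucleus dataset.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Mask R-CNN extends Faster R-CNN with a mask head that predicts a
# per-instance binary mask inside each detected bounding box.
# COCO-pretrained weights are used here purely for illustration; for
# cell or nucleus segmentation the model would be fine-tuned on an
# annotated microscopy dataset.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)  # stand-in for a normalized RGB image
with torch.no_grad():
    out = model([image])[0]

# Each detection comes with a box, a class label, a confidence score,
# and a soft instance mask that can be thresholded (e.g., at 0.5).
boxes, scores, masks = out["boxes"], out["scores"], out["masks"]
keep = scores > 0.5
instance_masks = masks[keep, 0] > 0.5
```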

2.3. GAN

Introduced by Goodfellow et al. in 2014 [10], Generative Adversarial Networks (GANs) have revolutionized the field of image synthesis and generation. Although originally designed for broader applications, GANs have found utility in the augmentation of microscopic image datasets, generating synthetic images for training segmentation models. This unconventional method of data augmentation has attracted interest due to its potential to enhance the robustness of segmentation models.
The GAN structure, as illustrated in Figure 3, involves two integral components: the generator, responsible for creating images from random noise (Z), and the discriminator, designed to distinguish between real and synthetic images. Through a competitive training process, the generator aims to produce synthetic data that is virtually indistinguishable from real data, as determined by the discriminator (D).
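The adversarial game can be summarized in a few lines of PyTorch. The minimal sketch below, with purely illustrative layer sizes and flattened images, alternates a discriminator update with a generator update under the standard binary cross-entropy objective.

```python
import torch
import torch.nn as nn

# Illustrative fully connected G and D for flattened 28x28 images.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(32, 784)  # stand-in for a batch of real images
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

# Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
fake = G(torch.randn(32, 100))
loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: push D(G(z)) toward 1, i.e., fool the discriminator.
loss_g = bce(D(fake), ones)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```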

3. Methodology

To scope this article, we apply the SLR methodology of Brereton et al. [12]. The primary objective of this approach is to analyze data extracted from the studies selected for the article. The SLR for MIS unfolds through a three-step process:
  • Planning the Review: This step involves specifying the requirements for the review process and forming the questions necessary for the study.
  • Conducting the Review: This step includes finding relevant works and assessing the quality of the research.
  • Documenting the Review: This step involves reporting the selected studies in a paper.
3.1. Planning the Review
The research planning phase includes defining research questions. Our SLR aims to address the following research questions:
RQ1: What are the state-of-the-art deep learning techniques for microscopic image segmentation and their primary applications?
RQ2: How does microscopic image segmentation contribute to the analysis of cells, nuclei, and tissues in biomedical research and medical diagnosis?
RQ3: Are there software tools available for the automated segmentation of microscopic images?
3.2. Conducting the Review
We selected articles from a variety of sources based on criteria including study titles, relevant keywords, abstracts, and conclusions. To ensure full coverage of the pertinent literature, we conducted an extensive search across four prominent publication databases: SpringerLink, IEEE Xplore, ScienceDirect, and PubMed.
The searches were conducted across these four databases utilizing the keywords "microscopic image," "segmentation," and "deep learning." We focused on articles published in English between 2018 and 2023 to ensure the inclusion of recent research findings.
Due to the extensive number of articles retrieved from our searches, we focused our evaluation on the first 72 articles presented in the search results. These articles were sorted by relevance to the research topic, ensuring that the most pertinent sources were considered for our systematic literature review. This comprehensive search strategy aimed to guarantee a diverse and extensive selection of articles, enabling us to conduct a rigorous and insightful analysis in our research review.
In addition, to extend the search and find more relevant literature, we also performed a hand-searching process in which we combined several keywords related to the underlying research.
To establish inclusion and exclusion criteria, we adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) procedures as outlined by Moher et al. in 2009 [13]. This approach ensured that we gathered a diverse and comprehensive selection of articles for our research review. Figure 4 illustrates the structured PRISMA framework that guided our process for selecting studies.
The criteria are integrated to ensure that the selected studies align with the specific boundaries and objectives of the research topic. All articles were included based on their relevance to the field of MIS, with a particular focus on DL methods. The selection criteria were designed to effectively address the research questions, ensuring that they could be clearly interpreted and used to accurately categorize the pertinent studies.
In accordance with Figure 4, an initial total of 1069 articles were gathered from the chosen databases. Subsequently, we excluded articles published prior to 2018, removing 63 articles. Moreover, 839 more articles were excluded for various reasons, including duplication, lack of relevance to the research scope, content outside the medical domain, utilization of techniques other than DL, and issues related to their quality. Additionally, we added 15 papers through hand searching. In this phase, the methods and conclusions sections of these articles were examined, leading to the exclusion of 110 articles.
Following this process, the remaining texts were subjected to an in-depth reading to achieve a more comprehensive understanding. In this final stage, a careful analysis was performed to validate the relevance of each study, ultimately leading to the examination of 72 articles.
As depicted in Figure 4, our research process involved a rigorous and exhaustive screening of available literature, followed by a meticulous analysis of each article. Through this diligent process, we identified a total of 72 articles that unequivocally met our predefined criteria, and these were consequently chosen for inclusion in this study. These selected articles, carefully curated to ensure their relevance and quality, serve as the foundation upon which our research findings are built.
In Section IV, we document the review and address our research questions.

4. Analysis of the Papers

In this section, we provide an analysis of the papers included in the review. Section IV.A focuses on papers that address RQ1 and RQ2, while Section IV.B covers papers addressing RQ3.
Table 1 contains a list of abbreviations used in this section.

4.1. RQ1 & RQ2

In this section, we address two key research questions to comprehensively explore the landscape of MIS in the context of DL techniques and their applications in biology research.
RQ1: What are the state-of-the-art deep learning techniques for microscopic image segmentation and their primary applications?
RQ2: How does microscopic image segmentation contribute to the analysis of cells, nuclei, and tissues in biology research?
In the context of MIS, there are three distinctive levels of analysis, each tailored to address specific research and diagnosis needs. The first level is cell segmentation, which involves the precise delineation and categorization of individual cells within an image. This allows researchers to study cell morphology, spatial distribution, and behavior under various conditions.
Moving deeper, nucleus segmentation represents the second level. Here, the focus is on accurately identifying and segmenting the nucleus within each cell. This level of segmentation is pivotal in understanding genetic and cellular processes, as the nucleus houses the cell's genetic material.
The third level, tissue segmentation, extends the analysis to a broader scale. It entails the partitioning of an image into different tissue types or regions, providing insights into the composition and structure of the tissue sample. Tissue segmentation is vital in applications like disease diagnosis, histology, and pathology, enabling the identification of various tissue components, such as epithelial and connective tissues, blood vessels, and tumors.
These three levels of MIS, namely cell, nucleus, and tissue segmentation, collectively contribute to a comprehensive understanding of biological structures and processes, catering to diverse research objectives in fields such as biology and pathology.
To cover these three levels, we provide an overview of research findings specific to each level, as outlined in Tables 2, 3, and 4 for cell segmentation, nucleus segmentation, and tissue segmentation, respectively.

4.1.1. Cell Segmentation

Table 2 provides an overview of selected studies focused on cell segmentation, reflecting the diversity of methodologies and applications in this domain. As depicted in the table, distinct studies present diverse approaches tailored for specific tasks, primarily emphasizing either semantic or instance segmentation, with some studies adopting a hybrid approach incorporating both techniques.
In this section, we examine various studies, categorizing them based on their approaches to semantic and instance segmentation.
Starting with a focus on semantic segmentation, in their publication [14], the authors introduced a method derived from the GAN approach. One notable strength of this method is that it avoids the need to formulate a loss function during the optimization process. The approach demonstrates promising segmentation results on the H1299 fluorescence microscopy dataset [15]. The code for this work is openly accessible at: https://github.com/arbellea/DeepCellSeg.git.
Additionally, in [16], the authors introduced a workflow employing a DNN for cell segmentation specifically applied to PCI. The proposed pipeline involves three stages: the first stage focuses on formulating PCI, the second stage utilizes a DNN for image restoration, and the third stage highlights the advantages of artifact-free images for segmentation. The evaluation was conducted on an adapted dataset of phase-contrast microscopy image sequences. The results demonstrated favorable outcomes compared to several SOTA approaches, achieving an ACC of 0.908, an IoU of 0.4698, and a Dice coefficient of 0.6859.
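Since ACC, IoU, and Dice recur as evaluation metrics throughout the reviewed studies, the following small sketch (our own helper functions, for illustration) shows how IoU and Dice are typically computed for binary segmentation masks.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union (Jaccard index) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient; related to IoU by Dice = 2*IoU / (1 + IoU)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(pred, gt), dice(pred, gt))  # 0.5, 0.666...
```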
In [17], the authors introduced an improved U-net algorithm named McbUnet, which incorporates mixed convolution blocks, combining the advantages of U-Net and residual learning. The effectiveness of this proposed approach was validated using the 2018 Data Science Bowl. McbUnet demonstrated outstanding performance compared to standard U-net, MultiResUNet [18], and CE-NET [19], achieving an ACC of 0.956 and an IoU of 0.816.
In [20], DOLG-NeXt was introduced in the context of cell contour segmentation. DOLG-NeXt incorporates an SE-Net-driven ConvNeXt architecture, coupled with multi-scale feature aggregation through the DOLG module. This is followed by the inclusion of a channel attention mechanism, aiming to capture high-level feature representations that preserve both spatial and channel information. DOLG-NeXt demonstrated superior performance compared to other SOTA architectures, including U-Net and Transformer-based variants, across four benchmark public datasets representing electron microscopy, colonoscopy, fluorescence, and retinal modalities. On the ISBI 2012 [21], CVC-ClinicDB 2018 [22], 2018 Data Science Bowl [23], and DRIVE datasets, DOLG-NeXt achieved remarkable Dice scores of 0.958, 0.951, 0.947, and 0.848, along with mean IoU scores of 0.901, 0.918, 0.889, and 0.735, respectively.
In [24], a different application, which centers on semantic cell segmentation, is introduced as GRUU-Net. This framework integrates the iterative refinement of feature maps through a GRU with multi-scale feature aggregation using a U-net. To further enhance training robustness and segmentation performance, the authors introduced a novel normalized focal loss designed for a momentum-based optimizer. Despite being characterized by a reduced number of parameters, the proposed network achieved superior or competitive results across the majority of the used datasets. Notably, the authors trained the network using only a few example images and did not employ hand-crafted weighting of the cross-entropy loss. For instance, on the glioblastoma dataset [25], GRUU-Net demonstrated a favorable Dice score of 0.933, surpassing the performance of both U-net and ASPP-Net [26].
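For reference, the sketch below implements the standard (unnormalized) focal loss for binary segmentation, the idea that GRUU-Net's normalized variant builds upon; the exact normalization of [24] is not reproduced here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard focal loss for binary segmentation: down-weights easy
    pixels by (1 - p_t)^gamma so training focuses on hard examples.
    (The normalized variant of [24] builds on this idea but is not
    reproduced here.)"""
    p = torch.sigmoid(logits)
    p_t = torch.where(targets > 0.5, p, 1 - p)
    ce = F.binary_cross_entropy_with_logits(logits, targets,
                                            reduction="none")
    return ((1 - p_t) ** gamma * ce).mean()

logits = torch.randn(2, 1, 64, 64)
targets = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(focal_loss(logits, targets))
```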
In [27], the authors presented SBU-net, aiming to improve segmentation performance by incorporating perceptual features such as saliency and ballness. This innovative approach yields superior results when applied to bright-field microscopic images. In-depth insights into its effectiveness were gained through a comprehensive evaluation, comparing SBU-net with established models including U-net, U-net++ [28], Link-net [29], and Attention U-net. Experimental results highlight the exceptional performance of SBU-net, demonstrated by significant enhancements in both IoU and Dice metrics compared to SOTA models. It attained a mean IoU of 0.804 and 0.829, along with mean Dice scores of 0.891 and 0.906, respectively, on two publicly available bright-field datasets of T cells and pancreatic cancer cells. To evaluate the model's ability to generalize across different microscopy types, the authors conducted tests on a fluorescence dataset.
Additionally, the incorporation of pretrained networks proves advantageous in the case of semantic cell segmentation. In their work [30], the authors proposed Aura-net, which integrates a pre-trained ResNet-18 with an Attention U-net and undergoes training utilizing an AC loss. The authors conducted experiments on three publicly available PC microscopy image datasets. The results showcased that Aura-net outperformed SOTA approaches such as the standard U-net, CE-net, and Attention-net [31]. The proposed method achieved Dice scores of 0.846 on the first database, 0.877 on the second database, and 0.818 on the third database. The source code for this proposed approach is available for public access: https://github.com/uhlmanngroup/AURA-Net.
As presented in Table 2, some studies have suggested various approaches for cell (nuclei) segmentation, indicating the segmentation process for both entire cells and their respective nuclei. In this context, starting with the study outlined in [32], the authors introduced AS-UNet tailored for this segmentation task. This framework consists of three parts: an encoder module, a decoder module, and an atrous convolution module. Their experimentation focused on two datasets, namely the MOD dataset [33] and the BNS dataset [34]. Comparative analyses were conducted, pitting the AS-UNet method against other published SOTA models, such as PSPNET [35], ENET [36], SegNET [37], and Link-net. The outcomes underscored the superiority of the AS-UNet algorithm, particularly excelling in scenarios involving multi-cell adhesions and small-sized cells. On the MOD dataset, it achieved an ACC of 0.928; on the BNS dataset, it achieved an ACC of 0.968.
In the same context, a feedback attention network called FANet [38] was proposed. It utilizes information from each training epoch to refine the prediction maps in subsequent epochs, allowing the architecture to self-rectify predicted masks. This self-correction mechanism contributes to accurate and consistent segmentation results across diverse datasets. The performance of FANet was comprehensively evaluated against SOTA DL methods on seven publicly available biomedical imaging datasets. The source code for FANet is openly accessible at: https://github.com/nikhilroxtomar/FANet.
In the cited reference [39], the authors introduced UNet++, an algorithm presenting a deeply-supervised encoder-decoder network architecture tailored for medical image segmentation. This architecture establishes connections between the encoder and decoder sub-networks through nested, dense skip pathways, aimed at minimizing the semantic gap in their feature maps. The performance of UNet++ was assessed in contrast to U-Net and wide U-Net architectures across various medical image segmentation tasks. These tasks encompassed nodule segmentation in low-dose CT scans of the chest, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. The paper extensively compared the performance of UNet++ with U-Net and wide U-Net, highlighting the superior segmentation results achieved by UNet++. The code is available: https://github.com/Nested-UNet.
Turning now to instance segmentation, in [40] the authors proposed GeneSegNet, which employs a recursive training strategy to handle noisy training labels. In this study, GeneSegNet's performance was systematically assessed by benchmarking it against five alternative methods: the Watershed algorithm, Cellpose [41], JSTA [42], Baysor [43], and Baysor (prior) [43]. The comparative analysis revealed that GeneSegNet outperforms existing methods in cell segmentation by effectively leveraging gene expression information and optimizing the use of imaging data. GeneSegNet produces more accurate cell boundaries, encompassing a greater number of RNA reads within cells, while mitigating the issue of oversegmentation. The code is available at: https://github.com/BoomStarcuc/GeneSegNet.
In [44], a robust framework for cell instance segmentation was introduced. This framework, built upon Mask R-CNN, is designed to generate cell segmentations without the need for additional post-processing steps. To enhance the model's capability to learn segmentation boundaries, the authors incorporated a Shape-Aware Loss, a distance-based pixel-wise weighted cross-entropy loss. The proposed framework exhibits strong performance, surpassing other models mentioned in the paper, achieving IoU values of 0.919 and 0.949 for the DIC-C2DH-HeLa and PhC-C2DH-U373 datasets, respectively; both datasets are available at: http://celltrackingchallenge.net/2d-datasets.
In [45], an approach integrating a convolutional LSTM with the U-Net architecture was used for instance cell segmentation and tracking in time-lapse microscopy. The integration of spatio-temporal considerations in this method enhances its capability to accurately delineate and track individual cells over consecutive frames in time-lapse microscopy sequences. The method's performance was evaluated using the Cell Tracking Challenge, resulting in SOTA outcomes: it achieved the top position on the Fluo-N2DH-SIM+ dataset and the second position on the DIC-C2DH-HeLa dataset. The code for this work is freely accessible at: https://github.com/arbellea/LSTM-UNet.git.
The authors in [46] proposed an attentive instance segmentation method that combines a single-shot multi-box detector (SSD) and a U-Net. It employs attention mechanisms in both the detection and segmentation modules to focus on useful features. Quantitative and qualitative results show that the proposed approach achieves higher ACC and faster speed compared to SOTA methods. The code of this work is available at: https://github.com/yijingru/ANCIS-Pytorch.
The study outlined in [47] introduces CellT-Net. The efficacy of this approach was assessed using the LIVECell and Sartorius datasets. To demonstrate the prowess of CellT-Net, the authors replicated several SOTA object detection and segmentation models using the Detectron framework. Specifically, they recreated four one-stage models (SSD [48], RefineDet [49], RetinaNet [50], and CornerNet [51]) as well as three two-stage models, including Faster R-CNN, Mask R-CNN, and Cascade Mask R-CNN [52]. The results indicate that CellT-Net surpasses SOTA models, particularly in addressing challenges inherent in the characteristics of cell datasets.
In the study [53], a DL model, leveraging cGANs, was introduced for instance cell segmentation. This approach involves creating synthetic masks through a GAN, specifically StyleGAN2-ada, and generating corresponding synthetic microscopy images using image-to-image translation (pix2pix). This method explicitly generates labeled masks, providing versatility for use in various tasks beyond instance segmentation.
The authors in [54] introduced an algorithm that integrates DL with thresholding and watershed-based segmentation. This strategy resulted in an 86% similarity to the ground-truth segmentation in the identification and separation of cells, with a good average ACC of 0.84. However, the algorithm exhibited varying performance levels across different datasets, especially in cases where lower segmentation quality was observed due to increased variability in cell shape and appearance.
The paper referenced [55] introduces a box-based cell instance segmentation method that integrates keypoint detection with individual cell segmentation. The framework consists of two main branches: a keypoints detection branch and an individual cell segmentation branch, employing a ResNet-50 Conv1-4 as the backbone network. The method identifies five pre-defined points of a cell through keypoints detection, and these points are then organized using a keypoint graph to derive the bounding box for each cell. Within these bounding boxes, cell segmentation is executed on the feature maps. The effectiveness of the proposed method is validated on two cell datasets exhibiting distinct object shapes, showcasing its superior performance compared to other instance segmentation techniques. Qualitative results further affirm the efficacy of the proposed method. The code is available at: https://github.com/yijingru/KG_Instance_Segmentation.
Next, we introduce the selected studies that have proposed architectures for both semantic and instance segmentation. The study in [56] explored various U-Net architectures, including Attention U-Net and Residual Attention U-Net, to identify the most suitable architecture for living cell segmentation. The dataset used in this research comprises bright-field transmitted light microscopy images of HeLa cells acquired from different time-lapse experiments. The Residual Attention U-Net demonstrated the best performance, achieving a mean IoU of 0.953 and a mean Dice coefficient of 0.975.
In [57], the authors introduced the 3DCellSeg framework designed for both semantic and instance cell segmentation. Experiments on cell segmentation were conducted across four distinct cell datasets. The results demonstrate that 3DCellSeg surpasses the baseline models on the ATAS [58], HMS, and LRP [59] datasets, achieving overall accuracies of 95.6%, 76.4%, and 74.7%, respectively. Additionally, the framework achieves an ACC comparable to baselines on the Ovules dataset [60], with an overall ACC of 82.2%. The code is available at: https://github.com/AntonotnaWang/3DCellSeg.
Additionally, the CS-Net network [61] was applied to cell segmentation. Comparative results with leading lightweight models reveal that the proposed model achieves a more favorable balance between segmentation performance and computational complexity. The code is available at: https://github.com/luozhengrong/CS-Net.
Moreover, it is crucial to underscore pertinent research studies that have employed segmentation techniques for cell counting. Cell counting, defined as the process of determining the number of cells within an image or a designated region of interest, plays a pivotal role in various scientific investigations. For instance, in [62], the authors introduced MSCA-U-Net, a cell segmentation method specifically tailored for automatic cell counting via density regression. To demonstrate the effectiveness of their algorithm, the authors conducted a thorough comparison with SOTA methods. The evaluation encompassed three datasets: VGG CELL [63], MBM CELL [64], and ADI CELL [65]. In preparing their inputs for the fully convolutional network, the authors employed preprocessing techniques, including resizing and patch division. These measures were crucial when dealing with input images of varying dimensions and cell densities, especially in the context of extremely high-resolution images featuring high cell densities. Data augmentation was additionally applied to enhance the model's robustness to different cell orientations. For the evaluation, MAE was employed as the counting metric. Notably, the proposed network demonstrated the lowest MAE values (2.4, 8.0, 11.5) for the VGG, MBM, and ADI datasets, respectively. In terms of cell detection, the proposed method achieved commendable Precision, Recall, and F1-score values.
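The density-regression idea behind MSCA-U-Net can be summarized as follows: the network predicts a non-negative density map whose integral equals the expected cell count, and MAE measures the counting error. The sketch below is illustrative; `predict_density` is a hypothetical stand-in for a trained regression network, not part of any reviewed codebase.

```python
import numpy as np

# Density-regression counting: a network is trained to predict a density
# map whose integral over any region equals the expected number of cells
# in that region; the total count is simply the sum of the map.
# `predict_density` is a hypothetical stand-in for a trained FCN.
def count_cells(image, predict_density):
    density_map = predict_density(image)   # shape (H, W), non-negative
    return float(density_map.sum())

def mean_absolute_error(pred_counts, true_counts):
    pred = np.asarray(pred_counts, dtype=float)
    true = np.asarray(true_counts, dtype=float)
    return float(np.abs(pred - true).mean())

# Toy check with a fake "density map" that integrates to 3 cells.
fake = lambda img: np.full((4, 4), 3 / 16.0)
print(count_cells(np.zeros((4, 4)), fake))       # -> 3.0
print(mean_absolute_error([3.0, 10.0], [3, 8]))  # -> 1.0
```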
[66] introduces SAU-Net as an innovative approach for cell counting, specifically designed for application in both 2D and 3D Microscopy Images. The network extends the U-Net architecture by incorporating a Self-Attention module and integrating Batch Normalization after every convolution and deconvolution layer in U-Net. SAU-Net's versatility in handling both 2D and 3D images is a notable enhancement. The evaluation of SAU-Net encompassed five public datasets: VGG, MBM, ADI, DCC, and MBC. The proposed method demonstrated impressive precision values (99.94, 88.76, 88.57, 99.52, and 92.52) for VGG, MBM, ADI, DCC, and MBC datasets, respectively. The source code for this work is accessible at: https://github.com/mzlr/sau-net.
Table 2. Summary of Cell Segmentation Studies in the Literature.
| Reference | Publication year | Method | Task | Dataset(s) | Instance/Semantic/Both | Code availability |
|---|---|---|---|---|---|---|
| [14] | 2018 | GAN | Cell segmentation | H1299 | Semantic | ✓ |
| [16] | 2023 | DNN | Cell segmentation | Phase-contrast microscopy image sequences (mouse muscle progenitor cells) | Semantic | × |
| [17] | 2020 | McbUnet | Cell segmentation | 2018 Data Science Bowl | Semantic | × |
| [20] | 2023 | DOLG-NeXt | Cell contour segmentation | DRIVE; CVC-ClinicDB; 2018 Data Science Bowl; ISBI 2012 | Semantic | × |
| [24] | 2019 | GRUU-Net | Cell segmentation | DIC-C2DH-HeLa; Fluo-C2DL-MSC; Fluo-N2DH-GOWT1; Fluo-N2DH-HeLa; PhC-C2DH-U373; PhC-C2DL-PSC | Semantic | × |
| [27] | 2023 | SBU-net | Cell segmentation | Mouse CD4+ T cells; pancreatic cancer cells; MCF10DCIS.com cells labeled with Sir-DNA | Semantic | × |
| [30] | 2021 | Aura-net | Cell segmentation | Microscopy image datasets from the Boston University Biomedical Image Library | Semantic | ✓ |
| [32] | 2019 | AS-UNet | Cell (nuclei) segmentation | MOD; BNS | Semantic | × |
| [38] | 2022 | FANet | Cell (nuclei) segmentation | Kvasir-SEG; CVC-ClinicDB; 2018 Data Science Bowl; ISIC 2018; DRIVE; CHASE-DB1; EM | Semantic | ✓ |
| [39] | 2018 | UNet++ | Cell (nuclei) segmentation | Microscopy images; colonoscopy videos; liver CT scans; lung nodules | Semantic | ✓ |
| [40] | 2023 | GeneSegNet | Cell segmentation | Human non-small-cell lung cancer (NSCLC); mouse hippocampal area CA1 | Instance | ✓ |
| [44] | 2021 | Mask R-CNN with Shape-Aware Loss | Cell segmentation | DIC-C2DH-HeLa; PhC-C2DH-U373 | Instance | × |
| [45] | 2019 | C-LSTM with U-Net | Cell segmentation | Fluo-N2DH-SIM+; DIC-C2DH-HeLa; PhC-C2DH-U373 | Instance | ✓ |
| [46] | 2019 | Attentive neural cell instance segmentation | Cell segmentation | 644 neural cell images from time-lapse microscopic videos of rat CNS stem cells | Instance | ✓ |
| [47] | 2023 | CellT-Net | Cell segmentation | LIVECell; Sartorius | Instance | × |
| [53] | 2023 | cGAN-based deep learning | Cell segmentation | Salivary gland tumor; fallopian tube biopsy | Instance | × |
| [54] | 2018 | SCWCSA | Cell segmentation | Images of five cellular assays in 96-well microplates | Instance | × |
| [55] | 2019 | Box-based method | Cell segmentation | 644 images sampled from time-lapse microscopic videos of rat CNS stem cells | Instance | ✓ |
| [56] | 2023 | Residual Attention U-Net | Cell and tissue segmentation | Bright-field transmitted light microscopy images | Both | × |
| [57] | 2022 | 3DCellSeg pipeline | Cell segmentation | ATAS; HMS; LRP; Ovules | Both | ✓ |
| [61] | 2021 | CS-Net | Cell segmentation | EPFL; Kasthuri++; CPM-17 | Both | ✓ |
| [62] | 2023 | MSCA-UNet (density regression) | Cell counting | Synthetic bacterial; modified bone marrow; human subcutaneous adipose tissue | – | × |
| [66] | 2022 | SAU-Net | Cell counting | Synthetic fluorescence microscopy; modified bone marrow; human subcutaneous adipose tissue; Dublin Cell Counting; 3D mouse blastocyst | – | ✓ |
| [67] | 2021 | Concatenated fully convolutional regression network | Cell counting | Synthetic bacterial cells; bone marrow cells; colorectal cancer cells; human embryonic stem cells | – | × |
In [67], an application of cell counting was explored via a concatenated fully convolutional regression network. Experimental studies conducted on four datasets, including synthetic bacterial cells [68], bone marrow cells [63], colorectal cancer cells [69], and human embryonic stem cells [65], highlight the superior performance of the proposed method.
To sum up, this section delved into various studies on cell segmentation, which can broadly be categorized into semantic and instance cell segmentation. In the context of semantic segmentation, AS-UNet stands out, excelling in scenarios with multi-cell adhesions and small-sized cells, achieving ACCs of 0.968 and 0.928 on BNS and MOD datasets, respectively. GAN-inspired methods presented a novel approach, avoiding the need for a formulated loss function during optimization. MSCA-UNet specifically targeted automatic cell counting, demonstrating effectiveness across various datasets. A DNN workflow applied to Phase Contrast Imaging showcased favorable outcomes in ACC, IoU, and Dice coefficient. SAU-Net introduced an innovative approach for cell counting in both 2D and 3D Microscopy Images, exhibiting versatility across different datasets.
On the other hand, in the context of instance cell segmentation, a Mask R-CNN-based framework proved robust, achieving high IoU values on diverse datasets. LSTM-UNet integrated convolutional LSTM with U-Net, achieving SOTA results on the Cell Tracking Challenge. GAN-based methods for instance cell segmentation utilized regular and conditional GANs, effectively simulating distribution, shape, and appearance of objects.

4.1.2. Nucleus Segmentation

Table 3 offers a summary of the selected studies focused on nucleus segmentation, showcasing the range of methodologies and applications for this investigation.
In this section, we examine various studies, categorizing them based on their approaches to semantic and instance segmentation.
Starting with semantic segmentation, in [70], NucleiSegNet, a semantic architecture specifically designed for nucleus segmentation in H&E stained liver cancer histopathology images, was introduced. This DL architecture exhibited superior performance, as indicated by higher F1-score and JI scores, in comparison to some recent SOTA models. The source code for the proposed model is accessible at https://github.com/shyamfec/NucleiSegNet.
In [71], a recent network called SAC-Net was introduced for semantic nucleus segmentation on histopathology image datasets, utilizing point annotations. The network exhibited highly competitive performance in cell nuclei segmentation across three public datasets. The source code for SAC-Net is available at: https://github.com/RuoyuGuo/MaskGA_Net.
In [72], the authors introduced GSN-HVNET, a model designed for semantic segmentation and classification. Experimental results showcased the superiority of the proposed model over other SOTA models such as Hover-Net [73], Micro-Net [74], DIST [75], and Mask-RCNN. GSN-HVNET demonstrated improvements in both segmentation and classification ACC while also maintaining high computational efficiency.
An effective method for semantic nucleus segmentation, FRE-Net, was introduced in [76]. The proposed approach demonstrated outstanding performance across all four datasets, with Dice coefficients reaching 0.8563, 0.8183, 0.9222, and 0.9220 on the TNBC, MoNuSeg, KMC, and GlaS datasets, respectively. Notably, the method exhibited superior boundary ACC and reduced instances of sticking compared to other end-to-end segmentation methods. These results underscore the capability of the FRE-Net method to outperform other SOTA segmentation methods. The code is available at: https://github.com/hxp2396/FRE-Net.
In [77], the authors introduced Kidney-SegNet for semantic nucleus segmentation in histology images. The experiments showed that Kidney-SegNet exhibited very efficient computational complexity and memory requirements compared to existing SOTA DL methods. The source code of the proposed network is available at https://github.com/Aaatresh/Kidney-SegNet.
In [78], the authors introduced AlexSegNet for nucleus segmentation. AlexSegNet is constructed upon the AlexNet model's encoder-decoder framework. In the encoder section, it combines feature maps along the channel dimension to accomplish feature fusion. The decoder section employs a skip structure to integrate low- and high-level features, ensuring effective nucleus segmentation. The experimental findings demonstrated that AlexSegNet exhibited superior performance, particularly in terms of Recall, Precision, and F1-score. For the 2018 Data Science Bowl dataset, it achieved values of 0.931, 0.923, and 0.916, respectively. On the TNBC dataset, it attained values of 0.542, 0.886, and 0.6688, respectively.
Additionally, various methodologies have been suggested in the context of instance segmentation.
In [79], the authors introduced ASW-Net for nucleus segmentation. The experiments were conducted using a benchmark dataset, specifically the BBBC039 dataset [80], along with a ganglioneuroblastoma image set [81]. To assess the prediction performance of ASW-Net, comparisons were made against CellProfiler [82], U-Net, and SW-Net (ASW-Net without attention gates), using ground truth as the baseline. The experimental results indicated that ASW-Net achieved satisfactory ACC in classification, even when confronted with an insufficient number of labeled training samples. The pre-trained model and accompanying resources are available at: https://github.com/Liuzhe30/ASW-Net.
In [83], a pipeline combining an FPN with a U-net was introduced. It underwent evaluation and demonstrated superior performance compared to SOTA methods on two datasets, namely the 2018 Data Science Bowl and MoNuSeg. The source code for the proposed method will be made available at: https://github.com/QUAPNH/Nucleiseg.
In [84], a benchmark for instance nucleus segmentation was introduced. The authors conducted a comparative analysis of the segmentation effectiveness of five DL architectures and two conventional algorithms for segmenting nuclear images of immunofluorescence-stained samples. The DL architectures were categorized into two groups: U-Net architectures (U-Net, U-Net with a ResNet34 backbone (U-Net ResNet), and U-Net based on transformed image representation (Cellpose)), and instance-aware segmentation architectures (Mask R-CNN, KG instance segmentation). The code is available at: https://github.com/perlfloccri/NuclearSegmentationPipeline.
In [85], the authors introduced VRegNet, a fully convolutional regression network designed for nucleus detection in a cardiac embryonic dataset. This approach presented a combination of nuclei segmentation and centroid-regression networks, aiming to enhance the detection of nuclei in large 3D fluorescence datasets. It demonstrated high ACC in detecting centroids in both intact quail embryonic hearts and the mouse brain stem. Notably, this success was achieved even in tissues with clustered nuclei of diverse shapes, sizes, and fluorescent intensities. The performance of VRegNet was compared with different methods; the architecture achieved a precision of 0.950, a recall of 0.935, and an F1-score of 0.942.
In [86], the authors introduced RIC-Unet, a network designed for instance nucleus segmentation. RIC-Unet was compared with two traditional segmentation methods, CP and Fiji, as well as two original CNN methods, CNN2 and CNN3. Additionally, a comparison with the original U-Net was conducted using the TCGA dataset.
In [87], the authors introduced TSFD, a network designed for instance nucleus segmentation. This proposed network demonstrated superior performance compared to SOTA networks, including StarDist, Micro-Net, Mask-RCNN, Hover-Net, and CPP-Net, on the PanNuke dataset. The PanNuke dataset comprises 19 different tissue types and 5 clinically important tumor classes. TSFD achieved good mean and binary panoptic quality scores of 50.4% and 63.77%, respectively. The code for TSFD is available at: https://github.com/MrTalhaIlyas/TSFD.
In [88], the authors introduced the NuClick network designed for interactive segmentation of objects in histology images. The applicability of NuClick was demonstrated across various datasets, including the Gland dataset, Nuclei dataset, and cell dataset. The code for NuClick is accessible at: https://github.com/navidstuv/NuClick.
Additionally, in [89], BAWGNet was introduced for instance nucleus segmentation. Comprehensive experiments conducted on three benchmark histopathology datasets (2018 Data Science Bowl, MoNuSeg, and TNBC) showcased the exceptional segmentation performance of the proposed method, achieving Dice scores of 0.908, 0.857, and 0.785, respectively. The implementation of the proposed architecture is available at: https://github.com/tamjidimtiaz/BAWGNet.
Additionally, [90] introduced ASPPU-Net, a model designed for segmenting instance nuclei. This architecture employs a modified U-Net with atrous spatial pyramid pooling. Experimental results demonstrated that incorporating the ASPPU-Net model with a concave point detection approach resulted in improved ACC for delineating both individual and interconnected nuclei in histopathological images.
In [91], a region-based convolutional network was introduced to address nucleus detection and segmentation challenges. The proposed approach incorporates a GA-RPN module that integrates guided anchoring (GA) into the region proposal network (RPN) to generate candidate proposals optimized for nuclei detection. Additionally, a new branch is introduced to regress the IoU between the detection boxes and their corresponding ground truth, facilitating precise bounding box localization. To address challenges related to undetected adhered and clustered nuclei, a fusioned box score (FBS) is introduced and passed into soft non-maximum suppression (SoftNMS) to retain true positive candidate boxes. The experiments were conducted on two challenging public datasets designed to evaluate an algorithm's generalization across different varieties. The empirical results showcase that the proposed method exhibits superior detection and segmentation capabilities compared to existing SOTA methods. The source code is available at: https://github.com/QUAPNH/NucleiDetSeg.
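For context, the sketch below implements generic Gaussian Soft-NMS, the suppression scheme into which the fusioned box scores are fed; it decays the scores of overlapping boxes rather than discarding them outright, which helps retain adhered or clustered nuclei. This is a textbook version, not the exact FBS-weighted variant of [91].

```python
import numpy as np

def iou_one_to_many(box, boxes):
    # Intersection-over-union of one box against an array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, thresh=0.001):
    """Gaussian Soft-NMS over boxes given as [x1, y1, x2, y2] rows."""
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    keep, idx = [], np.arange(len(scores))
    while len(idx):
        best = idx[np.argmax(scores[idx])]
        keep.append(best)
        idx = idx[idx != best]
        if not len(idx):
            break
        overlap = iou_one_to_many(boxes[best], boxes[idx])
        scores[idx] *= np.exp(-(overlap ** 2) / sigma)  # decay, don't drop
        idx = idx[scores[idx] > thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(soft_nms(boxes, scores))  # overlapping box is down-weighted, not lost
```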
Table 3. Summary of Nucleus Segmentation Studies in the Literature.
| Reference | Publication year | Method | Task | Dataset(s) | Instance/Semantic | Code availability |
|---|---|---|---|---|---|---|
| [70] | 2021 | NucleiSegNet | Nucleus segmentation | KMC liver; Kumar | Semantic | ✓ |
| [71] | 2023 | SAC-Net | Nucleus segmentation | MoNuSeg; TNBC | Semantic | ✓ |
| [72] | 2023 | GSN-HVNET | Nucleus segmentation and classification | CoNSeP; Kumar; CPM-17 | Semantic | × |
| [76] | 2023 | FRE-Net | Nucleus segmentation | TNBC; MoNuSeg; KMC; GlaS | Semantic | ✓ |
| [77] | 2021 | Kidney-SegNet | Nucleus segmentation | H&E kidney tissue images; TNBC breast dataset | Semantic | ✓ |
| [78] | 2023 | AlexSegNet | Nucleus segmentation | 2018 Data Science Bowl; TNBC | Semantic | × |
| [79] | 2022 | ASW-Net | Nucleus segmentation | BBBC039; ganglioneuroblastoma image set | Instance | ✓ |
| [83] | 2020 | FPN with a U-net | Nucleus segmentation | 2018 Data Science Bowl; MoNuSeg | Instance | ✓ |
| [84] | 2021 | Benchmark of DL architectures | Nucleus segmentation | Annotated fluorescence image dataset | Instance | ✓ |
| [85] | 2021 | VRegNet | Nucleus detection | Cardiac embryonic dataset | Instance | × |
| [86] | 2019 | RIC-Unet | Nucleus segmentation | TCGA (The Cancer Genome Atlas) | Instance | × |
| [87] | 2022 | TSFD-Net | Nucleus segmentation | PanNuke | Instance | ✓ |
| [88] | 2020 | NuClick | Nucleus and cell segmentation | Gland dataset; nuclei dataset; cell dataset | Instance | ✓ |
| [89] | 2023 | BAWGNet | Nucleus segmentation | 2018 Data Science Bowl; MoNuSeg; TNBC | Instance | ✓ |
| [90] | 2020 | ASPPU-Net | Nucleus segmentation | TNBC; TCGA | Instance | × |
| [91] | 2022 | CNN | Nucleus detection and segmentation | 2018 Data Science Bowl; MoNuSeg | Instance | ✓ |
| [92] | 2020 | cGAN | Nucleus segmentation | 30 annotated 1000 × 1000 pathology images from seven organs (bladder, colon, stomach, breast, kidney, liver, prostate) | Instance | ✓ |
| [93] | 2019 | DL strategies | Nucleus segmentation | Fluorescence images | Instance | ✓ |
| [94] | 2019 | CIA-Net | Nucleus segmentation | MoNuSeg (seven organs) | Instance | × |
| [95] | 2020 | Bending loss regularized network | Nucleus segmentation | MoNuSeg | Instance | × |
| [96] | 2020 | Instance-aware self-supervised learning | Nucleus segmentation | MoNuSeg 2018 | Instance | × |
| [97] | 2020 | Triple U-net | Nucleus segmentation | MoNuSeg; CoNSeP; CPM-17 | Instance | × |
| [98] | 2022 | Contour Proposal Network | Cell detection and segmentation | NCB (neuronal cell bodies); BBBC039 (nuclei of U2OS cells); BBBC041 (P. vivax, malaria); SYNTH (synthetic shapes) | Instance | ✓ |
Alternative DL strategies, such as cGAN [92], have been put forward for instance nucleus segmentation. Experimental findings indicate that employing a cGAN trained with a combination of synthetic and real data can substantially enhance the ACC of nuclei segmentation in histopathology images. The code of this work is available at: http://github.com/mahmoodlab/NucleiSegmentation.
Furthermore, a comprehensive evaluation framework was introduced in [93], aiming to measure ACC, identify types of errors, and assess computational efficiency. This framework was employed to compare DL strategies for nucleus segmentation in fluorescence images with classical approaches. The code is available at: https://github.com/carpenterlab/2019_caicedo_cytometryA.
In the referenced work [94], the authors introduced CIA-Net, a deep neural network designed for nuclei instance segmentation. The paper introduces an Information Aggregation Module (IAM) that facilitates collaborative refinement of nuclei and contour details by leveraging spatial and texture dependencies through bi-directional feature aggregation. Additionally, a novel smooth truncated loss function is proposed to modulate the perturbation of outliers in loss calculation, enhancing the network's focus on learning informative samples and improving generalization capability. Experimental validation on the 2018 MICCAI Multi-Organ Nuclei Segmentation challenge demonstrates the effectiveness of CIA-Net, which surpasses the 35 other competing teams by a significant margin. CIA-Net achieves a noteworthy F1-score of 0.8485, outperforming other architectures proposed in the literature, such as CNN3 and PA-Net.
In [95], an effective approach is presented, introducing a bending loss regularized network tailored for nuclei segmentation in histopathology images. The bending loss is a key component, imposing penalties based on the curvature of contour points: points with large curvature receive higher penalties and points with small curvature receive smaller ones, mitigating the generation of contours that span multiple nuclei. The proposed method is rigorously validated on the MoNuSeg dataset and showcases superior performance when compared to six recently published SOTA approaches, namely FCN8, U-Net, SegNet, DCAN, DIST, and HoVer-Net, using metrics such as AJI, Dice, RQ, SQ, and PQ. The proposed approach attains the highest overall performance when benchmarked against these methods on a public dataset, and its efficacy is evident in the accurate segmentation and localization of overlapping or touching nuclei.
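To illustrate the underlying idea, the sketch below computes a simple discrete-curvature penalty over an ordered, closed contour; it follows the spirit of the bending loss (sharp turns incur large penalties) but uses a generic curvature estimate rather than the exact formulation of [95].

```python
import numpy as np

def bending_penalty(contour):
    """contour: (N, 2) array of ordered (x, y) boundary points (closed).
    A generic curvature-based penalty: the exterior turning angle at each
    point, normalized by local segment length, is squared and summed, so
    contours that bend sharply (e.g., wrapping around several nuclei)
    are penalized more than smooth ones."""
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)
    v1 = contour - prev_pts          # incoming edge vectors
    v2 = next_pts - contour          # outgoing edge vectors
    cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
    dot = (v1 * v2).sum(axis=1)
    turn_angle = np.arctan2(cross, dot)        # exterior angle per point
    seg_len = np.linalg.norm(v1, axis=1) + 1e-8
    curvature = np.abs(turn_angle) / seg_len   # angle change per unit length
    return float((curvature ** 2).sum())       # quadratic curvature penalty

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(bending_penalty(square))  # four right-angle turns -> moderate penalty
```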
In the referenced work [96], the authors introduced a novel instance-aware self-supervised learning framework for nuclei segmentation, aiming to eliminate the need for manual annotations in DCNNs. To assess the effectiveness of the proposed proxy task, the authors conducted experiments using the publicly available MoNuSeg dataset. The experimental outcomes highlight the substantial improvement achieved by the self-supervised learning approach in enhancing the ACC of nuclei instance segmentation. Notably, the self-supervised ResUNet-101 achieved a new SOTA average Aggregated Jaccard Index (AJI) of 0.706, showcasing the efficacy of the proposed method.
The paper cited as [97] introduces a Hematoxylin-aware CNN model designed for nuclei segmentation, eliminating the need for color normalization. Structured as a Triple U-net, the model comprises an RGB branch, a Hematoxylin branch, and a segmentation branch. The proposed method is assessed on three nuclei segmentation datasets: MoNuSeg, CoNSeP, and CPM-17. Ablation studies are carried out to assess the efficacy of the Hematoxylin-aware model and to understand the impact of various loss configurations.
In the provided reference [98], the paper introduces the Contour Proposal Network (CPN), a framework designed for object instance segmentation utilizing fixed-size contour representations based on Fourier descriptors. CPN is flexible, incorporating various backbone networks, and is trainable end-to-end. The CPN architecture comprises five fundamental building blocks, involving the generation of dense feature maps, object detection through a classifier head, and the creation of explicit contour representations via regression heads. In experimental evaluations on diverse datasets, the CPN framework demonstrates superior instance segmentation ACC compared to U-Net, Mask R-CNN, and StarDist. In particular, CPN with local refinement achieves the highest scores across all datasets. The local refinement additionally enhances average F1-scores, particularly for high thresholds, contributing to improved contour quality. An implementation of the CPN model architecture in PyTorch is made available at: https://github.com/FZJ-INM1-BDA/celldetection.
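To give a feel for the contour representation that CPN regresses, the sketch below encodes a closed contour as a fixed-length set of Fourier descriptors and reconstructs a smooth contour from the truncated spectrum; it is an illustrative helper of our own, not the CPN implementation.

```python
import numpy as np

def fourier_descriptors(contour, order=8):
    """contour: (N, 2) ordered boundary points; returns 2*order+1
    complex coefficients. The contour is treated as a complex signal
    x + iy; keeping only low-frequency Fourier terms gives a compact,
    fixed-length shape encoding that a regression head could predict."""
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z) / len(z)
    # Keep the DC term (position) plus +/- `order` harmonics.
    return np.concatenate([coeffs[:order + 1], coeffs[-order:]])

def reconstruct(coeffs, order=8, n_points=64):
    """Invert the truncated spectrum back into a smooth closed contour."""
    spectrum = np.zeros(n_points, dtype=complex)
    spectrum[:order + 1] = coeffs[:order + 1]
    spectrum[-order:] = coeffs[order + 1:]
    z = np.fft.ifft(spectrum) * n_points
    return np.stack([z.real, z.imag], axis=1)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
recon = reconstruct(fourier_descriptors(circle))  # smooth 64-point circle
```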
In the realm of nucleus segmentation, various methodologies have been explored, categorized into semantic and instance segmentation. On the semantic segmentation front, NucleiSegNet is tailored for nuclei segmentation in H&E stained liver cancer histopathology images, surpassing recent SOTA models. DenseRes-Unet presents a robust semantic nucleus segmentation model with notable performance on the MoNuSeg dataset. Additionally, innovative strategies like CIA-Net demonstrate excellence in nuclei instance segmentation, showcasing effectiveness in the 2018 MICCAI challenge. These advancements collectively contribute to pushing the boundaries of ACC and efficiency in nucleus segmentation tasks, fostering progress in biomedical image analysis. For instance segmentation, approaches like the FPN-with-U-net pipeline and RIC-Unet demonstrate effective instance nucleus segmentation, outperforming traditional methods and even competing CNN models. Benchmarking instance nucleus segmentation involves a comparative analysis of DL architectures, encompassing various U-Net variants and instance-aware segmentation architectures such as Mask R-CNN. Furthermore, [87] introduces TSFD-Net, which excels on the PanNuke dataset with notable mean and binary panoptic quality scores.

4.1.3. Tissue Segmentation

Table 4 presents a summary of the tissue segmentation studies identified in the literature. Our exploration begins with approaches studied for semantic segmentation.
In [99], the authors introduced a novel network that combines image processing techniques, including geometric and color augmentations, with a modified DL-based U-Net for semantic blood vessel segmentation.
A novel image segmentation technique, named RINGS [100], was introduced for the segmentation of prostate glands in histopathological images. Notably, RINGS is the first fully automated method capable of maintaining high sensitivity even in the presence of severe glandular degeneration. It aims to accurately detect prostate glands, providing valuable assistance to pathologists in making precise diagnoses and treatment decisions, and achieved a Dice score of 0.9016.
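For reference, the Dice score reported here (and throughout the reviewed studies) can be computed from two binary masks as follows; this is the standard definition, not code from the RINGS paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```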
In [101], the authors presented ER-Net, a method specifically designed for 3D vessel segmentation. Notably, ER-Net incorporates a feature selection module that adaptively selects discriminative features from the encoder and decoder simultaneously. This selective process reinforces the importance of edge voxels, leading to a significant improvement in segmentation performance. The method was thoroughly validated on four publicly accessible datasets, and the experimental results indicate that ER-Net generally outperforms other SOTA algorithms across various metrics. The implementation code is available at: https://github.com/iMED-Lab/ERNet.
In [102], the authors introduced MDC-Net, a technique designed for nucleus segmentation in digital pathology images. The method employs a deep fully convolutional neural network and integrates distance maps and contour information to effectively separate touching nuclei. Experiments on different datasets demonstrate the superiority of MDC-Net in terms of AJI, F1-score, and Hausdorff distance.
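To make the distance-map and contour inputs concrete, the sketch below derives both from an instance-labeled mask using SciPy. Exact target definitions vary between papers, so this is an illustrative approximation rather than MDC-Net's recipe.

```python
import numpy as np
from scipy import ndimage

def distance_and_contour(instance_mask: np.ndarray):
    """Derive distance-map and contour targets from a label image.

    instance_mask: integer label image, 0 = background, 1..K = nuclei.
    Returns (distance_map, contour_map).
    """
    distance_map = np.zeros(instance_mask.shape, dtype=np.float32)
    contour_map = np.zeros(instance_mask.shape, dtype=bool)
    for label in np.unique(instance_mask):
        if label == 0:
            continue
        obj = instance_mask == label
        dist = ndimage.distance_transform_edt(obj)
        if dist.max() > 0:
            distance_map[obj] = dist[obj] / dist.max()  # per-nucleus normalization
        eroded = ndimage.binary_erosion(obj)
        contour_map |= obj & ~eroded                    # 1-pixel boundary ring
    return distance_map, contour_map
```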
In the context of instance segmentation, the work described in [73] introduces HoVer-Net, a method designed for simultaneous nuclear segmentation and classification in histology images. HoVer-Net capitalizes on the instance-rich information embedded in the horizontal and vertical distances from nuclear pixels to their centers of mass. This proves beneficial in distinguishing clustered nuclei and ensuring precise segmentation, particularly in regions with overlapping instances. The authors made their code available at: https://github.com/vqdang/hover_net.
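The horizontal/vertical distance encoding can be sketched as follows: each labeled nucleus contributes per-pixel offsets from its center of mass, normalized per instance, so that sign flips in the maps mark boundaries between touching nuclei. This follows the spirit of HoVer-Net's targets, not the official implementation.

```python
import numpy as np
from scipy import ndimage

def hover_maps(instance_mask: np.ndarray):
    """Horizontal/vertical distance maps in the spirit of HoVer-Net.

    Each nuclear pixel gets its offset from the instance's center of mass,
    normalized to roughly [-1, 1] per instance; background stays 0.
    """
    h_map = np.zeros(instance_mask.shape, dtype=np.float32)
    v_map = np.zeros(instance_mask.shape, dtype=np.float32)
    for label in np.unique(instance_mask):
        if label == 0:
            continue
        obj = instance_mask == label
        cy, cx = ndimage.center_of_mass(obj)
        ys, xs = np.nonzero(obj)
        dx = xs - cx
        dy = ys - cy
        # Normalize per instance so sign changes mark borders between nuclei.
        h_map[ys, xs] = dx / (np.abs(dx).max() + 1e-7)
        v_map[ys, xs] = dy / (np.abs(dy).max() + 1e-7)
    return h_map, v_map
```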
To sum up the analyzed tissue segmentation studies, the pursuit of accurate and efficient methods has yielded significant advances in both semantic and instance segmentation.
On the semantic side, ER-Net, designed for 3D vessel segmentation, stands out with its feature selection module that adaptively selects discriminative features, leading to improved segmentation performance.
For nuclei within tissue, HoVer-Net and MDC-Net address instance and semantic nucleus segmentation respectively, including the simultaneous segmentation and classification of nuclei in multi-tissue histology images; together they advance both segmentation paradigms in the challenging context of tissue analysis.
Notable contributions also include a vessel U-Net that integrates image processing techniques with a modified U-Net for semantic blood vessel segmentation, and the RINGS algorithm, the first fully automated technique for prostate gland segmentation in histopathological images, which maintains notable sensitivity even under severe glandular degeneration.

4.2. RQ3

In this section, we will examine multiple papers that discuss various tools proposed in the literature for MIS and present a summarized overview of these studies in Table 5. The table includes details for each software/tool, such as the corresponding reference, microscopy image type, website, and associated task.
In [103], the paper introduces DeLTA 2.0, a Python-based workflow that employs DCNNs to analyze images of individual cells on two-dimensional surfaces, facilitating the quantification of gene expression and cell growth. Once trained, this workflow operates autonomously without requiring human input and demonstrates accurate processing of two-dimensional movies, effectively capturing spatial dynamics in a high-throughput manner. The algorithm leverages the U-Net neural network architecture for both segmentation and tracking models. The tracking model utilizes a sigmoid function as the final activation layer and employs a pixel-wise weighted binary cross-entropy loss function to generate a single grayscale output image, where 1's represent tracked cells and 0's denote the background and cells that did not track to the input cell.
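A pixel-wise weighted binary cross-entropy of the kind described here can be written compactly in PyTorch. This is a generic sketch, not DeLTA's own code; the weight map is assumed to be supplied by the data pipeline (e.g., emphasizing boundary pixels).

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits: torch.Tensor,
                 target: torch.Tensor,
                 weight_map: torch.Tensor) -> torch.Tensor:
    """Pixel-wise weighted binary cross-entropy.

    logits: raw network output (before sigmoid), shape (B, 1, H, W).
    target: float tensor, 1 for the tracked cell, 0 for background/other cells.
    weight_map: per-pixel weights of the same shape as target.
    """
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight_map * loss).mean()
```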
The DeepCell application [104] is a web-based tool that offers a scalable and cost-effective solution for DL-powered cellular image analysis, enabling researchers to efficiently analyze extensive imaging datasets. Addressing the challenges DL poses in biological image analysis, such as the requirement for extensive training data and substantial computational resources, the DeepCell Kiosk allocates resources efficiently and scales with the demand for data analysis, thereby reducing analysis time and managing costs effectively.
Moving on, CellPose [41] is another pipeline that facilitates nuclear and cytoplasmic segmentation, available as a web app or for local installation, complete with integrated annotation tools for training. CellPose features a graphical user interface (GUI) with various preprocessing and postprocessing configuration options; however, command line usage is necessary for tasks like training or batch testing on user-specific data.
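For local use, a typical CellPose call through its Python API looks like the following sketch (based on the cellpose 2.x API; the input filename is a placeholder, and the exact signature should be checked against the current documentation).

```python
# Minimal CellPose usage sketch (cellpose >= 2.x Python API; verify against docs).
from cellpose import models
import tifffile

img = tifffile.imread("cells.tif")              # placeholder: any local 2D image
model = models.Cellpose(model_type="cyto")      # generalist cytoplasm model
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
print("detected", masks.max(), "cells")         # masks is an instance label image
```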
Another noteworthy tool is the DeepImageJ plug-in [105], which provides a framework for testing models on researchers' own datasets. It offers an accessible solution designed for non-expert users to execute standard image processing tasks in life-science research by running pre-trained DL models, such as those from the BioImage Model Zoo, within the ImageJ platform. While it facilitates user-friendly sharing of models, DeepImageJ currently only grants access to pre-trained models and lacks a mechanism for users to train models on their own data, a limitation that may pose challenges if existing pre-trained models prove insufficient.
CDeep3M [106] stands out as a cloud-based tool specifically designed for image semantic segmentation tasks, offering pretrained models tailored for electron micrographs. DeepMIB [107], on the other hand, is a deep-learning–based image segmentation plug-in designed for both two- and three-dimensional datasets. It is integrated with the Microscopy Image Browser (MIB), an open-source MATLAB-based image analysis application for light microscopy and electron microscopy. DeepMIB allows users to load datasets, test pretrained models, or even train a model using a graphical user interface (GUI).
HistomicsML2 [108] is an interactive segmentation tool designed specifically for Whole Slide Images (WSIs), tailored to enhance the ACC of semantic segmentation. It is packaged as a Docker container and accessed through a web browser with a GUI, in which biologists annotate their data by dragging and dropping selected patches into corresponding classes. These annotations serve as training data for a DL model employed in image segmentation, forming an active loop: the model is trained on the initial annotations, applied to new data, and refined iteratively as annotation proceeds. After each training step, regions of high uncertainty are displayed as a heatmap, enabling users to annotate these regions for further training and improve segmentation ACC. HistomicsML2 allows users to export results as HDF5 files, which can be further analyzed using other command line tools.
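The active loop just described can be summarized schematically. In the sketch below, `model.fit`, `model.predict_uncertainty`, and `annotate` are hypothetical placeholders standing in for the tool's internal training, uncertainty heatmap, and GUI annotation steps; this is not the HistomicsML2 API.

```python
# Schematic of a HistomicsML2-style active-learning loop (illustrative only).
def active_learning_loop(model, patches, annotate, rounds=5, queries_per_round=10):
    labeled = []                                   # accumulated (patch, label) pairs
    for _ in range(rounds):
        if labeled:
            data, labels = zip(*labeled)
            model.fit(data, labels)                # retrain on current annotations
        scores = [model.predict_uncertainty(p) for p in patches]
        # Rank patches by uncertainty; the GUI surfaces these as a heatmap.
        order = sorted(range(len(patches)), key=lambda i: scores[i], reverse=True)
        for i in order[:queries_per_round]:
            labeled.append((patches[i], annotate(patches[i])))  # human annotation
    return model
```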
InstantDL [109] is a Python-based pipeline designed for segmentation and classification tasks, while NucleAIzer [110] specializes in nuclear segmentation across various image types and offers both web-based and local applications. Both tools rely on command line scripts for image processing, allowing users to configure parameters and execute tasks.
ZeroCostDL4Mic [111] is a compilation of readily available Google Colab notebooks designed for various image analysis tasks. This resource offers a range of Colab notebooks that facilitate the training of models across different tasks and image types.
Ilastik [112], an open-source toolkit for interactive ML, has introduced a beta version for image segmentation using pre-trained deep learning models. Although the installation process for utilizing neural networks with Ilastik is more intricate than typical usage, ongoing documentation efforts aim to simplify this procedure. Furthermore, the Ilastik team is actively working on enhancing capabilities for neural network training.
Scellseg [113] represents an adaptive pipeline tailored for cell segmentation algorithms. It features a style-aware cell segmentation architecture that leverages attention mechanisms and hierarchical information. This unique approach is crafted to optimize the extraction and utilization of style features. Scellseg has proven its state-of-the-art transferability, showcasing advancements over previous tools within the field.
DeepSea [114] provides annotation software: a MATLAB-based tool crafted for cropping and labeling cells and subcellular bodies in cell microscopy images. Specifically designed for segmentation and tracking tasks, DeepSea serves as an effective resource for annotating and processing microscopy data. Additionally, MIA [115] is an open-source DL application tailored for microscopic image analysis, encompassing three primary applications: segmentation, object detection, and classification.
The U-Net plugin [116] is DL software dedicated to cell counting, detection, and morphometry. Additionally, 3DeeCellTracker [117] introduces a DL-based pipeline designed for segmenting and tracking cells in 3D time-lapse images. Moreover, StarDist [118] demonstrates its efficacy in localizing cell nuclei using star-convex polygons, providing a superior shape representation compared to bounding boxes and eliminating the need for shape refinement.
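Applying a pretrained StarDist model takes only a few lines; the sketch below uses names from the stardist Python package (the input filename is a placeholder, and the pretrained-model identifier should be checked against the package documentation).

```python
# StarDist pretrained-model sketch (names from the stardist package).
from stardist.models import StarDist2D
from csbdeep.utils import normalize
import tifffile

img = tifffile.imread("nuclei.tif")                      # placeholder local image
model = StarDist2D.from_pretrained("2D_versatile_fluo")  # fluorescence nuclei model
labels, details = model.predict_instances(normalize(img))
print(labels.max(), "nuclei found")                      # labels is an instance map
```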
The paper cited in reference [119] introduces SAM (Segment Anything for Microscopy), a tool derived from the vision foundation model Segment Anything. It develops dedicated models for microscopy data, improving segmentation ACC, and incorporates annotation tools for interactive segmentation and tracking, accelerating data annotation compared to existing tools. The entire software is encapsulated in a unified Python library covering both training and inference.
Lastly, the BioImage Model Zoo [120] is a centralized repository containing a diverse collection of pre-trained deep learning models specifically designed for bioimage analysis. This resource simplifies access to SOTA models across applications ranging from image segmentation to object detection. Researchers benefit from the convenience of integrating these models into their projects without extensive training efforts, and this centralized hub fosters collaboration and accelerates progress in automated analysis of intricate biological images.

5. Discussion and Conclusions

The present SLR revealed a comprehensive landscape of methodologies and tools employed in the domain of microscopic image segmentation (MIS). Reviewing over 72 articles, the studies covered a broad spectrum of applications, ranging from cell and nucleus segmentation to tissue segmentation, each posing unique challenges and requiring specialized techniques.
  • Cell segmentation
A majority of the reviewed studies focused on cell segmentation, reflecting the critical role of this task in biological and medical applications. The dominance of DL methods here is supported by their proven ability to deliver high ACC and operational efficiency in intricate segmentation tasks. Remarkable strides are evidenced in specific methodologies such as the AS-UNet [32] algorithm, which shows exceptional performance on the BNS dataset. Equally notable is SAU-Net [66], which extends the U-Net framework with self-attention modules, elevating its capability to handle both 2D and 3D microscopy images. These advancements collectively contribute to the refinement of cell segmentation methodologies.
  • Nucleus Segmentation
The literature revealed a growing emphasis on nucleus segmentation, given its significance in pathological analysis and in understanding cellular behavior. Several studies introduced novel architectures such as DOLG-NeXt [20], which outperformed SOTA U-Net and Transformer-based variants across multiple datasets. Equally important, the integration of attention networks, as exemplified in [79], showed promising results in semantic nucleus segmentation, providing a foundation for further research in this direction.
  • Tissue Segmentation
The review also addressed tissue segmentation, crucial for pathology and histology studies. ER-Net, proposed for 3D vessel segmentation, stood out for its adaptive feature selection module, significantly enhancing segmentation performance [101]. Furthermore, RINGS demonstrated a breakthrough in fully automated prostate gland segmentation [100].
  • Integration of DL Tools
The integration of DL tools into existing platforms, as observed in DeepImageJ [105], offered researchers flexibility and accessibility. Nevertheless, limitations, such as the absence of a mechanism for user-specific model training, were identified. Additionally, tools like ZeroCostDL4Mic [111] provided readily available Google Colab notebooks for diverse image analysis tasks, democratizing access to DL capabilities. Moreover, a recent tool, SAM [119], showcases its efficacy in segmenting various microscopy data.
  • Challenges and Future Directions
Although microscopy image analysis has advanced considerably, several challenges point to the need for more focused research. A major problem is the scarcity of labeled datasets, which slows the progress of DL models in this area. In many cases, available datasets are also highly imbalanced, leading to biased models that may not generalize well to unseen data. Leveraging data augmentation techniques and synthetic data generation could help mitigate these issues, underscoring the importance of building larger, more diverse, and well-annotated datasets. Furthermore, the interpretability of increasingly complex models remains a crucial concern, demanding research into methodologies that enhance the transparency and comprehension of their decision-making processes. Explainable AI (XAI) techniques, such as saliency maps and class activation mapping, could make these "black-box" models more understandable, helping researchers see which features influence decisions and, in turn, guiding model improvement based on the features extracted throughout the network.
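As a concrete example of the simplest of these XAI techniques, a plain gradient saliency map for a segmentation network can be obtained in a few lines of PyTorch. This is a generic sketch: the model is assumed to output per-pixel logits, and more elaborate methods (e.g., Grad-CAM) follow the same gradient-based idea.

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Gradient saliency: which input pixels most affect the foreground score?

    image: (1, C, H, W) input; model is assumed to output logits (1, 1, H, W).
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    logits = model(image)
    logits.sum().backward()                # gradient of total foreground evidence
    return image.grad.abs().max(dim=1)[0]  # per-pixel saliency, shape (1, H, W)
```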
Additionally, achieving generalizability across diverse microscopy images calls for innovative techniques that adapt models to the inherent variations in imaging conditions. Transfer learning, where a model trained on one type of data is fine-tuned for use on another, and domain adaptation methods may provide promising avenues for increasing the generalizability of DL models across various microscopy settings.
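A minimal transfer-learning sketch, assuming an ImageNet-pretrained backbone from torchvision is repurposed for a two-class microscopy task: the generic encoder is frozen and only a new, task-specific head is trained. The class count and learning rate are illustrative.

```python
import torch
import torchvision

# Reuse an ImageNet-pretrained encoder and fine-tune only the new head.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
for param in backbone.parameters():
    param.requires_grad = False                 # freeze generic features
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 2)  # new trainable head
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
```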
Looking ahead, a forward-thinking approach in future research should prioritize the establishment of standardized benchmarks, streamlining fair comparisons and systematic evaluations to ultimately drive progress in the resilience and applicability of microscopy image analysis techniques.
Our SLR has endeavored to provide a comprehensive synthesis of the available evidence on MIS. However, it is crucial to acknowledge the potential influence of publication bias on the observed results, as is inherent in the nature of SLRs: studies with statistically significant results are more likely to be published, while studies with negative or non-significant findings may be overlooked, which can skew the overall findings of this review. Addressing this bias in future research will require more transparency in the publication process and greater recognition of studies that report neutral or unexpected results.
To sum up, this SLR provides a comprehensive analysis of the present status in the field of MIS, emphasizing the efficacy of DL methodologies in addressing intricate challenges. The integration of DL with classic image processing techniques could also offer hybrid approaches that combine the strengths of both methods. Noteworthy achievements underscore the increasing reliance on DL for precise and efficient segmentation tasks. Challenges identified underscore the need for ongoing research. As the field moves forward, further development of user-friendly tools and open-source software will be critical to democratizing access to advanced image analysis techniques for broader scientific and medical communities.

Author Contributions

Fatma Krikid led the conceptualization and drafting of the manuscript, Hugo Rositi provided critical insights and revisions, and Antoine Vacavant contributed to the literature review and approved the final version of the manuscript.

Data Availability Statement

The systematic review and meta-analysis conducted in this study did not involve the generation or analysis of specific datasets. Instead, it utilized existing literature sources as the basis for analysis. All methodologies, search strategies, inclusion and exclusion criteria, as well as statistical analyses, are fully documented within the main body of this article.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Kherlopian, A.R.; Song, T.; Duan, Q.; Neimark, M.A.; Po, M.J.; Gohagan, J.K.; Laine, A.F. A review of imaging techniques for systems biology, BMC Syst. Biol. 2008, 2, 74. [Google Scholar] [CrossRef]
  2. Cover, G.S.; Herrera, W.G.; Bento, M.P.; Appenzeller, S.; Rittner, L. Computational methods for corpus callosum segmentation on MRI: A systematic literature review, Comput. Methods Programs Biomed. 2018, 154, 25–35. [Google Scholar] [CrossRef] [PubMed]
  3. Liu, Z.; Jin, L.; Chen, J.; Fang, Q.; Ablameyko, S.; Yin, Z.; Xu, Y. A survey on applications of deep learning in microscopy image analysis, Comput. Biol. Med. 2021, 134, 104523. [Google Scholar] [CrossRef]
  4. Chapaliuk, B.; Zaychenko, Y. Medical image segmentation methods overview, Syst. Res. Inf. Technol. (2018) 72–81. [CrossRef]
  5. Haq, I.U. An overview of deep learning in medical imaging, 2022. [CrossRef]
  6. Xiao, C.; Peng, Z.; Chen, F.; Yan, H.; Zhu, B.; Tai, Y.; Qiu, P.; Liu, C.; Song, X.; Wu, Z.; Chen, L. Mutation analysis of 19 commonly used short tandem repeat loci in a Guangdong Han population, Leg. Med. 2018, 32, 92–97. [Google Scholar] [CrossRef]
  7. Kassim, Y.; Prasath, S.; Glinskii, O.; Glinsky, V.; Huxley, V.; Palaniappan, K. Microvasculature segmentation of arterioles using deep CNN, 2017. [CrossRef]
  8. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015. [CrossRef]
  9. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation, 2014. [CrossRef]
  10. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks, 2014. [CrossRef]
  11. Liu, X.; Song, L.; Liu, S.; Zhang, Y. A Review of Deep-Learning-Based Medical Image Segmentation Methods, Sustainability. 2021, 13, 1224. [CrossRef]
  12. Brereton, P.; Kitchenham, B.A.; Budgen, D.; Turner, M.; Khalil, M. Lessons from applying the systematic literature review process within the software engineering domain, J. Syst. Softw. 2007, 80, 571–583. [Google Scholar] [CrossRef]
  13. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, T.P. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement, PLOS Med. 2009, 6, e1000097. [CrossRef]
  14. Arbelle, A.; Raviv, T.R. Microscopy Cell Segmentation via Adversarial Neural Networks, 2018. http://arxiv.org/abs/1709.05860 (accessed November 10, 2023).
  15. Cohen, A.A.; Geva-Zatorsky, N.; Eden, E.; Frenkel-Morgenstern, M.; Issaeva, I.; Sigal, A.; Milo, R.; Cohen-Saidon, C.; Liron, Y.; Kam, Z.; Cohen, L.; Danon, T.; Perzov, N.; Alon, U. Dynamic Proteomics of Individual Cancer Cells in Response to a Drug, Science. 2008, 322, 1511–1516. [CrossRef]
  16. Han, L.; Su, H.; Yin, Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks, IEEE Trans. Med. Imaging. 2023, 42, 1068–1082. [Google Scholar] [CrossRef]
  17. Huang, C.; Ding, H.; Liu, C. Segmentation of Cell Images Based on Improved Deep Learning Approach, IEEE Access. 2020, 8, 110189–110202. [CrossRef]
  18. Ibtehaz, N.; Rahman, M.S. MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation, Neural Netw. 2020, 121, 74–87. [CrossRef]
  19. Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. CE-Net: Context Encoder Network for 2D Medical Image Segmentation, IEEE Trans. Med. Imaging. 2019, 38, 2281–2292. [Google Scholar] [CrossRef]
  20. Ahmed, M.R.; Fahim, M.A.I.; Islam, A.K.M.M.; Islam, S.; Shatabda, S. DOLG-NeXt: Convolutional neural network with deep orthogonal fusion of local and global features for biomedical image segmentation, Neurocomputing. 2023, 546, 126362. [CrossRef]
  21. Cardona, A.; Saalfeld, S.; Preibisch, S.; Schmid, B.; Cheng, A.; Pulokas, J.; Tomancak, P.; Hartenstein, V. An Integrated Micro- and Macroarchitectural Analysis of the Drosophila Brain by Computer-Assisted Serial Section Electron Microscopy, PLOS Biol. 2010, 8, e1000502. [CrossRef]
  22. Bernal, J.; Tajkbaksh, N.; Sánchez, F.J.; Matuszewski, B.J.; Chen, H.; Yu, L.; Angermann, Q.; Romain, O.; Rustad, B.; Balasingham, I.; Pogorelov, K.; Choi, S.; Debard, Q.; Maier-Hein, L.; Speidel, S.; Stoyanov, D.; Brandao, P.; Córdova, H.; Sánchez-Montes, C.; Gurudu, S.R.; Fernández-Esparrach, G.; Dray, X.; Liang, J.; Histace, A. Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge, IEEE Trans. Med. Imaging. 2017, 36, 1231–1249. [Google Scholar] [CrossRef]
  23. Caicedo, J.C.; Goodman, A.; Karhohs, K.W.; Cimini, B.A.; Ackerman, J.; Haghighi, M.; Heng, C.; Becker, T.; Doan, M.; McQuin, C.; Rohban, M.; Singh, S.; Carpenter, A.E. Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl, Nat. Methods. 2019, 16, 1247–1253. [Google Scholar] [CrossRef]
  24. Wollmann, T.; Gunkel, M.; Chung, I.; Erfle, H.; Rippe, K.; Rohr, K. GRUU-Net: Integrated convolutional and gated recurrent neural network for cell segmentation, Med. Image Anal. 2019, 56, 68–79. [Google Scholar] [CrossRef]
  25. Baltissen, D.; Wollmann, T.; Gunkel, M.; Chung, I.; Erfle, H.; Rippe, K.; Rohr, K. Comparison of segmentation methods for tissue microscopy images of glioblastoma cells, in: 2018 IEEE 15th Int. Symp. Biomed. Imaging ISBI 2018, 2018: pp. 396–399. [CrossRef]
  26. Wollmann, T.; Ivanova, J.; Gunkel, M.; Chung, I.; Erfle, H.; Rippe, K.; Rohr, K. Multi-channel Deep Transfer Learning for Nuclei Segmentation in Glioblastoma Cell Tissue Images, in: A. Maier, T.M. Deserno, H. Handels, K.H. Maier-Hein, C. Palm, T. Tolxdorff (Eds.), Bildverarb. Für Med. 2018, Springer, Berlin, Heidelberg, 2018: pp. 316–321. [CrossRef]
  27. Asha, S.B.; Gopakumar, G.; Subrahmanyam, G.R.K.S. Saliency and ballness driven deep learning framework for cell segmentation in bright field microscopic images, Eng. Appl. Artif. Intell. 2023, 118, 105704. [Google Scholar] [CrossRef]
  28. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation, IEEE Trans. Med. Imaging. 2020, 39, 1856–1867. [Google Scholar] [CrossRef] [PubMed]
  29. Chaurasia, A.; Culurciello, E. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation, in: 2017 IEEE Vis. Commun. Image Process. VCIP, 2017: pp. 1–4. [CrossRef]
  30. Cohen, E.; Uhlmann, V. AURA-net: robust segmentation of phase-contrast microscopy images with few annotations, (2021). http://arxiv.org/abs/2102.01389 (accessed November 15, 2023).
  31. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; Glocker, B.; Rueckert, D. Attention U-Net: Learning Where to Look for the Pancreas, (2018). [CrossRef]
  32. Pan, X.; Li, L.; Yang, D.; He, Y.; Liu, Z.; Yang, H. An Accurate Nuclei Segmentation Algorithm in Pathological Image Based on Deep Semantic Network, IEEE Access. 2019, 7, 110674–110686. [CrossRef]
  33. Kumar, N.; Verma, R.; Sharma, S.; Bhargava, S.; Vahadane, A.; Sethi, A. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology, IEEE Trans. Med. Imaging. 2017, 36, 1–1. [Google Scholar] [CrossRef] [PubMed]
  34. Naylor, P.; Lae, M.; Reyal, F.; Walter, T. Nuclei segmentation in histopathology images using deep neural networks, in: 2017 IEEE 14th Int. Symp. Biomed. Imaging ISBI 2017, IEEE, Melbourne, Australia, 2017: pp. 933–936. [CrossRef]
  35. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network, in: 2017 IEEE Conf. Comput. Vis. Pattern Recognit. CVPR, IEEE, Honolulu, HI, 2017: pp. 6230–6239. [CrossRef]
  36. Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation, (2016). http://arxiv.org/abs/1606.02147 (accessed November 10, 2023).
  37. Badrinarayanan, V.; Handa, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling, (2015). http://arxiv.org/abs/1505.07293 (accessed November 10, 2023).
  38. Tomar, N.K.; Jha, D.; Riegler, M.A.; Johansen, H.D.; Johansen, D.; Rittscher, J.; Halvorsen, P.; Ali, S. FANet: A Feedback Attention Network for Improved Biomedical Image Segmentation, (2022). [CrossRef]
  39. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation, (2018). [CrossRef]
  40. Wang, Y.; Wang, W.; Liu, D.; Hou, W.; Zhou, T.; Ji, Z. GeneSegNet: a deep learning framework for cell segmentation by integrating gene expression and imaging, Genome Biol. 2023, 24, 235. [CrossRef]
  41. Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: a generalist algorithm for cellular segmentation, Nat. Methods. 2021, 18, 100–106. [CrossRef]
  42. Littman, R.; Hemminger, Z.; Foreman, R.; Arneson, D.; Zhang, G.; Gómez-Pinilla, F.; Yang, X.; Wollman, R. Joint cell segmentation and cell type annotation for spatial transcriptomics, Mol. Syst. Biol. 2021, 17, e10108. [Google Scholar] [CrossRef]
  43. Zhong, Y.; Ren, X. Cell segmentation and gene imputation for imaging-based spatial transcriptomics, (2023). [CrossRef]
  44. Lin, S.; Norouzi, N. An Effective Deep Learning Framework for Cell Segmentation in Microscopy Images, in: 2021 43rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBC, IEEE, Mexico, 2021: pp. 3201–3204. [CrossRef]
  45. Arbelle, A.; Raviv, T.R. Microscopy Cell Segmentation via Convolutional LSTM Networks, (2019). [CrossRef]
  46. Yi, J.; Wu, P.; Jiang, M.; Huang, Q.; Hoeppner, D.J.; Metaxas, D.N. Attentive neural cell instance segmentation, Med. Image Anal. 2019, 55, 228–240. [Google Scholar] [CrossRef]
  47. Wan, Z.; Li, M.; Wang, Z.; Tan, H.; Li, W.; Yu, L.; Samuel, D. CellT-Net: A Composite Transformer Method for 2-D Cell Instance Segmentation, IEEE J. Biomed. Health Inform 2023. [CrossRef]
  48. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector, in: B. Leibe, J. Matas, N. Sebe, M. Welling (Eds.), Comput. Vis. – ECCV 2016, Springer International Publishing, Cham, 2016: pp. 21–37. [CrossRef]
  49. Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Single-Shot Refinement Neural Network for Object Detection, (2018). http://arxiv.org/abs/1711.06897 (accessed November 10, 2023).
  50. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection, (2018). [CrossRef]
  51. Law, H.; Deng, J. CornerNet: Detecting Objects as Paired Keypoints, (2019). [CrossRef]
  52. Cai, Z.; Vasconcelos, N. Cascade R-CNN: High Quality Object Detection and Instance Segmentation, (2019). [CrossRef]
  53. Tasnadi, E.; Sliz-Nagy, A.; Horvath, P. Structure preserving adversarial generation of labeled training samples for single-cell segmentation, Cell Rep. Methods. 2023, 3, 100592. [Google Scholar] [CrossRef]
  54. Al-Kofahi, Y.; Zaltsman, A.; Graves, R.; Marshall, W.; Rusu, M. A deep learning-based algorithm for 2-D cell segmentation in microscopy images, BMC Bioinformatics. 2018, 19, 365. [CrossRef]
  55. Yi, J.; Wu, P.; Huang, Q.; Qu, H.; Liu, B.; Hoeppner, D.J.; Metaxas, D.N. Multi-scale Cell Instance Segmentation with Keypoint Graph based Bounding Boxes, (2019). [CrossRef]
  56. Ghaznavi, A.; Rychtáriková, R.; Saberioon, M.; Štys, D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line, Comput. Biol. Med. 2022, 147, 105805. [Google Scholar] [CrossRef]
  57. Wang, A.; Zhang, Q.; Han, Y.; Megason, S.; Hormoz, S.; Mosaliganti, K.R.; Lam, J.C.K.; Li, V.O.K. A novel deep learning-based 3D cell segmentation framework for future image-based disease detection, Sci. Rep. 2022, 12, 342. [Google Scholar] [CrossRef]
  58. Willis, L.; Refahi, Y.; Wightman, R.; Landrein, B.; Teles, J.; Huang, K.C.; Meyerowitz, E.M.; Jönsson, H. Cell size and growth regulation in the Arabidopsis thaliana apical stem cell niche, Proc. Natl. Acad. Sci. 2016, 113, E8238–E8246. [Google Scholar] [CrossRef] [PubMed]
  59. Barro, A.V.; Stöckle, D.; Thellmann, M.; Ruiz-Duarte, P.; Bald, L.; Louveaux, M.; von Born, P.; Denninger, P.; Goh, T.; Fukaki, H.; Vermeer, J.E.M.; Maizel, A. Cytoskeleton Dynamics Are Necessary for Early Events of Lateral Root Initiation in Arabidopsis, Curr. Biol. 2019, 29, 2443–2454.e5. [Google Scholar] [CrossRef]
  60. Tofanelli, R.; Vijayan, A.; Scholz, S.; Schneitz, K. Protocol for rapid clearing and staining of fixed Arabidopsis ovules for improved imaging by confocal laser scanning microscopy, Plant Methods. 2019, 15, 120. [CrossRef]
  61. Peng, J.; Luo, Z. CS-Net: Instance-aware cellular segmentation with hierarchical dimension-decomposed convolutions and slice-attentive learning, Knowl. -Based Syst. 2021, 232, 107485. [Google Scholar] [CrossRef]
  62. Qian, L.; Qian, W.; Tian, D.; Zhu, Y.; Zhao, H.; Yao, Y. MSCA-UNet: Multi-Scale Convolutional Attention UNet for Automatic Cell Counting Using Density Regression, IEEE Access. 2023, 11, 85990–86001. [CrossRef]
  63. Kainz, P.; Urschler, M.; Schulter, S.; Wohlhart, P.; Lepetit, V. You Should Use Regression to Detect Cells, in: N. Navab, J. Hornegger, W.M. Wells, A.F. Frangi (Eds.), Med. Image Comput. Comput.-Assist. Interv. – MICCAI 2015, Springer International Publishing, Cham, 2015: pp. 276–283. [CrossRef]
  64. Cohen, J.P.; Boucher, G.; Glastonbury, C.A.; Lo, H.Z.; Bengio, Y. Count-ception: Counting by Fully Convolutional Redundant Counting, in: 2017 IEEE Int. Conf. Comput. Vis. Workshop ICCVW, 2017: pp. 18–26. [CrossRef]
  65. Minn, K.T.; Fu, Y.C.; He, S.; Dietmann, S.; George, S.C.; Anastasio, M.A.; Morris, S.A.; Solnica-Krezel, L. High-resolution transcriptional and morphogenetic profiling of cells from micropatterned human ESC gastruloid cultures, eLife. 2020, 9, e59445. [CrossRef]
  66. Guo, Y.; Krupa, O.; Stein, J.; Wu, G.; Krishnamurthy, A. SAU-Net: A Unified Network for Cell Counting in 2D and 3D Microscopy Images, IEEE/ACM Trans. Comput. Biol. Bioinform. PP (2021) 1–1. [CrossRef]
  67. He, S.; Minn, K.T.; Solnica-Krezel, L.; Anastasio, M.A.; Li, H. Deeply-supervised density regression for automatic cell counting in microscopy images, Med. Image Anal. 2021, 68, 101892. [Google Scholar] [CrossRef] [PubMed]
  68. Lempitsky, V.; Zisserman, A. Learning To Count Objects in Images, in: Adv. Neural Inf. Process. Syst. Curran Associates, Inc. 2010. https://proceedings.neurips.cc/paper/2010/hash/fe73f687e5bc5280214e0486b273a5f9-Abstract.html (accessed November 10, 2023).
  69. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.-W.; Snead, D.R.J.; Cree, I.A.; Rajpoot, N.M. Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images, IEEE Trans. Med. Imaging. 2016, 35, 1196–1206. [Google Scholar] [CrossRef]
  70. Lal, S.; Das, D.; Alabhya, K.; Kanfade, A.; Kumar, A.; Kini, J. NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images, Comput. Biol. Med. 2021, 128, 104075. [Google Scholar] [CrossRef]
  71. Guo, R.; Xie, K.; Pagnucco, M.; Song, Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation, Med. Image Anal. 2023, 86, 102790. [Google Scholar] [CrossRef]
  72. Zhao, T.; Fu, C.; Tian, Y.; Song, W.; Sham, C.-W. A Lightweight Multi-Task Deep Learning Framework for Nuclei Segmentation and Classification, Bioengineering. 2023, 10, 393. [CrossRef]
  73. Graham, S.; Vu, Q.D.; Raza, S.E.A.; Azam, A.; Tsang, Y.W.; Kwak, J.T.; Rajpoot, N. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images, Med. Image Anal. 2019, 58, 101563. [Google Scholar] [CrossRef]
  74. Raza, S.E.A.; Cheung, L.; Shaban, M.; Graham, S.; Epstein, D.; Pelengaris, S.; Khan, M.; Rajpoot, N.M. Micro-Net: A unified model for segmentation of various objects in microscopy images, Med. Image Anal. 2019, 52, 160–173. [Google Scholar] [CrossRef]
  75. Naylor, P.; Lae, M.; Reyal, F.; Walter, T. Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map, IEEE Trans. Med. Imaging. 2019, 38, 448–459. [Google Scholar] [CrossRef]
  76. Huang, X.; Chen, J.; Chen, M.; Wan, Y.; Chen, L. FRE-Net: Full-region enhanced network for nuclei segmentation in histopathology images, Biocybern. Biomed. Eng. 2023, 43, 386–401. [Google Scholar] [CrossRef]
  77. Aatresh, A.A.; Yatgiri, R.P.; Chanchal, A.K.; Kumar, A.; Ravi, A.; Das, D.; Bs, R.; Lal, S.; Kini, J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images, Comput. Med. Imaging Graph. 2021, 93, 101975. [Google Scholar] [CrossRef] [PubMed]
  78. Singha, A.; Bhowmik, M. AlexSegNet: an accurate nuclei segmentation deep learning model in microscopic images for diagnosis of cancer, Multimed. Tools Appl. 82 2022. [CrossRef]
  79. Pan, W.; Liu, Z.; Song, W.; Zhen, X.; Yuan, K.; Xu, F.; Lin, G.N. An Integrative Segmentation Framework for Cell Nucleus of Fluorescence Microscopy, Genes. 2022, 13, 431. [CrossRef]
  80. Ljosa, V.; Sokolnicki, K.L.; Carpenter, A.E. Annotated high-throughput microscopy image sets for validation, Nat. Methods. 2012, 9, 637–637. [Google Scholar] [CrossRef]
  81. Kromp, F.; Bozsaky, E.; Rifatbegovic, F.; Fischer, L.; Ambros, M.; Berneder, M.; Weiss, T.; Lazic, D.; Dörr, W.; Hanbury, A.; Beiske, K.; Ambros, P.F.; Ambros, I.M.; Taschner-Mandl, S. An annotated fluorescence image dataset for training nuclear segmentation methods, Sci. Data. 2020, 7, 262. [Google Scholar] [CrossRef]
  82. McQuin, C.; Goodman, A.; Chernyshev, V.; Kamentsky, L.; Cimini, B.A.; Karhohs, K.W.; Doan, M.; Ding, L.; Rafelski, S.M.; Thirstrup, D.; Wiegraebe, W.; Singh, S.; Becker, T.; Caicedo, J.C.; Carpenter, A.E. CellProfiler 3.0: Next-generation image processing for biology, PLOS Biol. 2018, 16, e2005970. [Google Scholar] [CrossRef]
  83. Cheng, Z.; Qu, A. A Fast and Accurate Algorithm for Nuclei Instance Segmentation in Microscopy Images, IEEE Access. 2020, 8, 158679–158689. [CrossRef]
  84. Kromp, F.; Fischer, L.; Bozsaky, E.; Ambros, I.M.; Dorr, W.; Beiske, K.; Ambros, P.F.; Hanbury, A.; Taschner-Mandl, S. Evaluation of Deep Learning Architectures for Complex Immunofluorescence Nuclear Image Segmentation, IEEE Trans. Med. Imaging. 2021, 40, 1934–1949. [Google Scholar] [CrossRef]
  85. Lapierre-Landry, M.; Liu, Z.; Ling, S.; Bayat, M.; Wilson, D.L.; Jenkins, M.W. Nuclei Detection for 3D Microscopy With a Fully Convolutional Regression Network, IEEE Access Pract. Innov. Open Solut. 2021, 9, 60396–60408. [Google Scholar] [CrossRef]
  86. Zeng, Z.; Xie, W.; Zhang, Y.; Lu, Y. RIC-Unet: An Improved Neural Network Based on Unet for Nuclei Segmentation in Histology Images, IEEE Access. 2019, 7, 21420–21428. [CrossRef]
  87. Ilyas, T.; Mannan, Z.I.; Khan, A.; Azam, S.; Kim, H.; De Boer, F. TSFD-Net: Tissue specific feature distillation network for nuclei segmentation and classification, Neural Netw. 2022, 151, 1–15. [CrossRef]
  88. Koohbanani, N.A.; Jahanifar, M.; Tajadin, N.Z.; Rajpoot, N. NuClick: A deep learning framework for interactive segmentation of microscopic images, Med. Image Anal. 2020, 65, 101771. [Google Scholar] [CrossRef]
  89. Imtiaz, T.; Fattah, S.A.; Kung, S.-Y. BAWGNet: Boundary aware wavelet guided network for the nuclei segmentation in histopathology images, Comput. Biol. Med. 2023, 165, 107378. [Google Scholar] [CrossRef]
  90. Wan, T.; Zhao, L.; Feng, H.; Li, D.; Tong, C.; Qin, Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement, Neurocomputing. 2020, 408, 144–156. [CrossRef]
  91. Liang, H.; Cheng, Z.; Zhong, H.; Qu, A.; Chen, L. A region-based convolutional network for nuclei detection and segmentation in microscopy images, Biomed. Signal Process. Control. 2022, 71, 103276. [Google Scholar] [CrossRef]
  92. Mahmood, F.; Borders, D.; Chen, R.J.; Mckay, G.N.; Salimian, K.J.; Baras, A.; Durr, N.J. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images, IEEE Trans. Med. Imaging. 2020, 39, 3257–3267. [Google Scholar] [CrossRef] [PubMed]
  93. Caicedo, J.C.; Roth, J.; Goodman, A.; Becker, T.; Karhohs, K.W.; Broisin, M.; Molnar, C.; McQuin, C.; Singh, S.; Theis, F.J.; Carpenter, A.E. Evaluation of Deep Learning Strategies for Nucleus Segmentation in Fluorescence Images, Cytometry A. 2019, 95, 952–965. [CrossRef]
  94. Zhou, Y.; Onder, O.F.; Dou, Q.; Tsougenis, E.; Chen, H.; Heng, P.-A. CIA-Net: Robust Nuclei Instance Segmentation with Contour-aware Information Aggregation, (2019). http://arxiv.org/abs/1903.05358 (accessed November 16, 2023).
  95. Wang, H.; Xian, M.; Vakanski, A. Bending Loss Regularized Network for Nuclei Segmentation in Histopathology Images, in: 2020 IEEE 17th Int. Symp. Biomed. Imaging ISBI, IEEE, Iowa City, IA, USA, 2020: pp. 1–5. [CrossRef]
  96. Xie, X.; Chen, J.; Li, Y.; Shen, L.; Ma, K.; Zheng, Y. Instance-aware Self-supervised Learning for Nuclei Segmentation, (2020). [CrossRef]
  97. Zhao, B.; Chen, X.; Li, Z.; Yu, Z.; Yao, S.; Yan, L.; Wang, Y.; Liu, Z.; Liang, C.; Han, C. Triple U-net: Hematoxylin-aware nuclei segmentation with progressive dense feature aggregation, Med. Image Anal. 2020, 65, 101786. [Google Scholar] [CrossRef] [PubMed]
  98. Upschulte, E.; Harmeling, S.; Amunts, K.; Dickscheid, T. Contour Proposal Networks for Biomedical Instance Segmentation, (2021). [CrossRef]
  99. Maurya, A.; Stanley, R.J.; Lama, N.; Jagannathan, S.; Saeed, D.; Swinfard, S.; Hagerty, J.R.; Stoecker, W.V. A deep learning approach to detect blood vessels in basal cell carcinoma, Skin Res. Technol. 2022, 28, 571–576. [Google Scholar] [CrossRef]
  100. Salvi, M.; Bosco, M.; Molinaro, L.; Gambella, A.; Papotti, M.; Acharya, U.R.; Molinari, F. A hybrid deep learning approach for gland segmentation in prostate histopathological images, Artif. Intell. Med. 2021, 115, 102076. [Google Scholar] [CrossRef]
  101. Xia, L.; Zhang, H.; Wu, Y.; Song, R.; Ma, Y.; Mou, L.; Liu, J.; Xie, Y.; Ma, M.; Zhao, Y. 3D vessel-like structure segmentation in medical images by an edge-reinforced network, Med. Image Anal. 2022, 82, 102581. [Google Scholar] [CrossRef]
  102. Liu, X.; Guo, Z.; Cao, J.; Tang, J. MDC-net: A new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information, Comput. Biol. Med. 2021, 135, 104543. [Google Scholar] [CrossRef]
  103. O’Connor, O.M.; Alnahhas, R.N.; Lugagne, J.-B.; Dunlop, M.J. DeLTA 2.0: A deep learning pipeline for quantifying single-cell spatial and temporal dynamics, PLOS Comput. Biol. 2022, 18, e1009797. [Google Scholar] [CrossRef]
  104. Bannon, D.; Moen, E.; Schwartz, M.; Borba, E.; Kudo, T.; Greenwald, N.; Vijayakumar, V.; Chang, B.; Pao, E.; Osterman, E.; Graf, W.; Van Valen, D. DeepCell Kiosk: scaling deep learning-enabled cellular image analysis with Kubernetes, Nat. Methods. 2021, 18, 43–45. [Google Scholar] [CrossRef]
  105. Gómez-de-Mariscal, E.; García-López-de-Haro, C.; Ouyang, W.; Donati, L.; Lundberg, E.; Unser, M.; Muñoz-Barrutia, A.; Sage, D. DeepImageJ: A user-friendly environment to run deep learning models in ImageJ, Nat. Methods. 2021, 18, 1192–1195. [Google Scholar] [CrossRef]
  106. Haberl, M.G.; Churas, C.; Tindall, L.; Boassa, D.; Phan, S.; Bushong, E.A.; Madany, M.; Akay, R.; Deerinck, T.J.; Peltier, S.T.; Ellisman, M.H. CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation, Nat. Methods. 2018, 15, 677–680. [Google Scholar] [CrossRef]
  107. Belevich, I.; Jokitalo, E. DeepMIB: User-friendly and open-source software for training of deep learning network for biological image segmentation, PLOS Comput. Biol. 2021, 17, e1008374. [Google Scholar] [CrossRef] [PubMed]
  108. Lee, S.; Amgad, M.; Mobadersany, P.; McCormick, M.; Pollack, B.P.; Elfandy, H.; Hussein, H.; Gutman, D.A.; Cooper, L.A.D. Interactive Classification of Whole-Slide Imaging Data for Cancer Researchers, Cancer Res. 2021, 81, 1171–1177. [CrossRef]
  109. Waibel, D.J.E.; Boushehri, S.S.; Marr, C. InstantDL: an easy-to-use deep learning pipeline for image segmentation and classification, BMC Bioinformatics. 2021, 22, 103. [CrossRef]
  110. Hollandi, R.; Szkalisity, A.; Toth, T.; Tasnadi, E.; Molnar, C.; Mathe, B.; Grexa, I.; Molnar, J.; Balind, A.; Gorbe, M.; Kovacs, M.; Migh, E.; Goodman, A.; Balassa, T.; Koos, K.; Wang, W.; Caicedo, J.C.; Bara, N.; Kovacs, F.; Paavolainen, L.; Danka, T.; Kriston, A.; Carpenter, A.E.; Smith, K.; Horvath, P. nucleAIzer: A Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer, Cell Syst. 2020, 10, 453–458.e6. [CrossRef]
  111. von Chamier, L.; Jukkala, J.; Spahn, C.; Lerche, M.; Hernández-Pérez, S.; Mattila, P.K.; Karinou, E.; Holden, S.; Solak, A.C.; Krull, A.; Buchholz, T.-O.; Jug, F.; Royer, L.A.; Heilemann, M.; Laine, R.F.; Jacquemet, G.; Henriques, R. ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy, bioRxiv 2020.03.20.000133, 2020. [CrossRef]
  112. Berg, S.; Kutra, D.; Kroeger, T.; Straehle, C.N.; Kausler, B.X.; Haubold, C.; Schiegg, M.; Ales, J.; Beier, T.; Rudy, M.; Eren, K.; Cervantes, J.I.; Xu, B.; Beuttenmueller, F.; Wolny, A.; Zhang, C.; Koethe, U.; Hamprecht, F.A.; Kreshuk, A. ilastik: interactive machine learning for (bio)image analysis, Nat. Methods. 2019, 16, 1226–1232. [Google Scholar] [CrossRef] [PubMed]
  113. Xun, D.; Chen, D.; Zhou, Y.; Lauschke, V.M.; Wang, R.; Wang, Y. Scellseg: A style-aware deep learning tool for adaptive cell instance segmentation by contrastive fine-tuning, iScience. 2022, 25, 105506. [CrossRef]
  114. Zargari, A. DeepSea: An efficient deep learning model for automated cell segmentation and tracking, (n.d.).
  115. Körber, N. MIA is an open-source standalone deep learning application for microscopic image analysis, Cell Rep. Methods. 2023, 3, 100517. [Google Scholar] [CrossRef]
  116. Falk, T.; Mai, D.; Bensch, R.; Çiçek, Ö.; Abdulkadir, A.; Marrakchi, Y.; Böhm, A.; Deubner, J.; Jäckel, Z.; Seiwald, K.; Dovzhenko, A.; Tietz, O.; Bosco, C.D.; Walsh, S.; Saltukoglu, D.; Tay, T.L.; Prinz, M.; Palme, K.; Simons, M.; Diester, I.; Brox, T.; Ronneberger, O. U-Net: deep learning for cell counting, detection, and morphometry, Nat. Methods. 2019, 16, 67–70. [Google Scholar] [CrossRef]
  117. Wen, C.; Miura, T.; Voleti, V.; Yamaguchi, K.; Tsutsumi, M.; Yamamoto, K.; Otomo, K.; Fujie, Y.; Teramoto, T.; Ishihara, T.; Aoki, K.; Nemoto, T.; Hillman, E.M.; Kimura, K.D. 3DeeCellTracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images, eLife. 2021, 10, e59187. [CrossRef]
  118. Weigert, M.; Schmidt, U.; Haase, R.; Sugawara, K.; Myers, G. Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy, in: 2020 IEEE Winter Conf. Appl. Comput. Vis. WACV, 2020: pp. 3655–3662. [CrossRef]
  119. Archit, A.; Nair, S.; Khalid, N.; Hilt, P.; Rajashekar, V.; Freitag, M.; Gupta, S.; Dengel, A.; Ahmed, S.; Pape, C. Segment Anything for Microscopy, Bioinformatics, 2023. [CrossRef]
  120. Ouyang, W.; Beuttenmueller, F.; Gómez-de-Mariscal, E.; Pape, C.; Burke, T.; Garcia-López-de-Haro, C.; Russell, C.; Moya-Sans, L.C.; de-la-Torre-Gutiérrez; Schmidt, D.; Kutra, D.; Novikov, M.; Weigert, M.; Schmidt, U.; Bankhead, P.; Jacquemet, G.; Sage, D.; Henriques, R.; Muñoz-Barrutia, A.; Lundberg, E.; Jug, F.; Kreshuk, A. BioImage Model Zoo: A Community-Driven Resource for Accessible Deep Learning in BioImage Analysis, 2022. [CrossRef]
Figure 1. The structure of U-net [8].
Figure 2. The structure of R-CNN [9].
Figure 3. The structure of GAN [11].
Figure 4. PRISMA - Papers Selection Process Summary.
Table 1. List of the abbreviations used in Section IV.
ACC: Accuracy
AJI: Aggregated Jaccard Index
AS-UNet: U-Net with atrous depthwise separable convolution
ASPPU-Net: Atrous spatial pyramid pooling U-Net
ASW-Net: Attention-enhanced Simplified W-Net
BAWGNet: Boundary aware wavelet guided network
cGAN: Conditional generative adversarial network
CIA-Net: Contour-aware Information Aggregation Network
C-LSTM: Convolutional Long Short-Term Memory
CPN: Contour Proposal Network
CS-Net: Cellular Segmentation Network
DCNN(s): Deep convolutional neural network(s)
DDeep3M: Docker-powered deep learning
DeLTA: Deep Learning for Time-lapse Analysis
DOLG: Deep orthogonal fusion of local and global
ER-Net: Edge-reinforced neural network
FCRN: Fully Convolutional Regression Network
FPN: Feature Pyramid Network
FRE-Net: Full-region enhanced network
GAN: Generative Adversarial Network
GRU: Gated Recurrent Unit
H&E: Hematoxylin and Eosin
IoU: Intersection over union
JI: Jaccard index
MAE: Mean absolute error
McbUnet: Mixed convolution blocks U-Net
MDC-Net: Multiscale connected segmentation network with distance map and contour information
MoNuSeg: Multi-Organ Nuclei Segmentation
PCI: Phase contrast image
Res-UNet-H: Residual U-Net for human sample
Res-UNet-R: Residual U-Net for rat sample
RIC-Unet: Residual Inception-Channel attention U-Net
RINGS: Rapid Identification of Glandular Structures
SAM: Segment Anything Model
SAU-Net: Self-Attention U-Net
SBU-net: Saliency and Ballness driven U-shaped Network
SCWCSA: Single-channel whole cell segmentation algorithm
SSD: Single-shot detector
TCGA: The Cancer Genome Atlas
TSFD-Net: Tissue Specific Feature Distillation Network
W-Net: Cascaded U-Net
WSI: Multiresolution whole slide image
Table 4. Summary of tissue segmentation studies in the literature.
Reference | Publication year | Method | Task | Dataset | Instance/Semantic | Code availability
[99] | 2022 | Vessel U-Net model | Blood vessel segmentation | HAM10000 dataset; images from NIH studies R43 CA153927-01 and CA101639-02A2 | Semantic | ×
[100] | 2021 | RINGS | Tissue (prostate) segmentation | Dataset of 1500 H&E (hematoxylin & eosin) stained images of prostate tissue | Semantic | ×
[101] | 2022 | ER-Net | 3D vessel segmentation | Cerebrovascular datasets; nerve datasets | Semantic | ✓
[73] | 2019 | Hover-Net | Tissue (nucleus) segmentation and classification | CoNSeP dataset | Instance | ✓
[102] | 2021 | MDC-Net | Tissue (nucleus) segmentation | DATA ORGANS; DATA BREAST | Semantic | —
Table 5. Tools for microscopy image segmentation.
Reference | Software/tool | Microscopy image type | Website | Tool structure | Task
[103] | DeLTA 2.0 | Time-lapse microscopy data | https://gitlab.com/dunloplab/delta ; https://delta.readthedocs.io/en/latest/ | Web-based application | Cell segmentation and tracking
[104] | DeepCell | Fluorescence | https://deepcell.org/ ; https://github.com/vanvalenlab/kiosk-console | Web-based application; wrapper script; Docker container | Cell segmentation and tracking
[41] | CellPose | Fluorescence; brightfield | https://www.cellpose.org/ | Web-based application; Jupyter notebook | Cell and nucleus segmentation
[105] | DeepImageJ | PCI | https://deepimagej.github.io/ | ImageJ plug-in | Cell segmentation
[106] | CDeep3M | Light; X-ray microCT; electron microscopy | https://cdeep3m-viewer.crbs.ucsd.edu/cdeep3m_result/view/6447 | Web-based application; Google Colab; Docker container; AWS cloud; Singularity | Cell segmentation
[107] | DeepMIB | 2D and 3D electron and multicolor light microscopy | http://mib.helsinki.fi ; https://github.com/Ajaxels/MIB2 | MATLAB GUI | Cell segmentation
[108] | HistomicsML2 | WSI | https://histomicsml2.readthedocs.io/en/latest/index.html ; https://github.com/CancerDataScience/HistomicsML2 | Docker container | Cell/nucleus/tissue segmentation
[109] | InstantDL | Brightfield; CT scans | https://github.com/marrlab/InstantDL | Docker container | Cell nucleus segmentation
[110] | NucleAIzer | Fluorescence; histology | www.nucleaizer.org | Web-based application | Nucleus segmentation
[111] | ZeroCostDL4Mic | Pseudo-fluorescence; brightfield | https://github.com/HenriquesLab/ZeroCostDL4Mic | Google Colab | Cell segmentation
[112] | Ilastik | Electron microscopy | https://github.com/ilastik | Python script | Nucleus segmentation
[113] | Scellseg | Phase-contrast | https://github.com/cellimnet/scellseg-publish | GUI | Cell/tissue segmentation
[114] | DeepSea | Time-lapse | https://deepseas.org/software | MATLAB software tool | Cell segmentation
[115] | MIA | Phase-contrast; histology | https://doi.org/10.5281/zenodo.7970965 | Python script | Image classification, object detection, semantic segmentation and tracking
[116] | U-Net plugin | Fluorescence; DIC; phase contrast; brightfield; electron microscopy | https://lmb.informatik.uni-freiburg.de/resources/opensource/unet/ | Caffe framework; AWS cloud | Cell detection and segmentation
[117] | 3DeeCellTracker | 3D time-lapse | https://github.com/WenChentao/3DeeCellTracker | Python script | Cell segmentation and tracking
[118] | StarDist | Brightfield; fluorescence | https://github.com/mpicbg-csbd/stardist | Docker container | Cell/nucleus segmentation
[119] | SAM | Brightfield | https://github.com/computational-cell-analytics/micro-sam | Python script | Cell segmentation and tracking
[120] | BioImage Model Zoo | Microscopy images | https://bioimage.io/#/ | Web-based application | Live-cell, cell, and nucleus segmentation