Preprint
Article

This version is not peer-reviewed.

Early Detection of Cervical Adenocarcinoma Using Immunohistochemical Staining Patterns Analyzed through Computer Vision Technology

Submitted:

26 August 2024

Posted:

28 August 2024


Abstract
This paper explores the application of machine learning (ML) in predicting functional recovery in patients with ischemic stroke. As technology advances, ML shows significant potential in the field of stroke medicine, especially in the areas of big data analytics and personalized medicine. Studies have shown that ML algorithms can improve the accuracy of stroke image analysis, subtype classification, risk assessment, treatment guidance, and prognosis prediction. However, the widespread use of ML still faces challenges such as data standardization, model validation, privacy, and bias. This paper reviews the current application status of ML in the field of stroke, discusses the challenges faced, and looks forward to the future development direction, aiming to promote the practical application of ML technology in the diagnosis and treatment of stroke to improve the prognosis and quality of life of patients.
Keywords: cervical adenocarcinoma; immunohistochemical staining; computer vision; deep learning

1. Introduction

Recent advancements by researchers at the National Institutes of Health (NIH) have introduced a computer algorithm capable of identifying and diagnosing cervical cancer through the analysis of digital images of the cervix. This algorithm, called Automated Visual Assessment, promises to revolutionize cervical cancer screening in resource-limited environments. Cervical cancer remains a leading cause of death among women in areas with inadequate medical infrastructure. Traditional screening in these regions often involves Visual Inspection with Acetic Acid (VIA), where diluted acetic acid is applied to the cervix to identify potential abnormalities indicated by white spots. Despite its convenience and low cost, VIA [1,2] screening has limited accuracy.
To address these limitations, researchers have utilized extensive datasets to train machine learning algorithms to recognize complex visual patterns in medical images. They used over 60,000 cervical images collected from a screening study in Costa Rica during the 1990s, involving more than 9,400 women with an 18-year follow-up period. These images were digitized and used to train the algorithm.
Figure 1. The algorithm outperformed all standard screening tests in predicting cervical cancer.
When trained on these data, the algorithm outperformed all standard screening tests in predicting cervical cancer [2,3]. The accuracy of the Automated Visual Assessment algorithm in detecting precancerous lesions reached 91%, surpassing human experts (69%) and conventional cytology (71%). The method can be executed on a smartphone or similar imaging device, offers a straightforward screening process, and requires minimal training, making it ideal for regions with limited medical resources. The research team plans to train the algorithm further on representative images of cervical precancerous lesions and normal tissue from diverse populations worldwide, captured with various cameras and imaging techniques, with the ultimate goal of developing a more universal and open optimal algorithm. Researchers believe that integrating this algorithm with HPV vaccines [4], new cervical cancer detection technologies, and improved treatment methods could effectively control cervical cancer even in resource-scarce settings.

2. Computer Vision Technology and Medical Image Detection

Medical anomaly detection is mainly used to identify and locate anomalies in medical image data, which is key to preventing misdiagnosis and enabling early intervention. Since medical image data vary greatly across imaging modes and anatomical regions, models must perform well on many data types. Small-sample (few-shot) anomaly detection methods attempt to generalize models from scarce training data, but each new detection task still requires lightweight fine-tuning or distribution adjustment. Large-scale pre-trained vision-language models (VLMs) [5,6] bring new opportunities for robust and generalizable anomaly detection. One notable attempt directly uses CLIP, a large-scale vision-language pre-trained model, for anomaly detection by elaborating text prompts. However, given the domain gap between natural and medical images, these methods do not work well on medical images, let alone generalize to unseen imaging modes or anatomical regions.
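The prompt-based scoring idea can be sketched in a few lines. The embeddings below are random placeholders standing in for a real vision-language encoder such as CLIP; the `anomaly_score` function and the temperature value are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def anomaly_score(image_emb, normal_emb, abnormal_emb, temperature=0.07):
    # CLIP-style zero-shot scoring: softmax over similarities of the image
    # embedding to "normal" and "abnormal" text-prompt embeddings.
    sims = np.array([cosine(image_emb, normal_emb),
                     cosine(image_emb, abnormal_emb)]) / temperature
    probs = np.exp(sims - sims.max())
    probs /= probs.sum()
    return probs[1]  # probability mass assigned to the "abnormal" prompt

# Placeholder 512-d embeddings; a real system would obtain these from a
# pretrained vision-language encoder, not from a random generator.
rng = np.random.default_rng(0)
normal_emb = rng.standard_normal(512)
abnormal_emb = rng.standard_normal(512)
image_emb = abnormal_emb + 0.1 * rng.standard_normal(512)  # near "abnormal"

score = anomaly_score(image_emb, normal_emb, abnormal_emb)
```

Because the image embedding is constructed close to the "abnormal" prompt embedding, the score lands near 1; swapping prompts for a different anatomy is what gives the approach its zero-shot flavor.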
Figure 2. Anomaly classification and segmentation of zero-sample/small-sample medical images based on visual language model.

2.1. Current Incidence of Cervical Cancer

Cervical cancer is one of the most common malignant tumors in the female reproductive system. According to the Global Cancer Data Report 2020, the incidence and mortality of cervical cancer rank 7th and 9th among all malignant tumors, respectively, seriously affecting women’s health [7]. With the popularization of cervical cancer screening, the incidence of cervical squamous cell carcinoma has decreased significantly, while the incidence of cervical adenocarcinoma and its precancerous lesions has gradually increased [8,9,10].
Cervical adenocarcinoma in situ (CAIS) is a precancerous lesion arising in the glandular epithelium of the cervix; histologically, the normal epithelium on the surface of the cervical canal and within the glands is replaced by neoplastic epithelium. It is most common in women aged 30 to 40 years and frequently coexists with high-grade squamous intraepithelial lesions [11,12,13,14,15].
The onset of CAIS is occult, with atypical clinical symptoms. Histopathological examination is the gold standard for diagnosing CAIS. At present, combined screening methods such as cervical cytology, colposcopic multi-point biopsy, and endocervical curettage are often used to achieve early diagnosis of CAIS [16,17]. However, manual slide reading for pathological diagnosis is inefficient, and the heavy workload readily causes visual fatigue in readers, leading to missed diagnoses and misdiagnoses that compromise the accuracy of results.
In recent years, with the rise of medicine-engineering integration, computer-aided diagnosis and image recognition technology have been widely used in pathology research [18]. Many researchers worldwide have made significant progress in the intelligent diagnosis of cervical cytology [19,20,21]. This study discusses the feasibility of applying a deep learning algorithm to build an intelligent recognition model for CAIS pathological images, so as to assist clinicians in CAIS pathological diagnosis and reduce misdiagnosis and missed-diagnosis rates.

2.2. Computer Vision Technology Assists in Medical Imaging Diagnosis

Computer vision is the science of using computers to imitate the human visual system, enabling them to extract, understand, process, and analyze images and image sequences much as humans do. By solution type, computer vision can be divided into five categories: computational imaging, image understanding, three-dimensional vision, dynamic vision, and video coding/decoding [22]. It is also one of the earliest-applied and most technically mature segments of artificial intelligence. In recent years, video acquisition equipment designed on bionic principles that mimic the human retina has gained fast information response, redundant-data filtering, low power consumption, and a large dynamic range, bringing its function closer to the human eye, which perceives information mainly along the dimensions of space, color, shape, and motion. With the optimization of AI algorithms and of optical and electronic components, computer vision technology now approaches the complex visual perception abilities of the human eye, such as perceiving distance, shape features, targets, spatial position, and motion information, enabling artificial intelligence to perform assistive work [23].
Computer vision has achieved major breakthroughs, with broad and clear application scenarios, and occupies an important position in artificial intelligence with a wide market. This paper reviews computer vision's technical principles and its applications in fields such as smart cities, finance, the Internet, new retail, intelligent transportation, smart healthcare, and smart industry.
1. Visual perception of computers
Computer vision gives computers visual perception. As carbon-based life, we humans perceive the outside world through our sense organs and skin; billions of neurons transmit information, much like computer cables, and the brain responds [24,25,26]. A bug bite is painful and itchy, so the hand drives the mosquito away, but to do so it must first see where the mosquito is: humans act on vision.
A computer perceives the outside world mainly through peripherals; just as computers and mobile phones need a keyboard, mouse, or touch screen, information input, processing, and output are the computer's main functions. Computer vision provides the computer with a pair of eyes and relies on artificial intelligence algorithms to teach the computer how to use those eyes to obtain useful information [27]: through visual observation and understanding of the world, it adapts to the environment using recognition (retrieval, cross-modal), detection, segmentation, and tracking algorithms. In short, computer vision takes video information as input and enables computers equipped with AI algorithms to work semi-autonomously or fully autonomously [28], with the basic capabilities of perception, decision-making, and execution, helping humans improve work efficiency and quality, serving human life, and extending the scope of human activity and ability.
Its principle rests on imaging, digitization, and image processing through to the extraction and processing of perceptual information. The imaging chain runs from pinhole imaging to frame sampling, to the digital image, to computer processing, letting the computer obtain perceptual video information; that is, computer vision combines image processing, pattern recognition, and artificial intelligence technology [29] and focuses on computer analysis of one or more images. Computer vision systems are engineered to automatically acquire and analyze specific images in order to control corresponding behavior.
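The imaging-to-digitization chain above amounts to sampling a continuous scene irradiance function on a pixel grid and quantizing it to discrete gray levels. A minimal sketch, using a hypothetical smooth ramp as the "scene":

```python
import numpy as np

def digitize_scene(f, height, width, levels=256):
    # Spatial sampling: evaluate the continuous scene function f(x, y)
    # (normalized coordinates, values in [0, 1]) at each pixel center,
    # then quantize to `levels` discrete gray values.
    ys, xs = np.mgrid[0:height, 0:width]
    analog = np.clip(f(xs / width, ys / height), 0.0, 1.0)
    return np.round(analog * (levels - 1)).astype(np.uint8)

# Hypothetical "scene": a smooth horizontal intensity ramp.
img = digitize_scene(lambda x, y: x, height=4, width=8)
```

The result is an ordinary 8-bit digital image; everything downstream (filtering, segmentation, recognition) operates on such arrays.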
Computer vision includes image processing, mechanical engineering techniques, control, electrical lighting, optical imaging, sensors, analog and digital video technology, and computer hardware and software technology (image enhancement and analysis algorithms, image cards, I/O cards, etc.). [30]A typical computer vision application system includes image capture, a light source system, an image digitization module, a digital image processing module, an intelligent judgment and decision module, and a mechanical control and execution module. The core algorithm is also image processing to extract information and computer vision, no matter how the hardware builds the realization of specific functions, is mainly an image perception information extraction algorithm.
2. Image processing technology
Image processing is the core of machine vision inspection. When using machine vision to inspect products, the following steps are needed to process the product images.
(1) Image acquisition
Image acquisition obtains scene images from the work site and is the first step of machine vision. Most acquisition tools are CCD or CMOS cameras, which can capture either single images or continuous live video. An image is a projection of a three-dimensional scene onto a two-dimensional image plane, and the color (brightness and chromaticity) of a point in the image reflects that of the corresponding point in the scene. This is the fundamental basis for using captured images in place of real scenes [31].
If the camera outputs an analog signal, the analog image signal must be digitized before being sent to a computer (including embedded systems) for processing. Most cameras can now output digital image signals directly, eliminating the analog-to-digital conversion step. Moreover, camera digital output interfaces are now standardized (USB, VGA, 1394, HDMI, WiFi, Bluetooth, etc.) and can feed the computer directly, avoiding the need for an image capture card between the camera output and the computer. Subsequent image processing is usually carried out in software by the computer or embedded system [32].
(2) Image preprocessing
The digitized field images collected are often subject to varying degrees of interference from equipment and environmental factors, such as noise, geometric deformation, and color imbalance, which hinder subsequent processing [33]. Therefore, the acquired images must be preprocessed. Common preprocessing includes noise removal, geometric correction, and histogram equalization.
Time-domain or frequency-domain filtering is usually used to remove noise from the image; geometric transformation is used to correct geometric distortion; and histogram equalization and homomorphic filtering are used to reduce color deviation. In short, this series of preprocessing techniques conditions the acquired images to provide better, more useful images for subsequent machine vision applications.
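The noise-removal and contrast steps can be sketched directly. Below is a minimal illustration (a 3x3 mean filter plus CDF-based histogram equalization), not a production preprocessing pipeline; the synthetic noisy image stands in for an acquired frame.

```python
import numpy as np

def box_blur(img):
    # 3x3 mean filter for noise suppression (edge pixels replicated).
    p = np.pad(img.astype(np.float32), 1, mode="edge")
    h, w = img.shape
    acc = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (acc / 9.0).astype(np.uint8)

def equalize_hist(img):
    # Histogram equalization: remap gray levels through the cumulative
    # distribution so the output histogram is approximately flat.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Synthetic low-contrast, noisy acquisition standing in for a real frame.
rng = np.random.default_rng(0)
noisy = rng.integers(60, 120, size=(32, 32), dtype=np.uint8)
pre = equalize_hist(box_blur(noisy))
```

After equalization the gray levels are stretched to the full 0-255 range, which is exactly the "better, more useful image" the downstream segmentation stage wants.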
(3) Image segmentation
Image segmentation divides the image, according to application requirements, into characteristic regions from which the objects of interest are extracted. Common image features include grayscale, color, texture, edges, and corners [34]. For example, an image from an automobile assembly line may be segmented into background and workpiece regions, which are then provided to subsequent processing units handling the installed parts of the workpiece.
Image segmentation has been a difficult problem in image processing for many years. Many segmentation algorithms exist, but their results are often not ideal. Recently, deep learning methods based on neural networks have been applied to image segmentation, with performance exceeding that of traditional algorithms.
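As one concrete traditional baseline, Otsu's method picks the gray-level threshold that maximizes between-class variance of the histogram. A sketch on a synthetic two-region image (the image and its values are illustrative, not from the paper's data):

```python
import numpy as np

def otsu_threshold(img):
    # Scan all 256 candidate thresholds and keep the one maximizing
    # between-class variance w0 * w1 * (mu0 - mu1)^2.
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / cum[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (total - cum[t])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic scene: dark background (50) with a bright 10x10 object (200).
img = np.full((20, 20), 50, dtype=np.uint8)
img[5:15, 5:15] = 200
t = otsu_threshold(img)
mask = img > t  # foreground/background segmentation
```

On a clean bimodal image the threshold falls between the two modes and the mask recovers the object region exactly; on real pathological images the histogram overlap is what motivates the learned methods discussed above.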
(4) Target recognition and classification
In industries such as manufacturing and security, machine vision depends on identifying and classifying targets in the input image as the basis for subsequent judgments and operations. Recognition and classification techniques have much in common; often, once target identification is complete, the target's category is clear. Recently, image recognition has moved beyond traditional methods toward intelligent approaches with neural networks as the mainstream, such as convolutional neural networks (CNNs) [35] and recurrent neural networks (RNNs), which offer superior performance.
(5) Target positioning and measurement
In intelligent manufacturing, the most common task is installing a target workpiece; the target usually needs to be positioned before installation and measured afterward. Both installation and measurement must maintain high accuracy and speed, such as millimeter (or even finer) accuracy and millisecond response [36]. Such high-precision, high-speed positioning and measurement is difficult to achieve with the usual mechanical or manual methods. In machine vision, image processing is applied to the installation-site image, exploiting the mapping between the target and the image, to complete positioning and measurement tasks quickly and accurately.
(6) Target detection and tracking
Moving-target detection and tracking detect in real time whether a moving target appears in the scene image captured by the camera and predict its next direction and trend of motion, i.e., tracking. These motion data are submitted in time to subsequent analysis and control processing to form corresponding control actions [37]. Generally a single camera is used for image acquisition; when necessary, two cameras can imitate human binocular vision to obtain three-dimensional scene information, which further aids detection and tracking.
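The simplest form of moving-target detection is frame differencing: pixels whose gray value changes between consecutive frames are marked as moving and bounded. A sketch on synthetic frames; the threshold value and frame contents are arbitrary assumptions for illustration.

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25):
    # Mark pixels whose gray value changed by more than `thresh`
    # between consecutive frames, and box the changed region.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return mask, bbox  # bbox = (x0, y0, x1, y1)

# Synthetic frames: a bright 4x4 "target" shifts two pixels to the right.
prev_frame = np.zeros((16, 16), dtype=np.uint8)
prev_frame[6:10, 2:6] = 255
frame = np.zeros((16, 16), dtype=np.uint8)
frame[6:10, 4:8] = 255

mask, bbox = detect_motion(prev_frame, frame)
```

The changed pixels cover the vacated and newly occupied columns of the target, so the bounding box spans the motion; a real tracker would then feed such boxes to a prediction step (e.g., a motion model) frame by frame.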

3. Computer Vision Technology in the Medical Field

In recent years, with remarkable improvements in medical image acquisition, medical equipment can acquire many medical images and sensor data in real time, with faster frame rates, higher image resolution, and better communication technology; automated interpretation of medical images based on image processing technology is therefore urgently needed. In medical image processing, GPUs were first introduced for segmentation and reconstruction and later for machine learning [38]. In medicine, machine vision is mainly used for computer-aided diagnosis: MRI, ultrasound, laser, X-ray, gamma-ray, and other examination images of the human body are analyzed, described, and identified using digital image processing and information fusion techniques to obtain relevant information, which plays an important role in helping doctors assess the size, shape, and abnormality of disease sources and carry out effective treatment. Different medical imaging equipment yields biological tissue images with different characteristics, such as X-rays reflecting bone tissue and MRI reflecting soft tissue. Since doctors often need to consider the relationship between bone and soft tissue, digital image processing is needed to properly superimpose the two images for medical analysis.

3.1. Detection of Disease Mutation

Lesion examination for disease prevention, including whether a lesion exists and its pathological type, is a basic task of health examination. Computer-based disease detection is a major manifestation of computer vision technology in smart medicine and is well suited to deep learning. In computer-based disease detection, features of body parts or organs in a healthy state are computed and extracted, usually through supervised learning or classical image processing techniques (such as filtering and mathematical morphology). The supervised learning approach requires training samples of comprehensive pathological images provided and manually labeled by professional physicians. The classifier produced by the feature engineering process maps each candidate's feature vector to the probability that it is an actual lesion.
Figure 3. Detection of disease mutation characteristics.
Lesion detection systems based on convolutional neural networks (CNNs) have improved accuracy by 13-34%, an improvement almost impossible to achieve with non-deep-learning classifiers (such as support vector machines). The CNN here consists of an input layer, two hidden layers, and an output layer and is trained with backpropagation.
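The described architecture (input layer, two hidden layers, output layer) can be sketched as a forward pass. Random weights stand in for parameters that backpropagation would learn, so this shows only the data flow of such a detector, not a trained model; all layer sizes are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(img, kernel):
    # Naive 2-D valid convolution (single channel, single kernel).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forward(img, kernel, w_hidden, w_out):
    # Input -> conv + ReLU (hidden 1) -> fully connected + ReLU (hidden 2)
    # -> output layer with softmax over lesion / no-lesion classes.
    h1 = relu(conv2d_valid(img, kernel)).ravel()   # 6x6 -> 36 features
    h2 = relu(h1 @ w_hidden)
    logits = h2 @ w_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((8, 8))                           # toy image patch
kernel = rng.standard_normal((3, 3)) * 0.1         # untrained weights
w_hidden = rng.standard_normal((36, 16)) * 0.1
w_out = rng.standard_normal((16, 2)) * 0.1

probs = forward(img, kernel, w_hidden, w_out)
```

Training would adjust `kernel`, `w_hidden`, and `w_out` by backpropagating the classification loss through exactly this forward computation.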

3.2. Pathological Image Segmentation

Image segmentation is a process that divides an image into several homogeneous regions according to similarity computed within the image and qualitatively classifies each region. In pathological image segmentation, traditional methods use only simple features such as color, and both region-based and boundary-based segmentation methods have been developed: the former relies on local spatial features of the image, such as uniformity of grayscale, texture, and other pixel statistics, while the latter mainly uses gradient information to determine target boundaries. Traditional methods make insufficient use of the rich information in the image, and most classification methods are based on simple techniques such as clustering, suffering from low accuracy and a narrow range of application. A multi-node, multi-level CNN model extracts as many potential features from the image as possible, uses PCA (Principal Component Analysis) to reduce their dimensionality and select key features, and then performs pixel-level segmentation of pathological images in combination with an SVM (Support Vector Machine). This method makes greater use of the information in the image itself and improves the accuracy of cell classification. Computer vision technology based on convolutional neural networks greatly enhances the efficiency and quality of pathological image segmentation.
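The dimensionality-reduction half of this pipeline can be sketched with a plain eigendecomposition PCA. The toy two-cluster "features" below stand in for CNN-extracted pixel features, and nearest-centroid classification stands in for the SVM stage purely for illustration.

```python
import numpy as np

def pca_fit_transform(X, k):
    # Project feature vectors onto the top-k principal components
    # (eigenvectors of the sample covariance with largest eigenvalues).
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)               # ascending eigenvalues
    components = vecs[:, np.argsort(vals)[::-1][:k]]
    return Xc @ components

# Toy stand-in for CNN-extracted features: two well-separated clusters in 10-D.
rng = np.random.default_rng(0)
class0 = rng.standard_normal((50, 10)) + 3.0
class1 = rng.standard_normal((50, 10)) - 3.0
X = np.vstack([class0, class1])
y = np.array([0] * 50 + [1] * 50)

Z = pca_fit_transform(X, k=2)                      # key features only

# A margin classifier (the SVM in the text) would separate the reduced
# features; nearest-centroid is used here for simplicity.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = np.where(np.linalg.norm(Z - c0, axis=1) < np.linalg.norm(Z - c1, axis=1), 0, 1)
accuracy = (pred == y).mean()
```

Because the class separation survives the projection onto the leading components, the reduced 2-D features classify as well as the full 10-D ones, which is the point of selecting key features before the classifier.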

3.3. Pathological Image Registration

Image registration is the premise of multi-image fusion and 3D modeling and is the key technology to determine the development of medical image fusion technology. In image cognition, a single modal image can only provide a single-dimensional perspective, and the spatial information in the image is difficult to display in an all-around way. Multiple patterns or images of the same pattern can enhance the information of the region of interest and complete the context information through registration fusion. Doctors can make a more accurate diagnosis or develop a more appropriate treatment by representing information from multiple imaging sources in a single image. The medical image registration process includes a variety of image processing methods, such as positioning, rotation, size scaling, and topological transformation; that is, by looking for a spatial transformation model, the corresponding points of two images can achieve the mapping of spatial position and anatomical structure. If this mapping process is a one-to-one correspondence, that is, in the overlapping region, any pixel in one image has a corresponding point in another image, we call it registration. Currently, the image registration model based on scale-invariant feature transformation and convolutional neural network is the main way of pathological image registration.
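The "spatial transformation model" sought in registration can be illustrated by fitting an affine transform to point correspondences by least squares; in practice the correspondences would come from, e.g., scale-invariant feature matching, while here they are synthesized from a known transform.

```python
import numpy as np

def estimate_affine(src, dst):
    # Least-squares fit of a 2x3 affine transform M mapping src -> dst,
    # given N >= 3 corresponding point pairs: dst = src @ M[:, :2].T + M[:, 2].
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, np.asarray(dst, float).ravel(), rcond=None)
    return params.reshape(2, 3)

def apply_affine(M, pts):
    return pts @ M[:, :2].T + M[:, 2]

# Synthetic correspondences: rotate 30 degrees and translate (5, -2).
theta = np.deg2rad(30)
true_M = np.array([[np.cos(theta), -np.sin(theta), 5.0],
                   [np.sin(theta),  np.cos(theta), -2.0]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]])
dst = apply_affine(true_M, src)

M = estimate_affine(src, dst)
```

Once the transform is recovered, every pixel of one image has a corresponding location in the other, which is the one-to-one mapping the text calls registration; non-rigid anatomy then requires richer transformation models than this affine sketch.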

3.4. Three-Dimensional Modeling and Simulation Based on Pathological Images

Traditional pathological examination often requires cutting tissue samples from the patient, which is time-consuming and laborious, harms the patient's health, and adds to the burden of treatment. Three-dimensional modeling and visualization based on pathological images can improve the pathological examination process and reduce the injury to the patient during examination. The core problem of image-based modeling is image-based geometric modeling, which studies how to recover real-time 3D information of organ tissue from images and build geometric models for 3D rendering and editing. On the basis of image registration, image-based 3D modeling methods mainly include contour, brightness, motion, and texture methods, all of which compute over image pixels and extract image features. The former group consists largely of traditional image processing operations, such as point-by-point processing, weighted summation of the gray values of corresponding pixels of two images, and gray-scale stretching or compression; the latter, based on deep learning, performs feature extraction, target segmentation, and other processing and is more versatile. 3D modeling and simulation based on pathological images can provide more comprehensive and accurate clinical diagnosis and treatment data by combining valuable physiological function information with accurate anatomical structure.
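The weighted gray-value summation mentioned above is a one-line fusion rule; a sketch on hypothetical registered "bone" and "soft tissue" images (the images and the weight are illustrative assumptions):

```python
import numpy as np

def fuse(img_a, img_b, alpha=0.5):
    # Weighted summation of corresponding gray values of two registered
    # images (e.g. bone from an X-ray, soft tissue from an MRI slice).
    out = alpha * img_a.astype(np.float32) + (1.0 - alpha) * img_b.astype(np.float32)
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# Hypothetical registered slices: a bright "bone" region over uniform tissue.
bone = np.zeros((16, 16), dtype=np.uint8)
bone[4:12, 4:12] = 240
tissue = np.full((16, 16), 80, dtype=np.uint8)

fused = fuse(bone, tissue, alpha=0.6)
```

The fused image keeps both the bone structure and the tissue background in one view, which is what lets a doctor read the bone/soft-tissue relationship from a single superimposed image.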

4. Immunohistochemical Analysis of FASN Expression in Cervical Cancer

4.1. Role of FASN in Cervical Cancer

Cervical cancer (CC) is one of the most common malignancies in the female reproductive system and is a leading cause of cancer-related deaths among women. Persistent Human Papillomavirus (HPV) infection, especially HPV16 and HPV18, is the primary cause of CC. The high mortality rate is largely attributed to cancer cell metastasis, particularly lymph node metastasis (LNM). In early-stage CC, determining LNM is crucial for postoperative treatment decisions, often involving systematic lymph node resection. However, this approach can lead to unnecessary surgeries for patients without LNM, increasing surgical risks.
Figure 4. FASN was highly expressed in CC tissues compared to paracancerous tissues.
Current treatment strategies for CC involve surgery, radiotherapy, and chemotherapy, either alone or in combination. Despite their effectiveness, these treatments can cause significant systemic toxicity and potential drug resistance, adversely impacting patient outcomes. Additionally, a substantial percentage of CC patients experience metastasis within two years of initial treatment. Thus, accurate screening and evaluation of effective LNM and CC therapy targets are essential for developing safer and more effective treatment strategies.

4.2. Immunohistochemical Analysis of FASN in Cervical Cancer

Fatty Acid Synthase (FASN) is a key enzyme in de novo fatty acid synthesis in the cytoplasm. It catalyzes the formation of long-chain fatty acids using acetyl-CoA, malonyl-CoA, and reduced nicotinamide adenine dinucleotide phosphate. Although FASN expression is typically low in most tissues, it is frequently overexpressed in various tumors, including colorectal adenocarcinoma, osteosarcoma, nephroblastoma, and epithelial ovarian cancer. FASN’s role extends to influencing tumor invasion, metastasis, and chemotherapy resistance, often correlating with poor prognosis.
FASN has been shown to significantly impact LNM in CC by promoting epithelial-mesenchymal transition and lymphangiogenesis through lipid accumulation. It also enhances cell migration and invasion by regulating cholesterol reprogramming and activating the c-Src/AKT/FAK signaling pathway.
Figure 5. FASN expression level correlates with pathological grading and LNM in CC patients.
Furthermore, FASN-induced lymphangiogenesis is associated with the secretion of PDGF-AA/IGFBP3. Despite extensive international research on FASN in malignant tumors, studies on its expression and clinical significance in CC remain limited. Immunohistochemical analysis of FASN in CC can provide valuable insights into its genomic and molecular pathogenesis, identify effective biomarkers, and guide the development of new therapeutic targets and more precise treatment approaches.

4.3. Experimental Status

The expression and clinical significance of FASN in CC tissue were verified by immunohistochemistry. Increased FASN expression in CC tissue correlates with clinical characteristics such as LNM, pathological grade, BMI, and HPV infection status, and may affect the prognosis of patients with CC. In addition, this study preliminarily explored the molecular mechanism by which FASN affects the occurrence and development of CC, indicating that FASN may be a potential molecular marker for diagnosing CC and a target for targeted therapy. The FASN gene is located on human chromosome 17q, which is not only a common site of gene rearrangement but also the location of amplification of many oncogenes [39]. Compared with normal cells, tumor cells consume large amounts of glucose and produce lactic acid even under aerobic conditions [40], during which high carbon flux and increased de novo synthesis of endogenous fatty acids may affect mutations in the tumor cell's genotype and phenotype [41]. FASN regulates lipid metabolism in tumor cells by participating in the mitogen-activated protein kinase pathway or the PI3K/Akt signaling pathway [42], and its overexpression commonly occurs in many epithelial cancers and their precancerous lesions and is associated with LNM and cancer recurrence [43]. LNM often leads to poor prognosis, and cancer patients with LNM have higher rates of recurrence and distant metastasis [44]. Studies have shown that FASN can inhibit apoptosis in mouse melanoma models, thereby promoting melanoma cell growth and LNM [45]. In addition, FASN is overexpressed in nearly half of breast cancers and is significantly correlated with LNM and distant metastasis. High expression of estrogen-related receptor α promotes de novo lipid synthesis and raises FASN expression, thus promoting lipid reprogramming. This mechanism is associated with LNM in estrogen-dependent endometrial carcinoma.

4.4. Discussion and Conclusion

In this study, the expression level of FASN was found to be significantly correlated with LNM in patients with CC, and immunohistochemical experiments confirmed this association. Together with the gene enrichment analysis in this study, this indicates that the expression level of FASN plays an important role in the occurrence and development of CC. Previous studies have also reported that FASN expression is an independent prognostic factor for patients with CC and is closely related to LNM, consistent with our findings. In recent years, the relationship between lipid metabolism and tumors has attracted wide attention, and FASN plays a key role in lipid metabolism. Multiple studies have identified FASN as a target of great interest for cancer therapy, and FASN inhibitors have become the focus of extensive research. The FASN inhibitor TVB-2640, now in a Phase II clinical trial (NCT03179904), has shown great potential for clinical translation, and its combination with trastuzumab and paclitaxel has shown effect in HER2+ advanced breast cancer patients. The FASN inhibitor orlistat was originally a weight-loss drug, yet in gynecological tumors such as endometrial cancer it inhibits the growth of cancer cells by blocking FASN-mediated fatty acid synthesis. Studies have shown that the FASN inhibitors C75 and cerulenin reduce LNM of CC in both in vivo and in vitro experiments, providing new ideas for treating metastatic and recurrent CC. In addition, FASN inhibitors can slow the proliferation of four CC cell lines (C-33A, ME-180, HeLa, and SiHa) and induce apoptosis; compared with C-33A and ME-180 cells, HPV16-positive SiHa cells and HPV18-positive HeLa cells showed greater decreases in proliferation.
This study suggests that FASN expression levels differ significantly by HPV infection status, warranting further attention to the relationship between HPV infection and FASN expression.

References

  1. Li, B., Zhang, K., Sun, Y., & Zou, J. (2024). Research on Travel Route Planning Optimization based on Large Language Model.
  2. Li, B., Zhang, X., Wang, X. A., Yong, S., Zhang, J., & Huang, J. (2019, April). A Feature Extraction Method for Daily-periodic Time Series Based on AETA Electromagnetic Disturbance Data. In Proceedings of the 2019 4th International Conference on Mathematics and Artificial Intelligence (pp. 215-219).
  3. Huang, D., Liu, Z., & Li, Y. (2024). Research on Tumors Segmentation based on Image Enhancement Method. arXiv preprint arXiv:2406.05170. [CrossRef]
  4. Huang, D., Xu, L., Tao, W., & Li, Y. (2024). Research on Genome Data Recognition and Analysis based on.
  5. Jin, Y., Shimizu, S., Li, Y., Yao, Y., Liu, X., Si, H., ... & Xiao, W. (2023). Proton therapy (PT) combined with concurrent chemotherapy for locally advanced non-small cell lung cancer with negative driver genes. Radiation Oncology, 18(1), 189. [CrossRef]
  6. Nitta, H., Mizumoto, M., Li, Y., Oshiro, Y., Fukushima, H., Suzuki, R., ... & Sakurai, H. (2024). An analysis of muscle growth after proton beam therapy for pediatric cancer. Journal of Radiation Research, 65(2), 251-255. [CrossRef]
  7. Nakamura, M., Mizumoto, M., Saito, T., Shimizu, S., Li, Y., Oshiro, Y., ... & Sakurai, H. (2024). A systematic review and meta-analysis of radiotherapy and particle beam therapy for skull base chondrosarcoma: TRP-chondrosarcoma 2024. Frontiers in Oncology, 14, 1380716. [CrossRef]
  8. Li, Y., Mizumoto, M., Oshiro, Y., Nitta, H., Saito, T., Iizumi, T., ... & Sakurai, H. (2023). A retrospective study of renal growth changes after proton beam therapy for Pediatric malignant tumor. Current Oncology, 30(2), 1560-1570. [CrossRef]
  9. Shimizu, S., Mizumoto, M., Okumura, T., Li, Y., Baba, K., Murakami, M., ... & Sakurai, H. (2021). Proton beam therapy for a giant hepatic hemangioma: A case report and literature review. Clinical and Translational Radiation Oncology, 27, 152-156. [CrossRef]
  10. Restrepo, D., Wu, C., Cajas, S. A., Nakayama, L. F., Celi, L. A. G., & Lopez, D. M. (2024). Multimodal Deep Learning for Low-Resource Settings: A Vector Embedding Alignment Approach for Healthcare Applications. medRxiv, 2024-06.
  11. Zhang, X., Xu, L., Li, N., & Zou, J. (2024). Research on Credit Risk Assessment Optimization based on Machine Learning. [CrossRef]
  12. Wang, H., Li, J., & Li, Z. (2024). AI-Generated Text Detection and Classification Based on BERT Deep Learning Algorithm. arXiv preprint arXiv:2405.16422.
  13. Lai, S., Feng, N., Sui, H., Ma, Z., Wang, H., Song, Z., ... & Yue, Y. (2024). FTS: A Framework to Find a Faithful TimeSieve. arXiv preprint arXiv:2405.19647.
  14. Liu, H., Shen, F., Qin, H., & Gao, F. (2024). Research on Flight Accidents Prediction based Back Propagation Neural Network. arXiv preprint arXiv:2406.13954.
  15. Li, J., Wang, Y., Xu, C., Liu, S., Dai, J., & Lan, K. (2024). Bioplastic derived from corn stover: Life cycle assessment and artificial intelligence-based analysis of uncertainty and variability. Science of The Total Environment, 174349. [CrossRef]
  16. Liu, H., Xie, R., Qin, H., & Li, Y. (2024). Research on Dangerous Flight Weather Prediction based on Machine Learning. arXiv preprint arXiv:2406.12298.
  17. Li, S., & Tajbakhsh, N. (2023). Scigraphqa: A large-scale synthetic multi-turn question-answering dataset for scientific graphs. arXiv preprint arXiv:2308.03349.
  18. Li, S., Lin, R., & Pei, S. (2024). Multi-modal preference alignment remedies regression of visual instruction tuning on language model. arXiv preprint arXiv:2402.10884.
  19. Wang, D. (Ed.). (2016). Information Science and Electronic Engineering: Proceedings of the 3rd International Conference of Electronic Engineering and Information Science (ICEEIS 2016), January 4-5, 2016, Harbin, China. CRC Press.
  20. Dhand, A., Reeves, M. J., Mu, Y., Rosner, B. A., Rothfeld-Wehrwein, Z. R., Nieves, A., ... & Sheth, K. N. (2024). Mapping the Ecological Terrain of Stroke Prehospital Delay: A Nationwide Registry Study. Stroke, 55(6), 1507-1516. [CrossRef]
  22. Bi, S., & Bao, W. (2024). Innovative Application of Artificial Intelligence Technology in Bank Credit Risk Management. arXiv preprint arXiv:2404.18183. [CrossRef]
  23. Chung, T. K., Doran, G., Cheung, T. H., Yim, S. F., Yu, M. Y., Worley Jr, M. J., ... & Wong, Y. F. (2021). Dissection of PIK3CA aberration for cervical adenocarcinoma outcomes. Cancers, 13(13), 3218.
  24. Yu, C., Jin, Y., Xing, Q., Zhang, Y., Guo, S., & Meng, S. (2024). Advanced User Credit Risk Prediction Model using LightGBM, XGBoost and Tabnet with SMOTEENN. arXiv preprint arXiv:2408.03497.
  25. Zheng, Q., Yu, C., Cao, J., Xu, Y., Xing, Q., & Jin, Y. (2024). Advanced Payment Security System: XGBoost, CatBoost and SMOTE Integrated. arXiv preprint arXiv:2406.04658.
  26. Kumada, H., Li, Y., Yasuoka, K., Naito, F., Kurihara, T., Sugimura, T., ... & Sakae, T. (2022). Current development status of iBNCT001, demonstrator of a LINAC-based neutron source for BNCT. Journal of Neutron Research, 24(3-4), 347-358. [CrossRef]
  27. Allman, R., Mu, Y., Dite, G. S., Spaeth, E., Hopper, J. L., & Rosner, B. A. (2023). Validation of a breast cancer risk prediction model based on the key risk factors: family history, mammographic density and polygenic risk. Breast Cancer Research and Treatment, 198(2), 335-347. [CrossRef]
  28. Shimizu, S., Nakai, K., Li, Y., Mizumoto, M., Kumada, H., Ishikawa, E., ... & Sakurai, H. (2023). Boron neutron capture therapy for recurrent glioblastoma multiforme: imaging evaluation of a case with long-term local control and survival. Cureus, 15(1). [CrossRef]
  29. Gupta, S., Motwani, S. S., Seitter, R. H., Wang, W., Mu, Y., Chute, D. F., ... & Curhan, G. C. (2023). Development and validation of a risk model for predicting contrast-associated acute kidney injury in patients with cancer: evaluation in over 46,000 CT examinations. American Journal of Roentgenology, 221(4), 486-501. [CrossRef]
  30. Weng, A. Depression and Risky Health Behaviors. Available at SSRN 4843979.
  31. Rosner, B., Glynn, R. J., Eliassen, A. H., Hankinson, S. E., Tamimi, R. M., Chen, W. Y., ... & Tworoger, S. S. (2022). A multi-state survival model for time to breast cancer mortality among a cohort of initially disease-free women. Cancer Epidemiology, Biomarkers & Prevention, 31(8), 1582-1592. [CrossRef]
  32. Yaghjyan, L., Heng, Y. J., Baker, G. M., Bret-Mounet, V., Murthy, D., Mahoney, M. B., ... & Tamimi, R. M. (2022). Reliability of CD44, CD24, and ALDH1A1 immunohistochemical staining: Pathologist assessment compared to quantitative image analysis. Frontiers in Medicine, 9, 1040061. [CrossRef]
  33. Zhou, Q. (2024). Portfolio Optimization with Robust Covariance and Conditional Value-at-Risk Constraints. arXiv preprint arXiv:2406.00610.
  34. Zhou, Q. (2024). Application of Black-Litterman Bayesian in Statistical Arbitrage. arXiv preprint arXiv:2406.06706.
  35. Haowei, M., Ebrahimi, S., Mansouri, S., Abdullaev, S. S., Alsaab, H. O., & Hassan, Z. F. (2023). CRISPR/Cas-based nanobiosensors: A reinforced approach for specific and sensitive recognition of mycotoxins. Food Bioscience, 56, 103110. [CrossRef]
  36. Zhang, J., Cao, J., Chang, J., Li, X., Liu, H., & Li, Z. (2024). Research on the Application of Computer Vision Based on Deep Learning in Autonomous Driving Technology. arXiv preprint arXiv:2406.00490.
  37. Rosner, B., Tamimi, R. M., Kraft, P., Gao, C., Mu, Y., Scott, C., ... & Colditz, G. A. (2021). Simplified breast risk tool integrating questionnaire risk factors, mammographic density, and polygenic risk score: development and validation. Cancer Epidemiology, Biomarkers & Prevention, 30(4), 600-607. [CrossRef]
  38. Sarkis, R. A., Goksen, Y., Mu, Y., Rosner, B., & Lee, J. W. (2018). Cognitive and fatigue side effects of anti-epileptic drugs: an analysis of phase III add-on trials. Journal of neurology, 265(9), 2137-2142. [CrossRef]
  39. Li, B., Jiang, G., Li, N., & Song, C. (2024). Research on Large-scale Structured and Unstructured Data Processing based on Large Language Model.
  41. Li, Y., Matsumoto, Y., Chen, L., Sugawara, Y., Oe, E., Fujisawa, N., ... & Sakurai, H. (2023). Smart Nanofiber Mesh with Locally Sustained Drug Release Enabled Synergistic Combination Therapy for Glioblastoma. Nanomaterials, 13(3), 414. [CrossRef]
  42. Yu, C., Xu, Y., Cao, J., Zhang, Y., Jin, Y., & Zhu, M. (2024). Credit card fraud detection using advanced transformer model. arXiv preprint arXiv:2406.03733.
  43. Chen, Z., Ge, J., Zhan, H., Huang, S., & Wang, D. (2021). Pareto self-supervised training for few-shot learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13663-13672).
  44. Zhang, Y., Qu, T., Yao, T., Gong, Y., & Bian, X. (2024). Research on the application of BIM technology in intelligent building technology. Applied and Computational Engineering, 61, 29-34. [CrossRef]
  45. Yang, J., Qin, H., Por, L. Y., Shaikh, Z. A., Alfarraj, O., Tolba, A., ... & Thwin, M. (2024). Optimizing diabetic retinopathy detection with inception-V4 and dynamic version of snow leopard optimization algorithm. Biomedical Signal Processing and Control, 96, 106501. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.