1. Introduction
Percutaneous biopsy has been established as a safe, effective procedure for cancer diagnosis. The success rate of the biopsy is measured by the ability to collect sufficient viable material for molecular genetic and histological analysis [1,2]. However, due to the heterogeneity of tumor tissue, biopsy sensitivity/specificity varies within a relatively large range (65% to 95%) [3,4,5,6]. Therefore, proper biopsy guidance has become a real clinical need.
Although radiologic imaging is used to guide biopsy needle placement within the tumor, it does not provide sufficient resolution to assess tissue cellularity, which is defined as the ratio between viable tumor and benign stromal constituents. For example, high-resolution ultrasound (US) has been used to provide biopsy guidance, but still does not provide the expected results, as its resolution is not sufficient to resolve tissue morphology at the micron scale, which is needed to properly assess cellularity [7,8]. US is also operator dependent and requires a radiologist experienced in sonography to correctly interpret the imaging findings [9,10].
Without proper guidance, biopsy often needs to be repeated, leading to significant cost to the health care system [11,12]. Considering that millions of core needle biopsies are performed annually in the US [13,14,15], if on average 20% of these procedures need to be repeated [12], the additional costs to the health care system become immense.
Besides the financial implications, inadequate quality of biopsy specimens can have a negative impact on downstream molecular pathology and can delay pathway-specific targeted therapy [16]. Furthermore, as novel therapeutics are routinely introduced with companion biomarkers, biomarker testing is expected to become the standard of care in the very near future. Toward this end, the FDA has mandated that targeted therapies be accompanied by patient-tailored companion diagnostic tests [17,18]. As a result, it is envisioned that image-guided biopsies will start playing a significant role in oncologic clinical trials. Thus, techniques able to provide reliable assessment of tissue at the cellular scale, at the time of sampling, will be essential to reliably obtain adequate amounts of viable tumor tissue for biomarker analysis. Biopsy cores with large amounts of necrotic or non-tumor tissue are not suitable for such tests.
Various optical technologies have been explored to guide biopsy and improve biopsy sampling, including Raman spectroscopy, dynamic light scattering, and optical coherence tomography (OCT) [19,20,21,22,23,24,25,26,27,28]. Among them, OCT has shown significant promise due to its ability to assess true tissue morphology within relatively large volumes of tissue, comparable to the size of the biopsy cores, at a higher speed than the other modalities. OCT is routinely used for differentiating between normal tissue and cancer in various organs [19,20]. However, the interpretation of OCT images can be highly subjective, as readers may have different understandings of the tissue morphology shown in the images. Furthermore, when performing a biopsy procedure, the interventional radiologist must decide in real time on the biopsy location. Therefore, we investigated the use of user-assisted deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, for rapid image analysis. AI has made remarkable breakthroughs in medical imaging, especially in image classification and pattern recognition [29,30,31,32,33,34,35]. Studies have shown that OCT image evaluation by DL algorithms achieves good performance for disease detection, prognosis prediction, and image quality control, suggesting that the use of DL technology could enhance the efficiency of the clinical workflow [36,37].
This paper presents a novel AI-OCT approach for real-time guidance of core needle biopsy procedures. A hand-held OCT probe was developed to collect in vivo images from a rabbit model of cancer. Selected OCT images were used to train an AI model for tissue-type differentiation. The images were selected based on the pathologist's feedback, with normal and tumor tissue areas annotated. The performance of the AI model was assessed against annotations performed by trained OCT image readers. The AI model showed results similar to those of humans performing the same classification tasks. Specifically, tissue boundary segmentation was excellent (>99% accuracy), closely mimicking the ground truth provided by the human annotations, while >84% correlation was obtained for tumor and non-tumor classification.
2. Materials and Methods
OCT Instrumentation: A customized OCT imaging approach, previously reported by our team [38], was used in this study. In brief, an axial OCT reflectivity profile (also called an A-line) is acquired only when an incremental movement of the OCT catheter probe is detected by a linear encoder (see concept in Figure 1). The encoder creates a trigger that is sent to a data acquisition (DAQ) card, which initiates the data acquisition and processing sequence. Each processed OCT signal (A-scan) is appended to an array at each incoming encoder trigger to form an OCT image.
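The encoder-gated acquisition described above can be sketched as follows. This is a minimal illustration only: `build_oct_image`, `encoder_triggers`, and `acquire_a_scan` are hypothetical stand-ins for the linear-encoder events and the DAQ/GPU processing chain, and the 2400 x 600 dimensions follow the image format described later in the Methods.

```python
def build_oct_image(encoder_triggers, acquire_a_scan):
    """Append one processed A-scan (depth profile) per encoder trigger.

    Because acquisition is gated by probe displacement rather than time,
    the lateral image axis tracks distance traveled, not scan speed.
    """
    image = []  # list of A-scan columns
    for _ in encoder_triggers:
        image.append(acquire_a_scan())  # one depth profile per encoder step
    return image

# Simulated scan: 2400 encoder steps, each yielding a 600-pixel depth profile
image = build_oct_image(range(2400), lambda: [0.0] * 600)
print(len(image), len(image[0]))  # 2400 600
```

The key design point is that no A-scan is acquired without a trigger, so pausing or slowing the probe simply pauses image formation rather than distorting it.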
By using this approach, data collection can be performed at variable speeds of probe advancement through the tissue, enabling the use of either a manual or a motorized scanning approach for the OCT catheter. Scan nonlinearity does not impact image quality.
The OCT instrument, based on the spectral-domain approach, uses a 1310 nm light source with a bandwidth of approximately 85 nm, providing an axial resolution of ~10 µm, which supports the detection of small tissue features at the cellular level. The light from the broadband source is split into the sample and reference arms of the interferometer by a 10/90 fiber splitter. A fiber optic circulator is inserted between the light source and the fiber splitter to maximize the return from both arms of the fiber interferometer. The interference signal obtained by mixing the light returned from the sample with that from the reference arm is sent to a linear-array camera. The fringe signals are digitized by a Camera Link frame grabber and processed in real time by a graphics processing unit (GPU).
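As a rough consistency check on the stated axial resolution, the standard Gaussian-spectrum formula δz = (2 ln 2 / π) · λ₀² / Δλ, applied to the source parameters above, gives a free-space value consistent with the quoted ~10 µm:

```python
from math import log, pi

lambda0 = 1310e-9  # center wavelength (m)
dlambda = 85e-9    # spectral bandwidth, FWHM (m)

# Axial resolution for a Gaussian source spectrum (value in air)
dz = (2 * log(2) / pi) * lambda0**2 / dlambda
print(f"axial resolution ~ {dz * 1e6:.1f} um")  # ~8.9 um
```

In tissue, the effective axial resolution is further reduced by the refractive index (roughly δz/n, with n ≈ 1.35-1.4 for soft tissue), so ~10 µm is a reasonable round figure.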
OCT probe: A specially designed OCT probe, suitable for tissue investigation through the bore of the biopsy needle, was used in this study. A simplified schematic design of this probe is shown in Figure 2.
As observed, the probe consists of four major parts: the probe main body, the plunger, the encoder, and the needle containing the OCT fiber optic catheter. The plunger is spring-loaded and has the fiber optic OCT catheter attached to it through a fiber connector (see cross-sectional and transparent views). When pressed, the plunger moves the OCT catheter forward within a custom-made needle. The OCT light exits the needle through a slot made at the tip of the needle. The slot area of the needle is covered by a fluorinated ethylene propylene (FEP) tube to seal the OCT catheter inside the needle and prevent the tissue from catching on the needle. To secure the FEP tube in place, the needle is slightly tapered towards its tip, where the axial slot is located. An optical encoder is attached to the probe holder and used to monitor the movement of the OCT fiber optic catheter relative to an optical scale, which is also attached to the plunger. A sliding mechanism maintains the scale parallel to the encoder surface, so that correct scale readings are provided while the OCT catheter moves relative to the main body of the hand-held probe.
The custom-made needle is attached to the probe holder through a luer-lock cap, identical to that of commercial syringes. As a result, this needle can be easily replaced during the procedure, if needed. An electronic circuit inserted into the probe body enables A-line acquisition only after the plunger has moved a distance of at least 1 mm. Thus, it blocks false triggers generated by small vibrations during probe insertion into the tissue, before the plunger is pushed. This circuit also formats the trigger signal so it can be reliably sent through a 2 m long mini-USB cable to the instrumentation unit.
The OCT fiber catheter consists of a single-mode (SM) fiber terminated with a micro lens, polished at 45 deg to send the light orthogonally to the catheter scanning direction. The catheter is encapsulated within a 460 µm outer diameter hypodermic tube and terminated with a fiber optic connector (Model DMI, Diamond USA).
Photographs of the Gen I instrument and biopsy guidance probe are shown in Figure 3. The instrumentation rack is small (16” x 14” x 12”) and incorporates the power supply, the spectrometer, the optical delay line, the light source, and the fiber optic interferometer. The computer can be placed on the side or underneath. The instrumentation unit can be placed within a commercially available wheeled rack to add portability. The OCT probe is easy to use: the plunger can be pushed with the thumb, while the index and middle fingers can be inserted through the probe ears to hold it in place. OCT images at multiple angular positions can be generated by successively rotating the probe, while still in the tissue, and repeating the scans of the OCT catheter.
Animal model: A rabbit model of cancer (albino New Zealand White (NZW) rabbit, Strain Code 052) was used to perform an in vivo study for technology evaluation at MD Anderson Cancer Center (MDACC), Houston, TX. All experiments were performed in agreement with the MDACC IACUC-approved animal protocol 00001349-RN00-AR002.
A total of 30 animals were prepared for this study using the following protocol:
(a) Percutaneously and intramuscularly inject VX2 tumor cells into both thighs of each rabbit;
(b) Allow the tumor to grow for 10 to 14 (±2) days to reach a size of 1.5 to 2 cm in diameter (appropriate size for use);
(c) Use palpation to verify tumor growth in the thighs and estimate tumor volume.
Data collection: The imaging protocol included the following steps:
Percutaneously insert a biopsy guidance needle (18 Ga) into the tumor using ultrasound guidance;
Remove the needle stylet and insert the optical probe into the tumor site through the bore of the guidance needle;
Perform up to 4 quadrant OCT measurements (4x 90 deg angular orientations) at each location and collect at least 2 images/quadrant;
Retract the OCT probe and use an 18 Ga core biopsy gun to collect 1 biopsy core after imaging is performed;
Reinsert the guidance needle in the area adjacent to the tumor and repeat the steps above to collect OCT images of healthy tissue;
Following the final biopsy, euthanize the animal using Beuthanasia-D solution (1 ml/10 lb).
As each animal had 2 tumor inoculations (one in each thigh), with a minimum of 4 images collected from each site, plus one image of the healthy tissue near each tumor site, over 300 images were collected. Representative examples of such images are shown in Figure 4. As can be easily observed, morphology details such as muscle fiber bundles and microvessels were well recovered by OCT. Approximately 100 images corresponding to each tissue type were selected for the AI algorithm training set. These images were selected in collaboration with the pathologist to best match the pathology findings.
Data Processing: OCT image analysis was performed using convolutional neural network (CNN) artificial intelligence (AI) software (Aiforia Technologies Oyj, Pursimiehenkatu 29-31, FI-00150 Helsinki, Finland). This is a supervised deep learning software for image analysis, which uses annotated data for AI algorithm training. The AI model was designed to segment tissue boundaries, while excluding air-tissue interfaces and surface artifacts (e.g., vertical white lines, “shadows” from blood vessels), and then to segment 3 regions of interest within the tissue: cancer (tumor); necrosis within the tumor; and healthy tissue, also referred to here as “non-tumor”.
The AI model was structured in 3 layers/3 classes, as shown in Table 1. The main goal was to differentiate between normal and cancer tissue. The first class, called “Tissue”, defines the tissue boundaries (top and bottom). Once boundaries are detected, the next step is to differentiate the two major classes: non-tumor (healthy) tissue and tumor tissue. The tumor class contains a sub-layer called necrotic tissue. This is an important subclass to highlight, as necrotic tissue does not provide any diagnostic value.
A total of ~100 OCT images with 4.2 μm/px resolution were selected for the initial training of the AI model. The images were in bmp format with 2400 x 600 pixels, corresponding to an area of 2.5 mm x 10 mm. Most of the images contained one single tissue type, except for the necrotic tissue, which is inherently present within the tumor. As this relatively low number of images proved insufficient for satisfactory results, additional images were added for model training. However, since the remaining images contained more than one tissue type, they were annotated by expert OCT image readers to define the boundaries of each tissue type (see example in Figure 5). The selection of multiple areas in each image enabled a significant increase (~3x) in the total training set of images.
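The image dimensions quoted above follow directly from the pixel size; a quick arithmetic check (the rounding to 2.5 mm x 10 mm is the authors'):

```python
px_um = 4.2                    # isotropic pixel size (um/px)
width_px, depth_px = 2400, 600 # image format (lateral x depth)

lateral_mm = width_px * px_um / 1000  # scan length along the probe pullback
depth_mm = depth_px * px_um / 1000    # imaging depth into the tissue
print(f"{lateral_mm:.2f} mm x {depth_mm:.2f} mm")  # 10.08 mm x 2.52 mm
```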
Although the total number of training images was still relatively small for an AI model, the model was able to produce satisfactory results by using a supervised training approach. OCT reader supervision was used during AI software development to optimize the model for the available training data. Over 100 selected regions were visually inspected by our team during AI model development to determine whether tissue boundaries were properly detected and whether cancer/normal tissue interfaces were properly differentiated. OCT reader supervision greatly improved AI model performance.
3. Results
After careful training, the AI model was applied to a validation set of ~100 images not included in the training set. The AI results were compared against images annotated by OCT image readers. The results were quite satisfactory, considering the relatively small training set of images used in this preliminary evaluation.
A representative example of tissue differentiation by the classes specified above is shown in Figure 6. As may be observed, a few areas were not classified (see arrows), as the AI model was not able to associate the tissue with a specific class with high certainty (>90%). Overall, the OCT reader-AI agreement was good. The bottom boundary was more accurately identified by the OCT reader, while the AI model slightly overestimated tissue depth in some locations.
Another representative example is shown in Figure 7, where cancer tissue is present in a larger amount than normal tissue, indicating that this area is appropriate for taking a biopsy core. Very small areas of potential necrosis were detected; however, this would not be a real concern for the interventional radiologist, as the amount of tumor tissue is fairly large (over 75% of the scanned area).
The entire validation set of images was analyzed by three OCT readers who individually made the annotations, and the consensus among readers was analyzed. Areas of each tissue type were calculated for each reader, as well as for the AI-segmented images. Reader agreement (human vs. human), as well as the AI vs. human agreement, was analyzed. The false positive (FP) rate, false negative (FN) rate, precision, sensitivity, and F1 score were assessed for each class, using the formulas defined in Table 2.
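The per-class metrics follow the standard definitions (assumed here to match the formulas in Table 2). The counts below are illustrative only, not the study's data:

```python
def segmentation_metrics(tp, fp, fn):
    """Standard per-class metrics from true/false positive and false negative
    pixel (or area) counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # also called recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1

# Hypothetical counts for one tissue class
p, s, f1 = segmentation_metrics(tp=750, fp=150, fn=100)
print(f"precision={p:.3f} sensitivity={s:.3f} F1={f1:.3f}")
# precision=0.833 sensitivity=0.882 F1=0.857
```

The F1 score, being the harmonic mean of precision and sensitivity, penalizes a class that scores well on one metric but poorly on the other, which is why it is singled out below as the preferred summary metric.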
As may be observed, the false negative, false positive, and error rates were under 3% for normal and tumor tissue, while the error was somewhat higher (~5%) for necrotic tissue. The precision, sensitivity, and F1 score were within the 70% range for normal and tumor tissue, for both the AI vs. human and human vs. human comparisons. However, lower values were obtained for necrotic tissue. It should be noted that the F1 score, which combines the precision and recall of a model, is a preferred metric for evaluating the accuracy of the AI model, as it reflects how often the model made a correct prediction across the entire dataset.
The agreement between the AI model and the human readers, as well as among the human readers, was calculated as a fraction using the following formula:
Over 95% agreement between AI and human findings was obtained for the F1 score, while somewhat lower agreement (~84%) was obtained for necrotic tissue.
4. Discussion
The AI-OCT technology was preliminarily evaluated on an animal model of cancer to determine the feasibility of its safe in vivo use, while the potential use of the AI approach for real-time assessment of tissue composition at the tip of the biopsy needle was analyzed as well.
The proposed encoder-feedback approach proved to work reliably and generate high-quality micron-scale images at a rate of 1-2 images/sec, dictated mainly by the user's ability to perform a faster or slower mechanical scan of the OCT probe by pushing/releasing the probe plunger. In some cases, minimal motion artifacts were noted when the user did not have a steady hand and the probe moved while acquiring an OCT scan. Therefore, further implementations will consider the use of a motorized probe.
The AI model was optimized for the current training OCT data set, which used 255 images. It was noted that there were regions within the tissue where the model could not accurately classify tumor or non-tumor regions. There are likely two related reasons: first, these regions were also challenging for the human annotators and ground-truth experts, who struggled to agree upon the class designation and make accurate annotations for model training. This is because the visual patterns in the shades of white, gray, and black that humans recognize as “tumor” or “non-tumor” in some regions of the OCT images overlap in their morphology. Second, the number of images in the training dataset with these types of challenging regions was relatively small. It is certainly possible to substantially improve model classification accuracy with additional training data; therefore, this is the next step we propose to take to further evaluate the potential of the AI-OCT approach for biopsy guidance.
5. Conclusions
The use of a novel AI-OCT approach for analyzing tissue composition at the tip of the biopsy needle was evaluated. OCT was able to provide high-quality images of the tissue at the tip of the biopsy needle, while the cloud-based AI analysis of these images provided suitable results for analyzing tissue composition in real time. However, further improvements are still needed to make the technology more accurate, which will likely improve its potential for clinical adoption. The use of larger training sets of images is deemed necessary. A human trial is planned to generate large training sets of images and further improve AI accuracy.
Author Contributions
Nicusor Iftimia: Conceptualization; Methodology; Instrument fabrication; Data collection and analysis; Manuscript preparation. Gopi Maguluri: Instrument software; Cloud communication with the AI software; Data analysis; Data curation. John Grimble: Instrument mechanical design and fabrication. Aliana Caron: Image annotation; AI training. Ge Zhu: Image annotation; AI training. Savitri Krishnamurthy: Pathology processing; OCT-histology correlation. Amanda McWatters: Animal model development; Animal study coordination. Gillian Beamer: AI training and data analysis; Paper proofreading. Seung-Yi Lee: AI training and data analysis; Data curation. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the US National Institutes of Health, grant 5R44CA273961 and Contract No. 75N91019C00010.
Institutional Animal Care
All experiments were performed in agreement with the MDACC IACUC-approved animal protocol 00001349-RN00-AR002.
Data Availability Statement
Data supporting reported results are considered proprietary to Physical Sciences and cannot be released without signing a confidentiality agreement.
Acknowledgments
The research presented in this paper was supported by the US National Institutes of Health under grant 5R44CA273961 and Contract No. 75N91019C00010.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
References
- Basik M, Aguilar-Mahecha A, Rousseau C, Diaz Z, Tejpar S, Spatz A, Greenwood CMT, Batist G. Biopsies: Next-generation biospecimens for tailoring therapy. Nat Rev Clin Oncol 2013, 10, 437–450.
- Tam AL, Lim HJ, Wistuba II, et al. Image-guided biopsy in the era of personalized cancer care: Proceedings from the Society of Interventional Radiology Research Consensus Panel. J Vasc Interv Radiol 2016, 27, 8–19.
- Swanton C. Intratumor heterogeneity: Evolution through space and time. Cancer Res 2012, 72, 4875–4882.
- Marusyk A, Almendro V, Polyak K. Intra-tumour heterogeneity: A looking glass for cancer? Nat Rev Cancer 2012, 12, 323–334.
- Hatada T, Ishii H, Ichii S, Okada K, Fujiwara Y, Yamamura T. Diagnostic value of ultrasound-guided fine-needle aspiration biopsy, core-needle biopsy, and evaluation of combined use in the diagnosis of breast lesions. J Am Coll Surg 2000, 190, 299–303.
- Mitra S, Dey P. Fine-needle aspiration and core biopsy in the diagnosis of breast lesions: A comparison and review of the literature. Cytojournal 2016, 13, 1742–6413.
- Brem RF, Lenihan MJ, Lieberman J, Torrente J. Screening breast ultrasound: past, present, and future. AJR Am J Roentgenol 2015, 204, 234–240.
- Cummins T, Yoon C, Choi H, Eliahoo P, Kim HH, Yamashita MW, Hovanessian-Larsen LJ, Lang JE, Sener SF, Vallone J, Martin SE, Kirk Shung K. High-frequency ultrasound imaging for breast cancer biopsy guidance. J Med Imaging (Bellingham) 2015, 2, 047001.
- Bui-Mansfield LT, Chen DC, O'Brien SD. Accuracy of ultrasound of musculoskeletal soft-tissue tumors. AJR Am J Roentgenol 2015, 204, W218.
- Carra BJ, Bui-Mansfield LT, O'Brien SD, et al. Sonography of musculoskeletal soft-tissue masses: techniques, pearls, and pitfalls. AJR Am J Roentgenol 2014, 202, 1281–1290.
- Resnick MJ, Lee DJ, Magerfleisch L, et al. Repeat prostate biopsy and the incremental risk of clinically insignificant prostate cancer. Urology 2011, 77, 548–552.
- Wu JS, McMahon CJ, Lozano-Calderon S, Kung JW. Utility of repeat core needle biopsy of musculoskeletal lesions with initially nondiagnostic findings. AJR Am J Roentgenol 2017, 208, 609–616.
- Katsis JM, Rickman OB, Maldonado F, Lentz RJ. Bronchoscopic biopsy of peripheral pulmonary lesions in 2020: a review of existing technologies. J Thorac Dis 2020, 12, 3253–3262.
- Chappy SL. Women's experience with breast biopsy. AORN J 2004, 80, 885–901.
- Silverstein MJ, Recht A, Lagios MD, Bleiweiss IJ, Blumencranz PW, Gizienski T, et al. Special report: consensus conference III. Image-detected breast cancer: state-of-the-art diagnosis and treatment. J Am Coll Surg 2009, 209, 504–520.
- Tam AL, Lim HJ, Wistuba II, et al. Image-guided biopsy in the era of personalized cancer care: Proceedings from the Society of Interventional Radiology Research Consensus Panel. J Vasc Interv Radiol 2016, 27, 8–19.
- Lee JM, Han JJ, Altwerger G, Kohn EC. Proteomics and biomarkers in clinical trials for drug development. J Proteomics 2011, 74, 2632–2641.
- Myers MB. Targeted therapies with companion diagnostics in the management of breast cancer: current perspectives. Pharmgenomics Pers Med 2016, 22, 7–16.
- Iftimia N, Park J, Maguluri G, Krishnamurthy S, McWatters A, Sabir SH. Investigation of tissue cellularity at the tip of the core biopsy needle with optical coherence tomography. Biomed Opt Express 2018, 9, 694–704.
- Wilson RH, Vishwanath K, Mycek MA. Optical methods for quantitative and label-free sensing in living human tissues: principles, techniques, and applications. Adv Phys 2016, 1, 523–543.
- Krishnamurthy S. Microscopy: A promising next-generation digital microscopy tool for surgical pathology practice. Arch Pathol Lab Med 2019, 143, 1058–1068.
- Konecky SD, Mazhar A, Cuccia D, Durkin AJ, Schotland JC, Tromberg BJ. Quantitative optical tomography of sub-surface heterogeneities using spatially modulated structured light. Opt Express 2009, 17, 14780–14790.
- Iftimia N, Mujat M, Ustun T, Ferguson D, Vu D, Hammer D. Spectral-domain low coherence interferometry/optical coherence tomography system for fine needle breast biopsy guidance. Rev Sci Instrum 2009, 80, 024302.
- Iftimia N, Park J, Maguluri G, Krishnamurthy S, McWatters A, Sabir SH. Investigation of tissue cellularity at the tip of the core biopsy needle with optical coherence tomography. Biomed Opt Express 2018, 9, 694–704.
- Quirk BC, McLaughlin RA, Curatolo A, Kirk RW, Noble PB, Sampson DD. In situ imaging of lung alveoli with an optical coherence tomography needle probe. J Biomed Opt 2011, 16, 036009.
- Liang CP, Wierwille J, Moreira T, Schwartzbauer G, Jafri MS, Tang CM, Chen Y. A forward-imaging needle-type OCT probe for image guided stereotactic procedures. Opt Express 2011, 19, 26283–26294.
- Chang EW, Gardecki J, Pitman M, Wilsterman EJ, Patel A, Tearney GJ, Iftimia N. Low coherence interferometry approach for aiding fine needle aspiration biopsies. J Biomed Opt 2014, 19, 116005.
- Curatolo A, McLaughlin RA, Quirk BC, Kirk RW, Bourke AG, Wood BA, Robbins PD, Saunders CM, Sampson DD. Ultrasound-guided optical coherence tomography needle probe for the assessment of breast cancer tumor margins. AJR Am J Roentgenol 2012, 199, W520–W522.
- Wang J, Xu Y, Boppart SA. Review of optical coherence tomography in oncology. J Biomed Opt 2017, 22, 1–23.
- Hesamian MH, Jia W, He X, Kennedy P. Deep learning techniques for medical image segmentation: Achievements and challenges. J Digit Imaging 2019, 32, 582–596.
- Moorthy U, Gandhi UD. A survey of big data analytics using machine learning algorithms. Anthol Big Data Anal Archit Appl 2022, 655–677.
- Luis F, Kumar I, Vijayakumar V, Singh KU, Kumar A. Identifying the patterns state of the art of deep learning models and computational models their challenges. Multimed Syst 2021, 27, 599–613.
- Moorthy U, Gandhi UD. A novel optimal feature selection for medical data classification using CNN based deep learning. J Ambient Intell Humaniz Comput 2021, 12, 3527–3538.
- Chen SX, Ni YQ, Zhou L. A deep learning framework for adaptive compressive sensing of high-speed train vibration responses. Struct Control Health Monit 2020, 29, E2979.
- Finck T, Singh SP, Wang L, Gupta S, Goli H, Padmanabhan P, Gulyás B. A basic introduction to deep learning for medical image analysis. Sensors 2021, 20, 5097.
- Dahrouj M, Miller JB. Artificial intelligence (AI) and retinal optical coherence tomography (OCT). Semin Ophthalmol 2021, 36, 341–345.
- Kapoor R, Whigham BT, Al-Aswad LA. Artificial intelligence and optical coherence tomography imaging. Asia Pac J Ophthalmol (Phila) 2019, 8, 187–194.
- Iftimia N, Maguluri G, Chang EW, Chang S, Magill J, Brugge W. Hand scanning optical coherence tomography imaging using encoder feedback. Opt Lett 2014, 39, 6807–6810.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).