Preprint
Essay

This version is not peer-reviewed.

Artificial Intelligence Application for Thoracic Surgeon: “Phenomenal Cosmic Powers into the Magic Lamp”

A peer-reviewed article of this preprint also exists.

Submitted: 12 May 2024

Posted: 13 May 2024


Abstract
In the digital age, Artificial Intelligence (AI) is emerging as a transformative force in various sectors, including medicine. This article explores the potential of AI, akin to the magical genie of Aladdin's lamp, particularly within thoracic surgery and lung cancer management. It examines AI applications like machine learning and deep learning in achieving more precise diagnoses, preoperative risk assessment, and improved surgical outcomes. The challenges and advancements in AI integration, especially in computer vision and multi-modal models, are discussed alongside their impact on robotic surgery and operating room management. Despite its transformative potential, implementing AI in medicine faces challenges regarding data scarcity, interpretability issues, and ethical concerns. Collaboration between AI and medical communities is essential to address these challenges and unlock the full potential of AI in revolutionizing clinical practice. This article underscores the importance of further research and interdisciplinary collaboration to ensure the safe and effective deployment of AI in real-world clinical settings.
Introduction
In the digital age, Artificial Intelligence (AI) is emerging as a powerful tool poised to revolutionize various sectors, including medicine [1]. The transformative potential of artificial intelligence today is akin to the phenomenal cosmic powers of the genie contained within the enchanted lamp of the Aladdin tale. Like a magic genie, the applications of machine learning and deep learning techniques promise to grant the wishes of clinicians and surgeons, yielding significant advancements in critical areas such as diagnosis, prognosis, treatment planning, and pharmaceutical research. Concerning lung cancer, the field has seen a profound transformation in recent years, characterized by intricate diagnostic processes and complex therapeutic protocols that integrate various omics domains, ushering in a personalized and preventive healthcare approach.
Numerous applications can also be identified in thoracic surgery but, in keeping with the Aladdin tale, the three most important desires of the thoracic surgeon are: to achieve an accurate preoperative diagnosis of lung lesions, to evaluate and mitigate preoperative risks, and to enhance surgical performance by choosing a personalized surgical approach. While the potential of these methodologies may seem unlimited, a multitude of unanswered questions emerges regarding their true effectiveness. Of particular concern are the control systems within the learning mechanisms of neural networks, as well as the short- and long-term supervision of their associated outcomes.
Large Language Models (LLMs) and multi-modal models are emerging as promising instruments to enhance diagnostic precision, operational efficiency, and personalized care.
LLMs and chatbots. LLMs, exemplified by GPT-3 [2] and LLaMA [3], are artificial intelligence models proficient in processing and generating text in a sophisticated manner. LLMs can be used to analyze medical records and reports to identify patterns and anomalies indicative of pathology; to produce concise, comprehensible medical reports for patients and other healthcare professionals; and to create chatbots capable of automatically delivering information and support to patients. In the wake of the success of LLMs in various scientific domains, a significant area of research has been dedicated to applying these models in the medical and pharmaceutical realms [4]. M. Sallam [5] has examined the applicability of ChatGPT in the biomedical sector, assessing its potential in scientific output, clinical case analysis, and diagnostic support, while also shedding light on the limitations and associated risks: ethical and copyright concerns regarding the data used to train such models, as well as the potential for inaccurate responses that lack real or scientific validation (colloquially referred to as hallucination) [6,7]. To enhance reliability in the medical domain, several recent studies have subjected cutting-edge LLMs to additional training phases using strictly medical data, including publications sourced from the PubMed database [8,9,10]; researchers at Google Research and Google DeepMind have introduced Med-PaLM [11], a model designed to process textual data that demonstrates promising performance across various benchmarks, although its accuracy falls short of human expert judgment.
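As a concrete illustration of the report-generation use case described above, the minimal sketch below sends an operative note to a chat-style LLM and asks for a plain-language summary for the patient. It uses the OpenAI Python SDK; the model name, prompt wording, and the example report are illustrative assumptions, not a validated clinical workflow.

```python
# Minimal sketch of LLM-based report rewriting for patients.
# Assumptions: OpenAI Python SDK v1.x installed, API key in the environment,
# and a hypothetical operative note; any capable chat model could be substituted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

operative_report = (
    "Right upper lobectomy performed via uniportal VATS. "
    "Intraoperative frozen section: adenocarcinoma, margins negative. "
    "Estimated blood loss 50 mL; one 24F chest drain placed."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: illustrative model choice
    messages=[
        {"role": "system",
         "content": "Rewrite surgical reports in plain language for patients."},
        {"role": "user", "content": operative_report},
    ],
)
print(response.choices[0].message.content)
```

In practice, any such output would need clinical review before reaching a patient, precisely because of the hallucination risk noted above.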
Computer vision and multi-modal models. Multi-modal models amalgamate diverse data types, including images, text, and audio, to comprehend the patient and their medical condition more fully. Their applications range from analyzing radiological and biomedical images to identify pathologies more precisely, to developing patient monitoring systems capable of early detection of clinical deterioration, and to creating virtual assistants able to interact with patients naturally and intuitively. Since much biomedical data is not textual but takes the form of images, scans, or temporal sequences, several studies have proposed computer vision models trained on medical data. Some have focused on use cases such as tumor detection or the identification of other pathologies from X-ray scans, ultrasounds, or magnetic resonance imaging [12,13]. Traditional computer vision algorithms relied on extracting a set of low- or high-level features from images or videos (e.g., points of interest, color intensity, edges), and then using these features to train a supervised learning model such as a support vector machine (SVM) [14], a random forest [15], or another model, for object recognition or image classification. Other work has concentrated on developing models capable of processing data in various formats, including multi-modal ones [16]. The introduction of deep learning methods, particularly deep convolutional neural networks (CNNs), has driven significant strides in computer vision in recent years [17], far surpassing human operators and all previous technologies in image recognition and analysis. In this framework, radiomics is an emerging and rapidly developing field that integrates knowledge from radiology, oncology, and computer science, emphasizing the integration of medicine and engineering [19]. Increasing evidence shows that radiomics can be used for the quantitative characterization of tumors for tasks such as disease characterization or outcome prediction, an important research direction in medical applications [18,20]. Prior to CNNs, improvements in image classification, segmentation, and object detection were marginal and incremental; CNNs revolutionized the field. Furthermore, the advent of the Transformer architecture [21] and its application to vision tasks [22] has enabled deep learning models to take a further step toward accurate performance [23,24]. The extension of these models to jointly process multiple input types has made them applicable to several real-world scenarios, and thereby increasingly appealing for the medical field. T. Tu and colleagues at Google DeepMind recently introduced a multi-modal version of Med-PaLM, known as Med-PaLM M [25]; it can process text and images simultaneously and significantly improves diagnostic production and clinical case recognition, achieving performance comparable to that of human experts.
First wish. Delving specifically into the topics of interest in thoracic surgery, the foremost desire is to obtain a precise, reliable preoperative diagnosis in the shortest time possible. Early detection of lung malignancies is crucial for improving survival rates; therefore, the management of lung nodules has been significantly influenced by the implementation of AI based on computer vision and multi-modal models. The high variability among radiologists in detecting lung nodules and the elevated false-positive rate reported in the National Lung Screening Trial [26] underscore the need for tools to assist radiologists in nodule identification, measurement, risk stratification, and monitoring. Computer-Aided Diagnosis (CAD) techniques were developed in the 1970s to improve the efficacy of chest radiography for nodule detection [27]. The use of computer algorithms in CAD systems can aid in the diagnosis of a range of medical conditions: computer-aided detection systems help detect abnormalities or lesions in medical images, while computer-aided diagnosis systems assist in their interpretation and diagnosis. Such systems have the potential to enhance the precision and speed of medical diagnosis, particularly where human interpretation may be constrained or susceptible to error. Among supervised machine learning algorithms, SVM and Random Forest are widely used in the diagnosis of lung diseases, with SVM demonstrating success in increasing diagnostic efficiency [28] and Random Forest proving effective in the classification of non-small cell lung cancer [29].
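The classical pipeline sketched above, hand-crafted features feeding a supervised classifier such as an SVM, can be illustrated with a toy example. A few intensity and edge statistics stand in for real radiomic features, and the classifier separates synthetic "nodule" from "non-nodule" patches. Everything here (the features, the data, the labels) is an invented placeholder showing the shape of the workflow, not a diagnostic model.

```python
# Toy sketch of the classical feature-extraction + SVM pipeline.
# Data and labels are synthetic; features are crude stand-ins for radiomics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def extract_features(image: np.ndarray) -> np.ndarray:
    """Low-level features: mean intensity, intensity spread, edge density."""
    gx, gy = np.gradient(image.astype(float))
    edge_density = float((np.hypot(gx, gy) > 0.5).mean())
    return np.array([image.mean(), image.std(), edge_density])

# Synthetic stand-ins for "nodule" vs "no nodule" patches.
images = rng.random((200, 32, 32))
images[:100] += 0.4 * (rng.random((100, 32, 32)) > 0.9)  # brighter speckles
labels = np.array([1] * 100 + [0] * 100)

X = np.array([extract_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```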
Although CAD has enhanced detection and efficiency, its adoption in clinical practice is impeded by its high false-positive rate [30]. Recently, deep learning techniques have garnered considerable attention owing to their capacity to enhance diagnostic accuracy. CNNs, introduced by Krizhevsky et al. [31], have since demonstrated superiority over CAD in detecting lung nodules. CNNs can learn features from images and have been shown to reduce the false-positive rate, potentially avoiding unnecessary follow-ups [32,33]. Unlike CAD, the innovation of CNNs lies in their ability to learn from verified data and autonomously discover previously unknown features, thereby maximizing classification performance with limited direct supervision [34]. Consequently, this architecture of feature extraction through convolutional layers has proven applicable to both image classification and segmentation.
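For contrast with the hand-crafted pipeline, the PyTorch sketch below shows the convolutional idea in miniature: stacked convolutional layers learn features directly from pixel data, and a linear head classifies a patch as nodule versus non-nodule. Layer sizes and input shapes are illustrative assumptions, not those of any published nodule detector.

```python
# Minimal CNN sketch: learned feature extraction + classification head.
# Shapes and layer sizes are illustrative; the input is a dummy tensor.
import torch
import torch.nn as nn

class NoduleCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(               # learned feature extraction
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = NoduleCNN()
dummy_ct_patch = torch.randn(4, 1, 32, 32)           # batch of 4 single-channel patches
logits = model(dummy_ct_patch)
print(logits.shape)                                  # torch.Size([4, 2])
```

The key design difference from the SVM sketch is that no `extract_features` step is written by hand; the convolutional weights take on that role during training.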
Second wish. The second of the thoracic surgeon's three wishes revolves around the utilization of AI in forecasting and mitigating perioperative risk [35]. In recent decades, Artificial Intelligence has garnered substantial attention in the realm of preoperative risk assessment, resulting in the proliferation of numerous machine learning algorithms aimed at predicting the likelihood of major complications and mortality following surgery [36]. Within this context, AI-driven technologies have shown promising outcomes, providing valuable assistance in the decision-making process and in formulating comprehensive risk assessments, even in cases of major lung resection. Given the elevated morbidity rates associated with such procedures, meticulous evaluation of candidates to ascertain their individual risk and prognosis is of critical importance [37]. Various algorithms have been put forward. Some, employing diverse models of probabilistic neural networks, have successfully estimated postoperative prognosis following lung resection [38] and cardio-respiratory morbidity subsequent to lung resection for non-small cell lung cancer (NSCLC) [39]. Others have achieved encouraging results by devising a model that delineates the risk of cardiac and pulmonary complications during the postoperative phase in patients undergoing anatomical lung resection through an innovative ML approach known as XGBoost [40]. Lastly, additional researchers have managed to predict the onset of respiratory failure after lobectomy by identifying risk factors and introducing two machine learning-based techniques for predicting respiratory failure, serving both quality review and clinical decision-making purposes [37]. ML algorithms hold promise in optimizing risk assessment for individual patients, enhancing the efficacy of preoperative evaluations, recommending suitable therapeutic strategies, and facilitating communication with patients and their families.
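To make the gradient-boosting approach cited above tangible, the sketch below trains an XGBoost classifier on synthetic tabular data to output a complication probability. The three features (age, a hypothetical ppoFEV1 value, smoking history) and the outcome-generating rule are invented placeholders; real models such as the one in [40] use curated clinical variables and validated outcomes.

```python
# Schematic sketch of a gradient-boosting perioperative risk model.
# All features and outcomes are synthetic placeholders, not clinical data.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(65, 10, n),     # age (years) -- hypothetical feature
    rng.normal(80, 15, n),     # ppoFEV1 (% predicted) -- hypothetical feature
    rng.integers(0, 2, n),     # smoking history (0/1) -- hypothetical feature
])
# Synthetic outcome loosely tied to the features, for illustration only.
risk = 0.03 * (X[:, 0] - 65) - 0.02 * (X[:, 1] - 80) + 0.5 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("predicted complication probability:", model.predict_proba(X_te[:1])[0, 1])
```

A per-patient probability of this kind is what would feed the preoperative discussion described above, alongside, never in place of, clinical judgment.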
Third wish. The third and final wish concerns the integration of AI-based technologies in the operating room. While their current role remains limited, there is optimism that they will increasingly contribute to enhancing surgical precision and safety, facilitating intraoperative decision-making, and predicting postoperative outcomes in the near future.
One of the most promising applications of AI is in robotic surgery, particularly in thoracic surgery and the treatment of lung cancer, where it has demonstrated reductions in hospital stay duration and postoperative complications. However, despite its association with AI, robotic-assisted surgery does not yet utilize AI-based technology and still requires constant supervision by human surgeons. Despite initial skepticism among surgeons regarding the feasibility of fully automated surgery, advancements in robotic surgery have piqued interest, leading to the exploration of the potential for autonomous actions in procedures like interventional radiology, endoscopy, and surgery [41].
Robotic systems offer enhanced visualization in three dimensions (3D) and magnification, along with effector instruments capable of wide-ranging motion, thereby augmenting surgical dexterity during procedures. Nonetheless, the absence of tactile feedback poses a challenge to surgical outcomes. Research has begun to investigate the possibility of robots learning to manage tension on sutures and anastomoses, or providing feedback on tissue compression through auditory cues. Furthermore, in striving for greater autonomy in robots, perhaps the focus should shift from haptics as perceived by humans to haptics as perceived by robots/computers [42].
The complexity of translating machine learning into effective and safe actions in humans is evident. AI necessitates the storage of extensive video recordings of surgical procedures, requiring meticulous data collection, preparation, and annotation, which must become integral to future medical practice. This underscores the importance of interdisciplinary collaboration between the AI and medical communities [43]. Moreover, while AI models have demonstrated comparable or superior performance to humans, their complexity makes it difficult to interpret and understand how they reach their decisions, which has led to the characterization of AI models as black boxes [44]. Another major concern is the generalizability of these models to all patients, which could be addressed by developing continuous learning systems that utilize cloud techniques to allow real-time delivery of clinical records and continuous updating of the underlying training models. This would ensure machine-independent reproducibility of the models [45].
Finally, two other fields are involved in the integration of AI-based technologies in the operating room: education and improvement of management processes. The application of AI holds promise for advancing precision surgery and surgical training, with ML algorithms proposed for accurate assessment of surgical skills, providing feedback during learning curves and periodic evaluations [46,47]. Despite the clear benefits, robotic surgery is associated with prolonged procedural times and substantial costs, necessitating precise scheduling of surgical procedures. Improved and optimized surgical procedure planning, particularly in robotic surgery, can be achieved through AI algorithms capable of accurately planning each procedure, predicting case duration, and identifying surgeries at high risk of cancellation. Ultimately, the use of ML models could significantly enhance operating room efficiency, leading to cost savings and optimal resource utilization, especially in the face of challenges to healthcare system sustainability posed by the high costs of new technologies.
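As a sketch of the scheduling use case just described, the snippet below regresses case duration from a few coarse descriptors with a random-forest model. All variables and their coding are hypothetical; the point is only the shape of the approach, not a deployable operating-room planner.

```python
# Hedged sketch of ML-based case-duration prediction for OR scheduling.
# Every variable below is an invented placeholder with a made-up coding.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 300
procedure_complexity = rng.integers(1, 4, n)   # 1 = wedge ... 3 = lobectomy (hypothetical coding)
surgeon_experience = rng.integers(1, 21, n)    # years (hypothetical)
bmi = rng.normal(26, 4, n)                     # patient BMI (hypothetical)

X = np.column_stack([procedure_complexity, surgeon_experience, bmi])
# Synthetic durations loosely tied to complexity and experience, plus noise.
duration_min = 60 * procedure_complexity - 1.5 * surgeon_experience + rng.normal(0, 15, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, duration_min)
print("predicted duration (min):", model.predict([[3, 10, 27.0]])[0])
```

The same regression template extends naturally to the cancellation-risk prediction mentioned above by swapping the target variable and model for a classifier.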
Healthcare settings are witnessing an increasing integration of technologies. Within perioperative medicine, the implementation of machine learning algorithms holds the potential to drive a multidisciplinary approach, particularly in preoperative assessment, risk stratification, and postoperative outcomes.
Conclusions. Despite its potential, the integration of AI in medicine presents challenges primarily regarding data scarcity, interpretability, and the risk of bias. Only through the application of AI techniques will researchers and physicians be able to manage the complexity of the quantitative, big data-related features at hand. For this reason, oncologists, radiologists and surgeons should continue to integrate machine learning tools into the clinical care continuum of NSCLC and become part of the digital revolution that has already taken place in business and technology sectors. However, it is imperative to emphasize that research in this field is still in its early stages, and numerous challenges must be overcome before AI can be safely and effectively implemented in real-world clinical settings with autonomy. Alongside the technical constraints highlighted, ethical and legal concerns must also be considered [48]. In particular, while AI in medicine holds much promise, the rights of citizens must be carefully considered. Great attention must be paid to the protection of privacy, a need that has become increasingly apparent in the healthcare sector, where advancements must navigate the safeguarding of personal and highly sensitive information. Consequently, some scientific societies have developed specific guidelines on this issue [49], respecting the delicate balance between progress and privacy [50].
The magical genie of AI promises to revolutionize medicine as we know it today. Its powers are still hidden, and they may be able to fulfill dreams that were previously unattainable. However, there are many challenges, risks, and pitfalls on this journey that must be addressed before AI can be safely and effectively used in real and autonomous clinical settings.

Author Contributions

All authors have contributed substantially to the work reported and have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rajpurkar, P.; Chen, E.; Banerjee, O.; Topol, E.J. AI in health and medicine. Nature Medicine 2022, 28, 31–38.
  2. Floridi, L.; Chiriatti, M. GPT-3: its nature, scope, limits, and consequences. Minds and Machines 2020, 30, 681–694.
  3. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; Lample, G. LLaMA: open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  4. Thirunavukarasu, A.J.; Ting, D.S.J.; Elangovan, K.; Gutierrez, L.; Tan, T.F.; Ting, D.S.W. Large language models in medicine. Nature Medicine 2023, 29, 1930–1940.
  5. Sallam, M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare 2023, 11, 887.
  6. Harrer, S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine 2023, 90, 104512.
  7. Clusmann, J.; Kolbinger, F.R.; Muti, H.S.; Carrero, Z.I.; Eckardt, J.N.; Laleh, N.G.; Löffler, C.M.L.; Schwarzkopf, S.C.; Unger, M.; Veldhuizen, G.P.; Wagner, S.J.; Kather, J.N. The future landscape of large language models in medicine. Commun Med (Lond) 2023, 3, 141.
  8. Jin, Q.; Dhingra, B.; Liu, Z.; Cohen, W.W.; Lu, X. PubMedQA: a dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019.
  9. Pal, A.; Kumar Umapathi, L.; Sankarasubbu, M. MedMCQA: a large-scale multi-subject multi-choice dataset for medical domain question answering. Proceedings of the Conference on Health, Inference, and Learning, PMLR 2022, 174, 248–260.
  10. Jin, D.; Pan, E.; Oufattole, N.; Weng, W.-H.; Fang, H.; Szolovits, P. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Appl. Sci. 2021, 11, 6421.
  11. Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; Payne, P.; Seneviratne, M.; Gamble, P.; Kelly, C.; Babiker, A.; Schärli, N.; Chowdhery, A.; Mansfield, P.; Demner-Fushman, D.; Agüera y Arcas, B.; Webster, D.; Corrado, G.S.; Matias, Y.; Chou, K.; Gottweis, J.; Tomasev, N.; Liu, Y.; Rajkomar, A.; Barral, J.; Semturs, C.; Karthikesalingam, A.; Natarajan, V. Large language models encode clinical knowledge. Nature 2023, 620, 172–180.
  12. Rajpurkar, P.; Lungren, M.P. The current and future state of AI interpretation of medical images. New England Journal of Medicine 2023, 388, 1981–1990.
  13. Najjar, R. Redefining radiology: a review of artificial intelligence integration in medical imaging. Diagnostics 2023, 13, 2760.
  14. Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567.
  15. Elyan, E.; Gaber, M.M. A genetic algorithm approach to optimising random forests applied to class engineered data. Inf. Sci. 2017, 384, 220–234.
  16. Acosta, J.N.; Falcone, G.J.; Rajpurkar, P.; Topol, E.J. Multimodal biomedical AI. Nat Med 2022, 28, 1773–1784.
  17. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
  18. Liu, Z.; Wang, S.; Dong, D.; Wei, J.; Fang, C.; Zhou, X.; Sun, K.; Li, L.; Li, B.; Wang, M.; Tian, J. The applications of radiomics in precision diagnosis and treatment of oncology: opportunities and challenges. Theranostics 2019, 9, 1303–1322.
  19. Lambin, P.; Leijenaar, R.T.H.; Deist, T.M.; Peerlings, J.; de Jong, E.E.C.; van Timmeren, J.; Sanduleanu, S.; Larue, R.T.H.M.; Even, A.J.G.; Jochems, A.; van Wijk, Y.; Woodruff, H.; van Soest, J.; Lustberg, T.; Roelofs, E.; van Elmpt, W.; Dekker, A.; Mottaghy, F.M.; Wildberger, J.E.; Walsh, S. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017, 14, 749–762.
  20. Qian, Z.; Li, Y.; Wang, Y.; Li, L.; Li, R.; Wang, K.; Li, S.; Tang, K.; Zhang, C.; Fan, X.; Chen, B.; Li, W. Differentiation of glioblastoma from solitary brain metastases using radiomic machine-learning classifiers. Cancer Lett 2019, 451, 128–135.
  21. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems 2017, 30.
  22. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; Houlsby, N. An image is worth 16x16 words: transformers for image recognition at scale. International Conference on Learning Representations 2021. arXiv:2010.11929.
  23. Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: a full-scale reloading. arXiv preprint arXiv:2301.05586, 2023.
  24. Cheng, B.; Misra, I.; Schwing, A.G.; Kirillov, A.; Girdhar, R. Masked-attention mask transformer for universal image segmentation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022.
  25. Tu, T.; Azizi, S.; Driess, D.; Schaekermann, M.; Amin, M.; Chang, P.; Carroll, A.; Lau, C.; Tanno, R.; Ktena, I. Towards generalist biomedical AI. NEJM AI 2024, 1.
  26. Abraham, J. Reduced lung cancer mortality with low-dose computed tomographic screening. Community Oncology 2011, 8, 441–444.
  27. Kakeda, S.; Moriya, J.; Sato, H.; Aoki, T.; Watanabe, H.; Nakata, H.; Oda, N.; Katsuragawa, S.; Yamamoto, K.; Doi, K. Improved detection of lung nodules on chest radiographs using a commercial computer-aided diagnosis system. AJR Am J Roentgenol 2004, 182, 505–510.
  28. Naqi, S.M.; Sharif, M.; Yasmin, M. Multistage segmentation model and SVM-ensemble for precise lung nodule detection. Int J Comput Assist Radiol Surg 2018, 13, 1083–1095.
  29. Choi, W.; Oh, J.H.; Riyahi, S.; et al. Radiomics analysis of pulmonary nodules in low-dose CT for early detection of lung cancer. Med Phys 2018, 45, 1537–1549.
  30. Roos, J.E.; Paik, D.; Olsen, D.; et al. Computer-aided detection (CAD) of lung nodules in CT scans: radiologist performance and reading time with incremental CAD assistance. Eur Radiol 2010, 20, 549–557.
  31. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Communications of the ACM 2017, 60, 84–90.
  32. Tran, G.S.; Nghiem, T.P.; Nguyen, V.T.; Luong, C.M.; Burie, J.C. Improving accuracy of lung nodule classification using deep learning with focal loss. J Healthc Eng 2019, 2019, 5156416.
  33. Nibali, A.; He, Z.; Wollersheim, D. Pulmonary nodule classification with deep residual networks. Int J Comput Assist Radiol Surg 2017, 12, 1799–1808.
  34. Murphy, A.; Skalski, M.; Gaillard, F. The utilisation of convolutional neural networks in detecting pulmonary nodules: a review. Br J Radiol 2018, 91, 20180028.
  35. Bellini, V.; Valente, M.; Del Rio, P.; Bignami, E. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis 2021, 13, 6963–6975.
  36. Bonde, A.; Varadarajan, K.M.; Bonde, N.; Troelsen, A.; Muratoglu, O.K.; Malchau, H.; Yang, A.D.; Alam, H.; Sillesen, M. Assessing the utility of deep neural networks in predicting postoperative surgical complications: a retrospective study. Lancet Digit Health 2021, 3, e471–e485.
  37. Bolourani, S.; Wang, P.; Patel, V.M.; Manetta, F.; Lee, P.C. Predicting respiratory failure after pulmonary lobectomy using machine learning techniques. Surgery 2020, 168, 743–752.
  38. Esteva, H.; Marchevsky, A.; Núñez, T.; Luna, C.; Esteva, M. Neural networks as a prognostic tool of surgical risk in lung resections. Ann Thorac Surg 2002, 73, 1576–1581.
  39. Santos-García, G.; Varela, G.; Novoa, N.; et al. Prediction of postoperative morbidity after lung resection using an artificial neural network ensemble. Artif Intell Med 2004, 30, 61–69.
  40. Salati, M.; Migliorelli, L.; Moccia, S.; Andolfi, M.; Roncon, A.; Guiducci, G.M.; Xiumè, F.; Tiberi, M.; Frontoni, E.; Refai, M. A machine learning approach for postoperative outcome prediction: surgical data science application in a thoracic surgery setting. World J Surg 2021, 45, 1585–1594.
  41. Kassahun, Y.; Yu, B.; Tibebu, A.T.; Stoyanov, D.; Giannarou, S.; Metzen, J.H.; Vander Poorten, E. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg 2016, 11, 553–568.
  42. Gumbs, A.A.; Frigerio, I.; Spolverato, G.; Croner, R.; Illanes, A.; Chouillard, E.; Elyan, E. Artificial intelligence surgery: how do we get to autonomous actions in surgery? Sensors (Basel) 2021, 21, 5526.
  43. Marcus, H.J.; Payne, C.J.; Hughes-Hallett, A.; Gras, G.; Leibrandt, K.; Nandi, D.; Yang, G.Z. Making the leap: the translation of innovative surgical devices from the laboratory to the operating room. Ann Surg 2016, 263, 1077–1078.
  44. Domingues, I.; Pereira, G.; Martins, P.; Duarte, H.; Santos, J.; Abreu, P.H. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2020, 53, 4093–4160.
  45. Halder, A.; Chatterjee, S.; Dey, D. Adaptive morphology aided 2-pathway convolutional neural network for lung nodule classification. Biomed Signal Process Control 2022, 72, 103347.
  46. Mellia, J.A.; Basta, M.N.; Toyoda, Y.; Othman, S.; Elfanagely, O.; Morris, M.P.; Torre-Healy, L.; Ungar, L.H.; Fischer, J.P. Natural language processing in surgery: a systematic review and meta-analysis. Ann Surg 2021, 273, 900–908.
  47. Stahl, C.C.; Jung, S.A.; Rosser, A.A.; Kraut, A.S.; Schnapp, B.H.; Westergaard, M.; Hamedani, A.G.; Minter, R.M.; Greenberg, J.A. Natural language processing and entrustable professional activity text feedback in surgery: a machine learning model of resident autonomy. Am J Surg 2021, 221, 369–375.
  48. World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. 2021. https://iris.who.int/bitstream/handle/10665/341996/9789240029200-eng.pdf?sequence=1.
  49. Wallace, N.; Castro, D. The impact of the EU's new data protection regulation on AI. 2018. https://datainnovation.org/2018/03/the-impact-of-the-eus-new-data-protection-regulation-on-ai/.
  50. European Union AI Act, 2024. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.