Preprint
Article

This version is not peer-reviewed.

Artificial Intelligence in Cancer Diagnosis and Oncological Surgery

Submitted: 12 January 2026
Posted: 14 January 2026


Abstract
Artificial intelligence (AI), which is becoming increasingly widespread, studied, and applied in medicine, offers multiple advantages, such as improved diagnostic accuracy, a reduced workload for doctors, and lower hospitalization costs. The use of AI and the study of the specialty literature raise ethical and legal questions for which there is no unanimous answer yet. Medical liability (malpractice) for AI-related errors and the resulting harm to patients prompts legal reflection on this topic. AI diagnostic algorithms raise questions about the risks of using AI in the diagnosis and treatment of cancer (especially in rare cases) and about the information provided to the patient, all of which have moral and legal implications, as well as an impact on the empathic doctor-patient relationship. Indeed, the use of AI in the medical field has triggered a revolution in the doctor-patient relationship, but it has possible medico-legal consequences as well. The current legal framework regulating medical liability when AI is applied is inadequate and requires urgent measures, because there is no specific, uniform legislation regulating the liability of the various parties involved in applying AI, or that of the end users. Consequently, greater attention should be paid to the risks of applying AI, to the need to regulate its safe use, and to maintaining patient safety standards by continuously adapting and updating the system.

Introduction

In contemporary medical practice, the application of artificial intelligence (AI) algorithms is a new development that assists doctors both during diagnosis and during treatment, and it is increasingly used in hospitals. AI enables healthcare professionals to obtain more accurate and more precise diagnoses, as well as to provide more efficient and less invasive medical and surgical treatments. At present, AI is used successfully for the imaging diagnosis of tumors, as well as in oncology, for its potential to recognize complex patterns which provide quantitative and qualitative assessments, improving the prediction of the diagnosis and treatment [1]. Through risk-stratification models, AI may optimize resource allocation, proving useful in patient management, research, and clinical trials [1]. However, although AI systems are increasingly used in medicine, the ethical implications and the legal regulations regarding their use remain under debate. The human-machine interaction needs to be analyzed, particularly when AI systems make autonomous choices, which raises legal liability issues when patients are harmed. The new technological approaches create a new reality which is unlikely to fit within the limits of the current legislation. In this context, this paper aims both to address the critical aspects of using AI algorithms in the diagnosis and treatment of cancer and to discuss the legal liability associated with the use of the method and the potential solutions [1].
The following questions arise: Can AI revolutionize medicine? Information systems are already independently analyzing data from the online healthcare system; they can learn from them, based on the algorithms and parameters provided by doctors, and offer treatment recommendations. What studies are necessary to apply artificial intelligence in practice, and what opportunities arise for the patients? Some parameters are constantly generated by the healthcare system: blood pressure values, laboratory test results, oxygen saturation, and results from ultrasound, CT, MRI, or PET-CT examinations. The information is evaluated and used for diagnosis and archiving [2]. Could information systems learn independently from the data collected from the patients and even proceed to develop the diagnosis and treatment system? We believe that the development and application of AI in medicine will be one of the important topics in future healthcare research [3]. Where are we and what are the prospects? Can a computer learn in a controlled manner? AI represents the capacity of computer programs to learn, and there are two possibilities for learning. The first is "supervised learning", in which a large number of similar things are presented to the computer, for example, a large number of imaging investigations (computed tomography - CT, mammography, etc.), thereby teaching it which are normal and which are suspected of neoplasm, based on the images provided and on their descriptions. If the computer receives sufficient information, it will be able to distinguish between the images provided. Thus, it can not only make the work of doctors easier, but it can also improve the quality of the diagnosis and of the personalized treatment. As far as computer-assisted detection software is concerned, it can increase the accuracy and speed of the diagnoses made by radiologists, but it also acts as a "second set of eyes", and the effectiveness of the algorithms increases when they are subject to human oversight [2]. AI can thus reduce the time needed for diagnosis and increase the capacity to examine a greater number of patients. For breast cancer, for example, AI can help with the fast diagnosis of the disease [1]. AI technologies should be considered complementary to the diagnosis, enhancing the performance of doctors, reducing the risk of human error, and serving as clinical decision support. Furthermore, AI can filter simple cases and allow specialists to focus on the more difficult ones. The advantages could also include its use in cancer screening (breast, cervical, colorectal, etc.) and the early identification of individuals at increased risk of developing the disease [1,2,3,4,5,6]. These applications are tools which help with the diagnosis, but they do not replace the clinicians. In addition to clinical diagnosis, AI may assist surgeons intraoperatively, leading to more precise, minimally invasive surgical techniques [1]. Furthermore, when AI begins to function autonomously (machine learning - ML), without specialists, based on a self-learning algorithm ("the second stage of learning"), it may make a diagnosis without the necessity of the clinician's approval. In that case, should the system not be held responsible for incorrect results, and should the AI programmers or the software-selling company not assume at least partial medical liability?
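To make the "supervised learning" idea above concrete, the following minimal sketch (in Python, using scikit-learn on entirely synthetic stand-in features rather than real images) shows a classifier being trained on labeled examples and then acting as a "second set of eyes" on held-out cases; the feature names and thresholds are invented for illustration only.

```python
# Minimal sketch of supervised learning as described above: a classifier is
# shown many labeled examples (synthetic features standing in for measurements
# extracted from CT or mammography images) and learns to separate "normal"
# from "suspected neoplasm". All data are synthetic; no real imaging pipeline
# is implied.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-lesion features: size (mm), mean density, margin irregularity.
X = rng.normal(loc=[10.0, 40.0, 0.3], scale=[4.0, 10.0, 0.15], size=(n, 3))
# Synthetic ground truth: larger, more irregular lesions are labeled suspicious.
y = ((X[:, 0] > 12) & (X[:, 2] > 0.35)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The trained model acts as a "second set of eyes": it flags suspicious cases,
# but a radiologist would review every positive prediction before a diagnosis.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The point of the sketch is the workflow, not the model: given enough labeled examples, the system distinguishes the two classes on new cases, while the clinician retains oversight of its output.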
What is the impact on the doctor-patient relationship? Might the use of AI lead to a decreased ability to empathize with the patients? Might the application of AI lead to less interest in the patient's history? Is the clinical examination still useful? Might the quality of medical care decline? This also raises questions concerning legal liability in the case of diagnostic errors due to the absence of the physical examination. Will AI and machine learning (ML) ever manage to replace "a skilled, empathic clinician at the bedside of the patient"? The answer, now and in the future, would be yes, because AI can do the same thing repeatedly, i.e., see patients, make a diagnosis, and propose treatment options, without becoming tired and without losing patience, for a large number of patients. Does this mean that doctors will lose some of their jobs? Will young doctors still want to specialize in the fields where AI can make the diagnosis on its own (e.g., radiology)? This also has a flip side: complete trust in AI. And in that case, how is medical fault established? What about malpractice?
Doctors will have to inform the patients about the benefits, risks, and limitations of using AI and ask for their consent. Can the data provided be used in malpractice lawsuits? What and how much of the medical information may be disclosed to the patients? The critical issues related to the use of AI in medicine are confidentiality and cybersecurity: the effectiveness of the software relies on the use of large amounts of data, some of which may have an impact on the private lives of the patients [1].
Liability for AI errors is an important issue, and even though numerous papers have been published, there is currently no unanimous and definitive answer to it. AI is a new technology, and the legal sanctions applicable to it are not yet well developed. In a malpractice lawsuit, the question is whether there was a deviation from the standard of care for the patient. In this context, AI may fail both in the application of its programming algorithms and in guiding doctors [6]. The following dilemma arises: if the manufacturer is liable for a medical device in case of malfunction and harm to the patient, then, when AI is used, is the doctor who approves the AI results liable for them? Doctors must inform the patients about the limits of using the system, and the responsibility of the clinicians should be narrowed accordingly. In practice, if a robotic surgical procedure is performed autonomously, the injuries caused by software configuration defects would likely be the responsibility of the developer. At the same time, programmers should not be held fully liable, since AI is not capable of preventing injuries in all cases. The legislation will have to be based on the degree of autonomy of the software, and when AI is used only as decision support, the doctor who approves it bears the risk of liability, even if it is "indirect" [1,6].
Holding manufacturers responsible for defects in the AI products offered to doctors could be an obstacle to the development of the initial algorithm, even though the algorithm may improve over time. Another question then arises: Can the patients sue the medical device manufacturer, the doctor who recommended the use of AI, and the hospital directly for malpractice? Is the joint liability of the three actors before the patient preferable to individual liability? This is why the information provided to the patients about the risks, benefits, and limitations of AI use is of paramount importance: it should ensure fully informed and conscious choices and propose possible alternative paths in case of opposition to new technologies. All of this constitutes informed consent, which is the legal foundation of the doctor's duty to inform the patient. The topic of consent is essential given the application of personalized treatment, including the use of AI in medicine, which must ensure confidentiality, avoid discriminating against populations based on ethnicity or gender, and guarantee equal, fair access to resource allocation.
It is obvious that the accelerated application of AI in medicine will lead to claims and complaints regarding medical negligence. For this reason, the legal system should provide clear solutions regarding which entity bears liability, and the medical malpractice insurance system should state the terms of coverage in cases where healthcare decisions are partially or fully made by AI.
A lot of personal information circulates in the media, but when discussing access to medical data for AI research and development, numerous obstacles must be overcome. Personal data protection (under the GDPR - General Data Protection Regulation) is extremely important; however, if only this is considered, progress in AI will not be possible. Practically speaking, it is not the names and personal data that matter but the medical information obtained from the patients, and it is difficult to collect medical data for research if it is not available. Nevertheless, in many countries around the world, politicians and legal experts are becoming involved in solving this problem, as there is a genuine will to advance AI [3,7,8,9].
From the legal point of view, the following questions arise: What are the limits of AI use? Is there a legal regulation regarding its use in the diagnosis and treatment of cancer? In this respect, the European Parliament (EP) has adopted the Artificial Intelligence Act, which guarantees safety and respect for fundamental human rights and stimulates innovation. The law aims "to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI systems, to encourage innovation and to ensure a leading role in the field for Europe" [7,8]. The regulation imposes obligations concerning the use of AI according to its potential risks and anticipated impact. The law responds to the proposals for "a safe and trustworthy society, including by fighting against disinformation and ensuring that people ultimately have control, ensuring human oversight, as well as reliable and responsible use of AI". The framework convention will be open for international ratification, enabling countries around the world to adhere to it and comply with the established ethical and legal norms, in line with the vision of the Resolution of the General Assembly of the UN of 21 March 2024 [7,8]. The Romanian Government has also adhered to this legislation by publishing Government Decision no. 832/2024, in the Official Gazette of 25.07.2024, approving "The National Strategy in the Field of Artificial Intelligence" [9].
One of the critical points of legal regulation is the difficulty of adapting to new realities. We consider a regulation to be well developed to the extent that it manages to capture events that differ from the typical cases it was written for and to anticipate future events that can be subsumed under it; this can be achieved only if it includes broad definitions together with clear, precise, and comprehensive recommendations.

Discussion

AI in the Diagnosis and Treatment of Cancer (Clinical Decision Support Systems - CDS) - Huge Potential, Huge Risks

Multiple political and legislative options could ensure a more balanced liability system, including adjustments to the standard of care, medical insurance, compensation, and legal regulations. In practice, with an adequate legislative framework, doctors could facilitate the safe and speedy implementation of AI, as well as its autonomous learning (machine learning - ML), ensuring modern and efficient care for the patients [10]. In the USA, the FDA (Food and Drug Administration) approved the use of AI in medicine in 2017 (X-rays, biopsies - prostate cancer, etc.) [11], but the question remains as to who is liable for the incidents and inconveniences caused to the patients. The solution will depend on how much innovation in AI/ML is appropriate, and its balanced introduction into the healthcare system will also have to take into account compensation for the injured patients. The level of liability directly influences the level of development and implementation of the clinical algorithms. Increasing liability may discourage doctors, healthcare systems, and AI designers, hindering or delaying the development and implementation of these algorithms. Likewise, research in this field is discouraged by increasing costs [10]. The AI/ML systems are improving and advancing at a rapid pace, whereas the legal system moves more slowly and has to adapt and balance progress with liability to promote innovation, safety, and the adoption of these new methods of diagnosis and treatment [10].
AI processes and uses various algorithms to find complex correlations in massive data sets (analytics). Machine learning (ML) systems use algorithms to learn from substantial quantities of data and make predictions without explicit programming, training themselves by correcting minor algorithmic errors and thereby improving the predictive accuracy of the model. From an ethical point of view, the question arises whether the application of AI/ML still meets the medical standards of the Hippocratic Oath [12]. Nonetheless, science is advancing, and systems such as the Artificial Intelligent System (AIS) from IBM Watson - a programme for oncology (IBM Watson for Oncology SaaS - 5725-W51) - have the mission of supporting oncologists, thus directly influencing clinical decision-making. With over 30 billion images available for analysis, AIS could evaluate the information provided for the diagnosis of neoplastic disease and recommend personalized treatment for the patient. The most recent systems have a broad coverage which includes breast, lung, colorectal, gastric, and cervical cancer [13,14,15]. Zou et al. (2020) showed that the recommendations for the diagnosis and treatment of cervical cancer using WFO (Watson for Oncology) were consistent with real-world clinical practice in 72.8% of cases. Factors such as the location of the cancer, as well as individual factors such as the type of chemotherapy used, the surgical procedure chosen, and the adjuvant/neoadjuvant therapy, limit its wider application. The conclusion was that WFO cannot replace oncologists (the tumor board) in supporting the diagnosis and recommending personalized treatment for patients with cervical cancer, but it might be an efficient tool for decision-making and therapy standardization [14]. Regarding the concordance between the diagnosis and the treatment established using WFO, it was highest for breast cancer (88.99%) and lowest for gastric cancer (57.94%), and the concordance rate was higher in stages I-III than in stage IV [15]. It should be noted that while it takes a tumor board between 12 and 20 minutes to make a diagnosis and recommend personalized treatment, the WFO system does the same in 40 seconds and can offer consultations to an unlimited number of patients [16]. Some authors consider that in the near future AI should be included in the diagnosis and treatment guidelines (renal cancer, prostate cancer, etc.), thus revolutionizing decision-making [17]. It is estimated that by 2030 AI will influence 14% of medical practice, intervening in the analysis of electronic medical records, processing laboratory data, and analysing medical imaging, providing diagnoses and therapeutic solutions, all done quickly and with a high cost/efficiency ratio. The data obtained may be used both in medical research and in the development of new drugs (The Nobel Prize in Chemistry 2024 - David Baker, Demis Hassabis, John Jumper) [18,19,20].

AI and Robotic Surgery

As far as surgical interventions are concerned, according to predictions, robots equipped with artificial intelligence could replace doctors in treating patients in the near future [18]. The use of AI may be divided into two branches: a component with a material application, such as surgical robots, and a virtual component - the patient's electronic file - with the establishment of the diagnosis and of the treatment (clinical decision support systems, CDS) [18,19]. The advantages of robotic surgery, successfully applied in oncological surgery as well, are undeniable and well known: high precision and increased control of the intervention, improved 3D visualization, reduced intraoperative blood loss, fewer postoperative complications, faster patient recovery, and shorter hospital stays.
Robots guide the surgeon in three-dimensional space by tracking the movements of the instruments during the surgical procedure using optical or electromagnetic sensors. The computer and the robot alert the surgeon about the precise location and spatial orientation of the surgical instruments, providing feedback on their placement and orientation and on any danger to the surrounding anatomical structures. Many robots have "haptic" sensors, which provide increased resistance to tissue movement, to give the surgeon feedback during the surgery. Thus, robots define haptic boundaries for the surgical instruments and provide haptic feedback, in the form of tactile or auditory and visual alerts, when the surgeon deviates from the safety area created during preoperative surgical planning, preventing injury to the organs outside the operating field, as sketched after this paragraph [18]. Today's robots are semi-automated, helping the surgeon to perform extensive minimally invasive surgical manoeuvres in three-dimensional space by positioning the instruments, but maintaining a "passive" attitude, although theoretically the robots could perform certain manoeuvres. Five leading robotic surgery systems are: (1) da Vinci by Intuitive Surgical, (2) Ion by Intuitive Surgical, (3) Mako by Stryker, (4) NAVIO by Smith & Nephew, and (5) Monarch by Auris Health [19,21]. These are successfully used in colorectal cancer surgery, gynecological oncology, prostate and renal cancer, lung cancer, etc. [18].
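The haptic-boundary mechanism described above can be illustrated with a short sketch. The following Python fragment is purely illustrative and is not a real surgical-robot API: the spherical safety zone, the warning margin, and the alert logic are simplifying assumptions standing in for the geometry computed during preoperative planning.

```python
# Illustrative sketch (not a real robot API) of the "haptic boundary" idea:
# the system tracks the instrument tip in 3-D space and raises graded alerts
# as it approaches or crosses the safety zone defined preoperatively.
import numpy as np

SAFETY_CENTER = np.array([0.0, 0.0, 0.0])  # hypothetical planned target (mm)
SAFETY_RADIUS = 25.0                       # hypothetical safe-zone radius (mm)
WARNING_MARGIN = 5.0                       # start warning before the boundary

def check_instrument(tip_position: np.ndarray) -> str:
    """Return an alert level based on the tip's distance from the zone center."""
    distance = np.linalg.norm(tip_position - SAFETY_CENTER)
    if distance > SAFETY_RADIUS:
        return "ALERT: boundary crossed - increase haptic resistance"
    if distance > SAFETY_RADIUS - WARNING_MARGIN:
        return "WARNING: approaching boundary - audible/visual cue"
    return "OK: within planned safety zone"

# Simulated tracker readings from optical/electromagnetic sensors.
for tip in [np.array([5.0, 3.0, 2.0]),
            np.array([15.0, 12.0, 10.0]),
            np.array([20.0, 18.0, 9.0])]:
    print(tip, "->", check_instrument(tip))
```

A real system would use the planned anatomical geometry rather than a sphere and would couple the alert to motor resistance, but the principle (distance check against a preoperative boundary) is the same.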

Machine Learning (ML)

Machine learning (ML) enables computers "to receive data and learn on their own" from "examples rather than from a list of instructions". ML uses methods which combine algorithms to mimic human thinking, "enabling computers to make predictions by using substantial quantities of patient data, by learning their own associations". In this way, "machine learning can create algorithms which function similarly to the reasoning of doctors". ML can also learn and "take into account variables and interactions, make often unexpected predictions, and offer solutions not previously described in the literature and previously unknown to researchers". It can thus make predictions about the evolution of an oncological patient's disease and its prognosis, and offer therapeutic solutions. It can revolutionize healthcare by enabling doctors to identify "which patients may develop certain diseases", "which patients should be examined more often", and which patients should be "treated more aggressively", and to select the most appropriate specific treatments (personalized medicine) [19,22].
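As a concrete illustration of the risk prediction described above, the following minimal sketch fits a logistic regression to synthetic patient variables and scores a hypothetical new patient; all features, coefficients, and labels are invented for illustration and carry no clinical meaning.

```python
# Minimal sketch of ML-based risk stratification: a model learns from
# (synthetic) patient variables which patients are at higher risk, so that
# they could be "examined more often". Everything here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(30, 85, n)
smoker = rng.integers(0, 2, n)
family_history = rng.integers(0, 2, n)
X = np.column_stack([age, smoker, family_history])

# Synthetic outcome: risk rises with age, smoking, and family history.
logit = -8.0 + 0.08 * age + 1.2 * smoker + 0.9 * family_history
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Stratify a hypothetical new patient; a high predicted risk could trigger
# more frequent screening under a risk-stratification policy.
patient = np.array([[68.0, 1, 1]])
print("predicted risk:", model.predict_proba(patient)[0, 1])
```

The output is a probability, which is what allows the "examine more often" and "treat more aggressively" thresholds mentioned above to be set as policy decisions rather than hard-coded rules.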
Deep learning is the most advanced part of ML. It uses advanced computational models which "allow an algorithm to program itself by learning from a large set of examples, removing the need to specify the rules explicitly". In practice, the computer acts autonomously in decision-making. Deep learning has been used by companies such as Google and Facebook to analyse "big data" in order to "anticipate what people search for on the internet: where they wish to travel, what they like to buy, what they prefer to eat, and what kind of friends they make" [19,23]. Medical applications include imaging diagnosis and establishing the treatment for a given disease. An example is prediction in the diagnosis and treatment of skin cancer (melanoma and non-melanoma tumors) using the computerized analysis of photographic images and the gene expression profile (GEP) obtained through biopsy [24]. Similarly, for breast cancer, the analysis of the anatomopathological specimens is fast, and the diagnosis is comparable to that made by the best pathologists [25]. AI can analyse the CT scans of a patient with cancer and, by combining these data and learning from previous patients, recommend patient-tailored radiotherapy which aims to minimize injury to adjacent organs. The work of radiation oncologists is made easier by reducing the time needed for treatment planning and quality control, and by standardizing the process [26,27].
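The following toy example hints at what "an algorithm that programs itself" looks like in code: one gradient-descent step of a tiny convolutional network on random stand-in images. This is a minimal sketch in PyTorch; real dermoscopy or pathology models are far larger and trained on curated clinical data.

```python
# Minimal sketch of deep learning: instead of hand-written rules, a
# convolutional network adjusts its own weights from labeled examples.
# The tiny network and random images are purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(                     # a toy image classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                      # e.g., benign vs. suspicious
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a synthetic batch of 32x32 RGB "lesion" images:
# the loss gradient tells every weight how to change, which is the
# "learning from a large set of examples" described above.
images = torch.randn(16, 3, 32, 32)
labels = torch.randint(0, 2, (16,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss after one step:", loss.item())
```

Repeating this step over millions of labeled images is what removes the need to specify diagnostic rules explicitly: the rules end up encoded in the learned weights.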

Liability for the Use of AI in Medicine – Principles

A. Legal Liability
Several questions arise: Who is legally liable for the use of AI in autonomously operating systems in the medical field? Does the same law apply as for cars which drive and park autonomously or for airplanes flying on autopilot? Who is liable when a surgical robot using AI has a manufacturing defect?
  • Design defect - The specialty literature shows that there are few cases in which surgical robots have design defects. Even though robotic surgeries have increased by 250% in recent years, there have been few malpractice lawsuits on this topic. The specialties with the highest number of complaints were obstetrics/gynecology (48.9%), general surgery (28.9%), and urology (15.6%) [28]. Therefore, the person making a malpractice claim should distinguish between a design defect of a robot using AI and the actions of the doctor who performed the surgery. This is why we believe it is necessary for doctors to supervise the application of AI in robotic surgery. AI and virtual reality (VR) systems could help surgeons learn faster how to work with robots [19,29].
  • Estimated risks - The application of a new product also entails usage risks which may be attributable to the manufacturers. The use of AI includes predictable risks common to many medical devices, such as malfunction, improper use, and wear and tear of the devices. All of these should involve legal liability similar to that for non-AI products. Some of the predictable risks are unique and associated with AI, such as erroneous data input, faulty algorithm application, discrimination, IT system failures, etc. [19].
  • Erroneous information (bad data) - AI and deep learning depend on the quality of the data supplied [30]. Billions of medical data points are used to generate models, and if these are not monitored, errors can be amplified (illustrated in the sketch after this list). It is therefore essential to ensure clarity and comprehension when introducing algorithms, and the data must be reproducible. Several factors influence this activity and can generate errors: the sheer volume of information, its quality, the scope and complexity of information from the medical field (especially in oncology), its distribution and evolution over time (medical guidelines change constantly as new methods of diagnosis and treatment emerge), and, last but not least, the interpretation of this information. Regarding the volume and quality of the data, although the number of patients per day that an AI system can "see" may be limited, the volume of processed information is huge. At least 10 parameters are necessary to make a correct diagnosis [31,32]. These data are far more complex than voice and image recognition [19]. At the same time, care must be taken that, if the system is used for the prognosis of a neoplastic disease and the data analysis shows a life expectancy of less than 6-12 months, the patient does not remain without medication [19]. We consider that the doctor should not be liable for inadequate system operation caused by AI programming defects.
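The sketch below makes the bad-data point tangible: on purely synthetic data, the same model is trained once on clean labels and once on labels with 30% random corruption, and its held-out accuracy degrades accordingly. Everything in it is invented for illustration.

```python
# Toy illustration of the "bad data" problem: identical models trained on
# clean vs. partially corrupted labels, compared on the same held-out set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))             # ten parameters, as noted above
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # synthetic "true" diagnosis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels at random ("bad data").
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean data", y_tr), ("bad data ", noisy)]:
    model = LogisticRegression().fit(X_tr, labels)
    print(name, "-> held-out accuracy:", model.score(X_te, y_te))
```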
Medicine is not a science guided solely by statistics, mathematics, and algorithms. The question is how AI can interact with human feelings, which are an important part of the art of medicine, such as "touch, compassion, and empathy". Illness in general, and the diagnosis of cancer in particular, represents more than a simple physical examination or a laboratory test, and from this point of view the doctor-patient relationship is important. From an ethical point of view, communication between doctor and patient, and between doctor and family, is important. "Today's technology is not yet sensitive enough to capture nuanced social signals, such as body language, tone of voice, and the emotional state at the respective moment." [19,20,21,22,23,24,25,26,27,28,29,30,31]
Taking all of the above into account, the following questions arise: How safe is the use of AI devices, and what are the accepted risks, given that people can also make mistakes? The legal system should consider the following questions: Did AI perform as well as a person? Is the performance of the AI system in accordance with that described by the manufacturer? Is the legislation prepared to answer such questions? The way the AI algorithm is created and the way electronic records are used are important, because there are cases in the literature where complaints have arisen regarding the method of making the diagnosis and the prognosis of the disease's evolution [19].
  • Malpractice - By using AI in medicine, the system adds new elements to the complexity of malpractice, especially since the legislation in this domain is in its infancy. Doctors must ensure a human interface to AI, carefully interpreting the data provided by the system and making correct clinical decisions. Doctors devote an average of 15 minutes to a patient's history, but with AI they could see many more patients and be exposed to a greater volume of information. When doctors breach their legal obligations and do not meet the standard of care for the patients, they are subject to malpractice claims and to the sanctions provided by the law. This means that doctors are "obliged to have a standard of learning and skills" - continuous medical education. From this it can be deduced that, sooner rather than later, with the introduction of AI into the medical practice of diagnosis and treatment, "doctors must know how the law will hold them liable for the physical and psycho-emotional injuries resulting from the interaction between algorithms and patients" [19]. AI will rapidly influence the diagnosis and treatment of patients, predicting who will develop a certain disease and recommending the appropriate treatment, and it may become part of "the standard of care and efficiency in treating patients". At present, given the absence of adequate legislation in the field, doctors prefer to use AI as a tool to confirm their work rather than as a source of improvement [19,32]. These aspects will have to be included in a new malpractice law that protects the doctors who use AI; otherwise, they will have to appeal to the courts to defend their decisions, and there is therefore a risk that doctors will avoid applying AI. For example, a surgeon who uses an AI-assisted robot risks being subject to malpractice claims if the procedure is not covered by the rules of use for the respective surgery, and will avoid applying this technique. Thus, during a robotic operation performed with AI, the surgeon will only have to supervise certain stages of the operation, but he/she will have to correct any problems which arise at the end of the operation [19]. This shows that surgeons must also master classic/laparoscopic surgical techniques.
At this time, it is difficult to prove whether a patient's injury was caused strictly by the manufacturer of the AI-equipped robot or by the faulty use of the system by the surgeon. It is also possible that the hospital which owns the robot and is responsible for its maintenance is to blame. Thus, the following question arises: in such a case, whom does malpractice law sanction? This is an aspect to which jurists must adapt the law. It must be taken into account that AI and ML can optimize oncological treatment [33] and can consult an immense number of patients per day, which would reduce the waiting time for personalized diagnosis and treatment, while also providing a prognosis of the evolution of the disease. It is inevitable that, as the application of AI increases, the number of malpractice cases will also increase. It should not be forgotten that the patient must be informed about the use of the AI system and consent to it. With the development of ML, is it possible for malpractice law to apply strictly to the computer? This is another question that jurists will have to consider, and insurance policies may need to be modified accordingly.
B. The Current Legal Framework
Liability for malpractice when using artificial intelligence (AI) in medicine is a developing field, with regulations varying according to the jurisdiction. In the European Union and in Romania, the current regulations regarding medical liability apply to AI as well, but legislative initiatives to adapt them are underway.
  • The general framework applicable in Romania and the EU
    • The Civil Code and Law no. 95/2006 regarding healthcare reform regulate the civil liability of medical professionals. The doctor, the medical facility, or the manufacturer of a medical device may be held liable if the use of AI causes harm [34,35].
    • Regulation (EU) 2017/745 regarding medical devices (MDR - Medical Devices Regulation), i.e., Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, sets strict requirements for AI-based medical software, particularly in terms of safety and performance [36].
    • The General Data Protection Regulation (GDPR), i.e., Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, imposes requirements regarding the protection of medical data, particularly in the case of AI algorithms which process sensitive information [37,38].
  • Who is liable in case of harm caused by AI?
    • The doctor - if he/she uses AI as a diagnostic tool and makes a wrong decision based on its recommendation. Although AI may support the medical decision, the final responsibility lies with the doctor.
    • The provider of medical services - if a hospital or clinic uses a faulty AI system and fails to implement adequate safety measures.
    • The AI software manufacturer - may be liable if the algorithm has systemic errors, offers faulty recommendations or does not comply with the MDR standards.
  • Current and Future regulations
    • The AI Act (the EU Regulation on Artificial Intelligence), i.e., the Regulation laying down harmonised rules on artificial intelligence, to which the Council gave its final green light on 21 May 2024 as the first worldwide rules on AI, will impose stricter requirements for the AI systems used in medicine, classifying them as "high risk", and will oblige manufacturers to ensure transparency, safety, and the possibility of human oversight [39].
    • The Directive on Liability for Defective Products (Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products, the 2024 revision) extends the liability of manufacturers to software, including medical AI [40].
To conclude, the liability for malpractice in the case of AI in medicine depends on the human factor involved, as well as on the obligations of the manufacturers and of the providers of medical services. In the future, the AI Act will clarify this responsibility even further.
Liability for malpractice when using Artificial Intelligence in medicine - Detailed legal analysis
The use of artificial intelligence (AI) in the medical field raises complex legal issues regarding liability in the event of harm. In the absence of specific exhaustive regulations, a general legislative framework applies in Romania and the European Union, supplemented by recent initiatives aimed at adapting the norms to the new technological realities.
1. The normative framework applicable in Romania and the European Union
1.1. Medical liability and malpractice
In Romania, liability for malpractice is regulated by:
  • Law no. 95/2006 regarding the healthcare reform – The chapter regarding civil liability for medical malpractice (art. 642-689) [34].
  • The Civil Code - General principles of tort liability (art. 1349 and the following) [35].
  • Government Ordinance no. 137/2000 regarding the prevention and sanctioning of all forms of discrimination - Relevant in cases of algorithmic bias in medical AI [37].
These regulations establish the liability of the doctors, healthcare facilities, and other actors involved in providing medical services.
1.2. The liability of the manufacturers of medical software based on AI
AI used in medicine can be classified as a medical device according to:
  • Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (MDR), amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, which classifies medical software as a medical device and imposes strict requirements for safety, performance, and post-market surveillance [36].
  • Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices (IVDR), applicable to AI used in the diagnosis of diseases through laboratory tests [41].
  • Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety (the "GPS Regulation"), applicable from 13 December 2024 [42].
  • For the same reasons that led to the adoption of the GPS Regulation, Directive (EU) 2024/2853 on liability for defective products (the "LPD Directive"), repealing Directive 85/374/EEC, was published on 18.11.2024. The Directive is to apply to products placed on the market or put into service after 9 December 2026, and by 9 December 2026, Member States must ensure the entry into force of the regulatory acts transposing this new Directive [40].
  • The LPD Directive will apply to any product, understood as any movable property, even if it is integrated into another movable or immovable property or interconnected with it, and will also cover digital production files as well as software [40].
  • Software may be placed on the market as a stand-alone product and subsequently integrated into other products as a component, potentially causing damage when operated. Therefore, for reasons of legal certainty, the LPD Directive will clarify that software is a product for the purposes of strict liability, regardless of how it is supplied or used, and therefore regardless of whether it is stored on a device or accessed via cloud technologies. However, the source code of software should not be considered a product for the purposes of this Directive, as it is merely a set of information [40].
  • Liability for defective products is objective liability, meaning that no fault is required. Liability lies with the manufacturer of the product, the manufacturer of a defective component, and if the product or component is produced outside the EU, then the importer or the manufacturer’s authorised representative may also be held liable. If there is no importer or authorised representative, then the logistics service provider may be held liable. Software developers or manufacturers, including providers of AI systems within the meaning of the (EU) AI Regulation, will be considered manufacturers and treated as such [40].
Non-compliance with these regulations may entail the liability of the software manufacturers, distributors, and providers of medical services.
1.3. Data protection and confidentiality
The AI which processes medical data must comply with:
  • The General Data Protection Regulation (GDPR) - Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (https://gdpr-info.eu/) - particularly Art. 9 (sensitive data) and Art. 22 (automated decisions with a significant impact on individuals) [38].
  • Law no. 190/2018 on measures implementing Regulation (EU) 2016/679 (the GDPR) in Romania, published in the Official Gazette no. 651 of 26 July 2018, which sets specific rules for processing health data [43].
Non-compliance with these norms may be sanctioned by the National Supervisory Authority for Personal Data Processing (ANSPDCP) [44].
1.4. The Directive regarding liability for defective products
  • EU Directive No. 2024/2853 on liability for defective products and repealing Directive No. 85/374 (the “LPD Directive”). This Directive is to apply to products placed on the market or put into service after 9 December 2026. Member States must bring into force the legislative acts transposing this Directive by 9 December 2026 [40].
  • Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, as revised in 2024, includes medical software and AI in the category of products for which the manufacturer can be held directly liable [45].
  • This imposes the objective liability of the manufacturer for the harm caused by product defects, irrespective of fault.
2. Who is liable in case harm is caused by medical AI?
In the event of harm resulting from the use of AI in medicine, liability may be attributed to several entities:
2.1. The liability of the doctor
  • Art. 643 of Law no. 95/2006 stipulates that doctors are liable for the harm caused to patients in the case of a professional error [34].
  • If AI is used only as a decision support tool and the doctor makes the final decision, the doctor may be held liable if it is proven that he/she used AI in an inappropriate way.
Example: A doctor relies exclusively on an AI-generated recommendation to diagnose a patient, without checking the correctness of the diagnosis, and the patient is harmed. In this case, the doctor may be held liable for malpractice.
2.2. The liability of the healthcare facility
  • The hospitals and the clinics may be liable under Art. 1357 of the Civil Code and Art. 644 of Law no. 95/2006 if negligence in the selection, implementation, or supervision of an AI system leads to harm to the patient [34,35].
  • If the medical facility uses AI to make autonomous decisions (without the doctor’s involvement), it can be directly liable.
Example: A hospital implements an AI system for patient triage, but the algorithm has a bias which causes underdiagnosis of certain categories of patients. The hospital may be liable for the harm done.
2.3. The liability of the software manufacturer
  • According to the MDR Regulation and the Directive regarding the liability for defective products, the manufacturer of an AI software may be liable if its defects cause harm [36,40,41,42].
  • If the AI is trained on a faulty data set and produces systematically erroneous decisions, the manufacturer may be liable for a defective product [36,40,41,42].
Example: An AI algorithm for diagnosing skin cancer was trained predominantly on images of Caucasian patients and misses the diagnosis of the patients with darker skin. If a patient is harmed, the software manufacturer may be liable.
3. Relevant current and future regulations
3.1. The Artificial Intelligence Act (AI Act)
  • The AI Regulation (AI Act), proposed by the European Commission in 2021 and adopted on 21 May 2024, classifies the AI used in medicine as a "high-risk AI system" [39].
  • It imposes strict requirements for transparency, auditability, human intervention, and safety, which will influence both users (doctors, hospitals) and software developers.
3.2. Revision of the Directive regarding liability for defective products
  • Directive (EU) 2024/2853 on liability for defective products (the "LPD Directive"), repealing Directive 85/374/EEC and originating in a 2022 proposal, extends liability to software manufacturers, including AI manufacturers, and eliminates the need for victims to prove a specific algorithmic defect in order to obtain compensation for damages [40].
4. Conclusions and recommendations
  • Currently, liability for the use of AI in medicine is assessed according to the general regulations regarding medical malpractice, consumer protection, and medical devices.
  • The AI Act and the new rules regarding product liability will bring more clarity and legal responsibility for AI users and developers.
  • Doctors and medical facilities must implement clear protocols for the use of AI, avoid exclusive dependence on AI in decision-making, and observe the principles of medical ethics and data protection.
In medicine, through the use of adequate algorithms, artificial intelligence will be used more and more and, consequently, it must be used with moral responsibility. AI cannot completely replace clinical reasoning, but it can help doctors make the best decisions [3]. Although there are moral dilemmas regarding the use of AI, science must progress for the benefit of people, of course within an appropriate legal framework. AI is revolutionizing healthcare, but it may create new liabilities for manufacturers and users alike. AI is already being used for the diagnosis and treatment of cancer, with AI-equipped surgical robots becoming more and more present in its treatment (prostatectomies, etc.).
Naturally, as the use of AI increases, so do the risks and the liability associated with it. In order not to discourage its use and to maximize the application of AI systems, the risks and the liability must be very well defined, so that all the parties involved (patient, doctor, device) understand their responsibilities and the implicit legal liability when the technology inevitably causes harm. Current and future medical insurance must be prepared for new types of malpractice at a pace which keeps up with the pace of AI system development [46,47].
Essential elements of patient care (especially for the oncology patient), such as clinical examination, compassion, clinical intuition, and empathy, make medicine a distinctive science and will probably mean that, in many cases, predominantly human medical care will surpass AI-dominated care. However, in the future, AI will play an increasingly major role in medical decision-making. AI influences medical malpractice liability by adding new causes of action and new types of damages; from this point of view, a judge will have to rule on the new malpractice cases very carefully and determine whose fault it is: the doctor's, the AI manufacturer's, and/or the hospital's [48,49,50,51,52,53,54,55,56].
A doctor's decision whether or not to follow AI predictions and to use the system will have legal consequences. New issues will arise: for example, doctors who are trained to work with AI technology (e.g., AI-assisted robotic surgery) but who never learned to operate without it will have difficulties when computers fail, which could lead to malpractice liability issues if the doctor is not prepared to complete an operation or solve a complication without the robot. Cause-and-effect relationships will play an important role, because it will be difficult for a judge to determine whose responsibility it is when the doctor's responsibilities are shared with AI systems. Here, the opinion of an expert in the field will likely be required [57,58,59,60,61,62,63,64,65].

Conclusions

Doctors must be the human interface between technology and the patient by responsibly using AI in the diagnosis and treatment of a disease, especially neoplastic diseases, so that essential elements such as compassion and empathy do not disappear from medical practice. As we have sought to emphasize in this paper, we must prepare the legal framework to provide the parties involved (doctor, technology manufacturer, hospital) with the reasonable capacity for the development and application of AI and ML systems, in order to anticipate liability issues from a legal point of view, so that these technologies may continue to develop rapidly and revolutionize medical care.

References

  1. Cestonaro C., Delicati A., Marcante B., Caenazzo L. and Tozzo P. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review. Front. Med., 27 November 2023. Sec. Regulatory Science.Volume 10 - 2023. [CrossRef]
  2. AI in medicine - Helmholtz Munich. https://www.helmholtz-munich.de/en/newsroom/research-highlights/ai-in-medicine. Accessed 06.03.2024.
  3. https://www.gustaveroussy.fr/en/france-set-become-global-leader-using-ai-diagnose-and-treat-diseases. Accessed 20.01.2025.
  4. Tripathi S., Tabari A., Mansur A., Dabbara H., Bridge C.P.and Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics 2024, 14, 174. [CrossRef]
  5. AI Predicts Future Pancreatic Cancer. AI model spots those at highest risk for up to three years before diagnosis. https://hms.harvard.edu/news/ai-predicts-future-pancreatic-cancer. Accessed 06.03.2025.
  6. Terranova C, Cestonaro C, Fava L and Cinquetti A (2024) AI and professional liability assessment in healthcare. A revolution in legal medicine? Front. Med. 10:1337335. [CrossRef]
  7. Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | Actualitate | Parlamentul European (europa.eu). https://www.europarl.europa.eu/news/ro/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai, Accessed 13.08.2024.
  8. https://www.capital.ro/regulamentul-ia-inteligenta-artificiala-uniunea-europeana.html. Accessed 13.08.2024.
  9. Government Decision no. 832/2024 on the approval of the National Strategy in the field of artificial intelligence 2024-2027. Published in the Official Gazette no. 730 bis of July 25, 2024, Part I no. 730Bis Annex to Government Decision no. 832/2024 on the National Strategy in the field of artificial intelligence 2024-2027.pdf. Accessed 13.08.2024.
  10. Maliha G., Gerke S., Cohen I.G. and Parikh R.B. Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation. The Milbank Quarterly, Vol. 00, No. 0, 2021 (pp. 1-19).
  11. Ström P, Kartasalo K, Olsson H, et al. Artificial intelligence for diagnosis and grading of prostate cancer in biopsies: a population-based, diagnostic study. Lancet Oncol. 2020;21(2):222-232. https://doi.org/10.1016/S1470-2045(19)30738-7.
  12. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, Aggarwal K, Ibrahim S, Patil V, Smriti K, Shetty S, Rai BP, Chlosta P and Somani BK (2022) Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front. Surg. 9:862322. [CrossRef]
  13. IBM. https://www.ibm.com/docs/en/announcements/watson-oncology?region=CAN. https://www.ibm.com/us-en/marketplace/ibm-watson-for-oncology. Accessed 05.09.2024.
  14. Zou F, Tang Y, Liu C, Ma J and Hu C (2020) Concordance Study Between IBM Watson for Oncology and Real Clinical Practice for Cervical Cancer Patients in China: A Retrospective Analysis. Front. Genet. 11:200. [CrossRef]
  15. Jie, Z., Zhiying, Z. & Li, L. A meta-analysis of Watson for Oncology in clinical application. Sci Rep 11, 5792 (2021). [CrossRef]
  16. Printz, C. Artificial intelligence platform for oncology could assist in treatment decisions. Cancer 123(6), 905 (2017). https://doi.org/10.1002/cncr.30655.
  17. Shah M, Naik N, Somani BK, Hameed BMZ. Artificial intelligence (AI) in urology-Current use and future directions: An iTRUE study. Turk J Urol 2020; 46(Supp. 1): S27-S39.
  18. Frank Griffin, Artificial Intelligence and Liability in Health Care, 31 Health Matrix 65 Available at: https://scholarlycommons.law.case.edu/healthmatrix/vol31/iss1/5.
  19. Bradford S. Waddell & Douglas E. Padgett, Computer Navigation and Robotics in Total Hip Arthroplasty, in 5 Orthopaedic Knowledge Update: Hip and Knee Reconstruction 423, 427 (Michael A. Mont & Michael Tanzer eds., 2017).
  20. The Nobel Prize in Chemistry 2024 is about proteins, life’s ingenious chemical tools. David Baker has succeeded with the almost impossible feat of building entirely new kinds of proteins. Demis Hassabis and John Jumper have developed an AI model to solve a 50-year-old problem: predicting proteins’ complex structures. These discoveries hold enormous potential https://www.nobelprize.org/prizes/chemistry/2024/press-release/, Accessed 21.01.2025.
  21. Pandav K, Te AG, Tomer N, Nair SS, Tewari AK. Leveraging 5G technology for robotic surgery and cancer care. Cancer Rep (Hoboken). 2022 Aug;5(8):e1595. Epub 2022 Mar 9. PMID: 35266317; PMCID: PMC9351674. [CrossRef]
  22. Parikh RB, Manz C, Chivers C, et al. Machine Learning Approaches to Predict 6-Month Mortality Among Patients With Cancer. JAMA Netw Open. 2019;2(10):e1915997. https://doi.org/10.1001/jamanetworkopen.2019.15997.
  23. Tien Yin Wong & Neil M. Bressler, Artificial Intelligence With Deep Learning Technology Looks Into Diabetic Retinopathy Screening, 316 JAMA 2366, 2366 (2016).
  24. Wei ML, Tada M, So A and Torres R (2024) Artificial intelligence and skin cancer. Front. Med. 11:1331895. [CrossRef]
  25. Zhou LQ, Wu XL, Huang SY, Wu GG, Ye HR, Wei Q, Bao LY, Deng YB, Li XR, Cui XW, Dietrich CF. Lymph Node Metastasis Prediction from Primary Breast Cancer US Images Using Deep Learning. Radiology. 2020 Jan;294(1):19-28. Epub 2019 Nov 19. Erratum in: Radiology. 2024 Mar;310(3):e249009. doi: 10.1148/radiol.249009. PMID: 31746687. [CrossRef]
  26. M. Zadnorouzi, S.M.M. Abtahi. Artificial intelligence (AI) applications in improvement of IMRT and VMAT radiotherapy treatment planning processes: A systematic review. Radiography. Volume 30, Issue 6, 2024, Pages 1530-1535, ISSN 1078-8174. [CrossRef]
  27. Mariko Kawamura, Takeshi Kamomae, Masahiro Yanagawa, Koji Kamagata, Shohei Fujita, Daiju Ueda, Yusuke Matsui, Yasutaka Fushimi, Tomoyuki Fujioka, Taiki Nozaki, Akira Yamada, Kenji Hirata, Rintaro Ito, Noriyuki Fujima, Fuminari Tatsugami, Takeshi Nakaura, Takahiro Tsuboyama, Shinji Naganawa, Revolutionizing radiation therapy: the role of AI in clinical practice, Journal of Radiation Research, Volume 65, Issue 1, January 2024, Pages 1-9. [CrossRef]
  28. De Ravin E, Sell EA, Newman JG, Rajasekaran K. Medical malpractice in robotic surgery: a Westlaw database analysis. J Robot Surg. 2023 Feb;17(1):191-196. Epub 2022 May 12. PMID: 35554817; PMCID: PMC9097886. [CrossRef]
  29. E.W. Riddle, D. Kewalramani, M. Narayan, and D.B. Jones. Surgical Simulation: Virtual Reality to Artificial Intelligence. Current Problems in Surgery 61, Issue 11, (2024) 101625. [CrossRef]
  30. Verghese A, Shah NH, Harrington RA. What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA. 2018;319(1):19–20. [CrossRef]
  31. Derek Bolton & Grant Gillett, The Biopsychosocial Model Of Health And Disease: New Philosophical and Scientific Developments (2019), Ncbi Bookshelf.
  32. Aliyu Tetengi Ibrahim, Mohammed Abdullahi, Armand Florentin Donfack Kana, Mohammed Tukur Mohammed, Ibrahim Hayatu Hassan. Categorical classification of skin cancer using a weighted ensemble of transfer learning with test time augmentation. Data Science and Management, 2024, ISSN 2666-7649. (https://www.sciencedirect.com/science/article/pii/S2666764924000535). [CrossRef]
  33. Chen Sagiv et al., Artificial intelligence in surgical pathology - Where do we stand, where do we go? European Journal of Surgical Oncology. [CrossRef]
  34. Law no. 95/2006 on the reform in the field of healthcare – Chapter on civil liability for medical malpractice (art. 642-689). https://legislatie.just.ro/public/detaliidocument/71139, Accessed 21.10.2025.
  35. Civil Code – General principles of tortious civil liability (art. 1349 et seq.). https://legislatie.just.ro/public/detaliidocument/109884, Accessed 21.10.2025.
  36. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, http://data.europa.eu/eli/reg/2017/745/oj, https://eur-lex.europa.eu/legal-content/RO/TXT/PDF/?uri=CELEX:32017R0745, Accessed 21.10.2025.
  37. Romanian Government Ordinance No. 137/2000 on the prevention and sanctioning of all forms of discrimination, https://legislatie.just.ro/public/detaliidocument/24129, Accessed 21.10.2025.
  38. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), http://data.europa.eu/eli/reg/2016/679/oj, Accessed 21.10.2025.
  39. Regulation laying down harmonised rules on artificial intelligence (artificial intelligence act, 21 May 2024) https://www.consilium.europa.eu/ro/policies/artificial-intelligence/, Accessed 21.10.2025.
  40. Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products, http://data.europa.eu/eli/dir/2024/2853/oj, Accessed 21.10.2025.
  41. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices, http://data.europa.eu/eli/reg/2017/746/oj Accessed 21.10.2025.
  42. Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety), http://data.europa.eu/eli/reg/2023/988/oj, Accessed 21.10.2025.
  43. LAW no. 190 of 18 July 2018, on measures implementing Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), Issuer, Parliament of Romania, Published in the Official Gazette no. 651 of 26 July 2018. https://legislatie.just.ro/Public/DetaliiDocument/203151, Accessed 21.10.2025.
  44. National Supervisory Authority for Personal Data Processing. https://www.dataprotection.ro/, Accessed 21.10.2025.
  45. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products), http://data.europa.eu/eli/dir/1985/374/oj, Accessed 21.10.2025.
  46. France needs a public label for the evaluation of AI used in healthcare. 11 September 2024, https://www.ghadvocates.eu/france-public-label-evaluation-ai-healthcare/ Accessed 21.01.2025.
  47. Carter, Stacy M. et al. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. The Breast, Volume 49, February 2020, Pages 25-32. [CrossRef]
  48. L’IA dans le domaine de la santé: un immense potentiel, d’énormes risques. https://www.oecd.org/content/dam/oecd/fr/publications/reports/2024/01/ai-in-health-huge-potential-huge-risks_ff823a24/ebfdeb50-fr.pdf. Accessed 21.01.2025.
  49. Price, W. Nicholson; Gerke, Sara; and Cohen, I. Glenn, “Liability for Use of Artificial Intelligence in Medicine” (2022). Law & Economics Working Papers. 241. https://repository.law.umich.edu/law_econ_current/241.
  50. Samuel D. Hodge, The Medical and Legal Implications of Artificial Intelligence in Health Care – An Area of Unsettled Law, 28 RICH. J.L. & TECH. 405 (2022).
  51. Laptev, Vasiliy Andreevich, Inna Vladimirovna Ershova, and Daria Rinatovna Feyzrakhmanova. 2022. Medical Applications of Artificial Intelligence (Legal Aspects and Future Prospects). Laws 11: 3. [CrossRef]
  52. Brent Mittelstadt. The Impact of Artificial Intelligence on The Doctor-Patient Relationship. https://rm.coe.int/inf-2022-5-report-impact-of-ai-on-doctor-patient-relations-e/1680a68859.
  53. Sarah Kamensky, Artificial Intelligence and Technology in Health Care: Overview and Possible Legal Implications, 21 DePaul J. Health Care L. (2020) Available at: https://via.library.depaul.edu/jhcl/vol21/iss3/3.
  54. Lupton M (2018) Some ethical and legal consequences of the application of artificial intelligence in the field of medicine. Trends Med, 2018. Volume 18(4): 1-7. [CrossRef]
  55. Jorstad KT. Intersection of artificial intelligence and medicine: tort liability in the technological age. J Med Artif Intell 2020;3:17. [CrossRef]
  56. Teka Komisji Prawnicze, PAN Oddział w Lublinie, vol. XIV, 2021, no. 1, pp. 205-218. [CrossRef]
  57. Gonçalves. MA. Liability arising from the use of Artificial Intelligence for the purposes of medical diagnosis and choice of treatment: who should be held liable in the event of damage to health? https://arno.uvt.nl/show.cgi?fid=146408. Accessed 21.01.2025.
  58. Jobson D., Mar V., Freckelton I. Legal and ethical considerations of artificial intelligence in skin cancer diagnosis. Volume 63, Issue 1, February 2022, Pages e1-e5. [CrossRef]
  59. Smith, H., & Fotheringham, K. (2020). Artificial intelligence in clinical decision-making: Rethinking liability. Medical Law International, 20(2), 131-154. [CrossRef]
  60. Tripathi, S.; Tabari, A.; Mansur, A.; Dabbara, H.; Bridge, C.P.; Daye, D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics 2024, 14, 174. https://doi.org/10.3390/diagnostics14020174.
  61. Shah M, Naik N, Somani BK, Hameed BMZ. Artificial intelligence (AI) in urology-Current use and future directions: An iTRUE study. Turk J Urol 2020; 46(Supp. 1): S27-S39.
  62. Parikh RB, Manz C, Chivers C, et al. Machine Learning Approaches to Predict 6-Month Mortality Among Patients with Cancer. JAMA Netw Open. 2019;2(10):e1915997. [CrossRef]
  63. https://www.sciencealert.com/scientists-train-ai-to-forecast-over-1000-diseases-years-in-advance, Accessed 21.10.2025.
  64. https://www.insideprecisionmedicine.com/topics/precision-medicine/ai-designs-thousands-of-peptides-with-antibiotic-potential/, Accessed 21.10.2025.
  65. https://www.insideprecisionmedicine.com/topics/informatics/ai-method-improves-accuracy-for-clinical-decision-making/, Accessed 21.10.2025.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.