Relevance
Information and communication technologies play a crucial role in modernizing medical education, especially under the resource constraints typical of low- and middle-income countries. Modern methods such as virtual modeling, remote access, and artificial intelligence have the potential to transform traditional teaching approaches, making them individualized, accessible, and learner-centered. Successful experience with implementing such innovations underscores the importance of integrating digital technologies to bridge educational gaps and train highly skilled healthcare professionals (Pebolo et al., 2024).
Since 2022, the opportunities and challenges associated with the adoption of generative neural networks powered by artificial intelligence (hereinafter referred to as AI) have become topics of discussion in academic circles. There are documented cases in which graduation projects have been written entirely with generative neural networks, and it has been noted that chatbots can assist in passing the test component of the primary accreditation examination for medical practitioners in the specialty of general medicine. Nevertheless, there is still no established legal framework regulating the use of AI in education, despite its direct impact on the system (Bezuglyy & Ershova, 2023; Il’inykh et al., 2025).
Objective
To assess the potential benefits and limitations of implementing generative neural networks in medical education.
Materials and Methods
The qualitative study was designed as follows:
1. Research Question: Do participants in the educational process (students and teachers) use AI-based tools?
2. Data Collection Method: Anonymous face-to-face interviews conducted in person by the researchers in Russian, with audio recording.
3. Instrument Development: An interview guide was developed consisting of two parts: a passport (demographic) section (3 questions) and a research section (6 questions).
4. Data Collection: Interviews were conducted with randomly selected students and faculty members at South Ural State Medical University from January 15th to January 30th, 2025.
5. Data Analysis: The authors analyzed the audio recordings, synthesized the material, and extracted key findings related to the research question.
The study comprised 30 in-depth interviews with students and 10 interviews with teachers (Ph.D.s and Doctors of Science) affiliated with South Ural State Medical University.
Interviews are a qualitative method for collecting detailed insights; however, they do not lend themselves to quantitative evaluation or statistical analysis. Interpreting qualitative data requires caution and awareness of researcher bias.
Limitations of Sample Selection
The study focuses exclusively on one institution, South Ural State Medical University, which restricts generalization beyond this context. Findings might differ significantly at other institutions, regions, or countries.
Thirty interviews with students and ten interviews with faculty members were conducted. Although this number yields rich qualitative data, it limits the representativeness of the sample and the reliability of the conclusions. Larger samples would capture a greater diversity of perspectives and improve overall validity.
For editing the English-language version of the article, we used a large language model (GigaChat).
All interview participants signed voluntary informed consent forms.
The research was awarded a second-degree diploma at the IX International Youth Scientific and Practical Forum «Medicine of the Future: from development to implementation» (Orenburg State Medical University).
Results
According to the authors’ assessment, the use of generative neural networks in medical education falls into two distinct categories: ethical and unethical.
Ethical use of AI in education involves adherence to guiding principles aimed at promoting fairness, transparency, and respect for the rights of all stakeholders, including students, faculty, and administration (Foltynek et al., 2023). Illustrative examples of ethical practices include performing statistical computations, generating idea clouds, and facilitating literature searches. It should be noted that, according to the position outlined by the Ministry of Science and Higher Education in response to an inquiry in 2023, the inclusion of AI-generated content in academic submissions is permissible if limited to levels equivalent to those allowed for citations and references (Bezuglyy & Ershova, 2023; Dambegov, 2023).
Unethical use of AI, conversely, refers to practices that violate fundamental principles of justice, openness, and respect for the rights of others (Foltynek et al., 2023). Based on the interview outcomes, 26 students reported engaging in unethical behavior when using generative neural networks. Specific instances mentioned by respondents included composing review articles, preparing dissertations, crafting presentations, solving problems in basic and clinical disciplines, sitting exams, and responding to instructor queries. The speed of response generation was the predominant rationale for these behaviors.
It is worth noting that only two respondents described ethical uses of AI-related tools, namely executing statistical analyses. Four respondents indicated that they did not use AI at all. Among the experts surveyed, one held an outright negative stance towards incorporating AI in academic tasks, whereas another fully endorsed its use. The majority (eight experts) adopted a balanced perspective, viewing AI as essentially a tool available to students whose appropriateness hinges on critical engagement with its output rather than uncritical reliance. Specifically, presenting AI-produced material as original work was deemed unethical, whereas leveraging it for defined purposes such as statistical computations was considered legitimate.
Discussion
There is currently active discussion of how artificial intelligence (AI) is transforming educational systems, particularly those that train healthcare professionals in low- and middle-income countries. Despite existing constraints such as inconsistent internet access, the high cost of hardware and software, and the unavailability of certain AI applications in specific regions, researchers have observed that most students already use AI tools. Furthermore, they have established a statistically significant relationship between awareness of AI and its application in learning environments (Wobo et al., 2025).
Researchers who introduced interactive displays equipped with AI-based chatbots into their medical training programs report that these technologies provide personalized experiences for each student, enabling virtual simulations and real-time feedback. These innovations positively influence curriculum mastery, knowledge gap reduction, and skill acquisition (Pebolo et al., 2024).
The critical issue concerning the utilization of AI in medical education lies in the absence of regulatory control over this technology. Three examples illustrate this problem. Firstly, the capacity for self-learning may lead to inaccurate information being embedded into chatbot responses. Secondly, without proper oversight, discrepancies can arise in treatment standards if AI models trained on clinical guidelines from one country disseminate them globally, even when conflicting with local practices. Thirdly, chatbots occasionally generate nonexistent data, posing substantial risks during practical implementation in medicine and healthcare settings (Titus, 2024; Il’inykh et al., 2025).
Conclusions
The use of generative neural networks in the academic environment of a medical university can be divided into two main areas: ethical and unethical usage.
The majority (n=26) of participating students engaged in unethical use of AI to complete academic assignments or prepare written work. Only two students reported ethical use of generative neural networks.
The primary motivation for unethical use of generative neural networks was the rapid availability of answers.
The majority (n=8) of interviewed lecturers (Ph.D.s and Doctors of Science) expressed the view that AI serves primarily as a tool for students, with its use being permissible provided there is critical processing of the generated content.
Funding
The research was supported by a grant from Rosmolodezh Grants for the project «Youth Scientific Laboratory «Synopsis», URL: http://sinopsislab.ru.
References
- Abdusalamov, R. A. (2024). Osnovnye napravleniya pravovogo regulirovaniya ispol’zovaniya iskusstvennogo intellekta v sfere vysshego obrazovaniya [Main directions of legal regulation of artificial intelligence use in higher education]. Yuridicheskiy vestnik Dagestanskogo gosudarstvennogo universiteta, 52(4), 135-143. EDN: FOTQUA. [CrossRef]
- Bezuglyy, T. A., & Ershova, M. E. (2023). Ispol’zovanie tekstovykh neyrosetey i iskusstvennogo intellekta v uchebnykh rabotakh studentov [Use of text-based neural networks and artificial intelligence in students’ assignments]. Problemy sovremennogo obrazovaniya [Issues of Modern Education], (5), 206-216. EDN: YGBOFX. [CrossRef]
- Dambegov, A. A. (2023, April 4). Otvet № 7/1333-O ot 04.04.2023 na obrashchenie № 4865-O ot 14.03.2023 [Response No. 7/1333-O dated April 4, 2023, to appeal No. 4865-O dated March 14, 2023]. Retrieved from Minobrnauki Rossii website: https://ecopathology.ru/2023/04/08/otvet-ministerstva-nauki-i-vyshego-obrazovaniya/. Accessed: March 27, 2023.
- Ivanova, L. A. (2024). Iskusstvennyy intellekt pri napisanii nauchnykh statey: polozhitel’nyy ili vredonosnyy faktor? [Artificial intelligence in scientific paper writing: positive or harmful factor?]. Crede Experto: transport, obshchestvo, obrazovanie, yazyk [Crede Experto: Transport, Society, Education, Language], (4), 6-17. EDN: YRKWJQ. [CrossRef]
- Il’inykh, V. A., Bezuglyy, T. A., & Zavarukhin, N. E. (2025). Izucheniye potentsiala primeneniya generativnoy neyroseti dlya resheniya testovoy chasti pervichnoy akkreditatsii po spetsial’nosti “Lechebnoe delo” [Study of the potential of applying generative neural network to solve the test part of initial accreditation in specialty “General Medicine”]. Medical Scientific Journal Synopsis, (1), 82-92. Retrieved from https://cyberleninka.ru/article/n/studying-the-potential-of-using-a-generative-neural-network-to-solve-the-test-part-of-primary-accreditation-in-the-specialty
- Tarabarina, Yu. A. (2025). Akademicheskoye moshennichestvo studentov s ispol’zovaniyem neyrosetey [Academic fraud among students using neural networks]. Sotsial’no-gumanitarnyye znaniya [Social-Humanitarian Knowledge], (1), 123-130. EDN: VSVJHF.
- Cherkasova, M. N., & Taktarova, A. V. (2024). Iskusstvenno sgenerirovannyy akademicheskiy tekst (lingvopragmaticheskiy aspekt) [Artificially generated academic text (linguo-pragmatic aspect)]. Filologicheskie nauki. Voprosy teorii i praktiki [Philological Sciences. Issues of Theory and Practice], 17(7), 2551-2557. EDN: YYWRDN. [CrossRef]
- Foltynek, T., Bjelobaba, S., Glendinning, I., et al. (2023). ENAI Recommendations on the ethical use of Artificial Intelligence in Education. International Journal of Educational Integrity, 19(12). [CrossRef]
- Wobo, K. N., Nnamani, I. O., Alinnor, E. A., Gabriel-Job, N., & Paul, N. (2025). Medical students’ perception of the use of artificial intelligence in medical education. International Journal of Research in Medical Sciences, 13(1), 82. [CrossRef]
- Titus, S. (2024). Implementing extended reality (XR) and artificial intelligence (AI) in health professions education in southern Africa. African Journal of Health Professions Education, 16(2), 2-3. [CrossRef]
- Pebolo, P. F., Jackline, A., Opwonya, M., Otim, R., & Bongomin, F. (2024). Medical Education Technology in Resource-Limited Settings. Advances in Medical Education and Training, 105. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).