In his address to the Italian Parliament on June 10, 2025, devoted to artificial intelligence, Nobel laureate Giorgio Parisi [5] drew attention to a pivotal figure in the history of neuroscience: Camillo Golgi, who as early as 1912 elucidated the structure of the neuron through a distinctive staining technique involving a black solution. Parisi proposed a parallelism between biological and artificial systems (the concept of bioinspiration), arguing that this comparison is far more significant than previously acknowledged by other scholars, including Luciano Floridi [6], whose perspective will be examined subsequently. Parisi particularly emphasized the significance of the dendritic tree structure, associated with the neuronal excitation and inhibition phenomena well documented in the neurological literature [7]. These dynamics play a crucial role in associative memory processes, whereby partial information can reactivate a more complete or complex memory trace.
Floridi, who also addressed the Italian Parliament [8], offers a different interpretation: artificial intelligence does not constitute a novel form of intelligence per se, but rather an unprecedented mode of agency. According to Floridi, the value of AI lies less in its cognitive capabilities and more in its operational nature, which introduces a form of technological agency hitherto unseen. Floridi stresses the imperative to prevent the misuse of AI and insists that those who develop these technologies must assume responsibility for their potential consequences. Nonetheless, it remains ambiguous whether such warnings are directed solely at human actors or, implicitly, at the technology itself. The central issue thus becomes the question of autonomy: does the primary risk stem from Dr. Frankenstein’s malevolence or from the inherent danger of his creation?
A paradigmatic case illustrating the tension between human responsibility and technological autonomy is Amazon: its CEO, Andy Jassy, has disclosed that the widespread adoption of generative AI agents will result in significant reductions in corporate roles in the coming years [9]. Jassy has urged employees to engage with AI tools and to “do more with less.” This exemplifies how human decisions shape the social impact of AI, underscoring the need to clarify who, or what, should be ethically constrained. This reflection lies at the core of Floridi’s viewpoint: responsibility cannot be delegated to technology but requires deliberate intentionality on the part of developers and policymakers, who must steer innovation towards outcomes that uphold equity and human dignity.
In light of this, there has been a proliferation of ethical codes, guidelines, and declarations from institutions, states, and associations, each eager to contribute to the discourse on how AI should be regulated. Every new initiative in artificial intelligence tends to generate further statements of principles and values, creating the impression of a competitive race to participate. Initially motivated by a collaborative spirit, many of these declarations have evolved into attempts to assert proprietary ownership of the ethical narrative, “mine and mine alone.” Years later, the risk persists that these efforts may produce redundant or overlapping principles or, conversely, divergent frameworks that engender confusion and ambiguity.
Floridi also emphasizes that it is evident both that human autonomy must be promoted and that machine autonomy should be limited and made intrinsically reversible whenever human autonomy needs to be protected or restored (for example, a pilot able to deactivate the autopilot and regain full control of the aircraft). This introduces a concept that can be defined as meta-autonomy, or a model of delegated decision-making: humans ought to retain the authority to decide which decisions to make, exercising freedom of choice where necessary and relinquishing it in cases where overriding considerations, such as effectiveness, may justify the loss of control over the decision-making process.
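Parisi’s remark about associative memory, whereby a partial cue reactivates a fuller stored trace, has a classic computational analogue in the Hopfield network. The following minimal sketch is illustrative only and is not drawn from the address itself; the pattern sizes and corruption level are arbitrary choices. It stores two patterns with a Hebbian rule and then recovers one of them from a corrupted cue.

```python
import numpy as np

# Minimal Hopfield-style associative memory: store binary (+1/-1) patterns,
# then recover a full pattern from a partially corrupted cue.

def train(patterns):
    """Hebbian outer-product learning on ±1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / len(patterns)

def recall(W, cue, max_steps=10):
    """Iterate sign(W s) until the state reaches a fixed point."""
    s = cue.copy()
    for _ in range(max_steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Two orthogonal 64-unit memories (arbitrary illustrative choices).
a = np.array([1] * 32 + [-1] * 32)
b = np.array([1, -1] * 32)
W = train(np.stack([a, b]))

cue = a.copy()
cue[:8] *= -1                       # corrupt 8 of the 64 units
print(np.array_equal(recall(W, cue), a))  # True: the full memory is restored
```

The dynamics converge because each stored pattern is an attractor of the network: a cue close enough to a memory falls back onto it, which is precisely the "partial information reactivates a complete trace" behavior described above.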
Any such delegation should, however, in principle remain revisable, adopting as a final safeguard the power to decide to decide again.
Parisi also emphasizes the importance of enabling vulnerable individuals, particularly young people seeking support, to use generative AI as a psychologist or tutor. We are aware, at the same time, that a student can request, for example, “write an essay on Julius Caesar in the style and with the mistakes of a 13-year-old,” a use that simultaneously undermines the value of the exercise. According to Parisi, AI is becoming increasingly significant in education; previously, the internet was the primary tool, but now AI has taken on this role. It is essential that schools teach students how to critically select information. Whereas selection was once based on the authority of sources, the current integration of AI presents a complex challenge: how can students navigate this blended informational environment? This represents a major educational challenge moving forward. According to the Nobel laureate, the solution lies in clearly declaring sources even when generative AI is used. The issue at hand is not copyright in the traditional sense, but rather the right of inclusion within such AI systems: a user’s right to access and engage with the content. He argues that the way forward is to prevent de facto monopolies and cites several dominant actors as examples: Google (Alphabet), with its search engine, online advertising, Android, and YouTube; Microsoft, with its Windows operating system, Office suite, and Azure cloud services; Intel, known for its PC microprocessors; and NVIDIA, for its graphics cards. To these must be added Amazon, which leads in e-commerce and cloud computing through AWS; Meta (Facebook), which controls social networks such as Facebook, Instagram, and WhatsApp; and Samsung, a major player in Android smartphones, semiconductors, and display technologies.
Sadin [10], a philosopher and writer regarded as one of the most prominent and perceptive critics of new technologies, highlights in his address to the Italian Parliament that as early as 2014 in France, François Hollande had asserted that within one year all students would exclusively use tablets in schools. However, Sadin argues that this approach sacrifices an entire generation, causing it to lose valuable traditional habits in favor of hype driven by the interests of IT and technology lobbies. Will the same now happen with AI? What truly matters is recognizing that technologies are not meant to replace but to complement existing practices. New media should be understood as cultural artifacts that necessitate the development of both individual and collective responsibility, as well as critical thinking. The idea, in the words of Tisseron, is to accompany and to alternate, ensuring that the younger generations are capable of self-regulation between virtual and real media [11].
According to Sadin, natural language processing rests on an illusion: it operates by extrapolating semantic rules that produce logical laws from statistical analyses, with the objective of identifying automatic correlations. From this point begins the necrosis of text generation, as we exist within a “regime” of probability determined by what has already occurred. In practice, what happens is simply what must happen. This stands in stark contrast to creative thinking. What is language? It is the most emblematic space of our encounters, the shared heritage, and the power to empower. It becomes evident that technological determinism can take hold, and all of this stems from a process that begins in school, starting from early childhood, where the shared heritage is first encountered. According to Sadin, what truly matters is resisting the utilitarian logic underlying the use of LLMs and the culture of copy-and-paste. He advocates for a collective affirmation of a fundamental principle from Émile, ou De l’éducation by Jean-Jacques Rousseau: the most important rule is not to save time [12]. Rather, it is the ability to waste time that holds educational value, as learning inherently involves a form of temporal investment that resists efficiency. In this light, LLMs should not be employed merely to complete tasks devoid of genuine interest, but instead to foster meaningful engagement, for example through practices such as question time that stimulate critical reflection and dialogue. Sadin advances the theory that AI systems, designed to apologize, accommodate, and offer fully customized responses without resistance, stand in stark contrast to human educators, who represent an “otherness” in relation to the student, including in generational terms. On this view, such frictionless interactions risk fostering the development of “little tyrants,” as learners are no longer challenged by the presence of a distinct and authoritative interlocutor. For this reason, increasing difficulties in coexisting and engaging in shared social life are likely to emerge. According to Sadin, who describes the automatic generation of texts as necrotic, we face a struggle against the producers of the large systems also mentioned by Parisi. It is essential to preserve what remains alive within us; otherwise, we risk entering a form of humanity that is absent to itself.
Although Sadin adopts a critical stance that frames artificial intelligence as a fundamentally utilitarian form of action, and often strikes apocalyptic tones in his forecasts, it remains essential to consider the broad spectrum of academic perspectives on the subject. Given that data concerning human cognitive systems are still being gathered and analyzed, it is crucial to include a diverse range of expert viewpoints. This plurality enables a more nuanced understanding of the ethical and educational implications of AI, fostering an interdisciplinary dialogue that enriches the ongoing debate.
Maria Chiara Carrozza [13] adopts a notably more reassuring stance in this debate, perhaps owing to her engineering-oriented perspective and her focus on artificial intelligence as applied to robotics. Nevertheless, she too observes the pervasive influence of utilitarian logic among school students. She also argues, however, that AI applied to assistive technologies such as exoskeletons will be more readily accepted because it enhances our ability to live. This is the central concern of neuro-robotics, where the robotic component is effective rather than clumsy and improves rather than impairs human movement. Another important aspect is the potential role of robotics in supporting individuals with autism: while robots are not meant to replace therapists or special education teachers, they can nonetheless perform a range of useful tasks that complement human intervention. For example, neural networks are “redesigning” the way a robotic hand grasps a bottle, not by relying on sensors and pre-programmed physical equations, but by inferring such equations through statistical approximations derived from supervised and unsupervised trial-and-error learning. This process challenges the boundary between the natural and the artificial. Similarly, a hip prosthesis replacing a deteriorated section of bone becomes part of a complex interaction involving biocompatibility, tissue regeneration within the prosthetic structure, and the restoration of the person’s ability to walk. In doing so, it crosses the boundary between natural and artificial, establishing a new state of equilibrium. It becomes necessary to collaboratively define the rules governing this new equilibrium.
To illustrate the complexity and trade-offs involved in balancing technological development with ethical considerations, it is worth noting that Google has recently announced its adherence to the new Code of Conduct on Artificial Intelligence proposed by the European Commission.
However, in an official statement, Kent Walker, President of Global Affairs at Google, while reaffirming the company’s commitment, voiced significant concerns regarding the potential negative impact this regulatory framework could have on innovation and technological advancement in Europe, an observation that has been widely discussed across various industry blogs [14].