Preprint
Article

This version is not peer-reviewed.

Human-Machine Teams: Advantages Afforded by the Quantum-Likeness of Interdependence

Submitted:

06 August 2024

Posted:

08 August 2024


Abstract
Big data is considered to be the solution to many complex problems, but its application appears to be relegated to the problems that are factorable (e.g., the tensor-like operations which reflect separable elements), not the non-factorable data from teamwork, team intelligence, and possibly not individual consciousness. In contrast, the quantum likeness of interdependence is set apart from the standard science of teams by recognizing a measurement problem between a team and its individual teammates, and between individual beliefs and actions. Teams that concern us are based on artificial intelligence (AI) and machine learning (ML) systems, which we find lacking for two reasons. First, neither AI nor ML models the human's timeless need for debate to solve a problem at the social level when facing uncertainty or complexity, where the debate of proposed actions serves to establish the boundaries of a problem in reality, no matter how complex or uncertain. Second, at the individual level exists the illusion of a unified reality propagated by a bifurcated brain that we assume is split to determine a context with narrative and bounded spaces. We hypothesize that the quantum-likeness of interdependence applied to autonomous human-machine teams suggests that at the social level, debate models an individual's bifurcated brain, providing advantages for human-machine teams in freer systems over those oppressed by command and control systems (e.g., authoritarian regimes). Human-machine teams are coming, but they are not yet available, forcing us to rely on data from human systems. We close with plans for future research.

1. Introduction

The over-arching goal of our research is to advance the science of human-machine teams performing autonomously in the open while facing uncertainty. This means that, as recommended by the National Academy of Sciences [1], we seek results that apply in the open over those obtained in the laboratory. In addition, recently, two articles have appeared in the journal Science’s Policy Forum that have raised alarms about the potential risks caused by artificial intelligence (AI), the first article indicating the possibility of “existential risks from advanced artificial agents" [2].
In the Policy Forum’s second article in Science, Bengio and colleagues [3] make the following arguments. First, they recognize that there are many ways that AI can help humanity, but, second, they are concerned that AI systems:
“can be replicated by the millions ... [that] many risks could soon be amplified ... [to] pursue undesirable goals ... [that humans] may lose control of autonomous AI systems ... [with] potential to act and develop ideas autonomously, progress explosively, behave in an adversarial manner, and cause irreversible damage ... [and that AI systems] could culminate in a large-scale loss of life ... [and the] extinction of humanity."
We do not make light of these risks; instead, we argue that the essential tension [4] across a free society transmitted by interdependence helps to manage existential risks by also conveying relevant countering information that constructs an equilibrium [5], by increasing the power of teams, by increasing innovation, and by increasing safety, viz., by gaining decision advantages. For example, with interdependence transmitted across all social levels, Grygiel [6] argues that
“A well-ordered state arises out of a healthy civil society ... that ... keep selfish instincts of individuals in check.”
In comparison, for teams operating under authoritarian regimes, suppression impedes or stops interdependence, creating disadvantages; this is our hypothesis.
Self-driving AI labs are coming into existence. From the editorial by Service [7]: six automated laboratories overseen by a cloud-based AI system successfully produced new laser materials; the system operated somewhat like a “symphony," with “The main hurdle became shipping compounds around the world in time."
The authors of the self-driving labs suggest the formation of a team overseen by humans [8]:
“the integration of multiple modules into intricate experimental workflows and the parallelization of experimental tasks across multiple sites—rely on distributed experimentation as the key factor for accelerated materials discovery. However, the implementation of distributed experimentation necessitates a central, readily accessible platform with clearly defined standards for communication, data transfer, and experiment planning … This platform must also be adaptable to integrate with the specific preexisting infrastructures at each site, to account for the logistics of interdependent experiments, and to accommodate asynchrony between sites … to minimize disruptions and enhance overall efficiency. … Eventually, driving the decentralized experimental engine with the developed ML models required a central hub for storing experimental data, overseeing the logistics of interdependent1 asynchronous experiments, and facilitating informed decision-making for future experiments.”
These AI self-driving labs form teams, loosely construed, but the information passing between the teams to coordinate their interdependent activities is Bayesian (contingent on partitioned sets), not self-coordinating interdependence between the labs. And yet, it is a start. That is why we use data from human teams, organizations and nations in this study.
Other problems exist with AI and ML. From an engineering perspective [9], “machine learning models are characterized by a lack of requirements specification, lack of design specification, lack of interpretability, and lack of robustness." More problems include data quality, overfitting, etc. In addition, data dependency impedes generalization to new problems in new contexts outside of the training data; and ML relies on reinforcement learning (e.g., [10]), which builds associations, not the critical thinking required by debate [11]. However, the critical problem heretofore unchallenged is that machine learning models rely on factorable differences to align the interpretation of a word(s) or image(s) with its real context; e.g., [12]:
“The term tensor is one that comes up a lot in the context of machine learning. While in mathematics a tensor has a more rigorous definition, in the context of NNs,2 tensors describe n-dimensional arrays of data, flowing through the network. … e.g., a matrix multiplication or a convolution, performed on any number of input tensors and resulting in a single output tensor” (p. 3).
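The quoted point, that NN “tensors" are simply n-dimensional arrays combined by factorable operations such as matrix multiplication, can be illustrated in a few lines; a minimal sketch with toy arrays of our own choosing:

```python
# Sketch: "tensors" in the NN sense are n-dimensional arrays, and a layer
# applies a factorable operation (here, matrix multiplication) to them.
import numpy as np

inputs = np.array([[1.0, 2.0]])        # a 1x2 input tensor
weights = np.array([[0.5, -1.0],
                    [0.25, 0.75]])     # a 2x2 weight tensor

# A single matrix multiplication yields a single output tensor:
# [[1*0.5 + 2*0.25, 1*(-1.0) + 2*0.75]] = [[1.0, 0.5]]
output = inputs @ weights
print(output.shape)  # (1, 2)
```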
A brief review of our prior findings [13] leads to case studies and a hypothesis that we propose at the end of the Introduction. Previously, with data from the Middle East and North Africa (MENA) nations collected by the United Nations as part of its Human Development Index,3 and with innovation data provided by the World Intellectual Property Organization’s Global Innovation Index,4 we concluded that interdependence among a team’s members, an organization’s members, or a nation’s citizens was significantly associated with a nation’s freedom, the overall education of its young, and the innovation experienced by the nation as a whole, led by Israel among the MENA countries [14]. For this new study, in the methods section, we hypothesize that authoritarians place their nations at unsafe disadvantages by oppressing interdependence, the quantum-like resource used by humans to organize themselves at varying social levels, including teams and organizations, where the “essential tension" from interdependence not only promotes the innovation we have already found, but also, as in our proposed hypothesis, increases decision advantages in general.
We include in our review the knowledge that is known about the human interaction and that can be applied to autonomous human-machine teams. As a result of our review, like the Academy of Sciences [1], we explain below why we have begun to conclude that not only is very little known about what occurs in the interaction, but also that a gap exists in the knowledge of human interactions in general, and, worse, that the gap appears to be one that cannot be bridged ([15]; also [16]). Counterintuitively, the best way to control interdependence is to allow interactants to be as free as possible to cross-check themselves [6].

1.1. Background: Mathematics

To capture this gap in knowledge, we begin with Schrödinger [17], who described the phenomenon of quantum entanglement as one state dependent on another, such that the best possible knowledge of “a whole does not necessarily include the best possible knowledge of all its parts," caused by a loss of information among the interacting parts of a team that depend on each other (in our case, teammates). At the quantum level, according to Zeilinger [18], we cannot know what occurs in a state of superposition. To model the same situation in a team, we model it with subadditivity. But before we model a team, we model subadditivity among classical sets in Equation (1). If a set function, $\mu$, has countable subadditivity on a countable collection of sets where $\mu$ is defined, then:
$$\mu\left(\bigcup_{k=1}^{n} A_k\right) \leq \sum_{k=1}^{n} \mu(A_k). \quad (1)$$
As a simple example of Equation (1), let the set function be the standard discrete probability measure. The sample space consists of the integers 1 through 10, and we have subsets $A_1 = \{1\}$; $A_2 = \{2, 4, 6, 8, 10\}$; and $A_3 = \{3, 4, 5, 6, 7, 8, 9\}$.
Then $\mu(A_1) = 0.1$; $\mu(A_2) = 0.5$; $\mu(A_3) = 0.7$. Therefore,
$$\mu(A_1 \cup A_2 \cup A_3) = 1, \quad \text{whereas} \quad \mu(A_1) + \mu(A_2) + \mu(A_3) = 1.3.$$
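The subadditivity example above can be checked directly; a minimal sketch in Python (the function name `mu` is ours):

```python
# Sketch of the subadditivity example for Equation (1), using the standard
# discrete probability measure on the sample space {1, ..., 10}.
from fractions import Fraction

sample_space = set(range(1, 11))

def mu(subset):
    """Discrete uniform probability measure on the sample space."""
    return Fraction(len(subset & sample_space), len(sample_space))

A1 = {1}
A2 = {2, 4, 6, 8, 10}
A3 = {3, 4, 5, 6, 7, 8, 9}

union_measure = mu(A1 | A2 | A3)            # measure of the union
sum_of_measures = mu(A1) + mu(A2) + mu(A3)  # sum over the parts

print(union_measure)    # 1
print(sum_of_measures)  # 13/10
assert union_measure <= sum_of_measures     # countable subadditivity holds
```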
To convey the information about subadditivity from classical systems, we use Shannon entropy, $H$, for two discrete, independent random variables, $X, Y$, where $H(X, Y)$ is their joint entropy:
$$H(X, Y) \leq H(X) + H(Y). \quad (2)$$
Equations (1) and (2) can be used to model statistical independence ($H(X, Y) = H(X) + H(Y)$). Classical systems are often composed of independent parts. As an example of independent parts, should a tire on a car go flat on a road trip, replace the tire and continue on the trip. Another example is two unused phones: if one phone breaks, is replaced, and is then used, the users are oblivious to its replacement. Shannon’s contribution is the interdependent communication (mutual information), $I(X, Y)$, between two phones when in use:
$$I(X, Y) = H(X) + H(Y) - H(X, Y). \quad (3)$$
However, the parts of a phone are separable. A phone can be disassembled, and reassembled. It is a classical system.
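Equation (3) can be exercised on a small joint distribution; a minimal sketch, with toy probabilities of our own choosing (two correlated “phones" X and Y):

```python
# Sketch: Shannon entropies and the mutual information of Equation (3)
# for a toy joint distribution over two "phones" X and Y.
from math import log2

# Joint distribution p(x, y); the values here are illustrative only.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def H(dist):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Marginal distributions of X and Y.
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

I = H(p_x) + H(p_y) - H(p_xy)   # Equation (3)
print(round(I, 3))               # positive: the two phones share information
assert I >= 0
```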
Measurement theory in classical mechanics differs from quantum mechanics because measuring a classical entity has no effect on the other elements of a combined entity when they are independent of each other. In a review of the science of entanglement posted for the three Nobel Physics Laureates in 2022,5
“That a pure quantum state is entangled means that it is not separable ... being separable means that the wave function can be written as
$$\psi(x, y) = \psi_1(x)\,\psi_2(y) \ldots "$$
The National Academy of Sciences (see p. 12 in [1]) reported that the “performance of a team is not decomposable to, or an aggregation of, individual performances.” Classical systems that model independent parts cannot reproduce the effect reported by the Academy, that is, the dependent interactions among the members of a team involved in synergy (increasing a team’s power; e.g., in group dynamics, see Lewin’s whole is greater than the sum of its parts, in [19]), or in its opposite (e.g., disunity, discord, divorce). For synergy, when a team’s members are interdependent, we use subadditivity to model a whole that has become greater than the sum of its parts (for Systems Engineering, see [20]).
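Separability in the discrete case can be illustrated numerically: a separable joint state is an outer product (a rank-1 array), while an entangled state does not factor. A minimal sketch with toy amplitudes of our own choosing:

```python
# Sketch: a separable discrete "wave function" psi(x, y) = psi1(x) * psi2(y)
# is an outer product, hence rank 1; an entangled state is not factorable.
import numpy as np

psi1 = np.array([0.6, 0.8])           # toy single-system amplitudes
psi2 = np.array([1.0, 0.0])
separable = np.outer(psi1, psi2)      # psi(x, y) = psi1(x) * psi2(y)

# A Bell-like state: no product psi1(x) * psi2(y) reproduces it.
entangled = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

print(np.linalg.matrix_rank(separable))  # 1 -> factorable
print(np.linalg.matrix_rank(entangled))  # 2 -> not factorable
```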
For dependent quantum systems and, we assume, quantum-like systems, with $\mathcal{H}$ now as the Hilbert space of the system, with $\rho$ as the density matrix, and with $\rho_{AB}$ as the density matrix of a bipartite system where $\rho_{AB} \in D(\mathcal{H}_A \otimes \mathcal{H}_B)$, and with entropy, $S$, then ([23])
$$S(\rho_{AB}) \leq S(\rho_A) + S(\rho_B). \quad (4)$$
Equality holds iff
$$\rho_{AB} = \rho_A \otimes \rho_B, \quad (5)$$
in which case
$$S(\rho_{AB}) = S(\rho_A \otimes \rho_B) = S(\rho_A) + S(\rho_B). \quad (6)$$
If the parts of a team are factorable and can be represented by a tensor, then equality in Equations (4), (5) and (6) governs. However, if the parts are not factorable, as implied by the Academy, in the quantum case the joint entropy vanishes. To model the Academy’s claim, we make the same assumption for the quantum-like case of a team to get the joint entropy of an interdependent bipartite system, i.e.,
$$S(\rho_{AB}) = 0. \quad (7)$$
Equation (7) not only models what the Academy reported, but it also means that we cannot know the effect of adding a new member to a team until the entropy, S, increases (indicative of a poorly fitting choice) or decreases (indicating that a choice of a new member fits with existing teammates). If the result is a good fit among teammates, then interdependence reduces the subadditivity of entropy to zero.
How can we use Equation (7) to model a team? First, we represent the members of a team with $N$ degrees of freedom ($dof$), where $H_A$ is the classical information produced by the whole, and $a_n$ is the information from each of the whole’s parts. If the team is a group of disconnected, uncoordinated, separated participants, then:
$$H_A = \sum_{n=1}^{N} H(a_n) = H(X_1) + H(X_2) + \ldots + H(X_N). \quad (8)$$
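Equation (8) can be checked with a toy calculation: independent binary parts contribute additive entropy, whereas fully interdependent parts, anticipating Equation (7), carry far less joint information. A minimal sketch under our own toy model:

```python
# Sketch: joint entropy of N binary parts, independent vs. fully interdependent.
from math import log2

N = 4  # degrees of freedom

def H(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Independent fair bits: 2**N equally likely joint states -> H = N bits, Eq. (8).
independent = [1 / 2**N] * 2**N

# Fully interdependent bits (all copies of one bit): 2 joint states -> H = 1 bit.
interdependent = [0.5, 0.5]

print(H(independent))     # 4.0 bits: information adds across the parts
print(H(interdependent))  # 1.0 bit: interdependence collapses the information
```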
There are two problems with Equation (8). First, it does not explain the effect of Equation (7). Second, it affords no advantage to teams; i.e., there is no power gained from teamwork. For the first problem, when a group comes together as a team, the degrees of freedom among the interacting parts of its whole are reduced, which also reduces its information in accord with Equation (7). For the second, there is a finite amount of free energy available to a team to operate. If the team is disunited, it wastes its free energy on the team’s structure (e.g., divorce). In contrast, if the team’s structure is unified and stable, its free energy becomes available to operate the team to be more productive than the sum of its members, more able to accomplish a mission, more capable of serving a function; in that case, we represent the whole, $A$, as the structure of a team. When the humans controlling the team are able to build and maintain the structure as a unified, cohesive, and stable whole, the entropy, $S$, produced by its structure, $SEP_A$, decreases, allowing the team to shift proportionately some (a less stable team structure) or most (a highly stable team structure) of its limited available free energy from keeping the team together to the team’s productivity, increasing the relative power of the team. For that to happen, structural entropy production must reduce, in the limit becoming:
$$S = \lim_{dof \to 1} \log(SEP_A) = 0. \quad (9)$$
Equation (9) explains how Equation (7) happens. The loss of degrees of freedom precludes the information from being available to “decompose" the “performance of a team ..." [1]. But before proceeding, what can social science offer to help us?

1.1.1. Background: Measurement Theory

The Failure of Social Science. Teams have been studied for decades [15], but with little progress that can be applied to human-machine teams (e.g., no equations exist in the 2015 report by the National Academy of Sciences [21], preventing the generalization of their findings to machine teammates). Individuals have been studied for over a century [22], but that has not prevented the failure of concepts in the social sciences [24] (from among the examples of failed concepts for individuals, see self-esteem, in [25]; implicit racism, in [26]; and for the retracted scale for honesty, see [27]).6
Our program of research began with an exploration of the validation failure of self-reported concepts in surveys or questionnaires, for example, with self-esteem. In 1995, self-esteem was hailed by the American Psychological Association (APA) [28]:
“Although, relatively little is known about self-esteem, it is generally considered to be a highly favorable personal attribute, the consequences of which are assumed to be as robust as they are desirable. Books and chapters on mental hygiene and personality development consistently portray self-esteem as one of the premier elements in the highest levels of human functioning . . . Its general importance to a full spectrum of effective human behaviors remains virtually uncontested.7 We are not aware of a single article in the psychological literature that has identified or discussed any undesirable consequences that are assumed to be a result of realistic and healthy levels of personal self-regard."
In 2005, however, Baumeister and colleagues [25] found no validity for the self-esteem concept; i.e., it was not related to academic performance, nor to performance at work. From our perspective, a critical subtlety is involved. Self-esteem is significantly associated with a myriad of self-reported concepts, not only to self-reported academic and self-reported work performance, but also to self-reported depression, suicidal ideation, etc. But, critically, the concept of self-esteem failed to capture the real-world behavior that the APA had claimed. Unlike interdependence, we suspect that cognitive concepts that fail to become reality in the field are disembodied, under control of the brain’s left hemisphere [29], possibly a statistical artifact [16].
The problem of replication remains in 2024. From Psychology Today [30],
“The vibrations of the “replication crisis” continue to be felt throughout the social sciences, and particularly within psychology. Being able to replicate findings is among the most fundamental principles of science, and the inability to replicate a high volume of published research represents an ongoing challenge to the field ... Numerous theories have been suggested as to why this is the case within psychology, from the maleficent, such as factors centering around the moral choices of researchers, to the more benign, in the form of ideas such as the complexities of the human being rendering replication less reliable. ... [but] Rather than effectively scrutinize this foundation, it has been more convenient for stakeholders in the field to push the narrative that it was founded on scientific rigor."
Replication, however, is only part of the problem. Generalization is much larger and more serious. To solve this problem, assume that worded beliefs and concepts are mostly constructed by an individual’s left brain hemisphere [31], that spatial representations are mostly constructed by the right brain, that the phenomenon of unified awareness is an illusion, and that the two brain halves when interdependent are able to navigate in reality [29] by finding an equilibrium [5] to counter the tension between the two interpretations of reality (viz., the narrative and the spatial) to determine each context entered [34]. This assumption led us to conclude that Belief A when completing a self-reported questionnaire and Action B described in the questionnaire can be orthogonal to each other:
$$(\text{Belief}_A) \cdot (\text{Action}_B) = A \cdot B = |A|\,|B| \cos 90^{\circ} = 0. \quad (10)$$
We found this equation to represent the case in a study of US Air Force pilots who were being educated in the classroom about the skills needed for air combat maneuvering, to find that the correlation between their education of air-to-air combat and their actual skills in air-combat was zero, unlike air-combat training, which was significant (reviewed in [13]).
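Equation (10) amounts to the vanishing inner product of perpendicular vectors; a minimal numerical sketch with vectors of our own choosing:

```python
# Sketch: two orthogonal vectors have zero dot product, as in Equation (10).
import math

A = [3.0, 0.0]   # "Belief A" along one axis
B = [0.0, 4.0]   # "Action B" along a perpendicular axis

dot = sum(a * b for a, b in zip(A, B))
norm = lambda v: math.sqrt(sum(x * x for x in v))

# |A||B| cos(90 deg) matches the direct dot product, up to floating point.
print(dot)                                             # 0.0
print(norm(A) * norm(B) * math.cos(math.radians(90)))  # ~0.0
assert dot == 0.0
```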
The measurement problem: Based on our arguments above, the evidence, and Equation (7), we conclude that the measurement of a state of interdependence disrupts that state, producing Shannon information, known as independent and identically distributed random data (i.e., i.i.d. data; in [38]). Of greater importance, the Shannon information captured from any social or team interaction is i.i.d. data that, by definition, cannot reproduce the original state of interdependence (see also [39]).
That is why measuring a quantum system affects the system measured. But, we argue, that is also true with quantum-like individuals, teams and organizations, resulting from two non-commutative observables that cannot be measured exactly simultaneously; specifically, when two operators, A and B are interdependent, the result is Equation (11) below. To set the stage for a debate, if we assume the existence of two matrix operators, A and B, that are incommensurable, non-commutative or interdependent, with C as a constant, then
$$[A, B] = AB - BA \geq C. \quad (11)$$
Equation (11) reflects Nash’s [5] idea about the existence of an equilibrium between countering views. Given Equation (11), and given a team controlled to be in a state of interdependence when its interactions among teammates remain coherent, modeled by Equations (7) and (9), we assume that the available energy saved while coherent can be applied to a team’s productivity, where the maximum available free energy saved from a team’s structure and applied to a team’s productivity produces maximum entropy production (MEP); then:
$$\Delta SEP \cdot \Delta MEP \geq 1. \quad (12)$$
Equation (12) represents our equation for the power gained by forming a group of individuals into a team.

1.2. Case Studies

We test Equation (12) in six (6) case studies. Control requires the intelligence inherent in a team’s interactions to make adjustments on the fly, in real time, that maintain the coherence among its teammates (namely, intelligence arises among a team’s interactions [33]; as our guide, see [35]) by maximizing the team’s interdependence, allowing the team to maximize its performance, no matter its size [36], which we identify by its production of maximum entropy (MEP), or by exposing a vulnerability in another team’s structure (e.g., for the case of Boeing, see below; also, see [13]).
We assume that control means that the state of interaction remains in a state of either coherence or decoherence, the two producing broadly different products. From Liu and colleagues [52] addressing quantum coherence,
“with reactants prepared in an entangled nuclear spin state ... [we] measured the outcome to distinguish between coherent and incoherent processes ... If coherence is maintained, it will become evident in the final-product ..."
At the team level, if we assume that interdependence produces quantum likeness, we provide the first two case studies.
Two examples of organizational success and failure. We present a case study of two organizations: a highly successful SpaceX (i.e., with the highest MEP) and a dysfunctional Boeing (with the lowest MEP). To better understand what was happening inside SpaceX and Boeing: recently, a NASA engineer recalled visiting SpaceX and finding its atmosphere “like a frenzied graduate school, where all of the employees were being pulled in different directions," seconding what Cummings [36] had found for the most productive science teams. In contrast, the visiting engineer found that Boeing was struggling to keep its Starliner on schedule.
Based on this brief case study, by oppressing interdependence, mature organizations can lose their way or falter, like Boeing, which has been unable to compete with the much younger SpaceX. After the Space Shuttle retired, NASA paid Boeing $4.2 billion to develop a “commercial crew" transportation system named “Starliner" to reach the Space Station, and paid its competitor, SpaceX, a significantly lesser amount. SpaceX won the race with its Dragon capsule; Boeing lost badly [37]:
“With Boeing’s Starliner spacecraft finally due to take flight this week with astronauts on board, we know the extent of the loss, both in time and money. Dragon first carried people to the space station nearly four years ago. In that span, the Crew Dragon vehicle has flown thirteen public and private missions to orbit. Because of this success, Dragon will end up flying 14 operational missions to the station for NASA, earning a tidy fee each time, compared to just six for Starliner. Through last year, Boeing has taken $1.5 billion in charges due to delays and overruns with its spacecraft development."
Four more examples of success and failure. If nothing can be known inside of quantum superposition until a measurement occurs ([18]; [52]), and if a similar quantum likeness is true of teams, then random choices apply to teams, whether in business or marriage. To gain Shannon information, we can either interrupt an ongoing state of interdependence (e.g., an interaction), or we can draw conclusions about the state of an interaction once it has ended or by looking at its byproducts. Consider the results of good and bad businesses, and good and bad marriages. For good team outcomes, Cummings [36] found in his study of the best scientific teams that productivity increased with the size of a team, but only when fully interdependent. For bad business outcomes, one way to fail is for businesses to respond too slowly to prevent the threats to their businesses [50]. For marriages, the findings show that good marriages lead to children with significant advantages in life compared to kids from poorly functioning parental relationships, especially from marriages that end in divorce [51]. We conclude that for a team constituting a business or a family, coherent structures produce the best outcomes by reducing the available energy consumed by the structure, providing the maximum energy available for the function of the team, producing (relative) maximum entropy (MEP).
To recap, by reducing information with fewer degrees of freedom, entanglement at the quantum level is a mathematical model of a merger that, we argue, acts as a quantum-like model of interdependence at the team or organizational level. When successful, organizational mergers build, among other things, new organizations with a series of teams by making them mutually dependent upon each other; that dependence reduces the production of entropy by the newly merged structure’s arrangement, and reduces the information derivable from it, replacing the need for knowledge of how to acquire a new team member with random selection in a trial-and-error process instead of with logic [13]. As Christensen and his team found [49], the end result of mergers is often poor, no better than 50-50, supporting our prediction of a random process. While we need trial and error to build coherent teams, and even though we cannot know what is occurring inside a coherent team, if it remains coherent, its products are better and can be measured in its output, our hypothesis.

2. Method

2.0.1. Hypothesis

The need for debate to resolve complex problems when facing uncertainty suggests that the bifurcated brain connected via the corpus callosum is the means of control for the individual and, by generalization, for human teams, tribes and nations, and, by induction, it should be for intelligent machines participating as part of human-machine teams.
Based on the value of debate, we hypothesize that authoritarians place their nations at a decision disadvantage by oppressing interdependence, the quantum-like resource used by humans to organize themselves at varying social levels, including teams and organizations, where the “essential tension" from interdependence improves social function by providing a decision advantage in general.
In particular, we focus on eight nations in highly competitive situations, either in outright conflict or nearing conflict (Iran; Israel; China; Taiwan; North Korea; South Korea; Russia; Ukraine).
The data come from open sources (see the data we used in Table 1). We use freedom scores; gross domestic products; environmental protection scores; and corruption perceptions scores. These sources are as follows. Freedom Scores were found at Freedom House.8 Population and GDP scores were found at the CIA World Factbook.9 Environmental Protection Index scores were found at Yale.10 Corruption Perceptions Index scores were found at Transparency International.11
Table 1. Raw data, rounded to two decimals.
Country Freedom GDP N EPI CPI
Iran 11 4.1E11 8.6E7 42 24
Israel 74 5.2E11 9.8E6 48 62
China 9 1.9E13 1.4E9 36 42
Taiwan 94 7.6E11 2.4E7 50 67
N. Korea 3 4.8E10 2.6E7 30 17
S. Korea 83 1.7E12 5.2E7 51 63
Russia 13 4.0E12 1.4E8 46 26
Ukraine 49 3.8E11 3.6E7 55 36
Subtotal 336 2.6E13 1.8E9 356 337

3. Results

First, we constructed a comparison table (Table 2) of advantages based on the information from the eight nations targeted (data from Table 1).
In Table 3, we used Kullback-Leibler (K-L) divergence, an information distance metric, to measure the relative entropy, or difference in information, represented by two distributions. It can be thought of as measuring the distance between two data distributions, showing how different the two are from each other. Using the same data in Table 1, we constructed a K-L test of relative entropy based on freedom scores (the results are in Table 3; to further its explanation, and as an example for interested readers, we calculated the K-L results for Russia and posted them in footnote 12).
Finally, in Table 4, we looked at the combat fatality ratios of the two open conflicts (Iran’s Gaza versus Israel; Ukraine versus Russia).
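The K-L divergence used for Table 3 can be sketched generically; this is our illustration of the metric itself with toy distributions (the paper's per-country bookkeeping is given in footnote 12):

```python
# Sketch: Kullback-Leibler divergence D(P || Q) = sum_i p_i * log(p_i / q_i),
# the relative-entropy "distance" between two discrete distributions.
from math import log

def kl_divergence(p, q):
    """Relative entropy in nats; p and q must each sum to 1."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.70, 0.10, 0.10, 0.10]

print(kl_divergence(uniform, uniform))  # 0.0: identical distributions
print(kl_divergence(skewed, uniform))   # > 0: the farther apart, the larger
```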
First, in Table 2, we review the relative advantages among the eight targeted nations based on raw data for Freedom Scores, Population and Gross Domestic Product (GDP) scores (per capita), Environmental Protection Index (EPI) scores, and Corruption Perceptions Index (CPI) scores. For example, in Table 2, the “11.10" entry means that Israel’s GDP per capita is 11.1 times larger than Iran’s.
Table 2. Advantages, Raw Scores: Freedom; GDP; EPI; and CPI.
Country Pair Freedom GDP/N EPI CPI
Israel / Iran 6.72 11.10 1.16 2.58
Taiwan / China 10.44 1.73 1.42 1.60
S. Korea / N. Korea 27.67 17.55 1.68 3.71
Ukraine / Russia 3.77 0.37 1.17 1.38
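The entries in Table 2 follow directly from the raw data in Table 1; a minimal sketch for the Israel-versus-Iran row (values transcribed from Table 1; small differences from the published entries presumably reflect rounding in the source data):

```python
# Sketch: reproducing Table 2's Israel-vs-Iran advantage ratios from Table 1.
iran   = {"freedom": 11, "gdp": 4.1e11, "pop": 8.6e7, "epi": 42, "cpi": 24}
israel = {"freedom": 74, "gdp": 5.2e11, "pop": 9.8e6, "epi": 48, "cpi": 62}

freedom_ratio = israel["freedom"] / iran["freedom"]
gdp_per_capita_ratio = (israel["gdp"] / israel["pop"]) / (iran["gdp"] / iran["pop"])
cpi_ratio = israel["cpi"] / iran["cpi"]

print(round(freedom_ratio, 2))         # 6.73 (Table 2 shows 6.72)
print(round(gdp_per_capita_ratio, 2))  # 11.13 (Table 2 shows 11.10)
print(round(cpi_ratio, 2))             # 2.58
```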
Second, using the same data (from Table 1), we review the relative entropy scores for the same eight nations; in footnote 12, we provide a sample calculation of the Kullback-Leibler (K-L) relative entropy differences in information represented by two distributions at a time (GDP versus freedom; EPI versus freedom; and CPI versus freedom). The K-L scores can be thought of as measuring the distance between two data distributions to show how different the two are from each other.12 In each of the three columns, we ranked the results from strongest (#1) to weakest (#8), and then summed the three columns to create the average rank found in the last column.
We provide the results from the perspective of national advantages of the eight nations followed by advantages from the perspective of relative entropy for the same nations based on the hypothesis that freedom is best for decisions made in as free of a market as possible (e.g., capitalism), measured by Gross Domestic Product (GDP); about protecting the environment (EPI); and about reducing corruption (CPI).
Table 3. Kullback-Leibler relative entropy to Freedom.
Country GDP-Freedom EPI-Freedom CPI-Freedom Ave. Rank
Iran 0.000 0.166 0.059 5
Israel 0.090 -0.068 -0.036 4
China 0.059 0.120 0.166 6.5
Taiwan -0.080 -0.097 -0.067 1
N. Korea 0.000 0.200 0.080 8
S. Korea -0.060 -0.081 -0.052 2
Russia 0.220 0.153 0.055 6.5
Ukraine -0.055 0.000 -0.034 3
Reading from all results in the left column in Table 3 to the next column on the reader’s right, for those countries with GDP disadvantages relative to freedom, Russia is ranked last, then Israel followed by China; next follows North Korea tied with Iran. For those countries with relative advantages, we found Ukraine, S. Korea and Taiwan to have the best relative entropy advantages.
For EPI, from worst relative entropy advantage to the best based on freedom, we found North Korea, Iran then Russia followed by China; for the better relative entropy advantages, we found Ukraine, Israel, South Korea and then Taiwan, the latter as the leader.
For CPI, from the worst relative entropy advantage to the best advantage in increasing order is China at the bottom, then N. Korea, Iran and Russia (these two were tied); for best advantage, we found Ukraine (least positive), then Israel, S. Korea, and Taiwan (best).
Finally, in Table 3, we combined the rankings of each nation from columns 2, 3 and 4 to get the overall ranking shown in column 5.
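The last column of Table 3 can be reproduced by ranking each column, summing the three ranks per nation, and ranking the sums. A minimal sketch, with the values taken from Table 3; the exact tie-breaking rule (ascending ranks, ties sharing the average rank) is our assumption, chosen because it reproduces the published column:

```python
def avg_ranks(values):
    """Ascending ranks (1 = smallest value); ties share the average rank."""
    ordered = sorted(values)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x)
            for x in values]

# Table 3 columns, ordered: Iran, Israel, China, Taiwan, N. Korea,
# S. Korea, Russia, Ukraine
gdp = [0.000, 0.090, 0.059, -0.080, 0.000, -0.060, 0.220, -0.055]
epi = [0.166, -0.068, 0.120, -0.097, 0.200, -0.081, 0.153, 0.000]
cpi = [0.059, -0.036, 0.166, -0.067, 0.080, -0.052, 0.055, -0.034]

# Sum each nation's three column ranks, then rank the sums
rank_sums = [sum(r) for r in zip(avg_ranks(gdp), avg_ranks(epi), avg_ranks(cpi))]
overall = avg_ranks(rank_sums)
print(overall)  # [5.0, 4.0, 6.5, 1.0, 8.0, 2.0, 6.5, 3.0] -- Table 3's last column
```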
Table 4 below reviews news accounts of unofficial combat fatality ratios.
Table 4. Unofficial Combat Fatality Ratios.
Country Kill Ratio Sources
Israel versus Gaza (Iran) 21.1:1 Fabian, E. (2024, 2/20), “IDF says 12,000 Hamas fighters killed in Gaza war, double the terror group’s claim. Military announces air force has carried out over 31,000 strikes since October 7, including more than 1,000 in Lebanon and dozens in West Bank,” The Times of Israel, 6/8/2024 from https://www.timesofisrael.com/idf-says-12000-hamas-fighters-killed- in-gaza-war-double-the-terror-groups-claim/
Israel versus Gaza (Iran) 10.6:1 1200+316 versus 15,000+1,000: Fabian, E. (2024, 6/28), “IDF infantryman killed in southern Gaza, bringing ground op toll to 316. Sgt. Eyal Shynes, 19, killed in Hamas sniper attack in Rafah; army says it struck terrorists hiding in schools, humanitarian zone,” The Times of Israel, retrieved 6/28/2024 from https://www.timesofisrael.com/idf-infantryman-killed-in-southern- gaza-bringing-ground-op-toll-to-316/
Ukraine versus Russia 10.2:1 Bloomberg (2024), 31,000 kia versus 315,000 kia and wounded: ”Putin Is Running Out of Time to Achieve Breakthrough in Ukraine,” Bloomberg News, retrieved 6/8/2024 from https://www.msn.com/en-us/news/world/putin-is-running-out-of-time- to-achieve-breakthrough-in-ukraine/ar-BB1nQDhl
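The ratios in Table 4 follow directly from the casualty totals quoted in the sources; a quick arithmetic check (the Ukrainian figure of roughly 31,000 killed is our assumption here, chosen because it is consistent with the table's 10.2:1 ratio):

```python
# Israel versus Gaza, from the 6/28/2024 source above:
# (15,000 + 1,000) claimed enemy dead over (1,200 + 316) Israeli dead
israel_gaza = (15000 + 1000) / (1200 + 316)
print(f"{israel_gaza:.1f}:1")  # 10.6:1

# Ukraine versus Russia (Bloomberg): ~315,000 Russian killed and wounded
# over ~31,000 Ukrainian killed (31,000 is an assumption, consistent
# with the table's 10.2:1 ratio)
ukraine_russia = 315000 / 31000
print(f"{ukraine_russia:.1f}:1")  # 10.2:1
```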

4. Discussion

In his book, The Wealth of Nations, Adam Smith [42] featured a pin factory to illustrate that an individual man could produce one pin per day, but a team of men could produce thousands. Smith’s implication underscores the power gained from teamwork, which we attribute to interdependence, but only in free countries, not in those countries under the control of authoritarian governments.
The raw data in Table 2, except for Ukraine, gives the advantage to the freest nations, the disadvantage to those nations with oppressed citizens. With the relative entropy calculated from a baseline of freedom, Table 3 provides similar results, especially with our unique finding of a dividing line between those nations with freer rather than oppressed citizens. Table 4 can be better understood from the perspective of Tables 2 and 3, if we treat Hamas in Gaza as a proxy for Iran, and if we treat Ukraine as a proxy for the United States.
In support of our hypothesis, the results in the tables demarcate the advantages (the relative power) accrued to teams and organizations that self-organize interdependently under freedom, compared with teams organized by central command in countries whose peoples are oppressed, a disadvantage caused by the lack of interdependence.
Limitations. Three cautions should be raised. First, a word about the scales. Despite the results supporting our hypothesis, they partly hinge on the scales used. GDP is roughly accurate, but Freedom, EPI and CPI are combinations of hard and perceptual data. Perceptions versus determinations are important when assessing risks, as with, for example, decisions regarding the management of military nuclear waste [43], “the existential risks from advanced artificial agents" [2], or medical decisions about smoking cigarettes.13 For example, in assessing inflation [44]: “While professional forecasters are more accurate in the middle of the inflation density, households’ expectations are more useful in the upper tail." The divergence between risk perception and risk determination was made dramatic by Slovic and his team [46], who found that perceptions regarding radioactivity were not correlated with the actual, determined health effects of radioactivity.
Second, false alarms are always a concern with Kullback-Leibler divergences, and it is a valid one. However, in our case, we began with the raw data, which we found intriguing, derived the supporting results in Table 2, and only then decided to test our hypothesis with Kullback-Leibler relative entropy.
Third, the calculations used rounded numbers, which could have produced ranking errors.
Consciousness. One of the competing theories of intelligence is integrated information theory (IIT; see [45,47]). Koch reviews consciousness through IIT, with the idea that consciousness is the ability to causally change oneself. He uses IIT to conclude that generative AI will be able to do what humans can do, except feel it. The problem we have with IIT is that its parts are separable, meaning that it does not include subadditivity, and thus cannot explain the National Academy of Sciences’ 2022 finding [1]. Specifically, IIT cannot account for interdependence.

4.1. Conclusions

The Academy’s claim is the first outside support for dependency in a team that we have modeled, supported by the assembly theory of Wong and colleagues about the theoretical model of intelligence in an alien civilization [32]. Our finding more closely aligns our model with the quantum likeness of interdependence for human-machine teams that we have developed over the years where interdependence works at the team level roughly in parallel with quantum superposition and entanglement at the atomic level.
Safety. Nash’s countering to reach equilibrium at the start of each debate by the left-right brain or in a public debate provides the critical tension necessary to determine the essence of reality, to navigate in reality, and to provide the measurement information about social violations (safety), including corruption. If machines in a human-machine team are to become the teammates of humans, machines need to be able to check themselves and others in the presence of others. The immediate byproduct of interdependent human-machine teams in free countries will be safer machines that will be less likely to put humans at existential risk, and will be significantly superior to machines operating under oppression.
We have elsewhere addressed boundaries (e.g., [13,14]). We postpone a further discussion on boundaries except to write that boundaries once established must be defended.
We addressed the bidirectional challenges in developing and managing the quantum likeness of interdependence for autonomous human-machine teams. Recent advances surrounding Large Language Models (LLMs) have increased apprehension in the public and among users about the next generation of AI for collaboration and human-machine teams. The anxieties that have grown regard the risk, trust, ethics and safety of machines operating autonomously in open environments, including unknown issues that might arise unexpectedly. These concerns represent major hurdles to the development of verified and validated engineered systems involving bidirectionality across the human-machine frontier. Bidirectionality is a state of interdependence. It requires understanding the design and operational consequences that machine agents may have on dependent humans in a team, and, interdependently, the design and operational effects that humans may have on dependent machine agents. Current discussions of human-AI interaction focus on the impact of AI on human stakeholders and on ways of involving humans in computational interventions (e.g., human factors; data annotation; approval for drone actions), but they overlook the interdependent need for a machine to intervene for dysfunctional humans. The technology is available today to begin bidirectional human-machine collaboration and autonomous human-machine teams that better protect human life now and in the future. Thus, despite the engineering challenges faced, we believe that the technical challenges associated with humans and machines cannot be adequately addressed if the social concerns related to risk, trust and safety caused by bidirectional forces are not also taken into consideration (for more, see AAAI Spring 2024).14
Four more conclusions:
1. While recognizing the value of freedom of speech, the recent article published in Science on misinformation [55] leads to the suggestion, published in the same issue, to use science to oppress opposing viewpoints (see the companion perspective in [56]). Nasty business, misinformation. To treat it, one must believe in pure, correct, even judicious treatments. From the beginning of civilization, advertisements, political claims and scientific judgments have continued unabated in a sea of good and bad information. But, in treating medical “misinformation" since the pandemic, several assertions by scientific “authorities" have themselves been debunked, causing a loss of prestige. The preview of this article by van der Linden & Kyrychenko even suggested oppression as a treatment (Science, p. 976, same issue). Nowhere do the authors reckon with the need for debate, the tool that advances science and civilization together, primarily because their model is based on individuals. In free nations with checks and balances [6], misinformation is held in check interdependently as citizens, scientists and authorities test all information in open debate.
2. In 2015, the copilot of a Germanwings commercial airliner committed suicide, killing all aboard the aircraft (see also MH 370 and China Air). In 2023, an F-35 pilot ejected from his jet, which continued to fly unattended for another 60 miles. In these cases and many others, humans have the technology to allow the machine to take over from a dysfunctional human. SpaceX is able to land its own boosters. And for some time now, the US Air Force (USAF) has allowed its fighter aircraft to take over from the pilot in the event that a pilot undergoing a high-G maneuver suffers a loss of consciousness (known as a G-LOC event). The machine’s takeover during a G-LOC event means that the machine must be able to monitor the human’s mental status, an example of a machine in a state of interdependence with its human pilot [16].
3. From Rodney Brooks [53]: “I concluded my talk encouraging people to do good things with LLMs but to not believe the conceit that their existence means we are on the verge of Artificial General Intelligence." We add to what Brooks has said by concluding that without the ability of an agent to check itself based on feedback from others and from itself, afforded by interdependence, or for a machine that is part of a human-machine team to respond interdependently in a timely manner to its teammates and to itself, human-machine autonomy will remain out of reach. Nash’s countering to reach equilibrium at the start of each debate, whether by the left-right brain or in a public debate, is critical and, we conclude, needed by machine teammates, too. To work autonomously with humans alone or in an autonomous team (e.g., G-LOC), machines need the guidance derived from internal and external debate, afforded by interdependence.
4. More importantly, consider the worries that AI machines may pose existential threats to the human race (viz., [2,3]). While those threats may one day arise from single or factorable systems (e.g., Von Neumann’s self-replicating automata; in [57]; see also [58,59]), such systems are based on one of the lowest forms of learning (i.e., reinforcement), not on interdependence, which deals with the higher forms of human learning (e.g., intelligence, cognitive dissonance, debate, innovation, science, conflict, competition, etc.). If we liken machine automata to the command-driven soldiers on present-day battlefields, then, as we have shown with our results, interdependent systems, whether human-machine or not, will have a significantly greater and more powerful decision advantage, likely by an order of magnitude or more. Further, if these interdependent systems are able to fully engage in intelligent debate, they will advance science and the human race without the need for oppression to suppress undesired speech or misinformation (viz., the intervention recommended by [56], p. 976).
To complete our review of human-machine teams, our last quote comes from Sun Tzu [54]: “Using order to deal with the disorderly ... is mastering strength." Democracy, expressed in a republican government, masters strength over authoritarian governments, kings, and gangs.

Acknowledgments

The corresponding author thanks the Office of Naval Research for funding his research at the Naval Research Laboratory where he has worked for the past seven summers (under the guidance of Ranjeev Mittu), and where parts of this manuscript were completed.

References

1. Endsley, M.; et al. Human-AI Teaming: State of the Art and Research Needs, National Academies Press: Washington, DC, 2021.
2. Cohen, M.K.; Kolt, N.; Bengio, Y.; Hadfield, G.K.; Russell, S. Regulating advanced artificial agents: Governance frameworks should address the prospect of AI systems that cannot be safely tested, Science Policy Forum, 384(6691): 36-38, 2024.
3. Bengio, Y.; Hinton, G.; et al. Managing extreme AI risks amid rapid progress: Preparation requires technical research and development, as well as adaptive, proactive governance, Science Policy Forum, 384(6698): 842-845, doi:10.1126/science.adn0117, 2024.
4. Kuhn, T. The Essential Tension, University of Chicago Press, 1977.
5. Nash, J.F., Jr. Equilibrium points in n-person games, PNAS, 36(1): 48-49, 1950.
6. Grygiel, J. Illusions of U.S. Foreign Policy, The Marathon Initiative, https://themarathoninitiative.org/wp-content/uploads/2024/05/Illusions-Final-2024-05.pdf, 2024.
7. Service, R.F. AI-driven robots discover record-setting laser compound: Automated labs spread across the globe create, test, and assemble materials, no humans required, Science, 384(6697), 2024.
8. Strieth-Kalthoff, F.; et al. Delocalized, asynchronous, closed-loop discovery of organic laser emitters, Science, 384(6697), 2024.
9. Kuwajima, H.; Yasuoka, H.; Nakae, T. Engineering problems in machine learning systems, Machine Learning, 109: 1103-1126, 2020.
10. Gupta, A.; et al. Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning, International Conference on Robotics and Automation (ICRA), London, https://www.icra2023.org, 2023.
11. Oros, A.L. Let’s Debate: Active Learning Encourages Student Participation and Critical Thinking, Journal of Political Science Education, 3(3): 293-311, 2007.
12. Brakel, F.; Odyurt, U.; Varbanescu, A.L. Model Parallelism on Distributed Infrastructure: A Literature Review from a Scientific Computing Perspective, arXiv, 2024.
13. Lawless, W.F.; Moskowitz, I.S.; Doctor, K.Z. A Quantum-like Model of Interdependence for Embodied Human–Machine Teams, Entropy, 25(9): 1323, 2023.
14. Lawless, W.F. Interdependent Autonomous Human-Machine Systems: The Complementarity of Fitness, Vulnerability & Evolution, Entropy, 24(9): 1308, 2022.
15. Jones, E.E. Major developments in five decades of social psychology, in Gilbert, D.T., Fiske, S.T., & Lindzey, G. (Eds.), The Handbook of Social Psychology, Vol. I, pp. 3-57, Boston: McGraw-Hill, 1998.
16. Lawless, W.F.; Moskowitz, I.S. Shannon Holes, Black Holes and Knowledge: The Essential Tension for Autonomous Human-Machine Teams Facing Uncertainty, Knowledge, 4: 331-357, 2024.
17. Schrödinger, E. Discussion of Probability Relations Between Separated Systems, Proceedings of the Cambridge Philosophical Society, 31: 555-563, 1935; and 32: 446-451, 1936.
18. Zeilinger, A. Experiment and the foundations of quantum physics, Reviews of Modern Physics, 71(2): S288-S297, 1999.
19. Lewin, K. Field Theory in Social Science: Selected Theoretical Papers, D. Cartwright (Ed.), New York: Harper and Brothers, 1951.
20. Walden, D.D.; Roedler, G.J.; Forsberg, K.J.; Hamelin, R.D.; Shortell, T.M. (Eds.), Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities (4th Edition), International Council on Systems Engineering (INCOSE-TP-2003-002-04), Hoboken, NJ: John Wiley, 2015.
21. Cooke, N.J.; Hilton, M.L. (Eds.), Enhancing the Effectiveness of Team Science, National Research Council, Washington, DC: National Academies Press, 2015.
22. James, W. The Principles of Psychology, Dover Publications, 1892/1950.
23. Wehrl, A. General properties of entropy, Reviews of Modern Physics, 50(2): 221-260, 1978.
24. Nosek, B. Estimating the reproducibility of psychological science, Science, 349(6251): 943, 2015.
25. Baumeister, R.F.; Campbell, J.D.; Krueger, J.I.; Vohs, K.D. Exploding the self-esteem myth, Scientific American, 292(1): 84-91, 2005.
26. Blanton, H.; Klick, J.; Mitchell, G.; Jaccard, J.; Mellers, B.; Tetlock, P.E. Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT, Journal of Applied Psychology, 94(3): 567-582, 2009.
27. Berenbaum, M.R. Retraction for Shu et al., “Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end," PNAS, 118(38), 2021.
28. Bednar, R.L.; Peterson, S.R. Self-Esteem: Paradoxes and Innovations in Clinical Practice, 2nd edition, Washington, DC: American Psychological Association (APA), 1995.
29. Marinsek, N.L.; Gazzaniga, M.S. A Split-Brain Perspective on Illusionism, Journal of Consciousness Studies, 23(11-12): 149-159, 2016.
30. Mobayed, T. Replicating Psychology’s Original Sin: Wilhelm Wundt and psychology’s "replication crisis," Psychology Today, 2024.
31. Waldrop, M.M. Can ChatGPT help researchers understand how the human brain handles language? Large language models are surprisingly good at mimicking our speech and writing, PNAS, 2024.
32. Wong, M.L.; et al. On the roles of function and selection in evolving systems, PNAS, 120(43): e2310223120, 2023.
33. Cooke, N.J.; Lawless, W.F. Effective Human-Artificial Intelligence Teaming, in Lawless, W.F., Mittu, R., Sofge, D.A., Shortell, T., & McDermott, T.A. (Eds.), Engineering Science and Artificial Intelligence, 2021.
34. Lawless, W.F.; Mittu, R.; Sofge, D.A.; Hiatt, L. Editorial (Introduction to the Special Issue): “Artificial intelligence (AI), autonomy and human-machine teams: Interdependence, context and explainable AI," AI Magazine, 40(3): 5-13, 2019.
35. Dong, D.; Petersen, I.R. Quantum control theory and applications: A survey, IET Control Theory & Applications, 4(12): 2651-2671, 2010.
36. Cummings, J. Team Science Successes and Challenges, National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science, Bethesda, MD, 2015.
37. Berger, E. The surprise is not that Boeing lost commercial crew but that it finished at all. “The structural inefficiency was a huge deal," Ars Technica, retrieved 5/7/2024 from https://arstechnica.com/space/2024/05/the-surprise-is-not-that-boeing-lost-commercial-crew-but-that-it-finished-at-all/, 2024.
38. Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Towards Causal Representation Learning, arXiv, retrieved 7/6/2021 from https://arxiv.org/pdf/2102.11107.pdf, 2021.
39. Lawless, W.F.; Sofge, D.A.; Lofaro, D.; Mittu, R. (Eds.), Editorial: Interdisciplinary Approaches to the Structure and Performance of Interdependent Autonomous Human Machine Teams and Systems, Frontiers in Physics, eBook, https://www.frontiersin.org/articles/10.3389/fphy.2023.1150796/full, 2023.
40. Greenberger, D.M.; Horne, M.A.; Zeilinger, A. Multiparticle Interferometry and the Superposition Principle, Physics Today, 46: 8-22, 1993.
41. Mann, R.P. Collective decision making by rational individuals, PNAS, 115(44): E10387-E10396, 2018.
42. Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations, University of Chicago Press, 1977 (1776).
43. Lawless, W.F.; Akiyoshi, M.; Angjellari-Dajcic, F.; Whitton, J. Public consent for the geologic disposal of highly radioactive wastes and spent nuclear fuel, International Journal of Environmental Studies, 71(1): 41-62, 2014.
44. Mitchell, J.; Zaman, S. The Distributional Predictive Content of Measures of Inflation Expectations, Federal Reserve Bank of Cleveland, Working Paper No. 23-31, 2023.
45. Koch, C. Then I Am Myself the World: What Consciousness Is and How to Expand It, Basic Books, 2024.
46. Slovic, P.; Layman, M.; Kraus, N.; Flynn, J.; Chalmers, J.; Gesell, G. Perceived risk, in Risk, Media and Stigma, London: Earthscan, 2001.
47. Albantakis, L.; et al. Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms, PLOS Computational Biology, 19(10): e1011465, 2023.
48. Rovelli, C. Seven Brief Lessons on Physics, Riverhead Books, 2016.
49. Christensen, C.M.; Alton, R.; Rising, C.; Waldeck, A. The Big Idea: The New M&A Playbook, Harvard Business Review, https://hbr.org/2011/03/the-big-idea-the-new-ma-playbook, 2011 (reprinted 2022).
50. Carroll, P.; Mui, C. Seven Ways to Fail Big, Harvard Business Review, retrieved 6/14/2024 from https://hbr.org/2008/09/seven-ways-to-fail-big, 2008.
51. AEI-Brookings Working Group on Childhood in the United States, Why Family Structure and Stability Matter for Children, Children First, Institute for Family Studies, https://ifstudies.org/blog/author/aei-brookings-working-group-on-childhood-in-the-united-states, 2022.
52. Liu, Y.X.; et al. Quantum interference in atom-exchange reactions, Science, 384(6700), 2024.
53. Brooks, R. “Rodney Brooks: Robots, AI and other stuff," blog post: Predictions Scorecard, 2024 January 01, retrieved 6/30/2024 from https://rodneybrooks.com/predictions-scorecard-2024-january-01/#comment-563287, 2024.
54. Sun Tzu, The Art of War, translated by Thomas Cleary, Shambhala Pocket Classics, Boston, 1991.
55. Allen, J.; Watts, D.J.; Rand, D.G. Quantifying the impact of misinformation and vaccine-skeptical content on Facebook, Science, 384(6699), 2024.
56. Van der Linden, S.; Kyrychenko, Y. A broader view of misinformation reveals potential for intervention, Science, 384(6699): 959-960, 2024.
57. Von Neumann, J. Theory of Self-Reproducing Automata, Burks, A.W. (Ed.), University of Illinois Press, 1966.
58. Nielsen-Cole, C. In-space manufacturing of self-replicating machines: Understanding the contexts of manufacturing (internal, external, construction, endo-construction), Tech Briefs, retrieved 7/8/2024 from https://www.nxtbook.com/smg/techbriefs/24TB07/index.php, 2024.
59. Hiermath, M. The rise of the celestial assembly line: How space factories will revolutionize space exploration and benefit earth, Tech Briefs, retrieved 7/8/2024 from https://www.nxtbook.com/smg/techbriefs/24TB07/index.php, 2024.
1. Highlighted by the authors.
2. NNs stand for the artificial neural networks used in machine learning. NNs are modeled after the biological neural networks found in animal brains.
3.
4. For a country’s innovation index, see https://www.wipo.int.
5.
6. After the discovery of fabricated data in its construction, the “honesty" scale was retracted by the editor-in-chief, Proceedings of the National Academy of Sciences.
7. Highlighted by the author.
8.
9. https://www.cia.gov/the-world-factbook/countries/nation-id/#economy (data estimated for years 2022, except 2015 for N. Korea); downloaded 5/20/2024.
10. https://epi.yale.edu/measure/2024/EPI (downloaded 5/20/2024).
11.
12. Using the data for Russia, the K-L score was calculated for GDP/N for each country, then divided by the sum of all GDP/Ns listed to get a percent; e.g., (4E12/1.4E8 = 2.9E4)/(column’s sum = 1.85E5) = 0.16 = p(x); similarly, q(x) = 0.04 for its freedom percentage, giving ROUND(0.16×LN(0.16÷0.04), 2). For Russia’s EPI K-L calculation, we got ROUND(0.13×LN(0.13÷0.04), 2). For Russia’s CPI score, we got ROUND(0.08×LN(0.08÷0.04), 2). Interpretation: the result in each cell is the relative entropy of p(x) with respect to q(x).
13. https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/index.htm
14.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.