TO KNOW THEM, REMOVE THEM: AN OUTER METHODOLOGICAL APPROACH TO BIOPHYSICS AND HUMANITIES

Set theory faces two difficulties: formal definitions of sets/subsets are incapable of assessing biophysical issues; formal axiomatic systems are either complete and inconsistent, or incomplete and consistent. To overcome these problems, reminiscent of the old-fashioned principle of individuation, we provide formal treatment/validation/operationalization of a methodological weapon termed "outer approach" (OA). The observer's attention shifts from the system under evaluation to its surroundings, so that objects are investigated from outside. Subsets become just "holes" devoid of information inside larger sets. Sets are no longer passive containers, but rather active structures enabling the examination of their content. Consequences/applications of OA include: a) operationalization of paraconsistent logics, anticipated by unexpected forerunners, in terms of advanced truth theories of natural language, the anthropic principle and quantum dynamics; b) assessment of embryonic craniocaudal migration in terms of Turing's spots; c) evaluation of hominids' social behaviors in terms of evolutionary modifications of the musculature of facial expression; d) treatment of cortical action potentials in terms of collective movements of extracellular currents, leaving apart what happens inside the neurons; e) a critique of Shannon's information in terms of the Arabic thinkers' active/potential intellects. In sum, the best way to know biophysical and humanistic systems is to remove them from our observation and tackle an outer view.

Set theory has always made significant contributions to logic, mathematics, topology, etc. But we can ask the question: what is a set? Most mathematicians adopt the naive point of view that what is meant by a set is intuitively clear (Munkres 1999). In turn, logicians provided set theory axiomatizations where each axiom expresses a foundational property of sets. However, even the best available set axiomatization, viz. the Zermelo-Fraenkel set theory (Zermelo 1930; Fraenkel et al., 1973), does not help in gauging the appropriate experimental setting for physical and biological scientific questions (Day 2012). The axiom of infinity ZF7 requires the existence of a set having infinitely many members: this is not the case with the limited number of elements included in physical and biological sets. Further, the fuzziness of concepts such as life and internal/external in biology makes the quantitative use of sets very challenging (Tozzi 2020). Think, for example, of the current dilemmas in defining the boundaries of disease (Doust et al., 2017). There are problems also for the sets and subsets of formal systems, i.e., the sets of axioms along with rules of symbolic manipulation that permit the derivation of new theorems. Computer programs can in principle use algorithms to enumerate all the theorems of the system, removing every statement that does not follow from the axioms. A set of axioms is complete if every statement (or its negation) is provable from the axioms, while a set of axioms is consistent if there is no statement such that both the statement and its negation are provable from the axioms (Siavashi 2016). Gödel's incompleteness theorems showed that: a) for any consistent formal system of basic arithmetic capable of modelling the natural numbers, there will always be statements that are true but unprovable within the system (Gödel 1931); b) the system cannot demonstrate its own consistency. 
In sum, a complete and consistent finite list of axioms can never be created for formal systems of basic arithmetic: if novel axioms are added to make the system complete, the price to pay is that the system becomes inconsistent. The earliest formulations of Gödel's incompleteness theorems concerned the arithmetic system of Principia Mathematica and Hilbert's program for the set of mathematical axioms. Since then, the demonstration of the incompleteness theorems has been extended to any effective formal system. Summarizing, we are left with two problems: a) we do not have a strict methodological definition of set to validate the description of biophysical and humanistic issues; b) a formal axiomatic system cannot be simultaneously complete and consistent: either it is complete and inconsistent, or incomplete and consistent.
In the sequel we will show how these two problems can be overcome via topological and logical arguments. We suggest a shift in investigators' interest from the object under assessment to its surrounding environment. The evaluated system turns out to be just a subset devoid of information inside a larger set. This procedure leads, as we shall see, to physical as well as biological consequences able to provide novel theoretical and methodological approaches to old problems from far-flung branches.

TOWARDS "PUNCTURED" SUBSETS
How can a set (or a subset) be distinguished from another? This question takes us back to ancient disputes concerning the principle of individuation, regarded as fundamental around 1275-1350 for solving philosophical as well as physical and theological problems (Gracia 1991). How is a thing identified as distinguished from other things? Does an (ontological, or logical, or both) criterion exist that individuates the indivisible members of the kind for which it is given? The problem is by no means remote from contemporary science and philosophy. Since Aristotle, different notions of individuation have been proposed (Regis 1976). The principle of individuation is the form for Augustine, the matter for Avicenna, the materia signata for Thomas Aquinas. Duns Scotus introduced the haecceitas, a peculiar feature of both the indistinct matter and the form which allows an object to be exactly hic et nunc, achieving its unmistakable singularity (Clatterbaugh 1972). William of Ockham claimed that there is no time for pointless discussions, since the universals are not ontological entities, but rather conceptual issues. A young Leibniz suggested that the individual is a positive metaphysical entity which is both distinguishable from other individuals and an indivisible unity (Koszkalo 2017). We will focus on a rather neglected interpretation of the principle of individuation, i.e., the double negation theory, first put forward (possibly) by Henry of Ghent. Henry holds that the cause of individuation that makes a substance individual must be doubly negative (Koszkalo 2017). The first negation concerns the divisibility of an individual: in the hierarchical Porphyry's tree of species and genera, the lowest species contains no subdividing species and is therefore called individual, which means indivisible. The second negation concerns identity: a given individual is different from another individual. 
The double negation operates both from the internal, since it removes any multiplication and diversity from a given subset, and from the external, since it excludes any identity with other subsets. In the next paragraphs, we will use this negative approach to the principle of individuality to cope with sets and subsets.
Individuality as negation: the operational approach. Take a subset x of a set U, where x stands for a formal axiomatic system (Figure 1A). If the universe of discourse x (encompassing a subset of U and a certain number of axioms) is assessed just from the "internal" standpoint, there are troubles with Gödel's theorems. If an observer evaluates the sole x and does not consider U, the observed system x and a theory concerning just the "internal" view of x with its own rules must be either incomplete or inconsistent. To reach the completeness and consistency of x, there is only one option: to look at the whole set U encompassing x. In other words, the "content" (i.e., the subset x) can be axiomatically assessed by studying its "container" (i.e., the set U). In touch with ancient claims dating back to the 12th century (Porretano 2009), the container is no longer a passive structure including its content, but rather becomes an active structure enabling the formal treatment of its content. To attain a formal axiomatic system that is complete and consistent, we need to remove the formal system itself from our analysis and to focus on its surroundings. This strategy consists of the logical operation of the complement of x, i.e., the removal of x from U. The complement of x is the set of the elements of U excluding those of x. Since the larger set U contains all the elements under study, the subset x is not analyzed from an internal standpoint, but rather from an external one. In logical language, the complement of x is achieved by turning x into ¬x (Figure 1B). In sum, the best way to guarantee the completeness and consistency of our universe of discourse (i.e., the subset x) is to remove x itself and to introduce the external set U. We will term this novel methodological weapon the "outer approach" (OA). In our outer framework, contrary to the belief of philosophers such as Giulio Cesare Vanini, the negation precedes the affirmation (Palumbo, 1878). 
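The complement operation at the core of OA can be sketched on finite sets; the toy sets U and x below are arbitrary illustrations, not drawn from the paper:

```python
# A minimal sketch of the complement operation on finite sets: the subset x
# is characterized "from outside", via the elements of U that it lacks.
# U and x are toy sets chosen purely for illustration.

U = set(range(10))       # the "container" set U
x = {2, 3, 5, 7}         # the subset (universe of discourse) under examination

complement = U - x       # ¬x relative to U: everything in U except x
print(sorted(complement))               # [0, 1, 4, 6, 8, 9]

# Knowing the container U and the complement ¬x fully determines x:
assert U - complement == x
```

The point of the sketch is that no element of x is ever inspected directly: the subset is recovered entirely from its "outside", i.e., from U and ¬x.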
The elements under examination and their topological relationships are no longer the object of our interest; rather, they become empty holes inside manifolds with genus >1 (Figures 1C-F). We could define them "punctured subsets", since the subsets under examination can be assessed in terms of holes, Betti numbers, vortices (Tozzi and Peters, 2020; Don et al., 2020). When a set inside a genus-zero manifold turns out to be a hole inside a genus-one manifold, we achieve the completeness of the content inside our "hole". It is noteworthy that a "punctured" approach to manifolds can be partially tackled by point-free geometries (Di Concilio et al., 2018), which emphasize the role of holes and voids in the quantitative assessment of scientific issues (Tozzi and Peters, 2019). To provide a working example, the occurrence of holes in video frames has been used for the recognition of objects' movements (Tozzi and Peters, 2020a). Figure 1A. Venn diagram for a subset x inside a square set U. Figure 1B. By shading the region inside the subset x, the complement ¬x is achieved. Figures 1C-F. A square set encompassing different subsets with diverse topological relationships. The orange subsets stand for the universe of discourse that the observer is analyzing at that moment. Figure 1C. The subset I intersects both the subsets E and P. Figure 1D. In this case, it is the subset P that intersects both I and E. Figure 1E. The subset I intersects the subset E and the subset P, the latter encompassing a subset where I₁ intersects both the subsets E₁ and P₁. Figure 1F. The three subsets, their reciprocal relationships and their further subdivisions are not recognizable anymore, since they have been "erased" (black areas) from the square set.

WHEN THE PRINCIPLE OF NONCONTRADICTION FAILS
To ensure the foundation of our novel methodological OA, we are required to provide formalization and validation of punctured sets. Once established that the complement of x can be achieved by turning x into ¬x, the Aristotelian logical principles become problematic. On the one hand, the identity principle is compromised, since x has been artificially turned into ¬x. This problem is easily fixed if we consider, in touch with Ockham, that a proposition entailing a contradiction encompasses two extremes that are not distinct entities, but rather just distinct terms. Since contradictions concern just the concepts in the observer's mind, there is no contradiction in the real physical/biological entity under investigation (Ockham 1991, quodlibet 1.2). On the other hand, the law of noncontradiction (henceforward LNC) does not hold anymore. This problem is difficult to tackle, since we might ask ourselves: if we turn x into ¬x, does the system preserve its consistency? The answer is negative. Nevertheless, two ways are available to restore the impaired consistency: a) The duality principle (De Morgan duality) comes into play (Kleene 1952). The members of each pair are said to be dual to each other when all the values and operations are switched simultaneously. To provide an example, the duality principle permits Boolean algebra to remain unchanged when all dual pairs (such as, e.g., 0 and 1, ∧ and ∨) are interchanged. For every proposition involving logical addition and multiplication ("or" and "and"), there is a corresponding proposition in which the words "addition" and "multiplication" are interchanged. b) Another way to restore the consistency of our system (i.e., our universe of discourse) is to use a paraconsistent logical approach, in which x and ¬x are not mutually exclusive, but rather become complementary. The principle of explosion (henceforward PEX) asserts that ex falso quodlibet, i.e., from falsehood (or from contradiction) anything follows. 
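The De Morgan duality invoked in point (a) can be verified exhaustively over the two Boolean values:

```python
from itertools import product

# An exhaustive check of the De Morgan dualities: negating a conjunction
# (disjunction) equals the disjunction (conjunction) of the negations,
# so switching 0 <-> 1 and "and" <-> "or" leaves the algebra unchanged.

for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))   # ¬(a ∧ b) = ¬a ∨ ¬b
    assert (not (a or b)) == ((not a) and (not b))   # ¬(a ∨ b) = ¬a ∧ ¬b

print("De Morgan dualities hold on all Boolean inputs")
```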
PEX states that p ∧ ¬p → q: for any statements p and q, if p and ¬p are both true, then it logically follows that q is true. Once a contradiction has been asserted, both a positive statement and its own negation can be proven true. The inference from p and ¬p to an arbitrary conclusion leads to inconsistencies in formal axiomatic systems, since it makes it impossible to distinguish truth from falsehood. To avoid the dangers of meaningless statements implicit in PEX, LNC comes to the rescue: since contradictories cannot be simultaneously true, LNC removes the inconsistency from the beginning and neutralizes PEX. Paraconsistent logics (henceforward PCLs), together with other non-classical logics such as relevance and minimal logic (Johansson 1973), suggest that the classical ex falso quodlibet is incorrect (Priest and Routley, 1989), since the inference from p and ¬p to an arbitrary conclusion does not hold (Varzi 2000). A logic is paraconsistent iff it is not the case that, for all sentences p and q, p, ¬p → q. This means that we cannot be certain of the truth of any proposition which is irreducible to PEX (Copleston 1974).
In logical terms, Autrecourt suggests that the explosive inference cannot be derived from LNC: the formula p ∧ ¬p → q does not hold true. Since p ∧ ¬p → q is false, Autrecourt rejects PEX as PCLs do (Fitch 2013).
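The failure of explosion endorsed by Autrecourt and the PCLs can be checked mechanically. The sketch below uses Priest's three-valued Logic of Paradox (LP), chosen here purely as an illustrative paraconsistent system (the text does not commit to a specific PCL):

```python
from itertools import product

# Priest's Logic of Paradox (LP), one of the simplest paraconsistent logics.
# Truth values: T (true), B ("both"), F (false); T and B are "designated".
# An inference is valid iff every valuation making all premises designated
# also makes the conclusion designated.

T, B, F = 1.0, 0.5, 0.0
DESIGNATED = {T, B}

def neg(p):
    return 1.0 - p          # ¬T = F, ¬B = B, ¬F = T

def conj(p, q):
    return min(p, q)        # conjunction takes the minimum value

def valid(premises, conclusion):
    """Check LP-validity over the two atoms p and q."""
    for vp, vq in product([T, B, F], repeat=2):
        v = {"p": vp, "q": vq}
        if all(prem(v) in DESIGNATED for prem in premises) and \
           conclusion(v) not in DESIGNATED:
            return False
    return True

# Explosion (PEX): from p ∧ ¬p, infer an arbitrary q. It FAILS in LP:
# the valuation p = B, q = F makes the premise designated but not q.
print(valid([lambda v: conj(v["p"], neg(v["p"]))], lambda v: v["q"]))   # False

# Sanity check: conjunction elimination (from p ∧ q, infer p) stays valid.
print(valid([lambda v: conj(v["p"], v["q"])], lambda v: v["p"]))        # True
```

The design choice is exactly the paraconsistent move described above: by allowing the "glut" value B, a contradiction can be accepted without every sentence becoming derivable.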
Some of Autrecourt's writings deny the logical and epistemological strength of the absolute certitude guaranteed by LNC (Groarke 1984; De Rijk 1994). The Treatise (Autrecourt 1939, pg. 237, chapter: an omne illud quod apparet sit) and Giles' Letter to Nicholas (De Rijk 1994) raise a complaint: the fact that God can make miracles seems to deny the existence of LNC. God can annihilate every proposition by miracle, and, if He should do so, LNC would not be valid, since it would not even exist. If contradictories signify the same, the possibility to distinguish, e.g., between the propositions "God exists" and "God does not exist" fades away, and the very firmness of LNC is undermined beyond repair (Beuchot, 2005). In conclusion, Autrecourt's account is in touch with PCLs: they both aim to remove not just PEX, but also LNC.
The underrated empiriocriticist Richard Avenarius (1843-1896) investigated the laws of experience and knowledge (Russo Krauss, 2015). In his Kritik (Avenarius 1908) he postulated an axiom of knowledge stating that every human individual initially assumes: a) a spatial environment composed of manifold parts standing in relations of dependence to one another, and b) other human individuals making manifold describable statements. Every individual's experience always finds together, inextricably joined, both the I and the environment. It is noteworthy that the two components form an unbreakable, enduring coordination (Avenarius and Russo Krauss, 2017), since a not-I is assigned to every I. No complete description of what is found by experience can be drawn regardless of the two components taken together. Therefore, the original experience of the human individual goes against a split between p and ¬p. Since what is found by experience is twofold, i.e., an inextricably intermingled persistence of both I and not-I, Avenarius implicitly undermines not just the ontological component of LNC, but also its very logical component.

What next?
The historical digression into the PCLs' precursors leads to remarkable consequences concerning our OA. The unfeasibility of p ∧ ¬p → q described by Autrecourt, Avenarius and the PCLs is noticed also in natural language when two men hold opposite conclusions. One says: "I said this", while the other says: "No, I said that", even though something is clear to both. Everything which appears to be true is true according to the two contenders, even at the risk of predicating the truth and the falsity of the same thing in the logical discourse. Two (or more) individuals taking part in a discussion may disagree while being (self-)consistent, using some vague terms either purposefully or unintentionally (Ciuciura 2013). Conclusions are not really demonstrated, since opposite conclusions can be drawn from the evidence of each of the contenders. This approach is very similar to what is suggested by the PCLs' non-adjunctive discursive logic: "from the fact that a thesis P and a thesis Q have been advanced in a discourse it does not follow that the thesis P∨Q has been advanced, because it may happen that P and Q have been advanced by different persons" (Jaśkowski 1999).
In touch with the criticism towards LNC, it is noteworthy that one of the PCLs, viz. relevance logic, states that the antecedent and the consequent of implications must be relevantly related. This correlation is explicitly required by Autrecourt too, who says that the antecedent and the consequent must share their contents. The consequent cannot be inferred from a doubtful antecedent, since antecedent and consequent obey the principle of identity and must be identical. A priori demonstrations hold just when the identity A=A holds between the consequent and the antecedent (dal Pra 1952; Maccagnolo 1953). The cognitive borders between logical and psychological approaches become rather fuzzy in Autrecourt. For example, Autrecourt asserts that it is unclear whether the consequent is equal to the antecedent, since it is not discernible whether a perceived thing is simple and indivisible (De Rijk 1994). Only God comprehends all the things in a single, simple apprehension. The strongest way to provide a bridge between mental contents, their supposed noumenal correlates and the ever-present principle of individuation is to build a probability, though probable in the mere sense of being worthier of assent than its contradictory opposite (McDermott 1973).
Novel lines of research involving PCLs are suggested by OA. Nicholas of Autrecourt states that the ultimate reason why the evident truths (including LNC) must be accepted is that they please our minds. A being is nobler than another if it naturally pleases men more (Kennedy et al., 1971). In touch with Aristotle, the optimistic concept of cosmic goodness is the pivotal point of Autrecourt's system. Since the universe has complete goodness, falsehood is the evil of the intellect. The intellect is not made for being pleased with the false, since "what appears is, what is evident is true" (McDermott 1973). Cosmic goodness recalls the modern notion of the anthropic principle (Dicke 1961): the scientific scrutiny of the Universe would not even be possible if the laws of the universe had been incompatible with the development of life. Since humans are still here, this means that evolution did not permit our senses to believe false cues, otherwise we could not have survived. This takes us into the logical world of PCLs, where our intellect makes an effort to grasp not just the being, but also the non-being (Maccagnolo 1953). In touch with cosmic goodness, PCLs hold that no true theory would ever contain inconsistencies. Although Gödel's theorems are usually used in classical logic, they may have a role also in paraconsistent logics, if the notion of formal proof in Gödel's theorem is replaced with the usual notion of informal proof (Priest and Routley, 1989). In computational terms, PCLs (and the gnoseological belief in truth provided by evolution) allow mechanisms of damage control to handle contradictions when information systems generate unavoidable errors.

LIVING BEINGS FROM OUTSIDE
Our OA stands for a methodological tool to investigate living systems. The focus shifts from the analysis of the living structure to the analysis of its own external environment. This methodological operation brings to extremes the main tenet of dynamic systems theory: "the living beings are embedded/embodied in their environmental niche" (Friston 2010). The object of scientific study, viz. the living cell, becomes a sort of behaviorist black box, totally devoid of information. The best information available to the biologist comes from the environment where life is embedded.
Considering that cells can modify their surroundings, the study of the environmental changes allows us to investigate what happens inside the cell too. We are no longer looking for the information inside living agents, but rather for the changes in baseline information inside the environment. OA suggests that, if a biologist wants to study a bacterial colony, he must not look at the colony itself, but rather at the changes in the surrounding environment, such as variations in temperature, humidity, chemical gradients, and so on.

Muscles of facial expression in extinct species of the genus Homo.
Physical anthropologists have usually avoided the study of human facial expressions and nonverbal communication, leaving their interpretation mostly to psychologists (LaBarre, 1947; Birdwhistell, 1970). Primate muscles of facial expression (mimetic muscles) are unique in that they function either to open and close the apertures of the face or to tug the skin into intricate movements (Goodmurphy and Ovalle, 1999). The importance of the face as a critical variable in social intelligence is related to positive fitness consequences (Fridlund, 1994; Schmidt and Cohn, 2001). The mimetic musculature, a discernible signal of others' social intentions, transmits close-proximity social information such as emotional states, individual recognition, mate and infant/caregiver interaction, promotion of social acceptance, moderation of the effects of negative social actions, territorial intentions and conflicts of interest with strangers or competitors (Preuschoft, 2000; Burrows et al., 2006). Facial expressions are coordinated with social interaction and language at several levels (Stringer and Andrews, 2005), such as the use of mimetic muscles to articulate speech sounds (Massaro, 1998), the contribution of facial movements to the syntactic structure (Bavelas and Chovil, 1997) and the conversational signals (Ekman, 1979). Although many studies have described facial muscular features in Primate species (Pellatt, 1979; Swindler and Wood, 1982; Gibbs et al., 2002), scarce data are available for ancient species of the genus Homo. We suggest, motivated by our OA methodological approach, that the current state of research in facial expression, combined with the topical interest in social intelligence as a driving force in human evolution, calls for the study of mimetic muscles in paleoanthropology. 
The arrangement of mimetic muscles in modern Homo sapiens and in extinct human species such as, e.g., Homo neanderthalensis, erectus, heidelbergensis and ergaster (Figure 2) could be compared to provide a phylogenetic perspective on the evolution of facial expression and its role in human social intelligence. The attachments of facial muscles could be evaluated relative to known bony landmarks, such as the Frankfurt Horizontal, nasion, infra-orbital foramen, zygomaticomaxillary and zygomaticotemporal sutures, maxillary incisive and mandibular incisive fossae, and mental foramen. Once the surface attachments of every mimetic muscle are projected onto a computerized model of the skull, the bony origin of muscles of facial expression such as the corrugator supercilii could be evaluated. In turn, due to the lack of well-defined bony markings (Standring, 2004), the following muscles of facial expression cannot be evaluated, neither in modern nor in extinct human species: orbicularis oculi pars orbitalis, Horner's muscle, levator labii superioris alaeque nasi, depressor septi nasi, nasalis (transverse part), incisivi labii superioris, buccinator, incisivi labii inferioris, platysma. It has been suggested that the complexity of mimetic muscles increases from the most primitive Primates to the Hominidae, with the highest level of complexity to be found in Homo sapiens (Gregory, 1929; Schultz, 1969). As species get more closely related to Homo sapiens and social networks become more intricate, it is held that their communicative facial repertoire and underlying facial musculature might become more elaborate (Huber 1930; Preuschoft, 2000). However, the legitimacy of this hypothetical, hierarchical phylogenetic model has been called into question. For example, Burrows and Smith (2003) found greater complexity in the facial muscles of Otolemur than previously reported, while Burrows et al. 
(2006) advised that we are not allowed to claim greater complexity in Homo facial expression musculature compared with Pan troglodytes.

Gut colonization in Hirschsprung's disease in terms of reaction-diffusion.
The embryonic craniocaudal migration of neuroblasts along the gut, which fails in the distal segments in Hirschsprung's disease (henceforward HD), can be framed in terms of the reaction-diffusion (henceforward RD) model (Turing 1952). RD describes a system of chemical substances where random disturbances are caused by the competition between two active components termed activators and inhibitors (Deca, 2017). The balance between excitatory and inhibitory inputs produces different diffusion patterns of the traveling chemical substances (Kondo and Miura 2010). RD has proven useful to describe the formation of travelling waves and wave-like phenomena, as well as self-organized patterns like stripes, hexagons or dissipative solitons. These patterns, dubbed "Turing patterns", have been used to explain a wide range of biological features, including leopard spots, zebra stripes, shark denticles, zebrafish markings, avian feathers, lung branching morphogenesis (Xu et al., 2017) and hippocampal grid cells' firing patterns (Kondo et al., 2009; Cooper et al., 2018). We hypothesize that an RD-like mechanism might explain the lack of neuronal colonization of the distal gut in HD. OA suggests a model of gut colonization where reagents (i.e., the neuroblasts from the neural crest) enter a cylinder (i.e., the enteric wall) from an extremity (i.e., the proximal gut) and homogeneously diffuse towards the other extremity (i.e., the distal gut). The neural density in HD could progressively decrease in the distal areas due to local inhibition factors counteracting the rostro-caudal diffusion of the neural progenitors. RD modeling of gut colonization requires a slightly modified Turing activator-inhibitor model. In sum, RD models for the intestinal migration of neuroblasts predict that the subtle balance between the concentrations of activators and inhibitors produces aganglionic, hypoganglionic or normoganglionic gut segments. 
A further unnoticed option must be considered: the occurrence of a hyperganglionic intestine, characterized by hyperplasia of the submucosal and myenteric plexuses. In RD terms, hyperganglionosis might stand for the absence of the distal inhibitors (see Figure 3B, left side) required for the normal development of the healthy gut. Further, HD may be associated with Waardenburg syndrome, which displays pigmentation changes resembling Turing-like patterns. An animal counterpart of human aganglionic megacolon, viz. the congenital abnormality of overo spotted horses termed "white foal syndrome" (McCabe et al., 1990), is characterized by lethal intestinal obstruction together with a nearly all-white RD-like coat. The correlation between RD models and the real pattern of neural intestinal colonization is experimentally testable. The medical advantage would be the possibility to estimate the length of intestine to remove during surgical procedures for HD. Figure 3A: Simulation of the production of Turing's spots. The circles in the left picture illustrate the concentration of activators (blue circles) and inhibitors (red circles). Their interaction generates spotted patterns in a two-dimensional lattice (right picture). The white spots in the right picture stand for the neural density in the intestinal layer. Figure 3B, left side. In the healthy embryonal gut, the neuronal progenitors from the neural crest progressively colonize the intestinal wall following a proximal-distal progression (white arrow). The final number of neurons will be approximately the same in all the segments of the adult gut. Figure 3B, right side. In HD, the number of neurons tends to decrease distally during embryonal colonization, leading to the occurrence of hypo-/aganglionic distal segments.
In terms of RD, the progression (white arrow) is counteracted by the higher concentration of inhibitory factors in the distal gut (red circles).
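The qualitative prediction above (a proximal source of neuroblasts counteracted by distally increasing inhibition yields a proximo-distally decreasing neural density) can be sketched numerically. The toy one-dimensional model below uses our own illustrative equations and arbitrary parameter values, not the authors' modified Turing model:

```python
import numpy as np

# Toy 1-D sketch of the colonization scenario: neuroblast density n(x)
# diffuses along the gut axis from a proximal source, while a fixed
# inhibitor profile inh(x), higher distally, removes cells.
# Equations and parameters are illustrative assumptions.

nx, steps, dt, D = 100, 20000, 0.1, 1.0
n = np.zeros(nx)                      # neuroblast density along the gut
inh = np.linspace(0.0, 0.02, nx)      # inhibitor concentration, higher distally

for _ in range(steps):
    lap = np.roll(n, 1) + np.roll(n, -1) - 2 * n   # discrete Laplacian
    lap[0] = lap[-1] = 0                           # boundaries handled below
    n += dt * (D * lap - inh * n)                  # diffusion minus inhibition
    n[0] = 1.0                                     # proximal end: constant supply
    n[-1] = n[-2]                                  # distal end: zero flux

# The resulting profile decreases proximo-distally, mimicking the
# hypo-/aganglionic distal segments predicted for HD.
print(float(n[10]), float(n[-10]))
assert n[10] > n[-10] > 0
```

Varying the slope of `inh` shifts the position where the density collapses, which is the experimentally testable quantity mentioned above (the length of aganglionic intestine).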

HUMAN BRAINS FROM OUTSIDE
OA may have consequences when psychologists and neuroscientists approach mental functions. According to a bodily view of pain, pains are objects located in body parts (Nie 2021). To quote Ockham, pain is in the foot, not in the head (Ockham 1991, 1-12). This would mean that mental representations can be studied from outside the brain. The cognitive functions of the central nervous system, such as sensations, memory, etc. (Gazzaniga 2013), become "holes" devoid of information that pave the way to a negative definition not just of mental activities, but also of consciousness itself. An OA formulation of consciousness suggests the following definition: "consciousness is all that we cannot refer to the environment surrounding us". A shift is required from our mental activity termed attention (i.e., the selective focus on a discrete aspect of information, manifested by an attentional bottleneck) to negative formulations. Therefore, we agree with Clark and Chalmers (1998), who advocate the active role of the environment in driving cognitive processes. Since human reasoners tend to lean heavily on environmental supports, the world plays an active causal role, becoming part of the cognitive process.

Extracellular flows in the brain.
Leaving temporarily apart what happens inside the neurons, OA suggests assessing the action potentials in terms of extracellular currents of charged particles. Starting from the claim that the brain currents exhibit long-range connections and collisionless collective behavior taking place inside the underrated extracellular neuronal space, we will discuss here how the extracellular electromagnetic currents generated by cortical neural sources could obey Vlasov-like equations. The occurrence and evolution of long-range interactions can be mathematically described in terms of a self-consistent collective electromagnetic field produced by charged ions. Since the main mechanism of molecular transport within extracellular spaces is diffusion, and since the brain acts like a porous medium for substances that do not cross cellular boundaries, the well-established diffusion equations could be used to describe the collective trajectories of large molecular ensembles (Syková and Nicholson, 2008). Modified Vlasov-Maxwell equations (Vlasov 1939; Kotelenez and Kurtz, 2010) might be able to describe brain dynamics in terms of a system of charged particles interacting with the electromagnetic field produced by the cortical currents. In sum, long-range behavior can be portrayed in terms of extracellular space/interstitial fluids containing collisionless chemical ions that give rise to self-consistent collective fields (Figure 4A). Depicting the brain currents in terms of fluxes taking place inside a sphere (Figures 4B-C), the long-range collisionless trajectories can be described as collective movements in the extracellular space. It is noteworthy that the time-evolution of such collisionless spherical movements can be assessed through McKean-Vlasov-type models. Figure 4B. An oversimplified spherical model of McKean-Vlasov collective movements portrays the brain as a rigid ball containing charged particles, e.g., extracellular and intracellular ions. 
The intracellular neuronal compartment (inner red circle), consisting of about 85% of the brain volume, is surrounded by the thinner extracellular space (outer circle). The extracellular charged particles display reciprocal long-range interactions collectively described by the collisionless equations. Figure 4C illustrates a simulation of particle movements in a tiny zone of the extracellular compartment. These theoretical plots might be compared with real EEG or fMRI neurodata.
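The collisionless picture sketched above can be illustrated with a toy numerical experiment. The following Python sketch (all parameters, names and numbers are illustrative assumptions, not the model used for Figures 4B-C) confines charged particles to a rigid ball and lets them interact through a softened Coulomb repulsion, in the spirit of a McKean-Vlasov mean-field simulation:

```python
import numpy as np

def simulate_collisionless_ball(n=200, steps=50, dt=1e-3, seed=0):
    """Toy sketch of McKean-Vlasov-style collective motion: n charged
    particles confined to a rigid unit ball repel one another through a
    softened Coulomb field. A cartoon of the collisionless picture,
    not a brain model; all parameters are arbitrary."""
    rng = np.random.default_rng(seed)
    # uniform initial positions inside the unit ball
    direc = rng.normal(size=(n, 3))
    direc /= np.linalg.norm(direc, axis=1, keepdims=True)
    pos = direc * rng.uniform(0.0, 1.0, size=(n, 1)) ** (1.0 / 3.0)
    vel = rng.normal(scale=0.1, size=(n, 3))
    eps = 1e-2  # softening length: suppresses close-range "collisions"
    for _ in range(steps):
        # pairwise softened Coulomb repulsion (the self-term vanishes)
        diff = pos[:, None, :] - pos[None, :, :]
        dist2 = (diff ** 2).sum(axis=2) + eps
        force = (diff / dist2[..., None] ** 1.5).sum(axis=1)
        vel += dt * force
        pos += dt * vel
        # rigid spherical wall: reflect the radial velocity component
        r = np.linalg.norm(pos, axis=1)
        out = r > 1.0
        if out.any():
            u = pos[out] / r[out, None]
            v_rad = (vel[out] * u).sum(axis=1, keepdims=True)
            vel[out] -= 2.0 * v_rad * u
            pos[out] = u  # clamp back onto the boundary
    return pos, vel
```

Plotting the returned trajectories for particles restricted to a thin outer shell would give pictures qualitatively comparable to the simulated extracellular movements of Figure 4C.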
Hypothesis: elliptic curves in the brain? Although brain function has traditionally been studied in terms of neuronal tissue and neuronal networks, OA allows experimentally testable predictions from fully different approaches.
Recently introduced techniques such as Diffusion Tensor Imaging and Diffusion Tensor Tractography describe neural projections resembling a well-known mathematical object: elliptic curves. Elliptic curves are produced by cubic equations and describe two-dimensional paths without cusps or self-intersections (Alizadeh et al., 2019). These curves can be defined over algebraic finite fields, whose points are described by numbers both integer and rational. Waves roughly resembling elliptic curves can also be traced in the wavefronts of EEG and fMRI patterns. Therefore, OA suggests the possibility of examining brain anatomy/activity from afar, looking for the possible presence of subtending mathematical structures. The obvious question is: what for? What do elliptic curves bring to the table in the assessment of brain activity? Elliptic curves might stand for the anatomical neural projections detected by tractographic techniques that lie inside a finite field of the brain. A brain containing elliptic curves could be subdivided into numbered zones characterized by integer and rational numbers and assessed through number theory, complex analysis, algebraic geometry and representation theory. Elliptic curves form abelian groups, and are therefore equipped with symmetries hidden at first sight. This would allow the matching of anatomical/functional neuronal features located in far-flung brain areas. Further, it is noteworthy that half of the elliptic curves display a finite number of rational points, while the other half display infinitely many. In operational terms, this would mean that half of the nervous patterns are continuous, while half are discontinuous and arranged in spatiotemporally quantized steps. Last but not least, an elliptic curve is a type of cubic curve whose solutions are confined to a region of space that is topologically equivalent to a torus.
This means that anatomical and functional nervous trajectories could be assessed in the easily manageable terms of trajectories occurring inside a torus, as suggested by Tozzi et al. (2017).
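The mathematical objects invoked here are fully concrete and computable. As a minimal illustration (the curve coefficients and the prime below are arbitrary demo values, not derived from any neurodata), the following Python sketch enumerates the points of an elliptic curve over a finite field, verifies that the point count obeys the Hasse bound, and checks the abelian property of the chord-and-tangent group law:

```python
def curve_points(a, b, p):
    """Affine points (x, y) satisfying y^2 = x^3 + a*x + b over F_p."""
    return [(x, y) for x in range(p) for y in range(p)
            if (y * y - (x ** 3 + a * x + b)) % p == 0]

def ec_add(P, Q, a, p):
    """Chord-and-tangent group law; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p
    return (x3, y3)

# Example: the curve y^2 = x^3 + 2x + 3 over F_97 (arbitrary demo values)
a, b, p = 2, 3, 97
pts = curve_points(a, b, p)
order = len(pts) + 1                     # include the point at infinity
```

The commutativity of `ec_add` is the "hidden symmetry" alluded to above: any two points can be composed in either order, which is what would license the matching of features in far-flung zones.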

HUMANITIES FROM OUTSIDE
In humanities, due to the intrinsic lack of the "trial and error" inductive methods typical of the experimental procedures of the hard sciences, metaphorical expressions about our world and its various segments rarely develop into well-defined, quantifiable, measurable, testable and operationalizable structures. OA suggests different interpretations of social and poetic matters, providing extensive rework and fresh philosophical and textual implications. Still, on 8 July 1599, a surprising minute is reported in the same archive. Here is the extant report of the proceedings we are interested in (Firpo, 1949). In brief, the report says that the Pope (and the Brotherhood) read the copy of a letter sent to the Venetian Inquisitor on June the 20th. This letter was believed to have been written by Celestino. To uncover the author of this manuscript, the Pope asked for a comparison with Celestino's previous writings. The result of the comparison is unknown: what we know so far is that Celestino was taken to the same Roman prison where Giordano Bruno was caged, and he was burnt alive in Campo dei Fiori in September, five months before Bruno himself suffered the same fate in the same place. What is certain is that a very speedy trial was concluded against Celestino, following procedures that were unusual for the otherwise careful and meticulous Roman Inquisition. How to explain the letter received by the Venetian Inquisitor? Influential scholars such as Firpo (1949), Marchetti (1979) and Maifreda (2018) unanimously held that the letter was anonymous. Once he came to know of the anonymous letter, Pope Clemens VIII, suspecting that Celestino himself was behind the unsigned manuscript, asked his Capuchin brothers, who held his previous writings. Summarizing, the standard version argues in favour of self-denunciation: Celestino wrote to the Roman Inquisition and (anonymously) to the Venetian Inquisitor to inform on himself.
However, looking carefully at the extant report, an alternative version is worth putting forward. The minute does not explicitly contend that the letter was anonymous. Furthermore, it is not stated that the Pope read the original manuscript; rather, he went through a copy. The phrase "ab ipso, ut creditur, scriptarum" ("the letter believed to be written by Celestino") might mean exactly the opposite of what has been traditionally assumed. An unorthodox account of the minute can be hypothesized: the Pope (and the Brotherhood) read a copy of the letter signed by Celestino and sent to the Venetian Inquisitor. However, the Pope and the Brotherhood noticed something suspicious about the manuscript, something against the ascription to Celestino. That is why the letter "was believed to be written by Celestino": it was signed by him, but the manuscript did not seem to be in Celestino's style. This interpretation implies that the Pope had reason to believe that the letter sent to the Venetian Inquisitor was not by Celestino, despite his signature. The Pope guessed that somebody was trying to pass himself off as Celestino. This unconventional interpretation paves the way to a fully novel understanding of Celestino's trial. Our role ends here: once we have raised the issues, it is not our task to speculate on alternative explanations for such an intricate affair.
Dante Alighieri's "amor, ch'a nullo amato…": an "ex nihilo" account of "a nullo". The renowned verse V, 104 of Dante Alighieri's "Hell" (Giacalone, 1982) is one of the most celebrated not just of the Divine Comedy, but of world literature. Nevertheless, its meaning is still controversial and highly debated. "Amor, ch'a nullo amato amar perdona": the mainstream account points towards the following meanings: 1) "Love, that spares none of the loved from loving in return".
2) Or "Love, that does not allow not to love back".
3) Or "Love, that, when one is loved, does not allow that it be refused". 4) Or "if you love someone, that love will come back to you".
In the framework of Amor Cortese, summarized by Andrea Cappellano's "De Amore", love is a power inescapably pushing men towards women and vice versa, forcing everyone who is loved to love in turn (Malato 2018). However, this conventional reading does not sound logical: why do I necessarily have to love anyone who loves me? The neo-Plotinian, medieval account of a God who loves His creatures suggests that reverberation and mirroring of Love occur among entities endowed with different levels of Being (Katz 1950). Nevertheless, the claim "Love, which does not allow NOT to love back" goes against ordinary experience when referred to everyday love affairs among human beings. We suggest an OA-framed interpretation: the words "a nullo" can be taken in the sense of the Latin "ex nihilo", i.e., "from nothing". In Dante's context, "ex nihilo" may carry different meanings at diverse informative levels. In a philosophical and theological sense, Dante's "ex nihilo" might refer to the long-standing controversy about God's Creation "ex nihilo", i.e., a God who creates without manipulating pre-existing matter (Gilson 1955). The idea that "nothing comes from nothing" first appeared with Parmenides. In the following centuries, creation ex nihilo became a typically Christian issue (see, e.g., Basilius 1990), set against the Platonic concept of uncreated matter. During the Middle Ages, the proposition "ex nihilo nihil fit" ("out of nothing, comes nothing") was used by Scholastic theologians to claim that the Universe needs God as its cause, since something cannot be created from Nothing (Duncan 2011). This account is strictly correlated with the controversial accounts of causality in Averroes and Aquinas. The cause/effect issue was tackled by the 1277 Condemnations, which provided a sharp critique of the "heretical" account of God as a First Cause able to produce just the First Effects (Klima et al., 2007; Marmura 2000).
It is noteworthy that the Condemnations also dismissed Andrea Cappellano's "De Amore". Therefore, in Dante's Canto V, "ex nihilo" could stand for "sine causa", in touch with the 1277 Condemnations. Love is described here as a power issuing from nothing at all, "ex nihilo". Just as God is the First Cause who creates the world from nothing, Love lets people fall in love without recognizable causal relationships and preexisting background. Love, at least in the case of the illegitimate passion that links Paolo and Francesca, arises from absolute ignorance. In the social context of an adulterous love, "from nothing" might stand for: "ignoring the rules, the canonical laws". A love affair outside the sacrament of wedding must be condemned. Therefore, Love might correspond to the "for del dritto amore" mentioned in Hell XXX, 39, i.e., "love as a passion outside every rule of legitimate Love".
Summarizing, if we consider our account as holding true, or at least possible, the proper semantic meaning of the verse "Amor, ch'a nullo amato amar perdona" would be: 1) "Love, that forgives the loved to love from nothingness". 2) Or "Love, that forgives who is loved to love back, against the laws".
3) Or "Love, that allows who is loved to love, also against natural and human laws". 4) Or "The power of love holds also against the natural laws and order, and against God's will". Other feasible interpretations, also arising from the same "ex nihilo" account suggested by OA, could be: 1) "Love, that allows, starting from nothing, an individual to love the loved one". 2) Or "Love, that forgives to love from nothingness the loved".
3) Or "Love, that absolves who loves the loved with no reason".
In summary, a free translation of Dante's verse could be: "Love is a strong, unreasonable power outside natural and human laws, which allows one to love another against all odds". This also testifies to how fully aware Dante was of the theological and philosophical debates occurring before and after the 1277 Condemnations.
The mysteries of the Voynich manuscript. The Voynich manuscript (Beinecke MS 408) is a baffling fifteenth-century codex including weird and elaborate illustrations such as otherworldly plants, unfamiliar constellations, herbal medicine drawings and enigmatic images of naked women swimming through fantastical tubes and green baths (Skinner et al., 2017). The manuscript is written by an unknown author in an unknown language and alphabet. Since its rediscovery in 1912 by the rare books dealer Wilfrid Voynich, its language has eluded decipherment (Clemens and Harkness, 2016). The 240-page manuscript is believed to run left to right, with most of the characters composed of one or two simple pen strokes. The format is one column in the page body, with a slightly indented right margin and with paragraph divisions, often with stars in the left margin (Shailor). OA prompted us to visually inspect the ink in digital reproductions (available at the Beinecke Rare Book and Manuscript Library, Yale University: http://beinecke.library.yale.edu/dl_crosscollex/SearchExecXC.asp?srchtype=CNO and here: http://ixoloxi.com/voynich/pdf/en/vms-quire1-en.pdf ). It can be demonstrated that the codex is partially written from right to left (Figure 5). Therefore, contrary to the common belief, at least some rows of the Voynich manuscript run from right to left.

Figure 5.
Magnified view of the Voynich manuscript. In the row under examination, the pen stroke ran from right to left: when the ink started to fade, the Author dipped the pen in the ink once again and kept writing. This means that some rows are written from right to left, contrary to the common belief.

RECENT HISTORY FROM OUTSIDE
History, by its own nature, copes with intricate combinations of theoretically and empirically wavering occurrences. This prevents a deepening of the dynamics of human events through testable experimental procedures. Concept development and modelling, as well as other tools that might contribute to the quantification of historical events, are strongly required. Take, for example, the kidnapping of Aldo Moro, former Italian Prime Minister and then President of the relative majority party Christian Democracy (Drake 1995). At 9 o'clock on March 16th, 1978, he was kidnapped in Rome by left-wing terrorists of the Red Brigades (Gotor 2018). His body was found 55 days later in the trunk of a Renault 4 car. According to the official version, confirmed by the brigadists themselves ten years after the event, the killers brought him to a parking garage and shot him with two weapons, the body lying on its back inside the car trunk. This reconstruction of Moro's death is still highly debated (Moro 1998; Cucchiarelli 2016). The photographs taken during the body inspection (Questura di Roma, 1978) and the official report of the autopsy (Commissione, 1989) are freely available on Gero Grassi's website: http://www.gerograssi.it/cms2/index.php. Instead of looking at the body and the wounds, an OA approach suggests the investigation of outer elements, such as the jacket, gilet, cravat, shirt and underwear. The question here is: is it feasible to infer from the clothes the temporal sequence of the bullets and the position of Moro's body during the shots? Going through the autopsy report, it can be inferred that just two of the bullets pierced the jacket. The two holes in the jacket can be superimposed solely onto two of the breaks: the entry wounds termed 1 and 2 in Figure 6, left side. During the body inspection, after the removal of the gilet, the cravat looked ruffled and disarrayed (Figure 7A).
The explanation is straightforward, if we consider that the doctors reported the presence of napkins under the gilet, assembled in an effort to stop a previous bleeding. The cravat's displacement could have been caused by the insertion of the napkins under the gilet after the first shots (Figure 7B). Looking at the breaks detectable on the cravat, it can be noticed that they were produced by bullets shot after the insertion of the napkins. These holes correspond to the ones termed 5, 6 and 8 (and, possibly, 4) (Figure 7C). In sum, contrary to the official version of a one-step execution inside the car trunk with the body lying supine, OA points towards a multi-step execution, the first phases taking place outside the car. According to our analysis, the temporal series of events was the following: at first the President, standing upright or seated, was shot by a few bullets. Then the killers put napkins under the gilet to stop the haemorrhage. There followed the sequence of shots 5, 6 and 8 (and, possibly, 4) that pierced the cravat already dislocated by the napkins. A last sequence was fired when the body was already lying on its back inside the car: indeed, a bullet was found stuck in the trunk's metal below Moro's corpse. In sum, OA is helpful in detecting hidden features in photographs of deceased subjects, contributing to the elucidation of cold forensic cases. We suggest that a closer computer-aided investigation of clothing can be used to shed new light on controversial historical events for which pictures are available: to provide a theoretical example, it would be feasible to describe the series of shots that reached Benito Mussolini's body during his contentious execution.
Figure 6A. Note that the two yellow holes piercing the jacket do not match any of the blue holes piercing the gilet. Figure 6B. Once the jacket is relocated via computer simulation, it is easy to notice that only two blue dots match the yellow dots, i.e., the blue dots termed 1 and 2. The location of the jacket during the shots that produced holes 1 and 2 corresponds to an individual standing vertical. Therefore, Moro was upright, rather than lying on his back inside the car, when he was reached by these bullets. Figure 7A. The cravat under the gilet was warped and the necktie misplaced. This suggests that napkins were inserted under the gilet after the first shots to stop the bleeding (Figure 7B). Figure 7C. Superimposition of the entry breaks and the ruffled cravat. The borders of the gilet and the jacket are highlighted (black lines). It is easy to notice that the holes in the ruffled cravat match shots 5, 6 and 8 (and, possibly, 4). This means that the bullets reaching the cravat were not shot during the first phases of the execution.
Sonny Liston and the torn tendon. The first boxing fight between Sonny Liston and Cassius Clay took place on February 25, 1964 in Miami Beach. Clay was declared the winner, since Liston astonishingly failed to answer the bell for the seventh round (Remnick, 2000). Liston stated he had to quit because of a left shoulder injury (Tosches, 2000). He had long been suffering from shoulder bursitis and received cortisone shots, despite his notorious phobia of needles (Assael 2016). Visual inspection of the fight footage shows that in the sixth round, i.e., the round before he quit, Liston moved the left arm in the ordinary way. None of the signs and symptoms of a rotator cuff tear could be noticed in his boxing activity. In the sixth round, Sonny raised the left arm 22 times and the right arm just 3 times to punch and counteract Clay's attacks. This means that Liston used mostly the left arm, i.e., the one that was later said to be injured.
If Sonny had had the left arm partially or totally injured, he would have used the right one, to avoid the pain and to compensate for the left shoulder's impaired motility. Therefore, Liston's left arm was not damaged enough to prevent him from fighting.

CONCLUSIONS
We advocate a methodological innovation, i.e., an outer view of biophysical and humanistic issues. OA outcomes include the possibility that the setting choices of living systems are influenced by hidden variables correlated with still unknown environmental factors. It is well known that living organisms must be equipped with homeostatic mechanisms to cope with external changes in pH, temperature, osmolarity, bacteria, viruses, etc. Here we ask: how do living organisms cope with changes in environmental factors that have been less scientifically explored? Are living beings equipped with still unknown homeostatic devices to tackle less studied external changes such as, e.g., magnetic fields, infrasounds, cosmic rays, visual or tactile noise, etc.? Are there hidden, still unexplored peripheral or internal receptors sensitive to different types of environmental changes? Active research on new receptors is ongoing; for a survey, see Tozzi (2015). One amazing finding in this line of research suggests that the ultrasonic components of the acoustic signal play a role in human mother-infant interaction. To show the huge potentialities of OA in scientific inquiry, we propose a last example, which demonstrates how an outer view might provide fresh insights into the interpretation of already achieved scientific results. According to OA, it might be hypothesized that a factor outside our body, say, e.g., the bacteria on our skin, could contribute to keeping our bodily temperature constant. This claim is easily testable: do germfree rats have a temperature lower than that of rats normally exposed to microbes? Looking at the literature, we found a paper by Kuger et al. (1990) testing whether the gut flora might influence the body temperature of rodents. Both germfree mice and rats given nonabsorbable antibiotics showed a marked decrease in both their daytime and nighttime temperatures.
While the authors concluded that these results support the hypothesis that gut flora has a tonic stimulatory effect on the body temperature of rodents, OA suggests a different explanation for these old, established data.

CODA: INFORMATION FROM OUTSIDE
The old-fashioned concept of active and potential intellects has a feasible counterpart in the modern-day concept of information. Here we show how this claim leads to unexpected, troubling consequences. Aristotle distinguishes between the active and the potential intellect (see De Anima III, 5 and Metaphysics XII, 7-10). To know something, every man must be equipped with the ability to know it. This individual skill is the potential intellect, to be boosted by training and exercise. The active intellect is required to transform the potential knowledge of the potential intellect into actual knowledge. Some scholars such as Alexander of Aphrodisias merged the mortal, passive, potential intellect with the body and equated the omniscient, separable, impassible, unmixed, divine, immortal active intellect with the "unmoved mover" (Bazán 1989). In touch with OA, the Arabic thinkers Al-Farabi, Avicenna and Averroes located the active intellect outside the human soul, fully separated from the human being (Dales 1995). Two hierarchical emanations take place, from the First Cause to the supernal realm and from the transcendent active intellect to the lower world. The illumination of the human (potential) intellect is achieved through its conjunction with the transcendent (active) intellect (Davidson 1992). In sum, the whole of human knowledge, past, present and future, is included in the active intellect, external to the body and shared by all individuals. According to his skills, possibilities and available technology, the potential intellect of every individual catches a part of the fixed and stable cosmic knowledge provided by the active intellect.
Translating the account of the active intellect into the recent field of information and information entropies (Shannon 1948; Dick 1981; Wheeler 1990), we arrive at the concept of total cosmic information. Cosmic information, which is eternal since it can be neither created nor annihilated, could be either finite if the Universe is closed, or infinite if the Universe is open. It stands for the general active intellect, partially available to the potential intellect of every human individual. Take a scientist studying an object. Fitted with natural skills and powered by technological devices, the scientist extracts chunks of the total information endowed in the object. The more advanced the available technology, the more the scientist's potential intellect gets information from the agent intellect (in this case, the object under investigation). For example, when a scientist wants to scrutinize the infrared light emitted by an object, he requires a proper detector, since his natural skills are not powerful enough to detect this feature. The more the scientist explores each possible feature of the object, the closer he gets to complete knowledge. Things come full circle when we examine once again Richard Avenarius' links with PCLs. Avenarius' I and not-I are strictly correlated, although neither of them prevails. This is a standpoint of the philosophical attitude termed "neutral monism": although the I (i.e., the human mind) and the not-I (i.e., the body and the environment) have equal dignity, they must be subordinated to something neutral, hierarchically placed above them. Among the disparate efforts throughout the centuries to find this neutral principle, the attempt provided by Sayre (1976) is noteworthy. He presented an original approach to the mind-body problem, linking the physical sciences with the sciences of human behaviour and suggesting that the neutral principle above the mind and the body is the cybernetic concept of information.
This means that the conceptual step from PCLs to active intellect/information is not a difficult road.
One might ask: what for? What does a rather analogical comparison between Aristotle's, Avicenna's and Averroes' intellects and Shannon's and Wheeler's information bring to the table? An outer approach to this question provides a striking response. The ancient accounts suggest a divine source for the active intellect, either angelic/supralunar, or a direct emanation of God. Nevertheless, the corresponding account of cosmic information as an unlimited, immortal whole has problems with the principle of individuation. As previously stated, the problem of individuation is far from being removed from contemporary science and philosophy. How are things identified as distinguished from other things? How can homogeneous matter give rise to different forms and qualities? These ancient questions are still relevant in far-flung fields, in particular in the hotly debated field of information, since we might ask: if two files contain the same number of bits, what is the difference between their available information? The information enclosed in two storage media with the same number of bits is different from the inquirer's standpoint. For example, two 1 GB USB flash drives might contain either a Depeche Mode album or the second Symphony of Shostakovich; as far as the bits are concerned, it does not matter. This leads to biological questions. Even though human cells have the same DNA, a hepatocyte and a cardiac cell are clearly different: does this mean that their principle of individuation depends on their phenotype, or on the different bodily environment in which they are embedded? Do they encompass the same amount of information? In sum, the same fixed quantity of bits does not lead to the detection of the same available qualitative (we could use the term "semantic") content. This raises doubts as to whether the tenet of the cosmic information extracted by the human mind holds true.
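The distinction between the mere size of a file and its informational content can be made quantitative. In the following Python sketch (the two "files" are synthetic illustrations, not real recordings), two byte strings of exactly the same size are compared through their empirical Shannon entropy: one carries the maximum of 8 bits per byte, the other carries none at all, even though both occupy the same number of bits on disk:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# two "files" of exactly the same size (1024 bytes each)
rich = bytes(range(256)) * 4     # every byte value equally frequent
poor = bytes(1024)               # a single repeated byte value
```

Even this quantitative gap does not settle the semantic question raised above: two maximally "rich" files of identical entropy may still differ entirely in what they mean to the inquirer.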
Also, critiques have been raised to the role of information in mental activity, casting doubts on the adequacy of the information paradigm to describe the brain functions and on the assumed relationships between changes in entropies detected by the available neuro-techniques and mental tasks (Tozzi and Peters, 2020b).
The comparison between active intellect and information paradoxically has two opposite consequences: on one side it eradicates the divine concept of knowledge and leaves just the quantitative concept of information; on the other side, however, it introduces once again a metaphysical component, i.e., the presence of a vague, eternal substance permeating the universe and devoid of scientifically recognizable meaning. In other words, when we, in touch with the concept of the agent intellect, consider cosmic information as the largest amount of bits, we are only allowing the metaphysical concept of God to sneak in at the back of scientific matters. Therefore, a crucial question arises: when the scientist takes information from the object, is he extracting the information endowed in the object, or is he building information that does not exist inside the object? Is our qualitative mental information discovered, or is it invented? Do we really think that there is something there, outside us and inside the object? Isn't "an object encompassing the highest number of bits" a metaphysical concept?
The OA approach provides an alternative account: what is believed to be extracted from an object is not really extracted; rather, it is produced by our minds. Could we state, paraphrasing Aristotle, that semantic information is not actually any real thing before being thought by human individuals? Could we say that the mind is potentially whatever is thinkable, though actually nothing until it has thought? Is active knowledge identical with its object? Is potential knowledge prior in time to actual knowledge? Is knowledge alone the cause that produces the action? Does time exist without a clock? An unexplored connection among Autrecourt, paraconsistent logics and quantum mechanics (Brown 1993) might be useful to approach our questions. Against Aristotle, Autrecourt states that two points can touch each other while each retains its own different position. This apparently weird statement is in touch with the quantum concept of bosonic superposition. Bosons are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state (Yin et al., 2019). According to Autrecourt, a transition occurs from one state to the contradictory one in the absence of a real intrinsic change of any of the terms. Connectives such as "¬" do not mean anything, since they are syncategorematic terms lacking denotation and ontological status (Thijssen 1990). Agreeing with Ockham, Autrecourt seems to support the thesis that we have no knowledge of things, but only of terms, such that God and creatures become nothing. In accordance with this claim, recent approaches interpret quantum mechanics as a reference-frame theory pertaining to observer-dependent relational properties (see, e.g., Yang 2018). Amazingly, these extreme relational formulations of quantum mechanics have been experimentally supported by recent papers (The BIG Bell Test Collaboration 2018).
These studies, in touch with paraconsistent logics and Autrecourt's Ockhamism, suggest that, contrary to the tenets of local realism, the properties of the physical world are dependent on the observer. In terms of information and active/potential intellects, we can finish with a slogan: without a thermometer, an object does not have a temperature.

STATEMENT
The undersigned Author transfers all copyright ownership of the manuscript, in the event the work is published. The undersigned Author warrants that the article is original, does not infringe on any copyright or other proprietary right of any third party, is not under consideration by another journal, and has not been previously published. The Author does not have any known or potential conflict of interest, including any financial, personal or other relationships with other people or organizations within three years of beginning the submitted work, that could inappropriately influence, or be perceived to influence, the work.