1. Introduction
“The best way to understand man is by creating him”
—José Negrete-Martínez
Complexity¹ has been studied since antiquity. Just to mention a few examples: Aristotle’s concept of “more than the sum of its parts” is related to emergence (see Section 6); the Sanskrit term “tantra” (interwoven) has several parallels with complexity; and ecology has always been inherently complex. There are several historical examples of what would later be called artificial life (ALife)², with a mild surge after the publication of Mary Shelley’s “Frankenstein; or, The Modern Prometheus” in 1818 [12,111,113]. There have been many artificial creatures: first as ancient myths, then as automata (made possible by the clockmaking technology developed to measure time precisely for the accurate maps Europeans needed as they navigated around the planet), and in the previous century with the development of the first digital computers [15,20,118].
Still, the modern scientific study of complex systems and the field of artificial life (under that name) can be traced to the 1980s around the Santa Fe Institute (SFI, founded in 1984) and the nearby Los Alamos National Laboratory (LANL, created for the Manhattan Project) in Northern New Mexico. SFI (celebrating its 40th anniversary) was the first research institution to use the name “complexity”, even though there were several places where similar research had been carried out. The first conference on Artificial Life (1987) took place in Los Alamos, while the second (1990) and third (1992) were in Santa Fe; all three were organized by Chris Langton (who coined the term ALife and worked at SFI for some years) and others. In 1991, Francisco Varela, Paul Bourgine, and others organized the first European Conference on Artificial Life, with a perspective tending more towards cognitive science. Eventually, both “schools” converged.
I will not attempt to provide a historical account of complexity, ALife, or artificial intelligence (A.I.). My purpose is to note the similarities and differences between the three fields, as they share conceptual, methodological, and philosophical approaches.
In the next section, I’ll review the historical and technological circumstances that preceded the development of complexity, ALife, and A.I. In Section 3, I’ll mention common limitations that these fields face, along with the expectations they have generated. In subsequent sections, I’ll relate the concepts of interactions, self-organization, emergence, and balance to complexity, ALife, and A.I., before closing the paper with open questions.
2. Computers as Telescopes
“Where there is an observatory and a telescope, we expect that any eyes will see new worlds at once.”
—Henry David Thoreau
Why did complexity as we know it and “life as it could be” [71] become popular in the 1980s and not before or after? Personal computers. Before then, digital computing was restricted to the few research institutions that could afford the expensive equipment (and thus there were few developments that would now be considered complexity or ALife; in the case of A.I., while there were more projects funded by governments, companies had fallen into an “A.I. winter” because of unfulfilled expectations [41]). PCs changed everything. The number of people who could exploit and explore new possibilities in information processing suddenly exploded.
As already mentioned, there were a few examples of what could be considered artificial life, e.g. [15,20,118], while Alan Turing [116], John von Neumann [119], and others were interested in the potential ability of computers to model the human mind. We can say that cybernetics [8,61,97,127]³ set the basis for the scientific study of complex systems, intelligence, and life. This is because cybernetics was the first transdisciplinary effort to study phenomena independently of their substrate. Systems were studied in terms of their organization, rather than in terms of their components. And because organization [9,10,100,117] can be described in terms of information [93,105], it became clear that the technology capable of increasing information processing (a.k.a. computation), storage, and transmission would be essential.
Something similar happened with fractals [75], which were named only in 1975 by Benoît Mandelbrot. Some examples had already been proposed in the late XIXth and early XXth centuries by Cantor, von Koch, Sierpiński, and others. Moreover, Gaston Julia and Pierre Fatou studied iterative functions, which can be used to construct fractals. Still, these results were largely forgotten. But Mandelbrot had a huge advantage: working at IBM Research⁴, he had access to computers that could draw fractals. Then, the interest in fractals exploded.
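To give a flavor of how little is needed once a computer is available, the following minimal sketch (my own illustration; the function and parameter choices are arbitrary) iterates the map z → z² + c studied by Julia and Fatou and prints a coarse text rendering of the Mandelbrot set: the points c whose orbits remain bounded.

```python
# Minimal sketch (illustrative only): iterate z -> z^2 + c and keep the points c
# whose orbit starting at z = 0 stays bounded; these form the Mandelbrot set.

def escape_time(c: complex, max_iter: int = 50) -> int:
    """Number of iterations before |z| exceeds 2 (max_iter means 'bounded')."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Coarse ASCII rendering of the region -2 <= Re(c) <= 1, -1 <= Im(c) <= 1.
for im in range(20, -21, -2):
    print("".join("#" if escape_time(complex(re / 20, im / 20)) == 50 else " "
                  for re in range(-40, 21)))
```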
Before telescopes, no planet beyond Saturn could be detected, and our Moon was the only satellite known. Galileo was able to see Jupiter’s four largest satellites with his telescope. More planets followed. Other galaxies were recognized only about a century ago, as more powerful telescopes became available. The first exoplanet was detected in 1992. Now there are more than five thousand confirmed exoplanets in more than four thousand planetary systems. It is only because of these observations that we now know that most stars have planetary systems, even if we have yet to detect most of them.
Before microscopes, doctors were taught that disease was caused by the imbalance of “humors” (or astrological influence, from which the name of influenza comes). It took more than two centuries for the germ theory of disease to be accepted. But without seeing pathogens, how could we attempt to prevent and cure diseases caused by them? Leeches, of course⁵.
Before computers, we did not have the proper tools to study complex systems. Just as our vision is limited in perceiving the macro and the micro, our limited cognition restricted us to dealing with only a few variables, even if we had huge blackboards. As Heinz Pagels noted, computers are like telescopes for complexity [88]. And for artificial life. And for artificial intelligence. All three have information processing at their core. Thus, we could only begin to study them once information technology reached a level where enough information could be stored, transmitted, and processed to simulate intelligence, life, and complexity [106].
Why New Mexico, “land of enchantment”? This is a trickier question; better said, any attempt to answer it has to be more subjective. Still, I can speculate that at the time there was enough talent (including some Nobel prize winners) and freedom of research at LANL (for example, arXiv was created there by Paul Ginsparg in 1991). Unfortunately, as several colleagues who have worked at LANL have told me, the situation at the Laboratory changed for different reasons, resulting in limited creativity and fewer people being attracted to it. Nevertheless, it seems to me that “back in the day” it was remote enough for non-mainstream ideas to be explored, but not so remote that the successful ideas developed there could not spread.
3. Promises and Limits
“Every man takes the limits of his own field of vision for the limits of the world”.
—Arthur Schopenhauer
“The limits of my language mean the limits of my world.”
—Ludwig Wittgenstein
One could naïvely think that we just need enough computational power to completely model and understand intelligence, life, and complexity. Many promises were made: robots smarter than humans in all domains, all diseases cured, genomes controlled exactly, the future predicted precisely... All such attempts have failed, yet some researchers are still hopeful of achieving these goals with better models and faster computers, and many projects with these expectations are still being funded⁶. Nevertheless, even before the first electronic computers were built, this approach was “doomed” by the limits of formal systems, as proven by Gödel [55], Turing [115], Chaitin [29,31], and others.
In the late XIXth century, Georg Cantor proposed set theory (for which he was ridiculed and ostracized), which later became the basis of modern mathematics. Paradoxes arose. Whitehead and Russell [126] attempted unsuccessfully to overcome them. David Hilbert launched a program to try to prove that mathematics was complete (all statements can be proven true or false), consistent (no contradictions), and decidable (questions posed within mathematics can be answered). A young John von Neumann, then working with Hilbert, was studying this topic, and probably that is why he was the only one who understood when Kurt Gödel presented his results proving that such formal systems cannot be both complete and consistent. Later, Turing proved that mathematics is not decidable, defining in the process the concepts of the Turing machine and computable numbers. The implication of these results is that formal systems are limited in ways that have yet to be completely understood. A sign of this is that we still attempt to use formal systems for tasks that would require going beyond those limits. Still, in many cases, partial success is better than nothing at all, especially since we have yet to find a suitable alternative.
Even when adaptation is widely used [6], there is always a part of a system (the axioms in the formal case, the hardware or hard-coded parts in the engineering case) that cannot be changed. Still, we might argue that “real” intelligence, life, and complexity cannot change the laws of physics or chemistry, so in a sense they are also limited.
Independently of our definitions of intelligence, life, and complexity, we can say that artificial systems have yet to exhibit behavior as rich as that of natural systems. Could this be because of the limits of formal systems? Or simply because we have yet to understand how nature changes itself?
Moreover, it might be that we want artificial systems to be simpler than natural ones. This is because we can attempt to better understand less detailed versions of natural systems.
In the case of artificial life, these limits have been evident in the study of open-ended evolution [90,107,112]. As Hernández-Orozco et al. [58] showed, undecidability and irreducibility (which might be considered desirable or undesirable, but are precisely some of the limits of formal systems) are conditions for open-endedness.
For complexity, a relevant case is that of emergence [1,16,18,102] (to be expanded in Section 6). There are several notions and flavors of emergence. In general, it can be said that emergent properties are those present at one scale (usually higher or slower, but not necessarily) and not at another scale (usually lower or faster) [50]. In particular, “strong emergence” is seen as problematic by some, since it usually implies downward causation [23,27,37,40]. This means that emergent properties at a higher scale have a causal effect on elements at a lower scale. We have yet to find a formalism that properly describes downward causation, while some argue that it does not even exist (downward causation might be apparent, an epiphenomenon, while the laws of physics explain everything)⁷. Could this be because of the same limits of formal systems? Nevertheless, for practical purposes, does it really matter? Even if in theory everything could be reduced to physics, in practice it is not. So, in any case, we do need descriptions at all levels to understand and face complexity.
For A.I., several limits have been identified, one of the most relevant being that of meaning [79,98]. In principle and in practice, machines can simulate our cognitive abilities in very sophisticated ways. Still, do they “really” understand [57,104]? We might say that pragmatically it does not matter. But it should, as a feature of human cognition is the ability to change meanings arbitrarily and adaptively, which again seems limited by the formal systems used to implement A.I. systems. There have been impressive advances within information theory, but methods for creating semantics and understanding are still at an early stage. Some people (e.g., [3]) have argued that the surprising capabilities of recent large language models could be considered as understanding, although this is still hotly debated [80].
It might be that these limits are actually a feature, not a problem. We “just” need to accept them in order to exploit them, rather than fight against them. Imagine that mathematics (or any formal system) were consistent, complete, and decidable, as Hilbert and others hoped for. Yes, we would have “absolute truths” and certainty. But would we have creativity? Innovation? Serendipity? It seems to me that many of the features of our world (without which we would not be here) require the very limits we have been so eagerly trying to eliminate.
Even though there have been impressive advances in the scientific study of complexity, artificial life, and artificial intelligence, there are several open problems that may be related to the inherent limits of formal systems. Will we be able to go beyond them?
4. Interactions
“The aim of science is not things themselves, as the dogmatists in their simplicity imagine,
but the relations among things; outside these relations there is no reality knowable.”
— Henri Poincaré
Etymologically and conceptually, we can say that the most relevant feature of complex systems is interactions [33,48]. Complexity comes from the Latin plexus, which means entwined, and has some similarities with the Sanskrit tantra. In both cases, interactions make it difficult to study or describe elements in isolation, just like threads in a fabric (which is the literal meaning of tantra). We can say that this is related to the concept of tendrel (Tibetan; Sanskrit Pratītyasamutpāda) from Buddhist philosophy, which could be translated as “interdependent origination”, “dependent arising”, or simply “causation”. Tendrel notes that phenomena arise in relation to other phenomena. Nothing can be isolated, nor be caused only by itself or out of nothing. So, everything is related, directly or indirectly [43,49,99].
Neither from itself nor from another,
Nor from both,
Nor without a cause,
Does anything whatever, anywhere arise.
—Nāgārjuna, Mūlamadhyamakakārikā 1:1
Traditional science and philosophy (since the times of Galileo, Descartes, Newton, Laplace...) have been reductionist, in the sense that within this paradigm we try to simplify and isolate phenomena in order to predict and control them [60,83]. In other words, we aim at finding fundamental “laws” and using them to obtain a priori knowledge (predict the future), reducing phenomena to the fundamental laws used to describe them. This has been extremely successful and has led to impressive advances in engineering, medicine, and more. Still, this does not imply that reductionism has no limits, nor that there cannot be more suitable descriptions of the world for certain purposes. Precisely when there are relevant interactions, reductionism is inadequate, as it neglects interactions and their implications.
“Reductionism is correct, but incomplete.”
—Murray Gell-Mann
There are several implications of interactions [48], but at a general level the main one is that interactions may produce information that was not present in the initial or boundary conditions. This inherently limits predictability [47], as we cannot know a priori which information will be generated. This is known as computational irreducibility [58,128,132]: there is no “shortcut” to the future, as information must be processed through interactions to reach it.
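As a minimal illustration (my own, not taken from the cited works), consider elementary cellular automaton rule 30: as far as we know, there is no general shortcut to the value of a cell at step t other than computing all intermediate steps, since each step generates information through the interactions of neighboring cells.

```python
# Minimal sketch of computational irreducibility: for rule 30, the future is
# obtained by processing information step by step through local interactions.

def step_rule30(cells):
    """One synchronous update of rule 30 (periodic boundaries)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 41
cells[20] = 1                          # a single active cell in the middle
for t in range(20):                    # no shortcut: simulate every step
    print("".join("#" if c else "." for c in cells))
    cells = step_rule30(cells)
```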
It should be noted that in practice, computational irreducibility might not pose such a challenge as it does in theory. If we are interested only in a particular context, we could potentially explore exhaustively, or at least systematically, all or several possibilities, and then a posteriori be able to describe and predict the future of complex systems, including their emergent properties and variables. Still, if we are dealing with non-stationary problems⁸, then even if we have a “full” understanding of a specific complex system, the problem may change (which is not rare, precisely because of interactions), so that new relevant information arises and our understanding becomes obsolete.
The fact that traditional tools (from reductionist science) are insufficient to study complex systems has led some researchers to seek alternatives [59,68], in part because we seem unable to address global challenges precisely because of their complexity.
The relevance of interactions and limits of predictability have been discussed mainly concerning complex systems, but they are relevant for ALife and A.I. as well. Interactions in ALife and A.I. systems can also generate novel information, limiting predictability, for better or for worse. There have been several attempts with varying degrees of success, but we still lack a general, common framework to describe, understand, and control complex systems. And it might be that such a framework could be developed within ALife or A.I., and then generalized for all complex systems.
Interactions limit the predictability of complex systems. Thus, in many cases, future information can only be known a posteriori (because of computational irreducibility). This implies that traditional reductionist approaches and methods seem insufficient to properly understand complexity, life, and intelligence. Still, we have yet to develop widely accepted methods that show the desired sufficiency.
5. Self-Organization
“The beauty of a living thing is not the atoms that go into it,
but the way those atoms are put together.”
—Carl Sagan
There are several examples of self-organization in nature [26]: flocks, schools, swarms, herds, crowds, etc. In these examples, there is no leader or external source telling individuals what to do; the properties of the system are a result of the distributed interactions of individuals. Thus, the study of self-organization is closely tied to complexity and to the information technology necessary to model it. Also, the term “self-organizing system” has its origins in cybernetics [7,9,73,117]. Nevertheless, there have been several examples of self-organization in physical and chemical systems [11,36,56,87,103].
A system can be described as self-organizing when its components interact to produce a global pattern or behavior [52]. This description can be useful when we are interested in relating multiple scales (elements and system, micro and macro), and in how changes in one might affect the other (e.g., changes in individuals affect a society or changes in a society affect individuals).
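A toy example may help to fix ideas (this is my own sketch, not a model from the cited references): agents on a ring hold a binary state and repeatedly adopt the local majority of their neighbors. There is no leader and no global information, yet ordered domains appear at the level of the whole system.

```python
import random

# Minimal sketch of self-organization: local majority updates, global pattern.
random.seed(1)
N = 60
state = [random.randint(0, 1) for _ in range(N)]
print("before:", "".join(map(str, state)))

for _ in range(5000):                          # many asynchronous local updates
    i = random.randrange(N)
    neigh = [state[(i + d) % N] for d in (-2, -1, 1, 2)]
    ones = sum(neigh)
    if ones != 2:                              # keep the current state on a tie
        state[i] = 1 if ones > 2 else 0

print("after: ", "".join(map(str, state)))     # large homogeneous domains emerge
```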
If we are dealing with a complex problem, novel information can make it non-stationary, i.e., the problem changes. If the problem changes faster than the time required to find a novel solution through optimization or other traditional techniques, then the solutions will be obsolete. Self-organization can be a viable approach to develop adaptive solutions that are able to face non-stationary problems, because when the problems change, elements can adjust through their interactions [42,44].
Self-organization has been used broadly in ALife: for software (digital organisms), hardware (robots), and wetware (protocells). See [53] for a review.
In A.I., self-organization has had a more limited use. Still, it could be argued that most artificial neural network models are implicitly self-organizing [45], as their weights (interactions) are modified during the training phase. And explicitly, Kohonen networks are self-organizing [69]. Also in robotics, self-organization has been relevant, implicitly or explicitly [91].
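As a rough sketch of the explicit case (my own toy example, not an implementation from [69]), a one-dimensional Kohonen map orders itself over its input space purely through local, distributed weight updates:

```python
import numpy as np

# Minimal sketch of a self-organizing (Kohonen) map: a chain of units adapts its
# weights so that neighbouring units respond to similar inputs.
rng = np.random.default_rng(0)
n_units, dim, n_steps = 20, 2, 2000
weights = rng.random((n_units, dim))            # random initial weights

for t in range(n_steps):
    x = rng.random(dim)                         # input sampled from [0, 1]^2
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # best-matching unit
    lr = 0.5 * (1 - t / n_steps)                # decaying learning rate
    sigma = max(1.0, (n_units / 2) * (1 - t / n_steps))         # shrinking neighbourhood
    h = np.exp(-(np.arange(n_units) - bmu) ** 2 / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)  # local, distributed update

print(np.round(weights, 2))                     # neighbouring units end up with similar weights
```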
Self-organization can be useful when multiple scales are modeled at the same time. It has been a relevant concept for complex systems and ALife, with a potential in A.I. that has yet to be fully explored.
6. Emergence
“You could not have evolved a complex system like a city or an organism — with an enormous number of components — without the emergence of laws that constrain their behavior in order for them to be resilient.”
—Geoffrey West
The concept of emergence has certain analogies with Aristotle’s “the whole being more than the sum of its parts”, where the “more” is the emergent bit. Emergence was popular in the XIXth century [77,78], but fell out of favor in the early XXth century due to the success of reductionist approaches. But when information technology allowed the scientific study of complex systems, emergence became relevant again [17].
Still, emergence probably caused most of the confusion and skepticism around complexity in the 1980s and 1990s. In part, this was because some people described emergent properties as “surprising” or “unexpected”. But then emergence would be a measure of our ignorance, because once we understand these properties, they are no longer surprising nor unexpected.
Nevertheless, there is nothing mysterious about emergence if it is properly described [5]. In a general way, emergent properties are those present at one scale but not at another [50]. For example, a bar of gold has color, conductivity, malleability, etc. Still, its components (gold atoms) do not have these properties, so we can call them emergent. In a similar way, it is accepted that cells are alive, but they are composed of molecules that are not alive. Whatever our definition of life, we can say that it emerges out of the interactions of molecules. It is accepted that a human is intelligent, but she is composed of cells that are not intelligent (in the same way). Whatever our definition of intelligence, we can say that intelligence emerges out of the interactions of cells.
There are different flavors of emergence, some less controversial than others (see [50] for a review). For example, weak emergence [16] is about properties described by an observer, such as gliders in the Game of Life [19,20]. Still, gliders do not change the rules of the Game of Life, and we only need these rules to compute the future states of the system.
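A minimal sketch (my own illustration) makes the point concrete: the glider is a description at the observer’s scale, while the computation below uses only the cell-level rules, which the glider never modifies.

```python
from collections import Counter

# Minimal sketch: Game of Life on a sparse grid; `live` is a set of (x, y) cells.
def life_step(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):                 # after 4 steps the same shape reappears,
    glider = life_step(glider)     # displaced by one cell diagonally
print(sorted(glider))
```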
Strong emergence [14,102] would be when having all information at one scale is not enough to derive information at another scale. In many cases, strongly emergent properties or information have a causal effect on the elements that produced them. For example, molecules form cells, but living cells make molecules that cannot be produced without biospheres. Also, individuals create social norms, and these norms promote and constrain the behaviors of individuals.
One could say that weak emergence is “in the observer”, while strong emergence “is real”. Some (reductionist) people do not believe strong emergence exists (e.g., [125]), as it implies downward causation, and for them only “fundamental” phenomena described by physics are real. Independently of our notion of reality, in practice the laws of physics are not sufficient to describe, explain, and even less predict phenomena at higher scales (this is already true of fluid dynamics and chemistry; we do not even have to go to life, intelligence, and culture).
I conjecture that strongly emergent properties are not computable in practice, and that is why a lower-scale description is not enough to predict them. If there is no practical way in which the properties of one scale can be described in terms of the “laws” of another, then we can validly describe those properties as emergent. Of course, this cannot be proven, for reasons similar to why a number cannot be proven to be random [30] or why Kolmogorov complexity is not computable (in theory) [34]. Note that this approach does not rely on downward causation, but does not prevent it either [64].
For example, a person can be melted by the words of their loved one, but this cannot be derived from the laws of physics that describe the melting of matter, no matter how detailed a description one might have at the “fundamental” level. Certainly, the laws of physics are not being violated. They are simply not enough, as there is no meaning in physics [38,46].
Emergence has been a central concept for complex systems and artificial life [50]. Many ALife models have been used to better describe and understand different flavors of emergence, e.g. [21,22,62,76,82,95,114,121,123].
In A.I., emergence has been less relevant. Still, unpredictable capabilities of large language models have recently been described as emergent [124], sparking some controversy.
Emergence can be a useful concept when information is not present at one scale but is present at another. Even though it is prevalent, we lack the conceptual and formal tools to precisely speak about and measure emergence in complex, living, and intelligent systems.
7. Balance
“Everything tends to a balance.”
In recent years, I have been developing a narrative of “balance”⁹ to bring together concepts from the scientific study of complex systems, and to communicate them to a general audience. There are several historical examples of balance in ancient cultures, showing that trying to avoid extremes has been a common, long-standing practice. Still, criticality [2,11,13,32,81,84,89,96,101,108] can be seen as a type of balance between order and chaos [67,72]. Life (and computation) needs some stability (order) to keep on functioning. But too much stability limits adaptability. At the other extreme, too much variability (chaos) loses useful information. At “the edge”, evolution, life, and intelligence can emerge.
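A rough sketch of this balance (my own illustration, with arbitrary parameters, not code from the cited works) is damage spreading in random Boolean networks: flip one bit and see whether the perturbation dies out (order), grows (chaos), or stays marginal (criticality), which for random Boolean functions typically happens around two inputs per node.

```python
import random

# Minimal sketch: damage spreading in random Boolean networks with K inputs per node.
def damage_after(K, N=200, steps=50, seed=0):
    rng = random.Random(seed)
    inputs = [[rng.randrange(N) for _ in range(K)] for _ in range(N)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

    def step(state):
        return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
                for i in range(N)]

    a = [rng.randint(0, 1) for _ in range(N)]
    b = a[:]
    b[0] ^= 1                                    # single-bit perturbation
    for _ in range(steps):
        a, b = step(a), step(b)
    return sum(x != y for x, y in zip(a, b))     # remaining "damage"

for K in (1, 2, 3):                              # ordered, near-critical, chaotic regimes
    print(K, damage_after(K))                    # averaging over seeds gives the typical trend
```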
More generally, balance is a tautology, because we describe a posteriori the phenomena that survived and evolved as balanced, between “too little” and “too much” change. Certainly, there can be “dynamic balance”, where the precise tradeoff varies and systems need to adapt (as exemplified by the slower-is-faster effect [51]). Also, interactions, perturbations, or noise can increase the change in a system, for which antifragility [92,110] is desirable. And we have recently shown that heterogeneity can “extend” the “balanced” region of systems [74,101].
In A.I., a well-studied balance is that between exploration and exploitation in search [35,63], also known as search in breadth or in depth, respectively (when solution spaces are represented as trees). In other words, to try to find the best solution to a problem, one can exploit current solutions and try to improve them, or explore completely novel solutions with the hope that some might be better than the current ones. Because the best strategy cannot be predetermined [130,131], the precise balance between exploration and exploitation will depend on the particular problem space that is searched.
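As a minimal sketch of this tradeoff (an illustrative epsilon-greedy bandit of my own, not an algorithm prescribed by the cited references), an agent explores a random option with probability epsilon and otherwise exploits the option it currently estimates as best; which epsilon works well depends on the problem.

```python
import random

# Minimal sketch of the exploration-exploitation balance with an epsilon-greedy bandit.
def epsilon_greedy(true_means, epsilon=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:                     # explore a random option
            arm = rng.randrange(len(true_means))
        else:                                          # exploit the best current estimate
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

for eps in (0.0, 0.1, 0.5):                            # too little vs. too much exploration
    print(eps, round(epsilon_greedy([0.2, 0.5, 0.8], eps), 3))
```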
Balance also offers a promising narrative to study evolution (natural and artificial) [65,66,120], as by definition that which evolves needs to be balanced.
Phenomena that endure tend to avoid extremes, so they can be called “balanced” a posteriori (once they have endured). Still, this tautology can be useful to bring together common concepts in complex systems, ALife, and A.I.
8. Inconclusion
“Being ill defined is a feature common to all important concepts.”
—Benoît Mandelbrot
I have mentioned conceptual similarities and challenges among complexity, ALife, and A.I. Still, there are many open questions.
There are no agreed definitions of complexity, life, or intelligence. But perhaps this is more a feature than a problem. If we could define one of these precisely, then we would not have so many open questions about them. And we do have them because their richness goes beyond our current abilities of understanding. It remains to be seen whether we just need a revolution in science [59,68] to be able to understand them properly. Or it might be that there are aspects that are inherently beyond understanding as we know it [129].
In practice, there have been several relevant recent advances that have generated great expectations. As we consider novel forms of life, either by exploiting current ones [24,25,54,70] or by exploring new ones [28,86,94], we will take important steps towards understanding life on Earth and on other planets [122].
Historically, A.I. has had its cycles of expectations (summers) and disappointments (winters). We have had several years of building expectations. For example, autonomous vehicles are still “two years away” after more than fifteen years. Deep neural networks and large language models have achieved impressive performances, but in the end, they are “just” ad hoc statistical engines. It is not clear that by following the same approach something like “understanding meaning” could be achieved [79]. Still, for many practical purposes, this is not relevant. Nevertheless, there are limits to what current approaches will be able to do.
As for the scientific study of complex systems, perhaps its success will be achieved when most disciplines finish integrating its concepts and methods and adopt them as their own, so that few people would speak about “complexity economics” or “biological complexity”, simply because most people would be familiar with the relevant concepts and methods. Still, there will always be a narrow space for studying complexity per se, as the study of the interactions in systems at all scales.
The limitations outlined in this paper might very well be overcome. We have no clear idea of how this will be possible, but there are several promising explorations. Even if further research only helps to better delineate the limits of science rather than go beyond them, this will certainly be useful and will allow us to make better decisions, even if just by knowing what we have no way of knowing.
Acknowledgments
I am grateful for comments and feedback from Jan Dijksterhuis, Mario Franco, Stuart Kauffman, Amahury López-Díaz, Andrea Roli, David Wolpert, and members of the Foundations of Information Science mailing list hosted at the Universidad de Zaragoza. Steve Spero helped proofread the paper.
References
- Abrahão, F. S. and Zenil, H. (2022). Emergence and algorithmic information dynamics of systems and observers. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 380 (2227): 20200429. [Google Scholar]
- Adami, C. Self-organized criticality in living systems. Phys. Lett. A, 1995; 203, 29–32. [Google Scholar]
- Agüera y Arcas, B. (2022). Do Large Language Models Understand Us? Daedalus, 2022; 151, 183–197. [Google Scholar] [CrossRef]
- Aguilar, W. , Santamaría-Bonfil, G., Froese, T., and Gershenson, C. (2014). The past, present, and future of artificial life. Frontiers in Robotics and AI, 2014; 1 (8). [Google Scholar] [CrossRef]
- Anderson, P. W. More is different. Science, 1972; 177, 393–396. [Google Scholar]
- Ashby, W. R. (1947a). The nervous system as physical machine: With special reference to the origin of adaptive behavior. Mind, 56, 44–59. [Google Scholar]
- Ashby, W. R. (1947b). Principles of the self-organizing dynamic system. Journal of General Psychology, 1947; 37, 125–128. [Google Scholar]
- Ashby, W. R. (1956). An Introduction to Cybernetics, Chapman & Hall: London. Available online: http://pcp.vub.ac.be/ASHBBOOK.html.
- Ashby, W. R. (1962). Principles of the self-organizing system. In Principles of Self-Organization, H. V. Foerster and G. W. Zopf, Jr., (Eds.). Pergamon, Oxford, 255–278.
- Atlan, H. (1974). On a formal definition of organization. Journal of Theoretical Biology, 1974; 45, 295–304. [Google Scholar]
- Bak, P. , Tang, C., and Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett. [CrossRef]
- Ball, P. Man made: A history of synthetic life. Distillations, 2016. Available online: https://sciencehistory.org/stories/magazine/man-made-a-history-of-synthetic-life/.
- Balleza, E. , Alvarez-Buylla, E. R., Chaos, A., Kauffman, S., Shmulevich, I., and Aldana, M. (2008). Critical dynamics in genetic regulatory networks: Examples from four kingdoms. PLoS ONE, 2008; 3, e2456. [Google Scholar]
- Bar-Yam, Y. A mathematical theory of strong emergence using multiscale variety. Complexity, 2004; 9, 15–24. [Google Scholar] [CrossRef]
- Barricelli, N. (1954). Esempi numerici di processi di evoluzione. Methodos.
- Bedau, M. A. (1997). Weak emergence. In Philosophical Perspectives: Mind, Causation, and World, J. Tomberlin, (Ed.). Vol. 11. Blackwell, Malden, MA, USA, 375–399. URL http://people.reed.edu/~mab/papers/weak.emergence.pdf.
- Bedau, M. A. andHumphreys, P., Eds. (2007). Emergence: Contemporary Readings in Philosophy and Science.
- Bedau, M. A. andHumphreys, P., Eds. (2008). Emergence: Contemporary readings in philosophy and science.
- Beer, R. D. (2014). The cognitive domain of a glider in the game of life. Artificial Life. [CrossRef]
- Berlekamp, E. R., Conway, J. H., and Guy, R. K. (1982). Winning Ways for Your Mathematical Plays, Academic Press: London.
- Bersini, H. (2006). Formalizing emergence: the natural after-life of artificial life. See [39], 41–60. [CrossRef]
- Beuls, K. andSteels, L. (2013). Agent-Based Models of Strategies for the Emergence and Evolution of Grammatical Agreement. PLoS ONE. [CrossRef]
- Bitbol, M. (2012). Downward causation without foundations. Synthese. [CrossRef]
- Blackiston, D. , Kriegman, S., Bongard, J., and Levin, M. (2023). Biological robots: Perspectives on an emerging interdisciplinary field. Soft Robotics. [CrossRef] [PubMed]
- Blackiston, D. , Lederer, E., Kriegman, S., Garnier, S., Bongard, J., and Levin, M. (2021). A cellular platform for the development of synthetic living machines. Science Robotics.
- Camazine, S., Deneubourg, J.-L., Franks, N. R., Sneyd, J., Theraulaz, G., and Bonabeau, E. (2003). Self-Organization in Biological Systems, Princeton University Press: Princeton, NJ, USA. [Google Scholar]
- Campbell, D. T. (1974). `Downward causation’ in hierarchically organized biological systems. In Studies in the Philosophy of Biology, F. J. Ayala and T. Dobzhansky, (Eds.). Macmillan, New York City, NY, USA, 179–186.
- Čejková, J. , Banno, T., Hanczyc, M. M., and Štěpánek, F. (2017). Droplets as liquid robots. Artificial Life. [CrossRef]
- Chaitin, G. J. (1974). Information-theoretic limitations of formal systems. J. ACM. [CrossRef]
- Chaitin, G. J. (1975). Randomness and mathematical proof. Scientific American, 1975; 232, 47–52. Available online: http://tinyurl.com/y4tvm9.
- Chaitin, G. J. (2004). Irreducible complexity in pure mathematics. Arxiv preprint math/0411091. URL http://arxiv.org/abs/math/0411091.
- Chialvo, D. R. (2010). Emergent complex neural dynamics. Nature Physics, 2010; 6, 744–750. [Google Scholar] [CrossRef]
- De Domenico, M. , Camargo, C., Gershenson, C., Goldsmith, D., Jeschonnek, S., Kay, L., Nichele, S., Nicolás, J., Schmickl, T., Stella, M., Brandoff, J., Salinas, Á. J. M., and Sayama, H. (2019). Complexity explained: A grassroot collaborative initiative to create a set of essential concepts of complex systems. URL https://complexityexplained.github.io.
- Delahaye, J.-P. andZenil, H. (2012). Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness. Applied Mathematics and Computation, 2012; 219, 63–77. [Google Scholar] [CrossRef]
- Downing, K. L. (2015). Intelligence Emerging: Adaptivity and Search in Evolving Neural Systems, MIT Press: Cambridge, MA, USA. Available online: https://ieeexplore.ieee.org/book/7120879/.
- Eigen, M. andSchuster, P. (1979). The hypercycle, a principle of natural self-organization, Springer-Verla.
- Farnsworth, K. D. , Ellis, G. F. R., and Jaeger, L. (2017). Living through downward causation: From molecules to ecosystems. In From Matter to Life: Information and Causality, S. I. Walker, P. C. W. Davies, and G. F. R. Ellis, (Eds.). Cambridge University Press, Cambridge, UK, 303–333.
- Farnsworth, K. D. , Nelson, J., and Gershenson, C. (2013). Living is information processing: From molecules to global systems. Acta Biotheoretica, 2013; 61, 203–222. [Google Scholar]
- Feltz, B. , Crommelinck, M., and Goujon, P., Eds. (2006). Self-organization and Emergence in Life Sciences, Synthese Library, vol. 331. Springer.
- Flack, J. C. (2017). Coarse-graining as a downward causation mechanism. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 375 (2109): 20160338. Available online: https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2016.0338.
- Floridi, L. (2020). Ai and its new winter: from myths to realities. Philosophy & Technology, 33, 1–3. [CrossRef]
- Frei, R. andDi Marzo Serugendo, G. (2011). Advances in complexity engineering. International Journal of Bio-Inspired Computation, 2011; 3, 199–212. Available online: http://www.reginafrei.ch/pdf/IJBIC030401%20FREI%20published.pdf.
- Garfield, J. L. (1995). The fundamental wisdom of the middle way: Nagarjuna’s Mulamadhyamakakarika, Oxford University Press: Oxford, UK.
- Gershenson, C. (2007). Design and Control of Self-organizing Systems, CopIt Arxives: Mexico. TS0002EN. Available online: https://copitarxives.fisica.unam.mx/TS0002EN/TS0002EN.html.
- Gershenson, C. (2010). Computing networks: A general framework to contrast neural and swarm cognitions. Paladyn, Journal of Behavioral Robotics, 1, 147–153. [CrossRef]
- Gershenson, C. (2012). The world as evolving information. In Unifying Themes in Complex Systems, A. Minai, D. Braha, and Y. Bar-Yam, (Eds.). Vol. VII. Springer, Berlin Heidelberg, 100–115. URL http://arxiv.org/abs/0704.0304.
- Gershenson, C. (2013a). Facing complexity: Prediction vs. adaptation. In Complexity Perspectives on Language, Communication and Society, A. Massip and A. Bastardas, (Eds.). Springer, Berlin Heidelberg, 3–14. URL http://arxiv.org/abs/1112.3843.
- Gershenson, C. (2013b). The implications of interactions for science and philosophy. Foundations of Science, 18, 781–790. [Google Scholar]
- Gershenson, C. (2023a). Complexity and Buddhism: Understanding interactions. Buddhism Today, 52, 44–48.
- Gershenson, C. (2023b). Emergence in Artificial Life. Artificial Life, 29, 153–167. [CrossRef]
- Gershenson, C. andHelbing, D. (2015). When slower is faster. Complexity, 21, 9–15. [CrossRef]
- Gershenson, C. and Heylighen, F. (2003). When can we call a system self-organizing? In Advances in Artificial Life, 7th European Conference, ECAL 2003 LNAI 2801, W. Banzhaf, T. Christaller, P. Dittrich, J. T. Kim, and J. Ziegler, (Eds.). Springer, Berlin, 606–614. URL http://arxiv.org/abs/nlin.AO/0303020.
- Gershenson, C. , Trianni, V., Werfel, J., and Sayama, H. (2020). Self-organization and artificial life. Artificial Life, 26, 391–408. [CrossRef]
- Gibson, D. G. , Glass, J. I., Lartigue, C., Noskov, V. N., Chuang, R.-Y., Algire, M. A., Benders, G. A., Montague, M. G., Ma, L., Moodie, M. M., Merryman, C., Vashee, S., Krishnakumar, R., Assad-Garcia, N., Andrews-Pfannkoch, C., Denisova, E. A., Young, L., Qi, Z.-Q., Segall-Shapiro, T. H., Calvey, C. H., Parmar, P. P., Hutchison, C. A., Smith, H. O., and Venter, J. C. (2010). Creation of a bacterial cell controlled by a chemically synthesized genome. Science, 329, 52–56.
- Gödel, K. (1931). Über formal unentscheidbare sätze der principia mathematica und verwandter systeme i. Monatshefte für mathematik und physik, 38, 173–198.
- Haken, H. (1981). Synergetics and the problem of selforganization. In Self-Organizing Systems: An Interdisciplinary Approach, G. Roth and H. Schwegler, (Eds.). Campus Verlag, New York, pp. 9–13.
- Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 1990; 42, 335–346. Available online: https://www.sciencedirect.com/science/article/pii/0167278990900876.
- Hernández-Orozco, S. , Hernández-Quiroz, F., and Zenil, H. (2018). Undecidability and irreducibility conditions for open-ended evolution and emergence. Artificial Life, 24, 56–70.
- Heylighen, F. , Beigi, S., and Vidal, C. (2024). The third story of the universe: an evolutionary worldview for the noosphere. Working paper, CLEA/Human Energy.
- Heylighen, F. , Cilliers, P., and Gershenson, C. (2007). Complexity and philosophy. In Complexity, Science and Society, J. Bogg and R. Geyer, (Eds.). Radcliffe Publishing, Oxford, 117–134. URL http://arxiv.org/abs/cs.CC/0604072.
- Heylighen, F. and Joslyn, C. Cybernetics and second-order cybernetics. In Encyclopedia of Physical Science and Technology, Vol. 4, 155–170. [Google Scholar]
- Hidalgo, J. , Grilli, J., Suweis, S., Maritan, A., and Muñoz, M. A. (2016). Cooperation, competition and the emergence of criticality in communities of adaptive systems. Journal of Statistical Mechanics: Theory and Experiment, 2016; 033203. [Google Scholar] [CrossRef]
- Hills, T. T. , Todd, P. M., Lazer, D., Redish, A. D., and Couzin, I. D. (2015). Exploration versus exploitation in space, mind, and society. Trends in Cognitive Sciences, 19, 46–54. [CrossRef]
- Hoel, E. P. , Albantakis, L., and Tononi, G. (2013). Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences, 2013; 110, 19790–19795. [Google Scholar]
- Jablonka, E. andLamb, M. J. (2006). Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life, MIT Press: Cambridge, MA, USA. Available online: http://mitpress.mit.edu/books/evolution-four-dimensions.
- Kauffman, S. andRoli, A. (2021). The world is not a theorem. Entropy, 2021; 23. [Google Scholar]
- Kauffman, S. A. (1993). The Origins of Order, Oxford University Press: Oxford, UK.
- Kauffman, S. A. andRoli, A. (2023). A third transition in science? Interface Focus, 2023; 13, 20220063. Available online: https://royalsocietypublishing.org/doi/abs/10.1098/rsfs.2022.0063.
- Kohonen, T. (2000). In Self-Organizing Maps, 3rd ed.; Springer.
- Kriegman, S. , Blackiston, D., Levin, M., and Bongard, J. (2020). A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences, 117, 1853–1859.
- Langton, C. (1989). Artificial life. In Artificial life, C. Langton, (Ed.). Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley, Redwood City, CA, pp. 1–47.
- Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42, 12–37.
- Lendaris, G. G. (1964). On the definition of self-organizing systems. Proceedings of the IEEE, 52, 324–325. Available online: http://tinyurl.com/23zlnb.
- López-Díaz, A. J. , Sánchez-Puig, F., and Gershenson, C. (2023). Temporal, structural, and functional heterogeneities extend criticality and antifragility in random Boolean networks. Entropy, 25. Available online: https://www.mdpi.com/1099-4300/25/2/254.
- Mandelbrot, B. (1982). The fractal geometry of nature, WH Freeman.
- Martínez, G. , Adamatzky, A., and Alonso-Sanz, R. (2012). Complex dynamics of elementary cellular automata emerging in chaotic rules. International Journal of Bifurcation and Chaos, 22, 1250023.
- McLaughlin, B. P. (1992). The rise and fall of British emergentism. In Emergence or reduction? Essays on the prospects of nonreductive physicalism, Beckerman, Flohr, and Kim, (Eds.). Walter de Gruyter, Berlin, 49–93.
- Mengal, P. (2006). The concept of emergence in the XIXth century: from natural theology to biology. See [39], 215–224.
- Mitchell, M. (2020). On crashing the barrier of meaning in artificial intelligence. AI Magazine, 41, 86–92. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1609/aimag.v41i2.5259.
- Mitchell, M. (2023). AI’s challenge of understanding the world. Science, 382, eadm8175. Available online: https://www.science.org/doi/abs/10.1126/science.adm8175.
- Mora, T. andBialek, W. (2011). Are biological systems poised at criticality? Journal of Statistical Physics, 144, 268–302. [CrossRef]
- Moreno, A. andRuiz-Mirazo, K. (2009). The problem of the emergence of functional diversity in prebiotic evolution. Biology & Philosophy, 24, 585–605. [CrossRef]
- Morin, E. (2007). Restricted complexity, general complexity. In Philosophy and Complexity, C. Gershenson, D. Aerts, and B. Edmonds, (Eds.). Worldviews, Science and Us. World Scientific, Singapore, 5–29. URL https://arxiv.org/abs/cs/0610049.
- Muñoz, M. A. (2018). Colloquium: Criticality and dynamical scaling in living systems. Rev. Mod. Phys. 90, 031001. Available online: https://link.aps.org/doi/10.1103/RevModPhys.90.031001.
- Mukherjee, S. (2022). The Song of the Cell: An Exploration of Medicine and the New Human, The Bodley Head, London, UK.
- Muñuzuri, A. P. andPérez-Mercader, J. (2022). Unified representation of life’s basic properties by a 3-species stochastic cubic autocatalytic reaction-diffusion system of equations. Physics of Life Reviews, 41, 64–83.
- Nicolis, G. andPrigogine, I. (1977). Self-Organization in Non-Equilibrium Systems: From Dissipative Structures to Order Through Fluctuations, Wiley, Chichester.
- Pagels, H. R. (1989). The Dreams of Reason: The Computer and the Rise of the Sciences of Complexity, Bantam Books: New York City, NY, USA.
- Pascual, M. andGuichard, F. (2005). Criticality and disturbance in spatial ecological systems. Trends in Ecology & Evolution, 20, 88–95. Available online: https://www.sciencedirect.com/science/article/pii/S0169534704003428.
- Pattee, H. H. andSayama, H. (2019). Evolved open-endedness, not open-ended evolution. Artificial Life, 25, 4–8.
- Pfeifer, R. , Lungarella, M., and Iida, F. (2007). Self-organization, embodiment, and biologically inspired robotics. Science, 318, 1088–1093.
- Pineda, O. K. , Kim, H., and Gershenson, C. (2019). A novel antifragility measure based on satisfaction and its application to random and biological Boolean networks. Complexity, 2019, 1-. [CrossRef]
- Prokopenko, M. , Boschetti, F., and Ryan, A. (2009). An information-theoretic primer on complexity, self-organisation and emergence. Complexity, 15, 11–28. [CrossRef]
- Rasmussen, S. , Bedau, M. A., Chen, L., Deamer, D., Krakauer, D. C., Packard, N. H., and Stadler, P. F., Eds. (2008). Protocells: Bridging Nonliving and Living Matter, MIT Press: Cambridge, MA, USA.
- Roli, A. andKauffman, S. A. (2020). Emergence of organisms. Entropy, 22.
- Roli, A. , Villani, M., Filisetti, A., and Serra, R. (2018). Dynamical criticality: Overview and open questions. Journal of Systems Science and Complexity, 31, 647–663. [CrossRef]
- Rosenblueth, A. , Wiener, N., and Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10, 18–24.
- Rota, G. C. (1986). In memoriam of Stan Ulam —the barrier of meaning. Physica D, 2, 1–3.
- Rovelli, C. (2021). Helgoland: Making Sense of the Quantum Revolution, Riverhead Books: New York City, NY, USA.
- Rupe, A. andCrutchfield, J. P. (2024). On principles of emergent organization. Physics Reports, 1071, 1–47. Available online: https://www.sciencedirect.com/science/article/pii/S0370157324001327.
- Sánchez-Puig, F. , Zapata, O., Pineda, O. K., Iñiguez, G., and Gershenson, C. (2023). Heterogeneity extends criticality. Frontiers in Complex Systems, 2023; 1. Available online: https://www.frontiersin.org/articles/10.3389/fcpxs.2023.1111486.
- Schmickl, T. (2022). Strong emergence arising from weak emergence. Complexity, 2022; 2022, 9956885. [Google Scholar] [CrossRef]
- Schweitzer, F. , Ed. (1997). Self-Organization of Complex Structures: From Individual to Collective Dynamics, Gordon and Breach: London.
- Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3, 417–424.
- Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423. [CrossRef]
- Simon, H. A. (1996). The Sciences of the Artificial, 3rd ed., MIT Press: Cambridge, MA, USA.
- Standish, R. K. (2003). Open-ended artificial evolution. International Journal of Computational Intelligence and Applications, 3, 167–175.
- Stanley, H. E. (1987). Introduction to phase transitions and critical phenomena, Oxford University Press: Oxford, UK.
- Steels, L. andBrooks, R. (1995). The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents, Lawrence Erlbaum Associates: New York City, NY, USA.
- Taleb, N. N. (2012). Antifragile: Things That Gain From Disorder, Random House: London, UK.
- Taylor, T. (2024). An Afterword to Rise of the Self-Replicators: Placing John A. Etzler, Frigyes Karinthy, Fred Stahl, and Others in the Early History of Thought About Self-Reproducing Machines. Artificial Life, 30, 91–105. [CrossRef]
- Taylor, T. , Bedau, M., Channon, A., Ackley, D., Banzhaf, W., Beslon, G., Dolson, E., Froese, T., Hickinbotham, S., Ikegami, T., et al. (2016). Open-ended evolution: perspectives from the oee workshop in York. Artificial Life, 22, 408–423.
- Taylor, T. andDorin, A. (2020). Rise of the Self-Replicators: Early Visions of Machines, AI and Robots That Can Reproduce and Evolve, Springer. Available online: https://www.tim-taylor.com/selfrepbook/.
- Torres-Sosa, C. , Huang, S., and Aldana, M. (2012). Criticality is an emergent property of genetic networks that exhibit evolvability. PLoS Comput Biol, 8, e1002669.
- Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230–265. Available online: http://www.abelard.org/turpap2/tp2-ie.asp.
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
- von Foerster, H. (1960). On self-organizing systems and their environments. In Self-Organizing Systems, M. C. Yovitts and S. Cameron, (Eds.). Pergamon, New York, pp. 31–50.
- von Neumann, J. (1966). The Theory of Self-Reproducing Automata, University of Illinois Press: Champaign. Edited by A. W. Burks.
- von Neumann, J. andMorgenstern, O. (1944). Theory of Games and Economic Behavior, Princeton University Press. Available online: http://en.wikipedia.org/wiki/Theory_of_games_and_economic_behavior.
- Wagner, A. (2005). Robustness and Evolvability in Living Systems, Princeton University Press: Princeton, NJ. Available online: http://www.pupress.princeton.edu/titles/8002.html.
- Walker, S. I. (2014). Top-down causation and the rise of information in the emergence of life. Information, 5, 424–439. Available online: https://www.mdpi.com/2078-2489/5/3/424.
- Walker, S. I. , Bains, W., Cronin, L., DasSarma, S., Danielache, S., Domagal-Goldman, S., Kacar, B., Kiang, N. Y., Lenardic, A., Reinhard, C. T., Moore, W., Schwieterman, E. W., Shkolnik, E. L., and Smith, H. B. (2018). Exoplanet biosignatures: Future directions. Astrobiology. [CrossRef] [PubMed]
- Watson, R. A. , Mills, R., and Buckley, C. L. (2011). Global adaptation in networks of selfish components: Emergent associative memory at the system scale. Artificial Life, 17, 147–166.
- Wei, J. , Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., et al. (2022). Emergent abilities of large language models. arXiv:2206.07682.
- Weinberg, S. (1993). Dreams of a Final Theory: The Search for the Fundamental Laws of Nature, Vintage.
- Whitehead, A. N. andRussell, B. (1910–13). Principia Mathematica, Cambridge University Press: Cambridge, UK.
- Wiener, N. (1948). Cybernetics; or, Control and Communication in the Animal and the Machine. Wiley and Sons: New York.
- Wolfram, S. (2002). A New Kind of Science, Wolfram Media: Champaign, IL, USA. Available online: http://www.wolframscience.com/thebook.html.
- Wolpert, D. H. (2022). What can we know about that which we cannot even imagine?
- Wolpert, D. H. and Macready, W. G. (1995). No free lunch theorems for search. Tech. Rep. SFI-WP-95-02-010, Santa Fe Institute.
- Wolpert, D. H. andMacready, W. G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82.
- Zenil, H. , Ed. (2013). Irreducibility and Computational Equivalence: 10 Years After Wolfram’s A New Kind of Science, Emergence, Complexity and Computation, Springer: Berlin, Heidelberg.
Notes
1. The word “complexity” comes from the Latin plexus, which could be translated as “entwined”. We can thus say that complex systems are those whose elements are difficult to separate [33]. This is because there are relevant interactions among them [48]. Thus, the traditional reductionist approach that simplifies and isolates in order to predict is inadequate to study complexity [47].
2. Artificial life applies the synthetic method to biology [109]: building systems that attempt to reproduce properties of living systems in order to understand them better [4].
3. Also known as “control and communication in animals and machines” [127].
4. Well, he was also a student of Julia. And his uncle Szolem (who knew Sierpiński) had suggested that he work on iterative functions. And he was extremely smart.
5. Certainly, the history of pathology is much more complex than that [85].
6. I am not suggesting that the failed attempts will never succeed, nor that relevant progress has not been made. My argument, explained below, is that we will not achieve these goals with the limited methods we have now, although new methods that overcome the present limits may eventually be developed.
7. One example comes from personal conversations with David Wolpert, who does not believe in downward causation, but concedes that in some cases it might in practice be easier to predict lower-scale phenomena from higher-scale properties, similar to the one-way functions used in cryptography: in reality, the higher scale is caused by the lower one, but in practice this is not computable. Another view is that speaking about causality between scales is a conceptual mistake, since independently of observers, phenomena occur at all scales [44] (p. 31). It is only our descriptions that represent limited aspects of phenomena at particular scales.
8. A stationary problem does not change in time, so once a solution is found, it will remain valid. A non-stationary problem does change in time, so novel solutions should be found, ideally as fast as the problem changes [44].
9. We can roughly define balance as that which avoids extremes.