Preprint
Article

Complexity, Artificial Life, and Artificial Intelligence

This version is not peer-reviewed.

Submitted: 26 April 2024
Posted: 28 April 2024


Abstract
The scientific fields of complexity, artificial life (ALife), and artificial intelligence (A.I.) share several commonalities: historical, conceptual, methodological, and philosophical. It was possible to develop them only because of information technology, while their origins can be traced back to cybernetics. In this perspective, I'll review the expectations and limitations of these fields, some of which have their roots in the limits of formal systems. I will use interactions, self-organization, emergence, and balance to compare different aspects of complexity, ALife, and A.I. The paper poses more questions than answers, but hopefully it will be useful to align efforts in these fields towards overcoming (or accepting) their limits.
Keywords: complexity; emergence; self-organization; balance
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

“The best way to understand man is by creating him”
José Negrete-Martínez
Complexity has been studied since antiquity. There are also several historical examples of artificial life (ALife), with a mild surge after the publication of Mary Shelley’s “Frankenstein; or, The Modern Prometheus” in 1818 (see [10,105,107]). Still, the modern scientific study of complex systems and the field of artificial life both originated in the 1980s around the Santa Fe Institute (SFI, founded in 1984) and the nearby Los Alamos National Laboratory (LANL, created for the Manhattan Project) in northern New Mexico.
I will not attempt to provide a historical account of complexity and ALife. My purpose is to note the conceptual, methodological, and philosophical similarities and differences between both fields and, why not, also artificial intelligence (A.I.).
In the next section, I’ll review the historical and technological circumstances that predated the development of complexity, ALife, and A.I. In Section 3, I’ll mention common limitations that these fields face, along with the expectations they have generated. In subsequent sections, I’ll relate the concepts of interactions, self-organization, emergence, and balance to complexity, ALife, and A.I., before closing the paper with open questions.

2. Computers as Telescopes

“Where there is an observatory and a telescope, we expect that any eyes will see new worlds at once.”
Henry David Thoreau
Why were complexity as we know it and “life as it could be” [68] developed in the 1980s, and not before or after? Personal computers. Before then, digital computing was restricted to the few research institutions that could afford the expensive equipment. PCs changed everything. The number of people who could exploit and explore new possibilities in information processing suddenly exploded.
As already mentioned, there were a few early examples of what could be considered artificial life, e.g. [13,112], while Alan Turing [110], John von Neumann [113], and others were interested in the potential ability of computers to model the human mind. Actually, we can say that cybernetics [6,58,92,119] laid the basis for the scientific study of complex systems, intelligence, and life. This is because cybernetics was the first transdisciplinary effort to study phenomena independently of their substrate. In other words, systems were studied in terms of their organization, rather than in terms of their components. And since organization [7,8,95,111] can be described in terms of information [88,100], it became clear that the technology capable of increasing information processing (a.k.a. computation), storage, and transmission would be essential.
Something similar happened with fractals [72], which were named only in 1975 by Benoît Mandelbrot. Still, some examples had already been proposed in the late XIXth and early XXth centuries by Cantor, von Koch, Sierpiński, and others. Moreover, Gaston Julia and Pierre Fatou had studied iterative functions, which can be used to construct fractals. Still, these were mostly forgotten. But Mandelbrot had a huge advantage: access to computers that could draw fractals, since he worked at IBM Research1. Then, interest in fractals exploded.
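As an illustration (a minimal sketch of my own, not any historical program), the iteration Julia and Fatou studied, z → z² + c, can be explored in a few lines of Python; checking which values of c keep the orbit bounded traces the Mandelbrot set, and this kind of exhaustive drawing is precisely what computers made practical:

```python
# Minimal sketch (not historical code): iterate z -> z^2 + c and count
# how many steps it takes |z| to escape a radius of 2. Values of c whose
# orbit never escapes belong to the Mandelbrot set.

def escape_time(c: complex, max_iter: int = 50) -> int:
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:        # once |z| > 2, the orbit diverges
            return n
        z = z * z + c
    return max_iter             # treated as "did not escape"

# Coarse ASCII rendering of the region [-2, 1] x [-1, 1] of the complex plane.
for im in range(20, -21, -2):
    print("".join("#" if escape_time(complex(re / 20, im / 20)) == 50 else "."
                  for re in range(-40, 21, 2)))
```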
Before telescopes, no planet beyond Saturn could be detected, and no moon other than our own. Galileo was able to see Jupiter’s four largest satellites with his telescope. More planets followed. With more powerful telescopes, other galaxies were observed less than a century ago. The first exoplanet was detected in 1992. Now there are more than five thousand confirmed exoplanets in more than four thousand planetary systems. It is only because of these observations of exoplanets that we now know that most stars have planetary systems, even if we have yet to detect most of them.
Before microscopes, doctors were taught that disease was caused by an imbalance of “humors” (or by astrological influence, from which the name influenza comes). Still, it took more than two centuries after microscopes for the germ theory of disease to be accepted. But without seeing pathogens, how could we attempt to prevent and cure the diseases they cause? Leeches, of course.
Before computers, we did not have proper tools to study complex systems. Just as our vision is limited in perceiving the macro and the micro, our limited cognition restricted us to dealing with only a few variables, even if we had huge blackboards. As Heinz Pagels noted, computers are like telescopes for complexity [83]. And for artificial life. And for artificial intelligence. All three have information processing at their core. Thus, we could only begin to study them once information technology reached a level where enough information could be stored, transmitted, and processed to simulate intelligence, life, and complexity [101].
Why New Mexico, “land of enchantment”? This is a trickier question. Better said, attempting to answer it has to be more subjective. Still, I can speculate that at the time there was enough talent (including some Nobel prize winners) and freedom of research at LANL (for example, arXiv was created there by Paul Ginsparg in 1991). Unfortunately, the situation at LANL later changed for different reasons, resulting in limited creativity and fewer people being attracted to it. Nevertheless, “back in the day”, it was remote enough that non-mainstream ideas could be explored, but not too remote, so that the good ideas that came out could still spread.

3. Promises and Limits

“Every man takes the limits of his own field of vision for the limits of the world”.
Arthur Schopenhauer
“The limits of my language mean the limits of my world”.
Ludwig Wittgenstein
One could naïvely think that we just need enough computational power to completely model and understand intelligence, life, and complexity. Many promises were made: robots smarter than humans, diseases cured, genomes controlled, the future predicted precisely... All such attempts have failed, yet some researchers are still hopeful of achieving these goals with better models and faster computers. And many projects with these expectations are still being funded. Nevertheless, even before the first electronic computers were built, this approach was “doomed” by the limits of formal systems that were proven by Gödel [52], Turing [109], Chaitin [27,29], and others.
Even though adaptation is widely used [4], there is always a part of a system (axioms in the formal case, hardware or hard-coded elements in the engineering case) that cannot be changed. Still, we might argue that “real” intelligence, life, and complexity cannot change the laws of physics or chemistry, so in a sense they are also limited.
Independently of our definition of intelligence, life, or complexity, we can say that artificial systems have yet to exhibit behavior as rich as that of natural systems. Could this be because of the limits of formal systems? Or simply because we have yet to understand how nature changes itself?
Moreover, it might be that we want artificial systems to be simpler than natural ones. This is because we can attempt to better understand less detailed versions of natural systems.
In the case of artificial life, these limits have been evident in the study of open-ended evolution [85,102,106]. As Hernández-Orozco et al. [55] showed, undecidability and irreducibility (which might be considered as desirable or undesirable, but are precisely some of the limits of formal systems) are conditions for open-endedness.
For complexity, a relevant case is that of emergence [1,14,16,97] (to be expanded in Section 6). There are several notions and flavors of emergence. In general, it can be said that emergent properties are those present at one scale (usually higher/slower, but not necessarily) and not at another scale (usually lower/faster) [47]. In particular, “strong emergence” is seen as problematic by some, since it usually implies downward causation [21,25,35,38]. This means that emergent properties at a higher scale have a causal effect on elements at a lower scale. We have yet to find a formalism that properly describes downward causation, while some argue that it does not even exist (downward causation might be apparent, an epiphenomenon, but the laws of physics explain everything). Could it be because of the same limits of formal systems? Nevertheless, for practical purposes, does it really matter? Even if in theory everything could be reduced to physics, in practice it is not. So in any case we do need descriptions at all levels to understand and face complexity.
For A.I., several limits have been identified, one of the most relevant being that of meaning [76,93]. In principle and in practice, machines can simulate our cognitive abilities in very sophisticated ways. Still, do they “really” understand [54,99]? We might say that pragmatically it does not matter. But it should matter, as a feature of human cognition is the ability to arbitrarily and adaptively change meanings, which again seems limited by the formal systems used to implement A.I. There have been impressive advances within information theory, but methods for creating semantics and understanding are still at an early stage.
It might be that these limits are actually a feature, not a problem. We “just” need to accept them to be able to exploit them, rather than fight against them. Imagine that mathematics (or any formal system) were consistent, complete, and decidable, as Hilbert and others hoped. Yes, we would have “absolute truths” and certainty. But would we have creativity? Innovation? Serendipity? It seems to me that many of the features of our world (without which we would not be here) require the limits we have been so eagerly trying to eliminate.

4. Interactions

“The aim of science is not things themselves, as the dogmatists in their simplicity imagine,
but the relations among things; outside these relations there is no reality knowable.”
Henri Poincaré
Etymologically and conceptually, we can say that the most relevant feature of complex systems is interactions [31,45]. Complexity comes from the Latin plexus, which means entwined, and has some similarities with the Sanskrit tantra. In both cases, interactions make it difficult to study or describe elements in isolation, just like threads in a fabric. We can say that this is related to the concept of tendrel (Tibetan; Sanskrit Pratītyasamutpāda) from Buddhist philosophy, which could be translated as “interdependent origination”, “dependent arising”, or simply “causation”. Tendrel expresses the idea that phenomena arise in relation to other phenomena. Nothing can be isolated, nor be caused only by itself or out of nothing. So everything is related, directly or indirectly [40,46,94].
Neither from itself nor from another,
Nor from both,
Nor without a cause,
Does anything whatever, anywhere arise.
—Nāgārjuna, Mūlamadhyamakakārikā 1:1
Traditional science and philosophy (since the times of Galileo, Descartes, Newton, Laplace...) have been reductionist, in the sense that within this paradigm we try to simplify and isolate phenomena to predict and control them [57,79]. In other words, we aim at finding fundamental “laws” and using them to obtain a priori knowledge (predict the future), reducing phenomena to the fundamental laws used to describe them. This has been extremely successful and has led to impressive advances in engineering, medicine, and more. Still, this success does not imply that reductionism has no limits, nor that there are no more suitable descriptions of the world for certain purposes. Precisely when we have relevant interactions, reductionism is inadequate, as it neglects interactions and their implications.
“Reductionism is correct, but incomplete.”
Murray Gell-Mann
There are several implications of interactions [45], but at a general level I can say that the main one is that interactions may produce information that was not present in initial or boundary conditions. This inherently limits predictability [44], as we cannot know a priori which information will be generated. This is known as computational irreducibility [55,120,123]: there is no “shortcut” to the future, as information has to be processed through interactions to reach it.
It should be noted that in practice, computational irreducibility might not pose as much of a challenge as it does in theory. If we are interested only in a particular context, we could potentially explore exhaustively, or at least systematically, all or several possibilities, and then a posteriori be able to describe and predict the future of complex systems, including their emergent properties and variables. Still, if we are dealing with non-stationary problems, then even if we have a “full” understanding of a particular complex system, if the problem changes (which is not rare, precisely because of interactions), it might be that new relevant information will arise and our understanding will become obsolete.
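As a toy illustration of computational irreducibility (a sketch under my own assumptions, not an example taken from the cited works), consider an elementary cellular automaton such as Wolfram's Rule 30: as far as we know, the only way to obtain its configuration after t steps is to compute all t updates, one interaction at a time:

```python
# Minimal sketch of computational irreducibility with Rule 30: to know the
# configuration at step t, we compute every one of the t updates; no known
# closed-form "shortcut" gives the answer directly.

def rule30_step(cells):
    n = len(cells)
    # New cell value = left XOR (center OR right), with periodic boundaries.
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                     # a single "on" cell in the middle
for _ in range(16):
    print("".join("#" if c else " " for c in cells))
    cells = rule30_step(cells)
```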
The fact that traditional tools (from reductionist science) are insufficient to study complex systems has led some researchers to seek alternatives [56,65], in part because we seem unable to address global challenges precisely because of their complexity.
The relevance of interactions and limits of predictability have been discussed mainly in relation to complex systems, but they are relevant for ALife and A.I. as well. Interactions in ALife and A.I. systems can also generate novel information, limiting predictability, for better or for worse. There have been several attempts with varying degrees of success, but we still lack a general, common framework to describe, understand, and control complex systems. And it might be that such a framework could be developed within ALife or A.I., and then generalized for all complex systems.

5. Self-Organization

“The beauty of a living thing is not the atoms that go into it,
but the way those atoms are put together.”
Carl Sagan
There are several examples of self-organization in nature [24]: flocks, schools, swarms, herds, crowds. In these examples, there is no leader or external source telling individuals what to do; rather, the properties of the system result from the distributed interactions of individuals. Thus, the study of self-organization is closely tied to complexity and to the information technology necessary to model it. Also, the term “self-organizing system” has its origins in cybernetics [5,7,70,111]. Nevertheless, there have also been several examples of self-organization in physical and chemical systems [9,34,53,82,98].
A system can be described as self-organizing when its components interact to produce a global pattern or behavior [49]. This description can be useful when we are interested in relating multiple scales (elements and system, micro and macro), and how changes in one might affect the other (e.g. changes in individuals affect a society or changes in a society affect individuals).
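For instance, a minimal, hypothetical sketch in the spirit of Vicsek-style flocking models (the parameters below are arbitrary choices of mine) shows how a global pattern, here a common direction of motion, can arise from purely local interactions, without any leader:

```python
# Illustrative sketch (arbitrary parameters): a Vicsek-style model where each
# agent adopts the average heading of its neighbors plus noise. With low
# noise, a shared direction of motion self-organizes without any leader.
import math
import random

N, L, R, NOISE, SPEED, STEPS = 100, 10.0, 1.0, 0.3, 0.05, 200
random.seed(1)
pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
ang = [random.uniform(-math.pi, math.pi) for _ in range(N)]

for _ in range(STEPS):
    new_ang = []
    for i in range(N):
        sx = sy = 0.0
        for j in range(N):                       # neighbors within radius R
            dx = (pos[i][0] - pos[j][0] + L / 2) % L - L / 2
            dy = (pos[i][1] - pos[j][1] + L / 2) % L - L / 2
            if dx * dx + dy * dy <= R * R:
                sx += math.cos(ang[j])
                sy += math.sin(ang[j])
        new_ang.append(math.atan2(sy, sx) + random.uniform(-NOISE, NOISE))
    ang = new_ang
    pos = [((x + SPEED * math.cos(a)) % L, (y + SPEED * math.sin(a)) % L)
           for (x, y), a in zip(pos, ang)]

# Order parameter: length of the mean heading vector (1 = fully aligned).
order = math.hypot(sum(math.cos(a) for a in ang),
                   sum(math.sin(a) for a in ang)) / N
print(f"order parameter after {STEPS} steps: {order:.2f}")
```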
If we are dealing with a complex problem, novel information can make it “non-stationary”, i.e. the problem changes. If the problem changes faster than the time required to find a novel solution through optimization or another traditional technique, then the solutions will be obsolete. Self-organization can be a viable approach for developing adaptive solutions that are able to face non-stationary problems, because when the problems change, elements can adjust through their interactions [39,41].
Self-organization has been used broadly in ALife: for software (digital organisms), hardware (robots), and wetware (protocells). See [50] for a review.
In A.I., self-organization has had a more limited use. Still, it could be argued that most artificial neural network models are implicitly self-organizing [42], as their weights (interactions) are modified during the training phase. And explicitly, Kohonen networks are self-organizing [66]. Also in robotics, self-organization has been relevant, implicitly or explicitly [86].
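As a hedged illustration of the latter, the following minimal sketch of a one-dimensional Kohonen-style map (with a simplified hard-threshold neighborhood rather than Kohonen's full algorithm) shows how units self-organize so that neighboring units come to respond to neighboring inputs:

```python
# Minimal sketch of a Kohonen-style self-organizing map (simplified: a hard
# neighborhood threshold instead of a Gaussian). A 1-D chain of units adapts
# to 2-D inputs; the resulting ordering is produced by the updates themselves,
# not imposed from outside.
import math
import random

random.seed(0)
K = 20                                            # units in the chain
weights = [[random.random(), random.random()] for _ in range(K)]
samples = [(0.5 + 0.4 * math.cos(a), 0.5 + 0.4 * math.sin(a))
           for a in [random.uniform(0, 2 * math.pi) for _ in range(500)]]

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)                     # decaying learning rate
    radius = max(1.0, (K / 2) * (1 - t / 2000))   # shrinking neighborhood
    x = random.choice(samples)
    # Best-matching unit: the unit whose weights are closest to the input.
    bmu = min(range(K), key=lambda i: (weights[i][0] - x[0]) ** 2 +
                                      (weights[i][1] - x[1]) ** 2)
    for i in range(K):
        if abs(i - bmu) <= radius:                # update the BMU's neighborhood
            weights[i][0] += lr * (x[0] - weights[i][0])
            weights[i][1] += lr * (x[1] - weights[i][1])

# Consecutive units should now map to nearby points on the ring of inputs.
print([(round(w[0], 2), round(w[1], 2)) for w in weights])
```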

6. Emergence

“You could not have evolved a complex system like a city or an organism — with an enormous number of components — without the emergence of laws that constrain their behavior in order for them to be resilient.”
Geoffrey West
The concept of emergence has certain analogies with Aristotle’s “the whole is more than the sum of its parts”, where the “more” is the emergent bit. Emergence was popular in the XIXth century [74,75], but fell out of favor in the early XXth century due to the success of reductionist approaches. But when information technology allowed the scientific study of complex systems, emergence became relevant again [15].
Still, emergence probably caused most of the confusion and skepticism around complexity in the 1980s and 1990s, in part because some people described emergent properties as “surprising” or “unexpected”. By that account, emergence would be a measure of our ignorance, because once we understand these properties, they are no longer surprising nor unexpected.
Nevertheless, there is nothing mysterious about emergence if properly described [3]. In a general way, emergent properties are those present at one scale but not at another [47]. For example, a bar of gold has color, conductivity, malleability, etc. Still, its components (gold atoms) do not have these properties, so we can call them emergent. In a similar way, it is accepted that cells are alive, but they are composed of molecules that are not alive. Whatever our definition of life, we can say that it emerges out of the interactions of molecules. It is accepted that a human is intelligent, but she is composed of cells that are not intelligent (in the same way). Whatever our definition of intelligence, we can say that intelligence emerges out of interactions of cells.
There are different flavors of emergence, some less controversial than others (see [47] for a review). For example, weak emergence [14] is about properties described by an observer, such as gliders in the Game of Life [17,18]. Still, gliders do not change the rules of the Game of Life, and we only need these rules to compute the future states of the system. Strong emergence [12,97] would be when having all information at one scale is not enough to derive information at another scale. In many cases, strongly emergent properties or information have a causal effect on the elements that produced them. For example, molecules form cells, but living cells make molecules that cannot be produced without biospheres. Also, individuals create social norms, and these norms promote and constrain the behaviors of individuals.
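The glider example can be made concrete with a few lines of Python (an illustrative sketch, not taken from the references): the rules below only mention cells and their neighbors, yet after four steps the observer-level “glider” reappears displaced by one cell diagonally:

```python
# Illustrative sketch: Conway's Game of Life rules (birth on 3 live neighbors,
# survival on 2 or 3) applied to a glider. The rules never mention "glider",
# yet the pattern an observer describes at that level reappears, shifted
# diagonally, every four steps.
from collections import Counter

def step(alive):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):
    cells = step(cells)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(cells == shifted)        # True: same glider, displaced by one cell
```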
One could say that weak emergence is “in the observer”, while strong emergence “is real”. Some (reductionist) people do not believe strong emergence exists, as it implies downward causation, and for them only “fundamental” phenomena described by physics are real. Independently of our notion of reality, in practice the laws of physics are not sufficient to describe, explain, and even less predict phenomena at higher scales (even in fluid dynamics and chemistry; we do not have to go to life, intelligence, and culture).
I conjecture that strongly emergent properties are not computable in practice, and that is why a lower-scale description is not enough to predict them. If there is no practical way in which the properties of one scale can be described in terms of the “laws” of another, then we can validly describe those properties as emergent. Of course, this cannot be proven, for reasons similar to why a number cannot be proven to be random [28] or why Kolmogorov complexity is not computable (in theory) [32]. Note that this approach does not rely on downward causation, but does not rule it out either [61].
For example, a person can be melted by the words of their loved one, but this cannot be derived from the laws of physics, no matter how detailed a description one might have at the “fundamental” level. Certainly, the laws of physics are not being violated. They are simply not enough, as there is no meaning in physics [36,43].
Emergence has been a central concept for complex systems and artificial life [47]. Many ALife models have been used to better describe and understand different flavors of emergence, e.g. [19,20,59,73,78,90,108,116,117].
In A.I., emergence has been less relevant. Still, unpredictable capabilities of large language models have recently been described as emergent [118], sparking some controversy.

7. Balance

“Everything tends to a balance.”
In recent years, I have been developing a narrative of “balance” to bring together concepts from the scientific study of complex systems, with the purpose of communicating them to a general audience. There are several historical examples of balance from ancient cultures, and it is common knowledge to try to avoid extremes. Still, criticality [2,9,11,30,77,80,84,91,96,103] can be seen as a type of balance between order and chaos [63,69]. Life (and computation) needs some stability (order) to keep on functioning. But too much stability limits adaptability. At the other extreme, too much variability (chaos) loses useful information. At “the edge”, evolution, life, and intelligence can emerge.
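A rough sketch of this intuition, in the spirit of Kauffman's random Boolean networks (the parameters and the damage-spreading measure below are my own simplifying assumptions), is to flip one bit, run two copies of the same network, and see whether the difference dies out (order), spreads (chaos), or remains marginal (around K = 2 inputs per node):

```python
# Rough sketch (simplifying assumptions mine) of Kauffman-style random Boolean
# networks: perturb one node, run two copies of the same network, and measure
# how much they differ after some steps. Damage tends to die out for K = 1
# (order), to spread for K = 3 (chaos), and to stay marginal around K = 2.
import random

def random_rbn(n, k):
    inputs = [random.sample(range(n), k) for _ in range(n)]
    tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    return [tables[i][int("".join(str(state[j]) for j in inputs[i]), 2)]
            for i in range(len(state))]

def avg_damage(k, n=100, steps=50, trials=50):
    total = 0
    for _ in range(trials):
        inputs, tables = random_rbn(n, k)
        a = [random.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1                                   # flip a single node
        for _ in range(steps):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))  # Hamming distance
    return total / trials

random.seed(3)
for k in (1, 2, 3):
    print(f"K={k}: average final damage {avg_damage(k):.1f} of 100 nodes")
```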
More generally, balance is a tautology, because we describe a posteriori the phenomena that survived/evolved as balanced, between “too little” and “too much” change. Certainly, there can be “dynamic balance”, where the precise tradeoff varies and systems need to adapt (as exemplified by the slower-is-faster effect [48]). Also, interactions, perturbations, or noise can increase the change in a system, for which antifragility [87,104] is desirable. And we have recently shown that heterogeneity can “extend” the “balanced” region of systems [71,96].
In A.I., a well-studied balance is that between exploration and exploitation in search [33,60], also known as search in breadth or in depth, respectively (when solution spaces are represented as trees). In other words, to try to find the best solution to a problem, one can exploit current solutions and try to improve them, or explore completely novel solutions in the hope that some might be better than the current ones. Since the best strategy cannot be prestated, as it depends on the problem space [121,122], the appropriate balance between exploration and exploitation will depend on the particular problem space that is searched.
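A standard toy illustration of this tradeoff (not taken from the cited works) is an epsilon-greedy strategy on a multi-armed bandit: epsilon = 0 only exploits, epsilon = 1 only explores, and some intermediate value usually works best, although the best value depends on the problem:

```python
# Illustrative sketch (not from the cited works): epsilon-greedy action
# selection on a multi-armed bandit. Exploitation picks the arm currently
# estimated as best; exploration picks an arm at random.
import random

def run(epsilon, true_means, steps=2000):
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:                        # explore
            arm = random.randrange(len(true_means))
        else:                                                # exploit
            arm = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

random.seed(7)
arms = [0.1, 0.5, 0.9, 0.3]                   # hypothetical expected rewards
for eps in (0.0, 0.1, 0.5, 1.0):
    print(f"epsilon={eps}: average reward {run(eps, arms):.2f}")
```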
Balance also offers a promising narrative to study evolution (natural and artificial) [62,64,114], as by definition that which evolves needs to be balanced.

8. Inconclusion

“Being ill defined is a feature common to all important concepts.”
Benoît Mandelbrot
I have mentioned conceptual similarities and challenges among complexity, ALife, and A.I. Still, there are many open questions.
There are no agreed definitions of complexity, life, nor intelligence. But perhaps this is more a feature than a problem. If we could define one of these precisely, then we would not have so many open questions about them. And we do have such questions because their richness goes beyond our current abilities of understanding. It remains to be seen whether we just need a revolution in science [56,65] to be able to understand them properly. Or it might be that there are aspects that are inherently beyond understanding as we know it.
Still, in practice, there have been several relevant recent advances that have generated great expectations. Novel forms of life, whether exploiting current ones [22,23,51,67] or exploring new ones [26,81,89], are relevant for understanding life on Earth, but also on other planets [115].
Historically, A.I. has had its cycles of expectations (summers) and disappointments (winters). We have had several years of building expectations. For example, autonomous vehicles are still “two years away” after more than fifteen years. Deep neural networks and large language models have achieved impressive performances, but in the end, they are “just” ad hoc statistical engines. It is not clear how, following the same approach, something like “understanding meaning” could be achieved [76]. Still, for many practical purposes, this is not relevant. Nevertheless, there are limits to what current approaches will be able to do.
As for the scientific study of complex systems, perhaps its success will be achieved when most disciplines finish integrating its concepts and methods and adopt them as their own, so that few people would actually speak about “complexity economics” or “biological complexity”, simply because most people would be familiar with the relevant concepts and methods. Still, there will always be a narrow space for studying complexity per se, as the study of the commonalities of systems at all scales.

Acknowledgments

I am grateful for comments and feedback from Jan Dijksterhuis, Mario Franco, Stuart Kauffman, Amahury López-Díaz, Andrea Roli, David Wolpert, and members of the Foundations of Information Science mailing list hosted at the Universidad de Zaragoza.

References

  1. Abrahão, F. S. & Zenil, H. (2022). Emergence and algorithmic information dynamics of systems and observers. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 380, 20200429.
  2. Adami, C. Self-organized criticality in living systems. Phys. Lett. A 1995, 203, 29–32. [Google Scholar] [CrossRef]
  3. Anderson, P. W. More is different. Science 1972, 177, 393–396. [Google Scholar] [CrossRef] [PubMed]
  4. Ashby, W. R. The nervous system as physical machine: With special reference to the origin of adaptive behavior. Mind 1947, 56, 44–59. [Google Scholar] [CrossRef] [PubMed]
  5. Ashby, W. R. Principles of the self-organizing dynamic system. Journal of General Psychology 1947, 37, 125–128. [Google Scholar] [CrossRef] [PubMed]
  6. Ashby, W. R. (1956). An introduction to cybernetics. Chapman & Hall.
  7. Ashby, W. R. (1962). Principles of the self-organizing system. Foerster, H. V. & Zopf, Jr., G. W. (eds.), Principles of self-organization, pp. 255–278, Pergamon.
  8. Atlan, H. On a formal definition of organization. Journal of Theoretical Biology 1974, 45, 295–304. [Google Scholar] [CrossRef] [PubMed]
  9. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett. 1987, 59, 381–384. [Google Scholar] [CrossRef] [PubMed]
  10. Ball, P. (2016). Man made: A history of synthetic life. Distillations.
  11. Balleza, E.; Alvarez-Buylla, E. R.; Chaos, A.; Kauffman, S.; Shmulevich, I.; Aldana, M. Critical dynamics in genetic regulatory networks: Examples from four kingdoms. PLoS ONE 2008, 3, e2456. [Google Scholar] [CrossRef] [PubMed]
  12. Bar-Yam, Y. A mathematical theory of strong emergence using multiscale variety. Complexity 2004, 9, 15–24. [Google Scholar] [CrossRef]
  13. Barricelli, N. Esempi numerici di processi di evoluzione. Methodos 1954, 6, 45–68. [Google Scholar]
  14. Bedau, M. A. (1997). Weak emergence. Tomberlin, J. (ed.), Philosophical perspectives: Mind, causation, and world, vol. 11, pp. 375–399, Blackwell.
  15. Bedau, M. A. & Humphreys, P. (eds.) (2007). Emergence: Contemporary readings in philosophy and science. MIT Press.
  16. Bedau, M. A. & Humphreys, P. (eds.) (2008). Emergence: Contemporary readings in philosophy and science. MIT Press.
  17. Beer, R. D. The cognitive domain of a glider in the game of life. Artificial Life 2014, 20, 183–206. [Google Scholar] [CrossRef]
  18. Berlekamp, E. R., Conway, J. H., & Guy, R. K. (1982). Winning ways for your mathematical plays, vol. 2: Games in Particular. Academic Press.
  19. Bersini, H. (2006). Formalizing emergence: The natural after-life of artificial life. Feltz et al. [37], pp. 41–60.
  20. Beuls, K.; Steels, L. Agent-Based Models of Strategies for the Emergence and Evolution of Grammatical Agreement. PLoS ONE 2013, 8, e58960+. [Google Scholar] [CrossRef]
  21. Bitbol, M. Downward causation without foundations. Synthese 2012, 185, 233–255. [Google Scholar] [CrossRef]
  22. Blackiston, D.; Kriegman, S.; Bongard, J.; Levin, M. Biological robots: Perspectives on an emerging interdisciplinary field. Soft Robotics 2023, 10, 674–686. [Google Scholar] [CrossRef] [PubMed]
  23. Blackiston, D., Lederer, E., Kriegman, S., Garnier, S., Bongard, J., & Levin, M. (2021). A cellular platform for the development of synthetic living machines. Science Robotics, 6.
  24. Camazine, S., Deneubourg, J.-L., Franks, N. R., Sneyd, J., Theraulaz, G., & Bonabeau, E. (2003). Self-organization in biological systems. Princeton University Press.
  25. Campbell, D. T. (1974). `Downward causation’ in hierarchically organized biological systems. Ayala, F. J. & Dobzhansky, T. (eds.), Studies in the philosophy of biology, pp. 179–186, Macmillan.
  26. Čejková, J.; Banno, T.; Hanczyc, M. M.; Štěpánek, F. Droplets as liquid robots. Artificial Life 2017, 23, 528–549. [Google Scholar] [CrossRef] [PubMed]
  27. Chaitin, G. J. Information-theoretic limitations of formal systems. J. ACM 1974, 21, 403–424. [Google Scholar] [CrossRef]
  28. Chaitin, G. J. (1975). Randomness and mathematical proof. Scientific American, 232, 47–52.
  29. Chaitin, G. J. (2004). Irreducible complexity in pure mathematics, arxiv preprint math/0411091.
  30. Chialvo, D. R. Emergent complex neural dynamics. Nature Physics 2010, 6, 744–750. [Google Scholar] [CrossRef]
  31. De Domenico, M., et al. (2019). Complexity explained: A grassroot collaborative initiative to create a set of essential concepts of complex systems.
  32. Delahaye, J.-P.; Zenil, H. Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness. Applied Mathematics and Computation 2012, 219, 63–77. [Google Scholar] [CrossRef]
  33. Downing, K. L. (2015). Intelligence emerging: Adaptivity and search in evolving neural systems. MIT Press.
  34. Eigen, M. & Schuster, P. (1979). The hypercycle, a principle of natural self-organization. Springer-Verlag.
  35. Farnsworth, K. D., Ellis, G. F. R., & Jaeger, L. (2017). Living through downward causation: From molecules to ecosystems. Walker, S. I., Davies, P. C. W., & Ellis, G. F. R. (eds.), From matter to life: Information and causality, pp. 303–333, Cambridge University Press.
  36. Farnsworth, K.D.; Nelson, J.; Gershenson, C. Living is information processing: From molecules to global systems. Acta Biotheoretica 2013, 61, 203–222. [Google Scholar] [CrossRef] [PubMed]
  37. Feltz, B., Crommelinck, M., & Goujon, P. (eds.) (2006). Self-organization and emergence in life sciences, vol. 331 of Synthese Library. Springer.
  38. Flack, J. C. Coarse-graining as a downward causation mechanism. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2017, 375, 20160338. [Google Scholar] [CrossRef] [PubMed]
  39. Frei, R.; Di Marzo Serugendo, G. Advances in complexity engineering. International Journal of Bio-Inspired Computation 2011, 3, 199–212. [Google Scholar] [CrossRef]
  40. Garfield, J. L. (1995). The fundamental wisdom of the middle way: Nagarjuna’s Mulamadhyamakakarika. Oxford University Press.
  41. Gershenson, C. (2007). Design and control of self-organizing systems. CopIt Arxives, tS0002EN.
  42. Gershenson, C. Computing networks: A general framework to contrast neural and swarm cognitions. Paladyn, Journal of Behavioral Robotics 2010, 1, 147–153. [Google Scholar] [CrossRef]
  43. Gershenson, C. (2012). The world as evolving information. Minai, A., Braha, D., & Bar-Yam, Y. (eds.), Unifying themes in complex systems, vol. VII, pp. 100–115, Springer.
  44. Gershenson, C. (2013). Facing complexity: Prediction vs. adaptation. Massip, A. & Bastardas, A. (eds.), Complexity perspectives on language, communication and society, pp. 3–14, Springer.
  45. Gershenson, C. The implications of interactions for science and philosophy. Foundations of Science 2013, 18, 781–790. [Google Scholar] [CrossRef]
  46. Gershenson, C. Complexity and Buddhism: Understanding interactions. Buddhism Today 2023, 52, 44–48. [Google Scholar]
  47. Gershenson, C. Emergence in Artificial Life. Artificial Life 2023, 29, 153–167. [Google Scholar] [CrossRef] [PubMed]
  48. Gershenson, C.; Helbing, D. When slower is faster. Complexity 2015, 21, 9–15. [Google Scholar] [CrossRef]
  49. Gershenson, C. & Heylighen, F. (2003). When can we call a system self-organizing? Banzhaf, W., Christaller, T., Dittrich, P., Kim, J. T., & Ziegler, J. (eds.), Advances in artificial life, 7th european conference, ECAL 2003 LNAI 2801, pp. 606–614, Springer.
  50. Gershenson, C.; Trianni, V.; Werfel, J.; Sayama, H. Self-organization and artificial life. Artificial Life 2020, 26, 391–408. [Google Scholar] [CrossRef] [PubMed]
  51. Gibson, D. G., et al. (2010). Creation of a bacterial cell controlled by a chemically synthesized genome. Science, 329, 52–56.
  52. Gödel, K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik 1931, 38, 173–198. [Google Scholar] [CrossRef]
  53. Haken, H. (1981). Synergetics and the problem of selforganization. Roth, G. & Schwegler, H. (eds.), Self-organizing systems: An interdisciplinary approach, New York, pp. 9–13, Campus Verlag.
  54. Harnad, S. The symbol grounding problem. Physica D: Nonlinear Phenomena 1990, 42, 335–346. [Google Scholar] [CrossRef]
  55. Hernández-Orozco, S.; Hernández-Quiroz, F.; Zenil, H. Undecidability and irreducibility conditions for open-ended evolution and emergence. Artificial Life 2018, 24, 56–70. [Google Scholar] [CrossRef] [PubMed]
  56. Heylighen, F., Beigi, S., & Vidal, C. (2024). The third story of the universe: An evolutionary worldview for the noosphere. Working paper, CLEA/Human Energy.
  57. Heylighen, F., Cilliers, P., & Gershenson, C. (2007). Complexity and philosophy. Bogg, J. & Geyer, R. (eds.), Complexity, science and society, pp. 117–134, Radcliffe Publishing.
  58. Heylighen, F. & Joslyn, C. (2001). Cybernetics and second order cybernetics. Meyers, R. A. (ed.), Encyclopedia of physical science and technology, vol. 4, pp. 155–170, Academic Press, 3rd edn.
  59. Hidalgo, J.; Grilli, J.; Suweis, S.; Maritan, A.; Muñoz, M.A. Cooperation, competition and the emergence of criticality in communities of adaptive systems. Journal of Statistical Mechanics: Theory and Experiment 2016, 2016, 033203. [Google Scholar] [CrossRef]
  60. Hills, T. T.; Todd, P. M.; Lazer, D.; Redish, A. D.; Couzin, I. D. Exploration versus exploitation in space, mind, and society. Trends in Cognitive Sciences 2015, 19, 46–54. [Google Scholar] [CrossRef] [PubMed]
  61. Hoel, E. P.; Albantakis, L.; Tononi, G. Quantifying causal emergence shows that macro can beat micro. Proceedings of the National Academy of Sciences 2013, 110, 19790–19795. [Google Scholar] [CrossRef] [PubMed]
  62. Jablonka, E. & Lamb, M. J. (2006). Evolution in four dimensions: Genetic, epigenetic, behavioral, and symbolic variation in the history of life. MIT Press.
  63. Kauffman, S. A. (1993). The origins of order. Oxford University Press.
  64. Kauffman, S. & Roli, A. (2021). The world is not a theorem. Entropy, 23.
  65. Kauffman, S. A.; Roli, A. A third transition in science? Interface Focus 2023, 13, 20220063. [Google Scholar] [CrossRef] [PubMed]
  66. Kohonen, T. (2000). Self-organizing maps. Springer, 3rd edn.
  67. Kriegman, S.; Blackiston, D.; Levin, M.; Bongard, J. A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences 2020, 117, 1853–1859. [Google Scholar] [CrossRef] [PubMed]
  68. Langton, C. (1989). Artificial life. Langton, C. (ed.), Artificial life, Redwood City, CA, pp. 1–47, Santa Fe Institute Studies in the Sciences of Complexity, Addison-Wesley.
  69. Langton, C. G. Computation at the edge of chaos: Phase transitions and emergent computation. Physica D 1990, 42, 12–37. [Google Scholar] [CrossRef]
  70. Lendaris, G. G. On the definition of self-organizing systems. Proceedings of the IEEE 1964, 52, 324–325. [Google Scholar] [CrossRef]
  71. López-Díaz, A. J.; Sánchez-Puig, F.; Gershenson, C. Temporal, structural, and functional heterogeneities extend criticality and antifragility in random Boolean networks. Entropy 2023, 25. [Google Scholar] [CrossRef] [PubMed]
  72. Mandelbrot, B. (1982). The fractal geometry of nature. WH Freeman.
  73. Martínez, G.; Adamatzky, A.; Alonso-Sanz, R. Complex dynamics of elementary cellular automata emerging in chaotic rules. International Journal of Bifurcation and Chaos 2012, 22, 1250023. [Google Scholar] [CrossRef]
  74. McLaughlin, B. P. (1992). The rise and fall of British emergentism. Beckerman, Flohr, & Kim (eds.), Emergence or reduction? essays on the prospects of nonreductive physicalism, pp. 49–93, Walter de Gruyter.
  75. Mengal, P. (2006). The concept of emergence in the XIXth century: From natural theology to biology. Feltz et al. [37], pp. 215–224.
  76. Mitchell, M. On crashing the barrier of meaning in artificial intelligence. AI Magazine 2020, 41, 86–92. [Google Scholar] [CrossRef]
  77. Mora, T.; Bialek, W. Are biological systems poised at criticality? Journal of Statistical Physics 2011, 144, 268–302. [Google Scholar] [CrossRef]
  78. Moreno, A.; Ruiz-Mirazo, K. The problem of the emergence of functional diversity in prebiotic evolution. Biology & Philosophy 2009, 24, 585–605. [Google Scholar]
  79. Morin, E. (2007). Restricted complexity, general complexity. Gershenson, C., Aerts, D., & Edmonds, B. (eds.), Philosophy and complexity, pp. 5–29, Worldviews, Science and Us, World Scientific.
  80. Muñoz, M. A. Colloquium: Criticality and dynamical scaling in living systems. Rev. Mod. Phys. 2018, 90, 031001. [Google Scholar] [CrossRef]
  81. Muñuzuri, A. P.; Pérez-Mercader, J. Unified representation of life’s basic properties by a 3-species stochastic cubic autocatalytic reaction-diffusion system of equations. Physics of Life Reviews 2022, 41, 64–83. [Google Scholar] [CrossRef] [PubMed]
  82. Nicolis, G. & Prigogine, I. (1977). Self-organization in non-equilibrium systems: From dissipative structures to order through fluctuations. Wiley.
  83. Pagels, H. R. (1989). The dreams of reason: The computer and the rise of the sciences of complexity. Bantam Books.
  84. Pascual, M.; Guichard, F. Criticality and disturbance in spatial ecological systems. Trends in Ecology & Evolution 2005, 20, 88–95. [Google Scholar]
  85. Pattee, H. H.; Sayama, H. Evolved open-endedness, not open-ended evolution. Artificial Life 2019, 25, 4–8. [Google Scholar] [CrossRef] [PubMed]
  86. Pfeifer, R.; Lungarella, M.; Iida, F. Self-organization, embodiment, and biologically inspired robotics. Science 2007, 318, 1088–1093. [Google Scholar] [CrossRef] [PubMed]
  87. Pineda, O. K.; Kim, H.; Gershenson, C. A novel antifragility measure based on satisfaction and its application to random and biological Boolean networks. Complexity 2019, 2019, 10. [Google Scholar] [CrossRef]
  88. Prokopenko, M.; Boschetti, F.; Ryan, A. An information-theoretic primer on complexity, self-organisation and emergence. Complexity 2009, 15, 11–28. [Google Scholar] [CrossRef]
  89. Rasmussen, S., Bedau, M. A., Chen, L., Deamer, D., Krakauer, D. C., Packard, N. H., & Stadler, P. F. (eds.) (2008). Protocells: Bridging nonliving and living matter. MIT Press.
  90. Roli, A. & Kauffman, S. A. (2020). Emergence of organisms. Entropy, 22.
  91. Roli, A.; Villani, M.; Filisetti, A.; Serra, R. Dynamical criticality: Overview and open questions. Journal of Systems Science and Complexity 2018, 31, 647–663. [Google Scholar] [CrossRef]
  92. Rosenblueth, A.; Wiener, N.; Bigelow, J. Behavior, purpose and teleology. Philosophy of Science 1943, 10, 18–24. [Google Scholar] [CrossRef]
  93. Rota, G. C. In memoriam of Stan Ulam —the barrier of meaning. Physica D 1986, 2, 1–3. [Google Scholar] [CrossRef]
  94. Rovelli, C. (2021). Helgoland: Making sense of the quantum revolution. Riverhead Books.
  95. Rupe, A.; Crutchfield, J. P. On principles of emergent organization. Physics Reports 2024, 1071, 1–47. [Google Scholar] [CrossRef]
  96. Sánchez-Puig, F., Zapata, O., Pineda, O. K., Iñiguez, G., & Gershenson, C. (2023). Heterogeneity extends criticality. Frontiers in Complex Systems, 1.
  97. Schmickl, T. Strong emergence arising from weak emergence. Complexity 2022, 2022, 9956885. [Google Scholar] [CrossRef]
  98. Schweitzer, F. (ed.) (1997). Self-organization of complex structures: From individual to collective dynamics. Gordon and Breach.
  99. Searle, J. R. Minds, brains, and programs. Behavioral and brain sciences 1980, 3, 417–424. [Google Scholar] [CrossRef]
  100. Shannon, C. E. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423 and 623–656. [Google Scholar] [CrossRef]
  101. Simon, H. A. (1996). The sciences of the artificial. MIT Press, 3rd edn.
  102. Standish, R. K. (2003). Open-ended artificial evolution. International Journal of Computational Intelligence and Applications, 3, 167–175.
  103. Stanley, H. E. (1987). Introduction to phase transitions and critical phenomena. Oxford University Press.
  104. Taleb, N. N. (2012). Antifragile: Things that gain from disorder. Random House.
  105. Taylor, T. An Afterword to Rise of the Self-Replicators: Placing John A. Etzler, Frigyes Karinthy, Fred Stahl, and Others in the Early History of Thought About Self-Reproducing Machines. Artificial Life 2024, 30, 91–105. [Google Scholar] [CrossRef]
  106. Taylor, T., et al. (2016). Open-ended evolution: Perspectives from the oee workshop in York. Artificial Life, 22, 408–423.
  107. Taylor, T. & Dorin, A. (2020). Rise of the self-replicators: Early visions of machines, ai and robots that can reproduce and evolve. Springer.
  108. Torres-Sosa, C.; Huang, S.; Aldana, M. Criticality is an emergent property of genetic networks that exhibit evolvability. PLoS Comput Biol 2012, 8, e1002669. [Google Scholar] [CrossRef] [PubMed]
  109. Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230–265.
  110. Turing, A. M. Computing machinery and intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  111. von Foerster, H. (1960). On self-organizing systems and their environments. Yovitts, M. C. & Cameron, S. (eds.), Self-organizing systems, New York, pp. 31–50, Pergamon.
  112. von Neumann, J. (1966). The theory of self-reproducing automata. University of Illinois Press, edited by A. W. Burks.
  113. von Neumann, J. & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press.
  114. Wagner, A. (2005). Robustness and evolvability in living systems. Princeton University Press.
  115. Walker, S. I., et al. (2018). Exoplanet biosignatures: Future directions. Astrobiology, 18, 779–824, pMID: 29938538.
  116. Walker, S. I. Top-down causation and the rise of information in the emergence of life. Information 2014, 5, 424–439. [Google Scholar] [CrossRef]
  117. Watson, R. A.; Mills, R.; Buckley, C. L. Global adaptation in networks of selfish components: Emergent associative memory at the system scale. Artificial Life 2011, 17, 147–166. [Google Scholar] [CrossRef]
  118. Wei, J., et al. (2022). Emergent abilities of large language models, arXiv:2206.07682.
  119. Wiener, N. (1948). Cybernetics; or, control and communication in the animal and the machine. Wiley and Sons.
  120. Wolfram, S. (2002). A new kind of science. Wolfram Media.
  121. Wolpert, D. H. & Macready, W. G. (1995). No free lunch theorems for search. Working paper SFI-WP-95-02-010, Santa Fe Institute.
  122. Wolpert, D. H. & Macready, W. G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82.
  123. Zenil, H. (ed.) (2013). Irreducibility and computational equivalence: 10 years after Wolfram's A New Kind of Science. Emergence, Complexity and Computation, Springer.
1. Well, he was also a student of Julia. And his uncle Szolem (who knew Sierpiński) had suggested that he work on iterative functions. And he was extremely smart.
