Preprint
Article

This version is not peer-reviewed.

Phylogenetically-Mixed Agent Architectures

Submitted: 08 April 2026
Posted: 13 April 2026


Abstract
One way to construct a generalist architecture for computational agents is to assemble different modules for different functions. Yet a purely top-down design does not capture how an agent continually produces behavioral states and interacts with the world. A better approach is to evolve a variety of components with a relational history, and then combine the best candidate components into a modular system. We use agentic coding techniques to build a pipeline that implements an evolutionary process of diversification, recombination, and selection. As an initial demonstration of our pipeline, we utilize a toy synthetic dataset of simple shapes and a dataset based on Braitenberg’s Vehicles. In each case, the approach to phylogenetic mixing is to generate variety, select the most viable forms, and then compose an architecture. The resulting components are phylogenetically mixed in that the best components often do not share the same evolutionary history. This assembly process occurs through hypergraph construction: hypergraphs can be used to identify nested or categorical relationships. The resulting generalist architecture could then perform a wide variety of tasks with the ability to connect between domains.

Introduction

Building agentic cognition with truly adaptive, animal-like behavior is a persistent challenge. A key part of this challenge is that biological intelligence is difficult to replicate in artificial systems. One way to approach this is to mimic the attributes of neural circuits (van Hemmen et al., 2014). In particular, the concept of interacting components provides us with powerful associative architectures. Subsumption architectures assume that intelligent systems are hierarchical, which makes them amenable to control (Brooks, 1986) and subject to compositionality (Alicea et al., 2024). Connectionist systems provide a means to bias or weight a set of interacting components using a single parameter (Thomas and McClelland, 2008; Valle-Lisboa et al., 2023). Yet there is a competing view of cognition: the phylogenetic view (Cisek, 2019). The phylogenetic view claims that cognition is the product of variation and selection (Alicea, 2026). This results in a wide range of nervous systems that produce a wide range of behaviors reflecting environmental (or situational) adaptation. In the case of human cognition, its unique characteristics have their roots and analogues in other species (MacLean, 2016). This relates to the nature of phylogenetic cognition: traits can either be shared amongst multiple species in the same lineage or be the product of independent innovations across different lineages.
Through the dual processes of common descent and evolutionary convergence, the tree of life has yielded a diverse range of neural architectures and circuits (Ghiselin, 2016; Moczek, 2023). As a result, the substrates of intelligent behavior are highly variable across development, embodiment, and ecological niche (Eppe and Oudeyer, 2021; Marshall et al., 2021). One drawback to this is that any one mode of intelligence is narrow in scope, and when mimicked in an artificial context does not translate into a broader set of capabilities (Levin, 2022). However, evolution can produce a diverse set of specialized modules, which, when arrayed in the form of a network, can serve as the best of both worlds: specialized modules that process information in different ways, leading to a more intelligent whole. This is not merely a recapitulation of cognitive modularity (Fodor, 1983; Margolis and Laurence, 2023), as combining evolutionary history with multilevel network relations can help guide us to strategies of architectural integration.
The idea behind a phylogenetically-mixed architecture is simple: we use a generative process to produce a variety of motifs and structures for information processing. Our approach is distinct from neuroevolutionary approaches (Miikkulainen, 2025) in that the architectures can be much more flexible in terms of their bio-inspiration. In our case, the generative process comes not from a neural network but from a much more generic approach: reservoir networks and forward diffusion can produce a highly variable latent space for a wide range of input data, from simple shapes to complex agent morphologies. We then use an evolutionary algorithm to evaluate these variants in terms of common ancestry (conservation) and convergence (creativity). To assemble the architecture, we take the highest-fitness motifs and structures and recombine them into single structures. We can then evolve the mixed structures in a manner similar to biological hybridogenesis (Lavanchy and Schwander, 2019), in which the hybrid offspring of two different species inactivates one genome and produces gametes with the other. In this case, we can encode the new phenotype as a hybrid genotype that has a new fitness function. In addition, the use of hypergraph architectures can help us build more precise constructs for information integration and meta-modeling (Chauhan et al., 2024; Pedersen et al., 2025).

Methods

To generate phylogenetically-mixed architectures, we introduce a pipeline with three components: a source, a generator, and a selector. The source is the original source of variation, which we model with either a reservoir network or a diffusion process. Our generator is implemented as a Generative Adversarial Network (GAN). Selection is implemented as an evolutionary algorithm, specifically two alternate fitness functions that select for various features of our shape populations. We then use a hypergraph architecture to describe the multiple layers of interaction amongst the different components of the phylogenetically-mixed architecture.

Open-Source Code

All code was written in Python 3, deployed in CoLab Notebooks, and is available at http://www.github.com/OREL-group/Phylogenetically-Mixed-Architectures.

AI-assisted Pipeline

We used Copilot (Microsoft Prometheus) version 1.25121.73.0 to generate the code for each step in the pipeline. All code was verified by executing it in a Python environment. Further development was done by probing the chat session and through additional literature review.

Shape Dataset

A shape dataset of 500 shapes is created to demonstrate our pipeline. We use circles, squares, and triangles (each representing roughly a third of the dataset), defined on a two-dimensional coordinate system representing a latent space. Figure 1 shows an example dataset generated by a generic GAN over time. Each shape is located at its centroid in this bivariate space.
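As a minimal sketch of how such a dataset might be constructed (the repository's actual generation code may differ), each record can carry a shape type, a centroid in a two-dimensional latent space, and a size; the value ranges below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shape_dataset(n=500):
    """Sample n shapes: a type, a 2-D centroid, and a size."""
    kinds = ["circle", "square", "triangle"]
    return [
        {
            "kind": kinds[i % 3],                        # roughly a third of each type
            "centroid": rng.uniform(-1.0, 1.0, size=2),  # location in bivariate latent space
            "size": rng.uniform(0.05, 0.3),              # characteristic radius / half-width
        }
        for i in range(n)
    ]

dataset = make_shape_dataset(500)
```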

Braitenberg Vehicle Dataset

A dataset of behaviors and morphologies observed amongst Braitenberg’s Vehicles (Braitenberg, 1984) is used to construct a bipartite hypergraph and an addressable phylogeny. A series of categorical and binary variables are coded using agentic and manual methods, respectively. For each vehicle’s behavioral repertoire, its thematic category, wiring pattern, connection polarity, and complexity band (Eq. 1) are assembled from prior descriptions (Shaikh and Rano, 2020; Hotton and Yoshimi, 2024). In the case of our bipartite hypergraphs, thematic categories are mapped to specific vehicle (embodied agent) types. For our addressable phylogenies, a single vehicle body is specified for all possible configurations.

Reservoir Network

A reservoir computing network (Verstraeten et al., 2007; te Vrugt, 2024) consists of an input, a reservoir, and an output. The inputs come in the form of our shape dataset. Reservoirs are nonlinear, high-dimensional networks that convert these data into a set of time-dependent states, allowing the output r(t) to represent transitions from one shape to another. The stochastic nature of reservoir (or echo state) networks allows us to more easily discover probabilistic states (Ehlers et al., 2025). We use a scaling parameter to control the degree of noise injected into the reservoir network. The resulting context vector (a record of state over time), r(t), serves as the input to our GAN. Table 1 compares this with the use of diffusion models.
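A minimal echo state reservoir in this spirit can be sketched as follows; the reservoir size, spectral radius, and noise scaling below are illustrative choices, not the values used in our pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_reservoir(inputs, n_res=100, spectral_radius=0.9, noise_scale=0.01):
    """Drive a random recurrent reservoir with an input sequence and
    return the state trajectory r(t), one row per timestep."""
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo state scaling
    r = np.zeros(n_res)
    states = []
    for u in inputs:
        noise = noise_scale * rng.standard_normal(n_res)  # injected noise (scaling parameter)
        r = np.tanh(W @ r + W_in @ u + noise)
        states.append(r.copy())
    return np.array(states)

# e.g., a sequence of 2-D shape centroids as input
r_t = run_reservoir(rng.uniform(-1, 1, size=(50, 2)))
```

The trajectory r(t) can then be read out as the context vector passed downstream.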

Diffusion Process

Diffusion is an alternative to reservoir networks, involving both a forward step and a reverse step. When utilized, the forward step replaces the reservoir network. For the forward step, a time-aware context vector c(t) is created. When combined with the GAN + GA component, this results in a temporal generative model. In the case of our reverse diffusion step, outputs from the GAN’s discriminator component (y) are denoised, allowing us to refine what the GAN implementation generates. Replacing a reservoir network with a diffusion process offers greater control. Table 1 compares this with reservoir network models.
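A forward diffusion step of this kind can be sketched as repeated Gaussian corruption of a latent vector; the fixed per-step β below is an assumption for illustration (schedules can also vary over time):

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_diffusion(z0, beta=0.05, steps=50):
    """Return the trajectory c(t) of a latent vector under forward diffusion:
    each step keeps sqrt(1 - beta) of the signal and adds beta-variance noise."""
    traj = [np.asarray(z0, dtype=float)]
    for _ in range(steps):
        traj.append(np.sqrt(1.0 - beta) * traj[-1]
                    + np.sqrt(beta) * rng.standard_normal(len(z0)))
    return np.array(traj)

c_t = forward_diffusion(np.ones(8), beta=0.05, steps=50)
```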

GAN Implementation

GANs (Goodfellow et al., 2014) are a generative method in which two networks (a generator and a discriminator) work in an adversarial manner to generate data with statistical features similar to those of the training set. The adversarial relationship mimics a zero-sum game in which one network (e.g., the generator) gains at the expense of the other (e.g., the discriminator). In this case, we use a GAN to generate a base population of different shapes in varying proportions. We can vary the batch size (number of samples processed in a single forward-backward pass through the network) and the number of epochs (passes through the full training set) to vary how much of the latent space is covered over time, as well as to retain the variability of shapes. A high batch size and low epoch number (400, 50) yields a broad distribution of multiple shapes, while a lower batch size and higher epoch number (200, 100) yields a small, tight cluster of the same shape. While the generator acts as an engine of variety, the discriminator evaluates the plausibility of a generated shape and its resemblance to other shapes. This is further enforced by the fitness function.
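The adversarial dynamic can be illustrated with a deliberately tiny NumPy GAN; this is a sketch, not our actual implementation (which targets shape parameters with deeper networks). Here a linear generator maps noise to 2-D points, a logistic-regression discriminator separates real from generated points, and the "real" data cluster is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# "Real" data: points clustered near (0.5, 0.5), an illustrative assumption.
real = rng.normal(0.5, 0.1, size=(256, 2))

G_W, G_b = 0.1 * rng.standard_normal((2, 2)), np.zeros(2)  # generator parameters
D_w, D_b = np.zeros(2), 0.0                                # discriminator parameters

lr = 0.05
for step in range(300):
    z = rng.standard_normal((256, 2))
    fake = z @ G_W.T + G_b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ D_w + D_b)
        err = label - p                      # gradient of log-likelihood w.r.t. logits
        D_w += lr * (err @ x) / len(x)
        D_b += lr * err.mean()

    # Generator step (non-saturating): push D(fake) toward 1.
    p = sigmoid(fake @ D_w + D_b)
    grad_fake = (1.0 - p)[:, None] * D_w     # d log D(fake) / d fake
    G_W += lr * (grad_fake.T @ z) / len(z)
    G_b += lr * grad_fake.mean(axis=0)

mean_fake = (rng.standard_normal((1000, 2)) @ G_W.T + G_b).mean(axis=0)
```

The alternating updates are the zero-sum game described above: each player's gradient step degrades the other's objective.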

Evolutionary Algorithm

We use PyTorch to implement a genetic algorithm (GA). The fitness function is applied to the GAN output z(i). The GA records the best shapes for each generation. Crossover and mutation are used to replenish the original population at each generation. Figure 2 shows the results of selection over time plotted in bivariate latent space.
Novelty Score. Our novelty score quantifies how different a new solution is compared to what has been observed previously. Novelty relies upon exploration of the fitness space, which tends to reward morphologies that are far from the existing population. In terms of tree topology, high-novelty individuals form new branches, while low-novelty individuals attach to existing branches. This emphasizes leaps over short periods of evolutionary time. The novelty score considers all ancestors, computes their distances from the new individual z(i), and joins it to the most different ancestor, i.e., the one at greatest distance.
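A sketch of this scoring and joining rule, under the assumption that novelty is measured as Euclidean distance in latent space (the score here is the mean distance to all ancestors; the join goes to the most distant one):

```python
import numpy as np

def novelty_score(z_new, ancestors):
    """Mean distance from the new individual to all prior individuals."""
    d = np.linalg.norm(np.asarray(ancestors, dtype=float) - z_new, axis=1)
    return float(d.mean())

def most_different_ancestor(z_new, ancestors):
    """Index of the ancestor at greatest distance (the joining rule)."""
    d = np.linalg.norm(np.asarray(ancestors, dtype=float) - z_new, axis=1)
    return int(np.argmax(d))

ancestors = [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]
z_new = np.array([0.1, 0.0])
score = novelty_score(z_new, ancestors)
join = most_different_ancestor(z_new, ancestors)
```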

Evolutionary Trees

We construct evolutionary trees using a number of criteria. For the shapes dataset, the tree is a directed network that shows time-ordered pairwise distances for all shapes in ensemble z. This allows us to understand the level of diversity produced by the generative processes in our pipeline over time. Clustering approaches can be used to guide parent assignment, finding the most similar ancestors. Nearest-ancestor approaches are suitable for building a tree that reflects common ancestry. By contrast, we can instead use a parentage rule that encourages connections to the most novel ancestor. This is done by favoring maximal distances between ancestors and descendants, making it suitable for characterizing creativity and convergent evolution.
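The two parentage rules can be sketched over a time-ordered ensemble; whether "nearest" or "farthest" is chosen decides between the common-ancestry and novelty readings of the tree (the toy lineage below is illustrative):

```python
import numpy as np

def build_tree(individuals, rule="nearest"):
    """Assign each individual (in time order) a parent among its predecessors.
    rule='nearest' favors common ancestry; rule='farthest' favors novelty."""
    pts = np.asarray(individuals, dtype=float)
    edges = []
    for i in range(1, len(pts)):
        d = np.linalg.norm(pts[:i] - pts[i], axis=1)
        parent = int(np.argmin(d)) if rule == "nearest" else int(np.argmax(d))
        edges.append((parent, i))  # directed edge: ancestor -> descendant
    return edges

lineage = [[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [0.2, 0.1]]
tree_ancestry = build_tree(lineage, rule="nearest")
tree_novelty = build_tree(lineage, rule="farthest")
```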

Hypergraphs

Hypergraphs model complex, multi-entity relationships, with nodes within nodes and edges within edges. We are interested in hypergraphs that yield nested relationships, or relationships that incorporate multiple properties at multiple scales. We can use a traditional unordered topology for describing morphological entities, or a bipartite topology that deals with relational concepts.
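One lightweight way to sketch such nested hyperedges (the names and motifs below are illustrative placeholders, not drawn from our datasets):

```python
# Hyperedges as named node sets; nesting arises when a hyperedge's members
# are themselves hyperedge names (edges within edges).
hypergraph = {
    "triangle_motif": {"n1", "n2", "n3"},
    "square_motif": {"n4", "n5", "n6", "n7"},
    "architecture": {"triangle_motif", "square_motif"},  # nested hyperedge
}

def expand(h, edge):
    """Recursively flatten a (possibly nested) hyperedge to its base nodes."""
    out = set()
    for m in h.get(edge, {edge}):
        out |= expand(h, m) if m in h else {m}
    return out

base_nodes = expand(hypergraph, "architecture")
```

A bipartite topology is the same structure with one side holding relational concepts (e.g., thematic categories) and the other holding entities (e.g., vehicle types).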

Quasi-Experimental Approach

To demonstrate how phylogenetically-mixed architectures can be produced, we experiment with a number of techniques at each step. Figure 3 shows this process. We begin with a means to generate a network, or another mechanism for determining the connectivity between variants.

Core Architectural Pipeline

We begin by creating a small shapes dataset consisting of circles, triangles, and squares of various sizes. The first step is to generate a population of shapes resampled from the three primitives. This can be done with either a reservoir network or a more generalized diffusion process. From our original population of shapes, a latent space is created describing all possible variations of each shape, as well as the transitions between shapes. The latent space is a product of our GAN and defines all possible variants in shape space.

Complexity Bands

For the Braitenberg Vehicle analysis, complexity bands are defined over a set of objective attributes for each vehicle. There are six bands for the vehicles discussed in Braitenberg (1984), ranging from very simple vehicles, to simple sensorimotor vehicles, to vehicles with foresight and planning capabilities. Complexity is computed as
C = w_s S + w_m M + w_w W + w_n N + w_i I + w_b B     (Eq. 1)
where the w parameters are weights for different vehicle attributes: w_s for sensors, w_m for motors, w_w for wiring, w_n for nonlinearity, w_i for internal state, and w_b for behavioral sophistication. S is the normalized sensor count, M is the normalized motor (effector) count, W is the wiring complexity score (presence of crossed links and mediated connections), N is the nonlinearity/sign score (excitatory, inhibitory, and nonlinear connections), I is the internal state score, and B is the behavioral sophistication score. The complexity of an internal state is defined by the presence of thresholds or memory components. The band categories are described in Table 2.
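A direct reading of Eq. 1, with the band intervals of width 0.17 from Table 2; the equal weights and the example attribute scores below are assumptions for illustration, since the weight values are a modeling choice:

```python
def complexity(S, M, W, N, I, B, weights=(1/6, 1/6, 1/6, 1/6, 1/6, 1/6)):
    """Eq. 1: weighted sum of normalized vehicle attribute scores."""
    ws, wm, ww, wn, wi, wb = weights
    return ws * S + wm * M + ww * W + wn * N + wi * I + wb * B

def band(C, width=0.17):
    """Map a complexity score C in [0, 1] to one of six bands."""
    return min(int(C / width) + 1, 6)

# A hypothetical simple sensorimotor vehicle: some sensors/motors, little else.
C_simple = complexity(S=0.3, M=0.3, W=0.1, N=0.0, I=0.0, B=0.2)
```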

Fitness Function

Our common ancestry fitness function rewards plausible shapes that do not require large jumps in the latent space. This is stated mathematically as an evolution equation
z(t) = 1.5D + c + s − 2‖z − z_prev‖
where c is the center of the object, s is the size of the object, D is the distance between two objects, and z is the one-hot transformation of the shape resulting from c, s. This version of the fitness function considers both the plausibility of the generated shape and its smoothness. Smoothness is defined by small, incremental jumps in latent space. By contrast, the creative fitness function encourages novelty by allowing for large jumps in the latent space and thus discounting similarity. This version of the fitness function is stated mathematically as
z(t) = 1.5D + c + s + λ_u u(z)
where λ_u is a weight on the uniqueness term and u(z) is the degree of uniqueness. All z(i) produced by the GAN are evaluated by the evolutionary algorithm, which selects according to a fitness criterion determined by the latent space. In this sense, this part of the model is recursive.
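The two fitness functions can be sketched as follows. Here c and s are taken as scalar plausibility terms (center and size scores), and u(z) is implemented as the mean distance to an archive of prior individuals; both are our assumptions for illustration:

```python
import numpy as np

def fitness_common_ancestry(D, c, s, z, z_prev):
    """Rewards plausibility while penalizing large jumps in latent space."""
    return 1.5 * D + c + s - 2.0 * np.linalg.norm(z - z_prev)

def fitness_creative(D, c, s, z, archive, lam_u=1.0):
    """Rewards plausibility plus uniqueness relative to prior individuals."""
    u = float(np.mean([np.linalg.norm(z - a) for a in archive]))
    return 1.5 * D + c + s + lam_u * u

z_prev = np.zeros(2)
near, far = np.array([0.1, 0.0]), np.array([3.0, 0.0])
f_near = fitness_common_ancestry(1.0, 0.5, 0.2, near, z_prev)
f_far = fitness_common_ancestry(1.0, 0.5, 0.2, far, z_prev)
```

Under the common ancestry function a small step outscores a large jump; the creative function reverses that preference.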

Results

To demonstrate how this approach might be useful to construct architectures for embodied agents, we will use visualizations and analysis of generic embodied agents, our synthetic dataset, and the variety of Braitenberg Vehicles sensu Braitenberg (1984). The first demonstration is to compare the outputs of a reservoir network (Figure 4, TOP) and the forward diffusion process (Figure 4, BOTTOM) for a generic agent morphology. The outputs for the reservoir network appear to be much more stable, but also exhibit much less variety. By contrast, the forward diffusion process yields more variety. This should not be surprising given the role of noise in the forward process; however, a characterization of the forward and reverse diffusion processes together also yields a similar level of morphological variety.

Initialization of Populations

The reservoir network (Figure 4, TOP) generates 50 agent morphologies over 50 timesteps: a randomly initialized recurrent weight matrix W and input matrix W_in form an echo state network. For each agent, a short random input sequence is run through the reservoir, and the final reservoir state serves as a compact latent descriptor. A deterministic projection of the reservoir state produces a distribution of sensors/motors, their angular positions around the body, the body’s radius, and a wiring matrix of signed weights. The mapping is smoothly controlled by the reservoir’s latent space. The forward diffusion process (Figure 4, BOTTOM) generates 100 agent morphologies over 50 timesteps by mapping reservoir states to morphological parameters such as sensors, motors, and wiring. The forward diffusion process is implemented in latent space, adding Gaussian noise across the 50 timesteps.
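The deterministic projection from reservoir state to morphology can be sketched as below; the specific mapping (which state entries drive which parameters, and the ranges) is illustrative, not the exact projection used to produce Figure 4:

```python
import numpy as np

def state_to_morphology(r, max_parts=6):
    """Project a reservoir state vector r onto morphological parameters:
    sensor/motor counts and angles, body radius, and a signed wiring matrix."""
    n_sensors = 1 + (int(abs(r[0]) * max_parts) % max_parts)
    n_motors = 1 + (int(abs(r[1]) * max_parts) % max_parts)
    sensor_angles = np.linspace(0, 2 * np.pi, n_sensors, endpoint=False) + r[2]
    motor_angles = np.linspace(0, 2 * np.pi, n_motors, endpoint=False) + r[3]
    radius = 0.5 + 0.25 * (np.tanh(r[4]) + 1.0)  # body radius in (0.5, 1.0)
    # Signed weights: positive = excitatory, negative = inhibitory.
    wiring = np.tanh(np.outer(r[5:5 + n_sensors], r[10:10 + n_motors]))
    return {"sensor_angles": sensor_angles, "motor_angles": motor_angles,
            "radius": radius, "wiring": wiring}

morph = state_to_morphology(np.tanh(np.linspace(-1.0, 1.0, 20)))
```

Because the mapping is smooth in r, nearby reservoir states yield nearby morphologies.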

Diffusion Noise and Evolvability

The β value affects the output of the diffusion model, making the input to the GAN + GA steps more or less stable. This can mimic evolvability in natural populations. When β is close to zero, the resulting shapes remain stable and the model retains memory of earlier states. By contrast, when β approaches 0.5, the resulting shapes become less stable and the model loses memory. In this sense, minimizing β can make our pipeline effectively non-Markovian.
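The effect of β on memory can be made concrete: under the usual √(1−β) per-step signal scaling, the fraction of the original latent signal surviving T steps is (1−β)^(T/2). This is a back-of-envelope reading of a fixed-β schedule, not a measurement from our runs:

```python
def signal_retention(beta, steps=50):
    """Fraction of the original latent signal surviving forward diffusion,
    assuming each step scales the signal by sqrt(1 - beta)."""
    return (1.0 - beta) ** (steps / 2.0)

retained_low = signal_retention(0.01)   # beta near zero: memory largely retained
retained_high = signal_retention(0.5)   # beta near 0.5: memory effectively erased
```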

Evolution and Selection

According to our pipeline, a GAN is used to generate variation, while a fitness function is used to select from that population. Over time, this provides us with a series of lineages, which can be stated as a relational tree. Figure 6 shows the difference between the output of our creativity fitness function (Figure 6, TOP) and a tree description of this ensemble (Figure 6, BOTTOM). In Figure 6, BOTTOM, our tree metric is a novelty score, which joins shapes that are most distant from the previous shape. Therefore, not only does our fitness function encourage novelty, our neighbor-joining criterion does as well.

Hypergraph Representations of Variation

The final component of our model post-pipeline is to conceptualize the termini of each phylogenetic branch as nodes in a hypergraph. We use a conceptual hypergraph in Figure 7 to demonstrate the way in which nodes are both embedded into larger categories and how those larger categories are connected to one another. Figure 7A shows that three hypernodes can characterize the entire network of neurons. A set of three hypernodes (shown in red, blue, and orange for distinction) are the product of a phylogeny, or a generated phenotypic configuration, and can be quite distantly related. In the case of Figure 7B, we observe a close-up of a single hypernode (defined in blue). This hypernode contains a triangle motif, which is similar to the triangles generated and evolved in our shapes pipeline. In terms of our shapes dataset, a set of hypernodes constituting a hypergraph would be composed of triangle, square, and circular network motifs (containing three, four, and a variable number of nodes, respectively).
Figure 5. An example of how our motifs are generated and evolve. TOP: a population of shapes generated by a GAN from input r(t). BOTTOM: selection of these shapes via common ancestry fitness function.
Figure 6. An evolutionary tree for the output of our evolutionary algorithm (z(i) > z). TOP: an ensemble of shapes resulting from the creativity fitness function. BOTTOM: an evolutionary tree that describes the ensemble.
Figure 7. An example of a hypergraph that contributes to a phylogenetically-mixed architecture.

Analysis with Embodied Agents

How does the hypergraph represent actual embodied agents? For this demonstration, we can use Braitenberg Vehicles (Braitenberg, 1984) to show how hypergraphs represent an agent’s feature space. This characterizes all variants across a generated phylogeny, and makes it easier to recombine them in a functional way. Table 3 shows a list of features by vehicle type as defined by Braitenberg (1984). Figure 8 shows a hypergraph derived from data in Table 1 and Table 3.

Discussion

We experimented with both reservoir networks and diffusion models for our first step. The reason for this is to take advantage of the pros of each model, offering a mix of capabilities when the pipeline is scaled up (Nobukawa et al., 2025). The choice of either approach depends on the complexity of the agent bodies and the desired depth of evolutionary history. Reservoir networks require fewer trainable parameters while also replicating nonlinear dynamics. This is due to the high-dimensional nature of reservoir networks: discrete states are identified from what is essentially a complex system (Tanaka et al., 2019). While we do not observe this in our shapes dataset, this might be desirable for curved bodies with many articulating parts (Alicea et al., 2023). On the other hand, diffusion models destroy and restore structure in their input in order to learn that structure more deeply (Sohl-Dickstein et al., 2015). This mimics the self-assembly characteristic of developmental and morphogenetic processes.
There are additional caveats that affect model selection. Reservoir networks tend to be incapable of rich representations for larger problem spaces. This can be problematic when dealing with the complex types of agent design we might require when scaling up our phylogenies and hypergraphs. As we saw for our β value result, the forward and reverse diffusion processes can offer stability in the training process that counters the limited stability of GANs. However, diffusion models can become computationally expensive when scaled up to larger problem spaces, which means that other options might be needed to optimize very large architectures.
Thinking more broadly in terms of embodiment, our model might offer a means to compose the best elements of each evolutionary lineage. As a means to assemble the morphology, compositionality also serves to order the assembly of the internal model. In the case of Braitenberg Vehicles, we can propose addressable phylogenies, or tracking the underlying trait evolution via common ancestry (Figure 9). Addressable phylogenies are encodings that evolve stepwise, yielding lineages with gains and losses in phenotypic traits. In Figure 9, a full configuration resembling a developmental body plan (Willmore, 2012) is used to define the evolving phenotype. The full configuration is equivalent to a phylogeny’s evolutionary potential. Traits that have yet to be evolved are in a state of 0, while traits that are expressed are in a state of 1. Representing evolutionary history in this way provides a means to unify our two fitness functions and produce a wide range of options for our phylogenetically-mixed architecture.
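An addressable phylogeny can be sketched as a binary trait vector over a full configuration: every trait the lineage could express starts at 0 and flips to 1 as it evolves. The trait names below are hypothetical placeholders for Braitenberg-style attributes, not the coding used in Figure 9:

```python
# Full configuration: every trait a lineage could express (hypothetical names).
FULL_CONFIGURATION = ("sensor", "motor", "crossed_wiring",
                      "inhibition", "threshold", "memory")

def gain(state, trait):
    """Return a new state with the given trait expressed (0 -> 1)."""
    new = dict(state)
    new[trait] = 1
    return new

root = {t: 0 for t in FULL_CONFIGURATION}     # nothing yet evolved
v1 = gain(gain(root, "sensor"), "motor")      # a minimal sensorimotor vehicle
v2 = gain(gain(v1, "crossed_wiring"), "inhibition")
lineage = [root, v1, v2]                      # stepwise, addressable history
```

Each state in the lineage is addressable: the trait vector itself records which gains (and, with a symmetric loss operation, losses) produced it.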

References

  1. Alicea, B. Phylogenetic Models of Embodied Agents: an eco-evo-devo approach. IOP Conference Series: Materials Science and Engineering; 2026; 1343, p. 012006. [Google Scholar]
  2. Alicea, B.; Chakrabarty, R.; Dvoretskii, S.; Gopiiswaminathan, A.V.; Lim, A.; Parent, J. Continual Developmental Neurosimulation Using Embodied Computational Agents. IOP Conference Series: Materials Science and Engineering; 2024; 1321, p. 012013. [Google Scholar]
  3. Alicea, B.; Gordon, R.; Parent, J. Embodied Cognitive Morphogenesis as a Route to Intelligent Systems. Royal Society Interface Focus 2023, 13(3), 20220067. [Google Scholar] [CrossRef] [PubMed]
  4. Braitenberg, V. Vehicles: experiments in synthetic psychology; MIT Press; Cambridge, MA, 1984. [Google Scholar]
  5. Brooks, R. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation 1986, 2(1), 14–23. [Google Scholar] [CrossRef]
  6. Chauhan, V.K.; Zhou, J.; Lu, P.; Molaei, S.; Clifton, D.A. A brief review of hypernetworks in deep learning. arXiv, 2024; arXiv:2306.06955. [Google Scholar]
  7. Cisek, P. Resynthesizing behavior through phylogenetic refinement. Attention, Perception, and Psychophysics 2019, 81(7), 2265–2287. [Google Scholar] [CrossRef] [PubMed]
  8. Ehlers, P.J.; Nurdin, H.L.; Soh, D. Stochastic reservoir computers. Nature Communications 2025, 16, 3070. [Google Scholar] [CrossRef] [PubMed]
  9. Eppe, M.; Oudeyer, P.Y. Intelligent behavior depends on the ecological niche. KI-Künstliche Intelligenz 2021, 35(1), 103–108. [Google Scholar] [CrossRef]
  10. Fodor, J.A. The Modularity of Mind; MIT Press: Cambridge, MA, 1983. [Google Scholar]
  11. Ghiselin, M.T. Homology, convergence and parallelism. Philosophical Transactions of the Royal Society B 2016, 371(1685), 20150035. [Google Scholar]
  12. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. Proceedings of Neural Information Processing Systems 2014, 27, 2672–2680. [Google Scholar]
  13. Lavanchy, G.; Schwander, T. Hybridogenesis. Current Biology 2019, 29(3). [Google Scholar] [CrossRef] [PubMed]
  14. Levin, M. Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Frontiers in Systems Neuroscience 2022, 16, 768201. [Google Scholar] [CrossRef] [PubMed]
  15. MacLean, E.L. Unraveling the evolution of uniquely human cognition. PNAS 2016, 113(23), 6348–6354. [Google Scholar] [CrossRef] [PubMed]
  16. Margolis, E.; Laurence, S. Making sense of domain specificity. Cognition 2023, 240, 105583. [Google Scholar] [CrossRef] [PubMed]
  17. Marshall, P.J.; Houser, T.M.; Weiss, S.M. The Shared Origins of Embodiment and Development. Frontiers in Systems Neuroscience 2021, 15, 726403. [Google Scholar] [CrossRef] [PubMed]
  18. Miikkulainen, R. Neuroevolution insights into biological neural computation. Science 2025, 387(6735). [Google Scholar] [CrossRef] [PubMed]
  19. Moczek, A.P. When the end modifies its means: the origins of novelty and the evolution of innovation. Biological Journal of the Linnean Society 2023, 139, 433–440. [Google Scholar] [CrossRef]
  20. Nobukawa, S.; Bhattacharya, A.K.; Hirose, A. Editorial: Deep neural network architectures and reservoir computing. Frontiers in Artificial Intelligence 2025, 8, 1676744. [Google Scholar] [CrossRef] [PubMed]
  21. Pedersen, J.W.; Plantec, E.; Nisioti, E.; Barylli, M.; Montero, M.; Korte, K.; Risi, S. Hypernetworks That Evolve Themselves. arXiv, 2025; arXiv:2512.16406. [Google Scholar]
  22. Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning; 2015; pp. 2256–2265. [Google Scholar]
  23. Tanaka, G.; Yamane, T.; Heroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: a review. Neural Networks 2019, 115, 100–123. [Google Scholar] [CrossRef] [PubMed]
  24. te Vrugt, M. An introduction to reservoir computing. arXiv, 2024. [Google Scholar]
  25. Thomas, M.S.C.; McClelland, J.L. Connectionist models of cognition. In Cambridge Handbook of Computational Psychology; Sun, R., Ed.; Cambridge University Press: Cambridge, UK, 2008; pp. 23–58. [Google Scholar]
  26. Valle-Lisboa, J.C.; Pomi, A.; Mizraji, E. Multiplicative processing in the modeling of cognitive activities in large neural networks. Biophysical Reviews 2023, 15(4), 767–785. [Google Scholar] [CrossRef] [PubMed]
  27. van Hemmen, J.L.; Schuz, A.; Aertsen, A. Structural aspects of biological cybernetics: Valentino Braitenberg, neuroanatomy, and brain function. Biological Cybernetics 2014, 108, 517–525. [Google Scholar] [CrossRef] [PubMed]
  28. Verstraeten, D.; Schrauwen, B.; d’Haene, M.; Stroobandt, D. An experimental unification of reservoir computing methods. Neural Networks 2007, 20(3), 391–403. [Google Scholar] [CrossRef] [PubMed]
  29. Willmore, K.E. The Body Plan Concept and Its Centrality in Evo-Devo. Evolution: Education and Outreach 2012, 5, 219–230. [Google Scholar] [CrossRef]
Figure 1. A shape dataset (n=500) representing circles, squares, and triangles is shown. Output of a generic GAN is shown.
Figure 2. An example of a GAN/GA hybrid model showing generated and evolved shapes over time plotted in bivariate latent space.
Figure 3. Basic pipeline for generating and selecting motifs. This example shows a reservoir network (or a forward diffusion process) with the output function r(t). These provide r(t) to the GAN, which consists of a generator and discriminator. When the forward diffusion process produces r(t), the output of the GAN z(i) is provided to a reverse diffusion process. z(i) provides an input to our model of selection (genetic algorithm) with a fitness function z(t).
Figure 4. TOP: 50 agent morphologies generated by a reservoir network. BOTTOM: 100 agent morphologies generated by a forward diffusion process. For the body shell (nodes), sensors (red) and motors (blue) are arrayed in a triangular shape. The wiring (edges) are inhibitory (red) or excitatory (blue).
Figure 8. A bipartite hypergraph based on all Braitenberg Vehicles as proposed by Braitenberg (1984).
Figure 9. A phylogenetic model of Braitenberg Vehicles showing the process of addressable phylogenies.
Table 1. Differences between reservoir network and diffusion models.
| Criterion | Reservoir Networks | Diffusion Models |
| Primary output | High-dimensional state vectors used by a trained linear readout. | Denoised samples produced by iterative reverse diffusion. |
| Inference cost and latency | Low per step; single forward pass per timestep. | High: iterative sampling (many steps) unless using accelerated samplers. |
| Interpretability | Moderate: readout weights interpretable; reservoir dynamics opaque. | Low: deep denoisers are black boxes; intermediate noisy states are uninterpretable. |
| Robustness and stability | Good for stable temporal embeddings; sensitive to hyperparameters (spectral radius). | Sensitive to noise schedule and model capacity; sampling stability improved by recent methods. |
| Ideal tasks | Real-time control, low compute budgets, small datasets. | High-quality generative tasks, complex data distributions, conditional synthesis. |
Table 2. Categories for each complexity band, defined as 0.17 unit intervals from 0 to 1.
| Band | Description |
| 1 | Single sensor-effector connection |
| 2 | Simple sensor-motor couplings that perform simple behaviors (phototaxis) |
| 3 | Added selection or preference mechanisms |
| 4 | Concept-like or multi-stage behaviors |
| 5 | Chaining, rule use, or internal state dynamics |
| 6 | Foresight, planning, or complex internal models |
Table 3. A minimal canonical list of features by vehicle groups as proposed by Braitenberg (1984).
| Vehicle | Description | Theme | Wiring | Sign | Complexity |
| 1 | Getting Around | Locomotion | Uncrossed | Inhibitory | 2 |
| 2a | Fear/Aggression variant A | Tropotaxis | Crossed | Inhibitory | 2 |
| 2b | Fear/Aggression variant B | Tropotaxis | Uncrossed | Excitatory | 2 |
| 3a | Love/Liking variant A | Tropotaxis | Uncrossed | Excitatory | 2 |
| 3b | Love/Liking variant B | Tropotaxis | Crossed | Excitatory | 2 |
| 4 | Values and Special Tastes | Preferences | | | 3 |
| 5 | Logic | Logic | | | 3 |
| 6 | Selection | Selection | | | 3 |
| 7 | Concepts | Concepts | | | 4 |
| 8 | Space, Things, and Movements | Spatial | | | 4 |
| 9 | Shapes | Perception | | | 4 |
| 10 | Getting Ideas | Ideas | | | 5 |
| 11 | Rules and Regularities | Rules | | | 5 |
| 12 | Trains of Thought | Chains | | | 5 |
| 13 | Foresight | Foresight | | | 6 |
| 14 | Egotism and Optimism | Personality | | | 6 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.