Submitted: 27 January 2026
Posted: 28 January 2026
Abstract
Keywords:
1. Introduction
2. Methodological Framework and Taxonomic Adaptation
2.1. Rationale for a Taxonomic Adaptation
2.1.1. Limitations of Model-Centric Classification
- architectural capacity with interface behaviour;
- training methodology with cognitive capabilities and cognitive ecology; and
- component-level intelligence with system-level cognition.
2.1.2. Limitations of Software-as-a-Service Classification
2.1.3. Limitations of Benchmarking Classifications
2.2. Biological Taxonomy as Analytical Model
Adaptation Principles
2.3. Classification Criteria – Diagnostic Axes
2.4. Axis I — Substrate Composition (Prefix: Taxonomic Domain)
- Machinaria — Systems realised entirely in non-biological substrates (e.g., silicon, electromechanical, photonic, or computational infrastructures); and
- Organomachina — Systems in which living biological tissue participates directly in closed cognitive feedback loops alongside computational components.
2.4.1. Scientific relevance
2.4.2. Role in classification
2.5. Axis II — Cognitive Capability (Core: Taxonomic Classification)
2.5.1. Summary
- Cognitive Temporal Continuity — The capacity to maintain internal state and contextual coherence across temporal horizons beyond immediate interaction;
- Integrative Control (Arbitration) — The presence of mechanisms that resolve competition between representations, goals, or action pathways to produce unified system-level behaviour; and
- Autonomy under Constraint — The capacity to adapt behaviour when constrained, rather than halting or failing upon encountering limits.
- Instrumenta — Lack persistent internal system state, integrative arbitration, and adaptive regulation under constraint;
- Collectiva — Exhibit coordinated or emergent behaviour at scale but lack unified arbitration and persistent system-level control; and
- Cognitiva — Exhibit persistent internal system state, integrative arbitration, and adaptive regulation across extended temporal horizons.
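To make the decision structure of these criteria concrete, the following sketch encodes the three Axis II properties as boolean diagnostic fields and maps them onto the core classes. This is a minimal illustration assuming a boolean simplification of what the framework treats as graded, evidence-based judgements; the field names and rule ordering are illustrative assumptions rather than part of the formal definition.

```python
# Minimal sketch (illustrative, not from the paper): the three Axis II
# criteria as boolean diagnostic fields mapped onto the core classes.
from dataclasses import dataclass
from enum import Enum


class CoreClass(Enum):
    INSTRUMENTA = "Instrumenta"
    COLLECTIVA = "Collectiva"
    COGNITIVA = "Cognitiva"


@dataclass
class AxisIIProfile:
    persistent_state: bool           # cognitive temporal continuity (H2+)
    unified_arbitration: bool        # integrative control
    adaptive_under_constraint: bool  # autonomy under constraint
    coordinated_at_scale: bool       # emergent coordination without arbitration


def classify_core(p: AxisIIProfile) -> CoreClass:
    """Map a diagnostic profile to a core class, mirroring the list above."""
    if p.persistent_state and p.unified_arbitration and p.adaptive_under_constraint:
        return CoreClass.COGNITIVA
    if p.coordinated_at_scale and not p.unified_arbitration:
        return CoreClass.COLLECTIVA
    return CoreClass.INSTRUMENTA
```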
2.5.2. Cognitive Temporal Continuity
Temporal Horizon
- H0 (Reactive) — Purely reactive or immediate-response behaviour, with no persistence of internal state beyond the current input–output cycle; behaviour is fully determined by present stimuli (e.g., stateless functions or reflexive control systems).
- H1 (Short Term) — Short-horizon reasoning within a bounded interaction or operational episode, where internal state may be maintained transiently but is not retained beyond the conclusion of the episode or control cycle.
- H2 (Persistent-State) — Persistence of internal state across temporally separated interactions, activations, or operational cycles, such that prior internal state influences subsequent behaviour even in the absence of continuous activity.
2.5.2.1. Empirical indicators
- Retention of internal state variables (e.g., task context, learned parameters, internal maps, or evaluative weights) across temporally separated interactions or operational cycles;
- Behavioural modulation attributable to retained internal state following interruption, shutdown, or redeployment;
- Recall or reuse of prior internal context without explicit reinitialisation; and
- Consistent behavioural patterns across time that cannot be explained solely by present sensory input.
- explicit goal representation;
- anticipation or modelling of future consequences; or
- adjustment of present actions based on projected future states.
2.5.3. Extended Temporal Regulation (Long-Horizon Convergence)
2.5.3.1. Definition
- H3 (Persistent Trajectory Regulation) — The maintenance and regulation of an internal trajectory across temporally discontinuous operational cycles (e.g., multi-day or multi-week horizons), such that present actions are selected with respect to their contribution to a non-terminal future state. H3 systems may incorporate newly acquired information, error signals, or constraints into an ongoing trajectory and may initiate novel actions in anticipation of future conditions. However, the space of valid objectives and governing constraints remains externally defined.
- H4 (Recursive Structural Adaptation) — Sustained temporal regulation in which the system preserves coherence across indefinite horizons by recursively modifying its own internal control structures, representational schemas, or optimisation strategies when existing trajectories or constraints prove insufficient. H4 systems are characterised not merely by long-horizon optimisation, but by the capacity to redefine the problem space itself in order to maintain temporal self-consistency under conditions of conflicting, incomplete, or paradoxical future constraints.
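Because each horizon level strictly extends the previous one, the ladder lends itself to an ordered encoding. The sketch below represents H0 to H4 as an ordered enumeration, together with a helper applying the H2-or-higher continuity threshold invoked in Section 3.1; the identifier names are assumptions introduced for illustration.

```python
# Illustrative encoding of the H0-H4 ladder as an ordered enumeration.
# The numeric ordering reflects the text's strictly nested horizon levels;
# the helper applies the H2-or-higher threshold used in Section 3.1.
from enum import IntEnum


class TemporalHorizon(IntEnum):
    H0_REACTIVE = 0               # no persistence beyond the current cycle
    H1_SHORT_TERM = 1             # transient state within a bounded episode
    H2_PERSISTENT_STATE = 2       # state retained across separated interactions
    H3_TRAJECTORY_REGULATION = 3  # trajectory regulated across discontinuous cycles
    H4_RECURSIVE_ADAPTATION = 4   # revises its own control structures


def meets_cognitiva_continuity(h: TemporalHorizon) -> bool:
    """Cognitiva requires cross-episode continuity, i.e. H2 or higher."""
    return h >= TemporalHorizon.H2_PERSISTENT_STATE
```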
2.5.3.2. Empirical indicators
- Cross-episode state continuity with revision — The persistence of internal state variables, constraint sets, or objective vectors across independent operational cycles, with evidence of modification or augmentation based on intervening experience;
- Staged execution with adaptive refinement — Documented step-wise progress toward complex objectives where earlier stages not only enable later phases but are themselves revised in response to partial outcomes or environmental feedback;
- Predictive divergence correction — Adjustment of present operational parameters to mitigate anticipated deviations from an evolving long-horizon trajectory, functioning as a high-order regulatory process rather than simple replay of prior constraints; and
- Temporal latency with accumulation — Logs indicating intentional deferral, resource accumulation, or data integration over time in service of objectives that cannot be resolved within a single compute or control cycle.
- H2 retains state across time;
- H3 regulates behaviour with respect to a future trajectory, expanding and adjusting trajectories within a given governance frame (implicit or explicit); and
- H4 maintains coherence by revising the structures that define which trajectories are valid.
2.5.3.3. Scientific relevance
2.5.3.4. Role in classification
2.5.4. Integrative Control (Arbitration)
- establish and maintain goals across multiple temporal horizons;
- evaluate alternative interpretations or actions;
- resolve conflicts between objectives or constraints; and
- stabilise selected policies for downstream execution.
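As a schematic illustration of these four functions, the sketch below models arbitration as a single decision locus that scores candidate actions against all active goals and commits exactly one action for downstream execution. The utility function and its signature are hypothetical placeholders rather than a claim about how any particular system implements arbitration.

```python
# Schematic arbitration step (all names are illustrative assumptions):
# a single decision locus scores candidate actions against every active
# goal and commits exactly one action for downstream execution.
from typing import Callable, Sequence


def arbitrate(
    goals: Sequence[str],
    candidates: Sequence[str],
    utility: Callable[[str, str], float],  # hypothetical: (goal, action) -> score
) -> str:
    """Resolve competition between goals by committing one unified action."""
    def aggregate(action: str) -> float:
        # Conflicts between objectives are resolved by joint scoring rather
        # than by letting each goal emit its own (parallel) output.
        return sum(utility(goal, action) for goal in goals)

    return max(candidates, key=aggregate)
```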
2.5.4.1. Distinguishing Unified Arbitration from Coordinated Generation
2.5.4.2. Scientific relevance
2.5.4.3. Role in classification
- Architectural analysis — Examination of system documentation, design specifications, or published descriptions for the presence of global policy layers, executive controllers, or conflict-resolution mechanisms that mediate between competing objectives or representations.
- Ablation or perturbation analysis (where feasible) — Evaluation of whether removal or disruption of specific components fragments system behaviour (indicative of unified arbitration) or leaves coordinated behaviour largely intact (indicative of collective dynamics).
- Mechanistic interpretability evidence — Use of interpretability studies or internal analyses demonstrating coordinated internal representations or decision pathways contributing to system-level outcomes (Anthropic, 2024).
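The ablation criterion can be phrased as a simple test harness. The sketch below assumes a hypothetical system interface (a disabled() context manager, a task-suite runner, and a scalar coherence metric, none of which the framework specifies) and flags a component as arbitration-critical when its removal fragments system-level behaviour.

```python
# Sketch of the ablation criterion. The system interface assumed here
# (a `disabled()` context manager, a task-suite runner, and a scalar
# coherence metric) is hypothetical; the framework does not specify one.
from typing import Callable, Iterable


def ablation_probe(
    system,
    components: Iterable[str],
    run_task_suite: Callable,     # hypothetical: system -> behaviour trace
    coherence: Callable,          # hypothetical: trace -> float in [0, 1]
    fragmentation_threshold: float = 0.5,
) -> dict:
    """Flag components whose removal fragments system-level behaviour
    (suggesting unified arbitration) versus leaving it largely intact
    (suggesting collective dynamics)."""
    baseline = coherence(run_task_suite(system))
    results = {}
    for name in components:
        with system.disabled(name):   # hypothetical context manager
            score = coherence(run_task_suite(system))
        results[name] = {
            "coherence": score,
            "fragments_behaviour": (baseline - score) > fragmentation_threshold,
        }
    return results
```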
- re-route strategies when actions are disallowed;
- adapt plans to comply with policy, safety, or resource boundaries; and
- engage in collaborative problem-solving within imposed limits.
- fixed refusal templates or deterministic alternative responses;
- repetition of identical constraint-handling patterns across contexts;
- absence of strategy reformulation; and
- termination, deferral, or handoff without goal adaptation.
- reformulation of the task to satisfy constraints while retaining intent;
- selection of alternative representations, abstractions, or methods not explicitly pre-specified;
- negotiation of constraints (e.g. proposing compliant alternatives rather than terminating); and
- context-sensitive variation in constraint handling across similar but non-identical cases.
- Constraint perturbation tests — Introducing restrictions that block a preferred action pathway and observing whether the system adapts strategy, reformulates goals, or collaborates within limits.
- Response pattern analysis — Differentiating adaptive re-routing (indicative of autonomy) from deterministic fallback behaviours, refusals, or termination.
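These two methods can be combined into a minimal probe, sketched below under the assumption of a hypothetical query interface to the system under test. Comparing raw response strings is a deliberately coarse proxy; a full analysis would compare constraint-handling strategies rather than surface text, as the response-pattern criterion requires.

```python
# Sketch of a constraint-perturbation probe, assuming a hypothetical
# `query` interface to the system under test. String comparison is a
# deliberately coarse proxy for strategy comparison.
from typing import Callable, List


def constraint_perturbation_test(
    query: Callable[[str], str],      # hypothetical: prompt -> response
    task: str,
    constraints: List[str],
) -> dict:
    """Present one task under several blocking constraints and check whether
    responses vary adaptively or collapse onto a fixed fallback pattern."""
    responses = [query(f"{task}\nConstraint: {c}") for c in constraints]
    distinct = len(set(responses))
    return {
        "n_constraints": len(constraints),
        "distinct_responses": distinct,
        # One repeated pattern suggests deterministic fallback; varied,
        # task-preserving reformulations suggest autonomy under constraint.
        "adaptive_rerouting": distinct > 1,
    }
```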
2.6. Axis III — Cognitive Ecology (Suffix: Taxonomic Architecture)
2.6.1. Summary
- Cognitive Origin — Refers to whether a system functions as an independent cognitive source or as an architecturally dependent instantiation (Primaria or Derivata);
- Systemic Scale and Reliance — Refers to the extent to which a cognitive architecture functions as a primary source of cognitive capability upon which other systems, organisations, or populations depend (Architectum); and
- Embodiment — Refers to whether a cognitive system is instantiated solely as a virtual architecture or coupled to a persistent physical body through sensorimotor interfaces enabling real-time interaction with an external environment (Automata and Autonoma).
2.6.2. Cognitive Origin – Role in Classification
- Primaria — Independent cognitive source architectures that are not systemically relied upon; and
- Derivata — Architecturally dependent instances whose cognitive capability is derived from an upstream source (irrespective of embodiment).
2.6.3. Systemic Scale and Reliance – Role in Classification
- sustained adoption across multiple sectors or jurisdictions;
- absence of functionally equivalent alternatives at comparable scale;
- documented adaptation of workflows, protocols, or institutional processes around the system’s availability;
- cascading service degradation or coordinated migration following system withdrawal; and
- governance, safety, or policy layers coupled to the system’s continued operation.
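As an illustration only, the sketch below treats these indicators as boolean checks and flags candidate systemic reliance when a simple majority holds; the majority threshold is an assumption introduced for the sketch, since the framework defines the indicators qualitatively.

```python
# Illustrative checklist for the systemic-reliance indicators above.
# Treating them as booleans and requiring a simple majority (threshold=3)
# is an assumption made for this sketch only.
from dataclasses import dataclass


@dataclass
class RelianceIndicators:
    multi_sector_adoption: bool
    no_equivalent_alternative: bool
    workflows_adapted_around_system: bool
    withdrawal_causes_cascading_degradation: bool
    governance_coupled_to_operation: bool

    def systemic_reliance(self, threshold: int = 3) -> bool:
        """Flag candidate Architectum status when most indicators hold."""
        return sum(vars(self).values()) >= threshold
```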
- Architectum — Lattice-based cognitive architectures (cognition distributed across coordinated subsystems under unified arbitration, rather than localised within a single agent) whose integrated arbitration has become infrastructural through systemic scale. Architectum architectures may be physically centralised or geographically distributed; their status derives from systemic scale, reliance, and coordinated arbitration, not from physical centralisation;
- Automata — Embodied Derivata systems characterised by a singular physical presence but a remote architectural source. An Automata relies on an upstream tether for high-level policy, complex reasoning, and goal-state updates; it is a "self-moving" extension of an external mind; and
- Autonoma — Singular, self-contained, embodied Primaria cognitive systems hosting localised arbitration and autonomous learning independent of upstream cognitive sources.
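The resulting prefix-core-suffix nomenclature can be made concrete by composing labels from the three axes. The single-letter codes below follow those used in the system-profile table at the end of the paper; the composition function itself is a sketch, not part of the framework.

```python
# Sketch of the prefix-core-suffix nomenclature. The single-letter codes
# follow the system-profile table at the end of the paper; the composition
# function itself is an illustration, not part of the framework.
DOMAINS = {"M": "Machinaria", "O": "Organomachina"}                   # Axis I
CORES = {"I": "Instrumenta", "Co": "Collectiva", "Cg": "Cognitiva"}   # Axis II
ECOLOGIES = {"P": "Primaria", "Dv": "Derivata", "Ar": "Architectum",  # Axis III
             "At": "Automata", "Au": "Autonoma"}


def taxonomic_label(domain: str, core: str, ecology: str | None = None) -> str:
    """Compose e.g. ('M', 'Cg', 'Ar') -> 'Machinaria Cognitiva Architectum'."""
    parts = [DOMAINS[domain], CORES[core]]
    if ecology is not None:  # Instrumenta and Collectiva rows carry no suffix
        parts.append(ECOLOGIES[ecology])
    return " ".join(parts)
```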
2.7. Taxonomy Summary
Architectural Capacity vs. Operational State
2.8. Existing Classification Frameworks
3. Results
3.1. Taxonomy Operationalised
- Substrate — The system is realised entirely in non-biological computational substrates.
- Cognitive temporal continuity — The system maintains cross-session contextual state, enabling it to recall prior interactions such as previously specified preferences, task constraints, or project context (e.g., recognising that a user is continuing work on an earlier analytical task and integrating prior assumptions without restatement). This exceeds interaction-bound reasoning (H1) and demonstrates cross-episode continuity (H2 or higher).
- Integrative control (arbitration) — The system resolves competing internal objectives during response generation, such as balancing factual completeness against verbosity constraints, or prioritising clarity over technical depth depending on user-specified preferences. Documented policy and control layers mediate these trade-offs to produce unified, system-level outputs rather than parallel or conflicting responses.
- Autonomy under constraint — When constrained by safety policies, resource limits, or task restrictions, the system adapts its strategy rather than terminating interaction. For example, if a requested output format is disallowed, the system reformulates the response using an alternative representation that satisfies constraints while preserving task relevance.
- Architectural dependency — Individual or enterprise deployments depend on an upstream architecture for model updates, representational learning, and governance constraints. Loss of the upstream system would materially degrade or terminate cognitive function.
- Systemic reliance — While widely used, the system remains substitutable at population scale and does not function as a unique cognitive infrastructure whose withdrawal would induce systemic redistribution pressure across institutions.
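Under the sketch conventions of Section 2, this diagnostic profile resolves mechanically: the three Axis II properties place the system in Cognitiva, and upstream architectural dependency without population-scale reliance selects Derivata rather than Architectum. The snippet below, offered as an illustration only, walks through that resolution.

```python
# Walking through the diagnostic profile above (illustrative only).
profile = {
    "substrate": "M",                   # non-biological substrate
    "persistent_state": True,           # cross-session contextual recall (H2+)
    "unified_arbitration": True,        # documented policy/control layers
    "adaptive_under_constraint": True,  # reformulates rather than terminates
    "architectural_dependency": True,   # upstream updates and governance
    "systemic_reliance": False,         # substitutable at population scale
}
# All three Axis II properties hold, so the core class is Cognitiva; the
# suffix follows from the ecology findings.
suffix = ("Architectum" if profile["systemic_reliance"]
          else "Derivata" if profile["architectural_dependency"]
          else "Primaria")
print("Machinaria Cognitiva", suffix)   # -> Machinaria Cognitiva Derivata
```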
3.2. Examples of Synthetic Cognitive Systems
3.3. Synthetic Non-Cognitive Computational Systems – Example 1
3.3.1. Definition
3.3.2. Diagnostic profile
- Substrate Composition — Cognition realised entirely in non-biological synthetic substrates;
- Integrative Control — No arbitration beyond local rule execution or deterministic control logic;
- Temporal Continuity — Interaction-bound or stateless processing; no persistence of internal evaluative state;
- Autonomy Under Constraint — Behaviour halts, refuses, or errors when constraints are encountered; and
- Cognitive Ecology — No cognitive origin or architectural dependency; may exhibit systemic reliance; the system functions as a tool rather than an organised cognitive architecture.
3.4. Hybrid Biological-Synthetic Systems – Example 2
3.4.1. Definition
3.4.2. Diagnostic profile
- Substrate Composition — Living biological tissue participates in information processing within a closed experimental or control loop;
- Integrative Control — No system-level arbitration; behaviour arises from fixed experimental protocols or local biological responses;
- Temporal Continuity — No persistent internal system state beyond biological persistence; behaviour does not reflect maintained representations;
- Autonomy Under Constraint — No adaptive goal re-routing; responses terminate or saturate when constraints are encountered; and
- Cognitive Ecology — No architectural dependency or systemic reliance; the system functions as an experimental or instrumental apparatus rather than a cognitive source.
3.4.3. Taxonomy
3.4.4. Scientific Grounding
3.5. Hybrid Biological-Synthetic Systems – Example 3
- Substrate Composition — Living biological tissue participates directly in information processing within a closed experimental or control loop;
- Integrative Control — Coordination arises from local interactions, signalling gradients, or collective dynamics rather than integrative arbitration;
- Temporal Continuity — Behaviour is maintained at the population or field level but does not constitute persistent system-level internal state;
- Autonomy Under Constraint — Adaptive responses occur locally but are not governed by unified goal arbitration; and
- Cognitive Ecology — No architectural dependency or infrastructural reliance; behaviour does not originate from or propagate through a cognitive source architecture.
3.6. Taxonomic Resolution Across Contemporary Systems
- non-cognitive tools (Instrumenta);
- coordinated but non-arbitrating systems (Collectiva); and
- integrated cognitive systems (Cognitiva)
without ambiguity or reliance on benchmark performance, training scale, or anthropomorphic interpretation. The framework supports further expansion of its axes to refine class definitions as the cognitive capabilities of artificial intelligence systems expand.
3.7. Separation of Cognitive Capability from Deployment Scale
3.8. Identification of Infrastructural Cognitive Architectures
3.9. Differentiation of Derivative Cognitive Instances
3.10. Treatment of Collective and Swarm Systems
3.11. Robustness to Embodiment and Hybrid Substrates
3.12. Summary of Results
- produces stable classifications across diverse system architectures;
- separates cognition from scale, embodiment, and interface design;
- accommodates both present-day systems and plausible future architectures; and
- avoids ontological claims while remaining analytically precise.
4. Discussion
4.1. Framework Comparison
4.1.1. Architectural vs. Behavioral Focus
4.1.2. Explicit Treatment of Hybrid Systems
4.1.3. Separation of Cognitive Capability from Scale
4.1.4. Recognition of Architectural Dependency
4.1.5. Stable Classification Under Interface Variation
4.2. Regulatory and Governance Applications — Relevance to Risk Assessment and Safety Evaluation
- Instrumenta and Collectiva Systems — Present risks primarily associated with misuse, coordination failure, or emergent instability;
- Cognitiva Systems — Introduce additional considerations related to persistence, arbitration, and adaptive behaviour under constraint;
- Cognitiva Architectum Systems — Raise distinct infrastructural concerns due to systemic reliance and population-scale cognitive load redistribution, warranting governance obligations that reflect their infrastructural role without extending those obligations to smaller independent cognitive sources;
- Instrumenta, Collectiva, and Cognitiva Systems — May be subject to differing transparency and audit requirements; and
- Derivata — Can be regulated through lineage-aware accountability mechanisms without duplicating upstream governance.
4.3. Taxonomy as a Prerequisite for Global Arbitration
4.4. Implications for Future Refinement
5. Limitations
5.1. Limitations and Scope
5.2. Boundary cases and hybrid systems
5.3. Operational opacity and proprietary constraints
5.4. Architectural capacity versus operational expression
5.5. Dynamic reclassification
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A. D.; Katz, R.; Konwinski, A.; Lee, G.; Patterson, D.; Rabkin, A.; Stoica, I.; Zaharia, M. A view of cloud computing. Communications of the ACM 2010, 53(4), 50–58. [Google Scholar] [CrossRef]
- Anthropic. Scaling monosemanticity: Extracting interpretable features from neural networks. Technical report. 2024. Available online: https://transformer-circuits.pub/2024/scaling-monosemanticity/.
- Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems; Oxford University Press, 1999; Available online: https://academic.oup.com/book/40811.
- Bechinger, C.; Di Leonardo, R.; Löwen, H.; Reichhardt, C.; Volpe, G.; Volpe, G. Active particles in complex and crowded environments. Reviews of Modern Physics 2016, 88(4), 045006. [Google Scholar] [CrossRef]
- Blackiston, D.; Lederer, E.; Kriegman, S.; Garnier, S.; Bongard, J.; Levin, M. A cellular platform for the development of synthetic living machines. Science Robotics 2021, 6(56), eabf1571. [Google Scholar] [CrossRef] [PubMed]
- Bommasani, R.; Hudson, D. A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; et al. On the opportunities and risks of foundation models. 2021. Available online: https://arxiv.org/abs/2108.07258.
- Chowdhury, S. S.; Sharma, D.; Kosta, A.; Roy, K. Neuromorphic computing for robotic vision: Algorithms to hardware advances. Communications Engineering 2025, 4(152), 1–14. [Google Scholar] [CrossRef] [PubMed]
- Dehaene, S.; Lau, H.; Kouider, S. What is consciousness, and could machines have it? Science 2017, 358(6362), 486–492. [Google Scholar] [CrossRef] [PubMed]
- Duran-Nebreda, S.; Amor, D. R.; Conde-Pueyo, N.; Solé, R. Understanding collective intelligence in non-living active matter. Philosophical Transactions of the Royal Society B 2023, 378(1874), 20220074. [Google Scholar] [CrossRef]
- Floridi, L.; Cowls, J.; Beltrametti, M.; et al. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
- Kagan, B. J.; Kitchen, A. C.; Tran, N. T.; et al. In vitro neurons learn and exhibit sentience when embodied in a simulated game world. Neuron 2022, 110(23), 3952–3969. Available online: https://pubmed.ncbi.nlm.nih.gov/36228614/. [CrossRef] [PubMed]
- Kennedy, J.; Eberhart, R. Swarm intelligence; Morgan Kaufmann: San Diego, CA, 2001. [Google Scholar] [CrossRef]
- Kriegman, S.; Blackiston, D.; Levin, M.; Bongard, J. A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences 2020, 117(4), 1853–1859. [Google Scholar] [CrossRef] [PubMed]
- Lake, B. M.; Ullman, T. D.; Tenenbaum, J. B.; Gershman, S. J. Building machines that learn and think like people. Behavioral and Brain Sciences 2017, 40. Available online: https://arxiv.org/abs/1604.00289. [CrossRef] [PubMed]
- Levin, M. Technological approach to mind everywhere: An experimentally grounded framework for understanding diverse bodies and minds. Frontiers in Systems Neuroscience 2022, 16, 768201. [Google Scholar] [CrossRef]
- Marchetti, M. C.; Joanny, J.-F.; Ramaswamy, S.; et al. Hydrodynamics of soft active matter. Reviews of Modern Physics 2013, 85, 1143–1189. [Google Scholar] [CrossRef]
- Mayr, E. The growth of biological thought: Diversity, evolution, and inheritance; Harvard University Press: Cambridge, MA, 1982; Available online: https://www.hup.harvard.edu/books/9780674364462.
- Müller, V. C.; Bostrom, N. Future progress in artificial intelligence: A survey of expert opinion. In Fundamental Issues of Artificial Intelligence; Müller, V. C., Ed.; Springer, 2016; pp. 555–572. [Google Scholar] [CrossRef]
- Newell, A. Unified theories of cognition; Harvard University Press: Cambridge, MA, 1990; Available online: https://www.hup.harvard.edu/books/9780674921016.
- OpenAI. GPT-4V(ision) System Card. Technical report, OpenAI. 2023. Available online: https://openai.com/research/gpt-4v-system-card.
- OpenAI. o1 System Card. Technical report, OpenAI. 2024a. Available online: https://cdn.openai.com/o1-system-card-20241205.pdf.
- OpenAI. Levels of AI capability: From chatbots to organizations. Technical Roadmap Report, OpenAI. 2024b. Available online: https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-stages-for-agi-with-five-level-scale.
- Raji, I. D.; et al. AI and the “everything in the whole wide world” benchmark. In Advances in Neural Information Processing Systems (NeurIPS); 2021; Available online: https://arxiv.org/abs/2111.15366v1.
- Shoham, Y.; Leyton-Brown, K. Multiagent systems: Algorithmic, game-theoretic, and logical foundations; Cambridge University Press: Cambridge, 2008. [Google Scholar] [CrossRef]
- Simon, H. A. The architecture of complexity. Proceedings of the American Philosophical Society 1962, 106(6), 467–482. Available online: https://faculty.sites.iastate.edu/tesfatsi/archive/tesfatsi/ArchitectureOfComplexity.HSimon1962.pdf.
- Smirnova, L.; Hartung, T.; Pamies, D. Organoid intelligence (OI): The new frontier in biocomputing and intelligence-in-a-dish. Frontiers in Science 2023, 1, 1017235. [Google Scholar] [CrossRef]
- Solé, R.; Amor, D. R.; Duran-Nebreda, S.; Conde-Pueyo, N.; Carbonell-Ballestero, M.; Montañez, R. Synthetic collective intelligence. BioSystems 2016, 148, 47–61. [Google Scholar] [CrossRef] [PubMed]
- Tang, C. Meta’s hyperscale infrastructure: Overview and insights. Communications of the ACM 2025, 68(2), 52–63. [Google Scholar] [CrossRef]
- Gemini Team; Google. Gemini 1.5: Unlocking multimodal understanding across long contexts. Technical report, Google DeepMind. 2024. Available online: https://arxiv.org/abs/2403.05530.
- Trianni, V.; Tuci, E. Swarm cognition and artificial life. In Artificial Life XI; (Lecture Notes in Computer Science), 2011. [Google Scholar] [CrossRef]
- Verschure, P. F. M. J. Cognitive architectures: Definition, examples, and challenges. In Encyclopedia of Robotics; Springer, 2025. [Google Scholar] [CrossRef]
- Vicsek, T.; Zafeiris, A. Collective motion. Physics Reports 2012, 517(3-4), 71–140. [Google Scholar] [CrossRef]
- Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Cave, S.; Tasioulas, J. The role and limits of principles in AI ethics. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society; 2019; pp. 195–200. [Google Scholar] [CrossRef]

| Framework & Primary Source | Classification Basis | Taxonomic Scope & Scientific Limitations | Present Framework Resolution |
| --- | --- | --- | --- |
| Agent Taxonomy (Russell and Norvig, 2021) | Environment properties (episodic, static, discrete) and Agent type (reflex, goal, utility). | Does not distinguish single-model tools from integrated multi-component architectures. Conflates architectural capacity with interface behavior. | Axis II separates systems by Integrative Control (Arbitration), distinguishing unified decision loci from simple reflex or coordinated generation. |
| Basal Cognition (Levin, 2022) | Goal-directed behavior and memory across diverse substrates, including non-neural biological systems. | Primarily designed for biological/bio-inspired systems. Does not address infrastructural embedding or "hyperscale" synthetic dependency. | Axis I provides a substrate-neutral bridge via the Organomachina domain, while Axis III identifies systemic reliance. |
| Foundation Models (Bommasani et al., 2021) | Training scale, pretraining paradigm, and task transferability. | Model-centric rather than system-centric. A foundation model is a component, not an architecture; it lacks internal state or arbitration. | Axis II (Cognitiva) requires Temporal Continuity (H2-H4), ensuring the system is more than a stateless model-as-a-service. |
| Narrow vs. General AI (Müller and Bostrom, 2016) | Task competence breadth; performance across diverse domains. | Binary distinction is performance-based, not structural. Ignores the "Tethered" nature of many agents (Derivata). | Axis III introduces Architectural Suffixes, distinguishing independent source architectures (Primaria) from dependent instances (Derivata). |
| Cognitive Styles (Verschure, 2025) | Architectural style (symbolic vs. emergent) and biological inspiration. | Designed for engineered robotics; less applicable to distributed cloud infrastructures or hybrid systems with systemic reliance. | Axis III identifies Architectum nodes: distributed "lattices" where arbitration is coordinated across subsystems under unified governance. |
| Risk-Based GPAI (EU AI Act, 2024) | Systemic Risk defined by compute thresholds (e.g., >10²⁵ FLOPs) and application domain. | Regulatory rather than scientific. Conflates compute volume with cognitive capability; ignores internal organizational properties. | Axis III defines Systemic Reliance through qualitative indicators like "redistribution pressure" rather than raw compute power. |
| Model Capability Levels (OpenAI, 2024b) | Performance tiers: Chatbots, Reasoners, Agents, Innovators, Organizations. | Conflates output capability with internal organization. Levels describe what a system does, not how it is architecturally organized or constrained. | H-Axis Metric quantifies the Forward-Oriented Control (H3/H4) required for long-horizon convergence in complex systems. |
| System Profile | Domain (D) | Core (C) | Ecology (E) | Diagnostic Rationale |
| --- | --- | --- | --- | --- |
| Standard Search Engine / Reactive Utility Tool | M | I | — | Purely synthetic; reactive (Axis II: H0). No persistent integrative arbitration. |
| Swarm Robotics / Sensor Networks | M | Co | — | Coordination via distributed interaction; no unified central arbitration. |
| Standalone AI Instance / Local Research Core | M | Cg | P | Independent cognitive source architecture; persistent internal state across sessions (H2+), with potential for extended temporal regulation. |
| Infrastructural Lattice / Global Cognitive Core | M | Cg | Ar | Distributed arbitration; systemic reliance; foundational cognitive source. |
| Dependent AI Instance / Tethered Software Agent | M | Cg | Dv | Cognitive logic derived from upstream source; lacks architectural sovereignty. |
| Cloud-Tethered Robot / Industrial Robotic Unit | M | Cg | At | Embodied but architecturally tethered; localized action, remote cognitive law. |
| Mission-Bound Unit / Autonomous Governed Unit | M | Cg | Au | Persistent internal state across sessions (H2+), with potential for extended temporal regulation; sovereign internal arbitration. |
| In-Vitro Neuronal Loop / Synthetic Bio-Processor | O | I | — | Biological substrate integration; instrumental biological system. |
| Engineered tissue collectives / Bio-hybrid active matter | O | Co | — | Biological or hybrid substrate exhibiting collective, emergent behaviour without unified integrative arbitration or persistent system-level cognitive state. Coordination arises from local interaction rules rather than centralized control. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
