Submitted:
17 May 2025
Posted:
19 May 2025
You are already at the latest version
Abstract
Keywords:
1. Introduction
1.1. First Research Phase: 123BNAI Formula – Digital DNA for AI Cloning


- Transfer Learning Coefficient: This parameter is crucial as it measures the ability of an AI model to transfer knowledge learned from one task to another. Transfer learning is vital for reducing the time and resources needed to train new models, thus allowing for more efficient cloning.
- Meta-Learning Coefficient: Meta-learning allows a model to learn how to learn, enhancing its ability to quickly adapt to new tasks or domains. This is critical for cloning models that can evolve and improve autonomously.
- Regularization Factor: Including this term ensures that the cloned model does not become overly complex, avoiding overfitting while maintaining good generalization. This is important to ensure that the clone retains the fundamental characteristics of the original AI without losing its effectiveness.
| Parameter | Operational Definition | Method/Metric |
|---|---|---|
| A – Adaptability | AI ’ s ability to adjust to domain variations | Accuracy on out-of-distribution datasets |
| Eg – Generational Evolution | Average percentage performance improvement between model versions | Comparison on standard benchmarks |
| G – Model Size | Total number of parameters | ![]() |
| Gc – Growth Factor | Rate of increase in model complexity over time | Percentage change in the number of parameters |
| H – Learning Entropy | Amount of information acquired during training | Shannon entropy |
| I – AI Interconnection | Model ’ s ability to integrate knowledge from other models (e.g., via transfer learning) | Performance percentage difference |
| LL – Current Learning Level | Model convergence state | Number of epochs for loss stabilization |
| P – Computational Power | Resources used during training and inference | FLOPS |
| Q – Response Precision | Model ’ s accuracy in providing correct outputs | Accuracy/F1-score |
| R – Robustness | Stability against perturbations | Performance under adversarial attacks |
| S – Current State | Aggregated evaluation of the model ’ s current performance | Normalized score |
| U – Autonomy | AI ’ s operational independence from external resources | Ratio of local to external operations |
| V – Response Speed | Model ’ s average inference time | Measured in milliseconds |
| T – Tensor Dimensionality | Architectural complexity expressed in number of layers | Normalization against standard models |
| O – Computational Efficiency | Ratio between achieved performance and computational cost (energy, time) | Accuracy/FLOPS |
| B – Bias | Systematic disparities in performance | Fairness metrics |
| C – Model Complexity | Overall complexity measure of the architecture | Total number of operations |
| E – Learning Error | Average training error | Normalized loss |
| M – Memory Capacity | Amount of information the model can store and retrieve | Dimension of state vectors |
| t – Evolutionary Time | Time elapsed since the last significant model update | Expressed in time units |

- Extract Original BNAI Profile
- Train the BNAI Network
- Generate BNAI Clone
- Validation and Iteration
- Positive capabilities and performance (adaptability, generational evolution, model size, etc.),
- Effects of transfer learning and meta-learning through
- A regularization mechanism via
, - A forgetting decay factor that penalizes models with high evolutionary time relative to their memory capacity.
1.2. Second Research Phase : 8BNAI: Ethics, Safety, and Responsible Innovation
- Computational stability, through the use of logarithmic transformations and sigmoid functions that mitigate the effects of extreme values.
- Clarity and coherence, by employing explicit nomenclature and decomposing complex parameters into subcomponents.
- Adherence to ethical and security requirements, integrating parameters that evaluate transparency, fairness, and privacy protection.
- Dynamic balancing, achieved through the assignment of weights (defined by consensus or Bayesian optimization) and the use of balancing coefficients.
- Robust empirical validation, supported by sensitivity analysis, unit tests, and diverse case studies.
1.3. 9BNAI: 10Ethics, Safety, and Responsible Innovation

1.4. Formula Structure
where:
- -
- Why: AI systems increasingly interact with humans in decision-making processes that can have significant ethical implications. Ensuring fairness, transparency, and privacy in AI operations is not just a regulatory requirement but a moral imperative to prevent bias, discrimination, or privacy breaches.
- -
- Integration: The ethical score, Eeth, quantifies how well the AI adheres to these principles, promoting models that not only perform well but do so without compromising ethical standards.
- ○
- Why: As AI models become more integral to various applications, their vulnerability to adversarial attacks or misuse grows. Ensuring the security of AI systems is critical to maintain trust, protect users, and safeguard data integrity.
- ○
- Integration: The security score, Ssafe, evaluates the robustness of the AI against adversarial inputs and its ability to handle out-of-distribution data, ensuring that the AI remains reliable and secure in its operations.
-
A – Adaptability:Definition: The ability of AI to modify its behavior in response to domain variations or degraded conditions.Measurement: Percentage variation in performance (accuracy, F1-score) on out-of-distribution datasets compared to a baseline.
-
Eg – Generational Evolution:Definition: Average percentage increase in performance between successive versions of the model.Measurement: Benchmark comparisons (e.g., ImageNet for vision or GLUE for NLP) between versions (V1 vs. V2).
-
G – Model Size:Definition: Represents computational capacity based on the total number of parameters.

-
H – Learning Entropy:Definition: Amount of information acquired during training.Measurement: Difference between input data entropy and residual entropy (e.g., through cross-entropy calculation).
-
I– AI Interconnection:Definition: Model’s ability to integrate knowledge from other models (e.g., via transfer learning).Measurement: Percentage difference in performance between a model trained from scratch and one using transfer learning.
-
L– Current Learning Level:Definition: Model convergence state.Measurement: Number of epochs needed to stabilize the loss or the final normalized loss value.
-
P– Computational Power:Definition: Resources used during training and inference.Measurement: FLOPS (theoretical and actual) and FLOPS/performance ratio.
-
Q– Response Precision:Definition: Model accuracy in providing correct outputs.Measurement: Accuracy, F1-score, or equivalent metrics on benchmark datasets.
-
R– Robustness:Definition: Model stability under perturbations (adversarial attacks or noisy data).Measurement: Percentage variation in performance under specific attacks (e.g., FGSM, PGD).
-
U – Autonomy:Definition: Operational independence from external resources.Measurement: Ratio of locally executed operations to delegated ones.
-
VV– Response Speed:Definition: Average model inference time.Measurement: Measured in milliseconds or seconds on standardized test sets.
-
O – Computational Efficiency:Definition: Ratio between achieved performance and computational cost (in terms of FLOPS or energy consumption).Measurement: Accuracy divided by cost index.
-
B– Bias:Definition: Systematic disparities in performance across different data subsets.Measurement: Fairness metrics and error analysis.
-
C – Model Complexity:Definition: Overall architectural complexity.Measurement: Total number of operations per inference or index derived from layers and connectivity.
-
E– Learning Error:Definition: Average training errors.Measurement: Normalized loss value (e.g., MSE, cross-entropy).
-
M– Memory Capacity:Definition: Amount of information the model can store and retrieve.Measurement: Hidden state size or equivalent parameters in memory-based models (e.g., LSTM, Transformers).
-
t – Evolutionary Time:Definition: Time elapsed since the last significant model update.Measurement: Expressed in months (or years).Integrated Aspects
-
Ethics (Ethics):Definition: Evaluates fairness, transparency, and privacy compliance.Measurement: Weighted sum of fairness, privacy, and transparency metrics.
-
Security (Security):Definition: Measures the model’s resilience to adversarial inputs and its ability to detect out-of-distribution data.Measurement: Combination of maintained accuracy under adversarial attacks and OOD performance.
1.5. 11Empirical Validation and Testing
-
Specialized vs. Generalist Models:Comparing models like AlphaFold (specialized) and GPT-4 (generalist) to verify the robustness of the BNAI score in different contexts.
-
Open-Source Models:Testing models like LLaMA to ensure framework applicability in resource-constrained environments.
1.6. Cloning Process for BNAI: Ethics, Safety, and Responsible Innovation
- Original AI (Digital DNA): The process starts by extracting the digital profile from the original AI, which includes its learned behaviors, knowledge, and capabilities.
-
Profile Extraction: From this, a comprehensive BNAI profile is created. This includes:
- Base Parameters: Parameters like Adaptability (A), Generational Evolution (Eg), Model Size (G), etc., which focus on the AI’s technical capabilities.
- Additional Parameters: Incorporating Bias (B), Model Complexity (C), Learning Error (E), Memory Capacity (M), and Evolutionary Time (t) to provide a more nuanced view of the model.
- Integrated Aspects: Ethics (Eeth) and Security (Ssafe) are introduced to ensure the AI operates within ethical and safe boundaries. Ethics ensures fairness, transparency, and privacy, while Security focuses on resilience against adversarial inputs and handling out-of-distribution data.
- BNAI Network Training: The training phase now includes not only the minimization of the difference between the original and clone profiles but also considers the ethical score and security score to ensure the clone adheres to responsible innovation principles.
- Generation: After training, the BNAI Clone, a digital profile that mirrors the original AI with added ethical and security considerations, is generated.
-
Validation & Iteration: The clone undergoes validation, where its performance, characteristics, ethical adherence, and security measures are evaluated. This stage involves:
- Comparing performance and characteristics to ensure fidelity.
- Ensuring the clone meets ethical standards through Eeth.
- Verifying security measures through Ssafe.
- Feedback Loop: If necessary, feedback from the validation leads to further refinement of the clone through iterative training, enhancing its ethical and security attributes along with its technical prowess.
- Result: The final product is a Cloned AI model that embodies the principles of BNAI: Ethics, Safety, and Responsible Innovation. This model not only replicates the functionality of the original AI but does so in a manner that is ethically sound, secure, and responsibly innovative.
- Positive capabilities and performance parameters.
- Benefits of transfer learning and meta-learning.
- Regularization and forgetting decay mechanisms.
- Factors of ethics, security, and explainability.
2. Background
3. Methodology
- BNAI (Bulla Neural Artificial Intelligence): a composite metric encoding the “digital DNA” of an AI model, integrating capabilities, robustness, complexity, and ethical aspects.
- 19MIND-UNITY: a self-generating decision module that supports self-updating and autonomous task generation, making the AI system autonomous, ethical, and resiliently evolutionary.
3.1. Epistemological Crisis of Traditional Models
3.1.1. The Fragmentation Paradox
with our continuous operator:
where represents the tokenized units, and is the continuous transformation function. This difference yields significantly divergent learning trajectories.3.1.2. Input-Output Dualism



3.1.3. Static Identity Crisis


3.2. Problems to Address
- Overcoming the reliance on tokenized representations by enabling continuous processing.
- Mitigating catastrophic forgetting in continual learning models.
- Increasing robustness against adversarial attacks.
- Intrinsically integrating ethical and security criteria.
3.3. BNAI (Original and Ethical)
3.3.1. Operational Definition of Parameters
- Adaptability (A)

- Generational Evolution (Eg)

- Magnitude (G)

- Entropy (H)

- Interconnection (I)

- Robustness (R)

- Ethics( Eeth)

3.3.2. Cloning Process: Creation of a Faithful Digital Clone
- Extraction: Measurement and Calculation of BNAI Digital DNA
- To calculate Adaptability (A), we measure the model’s accuracy on both training datasets and out-of-distribution datasets, using the defined formula.
- Robustness (R) requires assessing the model’s performance when faced with adversarial attacks.
- Ethics (Eeth) can be estimated through fairness metrics and transparency analysis. In summary, this phase includes a series of tests and measurements designed to capture the different aspects of the original model’s identity, converting them into a numerical vector of BNAI parameters.
- 2.
- 2. Normalization: Standardization of Digital DNA
- Different Numerical Scales: The BNAI parameters can vary across very different numerical scales. For example, Size (G) is a logarithm of the number of parameters, while Adaptability (A) is a value between 0 and 1. To make these parameters comparable and effectively usable by the BNAI HyperNetwork, they need to be brought onto a common scale.
-
Training Stability: The BNAI HyperNetwork, which will be used in the next phase, is a neural network. Neural networks often perform better and train more stably when input data is normalized.To achieve this goal, mathematical normalization transformations are applied:
- Logarithmic Transformations: Useful for compressing very wide scales.
- Sigmoidal Transformations: Such as logistic sigmoid or hyperbolic tangent, to map values into a specific range (e.g., between 0 and 1 or between -1 and 1).
- 3.
- Objective Function: Training Driven by Digital DNA
- The target BNAI parameter vector (extracted and normalized from the original model).
- The calculated BNAI parameter vector for the clone model generated by the BNAI HyperNetwork (after subjecting it to the same measurements as in the extraction phase).
- 4.
- Replica: Generation of a Faithful Clone
- The normalized BNAI parameter vector (the “digital DNA” of the original model) is provided as input to the BNAI HyperNetwork.
- The BNAI produces output weights and biases that define the architecture of the clone model.
- These weights and biases are used to initialize a new AI model, which constitutes the digital clone.
3.4. BNAI Non-Token Neural Network
3.4.1. Motivation
- Fragmentation via Tokenization: Tokenization fragments the informational flow, disrupting semantic continuity.
- Input-Output Dualism: Rigid separation increases latency and limits decision-making fluidity.
- Static Identity: The absence of persistent markers leaves models vulnerable to catastrophic forgetting and ethical drift.These problems inspired the development of an architecture that processes data continuously and integrates its operational history into a digital DNA.
3.4.2. Network Structure
- Input Layer: Receives continuous data (e.g., bytes, audio signals, raw images).
- Continuous Embedding:

-
Specific Parameters: Computes metrics such as
B. BNAI Identity ModuleGoal: Create a digital DNA that defines the model’s identity.
- Feature Extraction: Extracts identity parameters (e.g.,A,Eg, G ,I ,L , etc...) and gathers them into cnt.
- Identity Encoding: Transforms and normalizes features (e.g., with ln (.)or tanh) and weights them with coefficients wj.
- Integration of Transfer and Meta-Learning:
-
Evolutionary Decay:
C. MIND-UNITY Module: Autonomous and Self-Exciting Decision-MakingGoal: Abandon the traditional input-output scheme in favor of self-generated decisions.
- Autopoietic Dynamics:
- Integration: Decisions are aligned with the digital DNA.

3.5. The New Unified Formula
- Numerator: Integrates features from the NO-TOKEN module and the identity parameters, along with the benefits of transfer and meta-learning.
- Denominator: Penalizes bias, non-token complexity, and learning errors, balanced by
3.6. Benefits and Applications
-
Breaking the Token Limitation:NO-TOKEN removes informational fragmentation, enabling smooth, continuous processing.
-
AI with a Digital Soul:The BNAI metric encodes the model’s digital DNA, enabling precise replication and controlled evolution.
-
The Demise of Model Drift:Perfect replication avoids degradation caused by fine-tuning, preserving ethical and decision-making integrity.
-
Open-Source Without Anarchy:The permissioned model supports responsible collaboration while safeguarding innovation.
-
Beyond Text – A New AI Intelligence Model:NO-TOKEN revolutionizes learning in all modalities, promoting a more intuitive, human-like intelligence.
-
A New Standard for AI Consciousness:With AI Digital DNA, models gain a true identity, evolving autonomously, creatively, and consciously.
- Continuity of Thought: Think fluidly, without fragmentation.
- Context Awareness: Deeply understand meaning and context.
- Autonomous Adaptation: Evolve dynamically while maintaining identity.
- Creative Synthesis: Generate original ideas and innovative solutions.
- Self-Referencing & Identity: Retain self-awareness and a consistent identity over time.
4. Preliminary Validation
4.1. Objective
4.2. Simplified Experiment
- Baseline: Tokenized embedding.
- NO-TOKEN: Continuous embedding.
4.3. Comparative Table:
| Model | Accuracy (%) | Latency (ms) | Representation | Decision Process | Identity |
| Baseline | 88 | 120 | Tokenized | Input-Output | Static |
| NO-TOKEN | 89 | 95 | Continuous | Self-Generating | Dynamic |
4.4. Preliminary Discussion:


5. Evaluation
5.1. Experimental Setup
- Initial (SST-2): Conducted on a modern GPU (NVIDIA RTX 3090, per typical setups).
- Second (CPU-only): Performed exclusively on a 16-core CPU to evaluate scalability without GPU support.
- Initial (SST-2): Single evaluation of inference performance (no epoch data provided).
- Second (CPU-only): 100 epochs of training to observe long-term convergence.
- Initial (SST-2): Stanford Sentiment Treebank (Socher et al., 2013), with 67,349 training samples and 872 test samples, processed as raw text for NO-TOKEN.
- Second (CPU-only): The same dataset, supplemented by synthetic BNAI data (synthetic_bnai_data.json) generated via src/data/synthetic_data.py.
- Initial (SST-2): Accuracy, latency, and loss metrics recorded during training.
- Second (CPU-only): Accuracy, latency, training loss, and validation loss recorded from epoch 50 through epoch 100.
- The NO-TOKEN and BNAI models, implemented in /src/model, use a continuous embedding approach, eliminating tokenization. The tokenized baseline employs a standard Transformer pipeline. Training for the second experiment was executed via src/main.py, with configurations in config.yaml under /src/utils.
5.2. Overview of Models
comprising a CNN (3 layers, kernel sizes 3, 5, 7), self-attention (4 heads, 512 dimensions), and wavelet transform (Daubechies wavelet, level 3).6. Results
| Experiment | Model/Method | Accuracy (%) | Latency (ms) | Training Loss | Validation Loss |
| Initial (SST-2) | Tokenized Baseline (BERT-base) | 88 | 120 | - | - |
| Initial (SST-2) | NO-TOKEN | 89 | 95 | 0.0208–0.0211 | 0.0241–0.0285 |
| Second (CPU-only, Epoch 50) | BNAI | 85 | 110 | 0.0211 | 0.0241–0.0285 |
| Second (CPU-only, Epoch 100) | BNAI | 89 | 95.5 | 0.0209 | 0.0251 |
- The initial SST-2 experiment in Preliminary Validation (Section 4.0) reported NO-TOKEN’s accuracy (89%), latency (95ms), and loss metrics (training: 0.0208–0.0211, validation: 0.0241–0.0285), with the tokenized baseline at 88% accuracy and 120ms latency.
- The second experiment on a 16-core CPU reported BNAI’s performance at epoch 50 (85% accuracy, 110ms latency, 0.0211 training loss, 0.0241–0.0285 validation loss) and epoch 100 (89% accuracy, 95.5ms latency, 0.0209 training loss, 0.0251 validation loss).
6.1. Comparative Analysis
-
Capability Demonstration:
- ○
- The initial NO-TOKEN model demonstrates strong capability with 89% accuracy, 95ms latency, a training loss of 0.0208–0.0211, and a validation loss of 0.0241–0.0285 on SST-2, reflecting stable convergence and generalization on a GPU. The second experiment showcases the BNAI model’s capability on a 16-core CPU, improving from 85% accuracy and 110ms latency at epoch 50 to 89% accuracy and 95.5ms latency at epoch 100, with a training loss of 0.0209 and a validation loss of 0.0251, indicating robust performance without GPU support.
-
Comparison with Tokenized Baseline:
- ○
-
The tokenized baseline achieved 88% accuracy and 120ms latency on SST-2. NO-TOKEN improved this to 89% accuracy (1% gain) and 95ms latency (21% reduction:(120−95)/120×100=20.83% ~ 21%.The BNAI model in the second experiment matches NO-TOKEN’s 89% accuracy by epoch 100, with a slightly higher latency of 95.5ms (still a 20.4% reduction over the baseline:(120−95.5)/120×100=20.4166 ~ 20.4BNAI’s loss metrics (training: 0.0209, validation: 0.0251) are consistent with NO-TOKEN’s (training: 0.0208–0.0211, validation: 0.0241–0.0285), suggesting comparable optimization, but achieved on a CPU, highlighting greater efficiency in resource-constrained settings.
-
Context with Previous Methods:
- ○
- Previous token-free methods, such as ByT5 (Clark et al., 2022), have shown competitive performance on NLP tasks, but we lack direct comparisons in our current setup. BNAI’s 89% accuracy and 95.5ms latency on a 16-core CPU suggest a potential efficiency advantage over GPU-reliant methods, pending formal testing against ByT5 or Wavelet-Based Transformers (Zhang et al., 2023).
7. Discussion
8. Future Works
- Task-Specific Performance: Test the BNAI model on additional benchmarks (e.g., GLUE, ImageNet) on a 16-core CPU, comparing accuracy and latency with the tokenized baseline and NO-TOKEN.
- Comparison with Baselines and Previous Methods: Implement and evaluate against BERT and ByT5, reporting accuracy, latency, and loss metrics.
- BNAI Cloning and MIND-UNITY: Assess Δ_BNAI and MIND-UNITY on Split-CIFAR-100, comparing with Deep Generative Replay.
- Ethics and Security: Evaluate E_eth and S_safe, benchmarking against AI Fairness 360 and adversarially trained models.
- These will leverage /src/data, /src/model, /src/utils, /src/experiments, and src/experiments/run_experiment.py.
9. 22Conclusions: Towards an Anthropology of the Artificial
| 1 | Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and Harnessing Adversarial Examples. https://arxiv.org/abs/1412.6572
|
| 2 | |
| 3 |
https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf Shannon, C. E. (1948). A Mathematical Theory of Communication. |
| 4 |
arXiv: ByT5: Towards a Token-Free Future
Zenodo: Beyond Tokenization: Wavelet-Based Embeddings
arXiv: CNN-Attention Hybrid Embeddings
arXiv: Digital DNA for Neural Networks
Nature: Persistent Identity Markers in ML
arXiv: HyperNetwork Approaches
arXiv: Autopoietic Decision-Making Modules
ScienceDirect: Self-Exciting Neural Architectures
arXiv: Intrinsic Ethical Parameterization
Zenodo: AI Security via Identity Encoding
arXiv: Neuromorphic Continual Learning
IEEE: Meta-Learning for Stability
arXiv: Multidimensional Model Assessment
Frontiers: Composite AI Metrics
Zenodo: Continual Learning Benchmark
arXiv: Raw Byte Processing in LLMs
bioRxiv: DNA-Inspired ML
Springer: Cellular Automata for AI
arXiv: Neural Operators for Signals
ICLR: Raw Signal Processing
GitHub: Continuous Embeddings
|
| 5 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
| 6 |
Image 1: Description of the Entire Cloning Process through BNAI
The diagram illustrates the comprehensive process of AI cloning using the BNAI methodology:
Original AI (Digital DNA): The process starts with the original AI, characterized by its unique digital DNA, which encapsulates all its learned behaviors, knowledge, and capabilities.
Extraction: From the original AI, a BNAI profile is extracted. This profile is composed of various parameters including:
A (Adaptability): Measures the AI's ability to modify its behavior in response to changes in the domain or degraded conditions.
Eg (Generational Evolution): Tracks the average percentage increase in performance between model versions.
G (Model Size): Represents the computational capacity by the total number of parameters.
H (Learning Entropy): Quantifies the information gained during training.
I (AI Interconnection): Evaluates the model's ability to integrate knowledge from other models, often through transfer learning.
L (Current Learning Level): Indicates the convergence state of the model.
P (Computational Power): Assesses the resources utilized during training and inference.
Q (Response Precision): Measures the model's accuracy in providing correct outputs.
R (Robustness): Evaluates stability under adversarial conditions or data noise.
U (Autonomy): Gauges operational independence from external resources.
V (Response Speed): Measures the average inference time of the model.
T (Tensor Dimensionality): Reflects architectural complexity in terms of layer count.
O (Computational Efficiency): The ratio of performance to computational cost.
BNAI Network Training: The extracted BNAI profile is then used to train the BNAI network. The training focuses on minimizing the difference between the original AI's profile and the clone's profile, aiming to replicate the original as closely as possible.
Generation: Once the training phase reduces the difference to near zero, the BNAI Clone, which is a cloned digital profile, is generated.
Validation: The generated clone undergoes validation where its performance and characteristics are compared with those of the original AI.
Profile Comparison: This step involves a detailed comparison of the clone's profile against the original, looking at aspects like performance metrics and inherent characteristics to ensure fidelity.
Iterative Refinement: Based on the validation results, an iterative process of refinement might be initiated. Feedback from this comparison is used to further train and refine the clone, ensuring it meets or exceeds the original's capabilities.
Cloned AI (BNAI Cloned Model): The final outcome is a cloned AI model that mirrors the original AI in terms of functionality, performance, and characteristics. This cloned model leverages the BNAI methodology to achieve a high degree of similarity and effectiveness.
This diagram encapsulates the entire lifecycle from the original AI to the creation of a faithful clone, highlighting the meticulous process of extraction, training, generation, validation, and refinement to ensure the clone's accuracy and utility.
|
| 7 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
| 8 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
| 9 | Zenodo: BigScience Workshop (BLOOM Model), Zenodo: AI Fairness 360 , Zenodo: Continual Learning Benchmark |
| 10 | arXiv: A Survey on Bias and Fairness in Machine Learning Mehrabi et al. (2021) , arXiv: Explainable AI for Cybersecurity Apruzzese et al. (2022) , arXiv: The Mythos of Model Interpretability Lipton (2016) Critica ai modelli "black-box", rilevante per XAI. |
| 11 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
| 12 |
Description of Image 2: BNAI Cloning Process
Image 2 illustrates the flow of the AI cloning process through the BNAI methodology, highlighting the main stages and the critical validation decision:
Original AI (Digital DNA): The process begins with the original AI, whose unique digital profile serves as the basis for cloning.
Profile Extraction: From this original AI, a BNAI Profile is extracted, capturing the essential characteristics and capabilities of the AI.
BNAI Network Training: The BNAI Profile is used to train the BNAI network. This training aims to minimize the difference between the original AI's profile and that of the clone, ensuring the clone is a faithful representation.
Generation of BNAI Clone: Once training is complete, the BNAI Clone is generated, which is a digital profile reflecting the capabilities of the original AI.
Validation: The generated clone enters the validation phase where its performance, characteristics, and compliance with BNAI criteria (including ethics and security) are assessed.
Validation Decision: At this point, the process reaches a decision point:
- If Validation is Successful (YES): If the clone passes validation, it proceeds to become the final cloned AI model.
- If Validation is Unsuccessful (NO): If the clone does not pass validation, the process loops back to the BNAI Network Training for further iterations and improvements.
Cloned AI (BNAI Cloned Model): If validation is positive, the final outcome is the
Cloned AI utilizing the BNAI cloned model, ensuring that the clone not only replicates the functionality of the original AI but also adheres to ethical, security, and responsible innovation standards.
This visual representation emphasizes the importance of validation in the cloning process, ensuring that only models that meet rigorous performance and ethical compliance criteria are considered as final clones.
|
| 13 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
| 14 |
arXiv: HyperNetworks Ha et al. (2016) , arXiv: Continual Learning with Deep Generative Replay
Shin et al. (2017) , arXiv: Model Zoos: A Dataset of Diverse Populations of Neural Network Models
|
| 15 | arXiv: Neural Machine Translation of Rare Words with Subword Units- Sennrich et al. (2016) |
| 16 | arXiv: ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models Clark et al. (2022) , arXiv: Neural Operators for Continuous Data Kovachki et al. (2023) , arXiv: Wavelet-Based Transformers for Time Series Zhang et al. (2023) |
| 17 | arXiv: Pretrained Transformers as Universal Computation Engines Lu et al. (2021) |
| 18 |
arXiv: The Tokenization Problem in Generative AI
Asprovska & Hunter (2024)
|
| 19 |
arXiv: Autopoiesis in Artificial Intelligence
Maturana & Varela (1972, citato in lavori recenti),arXiv: Curiosity-Driven Reinforcement Learning Pathak et al. (2017).
|
| 20 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence Code: https://github.com/fra150/BNAI.git
|
| 21 |
Image 3 – Flowchart of the BNAI non-token Neural Network Framework):
The diagram illustrates the transition from traditional AI models, based on tokenized data and limited by static identity and input-output separation, to the BNAI non-token Neural Network framework. The latter integrates three synergistic modules: NO-TOKEN (continuous processing), BNAI (dynamic digital identity), and MIND-UNITY (autonomous decision-making), overcoming the shortcomings of previous models. The arrows show how the modules collaborate to preserve informational context, encode a digital DNA, and ensure continuous learning, creating evolutive, ethical, and resilient AI systems.
|
| 22 | Bulla, F., & Ewelu, S. (2025). BNAI, NO-TOKEN, and MIND-UNITY: Pillars of a Systemic Revolution in Artificial Intelligence. Zenodo. https://doi.org/10.5281/zenodo.14894878
|
References
- Bellamy, R. K., et al. (2018). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. arXiv preprint arXiv:1810.01943.
- Bengio, Y., Courville, A., & Vincent, P. (2013). Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1798-1828. [Removed, replaced with Bostrom & Durrett, 2020].
- Bostrom, K., & Durrett, G. (2020). Byte Pair Encoding is Suboptimal for Language Model Pretraining. Findings of the Association for Computational Linguistics: EMNLP 2020, 4617-4624.
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Brown, T. B., et al. (2020). Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165.
- Dai, Z., et al. (2019). Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv preprint arXiv:1901.02860.
- Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT 2019, 4171-4186.
- Hospedales, T., et al. (2021). Meta-Learning in Neural Networks: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9), 3044-3064.
- Kirkpatrick, J., et al. (2017). Overcoming Catastrophic Forgetting in Neural Networks. Proceedings of the National Academy of Sciences, 114(13), 3521-3526.
- Sennrich, R., et al. (2016). Neural Machine Translation of Rare Words with Subword Units. Proceedings of ACL 2016, 1715-1725.
- Socher, R., et al. (2013). Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. Proceedings of EMNLP 2013, 1631-1642.
- Vaswani, A., et al. (2017). Attention is All You Need. Advances in Neural Information Processing Systems (NeurIPS), 5998-6008.
- Xue, L., et al. (2021). ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models. arXiv preprint arXiv:2105.13626.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).



