Preprint
Article

This version is not peer-reviewed.

MetaStructure and Iterated MetaStructure: MetaCognition, MetaLearning, MetaAnalysis, MetaScience, MetaPhilosophy, and More

Submitted: 25 August 2025

Posted: 27 August 2025


Abstract
A MetaStructure is a higher-level framework that treats entire collections of structures as single objects, equipped with natural operations that preserve isomorphisms across different domains. The term “Structure” here refers broadly to mathematical systems as well as real-world models. An Iterated MetaStructure generalizes this idea recursively, generating successive layers in which structures of structures form deeper hierarchical meta-levels. In this paper, we investigate whether well-known meta-concepts such as MetaCognition, Meta-Learning, MetaAnalysis, MetaScience, and Metaphilosophy can be extended by means of Iterated MetaStructures. While some of these meta-concepts are primarily conceptual in nature and lack precise mathematical definitions, we deliberately attempt to formalize them as rigorously as possible and examine whether such formalizations can be systematically extended.

1. Preliminaries

This section presents the fundamental concepts and definitions that underpin the discussions in this paper. Throughout this paper, all structures and sets are assumed to be finite.

1.1. Classical Structure

In this paper, the term Structure refers broadly to a mathematical system, not restricted to a single area, but encompassing domains such as Set Theory, Logic, Probability, Statistics, Algebra, and Geometry.
Definition 1.1 
(Classical Structure). (cf.[1,2]) A Classical Structure  C is a mathematical object arising from a traditional field—for example Set Theory, Logic, Probability, Statistics, Algebra, Geometry, Graph Theory, Automata Theory, or Game Theory. Formally, it may be represented as a pair
C = ⟨H, {#^(m)}_{m∈I}⟩,
where:
  • H is a nonempty set, often called the carrier or universe.
  • For each m ∈ I ⊆ ℤ_{>0}, there exists an m-ary operation
    #^(m) : H^m → H,
    subject to appropriate axioms (such as associativity, commutativity, or identity laws), which vary according to the chosen type of structure.
The collection {#^(m) : m ∈ I} determines the type of C. Representative examples include:
  • A Set (S, ∈), consisting solely of a carrier with distinguished elements or relations, but without operations [3,4].
  • A Logic structure (L, ∧, ∨, ¬), where ∧, ∨ are binary connectives and ¬ is a unary connective, satisfying logical axioms [5].
  • A Probability model (Ω, 𝓕, P), where P : 𝓕 → [0,1] is a probability measure on a sigma-algebra 𝓕 ⊆ 𝒫(Ω) [6,7].
  • A Statistical model (X, 𝒜, θ), where θ maps data X into parameters of interest [8,9].
  • Algebraic structures such as:
    • A Group (G, ∗), with ∗ : G × G → G satisfying associativity, identity, and inverses [10,11].
    • A Ring (R, +, ×), with two binary operations fulfilling the ring axioms [12,13].
    • A Vector Space (V, +, ·) over a field F, with scalar multiplication · : F × V → V [14,15,16].
  • A Geometric structure (X, dist), where dist : X × X → ℝ satisfies the axioms of a metric.
  • A Graph (V, E), where E ⊆ {{u, v} : u, v ∈ V} for undirected graphs, or E ⊆ V × V for directed graphs, with adjacency and incidence relations [17,18,19,20].
  • An Automaton (Q, Σ, δ, q_0, F), where Q is a set of states, Σ an input alphabet, δ : Q × Σ → Q the transition function, q_0 ∈ Q the start state, and F ⊆ Q the accepting states [21,22].
  • A Game (N, {A_i}, {u_i}), where N is the set of players, A_i each player’s action set, and u_i : ∏_{j∈N} A_j → ℝ the payoff function for player i [23,24].
Related concepts include the HyperStructure [25,26,27,28] and the SuperHyperStructure [29,30,31,32], which have also been extensively investigated in recent studies.
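As a minimal computational illustration of Definition 1.1 (our own encoding, not part of the formal definition), a finite classical structure can be represented as a carrier set together with its operations; the sketch below encodes the cyclic group ℤ_4 and mechanically checks the group axioms over the finite carrier.

```python
from itertools import product

# A classical structure as a pair (carrier, operations):
# here the cyclic group Z_4 = ({0,1,2,3}, + mod 4).
H = {0, 1, 2, 3}
op = lambda a, b: (a + b) % 4  # the single binary operation #^(2)

# Check the group axioms by exhaustive search over the finite carrier.
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(H, repeat=3))
identity = next(e for e in H if all(op(e, a) == a == op(a, e) for a in H))
has_inverses = all(any(op(a, b) == identity for b in H) for a in H)
```

Because all structures in this paper are assumed finite, such exhaustive axiom checks always terminate.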

1.2. MetaStructure (Structure of a Structure)

Fix once and for all a single–sorted, finitary signature
Σ = ⟨Func, Rel, ar_Func, ar_Rel⟩,
where Func (resp. Rel ) is a set of function (resp. relation) symbols and ar records their arities. A (single–sorted) Σ–structure is a tuple
C = ⟨H, (f^C)_{f∈Func}, (R^C)_{R∈Rel}⟩,
consisting of a nonempty carrier H, together with interpretations f^C : H^m → H for each f ∈ Func of arity m, and relations R^C ⊆ H^r for each R ∈ Rel of arity r. Let Str_Σ denote the class of all such structures.
Definition 1.2 
(MetaStructure over a fixed signature). (cf. [33]) With Σ as above, a MetaStructure (a “structure of structures”) is a pair
M = ⟨U, (Φ_ℓ)_{ℓ∈Λ}⟩,
where:
  • U ⊆ Str_Σ is a nonempty collection of Σ-structures (the level-0 objects);
  • for each label ℓ ∈ Λ with meta-arity k_ℓ ∈ ℕ, the meta-operation
    Φ_ℓ : U^{k_ℓ} → U
    is described by uniform constructors acting on carriers and symbol interpretations:
    Γ_ℓ(C_1, …, C_{k_ℓ}) = H′ (functorially built carrier);  ∀ f ∈ Func: f^{Φ_ℓ(C_1, …, C_{k_ℓ})} = Λ_f(f^{C_1}, …, f^{C_{k_ℓ}});  ∀ R ∈ Rel: R^{Φ_ℓ(C_1, …, C_{k_ℓ})} = Ξ_R(R^{C_1}, …, R^{C_{k_ℓ}}),
    where the recipes Λ_f and Ξ_R depend only on f, R, and ℓ (not on the particular representatives) and produce the output interpretations over H′.
Each Φ_ℓ is required to be isomorphism-invariant (natural): if α_i : C_i → D_i are isomorphisms for 1 ≤ i ≤ k_ℓ, then there is an induced isomorphism
Φ_ℓ(α_1, …, α_{k_ℓ}) : Φ_ℓ(C_1, …, C_{k_ℓ}) → Φ_ℓ(D_1, …, D_{k_ℓ}),
compatible with all function and relation symbols of Σ.

1.3. Iterated MetaStructure (Structure of Structure of … of Structure)

An Iterated MetaStructure is obtained by repeatedly applying the MetaStructure construction, thereby forming successive levels where “structures of structures’’ build a hierarchical tower (cf. [33,34,35,36]).
Definition 1.3 
(Iterated MetaStructure of depth t). (cf. [33]) For t ∈ ℕ, an Iterated MetaStructure of depth t over Σ is a MetaStructure M^(t) obtained by t iterations of a lifting procedure. When s < t, we lift a height-s MetaStructure M^(s) = (U^(s), {∘_i}, {S_j}) to height t by
ι_{s→t} : U^(s) → U^(t) := U_Σ^{t−s}(U^(s)),
and, for each meta-operation ∘_i : (E_Σ^{m_i})^{k_i} → P_{n_i}(E_Σ^{n_i}), define its lift by
∘_i↑ : (E_Σ^{m_i+t−s})^{k_i} → P_{n_i}(E_Σ^{n_i+t−s}),  ∘_i↑(U_Σ^{t−s}(x_1), …, U_Σ^{t−s}(x_{k_i})) := U_Σ^{t−s}(∘_i(x_1, …, x_{k_i})),
and analogously for relations: S_j↑ := (U_Σ^{t−s})^{×}(S_j).

2. Main Results: Some Iterated MetaStructures

This section presents the main results of the paper.

2.1. Iterated-Metacognition (Cognition of ... of Cognitions)

Metacognition is cognition about cognition: monitoring, evaluating, and regulating one’s thinking, strategies, confidence, and control during problem solving and reflection (cf. [37,38,39,40,41]). Iterated-Metacognition recursively applies metacognition to itself, forming hierarchical layers that continually optimize monitoring, control, calibration, and strategy selection across tasks.
Definition 2.1 
(Metacognition). Let a cognitive learner produce a hypothesis h ∈ H from data D ∈ 𝒟 via A : 𝒟 → H. A metacognitive policy is a measurable map
M : H × 𝒟 → H × [0,1],  (h, D) ↦ (h⁺, c),
that (i) monitors by outputting a calibrated confidence c and (ii) controls by updating h to h⁺. Given a data-label distribution P on X × Y and a loss ℓ, metacognition seeks
M⋆ ∈ argmin_M E_{D∼𝒟} E_{(x,y)∼P} [ ℓ(h⁺(x), y) + λ (c − 1{h⁺(x)=y})² ],
with λ ≥ 0 penalizing miscalibration.
Example 2.2. (Metacognition (everyday study)). A student preparing for a calculus exam actively regulates their own thinking:
(i)
Monitor: While reading, they notice: “I follow worked examples but cannot derive the theorem unaided.”
(ii)
Evaluate: They judge understanding as recognition-only and overconfident on integrals.
(iii)
Control: They switch to retrieval practice (closed-book problems), set spaced reviews, and keep an error log to target weak steps.
This is first-order metacognition: monitoring, evaluating, and controlling one’s ongoing cognition.
Definition 2.3. (Iterated–Metacognition (depth t)). Let a cognitive learner A^(0) : 𝒟 → H map data D ∈ 𝒟 to a hypothesis h ∈ H. A level-1 metacognitive policy (cf. Definition 2.1) is a measurable map
M^(1) : H × 𝒟 → H × [0,1],  (h, D) ↦ (h⁺, c),
where h⁺ is a controlled update of h and c is a calibrated confidence. Fix a base loss ℓ : Y × Y → [0,∞) and constants λ, μ ≥ 0. For a data-label distribution P on X × Y, the level-1 metacognitive risk is
R^(1)(M^(1); P) := E_{D∼𝒟} E_{(x,y)∼P} [ ℓ(h⁺(x), y) (task loss) + λ (c − 1{h⁺(x)=y})² (calibration penalty) + μ Cost(h, h⁺) (control cost) ],   (1)
with a measurable control cost Cost : H × H → [0,∞).
For t ≥ 2, define a level-(t−1) task as an evaluation protocol U_{t−1} = (Π_{t−2}, protocol) specifying how to score level-(t−1) policies via R^(t−1). Let 𝒯_{t−1} be the space of such tasks and Π_{t−1} a distribution on 𝒯_{t−1}. A level-t meta-experience is
E^(t) = {(U_{t−1}^(j), D^(j))}_{j=1}^{m},  U_{t−1}^(j) i.i.d. ∼ Π_{t−1},
where each D^(j) bundles the data required by U_{t−1}^(j). A level-t iterated metacognitive learner outputs a level-(t−1) policy:
M^(t) : ⋃_{m≥1} (𝒯_{t−1} × Data_{t−1})^m → 𝓜^(t−1),  E^(t) ↦ M_{E^(t)}^{(t−1)}.
The level-t risk is defined recursively by
R^(t)(M^(t); Π_{t−1}) := E_{E^(t)∼Π_{t−1}} E_{U_{t−1}∼Π_{t−1}} E_{D∼U_{t−1}} [ R^(t−1)(M^(t)(E^(t)); U_{t−1}) ],   (2)
with the base case R^(1) given by (1). We call any M^(t) minimizing (2) an Iterated–Metacognition (depth t) solution.
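The level-1 risk can be estimated empirically from a finite sample; the sketch below is a minimal illustration assuming a 0-1 task loss and, for simplicity, a constant control cost (both of these simplifications are ours, not the paper's — the definition allows arbitrary ℓ and Cost).

```python
def level1_risk(samples, lam=1.0, mu=0.0, cost=0.0):
    """Empirical level-1 metacognitive risk:
    mean of  task loss + lam*(c - 1{correct})^2 + mu*Cost,
    where each sample is (prediction_correct: bool, confidence: float)."""
    total = 0.0
    for correct, c in samples:
        task_loss = 0.0 if correct else 1.0            # 0-1 task loss
        calib = (c - (1.0 if correct else 0.0)) ** 2   # calibration penalty
        total += task_loss + lam * calib + mu * cost   # constant control cost
    return total / len(samples)

# With lam=1, mu=0: a correct prediction at confidence 0.8 incurs only the
# calibration penalty (0.8 - 1)^2 = 0.04; an error at confidence 0.2 incurs
# task loss 1 plus the symmetric penalty (0.2 - 0)^2 = 0.04.
```

This matches the per-sample arithmetic used later in the proof of Theorem 2.7.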
Example 2.4. (Iterated–Metacognition (everyday refinement of the strategy)). The same student reflects on their metacognitive method a week later:
(i)
Meta-monitor: For each problem, they recorded confidence c ∈ [0,1] and correctness 1{correct}. They now compute the average calibration error e̅ = (1/n) ∑_{k=1}^n |c_k − 1{correct_k}|.
(ii)
Meta-evaluate: They find overconfidence on word problems (e̅ is largest there) and that rereading yields poor retention versus self-testing.
(iii)
Meta-control: They revise the metacognitive policy: tighten the confidence threshold for moving on (require c ≥ 0.9 with two consecutive correct recalls), prioritize interleaved retrieval over rereading, and schedule next-day “calibration checks” to update thresholds if e̅ > 0.15.
This is iterated–metacognition: applying metacognition to one’s own metacognitive strategy and updating the policy itself.
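The meta-monitoring quantity in the example, the average calibration error e̅, is directly computable from a study log; the sketch below uses a hypothetical log and the example's threshold-update rule (retune when e̅ > 0.15).

```python
def avg_calibration_error(records):
    """e_bar = (1/n) * sum_k |c_k - 1{correct_k}| over logged problems.
    Each record is (confidence in [0,1], was the answer correct?)."""
    return sum(abs(c - (1.0 if ok else 0.0)) for c, ok in records) / len(records)

# Hypothetical study log: (confidence, correctness).
log = [(0.9, True), (0.8, False), (0.7, True), (0.95, False)]
e_bar = avg_calibration_error(log)

# Meta-control: tighten the move-on threshold when miscalibrated.
threshold = 0.9 if e_bar <= 0.15 else 0.95
```

Here e̅ = (0.1 + 0.8 + 0.3 + 0.95)/4 = 0.5375 > 0.15, so the policy tightens the threshold — exactly the level-2 update of the level-1 policy described above.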
Definition 2.5 
(A signature for metacognition by depth). Let Σ_MC^(0) have carrier H^(0) = H and function symbols
Apply^(0) : H × X → Y,  Loss^(0) : H × (X × Y) → ℝ_{≥0}.
For t ≥ 1, extend to Σ_MC^(t) with carrier H^(t) = Pol^(t−1) (the set of level-(t−1) metacognitive policies) and symbols
Train^(t) : Exp^(t) → Pol^(t−1),  Risk^(t)(M^(t−1), R), the latter encoding R = R^(t−1)(M^(t−1); ·).
Proposition 2.6. 
(Meta-operations at depth t). For Σ_MC^(t)-structures C_1^(t), …, C_k^(t) with carriers H_i^(t):
  • Tagged sum Φ_⊔^(t): set H_⊔^(t) := ⨆_{i=1}^k H_i^(t) and define Train_⊔^(t)(E^(t), i) := Train_{C_i^(t)}(E^(t)); interpret Risk^(t) componentwise on tagged inputs.
  • Weighted ensemble (shared protocol) Φ_ens^(t): if the evaluation protocol agrees, set H_ens^(t) := ∏_{i=1}^k H_i^(t) × Δ^{k−1} and
    Train_ens^(t)(E^(t)) = (Train_{C_1^(t)}(E^(t)), …, Train_{C_k^(t)}(E^(t)), ŵ(E^(t))),
    where ŵ ∈ Δ^{k−1} minimizes an empirical proxy of R^(t−1).
Theorem 2.7 
(Iterated–Metacognition generalizes Metacognition and is an Iterated MetaStructure). For every t 1 :
  • (Generalization) Fix a level-(t−1) task U_{t−1} ∈ 𝒯_{t−1} and take Π_{t−1} := δ_{U_{t−1}}. If M^(t) ignores E^(t) and always returns a fixed level-(t−1) policy B^(t−1), then
    R^(t)(M^(t); Π_{t−1}) = E_{D∼U_{t−1}} [ R^(t−1)(B^(t−1); U_{t−1}) ].
    In particular, when t = 1 and U_0 = (P, ℓ, λ, μ), Iterated–Metacognition reduces to the (single-level) Metacognition risk (1).
  • (Iterated MetaStructure) For each t, with U^(t) the class of Σ_MC^(t)-structures (Definition 2.5) and meta-operations {Φ_⊔^(t), Φ_ens^(t)} (Proposition 2.6),
    M^(t) := ⟨U^(t), {Φ_⊔^(t), Φ_ens^(t)}⟩
    is a MetaStructure over Σ_MC^(t) in the sense of Definition 1.2. Moreover, if s < t, the lifted pair M^(s)↑ (Definition 1.3) is again a MetaStructure on U^(t).
Proof. (a) Generalization. By (2) and Π_{t−1} = δ_{U_{t−1}},
R^(t)(M^(t); Π_{t−1}) = E_{E^(t)∼δ_{U_{t−1}}} E_{U_{t−1}∼δ_{U_{t−1}}} E_{D∼U_{t−1}} [ R^(t−1)(M^(t)(E^(t)); U_{t−1}) ] = E_{D∼U_{t−1}} [ R^(t−1)(M^(t)(E^(t)) ≡ B^(t−1); U_{t−1}) ] = E_{D∼U_{t−1}} [ R^(t−1)(B^(t−1); U_{t−1}) ].
When t = 1, U_0 = (P, ℓ, λ, μ) and R^(0) collapses to the task loss, so (1) is recovered. For concreteness, if λ = 1, μ = 0, and for some (x, y) the adapted leaf predicts correctly with c = 0.8, then the per-sample penalty equals (0.8 − 1)² = 0.04; if it errs with c = 0.2, the penalty (0.2 − 0)² = 0.04 is symmetric, illustrating calibrated control inside (1).
(b) Iterated MetaStructure. Fix t ≥ 1. Uniform carrier constructors. By Definition 2.5, each structure C^(t) has carrier H^(t) = Pol^(t−1). Proposition 2.6 specifies, for any finite family (C_i^(t))_{i=1}^k, the carriers H_⊔^(t) = ⨆_i H_i^(t) and H_ens^(t) = ∏_i H_i^(t) × Δ^{k−1}, which depend only on the inputs’ carriers, as required by Definition 1.2.
Uniform symbol interpretations. The recipes for Train^(t) and Risk^(t) under Φ_⊔^(t) and Φ_ens^(t) (tag selection and convex weighting of policies) are independent of representatives, hence define Λ_f and Ξ_R uniformly for all inputs (cf. Definition 1.2).
Isomorphism invariance (naturality). Let α_i : C_i^(t) → D_i^(t) be Σ_MC^(t)-isomorphisms, i.e. carrier bijections intertwining Train^(t) and preserving Risk^(t):
Risk_{C_i^(t)}(M, R) ⟺ Risk_{D_i^(t)}(α_i(M), R).
Define α_⊔^(t) := ⨆_i α_i and α_ens^(t) := ∏_i α_i × id_{Δ^{k−1}}. Then, for all E^(t),
Train_{D^(t)}(E^(t)) = α^(t)(Train_{C^(t)}(E^(t))),  and Risk^(t) is preserved by the same bijections.
Hence there are induced isomorphisms Φ_⊔^(t)(α_1, …, α_k) and Φ_ens^(t)(α_1, …, α_k), verifying naturality. Therefore M^(t) satisfies the axioms of a MetaStructure (Definition 1.2).
Lifted structures. If s < t, the lift ι_{s→t} (Definition 1.3) acts symbolwise, replacing each depth-s symbol by its depth-t analogue and composing meta-operations componentwise. Typing and naturality are preserved, so M^(s)↑ is a MetaStructure on U^(t).
Combining the three underlined parts yields (b).    □

2.2. Iterated-Meta–Learning (Learning of ... of Learnings)

Learning updates a model or behavior from data and feedback, minimizing loss and improving predictions, skills, or decisions over time. Meta–Learning trains across tasks to learn learning strategies, initializations, or update rules that enable rapid adaptation, few-shot generalization, robustness, and transferability (cf. [42,43]). Iterated-Meta–Learning composes multiple meta-levels, where meta-learners optimize other meta-learners, yielding recursive improvement, self-referential curricula, continual adaptation, and emergent automation capabilities.
Definition 2.8. (Base Learning (level 0)). A task is T = (X_T, Y_T, P_T, H_T, ℓ_T), where P_T is a distribution on X_T × Y_T and ℓ_T : Y_T × Y_T → [0,∞). A level-0 learner A^(0) maps a sample S ∈ (X_T × Y_T)* to a hypothesis h_S ∈ H_T. Its true risk on T is
L^(0)(A^(0); T) := E_{S∼P_T^n} E_{(x,y)∼P_T} [ ℓ_T(h_S(x), y) ].
Example 2.9. (Learning (level 0, everyday life)). A home cook learns to bake a single sourdough loaf.
(i)
Task T. Input X_T: flour type, hydration, proof time, oven temperature. Output Y_T: loaf quality score (crumb, rise, taste). Loss ℓ_T: deviation from the target score.
(ii)
Data S. They run a few attempts with different hydrations and proof times, logging outcomes.
(iii)
Hypothesis h_S. A concrete recipe (e.g., 72% hydration, 24 h cold proof, bake at 240 °C with steam).
(iv)
Learning. The cook updates h_S to reduce ℓ_T on subsequent bakes (adjusts hydration and proof until the target quality is met).
This is base learning A^(0) : S ↦ h_S for a single task T.
Definition 2.10. (Meta–Learning (level 1)). Let Π_0 be a distribution over level-0 tasks T. A level-1 learner A^(1) maps a meta-experience
E^(1) = {(T^(j), S^(j))}_{j=1}^{m},  T^(j) i.i.d. ∼ Π_0,  S^(j) ∼ P_{T^(j)}^{n_j},
to a base learner A_{E^(1)}^(0). Its meta-risk is
L^(1)(A^(1); Π_0) := E_{E^(1)∼Π_0} E_{T∼Π_0} E_{S∼P_T^n} [ L^(0)(A_{E^(1)}^(0); T) ].
Example 2.11 (Meta–Learning (level 1, everyday life)). A caterer must quickly learn new recipes across cuisines before events.
(i)
Task distribution Π_0. Each task T^(j) is a different dish (Thai curry, gluten-free cake, vegan stew), each with its own inputs/outputs and loss.
(ii)
Meta–experience E^(1). For many past dishes, the caterer has small practice sets S^(j) (pilot batches) and observed errors.
(iii)
Output (a learner). From E^(1), they learn a recipe-bootstrapping procedure A_{E^(1)}^(0):
  • start with a “base” template (mise en place checklist, temperature ladder),
  • run a two-point pilot (low/high spice or hydration),
  • update the working recipe by a one-step gradient: increase/decrease the limiting factor causing the largest error,
  • finalize after a calibration taste test.
(iv)
Adaptation to a new dish. Given a new task T ∼ Π_0, applying A_{E^(1)}^(0) yields a good recipe after very few trials (few-shot adaptation).
This is level-1 meta–learning A^(1) : E^(1) ↦ A_{E^(1)}^(0), minimizing L^(1) over Π_0.
Definition 2.12. (Iterated–Meta–Learning (level t ≥ 2)). For t ≥ 2, let a level-(t−1) task be a protocol U_{t−1} = (Π_{t−2}, eval) specifying how to evaluate level-(t−1) learners via L^(t−1). Let Π_{t−1} be a distribution on such tasks. A level-t meta-experience is
E^(t) = {(U_{t−1}^(j), D^(j))}_{j=1}^{m},  U_{t−1}^(j) i.i.d. ∼ Π_{t−1},
where each D^(j) bundles the lower-level datasets needed by U_{t−1}^(j). A level-t learner outputs a level-(t−1) learner,
A^(t) : ⋃_{m≥1} (𝒯_{t−1} × Data_{t−1})^m → 𝒜^(t−1),  E^(t) ↦ A_{E^(t)}^{(t−1)},
and its level-t risk is defined recursively by
L^(t)(A^(t); Π_{t−1}) := E_{E^(t)∼Π_{t−1}} E_{U_{t−1}∼Π_{t−1}} E_{D∼U_{t−1}} [ L^(t−1)(A^(t)(E^(t)); U_{t−1}) ],   (3)
with base L^(0) (Definition 2.8) and L^(1) (Definition 2.10).
Example 2.13 (Iterated–Meta–Learning (level 2, everyday life)). A cooking school optimizes how instructors meta–learn across cohorts.
(i)
Level-1 tasks U_1. Each task specifies (a) a cohort’s dish distribution Π_0 (e.g., pastry-heavy vs. savory-heavy), and (b) an evaluation protocol (time-to-target, waste, taste scores) defining L^(1).
(ii)
Meta–experience E^(2). Over semesters, the school logs pairs (U_1^(j), D^(j)) comparing several meta-learning procedures (e.g., “pilot-then-gradient”, “case-base retrieval”, “video-first imitation”).
(iii)
Output (a meta-learner selector). From E^(2), the school learns A_{E^(2)}^(1), which selects/tunes the best level-1 procedure A^(1) given the cohort profile U_1 (e.g., choose “case-base retrieval” for pastry cohorts; otherwise choose “pilot-then-gradient” and tighten temperature priors).
(iv)
Deployment. For a new cohort U_1, the level-2 learner A^(2) outputs the tailored level-1 meta-learner A_{E^(2)}^(1)(U_1), which then rapidly adapts recipes for that cohort’s dishes, lowering the level-2 risk L^(2).
This is iterated meta–learning: A^(2) learns to produce the best meta-learner for each evaluation context U_1.
Definition 2.14. 
(Signature by depth for IML). Let Σ_IML^(0) have carrier H^(0) = H and function/relation symbols
Pred^(0) : H × X → Y,  EmpRisk^(0) : (X × Y)^n × H → ℝ_{≥0},  Train^(0) : (X × Y)* → H,  Risk^(0)(h, r).
For t ≥ 1, set the carrier H^(t) = Alg^(t−1) (the set of level-(t−1) learners) and add
Train^(t) : Exp^(t) → Alg^(t−1),  Risk^(t)(A^(t−1), R), the latter encoding R = L^(t−1)(A^(t−1); ·).
Proposition 2.15 
(Meta–operations on Σ_IML^(t)-structures). For C_1^(t), …, C_k^(t) with carriers H_i^(t):
  • Tagged sum Φ_⊔^(t): set H_⊔^(t) := ⨆_{i=1}^k H_i^(t) and define Train_⊔^(t)(E^(t), i) := Train_{C_i^(t)}(E^(t)); interpret Risk^(t) componentwise on tagged inputs.
  • Weighted ensemble (shared protocol) Φ_ens^(t): if all inputs use the same evaluation protocol, set H_ens^(t) := ∏_{i=1}^k H_i^(t) × Δ^{k−1} and
    Train_ens^(t)(E^(t)) = (Train_{C_1^(t)}(E^(t)), …, Train_{C_k^(t)}(E^(t)), ŵ(E^(t))),
    where ŵ ∈ Δ^{k−1} minimizes an empirical proxy of L^(t−1).
Theorem 2.16. 
(IML generalizes Meta–Learning and forms an Iterated MetaStructure). For every t 1 :
  • (Generalization) Fix a level-(t−1) task U_{t−1} and let Π_{t−1} := δ_{U_{t−1}}. If A^(t) ignores E^(t) and always returns a fixed level-(t−1) learner B^(t−1), then
    L^(t)(A^(t); Π_{t−1}) = E_{D∼U_{t−1}} [ L^(t−1)(B^(t−1); U_{t−1}) ].   (4)
    In particular, with t = 1 and U_0 = T_0 (a single base task), IML reduces to Meta–Learning; fixing B^(0) further reduces to the level-0 risk in Definition 2.8.
  • (Iterated MetaStructure) Let U^(t) be the class of Σ_IML^(t)-structures and Φ^(t) ∈ {Φ_⊔^(t), Φ_ens^(t)} the meta-operations of Proposition 2.15. Then
    M^(t) := ⟨U^(t), {Φ_⊔^(t), Φ_ens^(t)}⟩
    is a MetaStructure over Σ_IML^(t) (uniform carrier constructors and symbol interpretations, isomorphism-invariant). Moreover, for s < t, the lift M^(s)↑ (in the sense of Iterated MetaStructure) is a MetaStructure on U^(t).
Proof. (a) Generalization. By the recursive definition (3) and Π_{t−1} = δ_{U_{t−1}},
L^(t)(A^(t); Π_{t−1}) = E_{E^(t)∼δ_{U_{t−1}}} E_{U_{t−1}∼δ_{U_{t−1}}} E_{D∼U_{t−1}} [ L^(t−1)(A^(t)(E^(t)); U_{t−1}) ] = E_{D∼U_{t−1}} [ L^(t−1)(B^(t−1); U_{t−1}) ],
because A^(t)(E^(t)) ≡ B^(t−1) by hypothesis. This proves (4).
Numerical sanity check (explicit). Take t = 1, Π_0 = ½ δ_{T_1} + ½ δ_{T_2}, and fix a base learner A^(0) with L^(0)(A^(0); T_1) = 0.30 and L^(0)(A^(0); T_2) = 0.50. If A^(1) always returns A^(0), then
L^(1)(A^(1); Π_0) = ½ · 0.30 + ½ · 0.50 = 0.40.
With Π_0 = δ_{T_1} (Dirac), L^(1)(A^(1); δ_{T_1}) = 0.30, which equals the level-0 risk on T_1.
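The sanity check above is just an expectation over a finite task mixture; a minimal sketch (the function name is ours):

```python
def level1_risk_constant_learner(task_risks, task_probs):
    """L^(1) when A^(1) ignores E^(1) and always returns the same base
    learner: the Pi_0-expectation of that learner's level-0 risks."""
    return sum(p * r for p, r in zip(task_probs, task_risks))

# Mixture Pi_0 = (1/2) delta_{T1} + (1/2) delta_{T2}:
mixed = level1_risk_constant_learner([0.30, 0.50], [0.5, 0.5])  # 0.40
# Dirac Pi_0 = delta_{T1}: collapses to the level-0 risk on T1.
dirac = level1_risk_constant_learner([0.30], [1.0])             # 0.30
```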
(b) Iterated MetaStructure. Uniform carriers. By Definition 2.14, each Σ_IML^(t)-structure C^(t) has carrier H^(t) = Alg^(t−1). The meta-operations Φ_⊔^(t) and Φ_ens^(t) construct H_⊔^(t) = ⨆_i H_i^(t) and H_ens^(t) = ∏_i H_i^(t) × Δ^{k−1}, which depend only on the input carriers (the constructor Γ in the MetaStructure axioms).
Uniform symbol interpretations. The interpretations of Train^(t) and Risk^(t) under Φ_⊔^(t) (tag routing) and Φ_ens^(t) (joint training with simplex weights ŵ) are given by the same recipes for any inputs, providing the required Λ_f and Ξ_R.
Isomorphism invariance (naturality). Let α_i : C_i^(t) → D_i^(t) be Σ_IML^(t)-isomorphisms: carrier bijections intertwining Train^(t) and preserving Risk^(t):
Risk_{C_i^(t)}(A^(t−1), R) ⟺ Risk_{D_i^(t)}(α_i(A^(t−1)), R).
Define α_⊔^(t) := ⨆_i α_i and α_ens^(t) := ∏_i α_i × id_{Δ^{k−1}}. Then, for all E^(t),
Train_{D^(t)}(E^(t)) = α^(t)(Train_{C^(t)}(E^(t))),  and Risk^(t) is preserved likewise,
hence Φ_⊔^(t)(α_1, …, α_k) and Φ_ens^(t)(α_1, …, α_k) are induced isomorphisms. Therefore M^(t) satisfies the MetaStructure axioms.
Lifted structures. For s < t, lifting acts symbolwise (replacing depth-s symbols by depth-t analogues and composing meta-operations), which preserves typing and naturality; hence M^(s)↑ is a MetaStructure on U^(t).    □

2.3. Iterated-Meta-analysis (Analysis of ... of Analysis)

Meta-analysis statistically combines results from multiple studies to estimate overall effects, assess heterogeneity, evaluate bias, and improve the precision and reliability of the evidence (cf. [44,45,46,47]). Iterated-Meta-analysis hierarchically aggregates meta-analyses across domains or time, modeling between-review dependencies, updating priors, and synthesizing evidence streams to guide decisions, policy, and practice.
Definition 2.17 
(Meta-analysis). Given K independent studies, each providing an estimate θ̂_i of a common effect θ with known/estimated variances v_i > 0, a (fixed-effect) meta-analytic estimator is the inverse-variance weighted mean
θ̂_FE = (∑_{i=1}^K w_i θ̂_i) / (∑_{i=1}^K w_i),  w_i = 1/v_i,  Var(θ̂_FE) = 1 / (∑_i w_i).
Allowing between-study heterogeneity τ² ≥ 0 (random effects) yields
θ̂_RE = (∑_{i=1}^K w_i* θ̂_i) / (∑_{i=1}^K w_i*),  w_i* = 1/(v_i + τ²),
where τ² is estimated (e.g., by the method of moments or REML). Meta-analysis thus defines an estimator Ψ({(θ̂_i, v_i)}_{i=1}^K) that aggregates evidence to minimize mean squared error under the chosen model.
Example 2.18. (Meta-analysis (everyday life): estimating real fuel savings of a hybrid car). A shopper wants a single best estimate of the MPG gain of a hybrid vs. a comparable non-hybrid. They collect three independent road tests (treated as “studies”), each reporting an estimated MPG difference θ̂_i and an uncertainty v_i (variance):
(θ̂_1, v_1) = (7.0, 4.0),  (θ̂_2, v_2) = (5.0, 1.0),  (θ̂_3, v_3) = (9.0, 9.0).
Using a fixed–effect meta–analysis (inverse–variance weights w_i = 1/v_i), the pooled estimate and variance are
θ̂_FE = (∑_i w_i θ̂_i)/(∑_i w_i) = (0.25·7 + 1·5 + (1/9)·9)/(0.25 + 1 + 1/9) = 7.75/1.3611 ≈ 5.694,  Var(θ̂_FE) = 1/(∑_i w_i) ≈ 0.7347.
Thus SE ≈ 0.857 and the 95% CI is 5.694 ± 1.96·0.857 ≈ [4.01, 7.37] MPG. The shopper concludes the hybrid yields about 6 MPG more (precision improved by pooling the three tests).
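The fixed-effect pooling in this example is a short computation; the sketch below reproduces the pooled estimate, variance, and 95% CI from the three road tests (function name ours).

```python
def fixed_effect_pool(estimates, variances):
    """Inverse-variance (fixed-effect) meta-analysis:
    returns (pooled estimate, pooled variance)."""
    weights = [1.0 / v for v in variances]
    theta = sum(w * t for w, t in zip(weights, estimates)) / sum(weights)
    return theta, 1.0 / sum(weights)

# The three road tests (estimate, variance): (7,4), (5,1), (9,9).
theta_fe, var_fe = fixed_effect_pool([7.0, 5.0, 9.0], [4.0, 1.0, 9.0])
se = var_fe ** 0.5
ci = (theta_fe - 1.96 * se, theta_fe + 1.96 * se)  # 95% CI
```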
Definition 2.19 (Iterated Meta–analysis (depth t)). At level 0, a study provides an estimate–variance pair (θ̂_i^(0), v_i^(0)). At level 1 (ordinary meta–analysis, cf. Definition 2.17), given pairs {(θ̂_i^(0), v_i^(0))}_{i=1}^K, the fixed–effect (FE) pooled pair is
θ̂^(1) = (∑_{i=1}^K w_i^(1) θ̂_i^(0)) / (∑_{i=1}^K w_i^(1)),  (v^(1))^{−1} = ∑_{i=1}^K w_i^(1),  w_i^(1) := 1/v_i^(0).
(With random effects, replace w_i^(1) by 1/(v_i^(0) + τ_1²) for a chosen τ_1² ≥ 0.)
For t ≥ 2, suppose we are given a finite family of child nodes indexed by j, each child providing a (level t−1) meta–analytic pair (θ̂_j^(t−1), v_j^(t−1)). The level-t pooled pair is defined recursively by
θ̂^(t) := (∑_j w_j^(t) θ̂_j^(t−1)) / (∑_j w_j^(t)),  (v^(t))^{−1} := ∑_j w_j^(t),  w_j^(t) := 1/(v_j^(t−1) + τ_t²),   (5)
with depth-t heterogeneity τ_t² ≥ 0 (set τ_t² = 0 for FE). We call any estimator produced by the recursion (5) an Iterated Meta–analytic estimator of depth t.
Remark 2.20 
(Two-stage FE equals one-stage FE). If every child level uses FE (τ_s² = 0 for all s), then the two-stage FE estimate obtained by first pooling within disjoint groups and then pooling across groups equals the single-stage FE pooling of all studies at once; see the algebra in the proof of Theorem 2.24.
Example 2.21 (Iterated Meta–analysis (everyday life): pooling regional meta–analyses). A national consumer group wants a single summary across regional syntheses. Each region has already meta–analyzed multiple local road tests (level 1), yielding a pooled effect and variance:
Region A: (θ̂_A^(1), v_A^(1)) = (6.0, 0.50),  Region B: (θ̂_B^(1), v_B^(1)) = (4.8, 0.80).
At level 2, they pool these meta–analytic pairs using fixed–effect weights W_A = 1/0.50 = 2 and W_B = 1/0.80 = 1.25:
θ̂^(2) = (2·6.0 + 1.25·4.8)/(2 + 1.25) = (12.0 + 6.0)/3.25 ≈ 5.538,  Var(θ̂^(2)) = 1/(2 + 1.25) ≈ 0.3077.
Hence SE ≈ 0.555 and the 95% CI is 5.538 ± 1.96·0.555 ≈ [4.45, 6.63] MPG. This iterated pooling coherently combines already-pooled regional evidence into one national estimate, mirroring Definition 2.19 with t = 2.
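One step of the recursion (5) is the same inverse-variance rule applied to already-pooled pairs; the sketch below (function name ours) reproduces the national level-2 estimate from the two regional pairs.

```python
def pool(pairs, tau2=0.0):
    """One level of the recursion (5): pool (estimate, variance) pairs
    with weights 1/(v + tau2); tau2 = 0 gives the fixed-effect case."""
    weights = [1.0 / (v + tau2) for _, v in pairs]
    theta = sum(w * t for w, (t, _) in zip(weights, pairs)) / sum(weights)
    return theta, 1.0 / sum(weights)

# Level-2 pooling of the regional (level-1) meta-analytic pairs:
national = pool([(6.0, 0.50), (4.8, 0.80)])
```

Because each level returns another (estimate, variance) pair, the same function can be applied again at depth 3, 4, …, which is exactly the iterated structure of Definition 2.19.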
Definition 2.22 
(Signature by depth for meta–analysis). Let Σ_MA^(0) have carrier H^(0) = {(θ̂, v) ∈ ℝ × (0, ∞)} and symbols
Est^(0) : StudyData → H^(0),  Var^(0) : H^(0) → (0, ∞).
For t ≥ 1, set the carrier H^(t) = H^(0) (each node is summarized by an estimate–variance pair) and add a combiner
Combine^(t) : (H^(t−1))^k × ℝ_{≥0} → H^(t),  ((θ̂_j^(t−1), v_j^(t−1))_{j=1}^k, τ_t²) ↦ (θ̂^(t), v^(t)),
interpreted by (5).
Proposition 2.23 
(Meta–operations and naturality). Let C_1^(t), …, C_k^(t) be Σ_MA^(t)-structures (nodes) with carriers H^(t) = H^(0). Define the following meta–operations on U^(t) ⊆ Str_{Σ_MA^(t)}:
  • Tagged sum Φ_⊔^(t): the carrier constructor is the Cartesian product of carriers; Combine^(t) is applied to the selected tag; variances and estimates pass through componentwise.
  • Weighted ensemble Φ_ens^(t): when all inputs share the same model for τ_t², set the carrier constructor to the product-with-simplex H_ens^(t) = (H^(t))^k × Δ^{k−1} and interpret
    Combine_ens^(t)((θ̂_j^(t−1), v_j^(t−1))_{j=1}^k, τ_t²) = ((∑_j w_j^(t) θ̂_j^(t−1))/(∑_j w_j^(t)), (∑_j w_j^(t))^{−1}),  w_j^(t) = 1/(v_j^(t−1) + τ_t²).
Both operations are isomorphism–invariant under any permutation σ of the input indices (relabeling studies), since the formulas for (θ̂^(t), v^(t)) are symmetric in the multiset of pairs {(θ̂_j^(t−1), v_j^(t−1))}.
Theorem 2.24 
(IMA generalizes MA and forms an Iterated MetaStructure). For every t 1 :
  • (Generalization) If t = 1 and τ_1² = 0, Definition 2.19 reduces to the fixed–effect meta–analytic estimator (Definition 2.17). More generally, with τ_s² ≡ 0 at all levels and any grouping of the base studies into a disjoint family, the two-stage FE estimator equals the one-stage FE estimator.
  • (Iterated MetaStructure) With U^(t) the class of Σ_MA^(t)-structures and meta–operations {Φ_⊔^(t), Φ_ens^(t)} from Proposition 2.23,
    M^(t) := ⟨U^(t), {Φ_⊔^(t), Φ_ens^(t)}⟩
    is a MetaStructure over Σ_MA^(t) (uniform carrier constructors and symbol interpretations, natural under isomorphisms). If s < t, the lift M^(s)↑ (in the sense of Iterated MetaStructure) is again a MetaStructure on U^(t).
Proof. (a) Generalization and explicit algebra. The t = 1 case with τ_1² = 0 is exactly the FE formula in Definition 2.17. For the two-stage FE equivalence, partition the K base studies into disjoint groups g = 1, …, G. Within each group g, let w_{gi} := 1/v_{gi} and W_g := ∑_{i∈g} w_{gi}. The group FE estimate and variance are
θ̂_g = (∑_{i∈g} w_{gi} θ̂_{gi}) / W_g,  Var(θ̂_g) = 1/W_g.
At the second stage, use the weights W_g (the inverse variances) to form
θ̂_two-stage = (∑_{g=1}^G W_g θ̂_g)/(∑_{g=1}^G W_g) = (∑_g W_g · ∑_{i∈g} w_{gi} θ̂_{gi} / W_g)/(∑_g W_g) = (∑_g ∑_{i∈g} w_{gi} θ̂_{gi})/(∑_g ∑_{i∈g} w_{gi}) = θ̂_one-stage.
Likewise, the variance satisfies Var(θ̂_two-stage) = 1/∑_g W_g = 1/∑_g ∑_{i∈g} w_{gi} = Var(θ̂_one-stage).
Numerical check. Consider three studies split into two groups:
(θ̂_11, v_11) = (1.00, 0.04),  (θ̂_12, v_12) = (1.20, 0.09),  (θ̂_21, v_21) = (0.80, 0.01).
Within group g = 1: w_11 = 1/0.04 = 25 and w_12 = 1/0.09 ≈ 11.111, so W_1 ≈ 36.111 and
θ̂_1 = (25·1.00 + 11.111·1.20)/36.111 = (25 + 13.333)/36.111 ≈ 1.06154,  Var(θ̂_1) = 1/W_1 ≈ 0.02769.
Group g = 2: W_2 = 1/0.01 = 100, θ̂_2 = 0.80. Two-stage FE:
θ̂_two-stage = (36.111·1.06154 + 100·0.80)/(36.111 + 100) = (38.333 + 80)/136.111 ≈ 0.869388,  Var = 1/136.111 ≈ 0.00734694.
One-stage FE over all three studies:
θ̂_one-stage = (25·1.00 + 11.111·1.20 + 100·0.80)/(25 + 11.111 + 100) = (25 + 13.333 + 80)/136.111 ≈ 0.869388,  Var ≈ 0.00734694,
which matches exactly.
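The two-stage/one-stage FE equivalence in this numerical check can be verified mechanically (function name ours):

```python
def fe_pool(pairs):
    """Fixed-effect pooling of (estimate, variance) pairs:
    returns another (estimate, variance) pair."""
    ws = [1.0 / v for _, v in pairs]
    theta = sum(w * t for w, (t, _) in zip(ws, pairs)) / sum(ws)
    return theta, 1.0 / sum(ws)

group1 = [(1.00, 0.04), (1.20, 0.09)]
group2 = [(0.80, 0.01)]

# Two-stage: pool within groups, then pool the group summaries.
two_stage = fe_pool([fe_pool(group1), fe_pool(group2)])
# One-stage: pool all three studies at once.
one_stage = fe_pool(group1 + group2)
```

Up to floating-point rounding, `two_stage` and `one_stage` coincide, as the algebra in part (a) predicts.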
(b) Iterated MetaStructure. Uniform carrier constructors. In Definition 2.22, all carriers are H^(0) (estimate–variance pairs). For any finite input family, the tagged product and the product-with-simplex (when present) depend only on carriers, supplying the Γ required by a MetaStructure.
Uniform symbol interpretations. The interpretation of Combine^(t) is given by the single recipe (5), independent of representatives; this provides the uniform Λ_f (functions) and Ξ_R (relations) in the MetaStructure axioms.
Isomorphism invariance (naturality). Any permutation σ of inputs induces a relabeling isomorphism (a bijection on carriers) under which the multiset {(θ̂_j^(t−1), v_j^(t−1))} is unchanged; since (5) is symmetric in this multiset, the output pair (θ̂^(t), v^(t)) is invariant, yielding induced isomorphisms Φ_⊔^(t)(σ) and Φ_ens^(t)(σ). Therefore M^(t) is a MetaStructure.
Lifted structures. If s < t, the lift M^(s)↑ replaces depth-s symbols by their depth-t counterparts and composes meta–operations componentwise; typing and naturality are preserved, hence M^(s)↑ is again a MetaStructure on U^(t).    □

2.4. Iterated Metadiscourse (Discourse about ... about Discourse)

Metadiscourse refers to language used to organize, evaluate, and guide interpretation of discourse, marking structure, stance, hedging, emphasis, or reader engagement beyond subject content [48,49,50,51,52]. Iterated Metadiscourse recursively applies discourse about discourse to itself, layering commentary on rhetorical devices, coherence, and stance, formalized through Iterated-MetaStructure representation.
Definition 2.25 
(Metadiscourse). Fix a two-sorted semantics with an object world W and an utterance world U. Let L be a language whose sentences have denotations in either Prop_W (propositions about W) or Prop_U (propositions about U). For a finite discourse D ⊆ Sent(L), write ⟦·⟧_W : Sent(L) → Prop_W and ⟦·⟧_U : Sent(L) → Prop_U. A sentence φ is metadiscursive (w.r.t. D) iff its truth depends only on rhetorical/structural or stance features of D, i.e.
$$\varphi\in\mathrm{MD}(D)\iff\exists F:\mathrm{Struct}(D)\to\{\top,\bot\}\ \text{such that}\ \llbracket\varphi\rrbracket_U=F(\mathrm{Struct}(D)),$$
where Struct ( D ) encodes text-internal features (ordering, sectioning, connectives, hedges/boosters, attitude markers, cue phrases).
Example 2.26 (Metadiscourse (everyday email)). A manager sends a short email to staff:
“First, the office will open at 10am tomorrow. Second, client calls move to Friday. Finally, please confirm by 5pm. Frankly, this is not ideal, but it should work.”
Here the words First/Second/Finally are frame/sequence markers that organize the message; please confirm is an engagement directive; Frankly is an attitude marker. Consider the metadiscursive claim:
φ: “The email explicitly signals its three-part structure and expresses stance.”
The truth of φ depends only on the rhetorical features Struct(D) (presence of sequence markers and stance markers), not on whether the office actually opens at 10am in the world W. Hence φ ∈ MD(D) as in the definition.
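The functional F of the definition can be made concrete for this email. The sketch below, with illustrative (assumed) marker inventories, evaluates φ purely from surface features of the text, never consulting the world W:

```python
# A minimal sketch of the metadiscourse definition on the email example.
# The marker inventories are illustrative assumptions, not from the paper.

SEQUENCE_MARKERS = {"first", "second", "finally"}
ATTITUDE_MARKERS = {"frankly"}

def struct(discourse):
    """A crude stand-in for Struct(D): extract text-internal features only."""
    words = {w.strip(".,!?;:").lower() for w in discourse.split()}
    return {
        "sequence_markers": words & SEQUENCE_MARKERS,
        "attitude_markers": words & ATTITUDE_MARKERS,
    }

def phi(features):
    """F : Struct(D) -> {True, False} for the claim
    'the email signals a three-part structure and expresses stance'."""
    return len(features["sequence_markers"]) >= 3 and bool(features["attitude_markers"])

email = ("First, the office will open at 10am tomorrow. "
         "Second, client calls move to Friday. "
         "Finally, please confirm by 5pm. "
         "Frankly, this is not ideal, but it should work.")

# phi is evaluated from Struct(D) alone; facts about the world W never enter.
assert phi(struct(email))
```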
Definition 2.27 
(Iterated Metadiscourse of depth t). Let D ( 0 ) Sent ( L ) be a base (object-level) discourse. For k 1 define the k-th metadiscourse layer
$$D^{(k)}:=\mathrm{MD}\bigl(D^{(k-1)}\bigr)=\bigl\{\varphi\in\mathrm{Sent}(L):\exists F_k:\mathrm{Struct}\bigl(D^{(k-1)}\bigr)\to\{\top,\bot\},\ \llbracket\varphi\rrbracket_{U^{(k)}}=F_k\bigl(\mathrm{Struct}(D^{(k-1)})\bigr)\bigr\},$$
where ⟦·⟧_{U^{(k)}} : Sent(L) → Prop_{U^{(k)}} denotes level-k utterance-world denotations (propositions about the (k−1)-level discourse). The iterated metadiscourse tower of depth t is
$$\mathbf{D}^{(t)}:=\bigl(D^{(0)},D^{(1)},\ldots,D^{(t)}\bigr),\quad\text{and its closure}\quad\overline{D}^{(t)}:=\bigcup_{k=0}^{t}D^{(k)}.$$
Remark 2.28 
(Typing discipline). D ( 0 ) is typed by Prop W via · W , while D ( k ) (for k 1 ) is typed by Prop U ( k ) via · U ( k ) ; that is, D ( k ) makes claims about the structure/stance of D ( k 1 ) , not about W directly.
Example 2.29 (Iterated Metadiscourse (feedback about feedback)). Level 0 (discourse D^{(0)}). A student writes: “In this report, I first outline the method; however, the results may suggest a limitation.” This contains a frame marker (first), a contrastive connective (however), and a hedge (may).
Level 1 (metadiscourse D ( 1 ) = MD ( D ( 0 ) ) ). A TA comments: “Good structure signal with `first’; your stance is cautious due to `may’. Consider adding a `finally’ to close the sequence.” These remarks are about the student’s rhetorical devices, hence metadiscourse on D ( 0 ) .
Level 2 (iterated metadiscourse D ( 2 ) = MD ( D ( 1 ) ) ). A course coordinator replies to the TA: “Your feedback prioritizes hedging detection over coherence guidance; add explicit transition advice (e.g., `next’, `in sum’).” This sentence evaluates the structure and stance-focus of the TA’s feedback, so its truth depends on Struct ( D ( 1 ) ) (which features the TA addressed and which it omitted), not on the student’s scientific content. Formally, if
ψ: “The TA’s feedback emphasizes hedges more than transitions.”
then ψ ∈ MD(D^{(1)}), witnessing a depth-2 (iterated) metadiscourse claim.
Theorem 2.30. 
(Iterated Metadiscourse generalizes Metadiscourse). For any base discourse D^{(0)}, the depth-1 tower (D^{(0)}, D^{(1)}) satisfies D^{(1)} = MD(D^{(0)}), hence Definition 2.27 reduces to the usual notion of metadiscourse at t = 1.
Proof. 
By Definition 2.27 with k = 1 we set D^{(1)} = {φ : ⟦φ⟧_{U^{(1)}} = F_1(Struct(D^{(0)}))}, which matches the “Metadiscourse” definition with D := D^{(0)} (only the superscript on ⟦·⟧_U distinguishes the level). Thus D^{(1)} = MD(D^{(0)}).    □
Definition 2.31 
(Metadiscourse as an Iterated MetaStructure). Let Σ be a single-sorted signature whose structures encode finite discourses equipped with their internal annotations:
D = H , ord , sec , conn , hedge , att ,
where H ⊆ Sent(L) and the symbols capture, respectively, linear order, sectioning, discourse connectives, hedges/boosters, and attitude markers. Let U^{(0)} be the class of such Σ-structures (object-level discourses), and for k ≥ 0 define the meta-operation
$$\Phi:U^{(k)}\to U^{(k+1)},\qquad \Phi(D):=\mathrm{MD}(D),$$
where MD ( D ) is the Σ –structure whose carrier is D ( k + 1 ) = MD ( H ) and whose symbols are uniformly computed from Struct ( H ) (e.g., ord on the metalevel inherits the canonical order induced by the generating features referenced in F k ).
Proposition 2.32 
(Naturality / isomorphism invariance). If α : D → D′ is a Σ-isomorphism preserving Struct, then there exists an induced isomorphism Φ(α) : Φ(D) → Φ(D′). Hence Φ is isomorphism-invariant (natural).
Proof. 
Because each φ ∈ MD(H) is defined solely by a Boolean functional F_k(Struct(H)), and α preserves Struct by assumption, the truth of every metadiscursive sentence is preserved under α. Mapping each metadiscursive sentence in Φ(D) to its α-transported counterpart in Φ(D′) yields a bijection commuting with all Σ-symbols by construction.    □
Theorem 2.33 
(Representation as an Iterated MetaStructure). Let t N . The tower
M ( t ) : = U ( 0 ) , U ( 1 ) , , U ( t ) ; Φ , with U ( k + 1 ) : = Φ ( U ( k ) ) ,
is an Iterated MetaStructure of depth t: repeated application of the natural meta-operation Φ produces the hierarchy of discourses-about-discourses, and the lifts of Φ coincide with Φ by definition. Moreover, t = 1 recovers ordinary Metadiscourse (Theorem 2.30).
Proof. 
By the Proposition, Φ is an isomorphism–invariant constructor on U ( k ) , so it is a valid meta-operation in the sense of MetaStructure. Define U ( k + 1 ) : = Φ ( U ( k ) ) and iterate t times; this realizes the lifting procedure that builds an Iterated MetaStructure. Unwinding the recursion shows that objects in U ( k ) encode level-k metadiscourse over the base, and the case t = 1 gives exactly MD ( D ( 0 ) ) .    □

2.5. Iterated Metaphilosophy (Philosophy of ... of Philosophy)

Philosophy systematically investigates fundamental questions about reality, knowledge, value, mind, language, logic, meaning, methodology, reasoning, evidence, argument, clarity, and wisdom (cf. [53,54,55,56]). Metaphilosophy critically examines philosophy itself: its aims, methods, questions, boundaries, progress, evaluation standards, practices, institutions, pedagogy, communication, history, value, and limitations (cf. [57,58,59,60]). Iterated metaphilosophy recursively studies higher-order layers: metaphilosophies evaluating other metaphilosophies, formalizing reflexive methods, coherence, convergence, governance, incentive structures, dynamics, and impact.
Definition 2.34 
(Metaphilosophy). Let a (first-order) philosophical theory be a quadruple
Φ = ( Q , Ans , Meth , Norm )
where Q is a set of questions, Ans a set of candidate answers, Meth a set of admissible methods, and Norm a set of evaluation norms (e.g., validity, clarity, fruitfulness). A metaphilosophical theory Φ′ is a theory whose non-logical symbols range over the components of Φ and whose axioms/theorems are propositions about them, e.g.
WellFormed ( q ) , Admissible ( m ) , Adequate ( n , q , a ) ,
together with rules that compare or constrain (Q, Meth, Norm). Formally, if L_Φ is the language of Φ, a metaphilosophy is any theory in a meta-language extending L_Φ that quantifies over and predicates of Q, Ans, Meth, Norm.
Example 2.35. (Metaphilosophy (departmental policy in practice)). A university philosophy department meets to design its research and teaching policy.
  • Object level Φ ( 0 ) : faculty pursue questions such as “Do moral facts supervene on natural facts?” using methods like thought experiments, formal modeling, or historical analysis, and they publish candidate answers.
  • Meta level Φ^{(1)} = ⋆(Φ^{(0)}): the committee explicitly debates the components of Φ^{(0)}:
    (1)
    Q : Which questions count as central (e.g., priority to public-impact ethics vs. abstract metaphysics)?
    (2)
    Meth : Which methods are admissible (e.g., allow experimental philosophy surveys; require formal clarity for modality debates)?
    (3)
    Norm : Which evaluation norms govern papers and courses (e.g., transparency of argument maps, reproducible data for X-phi, fruitfulness for cross-field collaboration)?
    The committee’s resolutions (e.g., “Admissible(survey)”; “WellFormed(question drafts require argument maps)”) are propositions about Q , Meth , Norm of Φ ( 0 ) , hence constitute a concrete metaphilosophical theory in everyday departmental governance.
Definition 2.36 
(Iterated Metaphilosophy of depth t). Fix a base philosophical theory
Φ ( 0 ) = ( Q ( 0 ) , Ans ( 0 ) , Meth ( 0 ) , Norm ( 0 ) )
Define a meta-operator
$$\star:\Phi^{(k)}\;\mapsto\;\Phi^{(k+1)}:=\bigl(Q^{(k+1)},\,\mathrm{Ans}^{(k+1)},\,\mathrm{Meth}^{(k+1)},\,\mathrm{Norm}^{(k+1)}\bigr),$$
where each component of level k + 1 is about level k:
$$\begin{aligned} Q^{(k+1)}&:=\{\text{questions about }(Q^{(k)},\mathrm{Ans}^{(k)},\mathrm{Meth}^{(k)},\mathrm{Norm}^{(k)})\},\\ \mathrm{Ans}^{(k+1)}&:=\{\text{propositions/claims answering elements of }Q^{(k+1)}\},\\ \mathrm{Meth}^{(k+1)}&:=\{\text{meta-methods that evaluate/compare }\mathrm{Meth}^{(k)}\text{ and }\mathrm{Norm}^{(k)}\},\\ \mathrm{Norm}^{(k+1)}&:=\{\text{norms to appraise elements of }\mathrm{Ans}^{(k+1)}\text{ and }\mathrm{Meth}^{(k+1)}\}.\end{aligned}$$
For t N , the iterated metaphilosophy tower of depth t is
$$\boldsymbol{\Phi}^{(t)}:=\bigl(\Phi^{(0)},\Phi^{(1)},\ldots,\Phi^{(t)}\bigr),\quad\text{with}\quad \Phi^{(k+1)}=\star\bigl(\Phi^{(k)}\bigr).$$
When needed, we write L ( k ) for the (meta)-language in which Φ ( k ) is formulated, so that L ( k + 1 ) extends L ( k ) by symbols that range over the components of Φ ( k ) .
Remark 2.37 
(Level discipline). Φ^{(0)} makes claims about the world; Φ^{(1)} makes claims about (Q^{(0)}, Ans^{(0)}, Meth^{(0)}, Norm^{(0)}); in general Φ^{(k)} makes claims about Φ^{(k−1)}. Cross-level conservativity or reflection principles may be added, but are not required by the definition.
Example 2.38 (Iterated Metaphilosophy (governing the governors)). A national philosophy association evaluates departments’ meta-policies to issue accreditation.
  • Level 1 Φ ( 1 ) : each department has a metaphilosophical framework (e.g., “pluralist admissibility of methods with clarity-and-impact norms”) regulating its object-level research Φ ( 0 ) .
  • Level 2 Φ^{(2)} = ⋆(Φ^{(1)}): the association forms an oversight panel that compares and audits those metaphilosophies. Its questions and rules are about Q^{(1)}, Meth^{(1)}, Norm^{(1)}:
    (1)
    Q ( 2 ) : “Do the departments’ admissibility rules systematically bias against certain subfields or methods?”
    (2)
    Meth ( 2 ) : “Use rubric-based audits, citation-network analysis, and stakeholder interviews to evaluate Norm ( 1 ) .”
    (3)
    Norm ( 2 ) : “Prefer metaphilosophies that ensure transparency of review, methodological pluralism, and measurable educational outcomes.”
    The accreditation decision (e.g., “Adequate(Norm^{(2)}, bias-robustness, Dept A’s Φ^{(1)})”) is a claim about a metaphilosophy rather than about object-level philosophy. Thus everyday accreditation practice exemplifies iterated metaphilosophy: philosophy about philosophy about philosophy.
Theorem 2.39 
(Iterated Metaphilosophy generalizes Metaphilosophy). For any base Φ^{(0)}, the depth-1 tower (Φ^{(0)}, Φ^{(1)}) satisfies Φ^{(1)} = ⋆(Φ^{(0)}) and coincides with the usual notion of metaphilosophy.
Proof. 
By Definition 2.36, Φ^{(1)} has a language L^{(1)} whose non-logical symbols range over the components of Φ^{(0)}, and its sentences are propositions about those components. This is exactly the notion of Metaphilosophy in Definition 2.34.    □
Definition 2.40 
(Encoding as an Iterated MetaStructure). Let Σ phil be a single-sorted signature with unary predicates Q , A , M , N picking out questions, answers, methods, and norms inside a carrier H, and relation symbols
$$\mathrm{Admissible}(m),\qquad \mathrm{WellFormed}(q),\qquad \mathrm{Adequate}(n,q,a),\qquad \mathrm{Prefer}(n,a,a'),$$
together with any additional (fixed) vocabulary needed to represent a philosophical theory internally. A level-0 object is a Σ phil -structure
Φ = H ; Q Φ , A Φ , M Φ , N Φ ; Admissible Φ , WellFormed Φ , Adequate Φ , Prefer Φ ,
interpreting the components of a philosophical theory on H. Define a meta-operation
$$\Phi^{*}:U^{(k)}\to U^{(k+1)},\qquad \Phi\mapsto\Phi',$$
where the carrier of Φ′ consists of well-formed formulas in the meta-language that refer to the relations of Φ (e.g. encodings of Admissible(m), Adequate(n, q, a), etc.), and where each predicate/relation of Σ_phil on the meta-level is computed by a fixed, isomorphism-invariant recipe from the corresponding data of Φ. Set U^{(0)} := {all Σ_phil-structures} and U^{(k+1)} := Φ*(U^{(k)}).
Proposition 2.41 (Isomorphism invariance (naturality)). If α : Φ → Ψ is a Σ_phil-isomorphism, then there exists an induced isomorphism Φ*(α) : Φ′ → Ψ′.
Proof. 
By construction, the carrier of Φ′ is the set of meta-sentences obtained from the invariantly defined relations of Φ; transporting along α preserves truth of such sentences because α preserves all Σ_phil-relations. Thus mapping each meta-sentence to its α-transported counterpart yields a bijection that commutes with the interpretations of the meta-level predicates and relations.    □
Theorem 2.42 
(Representation as an Iterated MetaStructure). For every depth t N , the tower
M ( t ) : = U ( 0 ) , U ( 1 ) , , U ( t ) ; Φ * , U ( k + 1 ) : = Φ * ( U ( k ) ) ,
is an Iterated MetaStructure of depth t whose objects encode Φ^{(0)}, Φ^{(1)}, …, Φ^{(t)}. Moreover, the case t = 1 recovers ordinary metaphilosophy (Theorem 2.39).
Proof. 
By Proposition 2.41, Φ * is an isomorphism-invariant constructor on U ( k ) , hence a valid meta-operation in the sense of MetaStructure. Iterating Φ * produces the desired hierarchy; unpacking the definitions shows that U ( k ) consists exactly of encodings of level-k metaphilosophical theories about level- ( k 1 ) . For t = 1 , U ( 1 ) = Φ * ( U ( 0 ) ) coincides with the usual metaphilosophy over U ( 0 ) , establishing the generalization.    □

2.6. Iterated Metaknowledge (Knowledge of ... of Knowledge)

Metaknowledge concerns knowledge about knowledge: its sources, structures, justification, uncertainty, sharing, retrieval, governance, evolution, applications, and limitations across agents and communities (cf. [61,62,63,64,65,66]). Iterated metaknowledge recursively studies layers of knowledge about knowledge, modeling higher-order beliefs, reflexive reasoning, communication protocols, incentives, and governance dynamics.
Definition 2.43 
(Metaknowledge). Let M = (S, {R_i}_{i∈I}, V^{(0)}) be a Kripke model for agents i ∈ I over a base propositional language L^{(0)} with valuation V^{(0)} : Prop → P(S). For φ in the Boolean closure of L^{(0)} under the unary modalities K_i, define the order
$$\mathrm{ord}(\varphi)=0\ \text{if }\varphi\text{ has no }K_i;\qquad \mathrm{ord}(K_i\psi)=\mathrm{ord}(\psi)+1;\qquad \mathrm{ord}(\psi\wedge\chi)=\max\{\mathrm{ord}(\psi),\mathrm{ord}(\chi)\},$$
and similarly for the other Boolean connectives. A formula φ is meta-knowledge of order n ≥ 1 iff ord(φ) = n.
Example 2.44 (Metaknowledge (everyday transactive memory)). In a household, a teenager knows that their mother knows the trusted plumber’s contact. Let p denote the base proposition “the recommended plumber’s phone number is # ”. The teen’s state can be expressed as
K teen K mom p ,
i.e., knowledge about someone else’s knowledge (metaknowledge). Practically, when a leak occurs, the teen does not search the web but immediately calls the mother, leveraging knowledge-of-where-the-knowledge-is stored (a family ‘who-knows-what’ map).
Definition 2.45 
(Language/semantics tower). Define a family of languages (L^{(t)})_{t≥0} by
$$L^{(0)}\ \text{given},\qquad L^{(t+1)}:=\mathrm{Bool}\bigl(\{K_i\psi\mid i\in I,\ \psi\in L^{(t)}\}\bigr).$$
For each t, define satisfaction ⊧^{(t)} ⊆ S × L^{(t)} inductively: Boolean clauses as usual, and
$$M,s\models^{(t+1)}K_i\psi\iff\forall s'\,\bigl(s\,R_i\,s'\implies M,s'\models^{(t)}\psi\bigr),\qquad \psi\in L^{(t)}.$$
Definition 2.46 
(Iterated Metaknowledge of depth t). Fix t N . The iterated metaknowledge structure of depth t on M is
MK ( t ) ( M ) : = M ; ( 0 ) , ( 1 ) , , ( t ) ,
equipped with the language tower (L^{(0)}, …, L^{(t)}) from Definition 2.45. The set of all depth-t metaknowledge formulae is $L^{(\le t)}:=\bigcup_{k=1}^{t}L^{(k)}$.
Example 2.47 (Iterated Metaknowledge (layered expertise in a project team)). In a startup, the on-call engineer A knows that the tech lead B knows that the compliance officer C knows whether a specific clause q permits storing user logs for 90 days. Formally,
K A K B K C q .
This iterated (third-order) metaknowledge determines the escalation path: A pages B (who will consult C) instead of making an ad hoc decision. If, moreover, B knows that A knows that B knows that C knows q (and this becomes common knowledge within the on-call playbook), hand-offs become faster and errors fewer, illustrating how higher-order knowledge about knowledge improves coordination under time pressure.
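The order function and the Kripke satisfaction clause above can be made executable. The following sketch implements ord and ⊧ for nested K_i formulas; the two-state model (whether clause q permits storage) is invented for illustration:

```python
# Sketch of the metaknowledge definitions: formulas are either an atom "q"
# or a nested tuple ("K", agent, subformula). The model below is invented.

def ord_(phi):
    """Modal order: ord(q) = 0, ord(K_i psi) = ord(psi) + 1."""
    return 0 if isinstance(phi, str) else 1 + ord_(phi[2])

def sat(model, state, phi):
    """Kripke clause: M,s |= K_i psi iff psi holds at all R_i-successors."""
    states, R, V = model
    if isinstance(phi, str):          # base proposition about the world
        return state in V[phi]
    _, agent, psi = phi
    return all(sat(model, s2, psi) for (s1, s2) in R[agent] if s1 == state)

# Two states: in w the clause q permits 90-day log storage; in u it does not.
states = {"w", "u"}
V = {"q": {"w"}}
# For simplicity, each agent's accessibility pins down the actual state.
R = {a: {("w", "w"), ("u", "u")} for a in ("A", "B", "C")}
model = (states, R, V)

phi = ("K", "A", ("K", "B", ("K", "C", "q")))  # K_A K_B K_C q
assert ord_(phi) == 3                          # third-order metaknowledge
assert sat(model, "w", phi)
```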
Proposition 2.48 
(Generalization). At t = 1, L^{(≤1)} = L^{(1)} is exactly the set of order-1 metaknowledge statements (“knowledge about base facts”), and all order-n metaknowledge with n ≤ t belongs to L^{(≤t)}. Hence Iterated Metaknowledge of depth t generalizes (single-step) Metaknowledge.
Proof. 
By construction, L^{(1)} is the Boolean closure of {K_i ψ : ψ ∈ L^{(0)}}, which are exactly order-1 forms. The recursion for L^{(t)} increases outermost modal depth by one, so every order-n formula lies in L^{(n)} ⊆ L^{(≤t)} whenever n ≤ t.    □
Let Σ_ep be the single-sorted signature with carrier S and relation symbols {R_i}_{i∈I} together with, for each t ≥ 0, a binary predicate Sat^{(t)}(s, φ) intended as “s ⊧^{(t)} φ”. Consider the class
$$U^{(0)}:=\bigl\{\,E=(S,\{R_i\}_{i\in I},V^{(0)})\,\bigr\}$$
of base epistemic structures (Kripke models with base valuation). Define a meta-operation
$$\mathrm{Lift}:U^{(k)}\to U^{(k+1)},$$
which takes E ∈ U^{(k)} to the Σ_ep-structure E′ with the same carrier S and relations {R_i}, and whose new predicate Sat^{(k+1)} is computed from Sat^{(k)} by the Kripke clause:
$$\mathrm{Sat}^{(k+1)}(s,K_i\psi)\iff\forall s'\,\bigl(s\,R_i\,s'\implies\mathrm{Sat}^{(k)}(s',\psi)\bigr),\qquad \psi\in L^{(k)}.$$
(For k = 0 , Sat ( 0 ) is induced by V ( 0 ) and Boolean truth tables.)
Proposition 2.49 
(Naturality). If f : (S, {R_i}, V^{(0)}) → (S′, {R_i′}, V′^{(0)}) is an isomorphism of base epistemic structures, then there is a unique induced isomorphism Lift(f) : Lift(E) → Lift(E′) preserving all Sat^{(k+1)}. Hence Lift is isomorphism-invariant.
Proof. 
Isomorphisms preserve accessibility relations and base valuations. The defining clause for Sat ( k + 1 ) uses only R i and Sat ( k ) ; the induction hypothesis gives preservation of Sat ( k ) , so the displayed equivalence is preserved under f.    □
Theorem 2.50 
(Representation as Iterated MetaStructure). For every t N , the tower
M ( t ) : = U ( 0 ) , U ( 1 ) , , U ( t ) ; Lift , U ( k + 1 ) : = Lift U ( k ) ,
is an Iterated MetaStructure of depth t. Moreover, each object in U^{(t)} encodes MK^{(t)}(M) for some base model M, and for t = 1 we recover ordinary Metaknowledge (Proposition 2.48).
Proof. 
By the Proposition, Lift is an isomorphism-invariant constructor, hence a valid meta-operation in the sense of MetaStructure. Iterating Lift adds the satisfaction predicate Sat ( k ) level by level according to Definition 2.45, thus capturing ( 0 ) , , ( t ) on a common carrier with fixed { R i } . Unwinding the definitions shows that U ( t ) consists exactly of epistemic structures equipped with the first t satisfaction relations, i.e. instances of MK ( t ) ( M ) .    □

2.7. Iterated MetaScience (Science of ... of Science)

Metascience uses scientific methods to study and improve research itself, measuring validity, reproducibility, transparency, costs, incentives, and optimizing policies systemwide (cf.[67,68,69]). Iterated Metascience recursively applies metascientific evaluation to policy-making processes themselves, optimizing multi-level research ecosystems through nested experimentation and loss minimization.
Definition 2.51 
(Metascience as a higher-order statistical decision problem). Let Q be a measurable set of scientific questions, each with ground-truth parameter θ(q) ∈ Θ and data-generating law P_q on a sample space X_q. A (pre-specified) study design is a measurable map
$$E_s:\mathcal{X}_q^{\,n}\to\mathcal{E},\qquad X\mapsto E_s(X)=\bigl(\hat{\theta}_s(X),\,\mathrm{CI}_s(X),\,p_s(X),\,\delta_s(X)\bigr),\qquad s\in S,$$
that turns data X P q n into an evidential object E s ( X ) (estimator, interval, p–value, decision).
An ecosystem policy (methods, reporting, evaluation, incentives) is denoted π ∈ Π and induces a probability kernel P_π on Q × S (and thus on evidential outputs E). For weights w = (w_1, …, w_4) ∈ R_{≥0}^4, define per-study quality functionals
$$\mathrm{Val}(s,q):=1-\mathrm{MSE}_q(\hat{\theta}_s),\qquad \mathrm{Rep}(s,q):=\mathbb{P}\bigl(\delta_{s'}=\delta_s\mid q\bigr),\ s'\sim P_\pi(\cdot\mid q),$$
$$\mathrm{Trn}(s)\in[0,1]\ (\text{transparency/completeness score}),\qquad \mathrm{Cost}(s)\ge 0,$$
and the social loss
$$L(\pi):=\mathbb{E}_{(q,s)\sim P_\pi}\Bigl[w_1\bigl(1-\mathrm{Val}(s,q)\bigr)+w_2\bigl(1-\mathrm{Rep}(s,q)\bigr)+w_3\bigl(1-\mathrm{Trn}(s)\bigr)+w_4\,\mathrm{Cost}(s)\Bigr].$$
Metascience measures functionals of P_π (descriptive/evaluative) and seeks π★ ∈ argmin_{π∈Π} L(π) (prescriptive).
Example 2.52 (MetaScience (department-level policy A/B trial)). A psychology department wants to improve the quality of senior-thesis experiments. Two ecosystem policies are considered: π RR (Registered Reports + mandatory power analysis + data sharing) and π BAU (business-as-usual). Over one academic year the department cluster-randomizes courses (units u) to either policy and, for each study s in unit u with question q s , records:
$$\mathrm{Val}(s,q_s)=1-\mathrm{MSE}_{q_s}(\hat{\theta}_s),\qquad \mathrm{Rep}(s,q_s)=\mathbf{1}\{\delta_s=\delta_{s'}\ \text{on preregistered replication}\},\qquad \mathrm{Trn}(s)\in[0,1],\qquad \mathrm{Cost}(s)\ge 0.$$
Using weights w = ( 0.4 , 0.3 , 0.2 , 0.1 ) , the empirical social loss for policy π is
$$\hat{L}(\pi)=\frac{1}{N_\pi}\sum_{(q,s)\in\hat{P}_\pi}\Bigl[0.4\,(1-\mathrm{Val})+0.3\,(1-\mathrm{Rep})+0.2\,(1-\mathrm{Trn})+0.1\,\mathrm{Cost}\Bigr].$$
They find L̂(π_RR) < L̂(π_BAU), driven by higher Rep and Trn with modest cost. The department adopts π_RR ∈ argmin_π L̂(π), illustrating metascience: measuring how policies π shape the distribution of evidential objects and choosing the loss-minimizing policy.
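The empirical loss L̂(π) is just a weighted average over study records. A minimal sketch, with hypothetical per-study records (Val, Rep, Trn, Cost) for the two policies, assuming the weights w = (0.4, 0.3, 0.2, 0.1) of the example:

```python
# Sketch of the empirical social loss. The per-study records below are
# invented for illustration; only the loss formula mirrors the example.

WEIGHTS = (0.4, 0.3, 0.2, 0.1)

def social_loss(records, w=WEIGHTS):
    """Average of w1(1-Val) + w2(1-Rep) + w3(1-Trn) + w4*Cost over studies."""
    per_study = [
        w[0] * (1 - val) + w[1] * (1 - rep) + w[2] * (1 - trn) + w[3] * cost
        for (val, rep, trn, cost) in records
    ]
    return sum(per_study) / len(per_study)

# Hypothetical records (Val, Rep, Trn, Cost) under each policy.
records_rr  = [(0.9, 1, 0.95, 0.30), (0.8, 1, 0.90, 0.35)]
records_bau = [(0.8, 0, 0.40, 0.10), (0.7, 1, 0.50, 0.15)]

loss_rr, loss_bau = social_loss(records_rr), social_loss(records_bau)
assert loss_rr < loss_bau  # the department adopts the lower-loss policy
```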
Definition 2.53 
(Iterated MetaScience of depth t). A level–0 scientific ecosystem is a tuple
E ( 0 ) = ( Q , { P q } q Q , S , Π , P ( · ) , L ) ,
as in the recalled definition. For k 0 , define the lifted (meta) policy space
$$\Pi^{(k+1)}:=\Delta\bigl(\Pi^{(k)}\bigr)\quad\text{with}\quad \Pi^{(0)}:=\Pi,$$
i.e. Π^{(k+1)} is the set of randomized selectors σ over Π^{(k)}. Given σ ∈ Π^{(k+1)}, define the induced population kernel on Q × S × Π^{(k)} by
$$P^{(k+1)}_\sigma:=\text{law of }(\pi,q,s)\ \text{with}\ \pi\sim\sigma,\ (q,s)\sim P^{(k)}_\pi,\qquad P^{(0)}_\pi:=P_\pi.$$
Define the lifted loss on Π ( k + 1 ) by
$$L^{(k+1)}(\sigma):=\mathbb{E}_{(\pi,q,s)\sim P^{(k+1)}_\sigma}\bigl[L^{(k)}(\pi)\bigr]+\lambda_{k+1}\,\mathrm{Cost}^{(k+1)}(\sigma),\qquad L^{(0)}:=L,$$
with λ_{k+1} ≥ 0 and a meta-experimental cost Cost^{(k+1)}. The level-(k+1) meta-optimizer solves σ★ ∈ argmin_{σ∈Π^{(k+1)}} L^{(k+1)}(σ).
An Iterated MetaScience system of depth t is the tower
$$\mathbf{E}^{(t)}:=\bigl(E^{(0)},\,\Pi^{(1)},\ldots,\Pi^{(t)};\,P^{(1)},\ldots,P^{(t)};\,L^{(1)},\ldots,L^{(t)}\bigr).$$
Example 2.54 (Iterated MetaScience (funder-level experiment on policy selection rules)). A national research funder wants not only to compare policies (e.g., open-data bonuses, preregistration mandates, registered reports) but to evaluate how it chooses among them. Let the base policy set be Π = {π_1, π_2, π_3} with base loss L(π) as above. Two meta-policies over Π are compared:
$$\sigma_{\text{static}}\in\Delta(\Pi)\ (\text{fixed portfolio for a year}),\qquad \sigma_{\text{adapt}}\ (\text{Thompson-sampling bandit over }\Pi).$$
Universities are cluster-randomized to either σ static or σ adapt ; within each cluster, grants are assigned policies π σ and outcomes are tracked. The level-1 loss (Definition 2.53) is
$$L^{(1)}(\sigma)=\mathbb{E}_{(\pi,q,s)\sim P^{(1)}_\sigma}\bigl[L(\pi)\bigr]+\lambda_1\,\mathrm{Cost}^{(1)}(\sigma),$$
where Cost^{(1)} accounts for overhead of adaptivity (dashboards, monitoring). After one cycle, the funder estimates L̂^{(1)}(σ_adapt) < L̂^{(1)}(σ_static): the adaptive meta-policy quickly allocates more grants to low-loss base policies, improving reproducibility and transparency systemwide despite modest overhead. The funder then deploys σ★ ∈ argmin_σ L̂^{(1)}(σ), demonstrating iterated metascience: optimizing a policy-over-policies that governs how base research policies are chosen.
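The adaptive meta-policy σ_adapt can be sketched as a standard Thompson-sampling bandit over the base policy set. The Beta-Bernoulli feedback model and the success rates below are assumptions for illustration, not part of the example's data:

```python
# Sketch: Thompson sampling as a meta-policy over Pi = {pi1, pi2, pi3}.
# Each grant yields a Bernoulli "success" (a low observed loss); the true
# success rates are invented and unknown to the sampler.

import random
random.seed(0)

TRUE_SUCCESS = {"pi1": 0.4, "pi2": 0.7, "pi3": 0.5}

def thompson_run(n_grants):
    a = {p: 1.0 for p in TRUE_SUCCESS}  # Beta(1, 1) priors
    b = {p: 1.0 for p in TRUE_SUCCESS}
    counts = {p: 0 for p in TRUE_SUCCESS}
    for _ in range(n_grants):
        # Sample a plausible success rate per policy, assign the argmax.
        draws = {p: random.betavariate(a[p], b[p]) for p in TRUE_SUCCESS}
        pi = max(draws, key=draws.get)
        counts[pi] += 1
        if random.random() < TRUE_SUCCESS[pi]:  # observe the outcome
            a[pi] += 1
        else:
            b[pi] += 1
    return counts

counts = thompson_run(2000)
# The adaptive meta-policy concentrates grants on the best base policy.
assert counts["pi2"] == max(counts.values())
```

The fixed portfolio σ_static would instead draw π from a constant distribution, which is exactly the degenerate case treated in Proposition 2.55.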
Proposition 2.55 
(Generalization of Metascience). At depth t = 1, Iterated MetaScience reduces to ordinary Metascience: if σ is restricted to degenerate distributions δ_π on Π, then L^{(1)}(δ_π) = L^{(0)}(π) and argmin_{σ∈Δ(Π)} L^{(1)}(σ) contains the embeddings of argmin_{π∈Π} L^{(0)}(π).
Proof. 
For any π ∈ Π, P^{(1)}_{δ_π} is the law of (q, s) ∼ P^{(0)}_π, and hence L^{(1)}(δ_π) = E_{(q,s)∼P_π}[L^{(0)}(π)] = L^{(0)}(π), ignoring the constant experiment cost for degenerate σ. Thus minimizing L^{(1)} over Δ(Π) generalizes the base problem.    □
Fix a single-sorted signature Σ ms with carrier H and function/relational symbols
$$Q,\ S,\ \mathrm{Pi}^{(k)},\qquad \mathrm{Kern}^{(k)}:\mathrm{Pi}^{(k)}\to\Delta\bigl(Q\times S\times\mathrm{Pi}^{(k-1)}\bigr),\qquad \mathrm{Loss}^{(k)}:\mathrm{Pi}^{(k)}\to\mathbb{R}_{\ge 0},$$
(with the convention Pi^{(−1)} := ∅ and Kern^{(0)} : Pi^{(0)} → Δ(Q × S)). Let U^{(0)} be the class of Σ_ms-structures instantiating (Q, S, Π, P_{(·)}, L).
Definition 2.56 (Meta-operation Lift). Define Lift : U^{(k)} → U^{(k+1)} on objects by
  • keeping the underlying carriers for Q and S unchanged,
  • replacing Pi ( k ) by Pi ( k + 1 ) : = Δ ( Pi ( k ) ) ,
  • defining Kern^{(k+1)}(σ) as the mixture Kern^{(k+1)}(σ) := ∫ Kern^{(k)}(π) σ(dπ),
  • defining Loss^{(k+1)}(σ) := ∫ Loss^{(k)}(π) σ(dπ) + λ_{k+1} Cost^{(k+1)}(σ).
On morphisms f : E → E′ (measurable isomorphisms preserving Q, S and pushing Kern^{(k)}, Loss^{(k)} forward), Lift(f) acts by push-forward on Δ(Pi^{(k)}).
Lemma 2.57 (Naturality of Lift). If f : E → E′ is an isomorphism in U^{(k)}, then Lift(f) : Lift(E) → Lift(E′) is an isomorphism in U^{(k+1)} preserving Kern^{(k+1)} and Loss^{(k+1)}.
Proof. 
By assumption, f preserves Kern^{(k)} and Loss^{(k)} up to push-forward. For any σ ∈ Δ(Pi^{(k)}),
$$\mathrm{Kern}^{(k+1)}_{E'}\bigl(\mathrm{Lift}(f)(\sigma)\bigr)=\int\mathrm{Kern}^{(k)}_{E'}\bigl(f(\pi)\bigr)\,\sigma(d\pi)=\int f_{\#}\,\mathrm{Kern}^{(k)}_{E}(\pi)\,\sigma(d\pi)=f_{\#}\,\mathrm{Kern}^{(k+1)}_{E}(\sigma),$$
and similarly Loss ( k + 1 ) is preserved by linearity of the integral and the invariance of Cost ( k + 1 ) under Lift ( f ) . Hence Lift is natural.    □
Theorem 2.58 
(Iterated MetaScience is an Iterated MetaStructure). For every t N , the tower
M ( t ) : = U ( 0 ) , U ( 1 ) , , U ( t ) ; Lift , U ( k + 1 ) : = Lift U ( k ) ,
is an Iterated MetaStructure of depth t in the sense that U^{(k)} is a class of Σ_ms-structures and Lift is an isomorphism-invariant meta-operation (by Lemma 2.57). Moreover, any object in U^{(t)} encodes an Iterated MetaScience system of depth t as in Definition 2.53, and for t = 1 it reduces to ordinary Metascience (Proposition 2.55).
Proof. 
By Lemma 2.57, Lift is a valid meta-constructor (isomorphism-invariant). Iterating Lift yields the sequence of policy spaces Π ( k ) , kernels P ( k ) , and losses L ( k ) exactly as in Definition 2.53. The identification of U ( t ) with depth-t Iterated MetaScience is immediate from the clauses in Definition 2.56. The t = 1 case follows from Proposition 2.55.    □

2.8. Iterated MetaModel (Model of ... of Models)

A metamodel is a formal model that defines the syntax, rules, and constraints of other models, providing structure and conformance principles (cf.[70,71,72,73]). An Iterated Metamodel recursively models metamodels themselves, enabling hierarchical layers of abstraction that generalize modeling frameworks through Iterated-MetaStructure formalisms.
Definition 2.59 
(Metamodel and conformance). Fix a finite set of class symbols Cls and relation symbols Rel with ar : Rel → Z_{≥1}. A metamodel is a tuple
MM = ( Cls , Rel , ar , Ctr ) ,
where Ctr is a set of (first-order) well-formedness constraints over typed graphs. A model conforming to MM is a finite typed multigraph
$$M=(V,E,t,\ell)\quad\text{with}\quad t:V\to\mathrm{Cls},\qquad \ell:E\to\mathrm{Rel},$$
such that for each e ∈ E with ℓ(e) = r of arity k = ar(r), the endpoints of e form an ordered k-tuple in V^k, and (V, E, t, ℓ) satisfies all constraints in Ctr. We write M ⊧ MM and call the relation ⊧ conformance.
Example 2.60 (MetaModel in everyday software design (UML ⇒ domain model)). Metamodel (UML fragment). The UML metamodel provides the types of modeling elements and their rules, e.g. Class, Association, Attribute, Multiplicity, and well-formedness constraints (OCL): every Association links two Classes; multiplicities are nonnegative; attribute types are declared, etc. Formally, it is a typed graph of meta-classes and meta-relations that any user model must conform to.
Model (conforming e-commerce diagram). A team models an online store with classes
Customer , Order , OrderLine , Product .
Associations (with multiplicity constraints) encode business rules:
  • Customer 1 — * Order (a customer can place many orders).
  • Order 1 — * OrderLine (each order has at least one line).
  • OrderLine * — 1 Product (each line references exactly one product).
Attributes include Order.date: Date, OrderLine.qty: Nat, OrderLine.unitPrice: Money. An OCL invariant expresses a domain constraint:
$$\textbf{context Order inv Total}:\quad \mathrm{self.total}=\sum_{i\in\mathrm{self.lines}} i.\mathrm{qty}\cdot i.\mathrm{unitPrice}.$$
This user model is valid because every element (Class, Association, Attribute, multiplicity, OCL) is an instance of the UML metamodel elements and satisfies their well-formedness rules. Thus the UML metamodel (the “model of models”) constrains and validates the concrete e-commerce model.
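The invariant Total amounts to a single equality check per Order instance. A minimal sketch with invented instance data, mirroring the class and attribute names of the diagram:

```python
# Sketch of the OCL invariant "context Order inv Total" on a toy instance.
# The concrete orders below are invented for illustration.

from dataclasses import dataclass

@dataclass
class OrderLine:
    qty: int
    unitPrice: float

@dataclass
class Order:
    lines: list
    total: float

def inv_total(order):
    """OCL: self.total = sum over self.lines of qty * unitPrice."""
    return abs(order.total - sum(l.qty * l.unitPrice for l in order.lines)) < 1e-9

ok  = Order(lines=[OrderLine(2, 9.99), OrderLine(1, 5.00)], total=24.98)
bad = Order(lines=[OrderLine(2, 9.99)], total=10.00)

assert inv_total(ok) and not inv_total(bad)
```

A model checker for the full metamodel would evaluate every such invariant over every instance, which is exactly the conformance relation ⊧ of Definition 2.59.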
Definition 2.61 
(Iterated MetaModel of order n). For n 1 , an iterated metamodel of order n is a finite chain
C n : = ( L 0 , L 1 , , L n )
with the following properties:
(1)
L 0 is a (level-0) model.
(2)
For each k = 1 , , n , L k is a metamodel L k = ( Cls k , Rel k , ar k , Ctr k ) .
(3)
Typed conformance chain: for each k = 1, …, n there is a conformance relation ⊧_k ⊆ U_{k−1} × U_k such that
$$L_{k-1}\models_k L_k,$$
where U 0 is the class of level-0 models and, for k 1 , U k is the class of level-k metamodels.
We abbreviate this situation by writing a conformance tower
$$L_0\models_1 L_1\models_2\cdots\models_n L_n.$$
The case n = 1 recovers ordinary metamodeling (L_0 ⊧_1 L_1).
Example 2.62 (Iterated MetaModel in everyday data exchange (metaschema ⇒ schema ⇒ document)). Level M2 (metamodel-of-metamodel: JSON Schema metaschema). The JSON Schema metaschema specifies what a schema itself may contain: keywords like type, properties, required, items, minimum, and the allowed structures for combining them. It is a metamodel because its “instances” are schemas.
Level M1 (metamodel: an organization’sExpenseReport.schema.json). A company publishes a JSON Schema for expense reports using the metaschema’s keywords:
  • properties: employeeId: string, date: string (format: date), items: array.
  • Each item has category ∈ {travel, meals, lodging}, amount: number with minimum = 0.
  • required: {employeeId, date, items}.
This schema conforms to the metaschema (all keywords/structures are valid), so it is a correct model at M1.
Level M0 (model instance: a concrete expense report document).
An employee submits alice_expense_0421.json: it contains the required fields, a list of items with nonnegative amounts, and valid category values. This document conforms to ExpenseReport.schema.json.
Why this is iterated metamodeling. Conformance occurs at two successive meta-levels:
document (M0) ⊨ schema (M1) and schema (M1) ⊨ metaschema (M2).
Thus a metamodel (the schema) is itself validated by a meta-metamodel (the metaschema), and the everyday act of submitting a JSON file is governed by this two-tier, iterated conformance chain.
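The two conformance checks can be sketched in Python. This is a minimal hand-rolled illustration of the idea, not a real JSON Schema validator (such as the jsonschema library): the keyword set, the checking logic, and the sample data are simplified stand-ins for the expense-report example above.

```python
# Two-tier conformance, hand-rolled for illustration (NOT a real JSON Schema
# validator): a document conforms to a schema, and a schema to a metaschema.

def conforms_document_to_schema(doc, schema):
    """M0 |= M1: required fields are present and simple types match."""
    for field in schema.get("required", []):
        if field not in doc:
            return False
    py_types = {"string": str, "number": (int, float), "array": list}
    for field, rule in schema.get("properties", {}).items():
        if field in doc and not isinstance(doc[field], py_types[rule["type"]]):
            return False
    return True

def conforms_schema_to_metaschema(schema):
    """M1 |= M2: the schema only uses declared keywords with declared shapes."""
    allowed_keywords = {"type", "properties", "required", "items", "minimum"}
    if not set(schema) <= allowed_keywords:
        return False
    return (isinstance(schema.get("properties", {}), dict)
            and isinstance(schema.get("required", []), list))

expense_schema = {
    "properties": {"employeeId": {"type": "string"},
                   "date": {"type": "string"},
                   "items": {"type": "array"}},
    "required": ["employeeId", "date", "items"],
}
report = {"employeeId": "E-042", "date": "2025-04-21",
          "items": [{"category": "meals", "amount": 18.5}]}

assert conforms_schema_to_metaschema(expense_schema)        # schema |= metaschema
assert conforms_document_to_schema(report, expense_schema)  # document |= schema
```

Submitting a malformed report, or publishing a schema with an undeclared keyword, fails at exactly one of the two tiers, which is the conformance tower in miniature.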
To make the iteration concrete, we specify a level-agnostic schema MMM whose instances are precisely metamodels in the sense above.
Let
Cls 2 = { Class , Rel , Nat , Ctr , Var , Frm } .
Let Rel 2 contain the following relation symbols with arities indicated:
Arg (“argument typing”): ar = 3,  Arity: ar = 2,  Mentions: ar = 2,  WellFormed: ar = 1.
Intended reading: Arg ( r , j , c ) says that for relation r the jth position has class c; Arity ( r , k ) sets the arity of r; Mentions ( φ , · ) lists the symbols used in constraint φ ; and WellFormed ( φ ) encodes syntactic well-formedness of φ as a formula over the typed graph signature.
Let Ctr 2 enforce:
  • For each r there is a unique k with Arity ( r , k ) .
  • For each 1 j k , there is a unique c with Arg ( r , j , c ) .
  • Each φ with WellFormed ( φ ) only mentions declared classes/relations and is type-correct (standard first-order typing conditions).
Define
MMM = ( Cls 2 , Rel 2 , ar 2 , Ctr 2 ) .
An encoding  Enc maps any metamodel MM = ( Cls , Rel , ar , Ctr ) to a level-1 model Enc ( MM ) over MMM by:
  • create a node v_c of type Class for each c ∈ Cls;
  • a node v_r of type Rel for each r ∈ Rel;
  • a node v_k of type Nat for each k in the image of ar;
  • edges witnessing Arity(v_r, v_k) and Arg(v_r, j, v_{c_j}) for each position j;
  • nodes v_φ of type Frm for each φ ∈ Ctr, with Mentions edges.
One checks that Enc ( MM ) MMM .
Definition 2.63 
(The lifting operator). The lift of a metamodel MM is the pair
Lift(MM) := MM ⊨_2 MMM,
where the right component is instantiated by Enc ( MM ) . Inductively, for k 1 define
Lift^{(k)}(MM) := MM ⊨_2 MMM ⊨_3 ⋯ ⊨_{k+1} MMM  (k copies of MMM).
Theorem 2.64 
(Generalization of MetaModel). Every metamodel MM extends canonically to an iterated metamodel of any finite order n 1 :
Lift^{(n−1)}(MM) = (L_0, L_1, …, L_n) with L_1 = MM, L_2 = ⋯ = L_n = MMM.
Conversely, for any iterated metamodel C n , the truncation ( L 0 , L 1 ) is a metamodel with its usual conformance relation.
Proof. (Existence) We showed above that for any MM the encoding Enc ( MM ) conforms to MMM , i.e. MM 2 MMM . Because MMM is level-agnostic (its classes/relations describe any metamodel in typed-graph form), the same schema can serve as a meta-metamodel at all higher levels. Therefore the tower
L_0 ⊨_1 L_1 (= MM) ⊨_2 L_2 (= MMM) ⊨_3 ⋯ ⊨_n L_n (= MMM)
is well-defined for any n 1 .
(Converse) If C n = ( L 0 , , L n ) is an iterated chain, by Definition 2.61 the pair ( L 0 , L 1 ) already satisfies the base metamodel/conformance conditions. Hence truncation recovers an ordinary metamodeling instance.    □
Definition 2.65 
(Iterated-MetaStructure for metamodeling). An Iterated-MetaStructure (IMS) of height n consists of universes { U k } k = 0 n and conformance relations { k U k 1 × U k } k = 1 n such that:
(1)
U 0 is the class of level-0 models; U k ( k 1 ) is the class of level-k metamodels.
(2)
(Locality) k is defined by the typed-graph semantics at level k.
(3)
(Chainability) For any x U k 1 , y U k , z U k + 1 , if x k y and y k + 1 z , then x is well-typed at level k + 1 via z (in particular, z validates the constraints that ensure y itself is a well-formed type system for x).
Theorem 2.66 
(Representation of Iterated MetaModel inside IMS). For each n 1 , the class of order-n iterated metamodels forms an IMS of height n with U 0 the level-0 models, U 1 the metamodels, and U k = { MMM } for all k 2 , and with k the usual typed-graph conformance.
Proof. (Well-defined universes) U 0 and U 1 are as in the base definition. By construction, MMM is a metamodel whose instances are (encodings of) metamodels; hence it can serve as the unique element of U k for k 2 .
(Locality) Each k is the satisfaction of first-order constraints over typed graphs at level k, exactly as in the base case.
(Chainability) Suppose L k 1 k L k and L k k + 1 L k + 1 . When k 1 , L k + 1 = MMM , whose constraints enforce that L k itself is a well-formed metamodel (unique arities, well-typed constraints, etc.). Therefore L k 1 is typed against a well-formed  L k , and so the composition of typings is consistent. When k = 0 this reduces to the ordinary “model typed by metamodel typed by meta-metamodel” situation, which holds by the same argument. Thus Definition 2.65(iii) is satisfied.
All IMS axioms hold, so the representation is established.    □

2.9. Iterated Metaoptimization (Optimization of ... of Optimizations)

Optimization is the mathematical process of selecting the best solution from feasible alternatives by minimizing or maximizing an objective function under given constraints (cf.[74,75,76,77]). Metaoptimization is the process of using one optimization algorithm to tune, configure, or control another optimization method’s parameters, improving efficiency, adaptability, and performance across diverse problem instances [78,79,80,81]. Iterated Metaoptimization recursively applies metaoptimization to metaoptimizers themselves, creating hierarchical layers that generalize tuning, automate optimizer design, and establish an Iterated-MetaStructure for adaptive optimization ecosystems.
Definition 2.68 
(Meta-optimization as bilevel learning of optimizers). Let Λ be a set of optimizer hyperparameters, F a distribution over base objectives f : X R { + } , and A λ an optimization routine that, given f, returns a candidate x λ ( f ) X . Given performance and cost functionals J ( f , x ) and C ( λ ) , the meta-optimization problem is
min_{λ ∈ Λ}  E_{f∼F}[ J(f, x_λ(f)) ] + ρ C(λ)   subject to x_λ(f) ∈ Out(A_λ; f),
where ρ 0 and Out ( A λ ; f ) is the (possibly set-valued) output of the inner optimizer. Thus meta-optimization chooses λ to optimize the performance of an optimizer across problem instances.
Example 2.68 
(Metaoptimization in everyday ML: tuning an optimizer for faster model training). A retail company trains a next–day demand forecaster each evening. The inner optimizer is Adam with hyperparameters λ = ( η , β 1 , β 2 , λ w ) (learning rate, momentum terms, weight decay). For a given day’s objective f (training loss on that day’s data), the inner routine A λ returns weights x λ ( f ) after a fixed epoch budget. The MLOps team runs a Bayesian optimizer over λ to minimize the expected next–day validation loss plus training cost:
min_{λ}  E_{f∼F}[ J(f, x_λ(f)) ] + ρ C(λ),
where J is validation loss after training and C ( λ ) captures wall–clock or GPU time. The resulting λ (e.g., slightly smaller η and nonzero λ w ) is then fixed for the nightly runs. This is metaoptimization: choosing the optimizer’s hyperparameters so that the optimizer itself performs best across the company’s rolling stream of training problems.
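A minimal sketch of this loop, under simplifying assumptions: the base objectives are toy quadratics rather than training losses, the inner optimizer A_λ is plain gradient descent with learning rate λ, and the outer search is an exhaustive grid rather than Bayesian optimization. All names and values are illustrative.

```python
import random

def inner_optimizer(lr, steps, grad):
    """A_lambda: plain gradient descent from x0 = 0 on one base objective."""
    x = 0.0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def meta_objective(lr, problems, rho=0.01, steps=50):
    """E_f[J(f, x_lambda(f))] + rho * C(lambda): mean final loss plus cost."""
    total = 0.0
    for a in problems:  # each base objective: f(x) = (x - a)^2, minimized at a
        x = inner_optimizer(lr, steps, lambda x: 2.0 * (x - a))
        total += (x - a) ** 2
    return total / len(problems) + rho * steps  # cost C: the fixed step budget

random.seed(0)
problems = [random.uniform(-5, 5) for _ in range(20)]  # samples from "F"
candidates = [0.001, 0.01, 0.1, 0.5, 0.9]              # the outer search grid
best_lr = min(candidates, key=lambda lr: meta_objective(lr, problems))
```

For f(x) = (x − a)², each step contracts the error by a factor (1 − 2λ), so the outer search selects λ = 0.5, which reaches the minimizer in one step: λ is chosen to make the optimizer perform best across the whole problem stream, not on any single instance.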
Definition 2.69 
(Iterated metaoptimization of order n). Fix n 1 . For k = 1 , , n let Λ k be a (measurable) hyperparameter space at level k, with level-1 associated to the base optimizer family  { A λ 1 } λ 1 Λ 1 . For k 2 let a selector (or compiler) be a measurable map
g ( k ) : Λ k Λ k 1 ,
encoding how a level-k meta-choice induces a level- ( k 1 ) hyperparameter. Given costs C k : Λ k R 0 and weights ρ k 0 , the order-n iterated metaoptimization problem is
min_{λ_n ∈ Λ_n}  E_{f∼F}[ J(f, x_{A_{Γ^{(1)}(λ_n)}}(f)) ]  (expected performance of the effective optimizer)  +  Σ_{k=1}^{n} ρ_k C_k(Γ^{(k)}(λ_n))  (composed cost),
where the compiled hyperparameters are defined by
Γ^{(1)}(λ_n) := g^{(2)} ∘ g^{(3)} ∘ ⋯ ∘ g^{(n)}(λ_n) ∈ Λ_1,   Γ^{(k)}(λ_n) := g^{(k+1)} ∘ ⋯ ∘ g^{(n)}(λ_n) ∈ Λ_k
(with the convention that an empty composition is the identity, so Γ^{(n)}(λ_n) = λ_n), and the effective optimizer at order n is
Ã^{(n)}_{λ_n} := A_{Γ^{(1)}(λ_n)}.
We call ( n ) an iterated (or multi-level) meta-optimization; for n = 1 , ( 1 ) reduces to ordinary meta-optimization.
Example 2.70 
(Iterated metaoptimization in practice: tuning the tuner that tunes training). A cloud analytics platform serves many internal teams. There are three levels:
  • Level 1 (base optimizer): for each task f, training uses SGD/Adam with hyperparameters λ 1 Λ 1 ; the routine A λ 1 returns x λ 1 ( f ) .
  • Level 2 (tuner configuration): a selector g ( 2 ) : Λ 2 Λ 1 maps a tuner setting  λ 2 (e.g., BO kernel type, acquisition function, Hyperband parameter η , max trials) into a concrete search space and schedule for Level 1. Running the tuner with g ( 2 ) ( λ 2 ) yields the chosen λ 1 .
  • Level 3 (portfolio policy): a policy g ( 3 ) : Λ 3 Λ 2 chooses which tuner (portfolio of BO, Hyperband, population–based training) and its exploration budget for a new project based on meta–features (data size, signal–to–noise, latency SLA).
When a new forecasting project arrives, the platform picks λ 3 (risk/budget preferences). This compiles to
Γ^{(1)}(λ_3) = g^{(2)}(g^{(3)}(λ_3)) ∈ Λ_1,
which is the effective Level–1 hyperparameter used by the training optimizer; the system minimizes
min_{λ_3 ∈ Λ_3}  E_{f∼F}[ J(f, x_{A_{Γ^{(1)}(λ_3)}}(f)) ] + ρ_2 C_2(g^{(3)}(λ_3)) + ρ_3 C_3(λ_3).
In everyday terms: the platform tunes the tuner (Level 2) and also tunes the policy that chooses and budgets tuners (Level 3). A global change (e.g., stricter latency SLA) is made once at Level 3 and propagates downward, automatically altering the search strategy (Level 2) and, in turn, the training optimizer settings (Level 1) used by each team.
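The compilation Γ^{(1)} = g^{(2)} ∘ g^{(3)} can be sketched as follows. The policy fields, tuner settings, and hyperparameter values are hypothetical placeholders, and g^{(2)} merely stands in for actually running a tuner.

```python
# Sketch of the compilation chain: level-3 policy -> level-2 tuner config
# -> level-1 optimizer hyperparameters. All names/values are illustrative.

def g3(policy):
    """g^(3): portfolio policy -> tuner configuration (Lambda_3 -> Lambda_2)."""
    if policy["latency_sla"] == "strict":
        return {"tuner": "hyperband", "max_trials": 20}
    return {"tuner": "bayes_opt", "max_trials": 100}

def g2(tuner_cfg):
    """g^(2): tuner configuration -> concrete level-1 hyperparameters."""
    # Stand-in for actually running the tuner: a small trial budget is taken
    # to yield a more conservative learning rate.
    lr = 1e-3 if tuner_cfg["max_trials"] >= 50 else 1e-4
    return {"lr": lr, "weight_decay": 1e-5}

def compile_gamma1(lambda3):
    """Gamma^(1)(lambda_3) = g^(2)(g^(3)(lambda_3)): effective level-1 choice."""
    return g2(g3(lambda3))

hp = compile_gamma1({"latency_sla": "strict"})
# A single change at level 3 (the SLA) propagates down through both selectors
# and alters the training optimizer's settings without touching level 1 or 2.
```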
Definition 2.69 already presents the n-level problem as a single optimization over  Λ n , by compiling upper-level choices down to a level-1 optimizer through Γ ( 1 ) and accounting for all costs via the composed term.
Theorem 2.71 
(Generalization and equivalence by compilation). Let n 1 and suppose all Λ k are compact metric spaces, the selectors g ( k ) are continuous, C k are lower-semicontinuous, and f J f , x A λ 1 ( f ) is integrably bounded with the expectation E f F continuous in λ 1 . Then:
(1)
( n = 1 boundary case) For n = 1 , ( n ) coincides with the standard meta-optimization problem.
(2)
(Flattening) Any n-level meta-optimization with hierarchical decision variables ( λ 1 , , λ n ) constrained by λ k 1 = g ( k ) ( λ k ) is equivalent to ( n ) , i.e. to a single-level problem over λ n with objective
Φ_n(λ_n) = E_{f∼F}[ J(f, x_{Ã^{(n)}_{λ_n}}(f)) ] + Σ_{k=1}^{n} ρ_k C_k(Γ^{(k)}(λ_n)).
In particular, minimizers correspond under the bijection λ n Γ ( 1 ) ( λ n ) , , Γ ( n ) ( λ n ) .
(3)
(Existence) Under the stated compactness/continuity assumptions, ( n ) admits a minimizer.
Proof. (1) Immediate from the definitions: for n = 1 the composition Γ ( 1 ) is the identity on Λ 1 and the cost reduces to ρ 1 C 1 .
(2) Consider the constrained hierarchical formulation
min λ 1 , , λ n E f J ( f , x A λ 1 ( f ) ) + k = 1 n ρ k C k ( λ k ) s . t . λ k 1 = g ( k ) ( λ k ) , k = 2 , , n .
Successively eliminating λ n 1 , , λ 1 by substitution yields the unconstrained problem
min λ n E f J ( f , x A Γ ( 1 ) ( λ n ) ( f ) ) + k = 1 n ρ k C k Γ ( k ) ( λ n ) ,
which is exactly ( n ) . The map λ n ( λ 1 , , λ n ) = ( Γ ( 1 ) ( λ n ) , , Γ ( n ) ( λ n ) ) establishes a one-to-one correspondence between feasible tuples and choices of λ n , and the objective values are identical by construction, hence minimizers correspond.
(3) By compactness of Λ n and continuity of each g ( k ) , each composed map Γ ( k ) is continuous. Lower-semicontinuity of k ρ k C k Γ ( k ) follows from the corresponding property of the C k . By the dominated convergence (or the assumed continuity of the expectation in λ 1 ) and continuity of Γ ( 1 ) , the performance term is continuous in λ n . Thus Φ n is lower-semicontinuous on a compact set and attains a minimum.    □
Definition 2.72 
(Iterated-MetaStructure (IMS) for metaoptimization). An IMS of height n for metaoptimization consists of universes { U k } k = 0 n and relations { k } k = 1 n with:
(1)
U 0 = set of base objectives with their distributions ( F , f , J ) ;
(2)
U 1 = set of base optimizer families { A λ 1 } λ 1 Λ 1 ;
(3)
For k 2 , U k = set of selectors g ( k ) : Λ k Λ k 1 ;
(4)
The relation X 1 Y means: Y U 1 acts on X U 0 (produces f x A λ 1 ( f ) ). For k 2 , Y k Z (with Y U k 1 , Z U k ) means: Zcompiles a level- ( k 1 ) choice from a level-k choice, i.e. applies g ( k ) .
Theorem 2.73 
(Representation in IMS). For any order n, the data of an iterated metaoptimization instance
F , { A λ 1 } , { g ( k ) } k = 2 n , { C k , ρ k } k = 1 n
defines an IMS of height n as in Definition 2.72, and the compiled effective optimizer is the result of the iterated relation
F ⊨_1 {A_{λ_1}} ⊨_2 g^{(2)} ⊨_3 ⋯ ⊨_n g^{(n)},
namely A ˜ λ n ( n ) = A Γ ( 1 ) ( λ n ) . Moreover, the standard metaoptimization problem is precisely the height-1 truncation of this IMS.
Proof. 
By construction, 1 captures the action of a base optimizer on objectives. For k 2 , k is the (deterministic) compilation step λ k 1 = g ( k ) ( λ k ) . Composing these relations yields Γ ( 1 ) = g ( 2 ) g ( n ) , hence the effective optimizer A Γ ( 1 ) ( λ n ) . Truncating at level 1 removes the compilers and recovers the usual metaoptimization.    □

2.10. Iterated Metaorganization (Organization of ... of Organizations)

A metaorganization is an organizational form where members are themselves organizations, coordinating collective action, governance, and decision-making across institutional boundaries [82,83,84,85,86,87]. Iterated Metaorganization recursively structures organizations of organizations, creating multi-level governance hierarchies that manage interactions, coordination, and adaptation across successive organizational layers.
Definition 2.74 
(Meta-organization (MO)). Let Org be a set of organizations. Each o Org has an interface action space  A o . A meta-organization is a pair
MO = ( O , D ) , O Org , D : o O A o A MO ,
where D is a governance/aggregation rule mapping member-organizations’ action profile to a collective action in A MO . (Preference, feasibility, or incentive constraints on D may be imposed but are not needed for the structural results below.)
Example 2.75 
(Metaorganization in practice: a city climate coalition). A metropolitan Climate Action Council is formed whose members are organizations: city transit authority, utilities, chambers of commerce, and environmental NGOs. Each member o chooses an interface action a o A o such as an annual emission reduction target and a budget request. The council’s governance rule
D : o O A o A MO
solves a transparent allocation with commitment: given reported requests r = ( r o ) o and utilities u o , pick
D ( r ) : = arg max x R 0 O o O u o ( x o ) subject to o x o B , x o r o ,
and publish a joint emissions plan plus funding vector x. Members remain independent organizations, but coordination and accountability occur at the meta level through D.
Definition 2.76 
(Iterated metaorganization of order n). Fix a rooted finite directed tree T = ( V , E ) of depth n . Assign to each node v V an action space A v . Leaves L V are atomic organizations (no children) and have no internal rule. Every internal node v (with children ch ( v ) ) is a meta-organization node with governance rule
D v : u ch ( v ) A u A v .
The triple ( T , { A v } v V , { D v } v V L ) is an iterated metaorganization (IMO). Its root output is an action in the root space A root .
Example 2.77 
(Iterated metaorganization: layered disaster response governance). Local NGOs and municipal agencies in several coastal cities form city clusters (first layer). Each cluster v aggregates its members’ proposed resource deployments a u A u via a governance rule
D v : u ch ( v ) A u A v ,
returning a cluster deployment vector (boats, generators, medical kits). Clusters then form a regional council (second layer) with rule D_reg : ∏_v A_v → A_reg that reconciles intercluster transport and warehouse capacities. Finally, the national task force (third layer) applies D_nat : A_reg → A_nat to align with federal assets and no-fly zones. The directed tree of organizations induces a root output in A_nat that is an organization of organizations of organizations, enabling rapid, multi-level coordination while preserving local autonomy.
Definition 2.78 (Behavior map (flattened decision rule)). For an IMO as in Definition 2.76, define recursively the behavior map  B v : L ( v ) A A v for each node v, where L ( v ) are the leaves in the subtree rooted at v:
B_v := id_{A_v} if v ∈ L;   B_v := D_v ∘ (∏_{u ∈ ch(v)} B_u) if v ∉ L,
where u ch ( v ) B u is the canonical product map L ( v ) A u ch ( v ) A u . The flattened (single-level) decision rule of the IMO is B root .
Example 2.79 
(Behavior map: composing two-stage budget aggregation). Consider an iterated metaorganization with leaves L = {A, B, C} holding local budget proposals a_A, a_B, a_C ∈ ℝ_{≥0}. Node v_1 governs sector partners A and B by averaging their proposals
D_{v_1}(a_A, a_B) = (1/2)(a_A + a_B) ∈ A_{v_1}.
The root node r (a national board) then combines the sector aggregate with C using a policy weight α ∈ (0, 1),
D_r(x, a_C) = α x + (1 − α) a_C ∈ A_r.
The behavior map B_r : ℝ_{≥0}^3 → ℝ_{≥0} that flattens the tree is
B_r(a_A, a_B, a_C) = D_r(D_{v_1}(a_A, a_B), a_C) = α (a_A + a_B)/2 + (1 − α) a_C,
a single-level decision rule equivalent to the two-stage governance. Stakeholders can thus audit how leaf actions deterministically propagate to the root decision.
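The two-stage budget aggregation can be checked mechanically. The sketch below (with an assumed policy weight α = 0.6) evaluates the tree hierarchically and confirms it agrees with the flattened closed form.

```python
# Behavior map for the two-stage budget example; the weight ALPHA is assumed.

ALPHA = 0.6

def D_v1(a_A, a_B):
    """Sector node v1: average the two member proposals."""
    return (a_A + a_B) / 2.0

def D_r(x, a_C):
    """Root node r: convex combination of sector aggregate and C's proposal."""
    return ALPHA * x + (1 - ALPHA) * a_C

def B_r(a_A, a_B, a_C):
    """Flattened rule B_r = D_r o (D_v1 x id): leaf actions straight to root."""
    return D_r(D_v1(a_A, a_B), a_C)

def closed_form(a_A, a_B, a_C):
    """The single-level formula from the example."""
    return ALPHA * (a_A + a_B) / 2.0 + (1 - ALPHA) * a_C
```

Evaluating B_r(10, 20, 30) walks the tree bottom-up (sector average 15, then the root blend) and matches closed_form up to floating-point rounding, which is exactly the flattening claim of Theorem 2.80 in miniature.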
Theorem 2.80 
(IMO generalizes MO and flattens to a single MO). Let ( T , { A v } , { D v } ) be an IMO with root r and leaf set L. Then the pair
O : = L , D * : = B r : L A A r
is a (single-level) meta-organization whose collective action coincides with that of the IMO for every leaf action profile. Moreover, if depth ( T ) = 1 , the IMO reduces to the usual MO.
Proof. 
By definition of B r , for any profile ( a ) L L A , the unique action produced at the root by the hierarchical evaluation of the tree is
B r ( a ) L = D r B u 1 ( ( a ) L ( u 1 ) ) , , B u m ( ( a ) L ( u m ) ) ,
which, by the recursive definition of each B u j , is precisely the result of applying every internal D v bottom-up. Hence the IMO is behaviorally equivalent to the MO with members L and rule D * = B r . If the depth is 1, the root’s children are leaves, so B r = D r u ch ( r ) id = D r , i.e. the definition of an ordinary MO.    □
Definition 2.81 
(Compositional properties). A predicate P on decision rules is compositional if for any arities compatible with product composition,
P ( D v ) for all internal v P ( B r ) .
Examples: continuity (with product topologies), coordinatewise monotonicity, anonymity (when product orders/labels align), and Lipschitz boundedness, all of which are preserved under product and composition.
Proposition 2.82 
(Property preservation). If each D v is continuous (resp. coordinatewise monotone, resp. L–Lipschitz), then so is B r .
Proof. 
Continuity and monotonicity are preserved under finite products and composition. For Lipschitzness, if each D v has constant L v w.r.t. sup-product norms, then B r is Lipschitz with constant bounded by the product (or appropriate composition bound) of the L v along each path; taking the maximum over finitely many root-to-leaf paths yields a global constant.    □
Definition 2.83 
(Iterated-MetaStructure (IMS) for metaorganization). Given an IMO of depth n , define levels U k to be the set of nodes at depth k (measured from leaves, so leaves have k = 0 ). For k 1 , define a binary relation k between u ch ( v ) A u and v U k by
( a u ) u ch ( v ) k v D v ( a u ) u ch ( v ) is the output at v .
The iterated action at the root is obtained by composing the relations k along the unique rooted paths (i.e. relational composition mirrors functional composition).
Theorem 2.84 
(Representation in IMS). For any IMO of depth n , the IMS in Definition 2.83 represents its evaluation in the sense that the iterated relational composition from leaves to the root yields the graph of the behavior map B r . Consequently, ordinary meta-organization corresponds to the height-1 truncation of the IMS.
Proof. 
Proceed by induction on the height h of the subtree. For h = 1 (a single internal node v), 1 is exactly the graph of D v , hence equals the graph of B v . Assume the claim holds for all subtrees of height < h . Let v have children u 1 , , u m , each with height < h . By the induction hypothesis, the relational composition from the leaves of each subtree to u j equals the graph of B u j . Composing these m relations (product) with 1 at v yields the graph of D v j B u j , which is the graph of B v by definition. Applying this at the root gives the statement. The height-1 truncation removes all intermediate compositions and leaves the single relation given by the root aggregator, the MO case.    □

2.11. Iterated Metaprogramming (Programming of ... of Programmings)

A metaprogram is a program that manipulates or generates other programs, treating code as data to transform, optimize, or produce executable structures (cf.[88,89]). An Iterated Metaprogram applies metaprogramming recursively, enabling programs that generate or transform metaprograms themselves, creating layered abstraction through Iterated-MetaStructure.
Definition 2.85 
(Metaprogram). Fix a base language L 0 with abstract syntax AST 0 and semantics · 0 : AST 0 Beh . A (level–1) metaprogram is a computable transformer P : AST 0 AST 0 (or, dually, a generator G : Σ * AST 0 ), typically expressed in a metalanguage L 1 equipped with quoting/splicing so that the denotation p 1 of a metaterm p AST 1 satisfies p 1 = P . A semantic contract Φ specifies correctness, e.g. P ( e ) 0 = Φ e 0 for all e AST 0 .
Example 2.86 
(Metaprogramming in everyday work: generating a REST client from an API spec). A developer needs a typed client library for a shipping service. Instead of hand-writing code, they run a metaprogram (a code generator) that takes a machine-readable OpenAPI file shipping.yaml and produces executable artifacts:
  • Inputs (data about the program): endpoints, request/response schemas, auth requirements.
  • Metaprogram action: parse the spec, expand templates, and emit code.
  • Outputs (programs): ShippingClient.ts with methods createLabel, track, etc.; type definitions; unit tests; and API docs.
Here the generator is a program that writes other programs. When the API changes, re-running the metaprogram regenerates consistent, type-safe client code and tests in seconds.
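A toy sketch of such a generator follows. It is not a real OpenAPI toolchain: the spec format, templates, and the ShippingClient surface are invented for illustration. The point is only that the generator treats code as data, producing program text that is then executed.

```python
# A toy metaprogram: map an API spec (data) to program text (a program).
# Spec format and generated client are illustrative, not real OpenAPI.

CLIENT_TEMPLATE = """class {name}Client:
    def __init__(self, base_url):
        self.base_url = base_url
{methods}"""

METHOD_TEMPLATE = """    def {op}(self, payload):
        return ("{verb}", self.base_url + "{path}", payload)
"""

def generate_client(spec):
    """The metaprogram G: spec -> source code of a client class."""
    methods = "".join(
        METHOD_TEMPLATE.format(op=ep["op"], verb=ep["verb"], path=ep["path"])
        for ep in spec["endpoints"]
    )
    return CLIENT_TEMPLATE.format(name=spec["name"], methods=methods)

spec = {"name": "Shipping",
        "endpoints": [{"op": "create_label", "verb": "POST", "path": "/labels"},
                      {"op": "track", "verb": "GET", "path": "/track"}]}

source = generate_client(spec)   # code as data: a string of Python source
namespace = {}
exec(source, namespace)          # turn the generated text into a live class
client = namespace["ShippingClient"]("https://api.example.com")
```

Re-running generate_client after the spec changes regenerates a consistent client, which is the "regenerate in seconds" workflow described above.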
Definition 2.87 
(Tower of metalanguages and iterated metaprogram). For n 1 let ( L k ) k = 0 n be a stratified tower of languages with syntaxes AST k and metalevel semantics
· k : AST k AST k 1 AST k 1 , k = 1 , , n ,
so each p k AST k denotes a level–k transformer P k : = p k k acting on AST k 1 . An iterated metaprogram of order n is a tuple
P = ( p n , p n 1 , , p 1 ) AST n × × AST 1 ,
with pipeline transformer
P := P_1 ∘ P_2 ∘ ⋯ ∘ P_n : AST_0 → AST_0.
Given e ∈ AST_0, its iterated metaexpansion is e^{(n)} := P(e) = P_1(P_2(⋯ P_n(e)⋯)).
Definition 2.88 
(Contracts and compositional soundness). Suppose each P_k satisfies a contract Φ_k on behaviors: ⟦P_k(x)⟧_{k−1} = Φ_k(⟦x⟧_{k−1}) for all x ∈ AST_{k−1}. Say that (Φ_k) is closed under composition if Φ_1 ∘ ⋯ ∘ Φ_n is well-typed and total on Beh.
Example 2.89 
(Iterated metaprogramming in a team: a generator that writes generators). An engineering organization standardizes service scaffolding (logging, auth, CI, docs). They maintain:
(1)
Level 2 (meta-generator) OrgScaffolder: reads a high-level policy file (org_policy.yaml) and emits a service-specific code generator CrudGen configured with the organization’s conventions (naming, lint rules, CI workflows, Dockerfile templates).
(2)
Level 1 (generator) CrudGen: takes a product team’s domain schema (orders.schema.json) and generates an orders-service codebase: controllers, ORM models, migrations, OpenAPI, tests, and a GitHub Actions pipeline.
(3)
Level 0 (programs): the concrete microservice source code that ships to production.
Workflow in practice:
  • Platform team updates org_policy.yaml (e.g., switch to OpenTelemetry).
  • Run OrgScaffolder ⇒ regenerates CrudGen to include tracing hooks.
  • Product team runs the new CrudGen on their schema ⇒ a fresh service codebase with tracing, auth, CI, and docs appears automatically.
This is iterated metaprogramming: a program that creates another metaprogram, which in turn creates the final programs—allowing org-wide changes to propagate by regenerating at two levels.
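A compressed sketch of the two levels follows; the policy format and the OrgScaffolder/CrudGen behaviors are invented placeholders for the real tools named above.

```python
# Two-level metaprogramming sketch: a meta-generator (level 2) builds a
# generator (level 1), which emits final program text (level 0).
# Policy and schema contents are illustrative placeholders.

def org_scaffolder(policy):
    """Level 2: produce a level-1 generator configured by org-wide policy."""
    header = "# tracing: on\n" if policy.get("tracing") else ""

    def crud_gen(schema):
        """Level 1: emit a (toy) accessor module for one domain schema."""
        funcs = "".join(
            f"def get_{field}(record):\n    return record['{field}']\n"
            for field in schema["fields"]
        )
        return header + funcs  # level 0: concrete program text

    return crud_gen

gen = org_scaffolder({"tracing": True})    # org change regenerates the generator
code = gen({"fields": ["id", "status"]})   # the generator emits the program
namespace = {}
exec(code, namespace)                      # the level-0 program is runnable
```

Flipping the tracing flag in the policy and re-running org_scaffolder changes every service that is subsequently regenerated, which is the two-level propagation described in the workflow.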
Theorem 2.90 
(Generalization and flattening). Let P = ( p n , , p 1 ) be an order n iterated metaprogram.
(i) Generalization.For n = 1 we recover ordinary metaprogramming: P = P 1 .
(ii) Compositional soundness.If each P k satisfies Φ k and ( Φ k ) is closed under composition, then for all e AST 0 ,
e ( n ) 0 = P ( e ) 0 = ( Φ 1 Φ n ) e 0 .
(iii) Single–stage realization. Assume the class of level–1 denotable transformers C 1 : = { p 1 : p AST 1 } is closed under composition. Then there exists p * AST 1 with p * 1 = P , i.e. the whole n–stage pipeline can be flattened to a single level–1 metaprogram without changing its effect on AST 0 .
Proof. (i) is immediate from the definition with n = 1 .
(ii) By definition of e ( n ) and the contracts,
e ( n ) 0 = P 1 ( P 2 ( P n ( e ) ) ) 0 = Φ 1 P 2 ( P n ( e ) ) 0 .
Iterating the same reasoning for k = 2 , , n yields the stated composition ( Φ 1 Φ n ) ( e 0 ) .
(iii) Since C 1 is closed under composition and P k C 1 for each k (as each P k is a computable transformer on AST 0 expressible at level 1 by assumption), we have P C 1 . Hence there exists p * AST 1 with p * 1 = P . □
Definition 2.91 
(Iterated-MetaStructure (IMS) presentation). Define levels U k : = AST k ( k 1 ) and U 0 : = AST 0 . For k 1 introduce the binary relation
⊨_k ⊆ U_k × (U_{k−1} × U_{k−1}),   (p_k, x) ⊨_k y ⟺ y = ⟦p_k⟧_k(x).
Given P = ( p n , , p 1 ) , its IMS trace on e U 0 is the unique e ( n ) U 0 such that
( p n , e ) n e ( n 1 ) , ( p n 1 , e ( n 1 ) ) n 1 e ( n 2 ) , , ( p 1 , e ( 1 ) ) 1 e ( 0 ) = e ( n ) .
Equivalently, relational composition ( 1 n ) is the graph of P .
Theorem 2.92 
(Correct IMS representation). Let P be as above. Then for all e AST 0 , the iterated relational composition in the Definition yields exactly e ( n ) = P ( e ) . Moreover, the truncation to n = 1 recovers ordinary metaprogramming.
Proof. 
By construction, ( p k , x ) k y iff y = P k ( x ) , hence the composition ( 1 n ) applied to e yields P 1 ( P 2 ( P n ( e ) ) ) = P ( e ) . The case n = 1 is tautological. □

2.12. Iterated MetaSystem (System of ... of Systems)

A MetaSystem is a higher-level construct where elements are systems themselves, coordinating, integrating, or supervising multiple subsystems into a unified framework [90]. An Iterated MetaSystem recursively organizes meta-systems over meta-systems, forming hierarchical layers that generalize system integration, coordination, and adaptation through Iterated-MetaStructure.
Definition 2.93 
(Systems and configurations). Let Sys be a class of (discrete–time) dynamical systems S = ( X , U , ξ ) with state space X, input space U, and transition map ξ : X × U X . For k 0 let Iface k be a set of k–ary interfaces (wiring/coordination blueprints). Define the configuration space
Conf : = k 0 Sys k × Iface k .
A realizer is a map Real : Conf Sys that composes a configuration into a concrete system (e.g. by interconnection along the interface).
Definition 2.94 
(Meta–operator and MetaSystem). A meta–operator is a function Ψ : Conf Conf acting on systems as primitives (e.g. selection, composition, supervision, adaptation). A MetaSystem is a pair ( S , Ψ ) with S Sys and a meta–operator Ψ such that Ψ ( S k × Iface k ) Conf for all k. Its realized system on input configuration c Conf is S Ψ ( c ) : = Real ( Ψ ( c ) ) Sys . Intuitively, a MetaSystem is a “system about systems”: its state/data are systems, and its dynamics are Ψ over them.
Definition 2.95 
(Iterated MetaSystem). Fix a tower of n 1 meta–operators Ψ = ( Ψ n , Ψ n 1 , , Ψ 1 ) with each Ψ j : Conf Conf . The Iterated MetaSystem of order n is
IMS_n := (S, Ψ, Real), with pipeline P := Ψ_1 ∘ Ψ_2 ∘ ⋯ ∘ Ψ_n : Conf → Conf.
Given a configuration c Conf , its n–step meta–evolution is c ( n ) : = P ( c ) and the realized system is S Ψ ( c ) : = Real c ( n ) Sys .
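A minimal sketch of Definitions 2.93–2.95 under toy assumptions: a configuration is simply a list of step functions, Real interconnects them sequentially into one transition map, and two illustrative meta-operators play the roles of the Ψ_j. Everything concrete here is invented for illustration.

```python
# Toy metasystem: meta-operators act on configurations (data whose elements
# are systems); a realizer composes a configuration into a concrete system.

def realize(conf):
    """Real: interconnect the listed step functions into one transition map."""
    def xi(state):
        for step in conf["systems"]:
            state = step(state)
        return state
    return xi

def psi_supervise(conf):
    """Psi_1: a meta-operator appending a saturation (safety) subsystem."""
    return {"systems": conf["systems"] + [lambda s: max(min(s, 100), -100)]}

def psi_duplicate(conf):
    """Psi_2: a meta-operator repeating the pipeline (e.g., two sweeps)."""
    return {"systems": conf["systems"] * 2}

def pipeline(conf, operators):
    """P = Psi_1 o Psi_2 o ... o Psi_n applied to a seed configuration."""
    for psi in reversed(operators):   # Psi_n acts first, Psi_1 last
        conf = psi(conf)
    return conf

seed = {"systems": [lambda s: s * 3]}
system = realize(pipeline(seed, [psi_supervise, psi_duplicate]))
```

Here psi_duplicate acts first (yielding two ×3 sweeps) and psi_supervise then bounds the result, so the realized system computes a clamped ninefold gain; the saturation subsystem is a toy instance of a contract-respecting invariant (boundedness).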
Definition 2.96 
(Contracts and invariants). Let I be a set of system invariants (safety, stability, etc.) and Inv Sys × I a satisfaction relation. A meta–operator Ψ  respects an invariant transformer Φ : I I if for all c Conf and ι I ,
Real ( c ) , ι Inv Real ( Ψ ( c ) ) , Φ ( ι ) Inv .
We call Φ the contract of Ψ . A list ( Φ n , , Φ 1 ) is closed under composition if Φ 1 Φ n is well defined on I .
Theorem 2.97 
(Generalization, compositional soundness, and flattening). Let IMS n = ( S , ( Ψ n , , Ψ 1 ) , Real ) .
(i) Generalization.For n = 1 we recover an ordinary MetaSystem: S Ψ ( c ) = Real ( Ψ 1 ( c ) ) .
(ii) Compositional soundness.Suppose each Ψ j respects a contract Φ j , and ( Φ n , , Φ 1 ) is closed under composition. Then for all c Conf and ι I ,
Real ( c ) , ι Inv S Ψ ( c ) , ( Φ 1 Φ n ) ( ι ) Inv .
(iii) Flattening (single–stage realization).Assume the set of meta–operators C : = { Ψ : Conf Conf } used in the platform is closed under composition. Then there exists Ψ * C such that
Real Ψ * = Real Ψ 1 Ψ n , i . e . S Ψ ( c ) = Real Ψ * ( c )
for all c Conf . Hence any iterated metasystem is equivalent, extensionally, to a single meta–operator applied once.
Proof. (i) is immediate from the definitions with P = Ψ 1 .
(ii) Let c ( 0 ) : = c and c ( j ) : = Ψ j ( c ( j 1 ) ) for j = 1 , , n . By the definition of respect, from ( Real ( c ( 0 ) ) , ι ) Inv we get ( Real ( c ( 1 ) ) , Φ 1 ( ι ) ) Inv , then ( Real ( c ( 2 ) ) , Φ 2 Φ 1 ( ι ) ) Inv , and so on. By induction over j, after n steps we obtain ( Real ( c ( n ) ) , Φ 1 Φ n ( ι ) ) Inv . Since S Ψ ( c ) = Real ( c ( n ) ) , the claim follows.
(iii) Closure of C under composition implies Ψ * : = Ψ 1 Ψ n C . Therefore, for all c, Real ( Ψ * ( c ) ) = Real ( Ψ 1 ( Ψ n ( c ) ) ) = S Ψ ( c ) . □
Definition 2.98 (Iterated–MetaStructure (IMS) presentation). Let the levels be U 0 : = Conf and, for k 1 ,
⊨_k ⊆ C × (U_0 × U_0),   (Ψ, c) ⊨_k c′ ⟺ c′ = Ψ(c).
For a tower Ψ = ( Ψ n , , Ψ 1 ) and seed c U 0 , the IMS trace is c = c ( 0 ) n c ( 1 ) n 1 1 c ( n ) , and the realized system is Real ( c ( n ) ) .
Theorem 2.99 
(Correct IMS representation). For all towers Ψ and c Conf , the relational composition ( 1 n ) is the graph of P = Ψ 1 Ψ n , hence its unique output is c ( n ) = P ( c ) and the realized system equals S Ψ ( c ) . In particular, truncating to n = 1 recovers the ordinary MetaSystem step.
Proof. 
By definition ( Ψ j , c ( j 1 ) ) j c ( j ) c ( j ) = Ψ j ( c ( j 1 ) ) . Thus relational composition yields c ( n ) = Ψ 1 ( Ψ 2 ( Ψ n ( c ) ) ) = P ( c ) . Applying Real gives S Ψ ( c ) . □

3. Conclusion

In this paper, we have defined various concepts related to MetaStructure and Iterated MetaStructure. In the future, we expect to explore algorithmic research on these structures and consider possible extensions using Fuzzy Sets[91,92,93], Intuitionistic Fuzzy Sets [94,95], Neutrosophic Sets [96,97,98,99], Rough Sets [100,101,102], HyperFuzzy Sets [28,103,104,105,106], and Plithogenic Sets [9,20,107,108,109,110,111].
Research IntegrityThe author confirms that this manuscript is original, has not been published elsewhere, and is not under consideration by any other journal.
Use of Computational Tools All proofs and derivations were performed manually; no computational software (e.g., Mathematica, SageMath, Coq) was used.
Code Availability No code or software was developed for this study.
Ethical Approval This research did not involve human participants or animals, and therefore did not require ethical approval.
Use of Generative AI and AI-Assisted Tools We used generative AI and AI-assisted tools for tasks such as English grammar checking; we did not employ them in any way that violates ethical standards.

Supplementary Materials

No supplementary materials accompany this paper.

Funding

No external funding was received for this work.

Data Availability Statement

This paper is theoretical and did not generate or analyze any empirical data. We welcome future studies that apply and test these concepts in practical settings.

Acknowledgments

We thank all colleagues, reviewers, and readers whose comments and questions have greatly improved this manuscript. We are also grateful to the authors of the works cited herein for providing the theoretical foundations that underpin our study. Finally, we appreciate the institutional and technical support that enabled this research.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this work.

References

  1. Fujita, T.; Smarandache, F. A Unified Framework for U-Structures and Functorial Structure: Managing Super, Hyper, SuperHyper, Tree, and Forest Uncertain Over/Under/Off Models. Neutrosophic Sets and Systems 2025, 91, 337–380. [Google Scholar]
  2. Fujita, T. How to Represent A→ B →…→ Z: From Curried Functions and Hyperfunctions to Curried Structures and Hyperstructures, and More, 2025. [CrossRef]
  3. Hausdorff, F. Set theory; Vol. 119, American Mathematical Soc., 2021.
  4. Jech, T. Set theory: The third millennium edition, revised and expanded; Springer, 2003.
  5. Panti, G. Multi-valued logics. In Quantified representation of uncertainty and imprecision; Springer, 1998; pp. 25–74.
  6. Jaynes, E.T.; Grandy, T.B.; Smith, R.; Loredo, T.; Tribus, M.; Skilling, J.; Bretthorst, G.L. Probability theory: the logic of science. The Mathematical Intelligencer 2005, 27, 83. [Google Scholar] [CrossRef]
  7. Kingman, J.F.C.; Feller, W. An Introduction to Probability Theory and its Applications. Biometrika 1958, 130, 430–430. [Google Scholar]
  8. Chen, J.; Ye, J.; Du, S. Scale Effect and Anisotropy Analyzed for Neutrosophic Numbers of Rock Joint Roughness Coefficient Based on Neutrosophic Statistics. Symmetry 2017, 9, 208. [Google Scholar] [CrossRef]
  9. Smarandache, F. Plithogeny, plithogenic set, logic, probability, and statistics. arXiv preprint arXiv:1808.03948, 2018.
  10. Lenz, R.; Bovik, A.C. Group Theory. 2007.
  11. Burde, D. Group Theory. Computers, Rigidity, and Moduli.
  12. Wisbauer, R. Foundations of module and ring theory; Routledge, 2018.
  13. Stenstrom, B. Rings of quotients: an introduction to methods of ring theory; Vol. 217, Springer Science & Business Media, 2012.
  14. Dehghan, O.; Ameri, R.; Aliabadi, H.E. Some results on hypervector spaces. Italian Journal of Pure and Applied Mathematics 2019, 41, 23–41. [Google Scholar]
  15. Sameena, K.; et al. FUZZY MATROIDS FROM FUZZY VECTOR SPACES. South East Asian Journal of Mathematics and Mathematical Sciences 2021, 17, 381–390. [Google Scholar]
  16. Hatip, A.; Olgun, N.; et al. On the Concepts of Two-Fold Fuzzy Vector Spaces and Algebraic Modules. Journal of Neutrosophic and Fuzzy Systems 2023, 7, 46–52. [Google Scholar]
  17. Diestel, R. Graph theory 3rd ed. Graduate texts in mathematics 2005, 173, 12. [Google Scholar]
  18. Gross, J.L.; Yellen, J.; Anderson, M. Graph theory and its applications; Chapman and Hall/CRC, 2018.
  19. Diestel, R. Graph theory; Springer (print edition); Reinhard Diestel (eBooks), 2024.
  20. Smarandache, F. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra; Infinite Study, 2020.
  21. Hou, Z. Automata Theory and Formal Languages. Texts in Computer Science 2021. [Google Scholar]
  22. Bonakdarpour, B.; Sheinvald, S. Automata for hyperlanguages. arXiv preprint arXiv:2002.09877, 2020.
  23. Kovach, N.; Gibson, A.S.; Lamont, G.B. Hypergame Theory: A Model for Conflict, Misperception, and Deception. 2015.
  24. House, J.T.; Cybenko, G.V. Hypergame theory applied to cyber attack and defense. In Proceedings of the Defense + Commercial Sensing; 2010. [Google Scholar]
  25. Oguz, G.; Davvaz, B. Soft topological hyperstructure. J. Intell. Fuzzy Syst. 2021, 40, 8755–8764. [Google Scholar] [CrossRef]
  26. Vougioukli, S. Helix hyperoperation in teaching research. Science & Philosophy 2020, 8, 157–163. [Google Scholar]
  27. Vougioukli, S. HELIX-HYPEROPERATIONS ON LIE-SANTILLI ADMISSIBILITY. Algebras Groups and Geometries 2023. [Google Scholar] [CrossRef]
  28. Fujita, T. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond; Biblio Publishing, 2025.
  29. Smarandache, F. Foundation of SuperHyperStructure & Neutrosophic SuperHyperStructure. Neutrosophic Sets and Systems 2024, 63, 21. [Google Scholar]
  30. Smarandache, F. SuperHyperFunction, SuperHyperStructure, Neutrosophic SuperHyperFunction and Neutrosophic SuperHyperStructure: Current understanding and future directions; Infinite Study, 2023.
  31. Das, A.K.; Das, R.; Das, S.; Debnath, B.K.; Granados, C.; Shil, B.; Das, R. A Comprehensive Study of Neutrosophic SuperHyper BCI-Semigroups and their Algebraic Significance. Transactions on Fuzzy Sets and Systems 2025, 8, 80. [Google Scholar]
  32. Fujita, T. Chemical Hyperstructures, SuperHyperstructures, and SHv-Structures: Toward a Generalized Framework for Hierarchical Chemical Modeling. ChemRxiv 2025. [Google Scholar] [CrossRef]
  33. Fujita, T. MetaStructure, Meta-HyperStructure, and Meta-SuperHyperStructure, 2025. [CrossRef]
  34. Fujita, T. MetaHyperGraphs, MetaSuperHyperGraphs, and Iterated MetaGraphs: Modeling Graphs of Graphs, Hypergraphs of Hypergraphs, Superhypergraphs of Superhypergraphs, and Beyond, 2025. [CrossRef]
  35. Fujita, T. Meta-Fuzzy Graph, Meta-Neutrosophic Graph, Meta-Digraph, and Meta-MultiGraph with some applications 2025.
  36. Fujita, T. MetaFuzzy, MetaNeutrosophic, MetaSoft, and MetaRough Set 2025.
  37. Tanner, K.D. Promoting student metacognition. CBE-Life Sciences Education 2012, 11, 113–120. [Google Scholar] [CrossRef] [PubMed]
  38. Cox, M.T. Metacognition in computation: A selected research review. Artificial intelligence 2005, 169, 104–141. [Google Scholar] [CrossRef]
  39. Schneider, W.; Artelt, C. Metacognition and mathematics education. ZDM 2010, 42, 149–161. [Google Scholar] [CrossRef]
  40. Gourgey, A.F. Metacognition in basic skills instruction. Instructional science 1998, 26, 81–96. [Google Scholar] [CrossRef]
  41. Feurer, E.; Sassu, R.; Cimeli, P.; Roebers, C.M. Development of meta-representations: Procedural metacognition and the relationship to theory of mind. Journal of Educational and Developmental Psychology 2015, 5, 6. [Google Scholar] [CrossRef]
  42. Meyer, J.H.F.; Shanahan, M.P. Developing metalearning capacity in students: Actionable theory and practical lessons learned in first-year economics. Innovations in Education and Teaching International 2004, 41, 443–458. [Google Scholar] [CrossRef]
  43. Losada, M. The complex dynamics of high performance teams. Mathematical and computer modelling 1999, 30, 179–192. [Google Scholar] [CrossRef]
  44. Rutter, C.M.; Gatsonis, C.A. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in medicine 2001, 20, 2865–2884. [Google Scholar] [CrossRef] [PubMed]
  45. Senn, S.; Gavini, F.; Magrez, D.; Scheen, A. Issues in performing a network meta-analysis. Statistical Methods in Medical Research 2013, 22, 169–189. [Google Scholar] [CrossRef] [PubMed]
  46. Van Valkenhoef, G.; Lu, G.; De Brock, B.; Hillege, H.; Ades, A.; Welton, N.J. Automating network meta-analysis. Research synthesis methods 2012, 3, 285–299. [Google Scholar] [CrossRef]
  47. Zhao, J.; van Valkenhoef, G.; de Brock, E.; Hillege, H. ADDIS: an automated way to do network meta-analysis 2012.
  48. Hyland, K. Metadiscourse: What is it and where is it going? Journal of pragmatics 2017, 113, 16–29. [Google Scholar] [CrossRef]
  49. Jiang, F.K.; Akbaş, E. Metadiscourse in a disciplinary context: An overview. Chinese Journal of Applied Linguistics 2024, 47, 163–177. [Google Scholar] [CrossRef]
  50. Zhu, Z.; Wu, X. Metadiscourse in Multimodal Discourse: The Case of About-us Pages of Chinese and American Companies. Journal of Modern Research in English Language Studies 2025, 12, 1–18. [Google Scholar]
  51. Hyland, K.; Wang, W.; Jiang, F.K. Metadiscourse across languages and genres: An overview. Lingua 2022, 265, 103205. [Google Scholar] [CrossRef]
  52. Hyland, K. Metadiscourse. In Conducting genre-based research in applied linguistics; Routledge, 2023; pp. 59–81.
  53. Lee, K. Philosophy and revolutions in genetics: Deep science and deep technology; Springer, 2016.
  54. Koskela, L.; et al. Application of the new production philosophy to construction; Vol. 72, Stanford university Stanford, 1992.
  55. Teichman, J. The Mind and the Soul: an Introduction to the Philosophy of Mind; Routledge, 2014.
  56. Smarandache, F. A unifying field in Logics: Neutrosophic Logic. In Philosophy; American Research Press, 1999; pp. 1–141.
  57. Overgaard, S.; Gilbert, P.; Burwood, S. An introduction to metaphilosophy; Cambridge University Press, 2013.
  58. Joll, N. Metaphilosophy 2010.
  59. Vasilyev, V.V. Metaphilosophy: History and Perspectives. Epistemology & Philosophy of Science 2019, 56, 6–18. [Google Scholar] [CrossRef]
  60. Schmid, J. The methods of metaphilosophy; Klostermann, 2022.
  61. Evans, J.A.; Foster, J.G. Metaknowledge. Science 2011, 331, 721–725. [Google Scholar] [CrossRef]
  62. Avenali, A.; Daraio, C.; Di Leo, S.; Matteucci, G.; Nepomuceno, T. Systematic reviews as a metaknowledge tool: caveats and a review of available options. International Transactions in Operational Research 2023, 30, 2761–2806. [Google Scholar] [CrossRef]
  63. Trinquart, L.; Johns, D.M.; Galea, S. Why do we think we know what we know? A metaknowledge analysis of the salt controversy. International Journal of Epidemiology 2016, 45, 251–260. [Google Scholar] [CrossRef] [PubMed]
  64. Han, Y.; Dunning, D. Metaknowledge of experts versus nonexperts: do experts know better what they do and do not know? Journal of Behavioral Decision Making 2024, 37, e2375. [Google Scholar] [CrossRef]
  65. Pinski, M.; Haas, M.J.; Benlian, A. Building metaknowledge in AI literacy: the effect of gamified vs. text-based learning on AI literacy metaknowledge, 2024.
  66. Zhang, C.; Guan, J. How to identify metaknowledge trends and features in a certain research field? Evidences from innovation and entrepreneurial ecosystem. Scientometrics 2017, 113, 1177–1197. [Google Scholar] [CrossRef]
  67. Elson, M.; Huff, M.; Utz, S. Metascience on peer review: Testing the effects of a study’s originality and statistical significance in a field experiment. Advances in Methods and Practices in Psychological Science 2020, 3, 53–65. [Google Scholar] [CrossRef]
  68. Munafò, M. Metascience: reproducibility blues, 2017.
  69. Ioannidis, J.P.; Fanelli, D.; Dunne, D.D.; Goodman, S.N. Meta-research: evaluation and improvement of research methods and practices. PLoS biology 2015, 13, e1002264. [Google Scholar] [CrossRef]
  70. Mohanty, S. Metamodel-Based Fast AMS-SoC Design Methodologies. Chapter 12 in Nanoelectronic Mixed-Signal System Design; ISBN 978-0071825719, 2015.
  71. Van Gigch, J.P. System design modeling and metamodeling; Springer Science & Business Media, 1991.
  72. Gonzalez-Perez, C.; Henderson-Sellers, B. Metamodelling for software engineering; John Wiley & Sons, 2008.
  73. Fatehah, M.; Mezhuyev, V.; Al-Emran, M. A systematic review of metamodelling in software engineering. Recent Advances in Intelligent Systems and Smart Applications.
  74. Koziel, S.; Yang, X.S. Computational optimization, methods and algorithms; Vol. 356, Springer, 2011.
  75. Yi, D.; Ahn, J.; Ji, S. An effective optimization method for machine learning based on ADAM. Applied Sciences 2020, 10, 1073. [Google Scholar] [CrossRef]
  76. Nassif, N.; Al-Sadoon, Z.A.; Hamad, K.; Altoubat, S. Cost-based optimization of shear capacity in fiber reinforced concrete beams using machine learning. Struct Eng Mech 2022, 83, 671–680. [Google Scholar]
  77. Gaspar-Cunha, A.; Costa, P.; Monaco, F.; Delbem, A. Many-objectives optimization: a machine learning approach for reducing the number of objectives. Mathematical and Computational Applications 2023, 28, 17. [Google Scholar] [CrossRef]
  78. Krus, P.; Ölvander, J. Performance index and meta-optimization of a direct search optimization method. Engineering optimization 2013, 45, 1167–1185. [Google Scholar] [CrossRef]
  79. Vinţan, L.; Chiş, R.; Ismail, M.A.; Coţofană, C. Improving computing systems automatic multiobjective optimization through meta-optimization. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 2015, 35, 1125–1129. [Google Scholar] [CrossRef]
  80. Mason, K.; Duggan, J.; Howley, E. A meta optimisation analysis of particle swarm optimisation velocity update equations for watershed management learning. Applied Soft Computing 2018, 62, 148–161. [Google Scholar] [CrossRef]
  81. Bian, W.; Hu, W. Balancing the Performance-Efficiency Trade-off in Lighting Control Systems through Meta-Optimisation. In Proceedings of the 2024 IEEE Sustainable Smart Lighting World Conference & Expo (LS24); IEEE, 2024; pp. 1–4.
  82. Berkowitz, H.; Dumez, H. The concept of meta-organization: Issues for management studies. European Management Review 2016, 13, 149–156. [Google Scholar] [CrossRef]
  83. Coulombel, P.; Berkowitz, H. One name for two concepts: A systematic literature review about meta-organizations. International Journal of Management Reviews 2025, 27, 151–173. [Google Scholar] [CrossRef]
  84. Berkowitz, H.; Bor, S. Why meta-organizations matter: A response to Lawton et al. and Spillman. Journal of Management Inquiry 2018, 27, 204–211. [Google Scholar] [CrossRef]
  85. Gulati, R.; Puranam, P.; Tushman, M. Meta-organization design: Rethinking design in interorganizational and community contexts. Strategic management journal 2012, 33, 571–586. [Google Scholar] [CrossRef]
  86. Marciniak, R. From organization design to meta organization design. In Digital Enterprise Design and Management 2013: Proceedings of the First International Conference on Digital Enterprise Design and Management (DED&M); 2013.
  87. Napolitano, P.; Cerveró-Romero, F. Meta-Organization: the Future for the Lean Organization. In Proceedings of the 20th Annual Conference of the International Group for Lean Construction. San Diego, USA; 2012; pp. 18–20. [Google Scholar]
  88. Sobolewski, M. Exerted enterprise computing: from protocol-oriented networking to exertion-oriented networking. In Proceedings of the OTM Confederated International Conferences" On the Move to Meaningful Internet Systems". Springer; 2010; pp. 182–201. [Google Scholar]
  89. Sobolewski, M. Technology foundations. In Concurrent Engineering in the 21st Century: Foundations, Developments and Challenges; Springer, 2015; pp. 67–99.
  90. Palmer, K.D. Meta-systems Engineering: A New Approach to Systems Engineering based on Emergent Meta-Systems and Holonomic Special Systems Theory. In Proceedings of the INCOSE International Symposium; Wiley Online Library, Vol. 10, 2000; pp. 889–904. [Google Scholar]
  91. Zadeh, L.A. Fuzzy sets. Information and control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  92. Al-Hawary, T. Complete fuzzy graphs. International Journal of Mathematical Combinatorics 2011, 4, 26. [Google Scholar]
  93. Mordeson, J.N.; Nair, P.S. Fuzzy graphs and fuzzy hypergraphs; Vol. 46, Physica, 2012.
  94. Atanassov, K.T.; Gargov, G. Intuitionistic fuzzy logics; Springer, 2017.
  95. Atanassov, K.T. Circular intuitionistic fuzzy sets. Journal of Intelligent & Fuzzy Systems 2020, 39, 5981–5986. [Google Scholar] [CrossRef]
  96. Smarandache, F. Neutrosophic Overset, Neutrosophic Underset, and Neutrosophic Offset. Similarly for Neutrosophic Over-/Under-/Off-Logic, Probability, and Statistics; Infinite Study, 2016.
  97. Zhu, S. Neutrosophic n-SuperHyperNetwork: A New Approach for Evaluating Short Video Communication Effectiveness in Media Convergence. Neutrosophic Sets and Systems 2025, 85, 1004–1017. [Google Scholar]
  98. Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Single valued neutrosophic graphs. Journal of New theory.
  99. Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Interval valued neutrosophic graphs. Critical Review, XII 2016, 2016, 5–33. [Google Scholar]
  100. Pawlak, Z. Rough sets. International journal of computer & information sciences 1982, 11, 341–356. [Google Scholar]
  101. Broumi, S.; Smarandache, F.; Dhar, M. Rough neutrosophic sets. Infinite Study 2014, 32, 493–502. [Google Scholar]
  102. Pawlak, Z. Rough sets: Theoretical aspects of reasoning about data; Vol. 9, Springer Science & Business Media, 2012.
  103. Jun, Y.B.; Hur, K.; Lee, K.J. Hyperfuzzy subalgebras of BCK/BCI-algebras. Annals of Fuzzy Mathematics and Informatics 2017. [Google Scholar]
  104. Ghosh, J.; Samanta, T.K. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol 2012, 41, 27–37. [Google Scholar]
  105. Jun, Y.B.; Song, S.Z.; Kim, S.J. Length-fuzzy subalgebras in BCK/BCI-algebras. Mathematics 2018, 6, 11. [Google Scholar] [CrossRef]
  106. Jun, Y.B.; Song, S.Z.; Kim, S.J. Distances between hyper structures and length fuzzy ideals of BCK/BCI-algebras based on hyper structures. Journal of Intelligent & Fuzzy Systems 2018, 35, 2257–2268. [Google Scholar]
  107. Sultana, F.; Gulistan, M.; Ali, M.; Yaqoob, N.; Khan, M.; Rashid, T.; Ahmed, T. A study of plithogenic graphs: applications in spreading coronavirus disease (COVID-19) globally. Journal of ambient intelligence and humanized computing 2023, 14, 13139–13159. [Google Scholar] [CrossRef]
  108. Singh, P.K. Complex plithogenic set. International Journal of Neutrosophic Sciences 2022, 18, 57–72. [Google Scholar] [CrossRef]
  109. Smarandache, F. Plithogeny, plithogenic set, logic, probability and statistics: a short review. Journal of Computational and Cognitive Engineering 2022, 1, 47–50. [Google Scholar] [CrossRef]
  110. Smarandache, F. Plithogeny, plithogenic set, logic, probability, and statistics; Infinite Study, 2017.
  111. Kandasamy, W.V.; Ilanthenral, K.; Smarandache, F. Plithogenic Graphs; Infinite Study, 2020.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.