Preprint
Article

This version is not peer-reviewed.

Refined Quadripartitioned, Refined Pentapartitioned, Refined Heptapartitioned, and Iterative Refined Neutrosophic Logic

Submitted: 13 October 2025

Posted: 13 October 2025


Abstract
A Neutrosophic Set models uncertainty via three memberships—truth, indeterminacy, and falsity. We develop an n–Refined Neutrosophic Logic (n–RNL) that splits these into interpretable subcomponents and define refined quadripartitioned, pentapartitioned, and heptapartitioned families capturing contradiction, ignorance, unknown, and relative truth/falsity. Connectives are given blockwise by a t–norm/t–conorm pair. We prove merge/split homomorphisms that (i) reduce refined models to classical ones and (ii) embed unrefined semantics into refined spaces. We further introduce Iterative Refined Neutrosophic Logic (IRNL), allowing repeated, symmetric refinements with functorial stability of evaluations. Numerical examples from clinical decision support, autonomous driving, and fraud detection illustrate granular reasoning under heterogeneous evidence and priority schemes.

Contents in this Paper

The remainder of this paper is organized as follows. Section 1 gathers the preliminaries: Neutrosophic Sets, n–Refined Neutrosophic Logic, and the quadripartitioned, pentapartitioned, and heptapartitioned neutrosophic sets. Section 2 presents the main results on their refined logics and on Iterative Refined Neutrosophic Logic. Section 3 concludes.

1. Preliminaries

This section gathers the basic notions and notation used throughout the paper. Unless explicitly stated otherwise, we work in the finite setting. By convention, the empty set is treated as a subset of every set.

1.1. Neutrosophic Set

A Neutrosophic Set models uncertainty using three membership functions: truth ($T$), indeterminacy ($I$), and falsity ($F$), which satisfy
\[ 0 \le T + I + F \le 3 \]
[1,2,3]. Related concepts include Bipolar Neutrosophic Sets [4,5,6], Hesitant Neutrosophic Sets [7,8,9], and q-rung Orthopair Neutrosophic Sets [10,11,12], which have been widely investigated in recent studies. Moreover, the Neutrosophic Set is known to generalize Fuzzy Sets [13,14], Intuitionistic Fuzzy Sets [15,16], Vague Sets [17], and Hesitant Fuzzy Sets [18,19].
Definition 1 
(Neutrosophic Set). [2,20] Let $X$ be a non-empty set. A Neutrosophic Set (NS) $A$ on $X$ is characterized by three membership functions:
\[ T_A : X \to [0,1], \qquad I_A : X \to [0,1], \qquad F_A : X \to [0,1], \]
where for each $x \in X$, the values $T_A(x)$, $I_A(x)$, and $F_A(x)$ represent the degrees of truth, indeterminacy, and falsity, respectively. These values satisfy the following condition:
\[ 0 \le T_A(x) + I_A(x) + F_A(x) \le 3. \]
Example 1 
(Neutrosophic Set in Practice: Spam Email Identification). (cf. [21,22,23]) Let $X$ be the set of incoming emails to a user's inbox. Consider the neutrosophic set $A$ on $X$ with the intended meaning
\[ x \in A \ \text{to the degree that email } x \text{ is spam}. \]
For a particular email $e \in X$, we use three observable signals:
  • Content–based spam score $s_{\mathrm{cont}}(e) = 0.74$ (higher means more spam–like),
  • Sender–reputation badness score $s_{\mathrm{rep}}(e) = 0.65$,
  • Ham (non–spam) cues: whitelisting $w(e) = 0.20$ and conversational context $h(e) = 0.15$.
We define the three memberships as follows (all values in $[0,1]$):
\[ \text{Truth (spam support)}: \quad T_A(e) = 1 - \big(1 - s_{\mathrm{cont}}(e)\big)\big(1 - s_{\mathrm{rep}}(e)\big) = 1 - (1-0.74)(1-0.65) = 1 - 0.26 \cdot 0.35 = 0.909. \]
\[ \text{Indeterminacy (missing/uncertain features)}: \quad I_A(e) = m(e) = 0.22 \ (\text{e.g., partial headers, absent DKIM/DMARC}). \]
\[ \text{Falsity (ham evidence)}: \quad F_A(e) = w(e) + h(e) - w(e)h(e) = 0.20 + 0.15 - 0.20 \cdot 0.15 = 0.32. \]
Therefore the neutrosophic triple for $e$ is
\[ \big(T_A(e), I_A(e), F_A(e)\big) = (0.909, 0.22, 0.32), \]
and the sum constraint holds: $0.909 + 0.22 + 0.32 = 1.449 \le 3$.
Strong content and reputation signals support “spam”; some uncertainty remains from missing technical signals; modest ham cues (whitelist, ongoing thread) contribute to falsity.
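For readers who wish to reproduce the arithmetic, a minimal Python sketch follows (an illustration only; the helper name noisy_or and the variable names are ours, not part of Definition 1).

# A minimal sketch reproducing the spam-email triple of Example 1.
def noisy_or(a: float, b: float) -> float:
    """Probabilistic sum a + b - a*b, used here to fuse independent cues."""
    return a + b - a * b

s_cont, s_rep = 0.74, 0.65    # content and sender-reputation spam scores
w, h = 0.20, 0.15             # whitelist and conversational-context ham cues
m = 0.22                      # missing/uncertain technical features

T = noisy_or(s_cont, s_rep)   # truth:    1 - (1 - 0.74)(1 - 0.65) = 0.909
I = m                         # indeterminacy
F = noisy_or(w, h)            # falsity:  0.20 + 0.15 - 0.03 = 0.32

print((round(T, 3), I, F))    # (0.909, 0.22, 0.32), and 1.449 <= 3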
Example 2 
(Neutrosophic Set in Practice: Bacterial Pneumonia Triage). (cf. [24]) Let $X$ be the set of adult patients in an emergency department. Define the neutrosophic set $A$ on $X$ with meaning
\[ x \in A \ \text{to the degree that patient } x \text{ has bacterial pneumonia}. \]
For a given patient $p \in X$, suppose we observe:
  • Biomarker support (e.g., CRP/procalcitonin) $s_{\mathrm{lab}}(p) = 0.68$,
  • Imaging support (chest radiograph) $s_{\mathrm{img}}(p) = 0.55$,
  • Pending culture/diagnostics fraction $u_{\mathrm{pend}}(p) = 0.30$ and incomplete history $u_{\mathrm{hist}}(p) = 0.10$,
  • Counter–evidence: viral PCR positivity $f_{\mathrm{viral}}(p) = 0.35$ and alternative-diagnosis likelihood $f_{\mathrm{alt}}(p) = 0.20$.
Define the memberships (all in $[0,1]$) by:
\[ \text{Truth (supporting evidence)}: \quad T_A(p) = s_{\mathrm{lab}}(p) + s_{\mathrm{img}}(p) - s_{\mathrm{lab}}(p)\, s_{\mathrm{img}}(p) = 0.68 + 0.55 - 0.68 \cdot 0.55 = 0.856. \]
\[ \text{Indeterminacy (unknowns/missingness)}: \quad I_A(p) = u_{\mathrm{pend}}(p) + u_{\mathrm{hist}}(p) - u_{\mathrm{pend}}(p)\, u_{\mathrm{hist}}(p) = 0.30 + 0.10 - 0.30 \cdot 0.10 = 0.37. \]
\[ \text{Falsity (counter-evidence)}: \quad F_A(p) = f_{\mathrm{viral}}(p) + f_{\mathrm{alt}}(p) - f_{\mathrm{viral}}(p)\, f_{\mathrm{alt}}(p) = 0.35 + 0.20 - 0.35 \cdot 0.20 = 0.48. \]
Hence
\[ \big(T_A(p), I_A(p), F_A(p)\big) = (0.856, 0.37, 0.48), \]
and the bound is satisfied: $0.856 + 0.37 + 0.48 = 1.706 \le 3$.
Concordant lab and imaging provide substantial truth; pending tests and incomplete history contribute to indeterminacy; viral findings and a plausible alternative diagnosis add falsity.

1.2. n–Refined Neutrosophic Logic

n-Refined Neutrosophic Logic splits truth, indeterminacy, and falsity into multiple subcomponents, enabling granular reasoning under heterogeneous evidence and assigned priority schemes [25,26,27,28,29,30,31].
Definition 2 
(n–Refined Neutrosophic Logic (n-RNL)). [32,33,34] Fix integers $p, r, s \ge 1$ and set $n := p + r + s$. An n–refined neutrosophic truth value is a vector
\[ v = (T_1, \ldots, T_p \mid I_1, \ldots, I_r \mid F_1, \ldots, F_s), \qquad T_j, I_k, F_\ell \in [0,1], \]
where the $T$–components encode refined kinds of truth, the $I$–components encode refined kinds of indeterminacy, and the $F$–components encode refined kinds of falsity; the total dimension is $n = p + r + s$. In the standard (numerical) setting one may assume
\[ 0 \le \sum_{j=1}^{p} T_j + \sum_{k=1}^{r} I_k + \sum_{\ell=1}^{s} F_\ell \le n, \]
while in philosophical/nonstandard variants each component may range in the nonstandard unit interval, with the same $n$–dimensional refinement principle. The splitting of $(T, I, F)$ into $(T_1, \ldots, T_p)$, $(I_1, \ldots, I_r)$, $(F_1, \ldots, F_s)$ (with $p + r + s = n$) is the hallmark of n–refinement.
A (propositional) n–refined neutrosophic valuation on a language $\mathcal{L}$ is a map
\[ \mathrm{Val} : \mathrm{Form}(\mathcal{L}) \to [0,1]^n, \qquad \varphi \mapsto \mathrm{Val}(\varphi) = \big(T(\varphi) \mid I(\varphi) \mid F(\varphi)\big), \]
equipped with the following connectives, defined componentwise from a chosen fuzzy t–norm $t : [0,1]^2 \to [0,1]$ and its dual t–conorm $s : [0,1]^2 \to [0,1]$ (both associative, commutative, and monotone).
Conjunction (n–norm). The neutrosophic conjunction $\varphi \wedge_n \psi$ is given by
\[ T_j(\varphi \wedge_n \psi) = t\big(T_j(\varphi), T_j(\psi)\big), \quad I_k(\varphi \wedge_n \psi) = s\big(I_k(\varphi), I_k(\psi)\big), \quad F_\ell(\varphi \wedge_n \psi) = s\big(F_\ell(\varphi), F_\ell(\psi)\big), \]
for $1 \le j \le p$, $1 \le k \le r$, $1 \le \ell \le s$. Equivalently, $T$ uses a t–norm, while both $I$ and $F$ use a t–conorm.
Disjunction (n–conorm). The neutrosophic disjunction $\varphi \vee_n \psi$ is given by
\[ T_j(\varphi \vee_n \psi) = s\big(T_j(\varphi), T_j(\psi)\big), \quad I_k(\varphi \vee_n \psi) = t\big(I_k(\varphi), I_k(\psi)\big), \quad F_\ell(\varphi \vee_n \psi) = t\big(F_\ell(\varphi), F_\ell(\psi)\big). \]
Equivalently, $T$ uses a t–conorm, while both $I$ and $F$ use a t–norm.
Negation. A basic (involutive) neutrosophic negation swaps the truth and falsity blocks and leaves indeterminacy as is:
\[ \neg_n \varphi : \big(T(\varphi) \mid I(\varphi) \mid F(\varphi)\big) \longmapsto \big(F(\varphi) \mid I(\varphi) \mid T(\varphi)\big), \]
with optional numeric complementation $x \mapsto 1 - x$ applied componentwise when a strong negation is desired.
Priority (pessimistic/optimistic) variants. When needed, priority orderings among the refined subcomponents (e.g., $T < I < F$ for a lower–bound n–norm, or $T > I > F$ for an upper–bound n–conorm) specialize the above combination rules; in the unrefined triplet case this yields the familiar closed forms for $\wedge_n$ and $\vee_n$.
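Since the connectives act componentwise within each block, they are straightforward to implement. The following Python sketch is a minimal illustration of Definition 2 (the function names and the default Gödel pair $t = \min$, $s = \max$ are our choices, not notation from the paper).

# n-RNL connectives, blockwise and componentwise (a sketch).
from typing import Callable, List, Tuple

Block = List[float]
NValue = Tuple[Block, Block, Block]   # (T-block, I-block, F-block)
Op = Callable[[float, float], float]

def and_n(u: NValue, v: NValue, t: Op = min, s: Op = max) -> NValue:
    """Conjunction: T componentwise by the t-norm, I and F by the t-conorm."""
    (T1, I1, F1), (T2, I2, F2) = u, v
    return ([t(a, b) for a, b in zip(T1, T2)],
            [s(a, b) for a, b in zip(I1, I2)],
            [s(a, b) for a, b in zip(F1, F2)])

def or_n(u: NValue, v: NValue, t: Op = min, s: Op = max) -> NValue:
    """Disjunction: T by the t-conorm, I and F by the t-norm."""
    (T1, I1, F1), (T2, I2, F2) = u, v
    return ([s(a, b) for a, b in zip(T1, T2)],
            [t(a, b) for a, b in zip(I1, I2)],
            [t(a, b) for a, b in zip(F1, F2)])

def not_n(u: NValue) -> NValue:
    """Basic negation: swap T and F blocks (well-typed when p = s), keep I."""
    T, I, F = u
    return (F, I, T)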
Example 3 
(Clinical Decision Support: Sepsis Triage (n–RNL with $p = r = s = 2$)). (cf. [35,36]) We assess two propositions on the same patient:
\[ \varphi := \text{the patient has bacterial sepsis}, \qquad \psi := \text{broad-spectrum antibiotics are indicated}. \]
We adopt $p = r = s = 2$ (so $n = 6$) with components interpreted as:
\[ T = (T_1, T_2) = (\text{lab/imaging truth}, \ \text{guideline/model truth}), \]
\[ I = (I_1, I_2) = (\text{missing-data uncertainty}, \ \text{ambiguity}), \]
\[ F = (F_1, F_2) = (\text{counter-evidence}, \ \text{alternative diagnosis}). \]
Choose the Gödel t–norm and t–conorm: $t(a,b) = \min\{a,b\}$ and $s(a,b) = \max\{a,b\}$. Suppose the valuations are
\[ \mathrm{Val}(\varphi) = (0.82, 0.68 \mid 0.40, 0.25 \mid 0.35, 0.20), \qquad \mathrm{Val}(\psi) = (0.75, 0.72 \mid 0.30, 0.35 \mid 0.25, 0.40). \]
Conjunction $\varphi \wedge_n \psi$ (truth via $t$, indeterminacy/falsity via $s$):
\[ T = (\min\{0.82, 0.75\}, \min\{0.68, 0.72\}) = (0.75, 0.68), \]
\[ I = (\max\{0.40, 0.30\}, \max\{0.25, 0.35\}) = (0.40, 0.35), \]
\[ F = (\max\{0.35, 0.25\}, \max\{0.20, 0.40\}) = (0.35, 0.40). \]
Disjunction $\varphi \vee_n \psi$ (truth via $s$, indeterminacy/falsity via $t$):
\[ T = (\max\{0.82, 0.75\}, \max\{0.68, 0.72\}) = (0.82, 0.72), \qquad I = (\min\{0.40, 0.30\}, \min\{0.25, 0.35\}) = (0.30, 0.25), \]
\[ F = (\min\{0.35, 0.25\}, \min\{0.20, 0.40\}) = (0.25, 0.20). \]
Negation $\neg_n \varphi$ (swap truth/falsity, keep indeterminacy):
\[ \mathrm{Val}(\neg_n \varphi) = (0.35, 0.20 \mid 0.40, 0.25 \mid 0.82, 0.68). \]
The conjunction preserves cautious truth $(0.75, 0.68)$ while exposing combined ambiguity $(0.40, 0.35)$ and accumulated counter-evidence $(0.35, 0.40)$. The disjunction reflects that at least one of $\{\varphi, \psi\}$ has strong support while overall uncertainty and opposition drop under the t–norm.
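Using the helpers sketched after Definition 2 (hypothetical names and_n, or_n, not_n), these computations can be checked mechanically; passing t = lambda a, b: a*b and s = lambda a, b: a + b - a*b instead reproduces Example 4 below.

# Example 3 re-checked with the Gödel pair (t, s) = (min, max).
phi = ([0.82, 0.68], [0.40, 0.25], [0.35, 0.20])
psi = ([0.75, 0.72], [0.30, 0.35], [0.25, 0.40])

print(and_n(phi, psi))  # ([0.75, 0.68], [0.40, 0.35], [0.35, 0.40])
print(or_n(phi, psi))   # ([0.82, 0.72], [0.30, 0.25], [0.25, 0.20])
print(not_n(phi))       # ([0.35, 0.20], [0.40, 0.25], [0.82, 0.68])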
Example 4 
(Autonomous Driving: Highway Merge Decision (n–RNL with $p = r = s = 2$)). Consider
\[ \varphi := \text{the current gap is safe to merge}, \qquad \psi := \text{the adjacent driver will yield}. \]
Use $p = r = s = 2$ with
\[ T = (T_1, T_2) = (\text{multi-sensor consensus}, \ \text{V2V/traffic-rule inference}), \]
\[ I = (I_1, I_2) = (\text{occlusion}, \ \text{adverse weather}), \]
\[ F = (F_1, F_2) = (\text{contrary sensor cues}, \ \text{rule-violation risk}). \]
Choose the product t–norm $t(a,b) = ab$ and the probabilistic t–conorm $s(a,b) = a + b - ab$. Suppose
\[ \mathrm{Val}(\varphi) = (0.88, 0.70 \mid 0.20, 0.35 \mid 0.15, 0.25), \qquad \mathrm{Val}(\psi) = (0.60, 0.55 \mid 0.30, 0.25 \mid 0.40, 0.30). \]
Conjunction $\varphi \wedge_n \psi$ (truth via $t$, indeterminacy/falsity via $s$):
\[ T = (0.88 \cdot 0.60, \ 0.70 \cdot 0.55) = (0.528, 0.385), \]
\[ I = (0.20 + 0.30 - 0.20 \cdot 0.30, \ 0.35 + 0.25 - 0.35 \cdot 0.25) = (0.44, 0.5125), \]
\[ F = (0.15 + 0.40 - 0.15 \cdot 0.40, \ 0.25 + 0.30 - 0.25 \cdot 0.30) = (0.49, 0.475). \]
Disjunction $\varphi \vee_n \psi$ (truth via $s$, indeterminacy/falsity via $t$):
\[ T = (0.88 + 0.60 - 0.88 \cdot 0.60, \ 0.70 + 0.55 - 0.70 \cdot 0.55) = (0.952, 0.865), \]
\[ I = (0.20 \cdot 0.30, \ 0.35 \cdot 0.25) = (0.06, 0.0875), \]
\[ F = (0.15 \cdot 0.40, \ 0.25 \cdot 0.30) = (0.06, 0.075). \]
Negation $\neg_n \varphi$:
\[ \mathrm{Val}(\neg_n \varphi) = (0.15, 0.25 \mid 0.20, 0.35 \mid 0.88, 0.70). \]
The conjunction demands joint support, thus lowering truth by multiplication and accumulating uncertainty/opposition via $s$. The disjunction captures the any-sufficient nature of acting safely: high fused truth $(0.952, 0.865)$ with suppressed uncertainty and opposition under $t$.

1.3. Quadripartitioned, Pentapartitioned, and Heptapartitioned Neutrosophic Set

A quadripartitioned neutrosophic set assigns to each element four independent degrees (truth, contradiction, ignorance, and falsity), each valued in $[0,1]$ with the total bounded above by 4. This cleanly separates "conflicting evidence" (contradiction) from "insufficient information" (ignorance), enabling a finer representation of uncertainty than a single truth grade [37,38,39,40]. A pentapartitioned neutrosophic set describes each element by five memberships (truth, contradiction, ignorance, unknown, and falsity), again taking values in $[0,1]$ with the sum at most 5. The explicit "unknown" component distinguishes genuinely missing or undefined observations from "ignorance", which captures partial yet inconclusive information [41,42]. A heptapartitioned neutrosophic set equips each element with seven degrees (truth, relative truth, contradiction, unknown, ignorance, relative falsity, and falsity), each in $[0,1]$ and jointly bounded by 7. The "relative" truth and falsity components encode context-dependent tendencies, supporting highly nuanced reasoning in complex, ambiguous settings [43,44,45].
Definition 3 
(Quadripartitioned Neutrosophic Set). [37,38,39,40] Let $U$ be a non-empty universe. A quadripartitioned neutrosophic set (Q-NS) $Q$ on $U$ is defined by
\[ Q = \big\{ \langle x, T_Q(x), C_Q(x), G_Q(x), F_Q(x) \rangle \mid x \in U \big\}, \]
where
\[ T_Q, C_Q, G_Q, F_Q : U \to [0,1] \]
are the truth, contradiction, ignorance, and falsity membership functions, respectively, satisfying
\[ 0 \le T_Q(x) + C_Q(x) + G_Q(x) + F_Q(x) \le 4 \quad \text{for all } x \in U. \]
Example 5 
(Quadripartitioned Neutrosophic Set: Evacuation Need in Disaster Response). Let $U$ be the set of households in district $D$. Consider the Q-NS $Q$ whose intended meaning is
\[ x \in Q \ \text{to the degree that household } x \text{ needs immediate evacuation}. \]
For a specific household $h_{17} \in U$, suppose we have the following measurable indicators:
  • Hazard index (ground motion + landslide risk): $H(h_{17}) = 0.80$,
  • Structural damage index (rapid visual assessment): $D(h_{17}) = 0.60$,
  • Two decision sources: municipal order $S_1(h_{17}) = 1.00$ ("evacuate") and field officer $S_2(h_{17}) = 0.60$ ("remain"),
  • Data completeness (sensors, reports): $\mathrm{Comp}(h_{17}) = 0.75$,
  • Stability/benign-evidence index (nearby shelter capacity, route safety): $B(h_{17}) = 0.20$.
Define the Q-NS memberships (all in $[0,1]$) by
\[ T_Q(h_{17}) = \max\{H(h_{17}), D(h_{17})\} = \max\{0.80, 0.60\} = 0.80, \]
\[ C_Q(h_{17}) = |S_1(h_{17}) - S_2(h_{17})| = |1.00 - 0.60| = 0.40, \]
\[ G_Q(h_{17}) = 1 - \mathrm{Comp}(h_{17}) = 1 - 0.75 = 0.25, \]
\[ F_Q(h_{17}) = B(h_{17}) = 0.20. \]
Hence the quadripartitioned tuple is
\[ \big(T_Q(h_{17}), C_Q(h_{17}), G_Q(h_{17}), F_Q(h_{17})\big) = (0.80, 0.40, 0.25, 0.20), \]
and the total is $0.80 + 0.40 + 0.25 + 0.20 = 1.65 \le 4$.
Strong hazard/damage indicators support evacuation ($T = 0.80$); there is moderate contradiction between sources ($C = 0.40$) and some ignorance from partial data ($G = 0.25$); a small amount of counter-evidence ($F = 0.20$) comes from local stability factors.
Definition 4 
(Pentapartitioned Neutrosophic Set). [41,42] Let $U$ be a non-empty universe. A pentapartitioned neutrosophic set (P-NS) $P$ on $U$ is defined by
\[ P = \big\{ \langle x, T_P(x), C_P(x), G_P(x), U_P(x), F_P(x) \rangle \mid x \in U \big\}, \]
where
\[ T_P, C_P, G_P, U_P, F_P : U \to [0,1] \]
are the truth, contradiction, ignorance, unknown, and falsity membership functions, respectively, satisfying
\[ 0 \le T_P(x) + C_P(x) + G_P(x) + U_P(x) + F_P(x) \le 5 \quad \text{for all } x \in U. \]
Example 6 
(Pentapartitioned Neutrosophic Set: Urgent Antibiotic Decision). (cf. [46]) Let $U$ be the set of patients in an emergency department. Define the P-NS $P$ with meaning
\[ x \in P \ \text{to the degree that patient } x \text{ requires broad-spectrum antibiotics now}. \]
For a patient $p^* \in U$, suppose:
  • Blood culture signal $B(p^*) = 0.82$, procalcitonin elevation $PCT(p^*) = 0.78$,
  • Model A (sepsis triage) score $M_A(p^*) = 0.85$, stewardship rule ("withhold") score $M_S(p^*) = 0.40$,
  • Laboratory/missingness ratio $\mathrm{Miss}(p^*) = 0.10$,
  • Unknowns from pending cultures $\mathrm{Pend}(p^*) = 0.30$,
  • Viral panel evidence (against bacterial cause) $V(p^*) = 0.25$.
Define the P-NS memberships (all in $[0,1]$) by
\[ T_P(p^*) = \min\{B(p^*), PCT(p^*)\} = \min\{0.82, 0.78\} = 0.78, \]
\[ C_P(p^*) = |M_A(p^*) - M_S(p^*)| = |0.85 - 0.40| = 0.45, \]
\[ G_P(p^*) = \mathrm{Miss}(p^*) = 0.10, \qquad U_P(p^*) = \mathrm{Pend}(p^*) = 0.30, \]
\[ F_P(p^*) = V(p^*) = 0.25. \]
Thus
\[ \big(T_P, C_P, G_P, U_P, F_P\big)(p^*) = (0.78, 0.45, 0.10, 0.30, 0.25), \]
with total $0.78 + 0.45 + 0.10 + 0.30 + 0.25 = 1.88 \le 5$.
Convergent biomarkers support treatment ($T = 0.78$) while a stewardship rule conflicts with the triage model ($C = 0.45$); some evidence is still unknown (pending), and a modest falsity signal arises from viral findings.
Definition 5 
(Heptapartitioned Neutrosophic Set). [43,44,45] Let $U$ be a non-empty universe. A heptapartitioned neutrosophic set (H-NS) $H$ on $U$ is defined by
\[ H = \big\{ \langle x, T_H(x), M_H(x), C_H(x), U_H(x), I_H(x), K_H(x), F_H(x) \rangle \mid x \in U \big\}, \]
where
\[ T_H, M_H, C_H, U_H, I_H, K_H, F_H : U \to [0,1] \]
are the truth, relative truth, contradiction, unknown, ignorance, relative falsity, and falsity membership functions, respectively, satisfying
\[ 0 \le T_H(x) + M_H(x) + C_H(x) + U_H(x) + I_H(x) + K_H(x) + F_H(x) \le 7 \quad \text{for all } x \in U. \]
Example 7 
(Heptapartitioned Neutrosophic Set: Authenticity of an Online Listing). (cf. [47]) Let $U$ be the set of listings on a marketplace. Consider the H-NS $H$ meaning
\[ x \in H \ \text{to the degree that listing } x \text{ is genuine (not fraudulent)}. \]
For a listing $\ell \in U$, suppose we observe:
  • Objective truth (IMEI/serial check passed) $T_{\mathrm{obj}}(\ell) = 0.90$,
  • Relative/context truth: seller reputation $R(\ell) = 0.80$, description–photo consistency $M_{\mathrm{cons}}(\ell) = 0.70$,
  • Contradiction (metadata mismatches) $C_{\mathrm{mis}}(\ell) = 0.20$,
  • Unknown (missing invoice/warranty) $U_{\mathrm{miss}}(\ell) = 0.40$,
  • Ignorance (low-resolution photos/incomplete specs) $I_{\mathrm{inc}}(\ell) = 0.30$,
  • Relative falsity (price anomaly score) $K_{\mathrm{anom}}(\ell) = 0.50$,
  • Objective falsity (hit in theft/blacklist DB) $F_{\mathrm{obj}}(\ell) = 0.10$.
Set the seven memberships by
\[ T_H(\ell) = T_{\mathrm{obj}}(\ell) = 0.90, \qquad M_H(\ell) = \min\{R(\ell), M_{\mathrm{cons}}(\ell)\} = \min\{0.80, 0.70\} = 0.70, \]
\[ C_H(\ell) = C_{\mathrm{mis}}(\ell) = 0.20, \qquad U_H(\ell) = U_{\mathrm{miss}}(\ell) = 0.40, \qquad I_H(\ell) = I_{\mathrm{inc}}(\ell) = 0.30, \]
\[ K_H(\ell) = K_{\mathrm{anom}}(\ell) = 0.50, \qquad F_H(\ell) = F_{\mathrm{obj}}(\ell) = 0.10. \]
Hence
\[ \big(T_H, M_H, C_H, U_H, I_H, K_H, F_H\big)(\ell) = (0.90, 0.70, 0.20, 0.40, 0.30, 0.50, 0.10), \]
and the sum is $0.90 + 0.70 + 0.20 + 0.40 + 0.30 + 0.50 + 0.10 = 3.10 \le 7$.
Strong objective truth (device check) and contextual support (reputation/consistency) coexist with moderate unknown/ignorance and a sizable relative-falsity signal due to the price anomaly; objective falsity remains low.

2. Main Results

In this section, we present the main results of this paper.

2.1. Refined Quadripartitioned Neutrosophic Logic

Refined Quadripartitioned Neutrosophic Logic splits truth, contradiction, ignorance, and falsity into vector components, which are combined blockwise with t-norms and t-conorms to model uncertainty.
Definition 6 
(n–Refined Quadripartitioned Neutrosophic Logic (n–RQNL)). Fix integers $p, c, g, f \ge 1$ and put $n := p + c + g + f$. An n–refined quadripartitioned truth value is
\[ v = (T \mid C \mid G \mid F) \in [0,1]^n, \qquad T \in [0,1]^p, \; C \in [0,1]^c, \; G \in [0,1]^g, \; F \in [0,1]^f, \]
where $C$ (contradiction) and $G$ (ignorance) refine two kinds of indeterminacy.
Let $t$ be a t–norm and $s$ its dual t–conorm, and write $X \mathbin{\hat{t}} Y$ (resp. $X \mathbin{\hat{s}} Y$) for the componentwise application of $t$ (resp. $s$) to equal-length blocks. For a valuation $\mathrm{Val} : \mathrm{Form}(\mathcal{L}) \to [0,1]^n$, define, blockwise,
\[ \mathrm{Val}(\varphi \wedge \psi) = \big(T(\varphi) \mathbin{\hat{t}} T(\psi) \mid C(\varphi) \mathbin{\hat{s}} C(\psi) \mid G(\varphi) \mathbin{\hat{s}} G(\psi) \mid F(\varphi) \mathbin{\hat{s}} F(\psi)\big), \]
\[ \mathrm{Val}(\varphi \vee \psi) = \big(T(\varphi) \mathbin{\hat{s}} T(\psi) \mid C(\varphi) \mathbin{\hat{t}} C(\psi) \mid G(\varphi) \mathbin{\hat{t}} G(\psi) \mid F(\varphi) \mathbin{\hat{t}} F(\psi)\big), \]
\[ \mathrm{Val}(\neg \varphi) = \big(F(\varphi) \mid C(\varphi) \mid G(\varphi) \mid T(\varphi)\big). \]
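The blockwise rules are uniform: truth-like blocks use $t$ under $\wedge$ and $s$ under $\vee$, and every other block does the opposite. The following sketch (our own tagged-block encoding, not notation from the paper) exploits this uniformity; by changing the tag list, the same two combinators also serve the pentapartitioned and heptapartitioned variants of Sections 2.2 and 2.3, and the demo values are the valuations of Example 8 below.

# Generic tagged-block connectives (a sketch): a value is a list of
# (tag, block) pairs; tags "T" and "M" mark truth-like blocks, every other
# tag (C, G, U, I, K, F) combines dually.
from typing import Callable, List, Tuple

Op = Callable[[float, float], float]
TaggedValue = List[Tuple[str, List[float]]]
TRUTH_LIKE = {"T", "M"}

def conj_b(u: TaggedValue, v: TaggedValue, t: Op = min, s: Op = max) -> TaggedValue:
    """AND: truth-like blocks by t, all other blocks by s."""
    return [(tag, [(t if tag in TRUTH_LIKE else s)(a, b) for a, b in zip(x, y)])
            for (tag, x), (_, y) in zip(u, v)]

def disj_b(u: TaggedValue, v: TaggedValue, t: Op = min, s: Op = max) -> TaggedValue:
    """OR: truth-like blocks by s, all other blocks by t."""
    return [(tag, [(s if tag in TRUTH_LIKE else t)(a, b) for a, b in zip(x, y)])
            for (tag, x), (_, y) in zip(u, v)]

def neg_4(u: TaggedValue) -> TaggedValue:
    """Quadripartitioned negation: swap the T and F blocks, keep C and G."""
    (_, T), (_, C), (_, G), (_, F) = u
    return [("T", F), ("C", C), ("G", G), ("F", T)]

A = [("T", [0.82, 0.76]), ("C", [0.12]), ("G", [0.15]), ("F", [0.08, 0.10])]
B = [("T", [0.70, 0.88]), ("C", [0.18]), ("G", [0.10]), ("F", [0.07, 0.12])]
print(conj_b(A, B))  # T=[0.70, 0.76] | C=[0.18] | G=[0.15] | F=[0.08, 0.12]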
Example 8 
(Medical Diagnosis ($p=2$, $c=1$, $g=1$, $f=2$; $n=6$)). (cf. [48,49]) Blocks and semantics.
\[ v = (T_1, T_2 \mid C \mid G \mid F_1, F_2) \in [0,1]^6, \]
where $T_1$ (lab test) and $T_2$ (imaging) contribute to truth; $C$ measures contradiction (e.g., conflicting clinicians); $G$ captures ignorance/unknowns (e.g., missing data); $F_1, F_2$ quantify falsity evidence from the same sources (lab, imaging).
Atomic formulas. Let $A$ be "Patient has pneumonia" and $B$ be "Patient has bacterial infection."
\[ \mathrm{Val}(A) = (0.82, 0.76 \mid 0.12 \mid 0.15 \mid 0.08, 0.10), \qquad \mathrm{Val}(B) = (0.70, 0.88 \mid 0.18 \mid 0.10 \mid 0.07, 0.12). \]
Conjunction and disjunction. With $t = \min$ and $s = \max$:
\[ \mathrm{Val}(A \wedge B) = (\min(0.82, 0.70), \min(0.76, 0.88) \mid \max(0.12, 0.18) \mid \max(0.15, 0.10) \mid \max(0.08, 0.07), \max(0.10, 0.12)) = (0.70, 0.76 \mid 0.18 \mid 0.15 \mid 0.08, 0.12), \]
\[ \mathrm{Val}(A \vee B) = (\max(0.82, 0.70), \max(0.76, 0.88) \mid \min(0.12, 0.18) \mid \min(0.15, 0.10) \mid \min(0.08, 0.07), \min(0.10, 0.12)) = (0.82, 0.88 \mid 0.12 \mid 0.10 \mid 0.07, 0.10). \]
Negation. Swapping $(T_1, T_2)$ with $(F_1, F_2)$ and keeping $(C, G)$:
\[ \mathrm{Val}(\neg A) = (0.08, 0.10 \mid 0.12 \mid 0.15 \mid 0.82, 0.76). \]
The strong truth block and reduced falsity in $\mathrm{Val}(A \wedge B)$ support a joint diagnosis; the disjunction preserves peak evidence while contracting $C, G$ via $\min$.
Example 9 
(Cyber Intrusion Detection ($p=2$, $c=1$, $g=1$, $f=2$; $n=6$)). (cf. [50,51]) Blocks and semantics.
\[ v = (T_1, T_2 \mid C \mid G \mid F_1, F_2) \in [0,1]^6, \]
where $T_1$ (network anomaly score) and $T_2$ (endpoint alert score) support truth; $C$ is source-level contradiction; $G$ denotes data gaps; $F_1, F_2$ are falsity evidence (e.g., benign baselines for network/endpoint).
Atomic formulas. Let $C$ be "Ongoing intrusion present" and $D$ be "Privileged-account compromise."
\[ \mathrm{Val}(C) = (0.80, 0.67 \mid 0.20 \mid 0.12 \mid 0.15, 0.22), \qquad \mathrm{Val}(D) = (0.62, 0.74 \mid 0.25 \mid 0.18 \mid 0.20, 0.17). \]
Conjunction and disjunction. With $t = \min$ and $s = \max$:
\[ \mathrm{Val}(C \wedge D) = (\min(0.80, 0.62), \min(0.67, 0.74) \mid \max(0.20, 0.25) \mid \max(0.12, 0.18) \mid \max(0.15, 0.20), \max(0.22, 0.17)) = (0.62, 0.67 \mid 0.25 \mid 0.18 \mid 0.20, 0.22), \]
\[ \mathrm{Val}(C \vee D) = (\max(0.80, 0.62), \max(0.67, 0.74) \mid \min(0.20, 0.25) \mid \min(0.12, 0.18) \mid \min(0.15, 0.20), \min(0.22, 0.17)) = (0.80, 0.74 \mid 0.20 \mid 0.12 \mid 0.15, 0.17). \]
Negation.
\[ \mathrm{Val}(\neg C) = (0.15, 0.22 \mid 0.20 \mid 0.12 \mid 0.80, 0.67). \]
The $\wedge$ result shows conservative truth (via $\min$) with elevated contradiction/ignorance, appropriate for joint high-risk events; the $\vee$ result preserves the higher alerts and suppresses falsity.
Lemma 1 
(Canonical merge and split between n–RQNL and n–RNL). Let $r := c + g$. Define the merge (a surjection)
\[ \pi_Q : [0,1]^{p+c+g+f} \to [0,1]^{p+r+f}, \qquad \pi_Q(T \mid C \mid G \mid F) := (T \mid C \Vert G \mid F), \]
and, for any fixed disjoint partition of indices $J_C \sqcup J_G = \{1, \ldots, r\}$ with $|J_C| = c$, $|J_G| = g$, the split (an injection)
\[ \sigma_Q : [0,1]^{p+r+f} \to [0,1]^{p+c+g+f}, \qquad \sigma_Q(T \mid I \mid F) := (T \mid I_{J_C} \mid I_{J_G} \mid F). \]
Then $\pi_Q \circ \sigma_Q = \mathrm{id}$.
Proof. 
Fix integers $p, c, g, f \ge 1$ and put $r := c + g$. By hypothesis we fix a disjoint partition $J_C \sqcup J_G = \{1, \ldots, r\}$ with $|J_C| = c$, $|J_G| = g$, and we agree that for any $I = (I_1, \ldots, I_r) \in [0,1]^r$ the restricted subvectors $I_{J_C} \in [0,1]^c$ and $I_{J_G} \in [0,1]^g$ are listed in the increasing order of indices in $J_C$ and $J_G$, respectively. Concatenation $\Vert$ means simple blockwise juxtaposition: if $u \in [0,1]^c$ and $v \in [0,1]^g$, then $u \Vert v \in [0,1]^r$ is the $r$–tuple obtained by putting $u$ first and $v$ second.
Claim. For every $(T \mid I \mid F) \in [0,1]^{p+r+f}$ one has
\[ (\pi_Q \circ \sigma_Q)(T \mid I \mid F) = (T \mid I \mid F). \]
Coordinatewise verification. By definition of the split,
\[ \sigma_Q(T \mid I \mid F) = (T \mid I_{J_C} \mid I_{J_G} \mid F) \in [0,1]^{p+c+g+f}. \]
Applying the merge $\pi_Q$ gives
\[ \pi_Q\big(\sigma_Q(T \mid I \mid F)\big) = \big(T \mid \underbrace{I_{J_C} \Vert I_{J_G}}_{(*)} \mid F\big) \in [0,1]^{p+r+f}. \]
It remains to show that $(*) = I$. Write $J_C = \{j^C_1 < \cdots < j^C_c\}$ and $J_G = \{j^G_1 < \cdots < j^G_g\}$. Then, by the ordering convention for restrictions,
\[ I_{J_C} = (I_{j^C_1}, \ldots, I_{j^C_c}), \qquad I_{J_G} = (I_{j^G_1}, \ldots, I_{j^G_g}), \]
and hence
\[ I_{J_C} \Vert I_{J_G} = \big(I_{j^C_1}, \ldots, I_{j^C_c}, I_{j^G_1}, \ldots, I_{j^G_g}\big). \]
Because $\{j^C_1, \ldots, j^C_c, j^G_1, \ldots, j^G_g\} = \{1, \ldots, r\}$ and the resulting $r$–tuple is exactly the chosen order of the $r$–block in $[0,1]^{p+r+f}$ (first the $J_C$ coordinates, then the $J_G$ coordinates), we identify it with $I = (I_1, \ldots, I_r)$ under this fixed convention. Therefore $(*) = I$, and consequently
\[ (\pi_Q \circ \sigma_Q)(T \mid I \mid F) = (T \mid I \mid F). \]
Since this holds for all $(T \mid I \mid F)$, we obtain $\pi_Q \circ \sigma_Q = \mathrm{id}_{[0,1]^{p+r+f}}$.
From $\pi_Q \circ \sigma_Q = \mathrm{id}$ it follows immediately that $\sigma_Q$ is injective (it has a left inverse) and $\pi_Q$ is surjective (it has a right inverse), as stated. This completes the proof. □
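The round trip of Lemma 1 is easy to test numerically. Below is an illustrative sketch in which the block sizes $p = 2$, $c = g = 1$, $f = 2$ and the partition $J_C = \{1\}$, $J_G = \{2\}$ are our choices.

# Round-trip check for Lemma 1 (a sketch).
def merge_Q(v):
    """pi_Q: (T | C | G | F) -> (T | C || G | F)."""
    return {"T": v["T"], "I": v["C"] + v["G"], "F": v["F"]}

def split_Q(v, c=1):
    """sigma_Q: (T | I | F) -> (T | I_{J_C} | I_{J_G} | F), J_C = first c indices."""
    return {"T": v["T"], "C": v["I"][:c], "G": v["I"][c:], "F": v["F"]}

w = {"T": [0.82, 0.76], "I": [0.12, 0.15], "F": [0.08, 0.10]}
assert merge_Q(split_Q(w)) == w        # pi_Q o sigma_Q = id, as in Lemma 1
print(split_Q(w))  # {'T': [0.82, 0.76], 'C': [0.12], 'G': [0.15], 'F': [0.08, 0.10]}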
Theorem 1 
(n–RQNL generalizes n–RNL and quadripartitioned logic). With the connectives of Definition 6 (n–RQNL) and of n–RNL above, for every t–norm $t$ and its dual $s$:
  • (Reduction to n–RNL). The merge $\pi_Q$ is an algebra homomorphism for $\wedge, \vee, \neg$, i.e.,
    \[ \pi_Q\big(\mathrm{Val}_Q(\cdot)\big) = \mathrm{Val}_N(\cdot) \ \text{on } [0,1]^{p+r+f}, \qquad \cdot \in \{\wedge, \vee, \neg\}, \]
    where $\mathrm{Val}_Q$ (resp. $\mathrm{Val}_N$) denotes an n–RQNL (resp. n–RNL) valuation and $r = c + g$. Consequently, n–RQNL reduces to n–RNL via $\pi_Q$.
  • (Embedding of n–RNL). The split $\sigma_Q$ is an algebra embedding: $\sigma_Q\big(\mathrm{Val}_N(\cdot)\big) = \mathrm{Val}_Q(\cdot)$ for $\cdot \in \{\wedge, \vee, \neg\}$. Hence n–RQNL extends n–RNL.
  • (Unrefined case). For $p = c = g = f = 1$ the above block rules collapse to the usual quadripartitioned neutrosophic logic on scalars $(T, C, G, F)$; therefore n–RQNL strictly generalizes the unrefined quadripartitioned logic.
Proof. (1) Write $\mathrm{Val}_Q(\varphi) = (T_\varphi \mid C_\varphi \mid G_\varphi \mid F_\varphi)$. By definition of $\wedge$ in n–RQNL and n–RNL, and by componentwise lifting,
\[ \pi_Q\big(\mathrm{Val}_Q(\varphi \wedge \psi)\big) = \big(T_\varphi \mathbin{\hat{t}} T_\psi \mid \underbrace{C_\varphi \mathbin{\hat{s}} C_\psi \,\Vert\, G_\varphi \mathbin{\hat{s}} G_\psi}_{\text{length } r} \mid F_\varphi \mathbin{\hat{s}} F_\psi\big). \]
In n–RNL, the indeterminacy block has length $r$ and combines by $s$ componentwise, so this equals $\mathrm{Val}_N(\varphi \wedge \psi)$ after $\pi_Q$'s concatenation. The $\vee$ case is analogous, using $s$ on $T$ and $t$ on each indeterminacy component; $\neg$ follows from swapping $T \leftrightarrow F$ and keeping $C, G$ fixed, which under $\pi_Q$ becomes the standard $(T \mid I \mid F) \mapsto (F \mid I \mid T)$.
(2) For $\sigma_Q$, the split just relabels the $r$–dimensional indeterminacy vector into two subblocks without changing component values. Because the n–RNL and n–RQNL rules act componentwise by the same operator on each indeterminacy coordinate ($s$ for $\wedge$, $t$ for $\vee$), $\sigma_Q$ commutes with all three connectives, hence is an embedding. Finally, $\pi_Q \circ \sigma_Q = \mathrm{id}$ by Lemma 1.
(3) Setting $p = c = g = f = 1$ turns each block into a scalar and reproduces the usual rules: $T$ uses $t/s$ for $\wedge/\vee$, while $C, G, F$ use $s/t$ for $\wedge/\vee$. The refined case strictly extends the unrefined one because, e.g., with $c = 2$ one can distinguish two contradictory sources that the scalar $C$ collapses. □

2.2. Refined Pentapartitioned Neutrosophic Logic

Refined Pentapartitioned Neutrosophic Logic partitions evidence into truth, contradiction, ignorance, unknown, and falsity, and combines the blocks by designated t-norms and t-conorms.
Definition 7 
(n–Refined Pentapartitioned Neutrosophic Logic (n–RPNL)). Fix integers $p, c, g, u, f \ge 1$ and put $n := p + c + g + u + f$. An n–refined pentapartitioned truth value is
\[ v = (T \mid C \mid G \mid U \mid F) \in [0,1]^n, \]
with block lengths $p, c, g, u, f$ (truth, contradiction, ignorance, unknown, falsity). For a valuation $\mathrm{Val}$, define, with the same t–norm $t$ and t–conorm $s$ applied componentwise within each block,
\[ \mathrm{Val}(\varphi \wedge \psi) = \big(T(\varphi) \mathbin{\hat{t}} T(\psi) \mid C(\varphi) \mathbin{\hat{s}} C(\psi) \mid G(\varphi) \mathbin{\hat{s}} G(\psi) \mid U(\varphi) \mathbin{\hat{s}} U(\psi) \mid F(\varphi) \mathbin{\hat{s}} F(\psi)\big), \]
\[ \mathrm{Val}(\varphi \vee \psi) = \big(T(\varphi) \mathbin{\hat{s}} T(\psi) \mid C(\varphi) \mathbin{\hat{t}} C(\psi) \mid G(\varphi) \mathbin{\hat{t}} G(\psi) \mid U(\varphi) \mathbin{\hat{t}} U(\psi) \mid F(\varphi) \mathbin{\hat{t}} F(\psi)\big), \]
\[ \mathrm{Val}(\neg \varphi) = \big(F(\varphi) \mid C(\varphi) \mid G(\varphi) \mid U(\varphi) \mid T(\varphi)\big). \]
Example 10 
(Autonomous Driving: Pedestrian Stop Decision ($p=2$, $c=1$, $g=1$, $u=1$, $f=2$; $n=7$)). Blocks and semantics.
\[ v = (T_1, T_2 \mid C \mid G \mid U \mid F_1, F_2) \in [0,1]^7, \]
where $T_1$ (camera) and $T_2$ (LiDAR) support truth; $C$ quantifies contradiction (e.g., camera vs. LiDAR disagreement); $G$ captures ignorance (missing/low-quality data); $U$ represents unknown factors (unmodelled occlusions); and $F_1, F_2$ are falsity evidence (e.g., negative detectors for the two sensors).
Atomic propositions. Let $A$ be "a pedestrian is present in the crosswalk" and $B$ be "the crosswalk is currently occupied." Assign:
\[ \mathrm{Val}(A) = (0.83, 0.78 \mid 0.12 \mid 0.10 \mid 0.08 \mid 0.07, 0.11), \qquad \mathrm{Val}(B) = (0.75, 0.81 \mid 0.15 \mid 0.09 \mid 0.06 \mid 0.10, 0.08). \]
Conjunction and disjunction. With $t = \min$ and $s = \max$:
\[ \mathrm{Val}(A \wedge B) = (\min(0.83, 0.75), \min(0.78, 0.81) \mid \max(0.12, 0.15) \mid \max(0.10, 0.09) \mid \max(0.08, 0.06) \mid \max(0.07, 0.10), \max(0.11, 0.08)) = (0.75, 0.78 \mid 0.15 \mid 0.10 \mid 0.08 \mid 0.10, 0.11), \]
\[ \mathrm{Val}(A \vee B) = (\max(0.83, 0.75), \max(0.78, 0.81) \mid \min(0.12, 0.15) \mid \min(0.10, 0.09) \mid \min(0.08, 0.06) \mid \min(0.07, 0.10), \min(0.11, 0.08)) = (0.83, 0.81 \mid 0.12 \mid 0.09 \mid 0.06 \mid 0.07, 0.08). \]
Negation. Swapping the truth and falsity blocks, keeping $(C, G, U)$:
\[ \mathrm{Val}(\neg A) = (0.07, 0.11 \mid 0.12 \mid 0.10 \mid 0.08 \mid 0.83, 0.78). \]
The conservative $\wedge$ pushes $T$ down via $\min$ and raises $(C, G, U)$ via $\max$, reflecting stricter joint evidence; the liberal $\vee$ preserves peak truth while shrinking indeterminacies through $\min$, indicating that evidence for either condition suffices to trigger caution.
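Using the tagged-block combinators sketched in Section 2.1 (hypothetical helpers conj_b, disj_b), the computations above can be reproduced directly:

# Example 10 re-checked; tag "T" is truth-like, all other tags combine dually.
A = [("T", [0.83, 0.78]), ("C", [0.12]), ("G", [0.10]), ("U", [0.08]), ("F", [0.07, 0.11])]
B = [("T", [0.75, 0.81]), ("C", [0.15]), ("G", [0.09]), ("U", [0.06]), ("F", [0.10, 0.08])]

print(conj_b(A, B))  # T=[0.75, 0.78] | C=[0.15] | G=[0.10] | U=[0.08] | F=[0.10, 0.11]
print(disj_b(A, B))  # T=[0.83, 0.81] | C=[0.12] | G=[0.09] | U=[0.06] | F=[0.07, 0.08]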
Example 11 
(Financial Security: Fraud and Account Takeover ($p=2$, $c=1$, $g=1$, $u=1$, $f=2$; $n=7$)). Blocks and semantics.
\[ v = (T_1, T_2 \mid C \mid G \mid U \mid F_1, F_2) \in [0,1]^7, \]
where $T_1$ (supervised classifier) and $T_2$ (rule engine) support truth; $C$ captures conflict between models; $G$ encodes missing features; $U$ collects unknown, unexplained anomalies; and $F_1, F_2$ quantify benign evidence (merchant whitelist, customer behavioral baseline).
Atomic propositions. Let $C$ be "the transaction is fraudulent" and $D$ be "the account is taken over." Assign:
\[ \mathrm{Val}(C) = (0.69, 0.77 \mid 0.22 \mid 0.18 \mid 0.12 \mid 0.25, 0.14), \qquad \mathrm{Val}(D) = (0.74, 0.63 \mid 0.19 \mid 0.20 \mid 0.15 \mid 0.21, 0.17). \]
Conjunction and disjunction. With $t = \min$ and $s = \max$:
\[ \mathrm{Val}(C \wedge D) = (\min(0.69, 0.74), \min(0.77, 0.63) \mid \max(0.22, 0.19) \mid \max(0.18, 0.20) \mid \max(0.12, 0.15) \mid \max(0.25, 0.21), \max(0.14, 0.17)) = (0.69, 0.63 \mid 0.22 \mid 0.20 \mid 0.15 \mid 0.25, 0.17), \]
\[ \mathrm{Val}(C \vee D) = (\max(0.69, 0.74), \max(0.77, 0.63) \mid \min(0.22, 0.19) \mid \min(0.18, 0.20) \mid \min(0.12, 0.15) \mid \min(0.25, 0.21), \min(0.14, 0.17)) = (0.74, 0.77 \mid 0.19 \mid 0.18 \mid 0.12 \mid 0.21, 0.14). \]
Negation.
\[ \mathrm{Val}(\neg C) = (0.25, 0.14 \mid 0.22 \mid 0.18 \mid 0.12 \mid 0.69, 0.77). \]
The joint claim ($\wedge$) enforces conservative truth and elevates $(C, G, U)$, appropriate for simultaneous fraud and takeover. The alternative claim ($\vee$) keeps the strongest truth signals while damping falsity/indeterminacies, capturing "either risk suffices" alerting behavior.
Lemma 2 
(Canonical merge and split between n–RPNL and n–RNL). Let $r := c + g + u$. Define
\[ \pi_P(T \mid C \mid G \mid U \mid F) := (T \mid C \Vert G \Vert U \mid F), \]
and, for any disjoint partition $J_C \sqcup J_G \sqcup J_U = \{1, \ldots, r\}$ with $|J_C| = c$, $|J_G| = g$, $|J_U| = u$, define
\[ \sigma_P(T \mid I \mid F) := (T \mid I_{J_C} \mid I_{J_G} \mid I_{J_U} \mid F). \]
Then $\pi_P \circ \sigma_P = \mathrm{id}$.
Proof. 
Fix $p, c, g, u, f \ge 1$ and put $r := c + g + u$. Let $(T \mid I \mid F) \in [0,1]^{p+r+f}$ be arbitrary, where $I = (I_1, \ldots, I_r)$. Apply the split:
\[ \sigma_P(T \mid I \mid F) = (T \mid I_{J_C} \mid I_{J_G} \mid I_{J_U} \mid F) \in [0,1]^{p+c+g+u+f}. \]
Now apply the merge:
\[ \pi_P\big(\sigma_P(T \mid I \mid F)\big) = (T \mid I_{J_C} \Vert I_{J_G} \Vert I_{J_U} \mid F). \]
Because $J_C \sqcup J_G \sqcup J_U = \{1, \ldots, r\}$ is a disjoint partition and each restriction lists coordinates in increasing index order, the concatenation exactly reconstructs $I$:
\[ I_{J_C} \Vert I_{J_G} \Vert I_{J_U} = I. \]
Hence $(\pi_P \circ \sigma_P)(T \mid I \mid F) = (T \mid I \mid F)$ for all inputs, i.e., $\pi_P \circ \sigma_P = \mathrm{id}_{[0,1]^{p+r+f}}$. In particular, $\sigma_P$ is injective (it has a left inverse) and $\pi_P$ is surjective (it has a right inverse). □
Theorem 2 
(n–RPNL generalizes n–RNL and pentapartitioned logic). For the connectives of Definition 7 and of n–RNL, and any $t, s$:
  • (Reduction to n–RNL). $\pi_P$ is an algebra homomorphism for $\wedge, \vee, \neg$; hence n–RPNL reduces to n–RNL via $\pi_P$.
  • (Embedding of n–RNL). $\sigma_P$ is an algebra embedding that commutes with $\wedge, \vee, \neg$, so n–RPNL extends n–RNL.
  • (Unrefined case). For $p = c = g = u = f = 1$ one recovers the classical pentapartitioned neutrosophic logic on scalars $(T, C, G, U, F)$; thus the refined version strictly generalizes the unrefined one.
Proof. 
Identical to Theorem 1, using that in both n–RPNL and n–RNL the indeterminacy coordinates combine componentwise by $s$ (for $\wedge$) and by $t$ (for $\vee$). Concatenation (merge) and fixed index splits commute with componentwise application of the same binary operator by associativity and commutativity of $t$ and $s$. Negation swaps only $T$ and $F$. □

2.3. Refined Heptapartitioned Neutrosophic Logic

Refined Heptapartitioned Neutrosophic Logic models truth, relative truth, contradiction, unknown, ignorance, relative falsity, and falsity; blocks combine via t-norms and t-conorms, and negation swaps the truth-like and falsity-like blocks.
Definition 8 
(n–Refined Heptapartitioned Neutrosophic Logic (n–RHNL)). Fix integers $p, m, c, u, i, k, f \ge 1$ and set $n := p + m + c + u + i + k + f$. An n–refined heptapartitioned neutrosophic truth value is a 7–block vector
\[ v = (T \mid M \mid C \mid U \mid I \mid K \mid F) \in [0,1]^n, \]
with blocks $T \in [0,1]^p$, $M \in [0,1]^m$, $C \in [0,1]^c$, $U \in [0,1]^u$, $I \in [0,1]^i$, $K \in [0,1]^k$, $F \in [0,1]^f$. Here $M$ (relative truth) is truth–like, $K$ (relative falsity) is falsity–like, and $C, U, I$ refine three kinds of indeterminacy.
Let $t$ be a (left–continuous) t–norm and $s$ its dual t–conorm, with $\mathbin{\hat{t}}, \mathbin{\hat{s}}$ denoting componentwise application as before. A valuation $\mathrm{Val} : \mathrm{Form}(\mathcal{L}) \to [0,1]^n$ assigns to each formula $\varphi$ a 7–block vector, with the connectives defined blockwise by
\[ \mathrm{Val}(\varphi \wedge \psi) = \big(T(\varphi) \mathbin{\hat{t}} T(\psi) \mid M(\varphi) \mathbin{\hat{t}} M(\psi) \mid C(\varphi) \mathbin{\hat{s}} C(\psi) \mid U(\varphi) \mathbin{\hat{s}} U(\psi) \mid I(\varphi) \mathbin{\hat{s}} I(\psi) \mid K(\varphi) \mathbin{\hat{s}} K(\psi) \mid F(\varphi) \mathbin{\hat{s}} F(\psi)\big), \]
\[ \mathrm{Val}(\varphi \vee \psi) = \big(T(\varphi) \mathbin{\hat{s}} T(\psi) \mid M(\varphi) \mathbin{\hat{s}} M(\psi) \mid C(\varphi) \mathbin{\hat{t}} C(\psi) \mid U(\varphi) \mathbin{\hat{t}} U(\psi) \mid I(\varphi) \mathbin{\hat{t}} I(\psi) \mid K(\varphi) \mathbin{\hat{t}} K(\psi) \mid F(\varphi) \mathbin{\hat{t}} F(\psi)\big), \]
\[ \mathrm{Val}(\neg \varphi) = \big(F(\varphi) \mid K(\varphi) \mid C(\varphi) \mid U(\varphi) \mid I(\varphi) \mid M(\varphi) \mid T(\varphi)\big), \]
i.e., negation swaps truth–like with falsity–like blocks ($T \leftrightarrow F$, $M \leftrightarrow K$) and leaves the three indeterminacy blocks invariant; for the swap to be blockwise well–typed one takes $p = f$ and $m = k$, as in the examples below. (If a strong negation is desired, one may also apply $x \mapsto 1 - x$ componentwise within the swapped blocks.)
Example 12 
(n–RHNL in Autonomous–Driving Perception ($p=2$, $m=1$, $c=u=i=1$, $k=1$, $f=2$; $n = 9$)). Blocks and semantics.
\[ v = (\underbrace{T_1, T_2}_{\text{truth}} \mid \underbrace{M}_{\text{relative truth}} \mid \underbrace{C}_{\text{contradiction}} \mid \underbrace{U}_{\text{unknown}} \mid \underbrace{I}_{\text{ignorance}} \mid \underbrace{K}_{\text{relative falsity}} \mid \underbrace{F_1, F_2}_{\text{falsity}}) \]
Interpretation: $T_1$ (camera), $T_2$ (LiDAR) truth; $M$ (contextual/relative truth); $C$ contradiction; $U$ unknown; $I$ ignorance; $K$ relative falsity; $F_1, F_2$ falsity (camera/LiDAR).
Atomic formulas. Let $A$ be "Pedestrian is in the crosswalk" and $B$ be "Traffic light is red". Take the valuations (all in $[0,1]$):
\[ \mathrm{Val}(A) = (0.85, 0.70 \mid 0.60 \mid 0.10 \mid 0.20 \mid 0.15 \mid 0.25 \mid 0.05, 0.10), \]
\[ \mathrm{Val}(B) = (0.90, 0.65 \mid 0.55 \mid 0.05 \mid 0.10 \mid 0.10 \mid 0.30 \mid 0.05, 0.15). \]
Connectives. With $t = \min$, $s = \max$, apply Definition 8 blockwise:
\[ \mathrm{Val}(A \wedge B) = (\min(0.85, 0.90), \min(0.70, 0.65) \mid \min(0.60, 0.55) \mid \max(0.10, 0.05) \mid \max(0.20, 0.10) \mid \max(0.15, 0.10) \mid \max(0.25, 0.30) \mid \max(0.05, 0.05), \max(0.10, 0.15)) = (0.85, 0.65 \mid 0.55 \mid 0.10 \mid 0.20 \mid 0.15 \mid 0.30 \mid 0.05, 0.15), \]
\[ \mathrm{Val}(A \vee B) = (\max(0.85, 0.90), \max(0.70, 0.65) \mid \max(0.60, 0.55) \mid \min(0.10, 0.05) \mid \min(0.20, 0.10) \mid \min(0.15, 0.10) \mid \min(0.25, 0.30) \mid \min(0.05, 0.05), \min(0.10, 0.15)) = (0.90, 0.70 \mid 0.60 \mid 0.05 \mid 0.10 \mid 0.10 \mid 0.25 \mid 0.05, 0.10). \]
Negation and implication. Negation swaps $(T_1, T_2) \leftrightarrow (F_1, F_2)$ and $M \leftrightarrow K$:
\[ \mathrm{Val}(\neg A) = (0.05, 0.10 \mid 0.25 \mid 0.10 \mid 0.20 \mid 0.15 \mid 0.60 \mid 0.85, 0.70). \]
Define implication by $A \to B := \neg A \vee B$. Then
\[ \mathrm{Val}(A \to B) = \mathrm{Val}(\neg A \vee B) = (\max(0.05, 0.90), \max(0.10, 0.65) \mid \max(0.25, 0.55) \mid \min(0.10, 0.05) \mid \min(0.20, 0.10) \mid \min(0.15, 0.10) \mid \min(0.60, 0.30) \mid \min(0.85, 0.05), \min(0.70, 0.15)) = (0.90, 0.65 \mid 0.55 \mid 0.05 \mid 0.10 \mid 0.10 \mid 0.30 \mid 0.05, 0.15). \]
Reading. The high truth block and low falsity in $\mathrm{Val}(A \to B)$ indicate strong support for "if a pedestrian is detected then the light is red", moderated by small $C, U, I$.
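The negation just computed is a pure block swap. The tiny sketch below (our own tuple encoding, block order $(T \mid M \mid C \mid U \mid I \mid K \mid F)$) reproduces $\mathrm{Val}(\neg A)$; well-typedness of the swap needs $p = f$ and $m = k$, as here.

# Heptapartitioned negation (a sketch): T <-> F, M <-> K, fix C, U, I.
def neg_h(v):
    T, M, C, U, I, K, F = v
    return (F, K, C, U, I, M, T)

A = ([0.85, 0.70], [0.60], [0.10], [0.20], [0.15], [0.25], [0.05, 0.10])
print(neg_h(A))  # ([0.05, 0.10], [0.25], [0.10], [0.20], [0.15], [0.60], [0.85, 0.70])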
Example 13 
(n–RHNL in Historical Source Authentication ($p=1$, $m=2$, $c=1$, $u=1$, $i=2$, $k=2$, $f=1$; $n = 10$)). Blocks and semantics.
\[ v = (\underbrace{T}_{\text{objective truth}} \mid \underbrace{M_1, M_2}_{\text{relative/contextual truth}} \mid \underbrace{C}_{\text{contradiction}} \mid \underbrace{U}_{\text{unknown}} \mid \underbrace{I_1, I_2}_{\text{ignorance types}} \mid \underbrace{K_1, K_2}_{\text{relative falsity}} \mid \underbrace{F}_{\text{objective falsity}}) \]
Interpretation: $T$ (e.g., carbon dating), $(M_1, M_2)$ (paleography, provenance), $C$ (conflicting testimony), $U$ (gaps), $(I_1, I_2)$ (measurement limits, ambiguous style), $(K_1, K_2)$ (contextual counter-evidence), $F$ (hard falsification).
Atomic formulas. Let $H$ be "Document D is authentic" and $G$ be "Document D is forged".
\[ \mathrm{Val}(H) = (0.72 \mid 0.65, 0.55 \mid 0.20 \mid 0.10 \mid 0.25, 0.30 \mid 0.35, 0.40 \mid 0.18), \]
\[ \mathrm{Val}(G) = (0.30 \mid 0.40, 0.45 \mid 0.25 \mid 0.15 \mid 0.20, 0.25 \mid 0.50, 0.55 \mid 0.60). \]
Negation. With $p = f = 1$ and $m = k = 2$, negation is blockwise well–typed:
\[ \mathrm{Val}(\neg G) = (0.60 \mid 0.50, 0.55 \mid 0.25 \mid 0.15 \mid 0.20, 0.25 \mid 0.40, 0.45 \mid 0.30). \]
Conjunction $H \wedge \neg G$. Using $t = \min$, $s = \max$,
\[ \mathrm{Val}(H \wedge \neg G) = (\min(0.72, 0.60) \mid \min(0.65, 0.50), \min(0.55, 0.55) \mid \max(0.20, 0.25) \mid \max(0.10, 0.15) \mid \max(0.25, 0.20), \max(0.30, 0.25) \mid \max(0.35, 0.40), \max(0.40, 0.45) \mid \max(0.18, 0.30)) = (0.60 \mid 0.50, 0.55 \mid 0.25 \mid 0.15 \mid 0.25, 0.30 \mid 0.40, 0.45 \mid 0.30). \]
Disjunction $H \vee G$.
\[ \mathrm{Val}(H \vee G) = (\max(0.72, 0.30) \mid \max(0.65, 0.40), \max(0.55, 0.45) \mid \min(0.20, 0.25) \mid \min(0.10, 0.15) \mid \min(0.25, 0.20), \min(0.30, 0.25) \mid \min(0.35, 0.50), \min(0.40, 0.55) \mid \min(0.18, 0.60)) = (0.72 \mid 0.65, 0.55 \mid 0.20 \mid 0.10 \mid 0.20, 0.25 \mid 0.35, 0.40 \mid 0.18). \]
The strong $T$ and $M$ in $\mathrm{Val}(H \wedge \neg G)$, with controlled $C, U, I$ and moderate $(K_1, K_2, F)$, support authenticity given the current evidence. The disjunction $\mathrm{Val}(H \vee G)$ preserves the higher truth-like components while contracting the falsity-like ones via the t–norm.
Remark 1. 
The optional "sum–boundedness" constraint $\sum (\text{all components}) \in [0, n]$ can be imposed independently; it is preserved by the above operators since $t, s : [0,1]^2 \to [0,1]$ and the rules act componentwise.
Lemma 3 
(Merge/Split between n–RHNL and n–RQNL). Let $r_Q := c + (u + i)$, $p_Q := p + m$, and $f_Q := f + k$. Define the merge
\[ \pi_Q : [0,1]^{p+m+c+u+i+k+f} \to [0,1]^{p_Q + r_Q + f_Q}, \qquad \pi_Q(T \mid M \mid C \mid U \mid I \mid K \mid F) := \big((T \Vert M) \mid C \Vert (U \Vert I) \mid (F \Vert K)\big). \]
Fix any disjoint index split of the $r_Q$–dimensional middle block into $J_C$ of size $c$ and $J_G$ of size $u + i$; also split the truth/falsity aggregates into $J_T$ of size $p$ and $J_M$ of size $m$, and into $J_F$ of size $f$ and $J_K$ of size $k$. Define the split
\[ \sigma_Q(T \mid G \mid F) := (T_{J_T} \mid T_{J_M} \mid G_{J_C} \mid G_{J_G} \mid 0_i \mid F_{J_K} \mid F_{J_F}), \]
where $0_i$ is the all–zeros vector, used only if one chooses to keep $U$ and $I$ distinguished (alternatively, distribute $J_G$ further into $U$ and $I$ with sizes $u$ and $i$, respectively). Then $\pi_Q \circ \sigma_Q = \mathrm{id}$.
Proof. 
Put $r_Q := c + (u + i)$, $p_Q := p + m$, and $f_Q := f + k$. Fix disjoint index partitions
\[ J_T \sqcup J_M = \{1, \ldots, p_Q\}, \qquad J_C \sqcup J_G = \{1, \ldots, r_Q\}, \qquad J_F \sqcup J_K = \{1, \ldots, f_Q\}, \]
with $|J_T| = p$, $|J_M| = m$, $|J_C| = c$, $|J_G| = u + i$, $|J_F| = f$, $|J_K| = k$. (For the middle block, further split $J_G$ as $J_G = J_U \sqcup J_I$ with $|J_U| = u$ and $|J_I| = i$; this is the "distributed" option mentioned in the lemma.)
Let $(T \mid G \mid F) \in [0,1]^{p_Q + r_Q + f_Q}$ be arbitrary. By definition of the split we take
\[ \sigma_Q(T \mid G \mid F) = (T_{J_T} \mid T_{J_M} \mid G_{J_C} \mid G_{J_U} \mid G_{J_I} \mid F_{J_K} \mid F_{J_F}) \in [0,1]^{p+m+c+u+i+k+f}. \]
Now apply the merge $\pi_Q$:
\[ \pi_Q\big(\sigma_Q(T \mid G \mid F)\big) = \big(T_{J_T} \Vert T_{J_M} \mid G_{J_C} \Vert G_{J_U} \Vert G_{J_I} \mid F_{J_F} \Vert F_{J_K}\big). \]
Since $J_T \sqcup J_M$, $J_C \sqcup J_U \sqcup J_I$, and $J_F \sqcup J_K$ are disjoint unions covering their respective index sets (and each restriction lists coordinates in increasing index order), we have the equalities of tuples
\[ T_{J_T} \Vert T_{J_M} = T, \qquad G_{J_C} \Vert G_{J_U} \Vert G_{J_I} = G, \qquad F_{J_F} \Vert F_{J_K} = F. \]
Therefore
\[ (\pi_Q \circ \sigma_Q)(T \mid G \mid F) = (T \mid G \mid F) \]
for all $(T \mid G \mid F)$, i.e., $\pi_Q \circ \sigma_Q = \mathrm{id}_{[0,1]^{p_Q + r_Q + f_Q}}$. Consequently, $\sigma_Q$ is injective (it has a left inverse) and $\pi_Q$ is surjective (it has a right inverse). □
Theorem 3 
(n–RHNL generalizes n–RQNL). Let $\mathrm{Val}_H$ be an n–RHNL valuation and $\mathrm{Val}_Q$ an n–RQNL valuation defined with the same $t$ and $s$. Then the merge $\pi_Q$ is a homomorphism for $\wedge, \vee, \neg$:
\[ \pi_Q\big(\mathrm{Val}_H(\cdot)\big) = \mathrm{Val}_Q(\cdot) \qquad \text{for } \cdot \in \{\wedge, \vee, \neg\}. \]
Consequently, n–RHNL reduces to n–RQNL under $\pi_Q$ and strictly extends it via the split $\sigma_Q$.
Proof. 
Write $\mathrm{Val}_H(\varphi) = (T_\varphi \mid M_\varphi \mid C_\varphi \mid U_\varphi \mid I_\varphi \mid K_\varphi \mid F_\varphi)$. For $\wedge$, the $T$ and $M$ blocks combine by $t$ (truth–like), while all of $C, U, I, K, F$ combine by $s$. After concatenation by $\pi_Q$, the first block $(T \Vert M)$ is $t$–combined componentwise, matching the $T$–block of n–RQNL. The middle block $C \Vert (U \Vert I)$ is $s$–combined, matching the two indeterminacy subblocks in n–RQNL (both use $s$ for $\wedge$). The last block $(F \Vert K)$ is $s$–combined, matching the $F$–block in n–RQNL. The $\vee$ case is analogous with $t \leftrightarrow s$ on the corresponding blocks. For $\neg$, swapping $(T, M) \leftrightarrow (F, K)$ and fixing $(C, U, I)$ maps under $\pi_Q$ to the standard $(T \mid G \mid F) \mapsto (F \mid G \mid T)$. Thus $\pi_Q$ is a homomorphism. Since $\sigma_Q$ refines blocks without changing component values (or splits them by fixed indices), it commutes with componentwise operators and gives an embedding. Strictness follows because different distributions between $M$ and $T$ (or between $K$ and $F$, or among $U, I$) collapse under $\pi_Q$. □
Lemma 4 
(Merge/Split between n–RHNL and n–RPNL). Let $p_P := p + m$, $f_P := f + k$, and $r_P := c + u + i$. Define
\[ \pi_P(T \mid M \mid C \mid U \mid I \mid K \mid F) := \big((T \Vert M) \mid C \mid I \mid U \mid (F \Vert K)\big) \in [0,1]^{p_P + c + i + u + f_P}. \]
Conversely, fix index splits of the first and last blocks into $(J_T, J_M)$ and $(J_F, J_K)$ with sizes $(p, m)$ and $(f, k)$, and split the indeterminacy middle into $J_C, J_I, J_U$ with sizes $(c, i, u)$. Then define $\sigma_P$ by placing the corresponding coordinates back into $(T, M, C, U, I, K, F)$ in that order. One has $\pi_P \circ \sigma_P = \mathrm{id}$.
Proof. 
Put $p_P := p + m$, $f_P := f + k$, $r_P := c + u + i$. Fix disjoint index partitions
\[ J_T \sqcup J_M = \{1, \ldots, p_P\}, \qquad J_F \sqcup J_K = \{1, \ldots, f_P\}, \]
with $|J_T| = p$, $|J_M| = m$, $|J_F| = f$, $|J_K| = k$. (Each restriction lists coordinates in increasing index order.)
Let $(T \mid C \mid I \mid U \mid F) \in [0,1]^{p_P + c + i + u + f_P}$ be arbitrary. By definition of the split map $\sigma_P$ (placing coordinates back into the heptapartitioned order $(T, M, C, U, I, K, F)$), we set
\[ \sigma_P(T \mid C \mid I \mid U \mid F) = (T_{J_T} \mid T_{J_M} \mid C \mid U \mid I \mid F_{J_K} \mid F_{J_F}) \in [0,1]^{p+m+c+u+i+k+f}. \]
Now apply the merge $\pi_P$:
\[ \pi_P\big(\sigma_P(T \mid C \mid I \mid U \mid F)\big) = \big(T_{J_T} \Vert T_{J_M} \mid C \mid I \mid U \mid F_{J_F} \Vert F_{J_K}\big). \]
Because $J_T \sqcup J_M$ and $J_F \sqcup J_K$ are disjoint partitions of their index sets, we have
\[ T_{J_T} \Vert T_{J_M} = T, \qquad F_{J_F} \Vert F_{J_K} = F. \]
Hence
\[ (\pi_P \circ \sigma_P)(T \mid C \mid I \mid U \mid F) = (T \mid C \mid I \mid U \mid F) \]
for all inputs, i.e., $\pi_P \circ \sigma_P = \mathrm{id}_{[0,1]^{p_P + c + i + u + f_P}}$. In particular, $\sigma_P$ is injective (it has a left inverse) and $\pi_P$ is surjective (it has a right inverse). □
Theorem 4 
(n–RHNL generalizes n–RPNL). For valuations defined with the same pair $(t, s)$, the merge $\pi_P$ is a homomorphism for $\wedge, \vee, \neg$. Hence n–RHNL reduces to n–RPNL under $\pi_P$ and strictly extends it via $\sigma_P$.
Proof. 
Identical in spirit to Theorem 3. Under $\wedge$, the truth-like blocks $T, M$ use $t$ (carried to the $T$–block of n–RPNL), while $C, I, U, K, F$ use $s$ (carried to $C, G, U, F$ respectively, with $K$ merged into $F$). Under $\vee$, the roles of $t$ and $s$ are swapped consistently with the target definitions. The negation swap in heptapartitioned form becomes the usual $T \leftrightarrow F$ with $C, G, U$ fixed after merging. Strictness follows from the loss of refinement when collapsing $(T, M)$ into $T$, $(F, K)$ into $F$, and keeping $C, G, U$ only three–way. □
Theorem 5 
(Recovery of (scalar) Heptapartitioned Neutrosophic Logic). Setting all block lengths to 1 ($p = m = c = u = i = k = f = 1$) turns each block into a scalar, $(T, M, C, U, I, K, F) \in [0,1]^7$. Then the operators of Definition 8 reduce to
\[ \wedge : \big((T, M; C, U, I; K, F), (T', M'; C', U', I'; K', F')\big) \mapsto \big(t(T, T'), t(M, M'); s(C, C'), s(U, U'), s(I, I'); s(K, K'), s(F, F')\big), \]
\[ \vee : \big((T, M; C, U, I; K, F), (T', M'; C', U', I'; K', F')\big) \mapsto \big(s(T, T'), s(M, M'); t(C, C'), t(U, U'), t(I, I'); t(K, K'), t(F, F')\big), \]
\[ \neg : (T, M; C, U, I; K, F) \mapsto (F, K; C, U, I; M, T), \]
which is the standard (unrefined) heptapartitioned behavior: truth–like ($T, M$) act dually to falsity–like ($F, K$), while the three indeterminacy facets ($C, U, I$) are fixed by negation and behave conjunctively/disjunctively via $s/t$. Therefore n–RHNL specializes to Heptapartitioned Neutrosophic Logic.
Proof. 
Immediate by substituting block length 1 and reading Definition 8 componentwise. □

2.4. Iterative Refined Neutrosophic Logic (Refined of .... of Refined)

Iterative Refined Neutrosophic Logic (IRNL) provides a schema that repeatedly splits truth-, indeterminacy-, and falsity-like components, applies t-norm/t-conorm connectives, and guarantees functorial stability across refinements.
Definition 9 
(t–norm / t–conorm). A t–norm is a map $t : [0,1]^2 \to [0,1]$ that is associative, commutative, monotone in each argument, and satisfies $t(x, 1) = x$. Its (De Morgan) dual t–conorm $s : [0,1]^2 \to [0,1]$ is associative, commutative, monotone, with $s(x, 0) = x$. Typical choices are $t = \min$ with $s = \max$, or the product $t(x, y) = xy$ with $s(x, y) = x + y - xy$.
Definition 10 
(Refinement schema). An iterative refinement schema is a tuple
\[ \Sigma = (\mathcal{T}, \mathcal{I}, \mathcal{F}, \delta), \]
where $\mathcal{T}, \mathcal{I}, \mathcal{F}$ are finite, pairwise disjoint index sets of truth–like, indeterminacy–like, and falsity–like leaves, and $\delta : \mathcal{T} \to \mathcal{F}$ is a fixed bijection (the duality pairing). Write $\mathcal{L} := \mathcal{T} \cup \mathcal{I} \cup \mathcal{F}$ and
\[ V_\Sigma := [0,1]^{\mathcal{L}} = \big\{ v = (v_\ell)_{\ell \in \mathcal{L}} : v_\ell \in [0,1] \big\}. \]
For $v \in V_\Sigma$ we denote its restrictions by $v|_{\mathcal{T}}, v|_{\mathcal{I}}, v|_{\mathcal{F}}$.
Example 14 
(Refinement schema in a smart–thermostat decision). Scenario. A smart thermostat decides whether to heat now. We build a refinement schema
\[ \Sigma = (\mathcal{T}, \mathcal{I}, \mathcal{F}, \delta), \]
where the leaves and the duality pairing are
\[ \mathcal{T} = \{T_{\mathrm{dmd}}, T_{\mathrm{wthr}}\}, \qquad \mathcal{I} = \{I_{\mathrm{sens}}\}, \qquad \mathcal{F} = \{F_{\mathrm{sol}}, F_{\mathrm{occ}}\}, \]
\[ \delta(T_{\mathrm{dmd}}) = F_{\mathrm{sol}}, \qquad \delta(T_{\mathrm{wthr}}) = F_{\mathrm{occ}}. \]
Here, $T_{\mathrm{dmd}}$ = "heat demand from setpoint gap", $T_{\mathrm{wthr}}$ = "cold weather", $I_{\mathrm{sens}}$ = "sensor uncertainty", $F_{\mathrm{sol}}$ = "incoming solar gain", and $F_{\mathrm{occ}}$ = "low occupancy (discourages heating)".
A concrete state. On a winter morning, the thermostat aggregates signals into
\[ v \in V_\Sigma = [0,1]^{\mathcal{T} \cup \mathcal{I} \cup \mathcal{F}}, \qquad v = (0.78, 0.65 \mid 0.12 \mid 0.30, 0.10), \]
ordered as $(T_{\mathrm{dmd}}, T_{\mathrm{wthr}} \mid I_{\mathrm{sens}} \mid F_{\mathrm{sol}}, F_{\mathrm{occ}})$. Thus $v|_{\mathcal{T}} = (0.78, 0.65)$, $v|_{\mathcal{I}} = (0.12)$, $v|_{\mathcal{F}} = (0.30, 0.10)$. The duality $\delta$ pairs truth–like demand with solar counter–evidence, and truth–like weather with occupancy counter–evidence, preparing for negation in IRNL.
Definition 11 
(Iterative Refined Neutrosophic Logic (IRNL)). Fix a language of formulas $\mathrm{Form}$, a refinement schema $\Sigma$ as in Definition 10, and a pair $(t, s)$ consisting of a t–norm and its dual t–conorm. An IRNL valuation is a map
\[ \mathrm{Val} : \mathrm{Form} \to V_\Sigma, \]
together with connectives defined coordinatewise on $\mathcal{L}$ by:
\[ \text{Conjunction:} \quad \mathrm{Val}(\varphi \wedge \psi)_\ell = \begin{cases} t\big(\mathrm{Val}(\varphi)_\ell, \mathrm{Val}(\psi)_\ell\big), & \ell \in \mathcal{T} \ (\text{truth–like}); \\ s\big(\mathrm{Val}(\varphi)_\ell, \mathrm{Val}(\psi)_\ell\big), & \ell \in \mathcal{I} \cup \mathcal{F} \ (\text{indet./falsity–like}). \end{cases} \]
\[ \text{Disjunction:} \quad \mathrm{Val}(\varphi \vee \psi)_\ell = \begin{cases} s\big(\mathrm{Val}(\varphi)_\ell, \mathrm{Val}(\psi)_\ell\big), & \ell \in \mathcal{T}; \\ t\big(\mathrm{Val}(\varphi)_\ell, \mathrm{Val}(\psi)_\ell\big), & \ell \in \mathcal{I} \cup \mathcal{F}. \end{cases} \]
\[ \text{Negation:} \quad \mathrm{Val}(\neg \varphi)_\ell = \begin{cases} \mathrm{Val}(\varphi)_{\delta^{-1}(\ell)}, & \ell \in \mathcal{F} \ (\text{swap truth} \leftrightarrow \text{falsity}); \\ \mathrm{Val}(\varphi)_{\delta(\ell)}, & \ell \in \mathcal{T}; \\ \mathrm{Val}(\varphi)_\ell, & \ell \in \mathcal{I} \ (\text{indeterminacy fixed}). \end{cases} \]
(If a strong negation is desired, additionally apply $x \mapsto 1 - x$ in the swapped coordinates.)
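The coordinatewise rules above translate directly into code. The following sketch is our own dictionary encoding (leaf names follow Example 14, and the Gödel pair is the default); it is reused after Theorem 6 to test functoriality.

# IRNL connectives over a leaf-indexed dictionary (a sketch).
T_LEAVES = {"T_dmd", "T_wthr"}                      # truth-like leaves
DELTA = {"T_dmd": "F_sol", "T_wthr": "F_occ"}       # duality pairing delta
DELTA_INV = {f: u for u, f in DELTA.items()}

def conj_irnl(u, v, t=min, s=max):
    """AND: t on truth-like leaves, s on indeterminacy/falsity-like leaves."""
    return {l: (t if l in T_LEAVES else s)(u[l], v[l]) for l in u}

def disj_irnl(u, v, t=min, s=max):
    """OR: s on truth-like leaves, t elsewhere."""
    return {l: (s if l in T_LEAVES else t)(u[l], v[l]) for l in u}

def neg_irnl(u):
    """NOT: exchange each truth leaf's value with its delta-partner; fix I."""
    out = {}
    for l, x in u.items():
        if l in DELTA:
            out[DELTA[l]] = x
        elif l in DELTA_INV:
            out[DELTA_INV[l]] = x
        else:
            out[l] = x
    return out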
Example 15 
(IRNL on the same schema with explicit calculations). Fix the Gödel pair $(t, s) = (\min, \max)$. Consider two formulas:
\[ \varphi := \text{heat now is appropriate}, \qquad \psi := \text{a cold front is imminent}. \]
Using the schema of Example 14, assign valuations
\[ \mathrm{Val}(\varphi) = (0.78, 0.65 \mid 0.12 \mid 0.30, 0.10), \qquad \mathrm{Val}(\psi) = (0.55, 0.80 \mid 0.15 \mid 0.20, 0.05). \]
Conjunction $\varphi \wedge \psi$ (truth–like by $\min$, indeterminacy/falsity–like by $\max$):
\[ T = (\min(0.78, 0.55), \min(0.65, 0.80)) = (0.55, 0.65), \]
\[ I = \max(0.12, 0.15) = 0.15, \qquad F = (\max(0.30, 0.20), \max(0.10, 0.05)) = (0.30, 0.10), \]
hence
\[ \mathrm{Val}(\varphi \wedge \psi) = (0.55, 0.65 \mid 0.15 \mid 0.30, 0.10). \]
Disjunction $\varphi \vee \psi$ (truth–like by $\max$, indeterminacy/falsity–like by $\min$):
\[ T = (\max(0.78, 0.55), \max(0.65, 0.80)) = (0.78, 0.80), \]
\[ I = \min(0.12, 0.15) = 0.12, \qquad F = (\min(0.30, 0.20), \min(0.10, 0.05)) = (0.20, 0.05), \]
so
\[ \mathrm{Val}(\varphi \vee \psi) = (0.78, 0.80 \mid 0.12 \mid 0.20, 0.05). \]
Negation $\neg \varphi$ (swap via $\delta$; $T_{\mathrm{dmd}} \leftrightarrow F_{\mathrm{sol}}$, $T_{\mathrm{wthr}} \leftrightarrow F_{\mathrm{occ}}$; keep $I_{\mathrm{sens}}$):
\[ \mathrm{Val}(\neg \varphi) = (\underbrace{0.30, 0.10}_{\text{from } (F_{\mathrm{sol}},\, F_{\mathrm{occ}})} \mid 0.12 \mid \underbrace{0.78, 0.65}_{\text{from } (T_{\mathrm{dmd}},\, T_{\mathrm{wthr}})}). \]
These computations make explicit the coordinatewise action of IRNL on a real household control task.
Definition 12 
(One refinement step). Let $\Sigma = (\mathcal{T}, \mathcal{I}, \mathcal{F}, \delta)$ be a schema. Choose a finite family of disjoint leaf blocks $B \subseteq \mathcal{T} \cup \mathcal{I} \cup \mathcal{F}$ to be refined. For each $\beta \in B$, select an integer $k(\beta) \ge 1$ and create children indices $\beta^{(1)}, \ldots, \beta^{(k(\beta))}$ of the same type as $\beta$. Obtain a new schema
\[ \Sigma' = (\mathcal{T}', \mathcal{I}', \mathcal{F}', \delta') \]
by replacing each $\beta$ with its children inside the appropriate set ($\mathcal{T}$ or $\mathcal{I}$ or $\mathcal{F}$), and extend the duality pairing by refining paired leaves symmetrically: if $\beta \in \mathcal{T}$ is refined, also refine $\delta(\beta) \in \mathcal{F}$ with the same $k(\beta)$, and set $\delta'(\beta^{(j)}) := \delta(\beta)^{(j)}$ for $j = 1, \ldots, k(\beta)$. (Leaves not in $B$ and their pairings are kept unchanged.)
Example 16 
(One refinement step: splitting a weather leaf and its dual). Starting from $\Sigma$ in Example 14, choose the refinement set
\[ B = \{T_{\mathrm{wthr}}, F_{\mathrm{occ}}\}, \]
and set $k(\beta) = 2$ for each $\beta \in B$. Create children
\[ T_{\mathrm{wthr}}^{(1)} = T_{\mathrm{wthr}A} \ (\text{forecast model A}), \qquad T_{\mathrm{wthr}}^{(2)} = T_{\mathrm{wthr}B} \ (\text{forecast model B}), \]
\[ F_{\mathrm{occ}}^{(1)} = F_{\mathrm{cal}} \ (\text{calendar non-occupancy}), \qquad F_{\mathrm{occ}}^{(2)} = F_{\mathrm{mot}} \ (\text{no motion detected}). \]
Define the refined schema $\Sigma' = (\mathcal{T}', \mathcal{I}', \mathcal{F}', \delta')$ by replacing $T_{\mathrm{wthr}}$ with its two children in $\mathcal{T}'$, and $F_{\mathrm{occ}}$ with its two children in $\mathcal{F}'$, keeping other leaves unchanged. Extend the duality pairing symmetrically:
\[ \delta'(T_{\mathrm{wthr}A}) = F_{\mathrm{cal}}, \qquad \delta'(T_{\mathrm{wthr}B}) = F_{\mathrm{mot}}, \]
and keep $\delta'(T_{\mathrm{dmd}}) = F_{\mathrm{sol}}$, with $\delta'$ unchanged on non–refined pairs. This respects Definition 12 by refining a truth leaf and its falsity dual with the same arity.
Definition 13 
(Split map (canonical embedding)). The refinement step $\Sigma \rightsquigarrow \Sigma'$ induces a split (or refinement) map $\sigma : V_\Sigma \to V_{\Sigma'}$ defined by
\[ \sigma(v)_{\beta^{(j)}} := v_\beta \ \text{ for every refined } \beta \in B \text{ and } j = 1, \ldots, k(\beta), \qquad \sigma(v)_\ell := v_\ell \ \text{ for all non-refined } \ell. \]
Thus each refined coordinate is replicated across its children.
Example 17 
(Split map (canonical embedding) with explicit numbers). Let $v = (0.78, 0.65 \mid 0.12 \mid 0.30, 0.10) \in V_\Sigma$ be as in Example 14, with ordering $(T_{\mathrm{dmd}}, T_{\mathrm{wthr}} \mid I_{\mathrm{sens}} \mid F_{\mathrm{sol}}, F_{\mathrm{occ}})$. The split map $\sigma : V_\Sigma \to V_{\Sigma'}$ from Example 16 replicates each refined coordinate across its children:
\[ \sigma(v)_{T_{\mathrm{wthr}A}} = \sigma(v)_{T_{\mathrm{wthr}B}} = v_{T_{\mathrm{wthr}}} = 0.65, \qquad \sigma(v)_{F_{\mathrm{cal}}} = \sigma(v)_{F_{\mathrm{mot}}} = v_{F_{\mathrm{occ}}} = 0.10, \]
and leaves other coordinates unchanged:
\[ \sigma(v)_{T_{\mathrm{dmd}}} = 0.78, \qquad \sigma(v)_{I_{\mathrm{sens}}} = 0.12, \qquad \sigma(v)_{F_{\mathrm{sol}}} = 0.30. \]
Thus, in the ordered block
\[ (T_{\mathrm{dmd}}, T_{\mathrm{wthr}A}, T_{\mathrm{wthr}B} \mid I_{\mathrm{sens}} \mid F_{\mathrm{sol}}, F_{\mathrm{cal}}, F_{\mathrm{mot}}), \]
the embedded vector is
\[ \sigma(v) = (0.78, 0.65, 0.65 \mid 0.12 \mid 0.30, 0.10, 0.10), \]
which satisfies Definition 13 exactly: every refined child inherits its parent's value, and all non-refined leaves are preserved verbatim.
Theorem 6 
(Functoriality under refinement). Let $\mathrm{Val} : \mathrm{Form} \to V_\Sigma$ be an IRNL valuation and $\sigma : V_\Sigma \to V_{\Sigma'}$ the split map of Definition 13. Define $\mathrm{Val}' := \sigma \circ \mathrm{Val} : \mathrm{Form} \to V_{\Sigma'}$. Then for all formulas $\varphi, \psi$,
\[ \mathrm{Val}'(\varphi \wedge \psi) = \mathrm{Val}'(\varphi) \wedge \mathrm{Val}'(\psi), \qquad \mathrm{Val}'(\varphi \vee \psi) = \mathrm{Val}'(\varphi) \vee \mathrm{Val}'(\psi), \qquad \mathrm{Val}'(\neg \varphi) = \neg \mathrm{Val}'(\varphi), \]
where the connectives on $V_{\Sigma'}$ are those of Definition 11 built from $(t, s)$ and the refined schema $\Sigma'$. In particular, refining after evaluating equals evaluating after refining.
Proof. 
All operations are componentwise. If $\beta$ is refined, both $\mathrm{Val}(\varphi)$ and $\mathrm{Val}(\psi)$ assign the same parent values $v_\beta, w_\beta$ duplicated across $\beta^{(1)}, \ldots, \beta^{(k)}$. For $\wedge$, each child coordinate computes either $t(v_\beta, w_\beta)$ (truth–like) or $s(v_\beta, w_\beta)$ (indet./falsity–like), which is exactly the duplication of the parent result; hence $\sigma(\mathrm{Val}(\varphi \wedge \psi)) = \mathrm{Val}'(\varphi) \wedge \mathrm{Val}'(\psi)$. The $\vee$ case is identical with $t \leftrightarrow s$ on the corresponding types. For $\neg$, the symmetric refinement of paired leaves guarantees that swapping via $\delta'$ acts child-by-child as the swap via $\delta$ followed by replication. Thus $\sigma(\mathrm{Val}(\neg \varphi)) = \neg \mathrm{Val}'(\varphi)$. Non-refined coordinates are untouched in all cases. □
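The commuting square of Theorem 6 is easy to test numerically. The sketch below (our encoding; it reuses conj_irnl from the sketch after Definition 11 and the refinement of Example 16) checks that refining after evaluating equals evaluating after refining.

# One refinement step (Example 16): T_wthr -> {T_wthrA, T_wthrB} and
# F_occ -> {F_cal, F_mot}; sigma replicates each refined parent value.
CHILDREN = {"T_wthr": ["T_wthrA", "T_wthrB"], "F_occ": ["F_cal", "F_mot"]}

def split(v):
    """sigma: V_Sigma -> V_Sigma', duplicating refined coordinates."""
    out = {}
    for l, x in v.items():
        for child in CHILDREN.get(l, [l]):
            out[child] = x
    return out

T_LEAVES_REF = {"T_dmd", "T_wthrA", "T_wthrB"}   # truth-like leaves in Sigma'

def conj_ref(u, v, t=min, s=max):
    """AND on the refined schema Sigma'."""
    return {l: (t if l in T_LEAVES_REF else s)(u[l], v[l]) for l in u}

phi = {"T_dmd": 0.78, "T_wthr": 0.65, "I_sens": 0.12, "F_sol": 0.30, "F_occ": 0.10}
psi = {"T_dmd": 0.55, "T_wthr": 0.80, "I_sens": 0.15, "F_sol": 0.20, "F_occ": 0.05}

# Theorem 6: sigma(Val(phi AND psi)) equals sigma(Val(phi)) AND sigma(Val(psi)).
assert split(conj_irnl(phi, psi)) == conj_ref(split(phi), split(psi))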
Corollary 1 
(Iterative ("refined of refined") consistency). By repeating Definition 12 a finite number of times, one obtains a chain
\[ \Sigma_0 \rightsquigarrow \Sigma_1 \rightsquigarrow \cdots \rightsquigarrow \Sigma_m, \]
with split maps $\sigma_j : V_{\Sigma_{j-1}} \to V_{\Sigma_j}$. The composite $\sigma_m \circ \cdots \circ \sigma_1$ is a homomorphism for $\wedge, \vee, \neg$; hence the IRNL semantics is stable under arbitrary finite iterations of refinement.
Remark 2 
(Relation to n–refined logics). If $\mathcal{T}, \mathcal{I}, \mathcal{F}$ are partitioned into blocks of sizes $p, r, s$ respectively (a single refinement level), then $V_\Sigma \cong [0,1]^{p+r+s}$ and Definition 11 specializes to the usual n–refined neutrosophic semantics (with $n = p + r + s$). The iterative scheme above permits arbitrarily many subsequent splits of any coordinate while preserving the connective behavior.
In addition to the examples discussed so far, two further concrete examples are provided below.
Example 18 
(IRNL in Medical Diagnosis with One Refinement Step). Schema. Let the refinement schema be
\[ \mathcal{T} = \{T_{\mathrm{lab}}, T_{\mathrm{img}}\}, \qquad \mathcal{I} = \{I_{\mathrm{amb}}, I_{\mathrm{miss}}\}, \qquad \mathcal{F} = \{F_{\mathrm{lab}}, F_{\mathrm{img}}\}, \]
with the duality pairing $\delta(T_{\mathrm{lab}}) = F_{\mathrm{lab}}$ and $\delta(T_{\mathrm{img}}) = F_{\mathrm{img}}$. We fix $t = \min$ and $s = \max$ and order coordinates as
\[ (T_{\mathrm{lab}}, T_{\mathrm{img}} \mid I_{\mathrm{amb}}, I_{\mathrm{miss}} \mid F_{\mathrm{lab}}, F_{\mathrm{img}}). \]
Atoms. Let $p$ mean "Disease A is present" and $q$ mean "Disease B is present". Suppose the current evidential degrees are:
\[ \mathrm{Val}(p) = (0.70, 0.60 \mid 0.20, 0.10 \mid 0.10, 0.20), \qquad \mathrm{Val}(q) = (0.50, 0.80 \mid 0.10, 0.30 \mid 0.20, 0.10). \]
Connectives (before refinement).
\[ \mathrm{Val}(p \wedge q) = (\min(0.70, 0.50), \min(0.60, 0.80) \mid \max(0.20, 0.10), \max(0.10, 0.30) \mid \max(0.10, 0.20), \max(0.20, 0.10)) = (0.50, 0.60 \mid 0.20, 0.30 \mid 0.20, 0.20), \]
\[ \mathrm{Val}(p \vee q) = (\max(0.70, 0.50), \max(0.60, 0.80) \mid \min(0.20, 0.10), \min(0.10, 0.30) \mid \min(0.10, 0.20), \min(0.20, 0.10)) = (0.70, 0.80 \mid 0.10, 0.10 \mid 0.10, 0.10), \]
\[ \mathrm{Val}(\neg p) = \text{the } \delta\text{-swap of } \mathrm{Val}(p) \text{ onto } (F_{\mathrm{lab}}, F_{\mathrm{img}} \mid I_{\mathrm{amb}}, I_{\mathrm{miss}} \mid T_{\mathrm{lab}}, T_{\mathrm{img}}) = (0.10, 0.20 \mid 0.20, 0.10 \mid 0.70, 0.60). \]
Iterative refinement step. Refine the truth–falsity pair $(T_{\mathrm{lab}}, F_{\mathrm{lab}})$ into two children each:
\[ T_{\mathrm{lab}} \rightsquigarrow \{T_{\mathrm{lab}}^{(1)}, T_{\mathrm{lab}}^{(2)}\}, \qquad F_{\mathrm{lab}} \rightsquigarrow \{F_{\mathrm{lab}}^{(1)}, F_{\mathrm{lab}}^{(2)}\}, \]
and extend the pairing by $\delta'(T_{\mathrm{lab}}^{(j)}) = F_{\mathrm{lab}}^{(j)}$ ($j = 1, 2$). The split map $\sigma$ replicates the parent coordinate into its children:
\[ \sigma(\mathrm{Val}(p)) = (0.70, 0.70, 0.60 \mid 0.20, 0.10 \mid 0.10, 0.10, 0.20), \]
in the refined order $(T_{\mathrm{lab}}^{(1)}, T_{\mathrm{lab}}^{(2)}, T_{\mathrm{img}} \mid I_{\mathrm{amb}}, I_{\mathrm{miss}} \mid F_{\mathrm{lab}}^{(1)}, F_{\mathrm{lab}}^{(2)}, F_{\mathrm{img}})$, and similarly
\[ \sigma(\mathrm{Val}(q)) = (0.50, 0.50, 0.80 \mid 0.10, 0.30 \mid 0.20, 0.20, 0.10). \]
Connectives (after refinement). Using the same $t = \min$, $s = \max$ componentwise,
\[ \mathrm{Val}'(p \wedge q) = \sigma(\mathrm{Val}(p)) \wedge \sigma(\mathrm{Val}(q)) = (\min(0.70, 0.50), \min(0.70, 0.50), \min(0.60, 0.80) \mid \max(0.20, 0.10), \max(0.10, 0.30) \mid \max(0.10, 0.20), \max(0.10, 0.20), \max(0.20, 0.10)) = (0.50, 0.50, 0.60 \mid 0.20, 0.30 \mid 0.20, 0.20, 0.20). \]
On the other hand,
\[ \sigma(\mathrm{Val}(p \wedge q)) = \sigma\big((0.50, 0.60 \mid 0.20, 0.30 \mid 0.20, 0.20)\big) = (0.50, 0.50, 0.60 \mid 0.20, 0.30 \mid 0.20, 0.20, 0.20), \]
which coincides with $\mathrm{Val}'(p \wedge q)$, illustrating the refinement invariance $\sigma(\mathrm{Val}(\cdot)) = \mathrm{Val}'(\cdot)$ for $\wedge$. The same equality holds for $\vee$ and $\neg$ (omitted for brevity).
Example 19 
(IRNL in Autonomous Driving with Indeterminacy Refinement). Schema. Let
\[ \mathcal{T} = \{T_{\mathrm{cam}}, T_{\mathrm{lid}}, T_{\mathrm{rad}}\}, \qquad \mathcal{I} = \{I_{\mathrm{occl}}, I_{\mathrm{weather}}\}, \qquad \mathcal{F} = \{F_{\mathrm{cam}}, F_{\mathrm{lid}}, F_{\mathrm{rad}}\}, \]
paired as $\delta(T_x) = F_x$ for $x \in \{\mathrm{cam}, \mathrm{lid}, \mathrm{rad}\}$. Again we take $t = \min$, $s = \max$ and order coordinates as
\[ (T_{\mathrm{cam}}, T_{\mathrm{lid}}, T_{\mathrm{rad}} \mid I_{\mathrm{occl}}, I_{\mathrm{weather}} \mid F_{\mathrm{cam}}, F_{\mathrm{lid}}, F_{\mathrm{rad}}). \]
Atoms. Let $o$ denote "Obstacle ahead" and $l$ denote "Left lane is free". Assume
\[ \mathrm{Val}(o) = (0.80, 0.90, 0.60 \mid 0.30, 0.20 \mid 0.10, 0.05, 0.20), \qquad \mathrm{Val}(l) = (0.40, 0.30, 0.50 \mid 0.20, 0.40 \mid 0.40, 0.50, 0.30). \]
Negation swaps the truth/falsity blocks (same pairing) and keeps indeterminacy:
\[ \mathrm{Val}(\neg l) = (0.40, 0.50, 0.30 \mid 0.20, 0.40 \mid 0.40, 0.30, 0.50). \]
A compound: $o \wedge \neg l$ ("obstacle ahead and not left-free"):
\[ \mathrm{Val}(o \wedge \neg l) = (\min(0.80, 0.40), \min(0.90, 0.50), \min(0.60, 0.30) \mid \max(0.30, 0.20), \max(0.20, 0.40) \mid \max(0.10, 0.40), \max(0.05, 0.30), \max(0.20, 0.50)) = (0.40, 0.50, 0.30 \mid 0.30, 0.40 \mid 0.40, 0.30, 0.50). \]
Iterative refinement step (on indeterminacy). Split $I_{\mathrm{occl}}$ into two children of the same type:
\[ I_{\mathrm{occl}} \rightsquigarrow \{I_{\mathrm{occl}}^{(s)} \ (\text{static occlusion}), \; I_{\mathrm{occl}}^{(d)} \ (\text{dynamic occlusion})\}, \]
keeping all other leaves unchanged (no dual pairing is needed for $\mathcal{I}$). The split map $\sigma$ duplicates the occlusion value:
\[ \sigma(\mathrm{Val}(o)) = (0.80, 0.90, 0.60 \mid 0.30, 0.30, 0.20 \mid 0.10, 0.05, 0.20), \qquad \sigma(\mathrm{Val}(\neg l)) = (0.40, 0.50, 0.30 \mid 0.20, 0.20, 0.40 \mid 0.40, 0.30, 0.50), \]
where the refined order is
\[ (T_{\mathrm{cam}}, T_{\mathrm{lid}}, T_{\mathrm{rad}} \mid I_{\mathrm{occl}}^{(s)}, I_{\mathrm{occl}}^{(d)}, I_{\mathrm{weather}} \mid F_{\mathrm{cam}}, F_{\mathrm{lid}}, F_{\mathrm{rad}}). \]
Connective after refinement.
\[ \mathrm{Val}'(o \wedge \neg l) = \sigma(\mathrm{Val}(o)) \wedge \sigma(\mathrm{Val}(\neg l)) = (\min(0.80, 0.40), \min(0.90, 0.50), \min(0.60, 0.30) \mid \max(0.30, 0.20), \max(0.30, 0.20), \max(0.20, 0.40) \mid \max(0.10, 0.40), \max(0.05, 0.30), \max(0.20, 0.50)) = (0.40, 0.50, 0.30 \mid 0.30, 0.30, 0.40 \mid 0.40, 0.30, 0.50). \]
Applying $\sigma$ to the pre-refinement result gives
\[ \sigma(\mathrm{Val}(o \wedge \neg l)) = (0.40, 0.50, 0.30 \mid 0.30, 0.30, 0.40 \mid 0.40, 0.30, 0.50), \]
which matches $\mathrm{Val}'(o \wedge \neg l)$ exactly. Hence the evaluation commutes with the refinement step also when only indeterminacy leaves are split.

3. Conclusions

In this paper, we introduced an n–Refined Neutrosophic Logic (n–RNL) that decomposes the classical truth–indeterminacy–falsity triplet into interpretable subcomponents. Within this framework we instantiated refined quadripartitioned, pentapartitioned, and heptapartitioned families that explicitly separate contradiction, ignorance, unknown, and relative (context–dependent) truth/falsity. Logical connectives are defined blockwise by a chosen t–norm / t–conorm pair, preserving associativity, commutativity, and monotonicity componentwise. We proved merge/split homomorphisms that (i) reduce refined models to their unrefined counterparts without breaking algebraic structure, and (ii) embed unrefined semantics faithfully into refined spaces. We further proposed Iterative Refined Neutrosophic Logic (IRNL), which supports repeated, symmetric refinements and guarantees functorial stability of evaluations: refining then evaluating yields the same result as evaluating then refining.
As future work, we plan to extend this program over richer combinatorial and set–theoretic substrates, including graphs [52,53], hypergraphs [54,55,56], superhypergraphs [57,58,59,60,61], plithogenic sets [62,63,64,65], HyperFuzzy Sets [66,67,68,69,70,71], Soft Sets [72,73,74], Rough Sets [75,76,77], Chemical Graphs [78,79], and hyperneutrosophic sets [80,81]. We expect these directions to enable more expressive modeling of heterogeneous evidence, multi–source contradictions, and context–sensitive reasoning at scale.

Research Integrity

The author confirms that this manuscript is original, has not been published elsewhere, and is not under consideration by any other journal.

Use of Computational Tools

All proofs and derivations were performed manually; no computational software (e.g., Mathematica, SageMath, Coq) was used.

Code Availability

No code or software was developed for this study.

Ethical Approval

This research did not involve human participants or animals, and therefore did not require ethical approval.

Use of Generative AI and AI-Assisted Tools

We use generative AI and AI-assisted tools for tasks such as English grammar checking, and we do not employ them in any way that violates ethical standards.

Disclaimer

The ideas presented here are theoretical and have not yet been validated through empirical testing. While we have strived for accuracy and proper citation, inadvertent errors may remain. Readers should verify any referenced material independently. The opinions expressed are those of the authors and do not necessarily reflect the views of their institutions.

Funding

No external funding was received for this work.

Data Availability Statement

This paper is theoretical and did not generate or analyze any empirical data. We welcome future studies that apply and test these concepts in practical settings.

Acknowledgments

We thank all colleagues, reviewers, and readers whose comments and questions have greatly improved this manuscript. We are also grateful to the authors of the works cited herein for providing the theoretical foundations that underpin our study. Finally, we appreciate the institutional and technical support that enabled this research.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this work.

References

1. Smarandache, F. Neutrosophy: neutrosophic probability, set, and logic: analytic synthesis & synthetic analysis, 1998.
2. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single valued neutrosophic sets; Infinite Study, 2010.
3. Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Single valued neutrosophic graphs. Journal of New Theory 2016, 86–101.
4. Muhiuddin, G.; Hussain, S.S.; Nagarajan, D. Quadripartitioned Bipolar Neutrosophic Competition Graph with Novel Application. Neutrosophic Sets and Systems 2025, 82, 8.
5. Akram, M.; Sarwar, M.; Dudek, W.A. Bipolar neutrosophic graph structures. In Graphs for the Analysis of Bipolar Fuzzy Information; 2021; pp. 393–446.
6. Akram, M.; Sarwar, M. New applications of m-polar fuzzy competition graphs. New Mathematics and Natural Computation 2018, 14, 249–276.
7. Zhao, H.; Zhang, H.Y. On hesitant neutrosophic rough set over two universes and its application. Artificial Intelligence Review 2020, 53, 4387–4406.
8. Zhao, H.; Zhang, H. On hesitant neutrosophic rough set over two universes and its application. Artificial Intelligence Review 2019, 53, 4387–4406.
9. Pang, Y.; Yang, W. Hesitant Neutrosophic Linguistic Sets and Their Application in Multiple Attribute Decision Making. Information 2018, 9, 88.
10. Ali, S.; Ali, A.; Azim, A.B.; Aloqaily, A.; Mlaiki, N. Utilizing aggregation operators based on q-rung orthopair neutrosophic soft sets and their applications in multi-attributes decision making problems. Heliyon 2024.
11. Santhoshkumar, S.; Aldring, J.; Ajay, D. Analyzing Aggregation Operators on Complex q-Rung Orthopair Neutrosophic Sets with their Application. In Proceedings of the International Conference on Intelligent and Fuzzy Systems; Springer, 2024; pp. 744–751.
12. Voskoglou, M.G.; Smarandache, F.; Mohamed, M. q-Rung Neutrosophic Sets and Topological Spaces; Infinite Study, 2024.
13. Zadeh, L.A. Fuzzy sets. Information and Control 1965, 8, 338–353.
14. Mordeson, J.N.; Nair, P.S. Fuzzy graphs and fuzzy hypergraphs; Vol. 46, Physica, 2012.
15. Atanassov, K.T.; Gargov, G. Intuitionistic fuzzy logics; Springer, 2017.
16. Atanassov, K.; Gargov, G. Elements of intuitionistic fuzzy logic. Part I. Fuzzy Sets and Systems 1998, 95, 39–52.
17. Lu, A.; Ng, W. Vague sets or intuitionistic fuzzy sets for handling vague data: which one is better? In Proceedings of the International Conference on Conceptual Modeling; Springer, 2005; pp. 401–416.
18. Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 2009 IEEE International Conference on Fuzzy Systems; IEEE, 2009; pp. 1378–1382.
19. Xu, Z. Hesitant fuzzy sets theory; Vol. 314, Springer, 2014.
20. Smarandache, F. A unifying field in Logics: Neutrosophic Logic. In Philosophy; American Research Press, 1999; pp. 1–141.
21. Khan, S.A.; Iqbal, K.; Mohammad, N.; Akbar, R.; Ali, S.S.A.; Siddiqui, A.A. A novel fuzzy-logic-based multi-criteria metric for performance evaluation of spam email detection algorithms. Applied Sciences 2022, 12, 7043.
22. Amoura, M. A method for detecting spam emails based on fuzzy logic, 2024.
23. El-Alfy, E.S.M.; Al-Qunaieer, F.S. A fuzzy similarity approach for automated spam filtering. In Proceedings of the 2008 IEEE/ACS International Conference on Computer Systems and Applications; IEEE, 2008; pp. 544–550.
24. Henig, O.; Kaye, K.S. Bacterial pneumonia in older adults. Infectious Disease Clinics of North America 2017, 31, 689.
25. Abobala, M.; Ibrahim, M. An introduction to refined neutrosophic number theory. Neutrosophic Sets and Systems 2021, 45, 40–53.
26. Merkepci, M.; Abobala, M. Security Model for Encrypting Uncertain Rational Data Units Based on Refined Neutrosophic Integers Fusion and El Gamal Algorithm; Infinite Study, 2023.
27. Ibrahim, M.A.; Badmus, B.; Akinleye, S.; et al. On refined neutrosophic vector spaces I. International Journal of Neutrosophic Science 2020, 7, 97–109.
28. Abobala, M. A study of AH-substructures in n-refined neutrosophic vector spaces. International Journal of Neutrosophic Science 2020, 9, 74–85.
29. Deli, I. Refined neutrosophic sets and refined neutrosophic soft sets: theory and applications. In Handbook of Research on Generalized and Hybrid Set Structures and Applications for Soft Computing; IGI Global, 2016; pp. 321–343.
30. Alkhazaleh, S. n-Valued refined neutrosophic soft set theory. Journal of Intelligent & Fuzzy Systems 2017, 32, 4311–4318.
31. Alkhazaleh, S.; Hazaymeh, A.A. n-Valued refined neutrosophic soft sets and their applications in decision making problems and medical diagnosis. Journal of Artificial Intelligence and Soft Computing Research 2018, 8, 79–86.
32. Smarandache, F. n-Valued refined neutrosophic logic and its applications to physics. Infinite Study 2013, 4, 143–146.
33. Zheng, Y.; Mo, L.; Ye, Z. A Robust Multi-Dimensional n-Valued Refined Neutrosophic Logic Framework for Competitive Calculation in the Leisure Sports Industry. Neutrosophic Sets and Systems 2025, 86, 631–643.
34. Alhasan, Y.A.; Smarandache, F.; Alfahal, A.; Abdulfatah, R.A. The extended study of 2-refined neutrosophic numbers; Infinite Study, 2024.
35. Fialho, A.S.; Vieira, S.M.; Kaymak, U.; Almeida, R.J.; Cismondi, F.; Reti, S.R.; Finkelstein, S.N.; Sousa, J.M. Mortality prediction of septic shock patients using probabilistic fuzzy systems. Applied Soft Computing 2016, 42, 194–203.
36. Pires, H.H.G.; Neves, F.F.; Pazin-Filho, A. Triage and flow management in sepsis. International Journal of Emergency Medicine 2019, 12, 36.
37. Yiarayong, P. Some weighted aggregation operators of quadripartitioned single-valued trapezoidal neutrosophic sets and their multi-criteria group decision-making method for developing green supplier selection criteria. OPSEARCH 2024, 1–55.
38. Shil, B.; Das, R.; Granados, C.; Das, S.; Tripathy, B.C. Single-Valued Quadripartitioned Neutrosophic d-Ideal of d-Algebra. Neutrosophic Sets and Systems 2024, 67, 105–114.
39. Hussain, S.S.; Aslam, M.; Rahmonlou, H.; Durga, N. Applying Interval Quadripartitioned Single-Valued Neutrosophic Sets to Graphs and Climatic Analysis. In Data-Driven Modelling with Fuzzy Sets; CRC Press, 2025; pp. 100–143.
40. Chatterjee, T.; Pramanik, S. Triangular fuzzy quadripartitioned neutrosophic set and its properties. Neutrosophic Sets and Systems 2025, 75, 15–28.
41. Das, S.; Poojary, P.; GR, V.B.; Sharma, S.K. Single-Valued Pentapartitioned Neutrosophic Bi-Topological Spaces. International Journal of Neutrosophic Science (IJNS) 2024, 23.
42. Das, R.; Das, S. Pentapartitioned Neutrosophic Subtraction Algebra. Neutrosophic Sets and Systems 2024, 68, 89–98.
43. Myvizhi, M.; Elkholy, M.; Abdelhafeez, A.; Elbehiery, H.; et al. Enhanced MADM Strategy with Heptapartitioned Neutrosophic Distance Metrics. Neutrosophic Sets and Systems 2025, 78, 74–96.
44. Mythili, T.; Jeyanthi, V.; et al. Enhanced decision-making academic libraries with TOPSIS methods using blockchain technology by heptapartitioned neutrosophic number. In Leveraging Blockchain for Future-Ready Libraries; IGI Global Scientific Publishing, 2025; pp. 123–140.
45. Elbehiery, H. Enhanced MADM Strategy with Heptapartitioned Neutrosophic Distance Metrics. Neutrosophic Sets and Systems 2025, 78, 74.
46. Tonazzi, S.; Prenovost, L.; Scheuermann, S. Delayed antibiotic prescribing to reduce antibiotic use: an urgent care practice change. BMJ Open Quality 2022, 11.
47. MacNeil, H.M.; Mak, B. Constructions of authenticity. Library Trends 2007, 56, 26–52.
48. Ye, S.; Ye, J. Dice similarity measure between single valued neutrosophic multisets and its application in medical diagnosis. Neutrosophic Sets and Systems 2014, 6, 9.
49. Shinoj, T.; John, S.J. Intuitionistic fuzzy multisets and its application in medical diagnosis. World Academy of Science, Engineering and Technology 2012, 6, 1418–1421.
50. Cristiani, A.L.; Lieira, D.D.; Meneguette, R.I.; Camargo, H.A. A fuzzy intrusion detection system for identifying cyber-attacks on IoT networks. In Proceedings of the 2020 IEEE Latin-American Conference on Communications (LATINCOM); IEEE, 2020; pp. 1–6.
51. Abushark, Y.B.; Khan, A.I.; Alsolami, F.; Almalawi, A.; Alam, M.M.; Agrawal, A.; Kumar, R.; Khan, R.A. Cyber security analysis and evaluation for intrusion detection systems. Computers, Materials & Continua 2022, 72, 1765–1783.
52. Diestel, R. Graph theory; Springer (print edition); Reinhard Diestel (eBooks), 2024.
53. Gross, J.L.; Yellen, J.; Anderson, M. Graph theory and its applications; Chapman and Hall/CRC, 2018.
54. Bretto, A. Hypergraph theory: An introduction; Mathematical Engineering, Vol. 1; Springer: Cham, 2013.
55. Gao, Y.; Zhang, Z.; Lin, H.; Zhao, X.; Du, S.; Zou, C. Hypergraph learning: Methods and practices. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020, 44, 2548–2566.
56. Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence 2019, 33, 3558–3565.
57. Smarandache, F. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra; Infinite Study, 2020.
58. Hamidi, M.; Smarandache, F.; Davneshvar, E. Spectrum of superhypergraphs via flows. Journal of Mathematics 2022, 2022, 9158912.
59. Fujita, T. A Hierarchical Hypergraph and Superhypergraph Framework for Semantic and Behavioral Graphs in Psychology and the Social Sciences. Psychology Nexus 2025.
60. Fujita, T. Superhypergraph Neural Networks and Plithogenic Graph Neural Networks: Theoretical Foundations. arXiv 2024, arXiv:2412.01176.
61. Hamidi, M.; Taghinezhad, M. Application of Superhypergraphs-Based Domination Number in Real World; Infinite Study, 2023.
62. Smarandache, F. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-revisited; Infinite Study, 2018.
63. Fujita, T.; Smarandache, F. A Review of the Hierarchy of Plithogenic, Neutrosophic, and Fuzzy Graphs: Survey and Applications. In Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond (Second Volume); Biblio Publishing, 2024.
64. Kandasamy, W.V.; Ilanthenral, K.; Smarandache, F. Plithogenic Graphs; Infinite Study, 2020.
65. Sultana, F.; Gulistan, M.; Ali, M.; Yaqoob, N.; Khan, M.; Rashid, T.; Ahmed, T. A study of plithogenic graphs: applications in spreading coronavirus disease (COVID-19) globally. Journal of Ambient Intelligence and Humanized Computing 2023, 14, 13139–13159.
66. Fujita, T.; Mehmood, A.; Ghaib, A.A. HyperFuzzy Control System and SuperHyperFuzzy Control System. Smart Multi-Criteria Analytics and Reasoning Technologies 2025, 1, 1–21.
67. Fujita, T. HyperFuzzy and SuperHyperFuzzy Group Decision-Making. Spectrum of Decision Making and Applications 2027, 1–18.
68. Jun, Y.B.; Hur, K.; Lee, K.J. Hyperfuzzy subalgebras of BCK/BCI-algebras. Annals of Fuzzy Mathematics and Informatics 2017.
69. Fujita, T.; Mehmood, A.; Ghaib, A.A. Hyperfuzzy OffGraphs: A Unified Graph-Based Theoretical Decision Framework for Hierarchical Under Off-Uncertainty. Applied Decision Analytics 2025, 1, 1–14.
70. Fujita, T. On the Structure and Properties of Hyperfuzzy and Superhyperfuzzy Intervals. Soft Computing Fusion With Applications 2025.
71. Ghosh, J.; Samanta, T.K. Hyperfuzzy sets and hyperfuzzy group. Int. J. Adv. Sci. Technol. 2012, 41, 27–37.
72. Jose, J.; George, B.; Thumbakara, R.K. Soft directed graphs, their vertex degrees, associated matrices and some product operations. New Mathematics and Natural Computation 2023, 19, 651–686.
73. Maji, P.K.; Biswas, R.; Roy, A.R. Soft set theory. Computers & Mathematics with Applications 2003, 45, 555–562.
74. Smarandache, F. Extension of soft set to hypersoft set, and then to plithogenic hypersoft set. Neutrosophic Sets and Systems 2018, 22, 168–170.
75. Pawlak, Z. Rough sets. International Journal of Computer & Information Sciences 1982, 11, 341–356.
76. Broumi, S.; Smarandache, F.; Dhar, M. Rough neutrosophic sets. Infinite Study 2014, 32, 493–502.
77. Pawlak, Z.; Skowron, A. Rudiments of rough sets. Information Sciences 2007, 177, 3–27.
78. García-Domenech, R.; Gálvez, J.; de Julián-Ortiz, J.V.; Pogliani, L. Some new trends in chemical graph theory. Chemical Reviews 2008, 108, 1127–1169.
79. Wagner, S.; Wang, H. Introduction to chemical graph theory; Chapman and Hall/CRC, 2018.
80. Smarandache, F. Hyperuncertain, superuncertain, and superhyperuncertain sets/logics/probabilities/statistics; Infinite Study, 2017.
81. Fujita, T. Advancing Uncertain Combinatorics through Graphization, Hyperization, and Uncertainization: Fuzzy, Neutrosophic, Soft, Rough, and Beyond; Biblio Publishing, 2025.