Preprint
Article

This version is not peer-reviewed.

Review of Structured-Degree Uncertain Sets: Vector-, Matrix-, and Tensor-Valued Membership

Submitted: 28 December 2025
Posted: 29 December 2025


Abstract
An Uncertain Set assigns to each element a generalized uncertainty value, providing a unified language that encompasses fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and related models [1]. In this paper, we extend the notion of a membership function from scalar degrees to structured degrees represented by vectors, matrices, and higher-order tensors. We introduce vector-valued, matrix-valued, and tensor-valued Uncertain Sets (including the corresponding fuzzy, neutrosophic, and related special cases) and investigate their fundamental properties.

1. Preliminaries

This section fixes notation and reviews the background material needed in the sequel. Unless stated otherwise, all sets considered in this paper are finite.

1.1. Fuzzy Sets and Neutrosophic Sets

In classical set theory, membership is a two-valued notion: an element either belongs to a set or it does not. In many real situations, however, assessments are gradual rather than strictly yes/no. Fuzzy set theory formalizes graded membership by assigning to each element a degree in the unit interval [ 0 , 1 ] [2]. As related concepts, intuitionistic fuzzy sets [3], bipolar fuzzy sets [4], and hesitant fuzzy sets [5,6] are well known. We recall their standard definitions.
Definition 1
(Fuzzy Set). [2,7] Let Y be a nonempty universe. A fuzzy set on Y is a mapping
τ : Y → [0, 1].
A fuzzy relation on Y is a mapping
δ : Y × Y → [0, 1].
We say that δ is a fuzzy relation on τ if, for all y, z ∈ Y,
δ(y, z) ≤ min{τ(y), τ(z)}.
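The compatibility condition in Definition 1 can be checked mechanically. Below is a minimal Python sketch with hypothetical data (the universe Y, the set τ, and the relation δ are illustrative choices, not from the paper):

```python
# Hypothetical fuzzy set tau on Y = {a, b, c} and a candidate fuzzy
# relation delta, checked against Definition 1's condition
# delta(y, z) <= min(tau(y), tau(z)).
Y = ["a", "b", "c"]
tau = {"a": 0.9, "b": 0.5, "c": 0.7}
delta = {(y, z): 0.8 * min(tau[y], tau[z]) for y in Y for z in Y}

def is_fuzzy_relation_on(delta, tau, Y):
    """Return True iff delta(y, z) <= min(tau(y), tau(z)) for all y, z."""
    return all(delta[(y, z)] <= min(tau[y], tau[z]) for y in Y for z in Y)

print(is_fuzzy_relation_on(delta, tau, Y))  # True
```

Here δ is built as 0.8 times the componentwise minimum, so the condition holds by construction; replacing any entry with a larger value would make the check fail.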
Neutrosophic sets further refine graded membership by recording three (typically independent) components that quantify support, hesitation, and rejection. These components are commonly interpreted as the degrees of truth, indeterminacy, and falsity, respectively [8,9,10,11]. Related notions are also well known, including Double-valued Neutrosophic Sets [12,13], Bipolar Neutrosophic Sets [14,15], and Plithogenic Sets [16,17]. We adopt the widely used single-valued formulation.
Definition 2
(Neutrosophic Set). [8,18] Let X be a nonempty set. A Neutrosophic Set (NS) A on X is specified by three functions
T_A : X → [0, 1],  I_A : X → [0, 1],  F_A : X → [0, 1],
where, for each x ∈ X, the values T_A(x), I_A(x), and F_A(x) represent, respectively, the degrees of truth, indeterminacy, and falsity of x with respect to A. These degrees satisfy
0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3  (x ∈ X).

1.2. Uncertain Set and Functorial Set

An Uncertain Set provides a uniform way to attach “uncertainty values” to elements, where the values take their meaning from a chosen degree-domain. By selecting the degree-domain appropriately, one can recover fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and many other degree-based models as special cases [1,19].
Definition 3 (Uncertain Set (U-Set)). [1] Let X be a nonempty universe and let M be an uncertainty model with degree-domain
Dom(M) ⊆ [0, 1]^k
for some integer k ≥ 1. An Uncertain Set of type M (briefly, a U-Set of type M) on X is a mapping
μ_M : X → Dom(M).
For each x ∈ X, the value μ_M(x) ∈ Dom(M) is called the M-membership degree (or M-uncertainty value) of x. Different choices of M yield the usual fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and other degree-based set models.

2. Main Results

In this section, we define the vector-, matrix-, and tensor-valued Uncertain Sets summarized in Table 1 and examine their fundamental properties.
Note that vectors [20,21], matrices [22,23,24], and tensors [25,26,27,28] have been studied extensively and have a wide range of applications in fields such as machine learning and artificial intelligence. Moreover, for reference, we also examine in Appendix A the notion of a MetaTensor (a tensor whose entries are tensors), as well as its iterated extension, the Iterated MetaTensor (a tensor of … of tensors). In addition, hierarchical tensors referred to as HyperTensors and SuperHyperTensors are also examined in the Appendix.

2.1. Vector-valued Fuzzy, Neutrosophic, and Uncertain Sets

In many applications, a single scalar degree in [ 0 , 1 ] is too coarse: one may wish to record several aspects of uncertainty simultaneously (e.g., confidence, reliability, relevance, or scores coming from multiple sensors). A simple way to model such situations is to replace the unit interval by a vector-valued degree domain.
Definition 4
(Vector-valued unit interval and componentwise order). Fix an integer d ≥ 1 and define the d-dimensional unit cube
[0, 1]^d := {(x_1, …, x_d) ∈ R^d : 0 ≤ x_i ≤ 1 (1 ≤ i ≤ d)}.
For u = (u_1, …, u_d) and v = (v_1, …, v_d) in [0, 1]^d, define the componentwise order
u ≤ v :⇔ u_i ≤ v_i (1 ≤ i ≤ d).
We also use componentwise operations: for instance,
(u + v)_i := u_i + v_i,  (min{u, v})_i := min{u_i, v_i},
and we write 0 := (0, …, 0) and 1 := (1, …, 1).
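The componentwise order and operations of Definition 4 can be sketched directly with tuples. The dimension d = 3 and the sample vectors below are arbitrary illustrative choices:

```python
# Componentwise order and operations on [0,1]^d from Definition 4,
# encoded with plain tuples (here d = 3).
def leq(u, v):
    """Componentwise order: u <= v iff u_i <= v_i for every i."""
    return all(ui <= vi for ui, vi in zip(u, v))

def cmin(u, v):
    """Componentwise minimum: (min{u, v})_i = min(u_i, v_i)."""
    return tuple(min(ui, vi) for ui, vi in zip(u, v))

u, v = (0.2, 0.9, 0.5), (0.4, 0.6, 0.5)
print(cmin(u, v))          # (0.2, 0.6, 0.5)
print(leq(cmin(u, v), u))  # True: min{u, v} <= u always holds
```

Note that ≤ is only a partial order: neither u ≤ v nor v ≤ u holds for the pair above, since the coordinates disagree in direction.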
Definition 5
(Vector-valued fuzzy set). Let X be a nonempty universe and fix d ≥ 1. A vector-valued fuzzy set of dimension d on X is a mapping
μ : X → [0, 1]^d.
For each x ∈ X, the value μ(x) = (μ_1(x), …, μ_d(x)) is interpreted as a d-component membership/uncertainty vector associated with x. When d = 1, this definition reduces to the classical (scalar) fuzzy set.
Optionally, a vector-valued fuzzy relation on X is a mapping
δ : X × X → [0, 1]^d.
We say that δ is compatible with μ if, for all x, y ∈ X,
δ(x, y) ≤ min{μ(x), μ(y)},
where the minimum is taken componentwise.
Definition 6
(Vector-valued neutrosophic set). Let X be a nonempty universe and fix d ≥ 1. A vector-valued neutrosophic set of dimension d on X consists of three mappings
T : X → [0, 1]^d,  I : X → [0, 1]^d,  F : X → [0, 1]^d,
where, for each x ∈ X, the vectors T(x), I(x), and F(x) represent, respectively, the truth, indeterminacy, and falsity degrees of x.
These degrees satisfy the componentwise constraint
0 ≤ T(x) + I(x) + F(x) ≤ 3·1  (x ∈ X),
where addition and inequalities are interpreted componentwise. When d = 1, this is exactly the usual single-valued neutrosophic set constraint 0 ≤ T(x) + I(x) + F(x) ≤ 3.
Example 1
(Real-life example of a vector-valued neutrosophic set). Consider a hospital triage unit that must assess whether a patient should be treated as a high-risk respiratory case. Let X be the set of patients arriving in a given day. Fix d = 3 and interpret the three dimensions as independent evidence channels:
(1) symptoms, (2) rapid test, (3) imaging (e.g., X-ray/CT).
A vector-valued neutrosophic set on X assigns to each patient x ∈ X three vectors
T(x) = (T_1(x), T_2(x), T_3(x)) ∈ [0, 1]^3,
I(x) = (I_1(x), I_2(x), I_3(x)) ∈ [0, 1]^3,
F(x) = (F_1(x), F_2(x), F_3(x)) ∈ [0, 1]^3,
where, for each channel j ∈ {1, 2, 3}:
  • T_j(x) measures how strongly channel j supports the statement “patient x is a high-risk respiratory case”;
  • F_j(x) measures how strongly channel j supports the negation “patient x is not a high-risk respiratory case”;
  • I_j(x) measures the indeterminacy in channel j (e.g., missing data, borderline values, poor image quality, or conflicting findings).
For instance, suppose a patient x presents clear symptoms but has an inconclusive rapid test and a moderately suggestive image. One may encode this as
T(x) = (0.85, 0.40, 0.65),  I(x) = (0.10, 0.45, 0.20),  F(x) = (0.20, 0.55, 0.30).
The interpretation is: symptoms provide strong support (T_1 = 0.85) with low uncertainty (I_1 = 0.10); the rapid test is ambiguous (I_2 = 0.45) and leans against high-risk (F_2 = 0.55); imaging provides moderate support (T_3 = 0.65) with moderate uncertainty (I_3 = 0.20).
The componentwise constraint
0 ≤ T(x) + I(x) + F(x) ≤ 3·1
holds automatically here because each coordinate lies in [0, 1], hence each coordinate-sum lies in [0, 3]. Such a representation is useful when decisions must be justified by multiple evidence channels: it keeps track of support, opposition, and uncertainty per channel, rather than collapsing everything into a single score.
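The constraint check for the triage vectors of Example 1 can be carried out directly; the following sketch uses the numbers given above:

```python
# Check the componentwise neutrosophic constraint
# 0 <= T(x) + I(x) + F(x) <= 3*1 for the triage vectors of Example 1.
T = (0.85, 0.40, 0.65)
I = (0.10, 0.45, 0.20)
F = (0.20, 0.55, 0.30)

sums = [round(t + i + f, 2) for t, i, f in zip(T, I, F)]
print(sums)                                # [1.15, 1.4, 1.15]
print(all(0.0 <= s <= 3.0 for s in sums))  # True
```

Each channel's coordinate-sum stays well inside [0, 3], as the text argues; the per-channel sums also show that the three channels need not be normalized against one another.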
Definition 7
(Vector-valued uncertainty model). Fix integers d ≥ 1 and k ≥ 1. A vector-valued uncertainty model of shape (k, d) is specified by a degree-domain
Dom(M) ⊆ ([0, 1]^d)^k ≅ [0, 1]^{dk}.
Elements of Dom(M) are viewed as k-tuples of d-dimensional degree vectors.
Definition 8 (Vector-valued Uncertain Set (vector-valued U-Set)). Let X be a nonempty universe and let M be a vector-valued uncertainty model with
Dom(M) ⊆ ([0, 1]^d)^k.
A vector-valued Uncertain Set of type M (briefly, a vector-valued U-Set of type M) on X is a mapping
μ_M : X → Dom(M).
For each x ∈ X, the value μ_M(x) ∈ Dom(M) is called the M-membership degree (or M-uncertainty value) of x.
Remark 1
(Special cases and interpretation). The vector-valued U-Set template subsumes the preceding notions by suitable choices of Dom(M).
(i)
If k = 1 and Dom(M) = [0, 1]^d, then a vector-valued U-Set of type M is exactly a vector-valued fuzzy set μ : X → [0, 1]^d.
(ii)
If k = 3 and Dom(M) = ([0, 1]^d)^3, then a vector-valued U-Set of type M can be written as μ_M(x) = (T(x), I(x), F(x)) and thus corresponds to a vector-valued neutrosophic description.
(iii)
The parameter d represents the number of simultaneously tracked components (criteria, sensors, or features). The set Dom(M) can impose additional structural constraints beyond simple box constraints, for example admissible correlations, normalization conditions, or application-specific feasibility regions.

2.2. Matrix-valued Fuzzy, Neutrosophic, and Uncertain Sets

In some settings, uncertainty is naturally structured as a matrix rather than a scalar or a vector. Typical examples include uncertainty indexed simultaneously by (criterion × scenario), (role × time), (sensor × feature), or (agent × attribute). To capture such two-dimensional structure, we replace the unit interval [ 0 , 1 ] by a matrix-valued degree domain.
Definition 9
(Matrix-valued unit cube and componentwise order). Fix integers p, q ≥ 1 and define
[0, 1]^{p×q} := {X = (x_{ij}) ∈ R^{p×q} : 0 ≤ x_{ij} ≤ 1 (1 ≤ i ≤ p, 1 ≤ j ≤ q)}.
For X = (x_{ij}) and Y = (y_{ij}) in [0, 1]^{p×q}, define the componentwise order
X ≤ Y :⇔ x_{ij} ≤ y_{ij} (1 ≤ i ≤ p, 1 ≤ j ≤ q).
We also use componentwise operations:
(X + Y)_{ij} := x_{ij} + y_{ij},  (min{X, Y})_{ij} := min{x_{ij}, y_{ij}}.
Let 0_{p×q} and 1_{p×q} denote the p × q zero and all-ones matrices, respectively.
Definition 10
(Matrix-valued fuzzy set). Let X be a nonempty universe and fix p, q ≥ 1. A matrix-valued fuzzy set of shape (p, q) on X is a mapping
μ : X → [0, 1]^{p×q}.
For each x ∈ X, the matrix μ(x) = (μ(x)_{ij}) is interpreted as a two-dimensional membership/uncertainty profile of x.
Optionally, a matrix-valued fuzzy relation on X is a mapping
δ : X × X → [0, 1]^{p×q}.
We say that δ is compatible with μ if, for all x, y ∈ X,
δ(x, y) ≤ min{μ(x), μ(y)},
where the minimum is taken componentwise.
Definition 11
(Matrix-valued neutrosophic set). Let X be a nonempty universe and fix p, q ≥ 1. A matrix-valued neutrosophic set of shape (p, q) on X consists of three mappings
T : X → [0, 1]^{p×q},  I : X → [0, 1]^{p×q},  F : X → [0, 1]^{p×q},
where, for each x ∈ X, the matrices T(x), I(x), and F(x) represent, respectively, the truth, indeterminacy, and falsity degrees of x in a two-dimensional index system.
These degrees satisfy the componentwise constraint
0_{p×q} ≤ T(x) + I(x) + F(x) ≤ 3·1_{p×q}  (x ∈ X),
where addition and inequalities are interpreted componentwise. When (p, q) = (1, 1), this reduces to the usual single-valued neutrosophic set constraint 0 ≤ T(x) + I(x) + F(x) ≤ 3.
Example 2
(Real-life example of a matrix-valued neutrosophic set). Consider an enterprise security team that must assess whether each software service should be classified as high cyber-risk for prioritizing remediation. Let X be the set of services (or applications) deployed in the organization.
Fix two indexing dimensions:
p = 3 evidence sources (static scan, runtime telemetry, external intelligence),
q = 4 risk aspects (vulnerability, exposure, impact, exploitability).
For each service x ∈ X, a matrix-valued neutrosophic set assigns three matrices
T(x), I(x), F(x) ∈ [0, 1]^{3×4},
where entry (s, a) (source s, aspect a) has the following meaning:
  • T_{s,a}(x): how strongly source s supports the statement “service x is high-risk” with respect to aspect a;
  • F_{s,a}(x): how strongly source s supports the opposite statement “service x is not high-risk” with respect to aspect a;
  • I_{s,a}(x): how indeterminate the assessment from source s is for aspect a (e.g., missing logs, noisy signals, stale intelligence, or conflicting findings).
For example, for a particular service x, suppose a static scanner finds multiple severe CVEs, telemetry shows intermittent suspicious behavior but incomplete logging, and external intelligence suggests active exploitation of similar stacks. One may encode:
T(x) =
  [0.90 0.70 0.60 0.80]
  [0.55 0.65 0.50 0.60]
  [0.75 0.60 0.55 0.85],
I(x) =
  [0.05 0.10 0.15 0.05]
  [0.30 0.20 0.35 0.25]
  [0.15 0.20 0.20 0.10],
F(x) =
  [0.20 0.30 0.40 0.15]
  [0.35 0.25 0.45 0.30]
  [0.25 0.30 0.35 0.20].
Here the first row corresponds to static scan, the second to runtime telemetry, and the third to external intelligence; the four columns correspond, respectively, to vulnerability, exposure, impact, and exploitability.
This representation keeps a two-dimensional audit trail: it separates which source produced the evidence and which risk aspect it concerns, while still recording support (T), uncertainty (I), and opposition (F). The componentwise constraint
0_{3×4} ≤ T(x) + I(x) + F(x) ≤ 3·1_{3×4}
is satisfied because each entry lies in [0, 1] and thus each entrywise sum lies in [0, 3]. Such matrix-valued neutrosophic modeling is useful in practice because remediation decisions can be justified granularly (per source and per aspect) rather than by a single aggregated score.
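The entrywise check for Example 2 can be sketched with nested lists, using the three 3×4 matrices given above:

```python
# Verify the componentwise constraint 0 <= T + I + F <= 3 entrywise
# for the 3x4 matrices of Example 2 (rows: sources; columns: aspects).
T = [[0.90, 0.70, 0.60, 0.80],
     [0.55, 0.65, 0.50, 0.60],
     [0.75, 0.60, 0.55, 0.85]]
I = [[0.05, 0.10, 0.15, 0.05],
     [0.30, 0.20, 0.35, 0.25],
     [0.15, 0.20, 0.20, 0.10]]
F = [[0.20, 0.30, 0.40, 0.15],
     [0.35, 0.25, 0.45, 0.30],
     [0.25, 0.30, 0.35, 0.20]]

ok = all(0.0 <= T[s][a] + I[s][a] + F[s][a] <= 3.0
         for s in range(3) for a in range(4))
print(ok)  # True
```

The same loop structure also supports per-entry reporting (e.g., flagging cells where I exceeds a threshold), which matches the "per source and per aspect" audit trail described above.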
Definition 12
(Matrix-valued uncertainty model). Fix integers p, q ≥ 1 and k ≥ 1. A matrix-valued uncertainty model of shape (k; p, q) is specified by a degree-domain
Dom(M) ⊆ ([0, 1]^{p×q})^k.
Elements of Dom(M) are viewed as k-tuples of (p × q)-matrices, and Dom(M) may encode additional admissibility constraints (e.g., normalization, sparsity, monotonicity, or correlations) beyond entrywise bounds.
Definition 13 (Matrix-valued Uncertain Set (matrix-valued U-Set)). Let X be a nonempty universe and let M be a matrix-valued uncertainty model with
Dom(M) ⊆ ([0, 1]^{p×q})^k.
A matrix-valued Uncertain Set of type M (briefly, a matrix-valued U-Set of type M) on X is a mapping
μ_M : X → Dom(M).
For each x ∈ X, the value μ_M(x) ∈ Dom(M) is called the M-membership degree (or M-uncertainty value) of x.
Remark 2
(Special cases and interpretation). The matrix-valued U-Set framework subsumes the preceding notions via suitable choices of Dom(M):
(i)
If k = 1 and Dom(M) = [0, 1]^{p×q}, then a matrix-valued U-Set is exactly a matrix-valued fuzzy set μ : X → [0, 1]^{p×q}.
(ii)
If k = 3 and Dom(M) = ([0, 1]^{p×q})^3, then μ_M(x) = (T(x), I(x), F(x)) yields a matrix-valued neutrosophic description.
(iii)
The indices (i, j) can represent, for example, (criterion × scenario) or (time × role), so the matrix encodes structured uncertainty that is not naturally captured by a single scalar degree. The admissible domain Dom(M) can enforce application-specific constraints on such matrices.

2.3. Tensor-valued Fuzzy, Neutrosophic, and Uncertain Sets

In complex applications, uncertainty may be indexed by more than two dimensions, for example (agent × criterion × time), (sensor × feature × location), or (scenario × stage × resource × constraint). Such structures are naturally represented by multi-order tensors. Accordingly, we replace the unit interval [ 0 , 1 ] by a tensor-valued degree domain.
Definition 14
(Tensor-valued unit cube). Fix an integer n ≥ 1 (the order) and a size vector
d = (d_1, …, d_n) ∈ N_{≥1}^n.
Let
[0, 1]^{d_1×…×d_n} := {X = (x_{i_1…i_n}) ∈ R^{d_1×…×d_n} : 0 ≤ x_{i_1…i_n} ≤ 1 for all (i_1, …, i_n)}.
Elements of [0, 1]^{d_1×…×d_n} are called nth-order tensors (with mode sizes d_1, …, d_n) whose entries lie in [0, 1]. We write 0_d and 1_d for the all-zeros and all-ones tensors of this shape.
Definition 15
(Componentwise order and operations on tensors). For X = (x_{i_1…i_n}) and Y = (y_{i_1…i_n}) in [0, 1]^{d_1×…×d_n}, define the componentwise order
X ≤ Y :⇔ x_{i_1…i_n} ≤ y_{i_1…i_n} for all indices (i_1, …, i_n).
We also use componentwise operations:
(X + Y)_{i_1…i_n} := x_{i_1…i_n} + y_{i_1…i_n},  (min{X, Y})_{i_1…i_n} := min{x_{i_1…i_n}, y_{i_1…i_n}}.
Scalar multiplication is defined entrywise, so that (λX)_{i_1…i_n} := λ x_{i_1…i_n}.
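The componentwise order and operations of Definition 15 work uniformly in any order n. A minimal sketch, encoding tensors as nested lists of matching shape (the shape (1, 2, 2) and sample entries are illustrative):

```python
# Generic componentwise operations for nth-order tensors stored as
# nested lists, sketching Definition 15 for arbitrary shape.
def tmap2(f, X, Y):
    """Apply a binary function entrywise to two equally-shaped tensors."""
    if isinstance(X, list):
        return [tmap2(f, x, y) for x, y in zip(X, Y)]
    return f(X, Y)

def tleq(X, Y):
    """Componentwise order: X <= Y iff every entry of X is <= that of Y."""
    if isinstance(X, list):
        return all(tleq(x, y) for x, y in zip(X, Y))
    return X <= Y

A = [[[0.1, 0.5], [0.3, 0.2]]]   # a 3rd-order tensor of shape (1, 2, 2)
B = [[[0.4, 0.6], [0.3, 0.9]]]
print(tmap2(min, A, B))  # [[[0.1, 0.5], [0.3, 0.2]]]
print(tleq(A, B))        # True
```

The recursion bottoms out at scalar entries, so the same two functions serve vectors (n = 1), matrices (n = 2), and higher-order tensors without change.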
Definition 16
(Tensor-valued fuzzy set). Let X be a nonempty universe and fix an order n ≥ 1 and shape d = (d_1, …, d_n). A tensor-valued fuzzy set of shape d on X is a mapping
μ : X → [0, 1]^{d_1×…×d_n}.
For each x ∈ X, the tensor μ(x) is interpreted as an n-way membership/uncertainty profile of x.
Optionally, a tensor-valued fuzzy relation on X is a mapping
δ : X × X → [0, 1]^{d_1×…×d_n}.
We say that δ is compatible with μ if, for all x, y ∈ X,
δ(x, y) ≤ min{μ(x), μ(y)},
where the minimum is taken componentwise.
Definition 17
(Tensor-valued neutrosophic set). Let X be a nonempty universe and fix an order n ≥ 1 and shape d = (d_1, …, d_n). A tensor-valued neutrosophic set of shape d on X consists of three mappings
T : X → [0, 1]^{d_1×…×d_n},  I : X → [0, 1]^{d_1×…×d_n},  F : X → [0, 1]^{d_1×…×d_n},
where, for each x ∈ X, the tensors T(x), I(x), and F(x) represent, respectively, the truth, indeterminacy, and falsity degrees of x in an n-dimensional index system.
These degrees satisfy the componentwise constraint
0_d ≤ T(x) + I(x) + F(x) ≤ 3·1_d  (x ∈ X),
where addition and inequalities are interpreted componentwise. When n = 1 and d_1 = 1, this reduces to the usual single-valued neutrosophic set constraint 0 ≤ T(x) + I(x) + F(x) ≤ 3.
Example 3
(Real-life example of a tensor-valued neutrosophic set). Consider a global manufacturer that must assess whether each supplier should be classified as high disruption-risk for procurement planning. Let X be the set of candidate suppliers.
Risk assessment is inherently multi-dimensional: it depends on which criterion is considered, which region the supplier operates in, which time window is relevant, and which scenario (e.g., baseline vs. stress) is assumed. We therefore fix an order n = 4 tensor shape
d = (d_1, d_2, d_3, d_4) = (5, 3, 4, 2),
with the following index semantics:
  • d_1 = 5 criteria: (1) financial stability, (2) delivery performance, (3) quality history, (4) geopolitical exposure, (5) capacity flexibility;
  • d_2 = 3 regions: Asia, Europe, Americas;
  • d_3 = 4 time horizons: next month, next quarter, next half-year, next year;
  • d_4 = 2 scenarios: baseline, stress (e.g., port congestion or regional conflict escalation).
A tensor-valued neutrosophic set on X assigns to each supplier x ∈ X three tensors
T(x), I(x), F(x) ∈ [0, 1]^{5×3×4×2},
where, for each multi-index (i, r, t, s),
  • T_{i,r,t,s}(x) quantifies the degree to which evidence supports the statement “supplier x is high disruption-risk” under criterion i, region r, horizon t, scenario s;
  • F_{i,r,t,s}(x) quantifies the degree to which evidence supports the opposite statement “supplier x is not high disruption-risk” in the same context;
  • I_{i,r,t,s}(x) captures indeterminacy in that context (e.g., incomplete audit data, unreliable forecasts, missing geopolitical intelligence, or conflicting reports).
For a concrete supplier x, the procurement team might encode the following qualitative situation:
  • In Asia under the stress scenario, geopolitical exposure is judged strongly risky in the short term, but forecasts are uncertain;
  • In Europe under baseline conditions, delivery performance is consistently reliable with low uncertainty;
  • For the Americas, capacity flexibility is unclear because the latest capacity audit is outdated.
This can be represented, for example, by entries such as
T_{geo,Asia,quarter,stress}(x) = 0.80,  I_{geo,Asia,quarter,stress}(x) = 0.35,  F_{geo,Asia,quarter,stress}(x) = 0.20,
T_{deliv,Europe,half-year,baseline}(x) = 0.15,  I_{deliv,Europe,half-year,baseline}(x) = 0.05,  F_{deliv,Europe,half-year,baseline}(x) = 0.85,
T_{cap,Americas,year,baseline}(x) = 0.45,  I_{cap,Americas,year,baseline}(x) = 0.40,  F_{cap,Americas,year,baseline}(x) = 0.35.
The first line indicates strong support for high risk (large T) with substantial uncertainty (large I), the second indicates strong evidence against high risk (large F) with low uncertainty, and the third reflects an ambiguous capacity assessment (high indeterminacy).
Finally, the componentwise neutrosophic constraint
0_d ≤ T(x) + I(x) + F(x) ≤ 3·1_d
is satisfied because each tensor entry lies in [ 0 , 1 ] , hence each entrywise sum lies in [ 0 , 3 ] . Tensor-valued neutrosophic modeling is practically useful here because it preserves the full context (criterion × region × horizon × scenario) needed for transparent risk governance and targeted mitigation actions.
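A sparse encoding is natural here: rather than materializing the full 5×3×4×2 tensors, one can store only the assessed contexts as a mapping from multi-indices to (T, I, F) triples, as in this sketch using the three contexts of Example 3:

```python
# The three assessed contexts of Example 3, keyed by the multi-index
# (criterion, region, horizon, scenario), with (T, I, F) triples.
entries = {
    ("geo",   "Asia",     "quarter",   "stress"):   (0.80, 0.35, 0.20),
    ("deliv", "Europe",   "half-year", "baseline"): (0.15, 0.05, 0.85),
    ("cap",   "Americas", "year",      "baseline"): (0.45, 0.40, 0.35),
}

# Check the neutrosophic constraint 0 <= T + I + F <= 3 per context.
ok = all(0.0 <= sum(tif) <= 3.0 for tif in entries.values())
print(ok)  # True
```

Unassessed contexts can be treated as fully indeterminate (e.g., a default triple such as (0, 1, 0)), which keeps the sparse dictionary consistent with the dense tensor picture.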
Definition 18
(Tensor-valued uncertainty model). Fix an order n ≥ 1, a shape d = (d_1, …, d_n), and an integer k ≥ 1. A tensor-valued uncertainty model of shape (k; d) is specified by a degree-domain
Dom(M) ⊆ ([0, 1]^{d_1×…×d_n})^k.
Elements of Dom(M) are viewed as k-tuples of tensors of shape d. The admissible set Dom(M) may enforce additional structural constraints (e.g., normalization, low-rank structure, sparsity, monotonicity across modes, or correlations), beyond the entrywise bounds [0, 1].
Definition 19 (Tensor-valued Uncertain Set (tensor-valued U-Set)). Let X be a nonempty universe and let M be a tensor-valued uncertainty model with
Dom(M) ⊆ ([0, 1]^{d_1×…×d_n})^k.
A tensor-valued Uncertain Set of type M (briefly, a tensor-valued U-Set of type M) on X is a mapping
μ_M : X → Dom(M).
For each x ∈ X, the value μ_M(x) ∈ Dom(M) is called the M-membership degree (or M-uncertainty value) of x.
Remark 3
(Special cases and interpretation). The tensor-valued U-Set framework recovers vector- and matrix-valued notions as lower-order cases.
(i)
If n = 1 and d = (d), then [0, 1]^{d_1×…×d_n} = [0, 1]^d, and tensor-valued fuzzy/neutrosophic/U-sets reduce to the vector-valued definitions.
(ii)
If n = 2 and d = (p, q), then [0, 1]^{d_1×…×d_n} = [0, 1]^{p×q}, and tensor-valued fuzzy/neutrosophic/U-sets reduce to the matrix-valued definitions.
(iii)
Taking k = 1 and Dom(M) = [0, 1]^{d_1×…×d_n} yields a tensor-valued fuzzy set. Taking k = 3 and Dom(M) = ([0, 1]^{d_1×…×d_n})^3 yields a tensor-valued neutrosophic description.

3. Additional Results: Tensor-valued Functorial Set

A Functorial Set packages a family of structured sets by specifying a category together with a functor to Set. Objects are interpreted as carriers of structure, while morphisms describe how structure is transported [1]. In particular, the functor assigns to each object a set of admissible structures, and to each morphism a structure-preserving map between such sets.
In the present work we focus on tensor-valued structures. Informally, a tensor-valued functorial set associates to each set X the collection of all tensor-valued assignments on X, and transports those assignments along functions by pullback (precomposition). This perspective is convenient because it makes change-of-variables and restriction maps automatic consequences of functoriality.
Definition 20
(Functorial Set). [1] Let C be a category and
F : C → Set
be a (covariant) functor. For any object X ∈ Ob(C), an F-set over X is an element
s ∈ F(X).
We denote the collection of all F-sets over X simply by F(X). A morphism f : X → Y in C induces a pushforward
F(f) : F(X) → F(Y),  s ↦ F(f)(s).
Definition 21
(Tensor-valued structure functor). Fix a tensor degree-domain Ten_{k,d}. Define a functor
F_{k,d} : Set^op → Set
by:
  • for each set X, set
    F_{k,d}(X) := {μ : X → Ten_{k,d}} = Ten_{k,d}^X;
  • for each function f : X → Y in Set (hence a morphism f^op : Y → X in Set^op), define
    F_{k,d}(f^op) : F_{k,d}(Y) → F_{k,d}(X),  μ ↦ μ ∘ f.
Proposition 1.
F_{k,d} is a well-defined covariant functor Set^op → Set.
Proof. 
Let X be a set. For the identity id_X : X → X, we have for any μ ∈ F_{k,d}(X):
F_{k,d}((id_X)^op)(μ) = μ ∘ id_X = μ,
so F_{k,d}((id_X)^op) = id_{F_{k,d}(X)}. Next, for composable functions f : X → Y and g : Y → Z and any μ ∈ F_{k,d}(Z),
F_{k,d}((g ∘ f)^op)(μ) = μ ∘ (g ∘ f) = (μ ∘ g) ∘ f = (F_{k,d}(f^op) ∘ F_{k,d}(g^op))(μ).
Hence F_{k,d}((g ∘ f)^op) = F_{k,d}(f^op) ∘ F_{k,d}(g^op). Therefore F_{k,d} is a functor. □
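The two functor laws in this proof can also be checked numerically on a toy instance. In the sketch below, finite functions are encoded as dictionaries and degree assignments as scalar-valued dictionaries (all names and values are hypothetical):

```python
# Toy check of the functor laws of Proposition 1: pullback along the
# identity is the identity, and pullback reverses composition.
X = ["x1", "x2"]
f = {"x1": "y1", "x2": "y1"}   # f : X -> Y with Y = {y1}
g = {"y1": "z2"}               # g : Y -> Z with Z = {z1, z2}
mu = {"z1": 0.3, "z2": 0.9}    # mu in F(Z): a degree assignment on Z

def pullback(func, mu):
    """F(f^op)(mu) = mu o f, i.e., precomposition with func."""
    return {x: mu[func[x]] for x in func}

gf = {x: g[f[x]] for x in X}   # the composite g o f : X -> Z
print(pullback(gf, mu) == pullback(f, pullback(g, mu)))  # True
nu = {"x1": 0.5, "x2": 0.2}
print(pullback({x: x for x in X}, nu) == nu)             # True
```

This is exactly the identity/composition calculation of the proof, instantiated on two-element sets; the order reversal (first pull back along g, then along f) reflects the contravariance built into Set^op.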
Definition 22
(Tensor-valued Functorial Set). Fix (k; d). The pair (Set^op, F_{k,d}) is called the tensor-valued functorial set (of shape (k; d)). For a set X, an element μ ∈ F_{k,d}(X) is called a tensor-valued F_{k,d}-set over X, i.e., a tensor-valued assignment μ : X → Ten_{k,d}.
Example 4
(A concrete example of a tensor-valued functorial set). Fix an order n ≥ 1 and a shape d = (d_1, …, d_n), and take k = 1 for simplicity. Let
Ten_d := [0, 1]^{d_1×…×d_n}.
Consider the category FinSet of finite sets and functions, and work with its opposite category FinSet^op.
Define a functor
F_d : FinSet^op → Set
by:
F_d(X) := Ten_d^X = {μ : X → Ten_d},
and for any function f : X → Y in FinSet (hence f^op : Y → X in FinSet^op),
F_d(f^op) : F_d(Y) → F_d(X),  μ ↦ μ ∘ f.
Then (FinSet^op, F_d) is a tensor-valued functorial set in the sense of Definition 22 (restricted from Set^op to FinSet^op).
Interpretation (data refinement by reindexing). Think of an object X ∈ FinSet as a finite collection of items (products, patients, services, etc.). An element μ ∈ F_d(X) assigns to each item x ∈ X a tensor
μ(x) ∈ Ten_d,
which may encode a structured, multi-index score profile (e.g., criterion × time × scenario).
Now let f : X → Y be a coarse-graining map that sends each fine item x ∈ X to its group label f(x) ∈ Y (e.g., product ↦ product-category, patient ↦ ward, service ↦ business-unit). Given a tensor assignment μ : Y → Ten_d at the coarse level, the functorial pullback
F_d(f^op)(μ) = μ ∘ f : X → Ten_d
produces the induced fine-level assignment by replicating the group tensor to every member of that group. Concretely, for each x ∈ X,
(μ ∘ f)(x) = μ(f(x)).
A small explicit instance. Let Y = {A, B} be two categories and X = {1, 2, 3, 4} four items with
f(1) = f(2) = A,  f(3) = f(4) = B.
Take n = 2 and d = (2, 2), so Ten_d = [0, 1]^{2×2}. Define μ ∈ F_d(Y) by
μ(A) =
  [0.9 0.4]
  [0.2 0.7],
μ(B) =
  [0.3 0.8]
  [0.6 0.1].
Then the pulled-back assignment μ ∘ f ∈ F_d(X) is
(μ ∘ f)(1) = (μ ∘ f)(2) = μ(A),  (μ ∘ f)(3) = (μ ∘ f)(4) = μ(B).
Why this is a tensor-valued functorial example. This example illustrates the essential functorial mechanism: tensor-valued structure is assigned to each object (as Ten_d-valued functions), and transported along morphisms by pullback (precomposition), so that identities and compositions are respected automatically.
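The explicit instance above can be sketched in a few lines, with the 2×2 tensors stored as nested lists and the coarse-graining map as a dictionary:

```python
# The explicit instance of Example 4: pull the category-level 2x2
# tensors back to individual items along the coarse-graining f.
mu = {"A": [[0.9, 0.4], [0.2, 0.7]],
      "B": [[0.3, 0.8], [0.6, 0.1]]}
f = {1: "A", 2: "A", 3: "B", 4: "B"}

mu_f = {x: mu[f[x]] for x in f}       # (mu o f)(x) = mu(f(x))
print(mu_f[1] == mu_f[2] == mu["A"])  # True
print(mu_f[3] == mu_f[4] == mu["B"])  # True
```

As the example states, each item simply inherits (a reference to) its group's tensor; items in the same fiber of f are indistinguishable at the pulled-back level.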
Definition 23
(Subfunctor determined by an uncertainty model). Let M be a tensor-valued uncertainty model with Dom(M) ⊆ Ten_{k,d}. Define a functor
F_M : Set^op → Set
by restricting the codomain:
F_M(X) := {μ : X → Dom(M)} = Dom(M)^X,  F_M(f^op)(μ) := μ ∘ f.
Proposition 2.
F_M is a well-defined functor and a subfunctor of F_{k,d} (i.e., F_M(X) ⊆ F_{k,d}(X) for all X, and the action on morphisms agrees).
Proof. 
Functoriality follows by the same identity/composition calculation as Proposition 1. Moreover, since Dom(M) ⊆ Ten_{k,d}, any map μ : X → Dom(M) is also a map μ : X → Ten_{k,d}, hence F_M(X) ⊆ F_{k,d}(X). The morphism action is identical (precomposition), so F_M is a subfunctor. □
Theorem 1
(Tensor-valued Functorial Sets subsume Tensor-valued Uncertain Sets). Let M be a tensor-valued uncertainty model with Dom(M) ⊆ Ten_{k,d}. For every set X, there is a canonical bijection between:
  • tensor-valued Uncertain Sets of type M on X (i.e., maps μ_M : X → Dom(M)), and
  • F_M-sets over X (i.e., elements of F_M(X)).
Moreover, this identification is functorial: for any function f : X → Y, the transport of tensor-valued uncertain sets along f is exactly given by the functor action F_M(f^op)(μ) = μ ∘ f.
Proof. 
Fix X. By definition, a tensor-valued Uncertain Set of type M on X is precisely a map μ_M : X → Dom(M). On the other hand,
F_M(X) = Dom(M)^X = {μ : X → Dom(M)}.
Hence the correspondence “take the same function” gives a bijection between the two collections.
For functoriality, let f : X → Y and let μ : Y → Dom(M) represent a tensor-valued uncertain set on Y. The induced assignment on X obtained by pullback along f is μ ∘ f : X → Dom(M). But by Definition 23, this is exactly F_M(f^op)(μ). Therefore the identification respects the morphism action, and the tensor-valued functorial set framework indeed generalizes tensor-valued uncertain sets. □

4. Conclusions

We introduced vector-valued, matrix-valued, and tensor-valued Uncertain Sets (including the corresponding fuzzy, neutrosophic, and related special cases) and investigated their fundamental properties. We expect that future work will further advance extensions based on hypergraphs [29,30] and SuperHyperGraphs [31,32].

Funding

This study was conducted without any financial support from external organizations or grants.

Data Availability Statement

Since this research is purely theoretical and mathematical, no empirical data or computational analysis was utilized. Researchers are encouraged to expand upon these findings with data-oriented or experimental approaches in future studies.

Acknowledgments

We would like to express our sincere gratitude to everyone who provided valuable insights, support, and encouragement throughout this research. We also extend our thanks to the readers for their interest and to the authors of the referenced works, whose scholarly contributions have greatly influenced this study. Lastly, we are deeply grateful to the publishers and reviewers who facilitated the dissemination of this work.

Conflicts of Interest

The authors declare that they have no conflicts of interest related to the content or publication of this paper.

Use of Artificial Intelligence

We used generative AI and AI-assisted tools for tasks such as English grammar checking, and we did not employ them in any way that violates ethical standards.

Disclaimer

This work presents theoretical ideas and frameworks that have not yet been empirically validated. Readers are encouraged to explore practical applications and further refine these concepts. Although care has been taken to ensure accuracy and appropriate citations, any errors or oversights are unintentional. The perspectives and interpretations expressed herein are solely those of the authors and do not necessarily reflect the viewpoints of their affiliated institutions.

Appendix A. Appendix: MetaTensor and Iterated MetaTensor

MetaStructure has been investigated in several papers. In this Appendix, we examine MetaTensors and Iterated MetaTensors [33,34,35]. A MetaTensor is a tensor whose entries are tensors; equivalently, it can be viewed as a higher-order tensor via canonical flattening of indices. An Iterated MetaTensor applies the MetaTensor construction recursively, producing a tensor of tensors of … of scalars, with a depth parameter specifying the number of iterations.
Throughout this appendix, let K be a fixed base set (typically a field such as ℝ), and write [d] := {1, 2, …, d} for d ∈ ℕ≥1. For a shape vector d = (d_1, …, d_n) ∈ (ℕ≥1)^n, set
I_d := [d_1] × ⋯ × [d_n].
Definition A1
(Tensor as a function). Let n ≥ 1 and d ∈ (ℕ≥1)^n. A K-valued tensor of shape d is a function
A : I_d → K, (i_1, …, i_n) ↦ a_{i_1⋯i_n}.
We denote the set of all such tensors by
Ten_d(K) := K^{I_d} ≅ K^{d_1 × ⋯ × d_n}.
(When n = 1 this is the space of vectors of length d_1, and when n = 2 it is the space of d_1 × d_2 matrices.)
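As an illustration (not part of the formal development), Definition A1 can be realized directly in Python by storing a tensor as a map from multi-indices in I_d to values in K; the shape (2, 3) and the entry formula below are arbitrary choices for this sketch.

```python
import itertools

def tensor_indices(shape):
    """All multi-indices (i_1, ..., i_n) of I_d, 1-based as in Definition A1."""
    return itertools.product(*(range(1, d + 1) for d in shape))

# A K-valued tensor of shape d = (2, 3) over K = R, stored as a dict I_d -> float.
A = {idx: 0.1 * (idx[0] + idx[1]) for idx in tensor_indices((2, 3))}

print(len(A))      # |I_d| = 2 * 3 = 6
print(A[(2, 3)])   # the entry a_{23}
```

The dict-of-multi-indices encoding mirrors the definition "a tensor is a function I_d → K" literally, at the cost of efficiency; an array library realizes the same object as an n-way array.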
Definition A2
(MetaTensor). Fix two shapes
d = ( d 1 , , d n ) N 1 n , e = ( e 1 , , e m ) N 1 m .
A ( d , e ) -MetaTensor over K is a tensor whose entries are themselves K -valued tensors of shape e , namely an element
X Ten d Ten e ( K ) = Ten e ( K ) I d .
Equivalently, X is a family X i 1 i n ( i 1 , , i n ) I d such that each entry X i 1 i n is an K -valued tensor of shape e . In coordinates, one may write
X i 1 i n = x i 1 i n ; j 1 j m ( j 1 , , j m ) I e ,
so a MetaTensor is a nested array indexed first by d and then by e .
Remark A1
(Canonical flattening). When the inner shape e is fixed (as in Definition A2), there is a canonical bijection (“flattening”)
Flat_{d,e} : Ten_d(Ten_e(K)) → Ten_{(d,e)}(K),
where (d, e) denotes concatenation of the shape vectors, defined by
Flat_{d,e}(X)_{i_1⋯i_n j_1⋯j_m} := (X_{i_1⋯i_n})_{j_1⋯j_m}.
Thus, a MetaTensor can be viewed either as a “tensor of tensors” or as an ordinary higher-order tensor whose modes are the concatenation of the outer and inner modes.
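A minimal sketch of Definition A2 and Remark A1, assuming NumPy is available; the shapes d = (2, 2) and e = (3,), and the use of 0-based indices (rather than the 1-based indices of I_d), are conventions of this illustration only.

```python
import itertools
import numpy as np

d, e = (2, 2), (3,)   # outer shape d, inner shape e

# A (d, e)-MetaTensor: each outer entry X[(i1, i2)] is an e-shaped tensor.
X = {idx: np.arange(3, dtype=float) + 10 * idx[0] + idx[1]
     for idx in itertools.product(range(2), repeat=2)}

# Canonical flattening Flat_{d,e}: an ordinary tensor of shape (d, e) = (2, 2, 3).
flat = np.empty(d + e)
for idx, inner in X.items():
    flat[idx] = inner

print(flat.shape)   # (2, 2, 3)
# Coordinates agree: Flat(X)[i1, i2, j] == X[(i1, i2)][j]
```

The loop is exactly the coordinate rule of Remark A1: the flattened tensor's modes are the concatenation of the outer and inner modes.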
Remark A2
(MetaVector/MetaMatrix as special cases of MetaTensor). Fix a base field (or commutative semiring) K and write Ten_d(K) for the set of d = (d_1, …, d_n)-shaped tensors with entries in K.
  • MetaVector as a MetaTensor. A MetaVector is a 1st-order MetaTensor: it is a vector whose entries are tensors, i.e., an element of Ten_d(K)^m for some m ≥ 1. Equivalently, it is a MetaTensor of outer shape (m) with entry-domain Ten_d(K). Via canonical flattening, a MetaVector may be viewed as an ordinary tensor in Ten_{(m, d_1, …, d_n)}(K).
  • MetaMatrix as a MetaTensor. A MetaMatrix is a 2nd-order MetaTensor: it is a matrix whose entries are tensors, i.e., an element of Ten_d(K)^{p×q} for some p, q ≥ 1. Equivalently, it is a MetaTensor of outer shape (p, q) with entry-domain Ten_d(K). Via canonical flattening, a MetaMatrix may be viewed as an ordinary tensor in Ten_{(p, q, d_1, …, d_n)}(K).
Definition A3
(Iterated MetaTensor of depth t). Fix an integer t ≥ 1, a base set K, and a sequence of shapes
d^(1), d^(2), …, d^(t), with d^(s) ∈ (ℕ≥1)^{n_s} (1 ≤ s ≤ t).
Define recursively the level-s tensor universe Ten^(s) by
Ten^(0) := K, Ten^(s) := Ten_{d^(s)}(Ten^(s−1)) (1 ≤ s ≤ t).
An Iterated MetaTensor of depth t (with level shapes d^(1), …, d^(t)) is an element
X^(t) ∈ Ten^(t).
Equivalently, X^(t) is a tensor of shape d^(t) whose entries are depth-(t−1) Iterated MetaTensors, and so on down to scalars in K at depth 0.
Remark A3
(Flattening an Iterated MetaTensor). By repeated application of Remark A1, an Iterated MetaTensor of depth t with fixed level shapes admits a canonical flattening into an ordinary K-valued tensor whose shape is the concatenation of all level shapes:
Ten^(t) ≅ Ten_{(d^(t), d^(t−1), …, d^(1))}(K),
with coordinate identification obtained by concatenating the multi-indices across levels.
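The shape-concatenation in Remark A3 can be checked mechanically; the following sketch (assuming NumPy, with illustrative level shapes d^(1) = (2,), d^(2) = (3, 2), d^(3) = (2,)) realizes each level directly by its flattened shape.

```python
import numpy as np

# Level shapes d^(1), d^(2), d^(3), listed innermost-first.
level_shapes = [(2,), (3, 2), (2,)]

def iterated_meta_tensor_shape(level_shapes):
    """Flattened shape of Ten^(t): concatenation (d^(t), d^(t-1), ..., d^(1))."""
    shape = ()
    for d in level_shapes:   # each new level is prepended as the outermost block
        shape = d + shape
    return shape

# A depth-3 Iterated MetaTensor, realized via its canonical flattening.
X3 = np.zeros(iterated_meta_tensor_shape(level_shapes))
print(X3.shape)   # (2, 3, 2, 2)
```

Realizing only the flattened form loses nothing here, since Remark A3 identifies Ten^(t) with the ordinary tensor space of the concatenated shape.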
Remark A4
(Iterated MetaVector/Iterated MetaMatrix as special cases of Iterated MetaTensor). Fix a base entry set K and a depth parameter t ∈ ℕ. An Iterated MetaTensor of depth t is obtained by repeatedly taking “tensors whose entries are tensors” for t iterations (starting from scalars in K).
  • Iterated MetaVector. An Iterated MetaVector of depth t is the special case in which, at each iteration level, the outer tensor shape is 1st-order (a vector shape). In other words, it is an Iterated MetaTensor where every outer layer is a vector of the previous-layer objects.
  • Iterated MetaMatrix. An Iterated MetaMatrix of depth t is the special case in which, at each iteration level, the outer tensor shape is 2nd-order (a matrix shape). Equivalently, it is an Iterated MetaTensor where every outer layer is a matrix of the previous-layer objects.
In both cases, repeated canonical flattening identifies these objects with ordinary (single-layer) tensors whose order and mode sizes are obtained by concatenating all outer shapes with the base-level inner shapes.

Appendix B. Appendix: HyperTensor and SuperHyperTensor

Research on hierarchical concepts such as HyperStructures and SuperHyperStructures has also been actively pursued in recent years [36,37,38,39]. A HyperTensor is a tensor-indexed map that assigns to each multi-index a subset of a base set, allowing multiple outcomes per entry. A SuperHyperTensor assigns to each tensor entry an element of an iterated powerset, capturing nested sets of sets with depth-controlled uncertainty.
Throughout, let S be a nonempty base set. For an integer d ≥ 1, write
[d] := {1, 2, …, d}.
For an order n ≥ 1 and a shape vector d = (d_1, …, d_n) with d_i ≥ 1, define the multi-index set
I_d := [d_1] × ⋯ × [d_n].
Definition A4
(HyperTensor). Let S be a nonempty set and fix an order n ≥ 1 and shape d = (d_1, …, d_n). A (set-valued) HyperTensor over S of shape d is a mapping
T : I_d → P(S),
where P(S) denotes the powerset of S.
Equivalently, T is an n-way array
T = (T_{i_1⋯i_n})_{(i_1,…,i_n) ∈ I_d}, T_{i_1⋯i_n} ∈ P(S),
whose entries are subsets of S rather than single elements. If one requires nonempty outcomes, one may instead impose T_{i_1⋯i_n} ∈ P(S) ∖ {∅} for all indices.
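A small illustrative encoding of Definition A4, with an arbitrary base set S and shape (2, 2) chosen for this sketch; entries are frozensets, so that the empty outcome permitted by the default definition is representable.

```python
import itertools

S = {"a", "b", "c"}

# A set-valued HyperTensor T : I_d -> P(S) of shape (2, 2), as a dict of frozensets.
T = {
    (1, 1): frozenset({"a"}),
    (1, 2): frozenset({"a", "b"}),
    (2, 1): frozenset(),            # the empty outcome is allowed by default
    (2, 2): frozenset({"b", "c"}),
}

# Check that T is total on I_d and that every entry is a subset of S.
I_d = set(itertools.product(range(1, 3), range(1, 3)))
assert set(T) == I_d
assert all(entry <= S for entry in T.values())
```

Imposing the nonempty variant of the definition would amount to additionally asserting `all(entry for entry in T.values())`.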
Definition A5
(Iterated powersets). For a nonempty set S, define iterated powersets recursively by
P^0(S) := S, P^{t+1}(S) := P(P^t(S)) (t ∈ ℕ).
Thus P^1(S) = P(S), P^2(S) = P(P(S)), and so on. Optionally, the nonempty variant is
(P^0)^*(S) := S, (P^{t+1})^*(S) := P((P^t)^*(S)) ∖ {∅}.
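The recursion of Definition A5 can be tested on a tiny base set; this sketch computes P^t(S) explicitly, which is feasible only for very small S, since the sizes grow as iterated exponentials.

```python
from itertools import chain, combinations

def powerset(s):
    """P(s), encoded as a frozenset of frozensets."""
    s = list(s)
    return frozenset(
        frozenset(c)
        for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

def iterated_powerset(s, t):
    """P^t(s): the t-fold iterated powerset, with P^0(s) = s itself."""
    out = frozenset(s)
    for _ in range(t):
        out = powerset(out)
    return out

S = {0, 1}
print(len(iterated_powerset(S, 1)))   # |P(S)|   = 2^2 = 4
print(len(iterated_powerset(S, 2)))   # |P^2(S)| = 2^4 = 16
```

Already |P^3(S)| = 2^16 = 65536 for a two-element S, which is why depth-t structures are usually manipulated symbolically rather than enumerated.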
Definition A6
(SuperHyperTensor). Let S be a nonempty set. Fix a depth t ≥ 1, an order n ≥ 1, and a shape d = (d_1, …, d_n). A SuperHyperTensor over S of depth t and shape d is a mapping
T^(t) : I_d → P^t(S),
i.e., an n-way array whose entries are t-fold iterated subsets of S (sets of … of sets of elements of S, with t levels of powerset).
If one wishes to exclude empty outcomes at every level, one may require T^(t)_{i_1⋯i_n} ∈ (P^t)^*(S) for all indices.
Sanity checks.
  • When t = 1 , a depth-1 SuperHyperTensor is exactly a HyperTensor in the sense of Definition A4.
  • When one informally allows t = 0, the codomain becomes P^0(S) = S, recovering an ordinary S-valued tensor I_d → S.
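To illustrate Definition A6 and the sanity checks above, the following sketch encodes a depth-2 SuperHyperTensor of shape (2,) over S = {0, 1}, together with a hypothetical helper `in_iterated_powerset` (not from the paper) that verifies an entry lies in P^t(S).

```python
S = frozenset({0, 1})

# A depth-2 SuperHyperTensor of shape (2,): each entry lies in P^2(S),
# i.e., each entry is a set of subsets of S.
T2 = {
    (1,): frozenset({frozenset(), frozenset({0})}),
    (2,): frozenset({frozenset({0, 1})}),
}

def in_iterated_powerset(x, base, t):
    """Membership test x in P^t(base), with sets encoded as frozensets."""
    if t == 0:
        return x in base          # depth 0: an ordinary element of S
    return isinstance(x, frozenset) and all(
        in_iterated_powerset(y, base, t - 1) for y in x)

assert all(in_iterated_powerset(entry, S, 2) for entry in T2.values())
# With t = 1, the same check tests entries of an ordinary HyperTensor.
```

Setting t = 1 in the helper recovers the first sanity check (a depth-1 SuperHyperTensor is a HyperTensor), and t = 0 the second (an ordinary S-valued tensor).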

References

  1. Fujita, T.; Smarandache, F. A Unified Framework for U-Structures and Functorial Structure: Managing Super, Hyper, SuperHyper, Tree, and Forest Uncertain Over/Under/Off Models. Neutrosophic Sets and Systems 2025, 91, 337–380. [Google Scholar]
  2. Zadeh, L.A. Fuzzy sets. Information and control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  3. Atanassov, K.T. Circular intuitionistic fuzzy sets. Journal of Intelligent & Fuzzy Systems 2020, 39, 5981–5986. [Google Scholar]
  4. Akram, M. Bipolar fuzzy graphs. Information sciences 2011, 181, 5548–5564. [Google Scholar] [CrossRef]
  5. Torra, V. Hesitant fuzzy sets. International journal of intelligent systems 2010, 25, 529–539. [Google Scholar] [CrossRef]
  6. Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 2009 IEEE international conference on fuzzy systems. IEEE, 2009; pp. 1378–1382. [Google Scholar]
  7. Zadeh, L.A. Fuzzy logic, neural networks, and soft computing. In Fuzzy sets, fuzzy logic, and fuzzy systems: selected papers by Lotfi A Zadeh; World Scientific, 1996; pp. 775–782. [Google Scholar]
  8. Smarandache, F. A unifying field in Logics: Neutrosophic Logic. In Philosophy; American Research Press, 1999; pp. 1–141. [Google Scholar]
  9. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single Valued Neutrosophic Sets; Infinite Study, 2010. [Google Scholar]
  10. Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Single valued neutrosophic graphs. Journal of New theory 2016, 86–101. [Google Scholar]
  11. Wang, H.; Smarandache, F.; Sunderraman, R.; Zhang, Y.Q. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing; Infinite Study, 2005; Vol. 5. [Google Scholar]
  12. Khan, Q.; Liu, P.; Mahmood, T. Some generalized dice measures for double-valued neutrosophic sets and their applications. Mathematics 2018, 6, 121. [Google Scholar] [CrossRef]
  13. Wei, L. An integrated decision-making framework for blended teaching quality evaluation in college English courses based on the double-valued neutrosophic sets. J. Intell. Fuzzy Syst. 2023, 45, 3259–3266. [Google Scholar] [CrossRef]
  14. Abdel-Basset, M.; Gamal, A.; Son, L.H.; Smarandache, F. A bipolar neutrosophic multi criteria decision making framework for professional selection. Applied Sciences 2020, 10, 1202. [Google Scholar] [CrossRef]
  15. Ali, M.; Son, L.H.; Deli, I.; Tien, N.D. Bipolar neutrosophic soft sets and applications in decision making. J. Intell. Fuzzy Syst. 2017, 33, 4077–4087. [Google Scholar] [CrossRef]
  16. Smarandache, F. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-revisited; Infinite study . 2018. [Google Scholar]
  17. Azeem, M.; Rashid, H.; Jamil, M.K.; Gütmen, S.; Tirkolaee, E.B. Plithogenic fuzzy graph: A study of fundamental properties and potential applications. Journal of Dynamics and Games 2024, 0–0. [Google Scholar] [CrossRef]
  18. Smarandache, F. Neutrosophy: neutrosophic probability, set, and logic: analytic synthesis & synthetic analysis; 1998. [Google Scholar]
  19. Fujita, T.; Smarandache, F. A Dynamic Survey of Fuzzy, Intuitionistic Fuzzy, Neutrosophic, Plithogenic, and Extensional Sets; Neutrosophic Science International Association (NSIA), 2025. [Google Scholar]
  20. Pourghasemi, H.R.; Gayen, A.; Lasaponara, R.; Tiefenbacher, J.P. Application of learning vector quantization and different machine learning techniques to assessing forest fire influence factors and spatial modelling. Environmental research 2020, 184, 109321. [Google Scholar] [CrossRef] [PubMed]
  21. Pisner, D.A.; Schnyer, D.M. Support vector machine. In Machine learning; Elsevier, 2020; pp. 101–121. [Google Scholar]
  22. Zhang, X.D. A matrix algebra approach to artificial intelligence; Springer, 2020. [Google Scholar]
  23. Mökander, J.; Sheth, M.; Watson, D.S.; Floridi, L. The switch, the ladder, and the matrix: models for classifying AI systems. Minds and Machines 2023, 33, 221–248. [Google Scholar] [CrossRef]
  24. Furstenberg, H.; Kesten, H. Products of random matrices. The Annals of Mathematical Statistics 1960, 31, 457–469. [Google Scholar] [CrossRef]
  25. Sokolnikoff, I.S. Tensor analysis; Wiley New York, 1964. [Google Scholar]
  26. McConnell, A.J. Applications of tensor analysis; Courier Corporation, 2014. [Google Scholar]
  27. Simmonds, J.G. A brief on tensor analysis; Springer Science & Business Media, 1997. [Google Scholar]
  28. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM review 2009, 51, 455–500. [Google Scholar] [CrossRef]
  29. Bretto, A. Hypergraph theory. An introduction. In Mathematical Engineering; Springer: Cham, 2013; Volume 1. [Google Scholar]
  30. Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence 2019, Vol. 33, pp. 3558–3565. [Google Scholar] [CrossRef]
  31. Smarandache, F. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra; Infinite Study . 2020. [Google Scholar]
  32. Smarandache, F. Introduction to the n-SuperHyperGraph-the most general form of graph today; Infinite Study . 2022. [Google Scholar]
  33. Fujita, T. MetaStructure, Meta-HyperStructure, and Meta-SuperHyperStructure. Journal of Computers and Applications 2025, 1, 1–22. [Google Scholar] [CrossRef]
  34. Fujita, T. MetaHyperGraphs, MetaSuperHyperGraphs, and Iterated MetaGraphs: Modeling Graphs of Graphs, Hypergraphs of Hypergraphs, Superhypergraphs of Superhypergraphs, and Beyond. Current Research in Interdisciplinary Studies 2025, 4, 1–23. [Google Scholar]
  35. Fujita, T. Some Meta-Graph Structures: Mixed Graph, DiHyperGraph, Knowledge Graph, Intersection Graph, and Chemical Graph. 2025. [Google Scholar]
  36. Al Tahan, M.; Davvaz, B. Weak chemical hyperstructures associated to electrochemical cells. Iranian Journal of Mathematical Chemistry 2018, 9, 65–75. [Google Scholar]
  37. Vougiouklis, T. Hv-groups defined on the same set. Discrete Mathematics 1996, 155, 259–265. [Google Scholar] [CrossRef]
  38. Smarandache, F. Foundation of SuperHyperStructure & Neutrosophic SuperHyperStructure. Neutrosophic Sets and Systems 2024, 63, 21. [Google Scholar]
  39. Smarandache, F. Foundation of revolutionary topologies: An overview, examples, trend analysis, research issues, challenges, and future directions. 2024. [Google Scholar] [CrossRef]
Table 1. Overview of scalar-, vector-, matrix-, and tensor-valued Uncertain Sets.
Concept | Degree-domain (typical form) | Membership map and interpretation
Uncertain Set (scalar-valued) | Dom(M) ⊆ [0,1]^k (often k = 1) | A U-Set of type M on a universe X is a map μ_M : X → Dom(M). Each element x ∈ X receives a degree μ_M(x); choosing M recovers fuzzy (k = 1), neutrosophic (k = 3), plithogenic (k = m), etc.
Vector-valued Uncertain Set | Dom(M) ⊆ ([0,1]^d)^k ≅ [0,1]^{dk} | A vector-valued U-Set assigns μ_M : X → Dom(M), where μ_M(x) is a k-tuple of vectors in [0,1]^d. This represents multi-index degrees (e.g., criteria, agents, time points) at the level of vectors.
Matrix-valued Uncertain Set | Dom(M) ⊆ ([0,1]^{p×q})^k ≅ [0,1]^{pqk} | A matrix-valued U-Set assigns μ_M : X → Dom(M), where μ_M(x) is a k-tuple of matrices in [0,1]^{p×q}. This captures two-dimensional structured degrees (e.g., criterion × time, feature × context).
Tensor-valued Uncertain Set | Dom(M) ⊆ ([0,1]^{d_1×⋯×d_n})^k | A tensor-valued U-Set assigns μ_M : X → Dom(M), where μ_M(x) is a k-tuple of nth-order tensors with entries in [0,1]. This accommodates higher-order structured degrees (e.g., agent × criterion × time × scenario).
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.