1. Preliminaries
This section fixes notation and reviews the background material needed in the sequel. Unless stated otherwise, all sets considered in this paper are finite.
1.1. Fuzzy Sets and Neutrosophic Sets
In classical set theory, membership is a two-valued notion: an element either belongs to a set or it does not. In many real situations, however, assessments are gradual rather than strictly yes/no. Fuzzy set theory formalizes graded membership by assigning to each element a degree in the unit interval [2]. As related concepts, intuitionistic fuzzy sets [3], bipolar fuzzy sets [4], and hesitant fuzzy sets [5,6] are well known. We recall their standard definitions.
Definition 1 (Fuzzy Set).
[2,7] Let Y be a nonempty universe. A fuzzy set τ on Y is a mapping
$$\tau : Y \to [0,1].$$
A fuzzy relation on Y is a mapping
$$\delta : Y \times Y \to [0,1].$$
We say that δ is a fuzzy relation on τ if, for all $x, y \in Y$,
$$\delta(x, y) \le \min\{\tau(x), \tau(y)\}.$$
Neutrosophic sets further refine graded membership by recording three (typically independent) components that quantify support, hesitation, and rejection. These components are commonly interpreted as the degrees of truth, indeterminacy, and falsity, respectively [8,9,10,11]. Related notions are also well known, including Double-Valued Neutrosophic Sets [12,13], Bipolar Neutrosophic Sets [14,15], and Plithogenic Sets [16,17]. We adopt the widely used single-valued formulation.
Definition 2 (Neutrosophic Set).
[8,18] Let X be a nonempty set. A Neutrosophic Set (NS) A on X is specified by three functions
$$T_A, I_A, F_A : X \to [0,1],$$
where, for each $x \in X$, the values $T_A(x)$, $I_A(x)$, and $F_A(x)$ represent, respectively, the degrees of truth, indeterminacy, and falsity of x with respect to A. These degrees satisfy
$$0 \le T_A(x) + I_A(x) + F_A(x) \le 3.$$
1.2. Uncertain Set and Functorial Set
An Uncertain Set provides a uniform way to attach “uncertainty values” to elements, where the values take their meaning from a chosen degree-domain. By selecting the degree-domain appropriately, one can recover fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and many other degree-based models as special cases [1,19].
Definition 3 (Uncertain Set (U-Set)).
[1] Let X be a nonempty universe and let M be an uncertainty model with degree-domain $D_M \subseteq [0,1]^k$ for some integer $k \ge 1$. An Uncertain Set of type M (briefly, a U-Set of type M) on X is a mapping
$$\mu : X \to D_M.$$
For each $x \in X$, the value $\mu(x)$ is called the M-membership degree (or M-uncertainty value) of x. Different choices of M yield the usual fuzzy, intuitionistic fuzzy, neutrosophic, plithogenic, and other degree-based set models.
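As a minimal computational sketch (our own illustration, not notation from [1]), a U-Set can be represented as a mapping from a finite universe into a box-shaped degree-domain of k-tuples in [0,1]; the class name `USet` and the domain check are hypothetical choices made for this sketch.

```python
from typing import Dict, Tuple

Degree = Tuple[float, ...]  # a point of [0,1]^k

class USet:
    """A minimal U-Set sketch: a map from a finite universe X into a
    degree-domain consisting of k-tuples with entries in [0,1]."""

    def __init__(self, k: int, assignment: Dict[str, Degree]):
        self.k = k
        for x, d in assignment.items():
            if len(d) != k or not all(0.0 <= v <= 1.0 for v in d):
                raise ValueError(f"degree of {x!r} is not in [0,1]^{k}")
        self.assignment = assignment

    def degree(self, x: str) -> Degree:
        """Return the M-membership degree mu(x)."""
        return self.assignment[x]

# k = 1 recovers a (scalar) fuzzy set; k = 3 can encode a neutrosophic triple.
fuzzy_like = USet(1, {"a": (0.7,), "b": (0.2,)})
neutro_like = USet(3, {"a": (0.7, 0.1, 0.2)})
print(fuzzy_like.degree("a"), neutro_like.degree("a"))
```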
2. Main Results
In this section, we define the vector-, matrix-, and tensor-valued Uncertain Sets summarized in
Table 1 and examine their fundamental properties.
Note that vectors [20,21], matrices [22,23,24], and tensors [25,26,27,28] have been studied extensively and have a wide range of applications in fields such as machine learning and artificial intelligence. Moreover, for reference, we also examine in Appendix A the notion of a MetaTensor (a tensor whose entries are tensors), as well as its iterated extension, the Iterated MetaTensor (a tensor of … of tensors). In addition, hierarchical tensors referred to as HyperTensors and SuperHyperTensors are examined in Appendix B.
2.1. Vector-valued Fuzzy, Neutrosophic, and Uncertain Sets
In many applications, a single scalar degree in [0,1] is too coarse: one may wish to record several aspects of uncertainty simultaneously (e.g., confidence, reliability, relevance, or scores coming from multiple sensors). A simple way to model such situations is to replace the unit interval by a vector-valued degree domain.
Definition 4 (Vector-valued unit interval and componentwise order).
Fix an integer $d \ge 1$ and define the d-dimensional unit cube
$$[0,1]^d = \{ (u_1, \dots, u_d) : u_j \in [0,1] \text{ for } j = 1, \dots, d \}.$$
For $u = (u_1, \dots, u_d)$ and $v = (v_1, \dots, v_d)$ in $[0,1]^d$, define the componentwise order
$$u \le v \iff u_j \le v_j \text{ for all } j = 1, \dots, d.$$
We also use componentwise operations: for instance,
$$\min\{u, v\} = \bigl(\min\{u_1, v_1\}, \dots, \min\{u_d, v_d\}\bigr), \qquad \max\{u, v\} = \bigl(\max\{u_1, v_1\}, \dots, \max\{u_d, v_d\}\bigr),$$
and we write $\mathbf{0} = (0, \dots, 0)$ and $\mathbf{1} = (1, \dots, 1)$.
Definition 5 (Vector-valued fuzzy set).
Let X be a nonempty universe and fix $d \ge 1$. A vector-valued fuzzy set of dimension d on X is a mapping
$$\mu : X \to [0,1]^d.$$
For each $x \in X$, the value $\mu(x)$ is interpreted as a d-component membership/uncertainty vector associated with x. When $d = 1$, this definition reduces to the classical (scalar) fuzzy set.
Optionally, a vector-valued fuzzy relation on X is a mapping
$$\delta : X \times X \to [0,1]^d.$$
We say that δ is compatible with μ if, for all $x, y \in X$,
$$\delta(x, y) \le \min\{\mu(x), \mu(y)\},$$
where the minimum is taken componentwise.
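A minimal Python sketch (with illustrative values of our own choosing) of a vector-valued fuzzy set together with the componentwise compatibility check:

```python
import numpy as np

# A vector-valued fuzzy set is a map x -> mu(x) in [0,1]^d, and a relation
# delta is compatible with mu when delta(x, y) <= min(mu(x), mu(y)) holds
# componentwise.
d = 3
mu = {
    "x1": np.array([0.8, 0.4, 0.6]),
    "x2": np.array([0.5, 0.9, 0.3]),
}
delta = {
    ("x1", "x2"): np.array([0.4, 0.3, 0.2]),
}

def is_compatible(delta, mu):
    """Check the componentwise condition delta(x, y) <= min(mu(x), mu(y))."""
    return all(
        np.all(val <= np.minimum(mu[x], mu[y]))
        for (x, y), val in delta.items()
    )

print(is_compatible(delta, mu))  # True for the values above
```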
Definition 6 (Vector-valued neutrosophic set).
Let X be a nonempty universe and fix $d \ge 1$. A vector-valued neutrosophic set of dimension d on X consists of three mappings
$$T_A, I_A, F_A : X \to [0,1]^d,$$
where, for each $x \in X$, the vectors $T_A(x)$, $I_A(x)$, and $F_A(x)$ represent, respectively, the truth, indeterminacy, and falsity degrees of x.
These degrees satisfy the componentwise constraint
$$\mathbf{0} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1},$$
where addition and inequalities are interpreted componentwise. When $d = 1$, this is exactly the usual single-valued neutrosophic set constraint $0 \le T_A(x) + I_A(x) + F_A(x) \le 3$.
Example 1 (Real-life example of a vector-valued neutrosophic set).
Consider a hospital triage unit that must assess whether a patient should be treated as a high-risk respiratory case. Let X be the set of patients arriving in a given day. Fix $d = 3$ and interpret the three dimensions as independent evidence channels: (1) clinical symptoms, (2) a rapid diagnostic test, and (3) chest imaging.
A vector-valued neutrosophic set on X assigns to each patient $x \in X$ three vectors $T_A(x), I_A(x), F_A(x) \in [0,1]^3$, where, for each channel $j \in \{1, 2, 3\}$:
$T_A(x)_j$ measures how strongly channel j supports the statement “patient x is a high-risk respiratory case”;
$F_A(x)_j$ measures how strongly channel j supports the negation “patient x is not a high-risk respiratory case”;
$I_A(x)_j$ measures the indeterminacy in channel j (e.g., missing data, borderline values, poor image quality, or conflicting findings).
For instance, suppose a patient x presents clear symptoms but has an inconclusive rapid test and a moderately suggestive image. One may encode this by choosing the channel entries of $T_A(x)$, $I_A(x)$, and $F_A(x)$ so that the symptom channel shows strong support with low uncertainty, the rapid-test channel is ambiguous and leans against high risk, and the imaging channel shows moderate support with moderate uncertainty.
The componentwise constraint
$$\mathbf{0} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1}$$
holds automatically here because each coordinate lies in $[0,1]$, hence each coordinate-sum lies in $[0,3]$. Such a representation is useful when decisions must be justified by multiple evidence channels: it keeps track of support, opposition, and uncertainty per channel, rather than collapsing everything into a single score.
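The per-channel constraint can be checked mechanically; the following sketch uses hypothetical values of our own choosing (they are not the encoding discussed above):

```python
import numpy as np

# Hypothetical per-channel degrees for one patient
# (channels: symptoms, rapid test, imaging). Illustrative values only.
T = np.array([0.9, 0.3, 0.6])  # support for "high-risk respiratory case"
I = np.array([0.1, 0.6, 0.3])  # indeterminacy per channel
F = np.array([0.1, 0.5, 0.2])  # support for the negation

# Componentwise neutrosophic constraint: 0 <= T_j + I_j + F_j <= 3 per channel.
sums = T + I + F
assert np.all((0.0 <= sums) & (sums <= 3.0))
print(sums)  # per-channel totals, each automatically in [0, 3]
```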
Definition 7 (Vector-valued uncertainty model).
Fix integers $d \ge 1$ and $k \ge 1$. A vector-valued uncertainty model of shape $(d, k)$ is specified by a degree-domain
$$D_M \subseteq \bigl([0,1]^d\bigr)^k.$$
Elements of $D_M$ are viewed as k-tuples of d-dimensional degree vectors.
Definition 8 (Vector-valued Uncertain Set (vector-valued U-Set)).
Let X be a nonempty universe and let M be a vector-valued uncertainty model with
$$D_M \subseteq \bigl([0,1]^d\bigr)^k.$$
A vector-valued Uncertain Set of type M (briefly, a vector-valued U-Set of type M) on X is a mapping
$$\mu : X \to D_M.$$
For each $x \in X$, the value $\mu(x)$ is called the M-membership degree (or M-uncertainty value) of x.
Remark 1 (Special cases and interpretation). The vector-valued U-Set template subsumes the preceding notions by suitable choices of $(d, k, D_M)$.
- (i) If $k = 1$ and $D_M = [0,1]^d$, then a vector-valued U-Set of type M is exactly a vector-valued fuzzy set $\mu : X \to [0,1]^d$.
- (ii) If $k = 3$ and $D_M \subseteq ([0,1]^d)^3$, then a vector-valued U-Set of type M can be written as $\mu(x) = (T(x), I(x), F(x))$ and thus corresponds to a vector-valued neutrosophic description.
- (iii) The parameter d represents the number of simultaneously tracked components (criteria, sensors, or features). The set $D_M$ can impose additional structural constraints beyond simple box constraints, for example admissible correlations, normalization conditions, or application-specific feasibility regions.
2.2. Matrix-valued Fuzzy, Neutrosophic, and Uncertain Sets
In some settings, uncertainty is naturally structured as a matrix rather than a scalar or a vector. Typical examples include uncertainty indexed simultaneously by (criterion × scenario), (role × time), (sensor × feature), or (agent × attribute). To capture such two-dimensional structure, we replace the unit interval by a matrix-valued degree domain.
Definition 9 (Matrix-valued unit cube and componentwise order).
Fix integers $p, q \ge 1$ and define
$$[0,1]^{p \times q} = \{ A = (a_{ij}) : a_{ij} \in [0,1],\ 1 \le i \le p,\ 1 \le j \le q \}.$$
For $A = (a_{ij})$ and $B = (b_{ij})$ in $[0,1]^{p \times q}$, define the componentwise order
$$A \le B \iff a_{ij} \le b_{ij} \text{ for all } i, j.$$
We also use componentwise operations:
$$\min\{A, B\} = \bigl(\min\{a_{ij}, b_{ij}\}\bigr), \qquad \max\{A, B\} = \bigl(\max\{a_{ij}, b_{ij}\}\bigr), \qquad A + B = (a_{ij} + b_{ij}).$$
Let $\mathbf{0}_{p \times q}$ and $\mathbf{1}_{p \times q}$ denote the zero and all-ones matrices, respectively.
Definition 10 (Matrix-valued fuzzy set).
Let X be a nonempty universe and fix $p, q \ge 1$. A matrix-valued fuzzy set of shape $p \times q$ on X is a mapping
$$\mu : X \to [0,1]^{p \times q}.$$
For each $x \in X$, the matrix $\mu(x)$ is interpreted as a two-dimensional membership/uncertainty profile of x.
Optionally, a matrix-valued fuzzy relation on X is a mapping
$$\delta : X \times X \to [0,1]^{p \times q}.$$
We say that δ is compatible with μ if, for all $x, y \in X$,
$$\delta(x, y) \le \min\{\mu(x), \mu(y)\},$$
where the minimum is taken componentwise.
Definition 11 (Matrix-valued neutrosophic set).
Let X be a nonempty universe and fix $p, q \ge 1$. A matrix-valued neutrosophic set of shape $p \times q$ on X consists of three mappings
$$T_A, I_A, F_A : X \to [0,1]^{p \times q},$$
where, for each $x \in X$, the matrices $T_A(x)$, $I_A(x)$, and $F_A(x)$ represent, respectively, the truth, indeterminacy, and falsity degrees of x in a two-dimensional index system.
These degrees satisfy the componentwise constraint
$$\mathbf{0}_{p \times q} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1}_{p \times q},$$
where addition and inequalities are interpreted componentwise. When $p = q = 1$, this reduces to the usual single-valued neutrosophic set constraint $0 \le T_A(x) + I_A(x) + F_A(x) \le 3$.
Example 2 (Real-life example of a matrix-valued neutrosophic set). Consider an enterprise security team that must assess whether each software service should be classified as high cyber-risk for prioritizing remediation. Let X be the set of services (or applications) deployed in the organization.
Fix two indexing dimensions: the rows index $p = 3$ evidence sources (static scan, runtime telemetry, external intelligence), and the columns index $q = 4$ risk aspects (vulnerability, exposure, impact, exploitability).
For each service $x \in X$, a matrix-valued neutrosophic set assigns three matrices $T_A(x), I_A(x), F_A(x) \in [0,1]^{3 \times 4}$, where entry $(s, a)$ (source s, aspect a) has the following meaning:
$T_A(x)_{s,a}$: how strongly source s supports the statement “service x is high-risk” with respect to aspect a;
$F_A(x)_{s,a}$: how strongly source s supports the opposite statement “service x is not high-risk” with respect to aspect a;
$I_A(x)_{s,a}$: how indeterminate the assessment from source s is for aspect a (e.g., missing logs, noisy signals, stale intelligence, or conflicting findings).
For example, for a particular service x, suppose a static scanner finds multiple severe CVEs, telemetry shows intermittent suspicious behavior but incomplete logging, and external intelligence suggests active exploitation of similar stacks. One may encode this by filling the three matrices row by row, where the first row corresponds to static scan, the second to runtime telemetry, and the third to external intelligence; the four columns correspond, respectively, to vulnerability, exposure, impact, and exploitability.
This representation keeps a two-dimensional audit trail: it separates which source produced the evidence and which risk aspect it concerns, while still recording support (T), uncertainty (I), and opposition (F). The componentwise constraint
$$\mathbf{0}_{3 \times 4} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1}_{3 \times 4}$$
is satisfied because each entry lies in $[0,1]$ and thus each entrywise sum lies in $[0,3]$. Such matrix-valued neutrosophic modeling is useful in practice because remediation decisions can be justified granularly (per source and per aspect) rather than by a single aggregated score.
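A short numpy sketch with hypothetical 3 × 4 matrices (values of our own choosing, not taken from the example above) verifying the entrywise constraint:

```python
import numpy as np

# Hypothetical 3x4 matrices (rows: static scan, telemetry, external intel;
# columns: vulnerability, exposure, impact, exploitability).
T = np.array([[0.9, 0.6, 0.8, 0.7],
              [0.5, 0.4, 0.6, 0.5],
              [0.7, 0.5, 0.6, 0.8]])
I = np.array([[0.1, 0.2, 0.1, 0.2],
              [0.4, 0.5, 0.3, 0.4],
              [0.2, 0.3, 0.3, 0.1]])
F = np.array([[0.1, 0.3, 0.1, 0.2],
              [0.3, 0.4, 0.3, 0.3],
              [0.2, 0.3, 0.2, 0.1]])

# Componentwise neutrosophic constraint 0 <= T + I + F <= 3 (entrywise).
S = T + I + F
assert np.all((0.0 <= S) & (S <= 3.0))
print(S.shape, S.max())  # (3, 4) and a maximum entrywise sum <= 3
```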
Definition 12 (Matrix-valued uncertainty model).
Fix integers $p, q \ge 1$ and $k \ge 1$. A matrix-valued uncertainty model of shape $(p, q, k)$ is specified by a degree-domain
$$D_M \subseteq \bigl([0,1]^{p \times q}\bigr)^k.$$
Elements of $D_M$ are viewed as k-tuples of $p \times q$ matrices, and $D_M$ may encode additional admissibility constraints (e.g., normalization, sparsity, monotonicity, or correlations) beyond entrywise bounds.
Definition 13 (Matrix-valued Uncertain Set (matrix-valued U-Set)).
Let X be a nonempty universe and let M be a matrix-valued uncertainty model with
$$D_M \subseteq \bigl([0,1]^{p \times q}\bigr)^k.$$
A matrix-valued Uncertain Set of type M (briefly, a matrix-valued U-Set of type M) on X is a mapping
$$\mu : X \to D_M.$$
For each $x \in X$, the value $\mu(x)$ is called the M-membership degree (or M-uncertainty value) of x.
Remark 2 (Special cases and interpretation). The matrix-valued U-Set framework subsumes the preceding notions via suitable choices of $(p, q, k, D_M)$:
- (i) If $k = 1$ and $D_M = [0,1]^{p \times q}$, then a matrix-valued U-Set is exactly a matrix-valued fuzzy set $\mu : X \to [0,1]^{p \times q}$.
- (ii) If $k = 3$ and $D_M \subseteq ([0,1]^{p \times q})^3$, then $\mu(x) = (T(x), I(x), F(x))$ yields a matrix-valued neutrosophic description.
- (iii) The indices $(i, j)$ can represent, for example, (criterion × scenario) or (time × role), so the matrix encodes structured uncertainty that is not naturally captured by a single scalar degree. The admissible domain $D_M$ can enforce application-specific constraints on such matrices.
2.3. Tensor-valued Fuzzy, Neutrosophic, and Uncertain Sets
In complex applications, uncertainty may be indexed by more than two dimensions, for example (agent × criterion × time), (sensor × feature × location), or (scenario × stage × resource × constraint). Such structures are naturally represented by multi-order tensors. Accordingly, we replace the unit interval by a tensor-valued degree domain.
Definition 14 (Tensor-valued unit cube).
Fix an integer $n \ge 1$ (the order) and a size vector $\mathbf{m} = (m_1, \dots, m_n)$ with $m_\ell \ge 1$, and define
$$[0,1]^{m_1 \times \cdots \times m_n} = \{ \mathcal{T} = (t_{i_1 \cdots i_n}) : t_{i_1 \cdots i_n} \in [0,1] \}.$$
Elements of $[0,1]^{m_1 \times \cdots \times m_n}$ are called nth-order tensors (with mode sizes $m_1, \dots, m_n$) whose entries lie in $[0,1]$. We write $\mathbf{0}$ and $\mathbf{1}$ for the all-zeros and all-ones tensors of this shape.
Definition 15 (Componentwise order and operations on tensors).
For $\mathcal{A} = (a_{i_1 \cdots i_n})$ and $\mathcal{B} = (b_{i_1 \cdots i_n})$ in $[0,1]^{m_1 \times \cdots \times m_n}$, define the componentwise order
$$\mathcal{A} \le \mathcal{B} \iff a_{i_1 \cdots i_n} \le b_{i_1 \cdots i_n} \text{ for all indices}.$$
We also use componentwise operations:
$$\min\{\mathcal{A}, \mathcal{B}\} = \bigl(\min\{a_{i_1 \cdots i_n}, b_{i_1 \cdots i_n}\}\bigr), \qquad \max\{\mathcal{A}, \mathcal{B}\} = \bigl(\max\{a_{i_1 \cdots i_n}, b_{i_1 \cdots i_n}\}\bigr), \qquad \mathcal{A} + \mathcal{B} = (a_{i_1 \cdots i_n} + b_{i_1 \cdots i_n}).$$
Scalar multiplication is defined entrywise, so that $(c \cdot \mathcal{A})_{i_1 \cdots i_n} = c \, a_{i_1 \cdots i_n}$.
Definition 16 (Tensor-valued fuzzy set).
Let X be a nonempty universe and fix an order $n \ge 1$ and shape $\mathbf{m} = (m_1, \dots, m_n)$. A tensor-valued fuzzy set of shape $\mathbf{m}$ on X is a mapping
$$\mu : X \to [0,1]^{m_1 \times \cdots \times m_n}.$$
For each $x \in X$, the tensor $\mu(x)$ is interpreted as an n-way membership/uncertainty profile of x.
Optionally, a tensor-valued fuzzy relation on X is a mapping
$$\delta : X \times X \to [0,1]^{m_1 \times \cdots \times m_n}.$$
We say that δ is compatible with μ if, for all $x, y \in X$,
$$\delta(x, y) \le \min\{\mu(x), \mu(y)\},$$
where the minimum is taken componentwise.
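A minimal numpy sketch (shapes and values chosen for illustration) of tensor-valued degrees and the componentwise compatibility condition:

```python
import numpy as np

# Tensor-valued degrees of shape (2, 3, 4); a relation delta is compatible
# with mu when delta(x, y) <= min(mu(x), mu(y)) entrywise.
rng = np.random.default_rng(0)
shape = (2, 3, 4)

mu = {"x": rng.uniform(size=shape), "y": rng.uniform(size=shape)}
# Choose delta(x, y) below the entrywise minimum so that compatibility holds.
delta_xy = 0.5 * np.minimum(mu["x"], mu["y"])

compatible = np.all(delta_xy <= np.minimum(mu["x"], mu["y"]))
print(compatible)  # True by construction
```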
Definition 17 (Tensor-valued neutrosophic set).
Let X be a nonempty universe and fix an order $n \ge 1$ and shape $\mathbf{m} = (m_1, \dots, m_n)$. A tensor-valued neutrosophic set of shape $\mathbf{m}$ on X consists of three mappings
$$T_A, I_A, F_A : X \to [0,1]^{m_1 \times \cdots \times m_n},$$
where, for each $x \in X$, the tensors $T_A(x)$, $I_A(x)$, and $F_A(x)$ represent, respectively, the truth, indeterminacy, and falsity degrees of x in an n-dimensional index system.
These degrees satisfy the componentwise constraint
$$\mathbf{0} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1},$$
where addition and inequalities are interpreted componentwise. When $n = 1$ and $m_1 = 1$, this reduces to the usual single-valued neutrosophic set constraint $0 \le T_A(x) + I_A(x) + F_A(x) \le 3$.
Example 3 (Real-life example of a tensor-valued neutrosophic set). Consider a global manufacturer that must assess whether each supplier should be classified as high disruption-risk for procurement planning. Let X be the set of candidate suppliers.
Risk assessment is inherently multi-dimensional: it depends on which criterion is considered, which region the supplier operates in, which time window is relevant, and which scenario (e.g., baseline vs. stress) is assumed. We therefore fix an order $n = 4$ tensor shape
$$\mathbf{m} = (m_1, m_2, m_3, m_4) = (5, 3, 4, 2)$$
with the following index semantics:
$m_1 = 5$ criteria: (1) financial stability, (2) delivery performance, (3) quality history, (4) geopolitical exposure, (5) capacity flexibility;
$m_2 = 3$ regions: Asia, Europe, Americas;
$m_3 = 4$ time horizons: next month, next quarter, next half-year, next year;
$m_4 = 2$ scenarios: baseline, stress (e.g., port congestion or regional conflict escalation).
A tensor-valued neutrosophic set on X assigns to each supplier $x \in X$ three tensors $T_A(x), I_A(x), F_A(x) \in [0,1]^{5 \times 3 \times 4 \times 2}$, where, for each multi-index $(i, r, t, s)$,
$T_A(x)_{i,r,t,s}$ quantifies the degree to which evidence supports the statement “supplier x is high disruption-risk” under criterion i, region r, horizon t, scenario s;
$F_A(x)_{i,r,t,s}$ quantifies the degree to which evidence supports the opposite statement “supplier x is not high disruption-risk” in the same context;
$I_A(x)_{i,r,t,s}$ captures indeterminacy in that context (e.g., incomplete audit data, unreliable forecasts, missing geopolitical intelligence, or conflicting reports).
For a concrete supplier x, the procurement team might encode the following qualitative situation:
In Asia under the stress scenario, geopolitical exposure is judged strongly risky in the short term, but forecasts are uncertain;
In Europe under baseline conditions, delivery performance is consistently reliable with low uncertainty;
For the Americas, capacity flexibility is unclear because the latest capacity audit is outdated.
This can be represented by choosing the corresponding tensor entries accordingly: the first situation corresponds to strong support for high risk (large T) with substantial uncertainty (large I), the second to strong evidence against high risk (large F) with low uncertainty, and the third to an ambiguous capacity assessment (high indeterminacy).
Finally, the componentwise neutrosophic constraint
$$\mathbf{0} \le T_A(x) + I_A(x) + F_A(x) \le 3 \cdot \mathbf{1}$$
is satisfied because each tensor entry lies in $[0,1]$, hence each entrywise sum lies in $[0,3]$. Tensor-valued neutrosophic modeling is practically useful here because it preserves the full context (criterion × region × horizon × scenario) needed for transparent risk governance and targeted mitigation actions.
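A short numpy sketch with the shape (5, 3, 4, 2) from this example and randomly generated illustrative degrees, verifying the entrywise constraint and reading off one context:

```python
import numpy as np

# Illustrative degrees of shape (criteria, regions, horizons, scenarios) = (5, 3, 4, 2).
rng = np.random.default_rng(42)
shape = (5, 3, 4, 2)

T = rng.uniform(0.0, 1.0, size=shape)
I = rng.uniform(0.0, 1.0, size=shape)
F = rng.uniform(0.0, 1.0, size=shape)

# Entrywise constraint 0 <= T + I + F <= 3 holds automatically for entries in [0,1].
S = T + I + F
assert np.all((0.0 <= S) & (S <= 3.0))

# Reading off one context: criterion 4 (geopolitical exposure, index 3),
# region Asia (index 0), horizon next month (index 0), stress scenario (index 1).
idx = (3, 0, 0, 1)
print(T[idx], I[idx], F[idx])
```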
Definition 18 (Tensor-valued uncertainty model).
Fix an order $n \ge 1$, a shape $\mathbf{m} = (m_1, \dots, m_n)$, and an integer $k \ge 1$. A tensor-valued uncertainty model of shape $(\mathbf{m}, k)$ is specified by a degree-domain
$$D_M \subseteq \bigl([0,1]^{m_1 \times \cdots \times m_n}\bigr)^k.$$
Elements of $D_M$ are viewed as k-tuples of tensors of shape $\mathbf{m}$. The admissible set $D_M$ may enforce additional structural constraints (e.g., normalization, low-rank structure, sparsity, monotonicity across modes, or correlations), beyond the entrywise bounds $[0,1]$.
Definition 19 (Tensor-valued Uncertain Set (tensor-valued U-Set)).
Let X be a nonempty universe and let M be a tensor-valued uncertainty model with
$$D_M \subseteq \bigl([0,1]^{m_1 \times \cdots \times m_n}\bigr)^k.$$
A tensor-valued Uncertain Set of type M (briefly, a tensor-valued U-Set of type M) on X is a mapping
$$\mu : X \to D_M.$$
For each $x \in X$, the value $\mu(x)$ is called the M-membership degree (or M-uncertainty value) of x.
Remark 3 (Special cases and interpretation). The tensor-valued U-Set framework recovers vector- and matrix-valued notions as lower-order cases.
- (i) If $n = 1$ and $\mathbf{m} = (d)$, then $[0,1]^{m_1} = [0,1]^d$, and tensor-valued fuzzy/neutrosophic/U-sets reduce to the vector-valued definitions.
- (ii) If $n = 2$ and $\mathbf{m} = (p, q)$, then $[0,1]^{m_1 \times m_2} = [0,1]^{p \times q}$, and tensor-valued fuzzy/neutrosophic/U-sets reduce to the matrix-valued definitions.
- (iii) Taking $k = 1$ and $D_M = [0,1]^{m_1 \times \cdots \times m_n}$ yields a tensor-valued fuzzy set. Taking $k = 3$ and $\mu(x) = (T(x), I(x), F(x))$ yields a tensor-valued neutrosophic description.
3. Additional Results: Tensor-valued Functorial Set
A Functorial Set packages a family of structured sets by specifying a category together with a functor to $\mathbf{Set}$. Objects are interpreted as carriers of structure, while morphisms describe how structure is transported [1]. In particular, the functor assigns to each object a set of admissible structures, and to each morphism a structure-preserving map between such sets.
In the present work we focus on tensor-valued structures. Informally, a tensor-valued functorial set associates to each set X the collection of all tensor-valued assignments on X, and transports those assignments along functions by pullback (precomposition). This perspective is convenient because it makes change-of-variables and restriction maps automatic consequences of functoriality.
Definition 20 (Functorial Set).
[1] Let $\mathcal{C}$ be a category and $F : \mathcal{C} \to \mathbf{Set}$ be a (covariant) functor. For any object $X \in \mathcal{C}$, an F-set over X is an element
$$\mathcal{A} \in F(X).$$
We denote the collection of all F-sets over X simply by $F(X)$. A morphism $f : X \to Y$ in $\mathcal{C}$ induces a pushforward $F(f) : F(X) \to F(Y)$.
Definition 21 (Tensor-valued structure functor).
Fix a tensor degree-domain $\mathbb{T} = [0,1]^{m_1 \times \cdots \times m_n}$. Define a functor
$$\mathcal{F}_{\mathbb{T}} : \mathbf{Set}^{\mathrm{op}} \to \mathbf{Set}$$
by:
$$\mathcal{F}_{\mathbb{T}}(X) = \{\, \mu : X \to \mathbb{T} \,\} = \mathbb{T}^{X},$$
and for each function $f : X \to Y$ in $\mathbf{Set}$ (hence a morphism in $\mathbf{Set}^{\mathrm{op}}$), define
$$\mathcal{F}_{\mathbb{T}}(f) : \mathcal{F}_{\mathbb{T}}(Y) \to \mathcal{F}_{\mathbb{T}}(X), \qquad \mathcal{F}_{\mathbb{T}}(f)(\nu) = \nu \circ f.$$
Proposition 1. $\mathcal{F}_{\mathbb{T}}$ is a well-defined covariant functor $\mathbf{Set}^{\mathrm{op}} \to \mathbf{Set}$.
Proof. Let X be a set. For the identity $\mathrm{id}_X : X \to X$, we have, for any $\nu \in \mathcal{F}_{\mathbb{T}}(X)$,
$$\mathcal{F}_{\mathbb{T}}(\mathrm{id}_X)(\nu) = \nu \circ \mathrm{id}_X = \nu,$$
so $\mathcal{F}_{\mathbb{T}}(\mathrm{id}_X) = \mathrm{id}_{\mathcal{F}_{\mathbb{T}}(X)}$. Next, for composable functions $f : X \to Y$ and $g : Y \to Z$ and any $\nu \in \mathcal{F}_{\mathbb{T}}(Z)$,
$$\mathcal{F}_{\mathbb{T}}(g \circ f)(\nu) = \nu \circ (g \circ f) = (\nu \circ g) \circ f = \mathcal{F}_{\mathbb{T}}(f)\bigl(\mathcal{F}_{\mathbb{T}}(g)(\nu)\bigr).$$
Hence $\mathcal{F}_{\mathbb{T}}(g \circ f) = \mathcal{F}_{\mathbb{T}}(f) \circ \mathcal{F}_{\mathbb{T}}(g)$. Therefore $\mathcal{F}_{\mathbb{T}}$ is a functor. □
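To make the functor action concrete, the following Python sketch (our own illustration, with dictionaries standing in for finite sets and for tensor-valued assignments) represents an assignment as a map into a degree-domain and checks the identity and composition laws for the pullback ν ↦ ν ∘ f:

```python
# F(X) is the set of maps X -> T (represented as Python dicts), and a function
# f : X -> Y is sent to the pullback F(f) : F(Y) -> F(X), nu |-> nu o f.

def pullback(f, nu):
    """Given f : X -> Y (dict) and nu : Y -> T (dict), return nu o f : X -> T."""
    return {x: nu[f[x]] for x in f}

X = ["x1", "x2", "x3"]
f = {"x1": "y1", "x2": "y1", "x3": "y2"}   # f : X -> Y
g = {"y1": "z1", "y2": "z1"}               # g : Y -> Z
nu = {"z1": (0.6, 0.3, 0.1)}               # nu : Z -> T (here a triple stands in for a tensor)

# Identity law: pulling back along id_X changes nothing.
id_X = {x: x for x in X}
mu = pullback(f, pullback(g, nu))
assert pullback(id_X, mu) == mu

# Composition law: F(g o f) = F(f) o F(g).
g_after_f = {x: g[f[x]] for x in f}
assert pullback(g_after_f, nu) == pullback(f, pullback(g, nu))
print(mu)
```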
Definition 22 (Tensor-valued Functorial Set). Fix $\mathbb{T} = [0,1]^{m_1 \times \cdots \times m_n}$. The pair $(\mathbf{Set}^{\mathrm{op}}, \mathcal{F}_{\mathbb{T}})$ is called the tensor-valued functorial set (of shape $\mathbf{m}$). For a set X, an element $\mu \in \mathcal{F}_{\mathbb{T}}(X)$ is called a tensor-valued $\mathcal{F}_{\mathbb{T}}$-set over X, i.e., a tensor-valued assignment $\mu : X \to \mathbb{T}$.
Example 4 (A concrete example of a tensor-valued functorial set).
Fix an order $n \ge 1$ and a shape $\mathbf{m} = (m_1, \dots, m_n)$, and take $\mathbb{T} = [0,1]^{m_1 \times \cdots \times m_n}$ for simplicity. Consider the category $\mathbf{FinSet}$ of finite sets and functions, and work with its opposite category $\mathbf{FinSet}^{\mathrm{op}}$. Put
$$\mathcal{F}_{\mathbb{T}}(X) = \{\, \mu : X \to \mathbb{T} \,\},$$
and for any function $f : X \to Y$ in $\mathbf{FinSet}$ (hence a morphism in $\mathbf{FinSet}^{\mathrm{op}}$),
$$\mathcal{F}_{\mathbb{T}}(f)(\nu) = \nu \circ f \quad \text{for } \nu \in \mathcal{F}_{\mathbb{T}}(Y).$$
Then $(\mathbf{FinSet}^{\mathrm{op}}, \mathcal{F}_{\mathbb{T}})$ is a tensor-valued functorial set in the sense of Definition 22 (restricted from $\mathbf{Set}$ to $\mathbf{FinSet}$).
Interpretation (data refinement by reindexing). Think of an object $X \in \mathbf{FinSet}$ as a finite collection of items (products, patients, services, etc.). An element $\mu \in \mathcal{F}_{\mathbb{T}}(X)$ assigns to each item $x \in X$ a tensor $\mu(x) \in \mathbb{T}$, which may encode a structured, multi-index score profile (e.g., criterion × time × scenario).
Now let $f : X \to Y$ be a coarse-graining map that sends each fine item to its group label (e.g., product ↦ product-category, patient ↦ ward, service ↦ business-unit). Given a tensor assignment $\nu \in \mathcal{F}_{\mathbb{T}}(Y)$ at the coarse level, the functorial pullback $\mathcal{F}_{\mathbb{T}}(f)(\nu) = \nu \circ f$ produces the induced fine-level assignment by replicating the group tensor to every member of that group. Concretely, for each $x \in X$,
$$\bigl(\mathcal{F}_{\mathbb{T}}(f)(\nu)\bigr)(x) = \nu(f(x)).$$
A small explicit instance. Let $Y = \{c_1, c_2\}$ consist of two categories and $X = \{a_1, a_2, a_3, a_4\}$ of four items, with the coarse-graining map $f : X \to Y$ assigning each item to its category. Choose a coarse-level assignment $\nu : Y \to \mathbb{T}$ by fixing one tensor $\nu(c_1)$ and one tensor $\nu(c_2)$. Then the pulled-back assignment $\mu = \nu \circ f$ satisfies $\mu(a) = \nu(c_1)$ for every item a in category $c_1$ and $\mu(a) = \nu(c_2)$ for every item a in category $c_2$.
Why this is a tensor-valued functorial example. This example illustrates the essential functorial mechanism: tensor-valued structure is assigned to each object (as $\mathbb{T}$-valued functions), and transported along morphisms by pullback (precomposition), so that identities and compositions are respected automatically.
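A brief Python sketch of the coarse-graining pullback just described (the item and category names, as well as the 2 × 2 tensor values, are our own illustrative choices):

```python
import numpy as np

# Coarse-graining f : X -> Y sends items to group labels; the pullback
# nu |-> nu o f replicates each group tensor to every member of that group.
f = {"a1": "c1", "a2": "c1", "a3": "c2", "a4": "c2"}

# Coarse-level assignment nu : Y -> T with T = [0,1]^{2x2} (illustrative values).
nu = {
    "c1": np.array([[0.9, 0.2], [0.4, 0.7]]),
    "c2": np.array([[0.1, 0.8], [0.6, 0.3]]),
}

mu = {x: nu[f[x]] for x in f}   # fine-level assignment mu = nu o f
print(np.array_equal(mu["a1"], mu["a2"]))  # True: same group, same tensor
```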
Definition 23 (Subfunctor determined by an uncertainty model).
Let M be a tensor-valued uncertainty model with degree-domain $D_M \subseteq \mathbb{T}$. Define a functor
$$\mathcal{F}_M : \mathbf{Set}^{\mathrm{op}} \to \mathbf{Set}$$
by restricting the codomain:
$$\mathcal{F}_M(X) = \{\, \mu : X \to D_M \,\}, \qquad \mathcal{F}_M(f)(\nu) = \nu \circ f.$$
Proposition 2. $\mathcal{F}_M$ is a well-defined functor and a subfunctor of $\mathcal{F}_{\mathbb{T}}$ (i.e., $\mathcal{F}_M(X) \subseteq \mathcal{F}_{\mathbb{T}}(X)$ for all X, and the action on morphisms agrees).
Proof. Functoriality follows by the same identity/composition calculation as Proposition 1. Moreover, since $D_M \subseteq \mathbb{T}$, any map $\mu : X \to D_M$ is also a map $\mu : X \to \mathbb{T}$, hence $\mathcal{F}_M(X) \subseteq \mathcal{F}_{\mathbb{T}}(X)$. The morphism action is identical (precomposition), so $\mathcal{F}_M$ is a subfunctor of $\mathcal{F}_{\mathbb{T}}$. □
Theorem 1 (Tensor-valued Functorial Sets subsume Tensor-valued Uncertain Sets). Let M be a tensor-valued uncertainty model with $D_M \subseteq \mathbb{T}$. For every set X, there is a canonical bijection between:
tensor-valued Uncertain Sets of type M on X (i.e., maps $\mu : X \to D_M$), and
$\mathcal{F}_M$-sets over X (i.e., elements of $\mathcal{F}_M(X)$).
Moreover, this identification is functorial: for any function $f : X \to Y$, the transport of tensor-valued uncertain sets along f is exactly given by the functor action $\mathcal{F}_M(f)$.
Proof. Fix X. By definition, a tensor-valued Uncertain Set of type M on X is precisely a map $\mu : X \to D_M$. On the other hand,
$$\mathcal{F}_M(X) = \{\, \mu : X \to D_M \,\}.$$
Hence the correspondence “take the same function” gives a bijection between the two collections.
For functoriality, let $f : X \to Y$ and let $\nu : Y \to D_M$ represent a tensor-valued uncertain set on Y. The induced assignment on X obtained by pullback along f is $\nu \circ f$. But by Definition 23, this is exactly $\mathcal{F}_M(f)(\nu)$. Therefore the identification respects the morphism action, and the tensor-valued functorial set framework indeed generalizes tensor-valued uncertain sets. □
4. Conclusions
We introduced vector-valued, matrix-valued, and tensor-valued Uncertain Sets (including the corresponding fuzzy, neutrosophic, and related special cases) and investigated their fundamental properties. We expect that future work will further advance extensions based on hypergraphs [29,30] and SuperHyperGraphs [31,32].
Funding
This study was conducted without any financial support from external organizations or grants.
Data Availability Statement
Since this research is purely theoretical and mathematical, no empirical data or computational analysis was utilized. Researchers are encouraged to expand upon these findings with data-oriented or experimental approaches in future studies.
Acknowledgments
We would like to express our sincere gratitude to everyone who provided valuable insights, support, and encouragement throughout this research. We also extend our thanks to the readers for their interest and to the authors of the referenced works, whose scholarly contributions have greatly influenced this study. Lastly, we are deeply grateful to the publishers and reviewers who facilitated the dissemination of this work.
Conflicts of Interest
The authors declare that they have no conflicts of interest related to the content or publication of this paper.
Use of Artificial Intelligence
The authors used generative AI and AI-assisted tools for tasks such as English grammar checking and did not employ them in any way that violates ethical standards.
Disclaimer
This work presents theoretical ideas and frameworks that have not yet been empirically validated. Readers are encouraged to explore practical applications and further refine these concepts. Although care has been taken to ensure accuracy and appropriate citations, any errors or oversights are unintentional. The perspectives and interpretations expressed herein are solely those of the authors and do not necessarily reflect the viewpoints of their affiliated institutions.
Appendix A. MetaTensor and Iterated MetaTensor
MetaStructures have been investigated in several papers. In this Appendix, we examine MetaTensors and Iterated MetaTensors [33,34,35]. A MetaTensor is a tensor whose entries are tensors; equivalently, it can be viewed as a higher-order tensor via canonical flattening of indices. An Iterated MetaTensor applies the MetaTensor construction recursively, producing a tensor of tensors of … of scalars, with a depth parameter specifying the number of iterations.
Throughout this appendix, let $\mathbb{K}$ be a fixed base set (typically a field such as $\mathbb{R}$), and write $[m]$ for $\{1, 2, \dots, m\}$. For a shape vector $\mathbf{m} = (m_1, \dots, m_n)$, set
$$I(\mathbf{m}) = [m_1] \times \cdots \times [m_n].$$
Definition A1 (Tensor as a function).
Let $n \ge 1$ and $\mathbf{m} = (m_1, \dots, m_n)$. A $\mathbb{K}$-valued tensor of shape $\mathbf{m}$ is a function
$$\mathcal{T} : I(\mathbf{m}) \to \mathbb{K}.$$
We denote the set of all such tensors by
$$\mathbb{K}^{\mathbf{m}} = \mathbb{K}^{m_1 \times \cdots \times m_n}.$$
(When $n = 1$ this is a vector space of length $m_1$, and when $n = 2$ it is a matrix space.)
Definition A2 (MetaTensor).
Fix two shapes
$$\mathbf{p} = (p_1, \dots, p_r) \ \text{(outer shape)}, \qquad \mathbf{m} = (m_1, \dots, m_n) \ \text{(inner shape)}.$$
A $\mathbb{K}$-MetaTensor over $\mathbf{p}$ is a tensor whose entries are themselves $\mathbb{K}$-valued tensors of shape $\mathbf{m}$, namely an element
$$\mathcal{M} \in \bigl(\mathbb{K}^{\mathbf{m}}\bigr)^{\mathbf{p}}.$$
Equivalently, $\mathcal{M}$ is a family $(\mathcal{M}_{\mathbf{i}})_{\mathbf{i} \in I(\mathbf{p})}$ such that each entry $\mathcal{M}_{\mathbf{i}}$ is a $\mathbb{K}$-valued tensor of shape $\mathbf{m}$. In coordinates, one may write
$$\mathcal{M}_{\mathbf{i}}[\mathbf{j}] \in \mathbb{K}, \qquad \mathbf{i} \in I(\mathbf{p}),\ \mathbf{j} \in I(\mathbf{m}),$$
so a MetaTensor is a nested array indexed first by $\mathbf{i}$ and then by $\mathbf{j}$.
Remark A1 (Canonical flattening).
When the inner shape $\mathbf{m}$ is fixed (as in Definition A2), there is a canonical bijection (“flattening”)
$$\bigl(\mathbb{K}^{\mathbf{m}}\bigr)^{\mathbf{p}} \;\cong\; \mathbb{K}^{\mathbf{p} \frown \mathbf{m}},$$
where $\mathbf{p} \frown \mathbf{m}$ denotes concatenation of the shape vectors, defined by
$$\mathcal{M}_{\mathbf{i}}[\mathbf{j}] \;\mapsto\; \widehat{\mathcal{M}}[\mathbf{i}, \mathbf{j}].$$
Thus, a MetaTensor can be viewed either as a “tensor of tensors” or as an ordinary higher-order tensor whose modes are the concatenation of the outer and inner modes.
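As a small computational illustration (using numpy and shapes of our own choosing), a MetaTensor with outer shape p and inner shape m can be stored as an ordinary array whose modes concatenate p and m, and the canonical flattening is simply the identification of nested indexing with concatenated-mode indexing:

```python
import numpy as np

# MetaTensor with outer shape p = (2, 3) whose entries are inner tensors of
# shape m = (4, 5): stored as a single array of shape p + m.
p, m = (2, 3), (4, 5)
rng = np.random.default_rng(0)
meta = rng.uniform(size=p + m)          # nested indexing: meta[i1, i2][j1, j2]

inner = meta[1, 2]                      # the inner tensor at outer index (1, 2)
assert inner.shape == m

flat = meta.reshape(p + m)              # canonical flattening (concatenated modes)
assert flat[1, 2, 0, 3] == meta[1, 2][0, 3]
print(flat.shape)                       # (2, 3, 4, 5)
```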
Remark A2 (MetaVector/MetaMatrix as special cases of MetaTensor). Fix a base field (or commutative semiring) $\mathbb{K}$ and write $\mathbb{K}^{\mathbf{m}}$ for the set of $\mathbf{m}$-shaped tensors with entries in $\mathbb{K}$.
MetaVector as a MetaTensor. A MetaVector is a 1st-order MetaTensor: it is a vector whose entries are tensors, i.e., an element of $(\mathbb{K}^{\mathbf{m}})^{p_1}$ for some $p_1 \ge 1$. Equivalently, it is a MetaTensor of outer shape $(p_1)$ with entry-domain $\mathbb{K}^{\mathbf{m}}$. Via canonical flattening, a MetaVector may be viewed as an ordinary tensor in $\mathbb{K}^{p_1 \times m_1 \times \cdots \times m_n}$.
MetaMatrix as a MetaTensor. A MetaMatrix is a 2nd-order MetaTensor: it is a matrix whose entries are tensors, i.e., an element of $(\mathbb{K}^{\mathbf{m}})^{p_1 \times p_2}$ for some $p_1, p_2 \ge 1$. Equivalently, it is a MetaTensor of outer shape $(p_1, p_2)$ with entry-domain $\mathbb{K}^{\mathbf{m}}$. Via canonical flattening, a MetaMatrix may be viewed as an ordinary tensor in $\mathbb{K}^{p_1 \times p_2 \times m_1 \times \cdots \times m_n}$.
Definition A3 (Iterated MetaTensor of depth t).
Fix an integer $t \ge 1$, a base set $\mathbb{K}$, and a sequence of shapes
$$\mathbf{p}^{(1)}, \mathbf{p}^{(2)}, \dots, \mathbf{p}^{(t)}.$$
Define recursively the level-s tensor universe $\mathcal{U}_s$ by
$$\mathcal{U}_0 = \mathbb{K}, \qquad \mathcal{U}_s = \bigl(\mathcal{U}_{s-1}\bigr)^{\mathbf{p}^{(s)}} \quad (1 \le s \le t).$$
An Iterated MetaTensor of depth t (with level shapes $\mathbf{p}^{(1)}, \dots, \mathbf{p}^{(t)}$) is an element
$$\mathcal{M} \in \mathcal{U}_t.$$
Equivalently, $\mathcal{M}$ is a tensor of shape $\mathbf{p}^{(t)}$ whose entries are depth-$(t-1)$ Iterated MetaTensors, and so on down to scalars in $\mathbb{K}$ at depth 0.
Remark A3 (Flattening an Iterated MetaTensor).
By repeated application of Remark A1, an Iterated MetaTensor of depth t with fixed level shapes admits a canonical flattening into an ordinary $\mathbb{K}$-valued tensor whose shape is the concatenation of all level shapes:
$$\mathcal{U}_t \;\cong\; \mathbb{K}^{\mathbf{p}^{(t)} \frown \mathbf{p}^{(t-1)} \frown \cdots \frown \mathbf{p}^{(1)}},$$
with coordinate identification obtained by concatenating the multi-indices across levels.
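A minimal numpy sketch of the depth-2 case (shapes chosen purely for illustration):

```python
import numpy as np

# Depth-2 Iterated MetaTensor: an outer tensor of shape p2 whose entries are
# tensors of shape p1 whose entries are scalars. Repeated canonical flattening
# identifies it with an ordinary tensor of shape p2 + p1.
p2, p1 = (2, 2), (3,)
rng = np.random.default_rng(1)
iterated = rng.uniform(size=p2 + p1)    # nested view: iterated[i1, i2][j]

flat = iterated.reshape(p2 + p1)        # flattened view with concatenated modes
assert flat.shape == (2, 2, 3)
assert flat[0, 1, 2] == iterated[0, 1][2]
print(flat.shape)
```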
Remark A4 (Iterated MetaVector/Iterated MetaMatrix as special cases of Iterated MetaTensor). Fix a base entry set $\mathbb{K}$ and a depth parameter $t \ge 1$. An Iterated MetaTensor of depth t is obtained by repeatedly taking “tensors whose entries are tensors” for t iterations (starting from scalars in $\mathbb{K}$).
Iterated MetaVector. An Iterated MetaVector of depth t is the special case in which, at each iteration level, the outer tensor shape is 1st-order (a vector shape). In other words, it is an Iterated MetaTensor where every outer layer is a vector of the previous-layer objects.
Iterated MetaMatrix. An Iterated MetaMatrix of depth t is the special case in which, at each iteration level, the outer tensor shape is 2nd-order (a matrix shape). Equivalently, it is an Iterated MetaTensor where every outer layer is a matrix of the previous-layer objects.
In both cases, repeated canonical flattening identifies these objects with ordinary (single-layer) tensors whose order and mode sizes are obtained by concatenating all outer shapes with the base-level inner shapes.
Appendix B. HyperTensor and SuperHyperTensor
Research on hierarchical concepts such as HyperStructures and SuperHyperStructures has also been actively pursued in recent years [36,37,38,39]. A HyperTensor is a tensor-indexed map assigning to each multi-index a subset of a base set, allowing multiple outcomes. A SuperHyperTensor assigns to each tensor entry an element of an iterated powerset, capturing nested sets-of-sets with depth-controlled uncertainty.
Throughout, let S be a nonempty base set. For an integer $m \ge 1$, write
$$[m] = \{1, 2, \dots, m\}.$$
For an order $n \ge 1$ and a shape vector $\mathbf{m} = (m_1, \dots, m_n)$ with $m_\ell \ge 1$, define the multi-index set
$$I(\mathbf{m}) = [m_1] \times \cdots \times [m_n].$$
Definition A4 (HyperTensor).
Let S be a nonempty set and fix an order $n \ge 1$ and shape $\mathbf{m} = (m_1, \dots, m_n)$. A (set-valued) HyperTensor over S of shape $\mathbf{m}$ is a mapping
$$\mathcal{H} : I(\mathbf{m}) \to \mathcal{P}(S),$$
where $\mathcal{P}(S)$ denotes the powerset of S.
Equivalently, $\mathcal{H}$ is an n-way array
$$\mathcal{H} = \bigl(\mathcal{H}_{i_1 \cdots i_n}\bigr)_{(i_1, \dots, i_n) \in I(\mathbf{m})}$$
whose entries are subsets of S rather than single elements. If one requires nonempty outcomes, one may instead impose $\mathcal{H}_{i_1 \cdots i_n} \ne \emptyset$ for all indices.
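A minimal Python sketch (the base set, shape, and entry rule are our own illustrative choices) represents a HyperTensor as a map from multi-indices to subsets of S:

```python
from itertools import product

# A HyperTensor of shape (2, 3) over S = {"a", "b", "c"}, stored as a map
# from multi-indices to subsets of S (frozensets).
S = {"a", "b", "c"}
shape = (2, 3)

H = {
    idx: frozenset({"a"}) if sum(idx) % 2 == 0 else frozenset({"b", "c"})
    for idx in product(*(range(m) for m in shape))
}

# Every entry is a subset of S; the nonempty variant additionally forbids the empty set.
assert all(entry <= S for entry in H.values())
print(H[(1, 2)])  # the set-valued entry at multi-index (1, 2)
```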
Definition A5 (Iterated powersets).
For a nonempty set S, define iterated powersets recursively by
$$\mathcal{P}^{0}(S) = S, \qquad \mathcal{P}^{s}(S) = \mathcal{P}\bigl(\mathcal{P}^{s-1}(S)\bigr) \quad (s \ge 1).$$
Thus $\mathcal{P}^{1}(S) = \mathcal{P}(S)$, $\mathcal{P}^{2}(S) = \mathcal{P}(\mathcal{P}(S))$, and so on. Optionally, the nonempty variant is
$$\mathcal{P}^{0}_{*}(S) = S, \qquad \mathcal{P}^{s}_{*}(S) = \mathcal{P}\bigl(\mathcal{P}^{s-1}_{*}(S)\bigr) \setminus \{\emptyset\} \quad (s \ge 1).$$
Definition A6 (SuperHyperTensor).
Let S be a nonempty set. Fix a depth $t \ge 1$, an order $n \ge 1$, and a shape $\mathbf{m} = (m_1, \dots, m_n)$. A SuperHyperTensor over S of depth t and shape $\mathbf{m}$ is a mapping
$$\mathcal{S} : I(\mathbf{m}) \to \mathcal{P}^{t}(S),$$
i.e., an n-way array whose entries are t-fold iterated subsets of S (sets of … of sets of elements of S, with t levels of powerset).
If one wishes to exclude empty outcomes at every level, one may require $\mathcal{S}_{i_1 \cdots i_n} \in \mathcal{P}^{t}_{*}(S)$ for all indices.
Sanity checks.
When $t = 1$, a depth-1 SuperHyperTensor is exactly a HyperTensor in the sense of Definition A4.
When one informally allows $t = 0$, the codomain becomes $\mathcal{P}^{0}(S) = S$, recovering an ordinary S-valued tensor $\mathcal{S} : I(\mathbf{m}) \to S$.
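Analogously, a minimal sketch (illustrative values only) of a depth-2 SuperHyperTensor stores, at each multi-index, a set of subsets of S:

```python
from itertools import product

# A depth-2 SuperHyperTensor of shape (2, 2) over S = {1, 2, 3};
# each entry is an element of P(P(S)), i.e., a set of subsets of S.
S = {1, 2, 3}
shape = (2, 2)

SH = {
    idx: frozenset({frozenset({1}), frozenset({2, 3})})
    for idx in product(*(range(m) for m in shape))
}

# Each entry is a set of sets of elements of S (two powerset levels).
assert all(all(inner <= S for inner in entry) for entry in SH.values())
print(SH[(0, 1)])
```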
References
- Fujita, T.; Smarandache, F. A Unified Framework for U-Structures and Functorial Structure: Managing Super, Hyper, SuperHyper, Tree, and Forest Uncertain Over/Under/Off Models. Neutrosophic Sets and Systems 2025, 91, 337–380. [Google Scholar]
- Zadeh, L.A. Fuzzy sets. Information and control 1965, 8, 338–353. [Google Scholar] [CrossRef]
- Atanassov, K.T. Circular intuitionistic fuzzy sets. Journal of Intelligent & Fuzzy Systems 2020, 39, 5981–5986. [Google Scholar]
- Akram, M. Bipolar fuzzy graphs. Information sciences 2011, 181, 5548–5564. [Google Scholar] [CrossRef]
- Torra, V. Hesitant fuzzy sets. International journal of intelligent systems 2010, 25, 529–539. [Google Scholar] [CrossRef]
- Torra, V.; Narukawa, Y. On hesitant fuzzy sets and decision. In Proceedings of the 2009 IEEE international conference on fuzzy systems. IEEE, 2009; pp. 1378–1382. [Google Scholar]
- Zadeh, L.A. Fuzzy logic, neural networks, and soft computing. In Fuzzy sets, fuzzy logic, and fuzzy systems: selected papers by Lotfi A Zadeh; World Scientific, 1996; pp. 775–782. [Google Scholar]
- Smarandache, F. A unifying field in Logics: Neutrosophic Logic. In Philosophy; American Research Press, 1999; pp. 1–141. [Google Scholar]
- Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single valued neutrosophic sets; Infinite Study, 2010. [Google Scholar]
- Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Single valued neutrosophic graphs. Journal of New theory 2016, 86–101. [Google Scholar]
- Wang, H.; Smarandache, F.; Sunderraman, R.; Zhang, Y.Q. Interval neutrosophic sets and logic: Theory and applications in computing; Infinite Study, 2005; Vol. 5. [Google Scholar]
- Khan, Q.; Liu, P.; Mahmood, T. Some generalized dice measures for double-valued neutrosophic sets and their applications. Mathematics 2018, 6, 121. [Google Scholar] [CrossRef]
- Wei, L. An integrated decision-making framework for blended teaching quality evaluation in college English courses based on the double-valued neutrosophic sets. J. Intell. Fuzzy Syst. 2023, 45, 3259–3266. [Google Scholar] [CrossRef]
- Abdel-Basset, M.; Gamal, A.; Son, L.H.; Smarandache, F. A bipolar neutrosophic multi criteria decision making framework for professional selection. Applied Sciences 2020, 10, 1202. [Google Scholar] [CrossRef]
- Ali, M.; Son, L.H.; Deli, I.; Tien, N.D. Bipolar neutrosophic soft sets and applications in decision making. J. Intell. Fuzzy Syst. 2017, 33, 4077–4087. [Google Scholar] [CrossRef]
- Smarandache, F. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-revisited; Infinite Study, 2018. [Google Scholar]
- Azeem, M.; Rashid, H.; Jamil, M.K.; Gütmen, S.; Tirkolaee, E.B. Plithogenic fuzzy graph: A study of fundamental properties and potential applications. Journal of Dynamics and Games 2024, 0–0. [Google Scholar] [CrossRef]
- Smarandache, F. Neutrosophy: neutrosophic probability, set, and logic: analytic synthesis & synthetic analysis; 1998. [Google Scholar]
- Fujita, T.; Smarandache, F. A Dynamic Survey of Fuzzy, Intuitionistic Fuzzy, Neutrosophic, Plithogenic, and Extensional Sets; Neutrosophic Science International Association (NSIA), 2025. [Google Scholar]
- Pourghasemi, H.R.; Gayen, A.; Lasaponara, R.; Tiefenbacher, J.P. Application of learning vector quantization and different machine learning techniques to assessing forest fire influence factors and spatial modelling. Environmental research 2020, 184, 109321. [Google Scholar] [CrossRef] [PubMed]
- Pisner, D.A.; Schnyer, D.M. Support vector machine. In Machine learning; Elsevier, 2020; pp. 101–121. [Google Scholar]
- Zhang, X.D. A matrix algebra approach to artificial intelligence; Springer, 2020. [Google Scholar]
- Mökander, J.; Sheth, M.; Watson, D.S.; Floridi, L. The switch, the ladder, and the matrix: models for classifying AI systems. Minds and Machines 2023, 33, 221–248. [Google Scholar] [CrossRef]
- Furstenberg, H.; Kesten, H. Products of random matrices. The Annals of Mathematical Statistics 1960, 31, 457–469. [Google Scholar] [CrossRef]
- Sokolnikoff, I.S. Tensor analysis; Wiley New York, 1964. [Google Scholar]
- McConnell, A.J. Applications of tensor analysis; Courier Corporation, 2014. [Google Scholar]
- Simmonds, J.G. A brief on tensor analysis; Springer Science & Business Media, 1997. [Google Scholar]
- Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM review 2009, 51, 455–500. [Google Scholar] [CrossRef]
- Bretto, A. Hypergraph theory. An introduction. In Mathematical Engineering; Springer: Cham, 2013; Volume 1. [Google Scholar]
- Feng, Y.; You, H.; Zhang, Z.; Ji, R.; Gao, Y. Hypergraph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence 2019, Vol. 33, pp. 3558–3565. [Google Scholar] [CrossRef]
- Smarandache, F. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra; Infinite Study, 2020. [Google Scholar]
- Smarandache, F. Introduction to the n-SuperHyperGraph: the most general form of graph today; Infinite Study, 2022. [Google Scholar]
- Fujita, T. MetaStructure, Meta-HyperStructure, and Meta-SuperHyperStructure. Journal of Computers and Applications 2025, 1, 1–22. [Google Scholar] [CrossRef]
- Fujita, T. MetaHyperGraphs, MetaSuperHyperGraphs, and Iterated MetaGraphs: Modeling Graphs of Graphs, Hypergraphs of Hypergraphs, Superhypergraphs of Superhypergraphs, and Beyond. Current Research in Interdisciplinary Studies 2025, 4, 1–23. [Google Scholar]
- Fujita, T. Some Meta-Graph Structures: Mixed Graph, DiHyperGraph, Knowledge Graph, Intersection Graph, and Chemical Graph. 2025. [Google Scholar]
- Al Tahan, M.; Davvaz, B. Weak chemical hyperstructures associated to electrochemical cells. Iranian Journal of Mathematical Chemistry 2018, 9, 65–75. [Google Scholar]
- Vougiouklis, T. Hv-groups defined on the same set. Discrete Mathematics 1996, 155, 259–265. [Google Scholar] [CrossRef]
- Smarandache, F. Foundation of SuperHyperStructure & Neutrosophic SuperHyperStructure. Neutrosophic Sets and Systems 2024, 63, 21. [Google Scholar]
- Smarandache, F. Foundation of revolutionary topologies: An overview, examples, trend analysis, research issues, challenges, and future directions. 2024. [Google Scholar] [CrossRef]
Table 1.
Overview of scalar-, vector-, matrix-, and tensor-valued Uncertain Sets.
| Concept | Degree-domain (typical form) | Membership map and interpretation |
| --- | --- | --- |
| Uncertain Set (scalar-valued) | $D_M \subseteq [0,1]^k$ (often $[0,1]$) | A U-Set of type M on a universe X is a map $\mu : X \to D_M$. Each element $x \in X$ receives a degree $\mu(x) \in D_M$; choosing M recovers fuzzy ($k = 1$), neutrosophic ($k = 3$), plithogenic, etc. |
| Vector-valued Uncertain Set | $D_M \subseteq \bigl([0,1]^d\bigr)^k$ | A vector-valued U-Set assigns $\mu(x) \in D_M$, where $\mu(x)$ is a k-tuple of vectors in $[0,1]^d$. This represents multi-index degrees (e.g., criteria, agents, time points) at the level of vectors. |
| Matrix-valued Uncertain Set | $D_M \subseteq \bigl([0,1]^{p \times q}\bigr)^k$ | A matrix-valued U-Set assigns $\mu(x) \in D_M$, where $\mu(x)$ is a k-tuple of matrices in $[0,1]^{p \times q}$. This captures two-dimensional structured degrees (e.g., criterion × time, feature × context). |
| Tensor-valued Uncertain Set | $D_M \subseteq \bigl([0,1]^{m_1 \times \cdots \times m_n}\bigr)^k$ | A tensor-valued U-Set assigns $\mu(x) \in D_M$, where $\mu(x)$ is a k-tuple of nth-order tensors with entries in $[0,1]$. This accommodates higher-order structured degrees (e.g., agent × criterion × time × scenario). |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).