Preprint
Article

This version is not peer-reviewed.

Graph Hyperembedding and Graph SuperHyperembedding: Extensions via HyperVector and SuperHyperVector Spaces

Submitted: 24 December 2025
Posted: 25 December 2025

Abstract
Graph embedding assigns vectors to vertices (or to entire graphs) in a low-dimensional space so that adjacency information and broader structural relationships are reflected in the resulting representations. Hypervector spaces extend the classical vector-space setting by replacing ordinary scalar multiplication with a set-valued scalar action, allowing each scalar–vector pair to produce a nonempty set of vectors while maintaining familiar distributive and associative behavior in an appropriate sense. Building on this idea, (m, n)-SuperhyperVector spaces further generalize the framework by admitting an m-ary scalar superhyperoperation whose outputs belong to an n-level iterated nonempty powerset associated with the underlying abelian group. In this paper, we introduce Graph Hyperembedding and Graph SuperHyperembedding by using HyperVector and SuperHyperVector spaces, and we investigate fundamental properties of these new embedding models.

1. Preliminaries

This section summarizes the terminology and notation used in the sequel. Unless explicitly stated otherwise, every set and every structure discussed in this paper is assumed to be finite.

1.1. Classical Structures, Hyperstructures, and n-Superhyperstructures

We briefly recall three nested levels of abstraction: classical structures, hyperstructures, and n-superhyperstructures. Informally, hyperstructures are obtained by applying the powerset construction once to a base set [1,2,3,4,5,6,7], whereas n-superhyperstructures use the nth iterated powerset [8,9]. Related concepts within the family of superhyperstructures include SuperHyperGraphs [10,11,12], chemical hyperstructure [13,14], and SuperHyperAlgebra [15].
Definition 1.1
(Base Set). A base set S is the ground collection on which all subsequent (higher-order) constructions are formed:
S = { x ∣ x is an element of the domain }.
In particular, every object in P(S) and in P^n(S) is ultimately built from elements of S.
Definition 1.2
(Powerset). [16] For a set S, the powerset P(S) is the collection of all subsets of S, including ∅:
P(S) = { A ∣ A ⊆ S }.
Definition 1.3
(n-th Powerset). [17] Let H be a nonempty set. The n-th powerset of H, denoted P^n(H), is defined recursively by
P^1(H) = P(H),  P^(k+1)(H) = P(P^k(H))  (k ≥ 1).
Likewise, the n-th nonempty powerset P^n_*(H) is given by
P^1_*(H) = P*(H),  P^(k+1)_*(H) = P*(P^k_*(H)),
where P*(H) = P(H) ∖ {∅} denotes the family of all nonempty subsets of H.
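To illustrate how rapidly these iterated powersets grow, the recursion above can be sketched in Python. The function names `powerset` and `iterated_powerset` are ours, introduced only for this illustration:

```python
from itertools import chain, combinations

def powerset(s):
    """P(s): all subsets of s, represented as frozensets, including the empty set."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

def iterated_powerset(H, n, nonempty=False):
    """P^n(H) (or P^n_*(H) when nonempty=True), built by iterating Definition 1.3."""
    level = set(H)
    for _ in range(n):
        level = powerset(level)
        if nonempty:
            level.discard(frozenset())  # drop the empty set at every stage
    return level
```

For H = {0, 1} this gives |P^1(H)| = 4 and |P^2(H)| = 16, while the nonempty variants have |P^1_*(H)| = 3 and |P^2_*(H)| = 2³ − 1 = 7, showing the doubly exponential growth that motivates the finiteness assumption.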
To set a precise basis for hyperstructures and superhyperstructures, we next record the core definitions.
Definition 1.4
(Classical Structure). (cf. [17,18]) A classical structure consists of a nonempty set H together with one or more classical operations satisfying specified axioms.
A classical operation is a map
#₀ : H^m → H,
where m ≥ 1 and H^m is the m-fold Cartesian product of H. Standard examples are addition and multiplication in groups, rings, and fields.
Definition 1.5
(Hyperoperation). (cf. [19,20]) Let S be a set. A hyperoperation on S is a map
∘ : S × S → P(S),
so that the combination of two elements produces a subset of S rather than a single element.
Definition 1.6
(Hyperstructure). (cf. [4,17]) A hyperstructure is a structure obtained by working on the powerset of a base set. Concretely, it is a pair
H = (P(S), ∘),
where S is a base set and ∘ is an operation acting on elements of P(S). In this way, operations may be applied to collections of elements, not only to individual elements.
Definition 1.7
(SuperHyperOperations). (cf. [17]) Let H be a nonempty set. Define the iterated powersets by
P^0(H) = H,  P^(k+1)(H) = P(P^k(H))  (k ≥ 0).
A SuperHyperOperation of order (m, n) is an m-ary map
∘^(m,n) : H^m → P^n_*(H),
where P^n_*(H) denotes the n-th iterated powerset of H, with the convention that it may exclude or include ∅, depending on the setting:
  • If ∅ is excluded from the codomain, then ∘^(m,n) is called a classical-type (m, n)-SuperHyperOperation.
  • If ∅ is allowed in the codomain, then ∘^(m,n) is called a Neutrosophic (m, n)-SuperHyperOperation.
Thus, SuperHyperOperations extend hyperoperations by permitting higher-order (iterated powerset) outputs.
An (m, n)-superhyperstructure extends the hyperstructure viewpoint by introducing an m-ary operation on the hierarchical set P^n(S), the nth iterated powerset of a finite base set.
Definition 1.8
((m, n)-Superhyperstructure). [17,21] Let S be a nonempty set and let P^n(S) be its nth iterated powerset (Definition 1.3). Fix an integer m ≥ 1. An (m, n)-superhyperstructure is a pair
SH_(m,n) = (P^n(S), ∘^(m,n)),
where
∘^(m,n) : (P^n(S))^m → P^n(S)
is an m-ary SuperHyperOperation of order (m, n). That is, for any X = (X₁, …, X_m) ∈ (P^n(S))^m, the value ∘^(m,n)(X) is an element of P^n(S) satisfying whatever axioms (e.g., associativity or distributivity) are required in the intended application.

1.2. HyperVector Space

A vector space is a set V equipped with an addition + and a scalar multiplication by a field K, and it satisfies the usual linearity axioms [22,23]. Classical generalizations include fuzzy vector spaces [24,25] and neutrosophic vector spaces [26,27]. Vector spaces are known to have numerous real-world applications and play a central role in fields such as machine learning.
A hypervector space extends the vector-space paradigm by replacing the single-valued scalar multiplication with a set-valued scalar action (an external hyperoperation)
∘ : K × V → P*(V),
so that each pair (a, x) ∈ K × V is assigned a nonempty subset a ∘ x ⊆ V, while distributive and associative behaviors are retained (cf. [28,29]).
Definition 1.9
(Hypervector Space). (cf. [28,29]) Let K be a field and let (V, +) be an abelian group. Write P*(V) for the family of all nonempty subsets of V. A hypervector space over K is a quadruple (V, +, ∘, K) in which
∘ : K × V → P*(V)
is an external hyperoperation satisfying, for all a, b ∈ K and x, y ∈ V, the axioms:
(H1)
a ∘ (x + y) ⊆ (a ∘ x) + (a ∘ y), (right distributivity)
(H2)
(a + b) ∘ x ⊆ (a ∘ x) + (b ∘ x), (left distributivity)
(H3)
a ∘ (b ∘ x) = (ab) ∘ x, (associativity)
(H4)
a ∘ (−x) = (−a) ∘ x = −(a ∘ x),
(H5)
x ∈ 1 ∘ x.
Here, for subsets A, B ⊆ V,
A + B = { u + v ∣ u ∈ A, v ∈ B },
and the expression a ∘ (b ∘ x) is interpreted as
a ∘ (b ∘ x) = ∪_{y ∈ b ∘ x} (a ∘ y).
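To make the axioms concrete, the following sketch checks (H1)–(H5) for the simplest example of a hypervector space: the singleton hyperoperation a ∘ x = {a·x} on the rationals, viewed as a one-dimensional space over K = ℚ. Exact `Fraction` arithmetic avoids floating-point equality issues; the helper names `circ` and `set_add` are ours:

```python
from fractions import Fraction as F

def circ(a, x):
    """Singleton hyperoperation a ∘ x = {a·x}."""
    return {a * x}

def set_add(A, B):
    """Minkowski sum A + B = {u + v | u in A, v in B}."""
    return {u + v for u in A for v in B}

samples = [F(0), F(1), F(-2), F(3, 2)]
for a in samples:
    for b in samples:
        for x in samples:
            assert x in circ(F(1), x)                                      # (H5)
            for y in samples:
                assert circ(a, x + y) <= set_add(circ(a, x), circ(a, y))   # (H1)
                assert circ(a + b, x) <= set_add(circ(a, x), circ(b, x))   # (H2)
                # (H3): a ∘ (b ∘ x) is the union of a ∘ y over y in b ∘ x
                lhs = set().union(*(circ(a, yp) for yp in circ(b, x)))
                assert lhs == circ(a * b, x)
                # (H4): a ∘ (−x) = (−a) ∘ x = −(a ∘ x)
                assert circ(a, -x) == circ(-a, x) == {-t for t in circ(a, x)}
```

Here (H1) and (H2) even hold with equality, since {a·(x + y)} = {a·x} + {a·y} for singletons; the axioms only demand inclusion.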

1.3. ( m , n ) -SuperhyperVector Space

An ( m , n ) -SuperhyperVector Space extends the hypervector-space framework by allowing an m-ary scalar SuperHyperOperation whose values lie in the n-th iterated (nonempty) powerset of the underlying abelian group [30,31].
Definition 1.10
((m, n)-SuperhyperVector Space). [30,31] Let K be a field, let (V, +) be an abelian group, and fix integers m, n ≥ 1. Let P^n_*(V) denote the family of all nonempty elements of the n-th iterated powerset P^n(V). An (m, n)-SuperhyperVector Space over K is a quadruple
(V, +, ∘^(m,n), K),
where
∘^(m,n) : K^m × V → P^n_*(V)
is an m-ary SuperHyperOperation such that, for all a = (a₁, …, a_m), b = (b₁, …, b_m) ∈ K^m and all x, y ∈ V,
(SV1)
∘^(m,n)(a, x + y) ⊆ ∘^(m,n)(a, x) + ∘^(m,n)(a, y),
(SV2)
∘^(m,n)(a + b, x) ⊆ ∘^(m,n)(a, x) + ∘^(m,n)(b, x),
(SV3)
∘^(m,n)(a, ∘^(m,n)(b, x)) = ∘^(m,n)(a · b, x),
(SV4)
∘^(m,n)(−a, x) = −∘^(m,n)(a, x),
(SV5)
x ∈ ∘^(m,n)((1, …, 1), x).
Here,
a + b = (a₁ + b₁, …, a_m + b_m),  a · b = (a₁b₁, …, a_m b_m),
and for subsets A, B ⊆ V,
A + B = { u + v ∣ u ∈ A, v ∈ B }.
For reference, a comparison of Vector Space, HyperVector Space, and ( m , n ) -SuperHyperVector Space is provided in Table 1.

1.4. Graph Embedding

Graph embedding represents vertices (or entire graphs) by vectors in a relatively low-dimensional space, with the aim of retaining adjacency information and broader structural similarity. Such representations enable efficient downstream tasks such as prediction and retrieval [32,33,34,35].
Definition 1.11
(Graph embedding in a vector space). [34,36,37] Let G = (V, E, w) be a (possibly weighted) simple graph with vertex set V = { v₁, …, v_n }, edge set E ⊆ [V]² (unordered pairs of distinct vertices), and weight function w : E → ℝ_{>0}. Fix an integer d ≥ 1.
A d-dimensional graph embedding consists of a map
φ : V → ℝ^d, equivalently a matrix Z ∈ ℝ^{n×d} with Z_{i,:} = φ(v_i),
together with
s : V × V → ℝ  (a chosen graph proximity / relation),
and
sim : ℝ^d × ℝ^d → ℝ  (a chosen similarity in the embedding space),
such that, for the vertex pairs of interest (u, v), the similarity score sim(φ(u), φ(v)) provides an approximation to s(u, v) (or at least preserves its ranking).
A standard way to express this requirement is to select φ by solving an optimization problem of the form
min_{φ : V → ℝ^d}  Σ_{(u,v) ∈ Ω}  L( s(u, v), sim(φ(u), φ(v)) ) + λ R(φ),
where Ω ⊆ V × V is the set of training/evaluation pairs, L is a loss function, R is a regularizer, and λ ≥ 0.
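The optimization viewpoint can be illustrated with a deliberately tiny sketch: a one-dimensional embedding of the path v₁–v₂–v₃, fitted by naive random search. The loss pairs s(u, v) with the shifted similarity 1 + sim, so edges prefer distance ≈ 0 and the non-edge prefers distance ≈ 1; all names and the search procedure are illustrative choices of ours, not prescribed by the definition:

```python
import random

V = ["v1", "v2", "v3"]
E = {frozenset({"v1", "v2"}), frozenset({"v2", "v3"})}
pairs = [(u, v) for i, u in enumerate(V) for v in V[i + 1:]]

def s(u, v):
    """Graph proximity: 1 on edges, 0 otherwise."""
    return 1.0 if frozenset({u, v}) in E else 0.0

def loss(phi):
    """Squared loss L between proximity and the shifted similarity 1 + sim."""
    total = 0.0
    for u, v in pairs:
        sim = -(phi[u] - phi[v]) ** 2          # negative squared distance
        total += (s(u, v) - (1.0 + sim)) ** 2
    return total

random.seed(0)
best = {v: random.uniform(-2, 2) for v in V}
for _ in range(20000):                         # naive random-search "optimizer"
    cand = {v: best[v] + random.gauss(0, 0.1) for v in V}
    if loss(cand) < loss(best):
        best = cand
```

After the search, adjacent vertices end up closer than the non-adjacent pair, i.e. sim(φ(v₁), φ(v₂)) > sim(φ(v₁), φ(v₃)): exactly the ranking-preservation requirement of Definition 1.11, even though the two objectives (edges at distance 0, non-edge at distance 1) cannot be met simultaneously on a line.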

2. Results

This section introduces Graph Hyperembedding and Graph SuperHyperembedding and establishes their basic properties.

2.1. Graph Hyperembedding

Graph Hyperembedding maps vertices to nonempty sets of hypervectors in a hypervector space, preserving graph proximities via aggregated similarity.
Definition 2.1
(Aggregation operator). An aggregation operator is a map
Agg : P*(ℝ) → ℝ
satisfying the singleton normalization:
Agg({r}) = r  (r ∈ ℝ).
Typical examples include Agg(X) = max X, Agg(X) = min X, or Agg(X) = (1/|X|) Σ_{x ∈ X} x for finite X.
Definition 2.2
(Induced similarity between nonempty subsets). Let ( H , + , , K ) be a hypervector space and let
sim : H × H → ℝ
be a prescribed similarity function on (hyper)vectors. Let Agg be an aggregation operator (Definition 2.1). For A, B ∈ P*(H), define the induced similarity
SIM(A, B) = Agg({ sim(x, y) ∣ x ∈ A, y ∈ B }).
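Definition 2.2 translates directly into code. The sketch below (our own naming) implements SIM for any base similarity and any aggregation operator, here on ℝ² with the negative squared Euclidean distance:

```python
def induced_sim(A, B, sim, agg=max):
    """SIM(A, B) = Agg({ sim(x, y) | x in A, y in B }) for nonempty A, B."""
    return agg([sim(x, y) for x in A for y in B])

def sim(x, y):
    """Base similarity on R^2: negative squared Euclidean distance."""
    return -((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

mean = lambda xs: sum(xs) / len(xs)   # another admissible aggregation operator
```

On singleton sets every admissible Agg returns the underlying similarity, by the normalization Agg({r}) = r; this is the reduction recorded in Remark 2.4.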
Definition 2.3
(Graph hyperembedding in a hypervector space). Let G = (V, E, w) be a (possibly weighted) simple graph with V = { v₁, …, v_n } and E ⊆ [V]². Fix a hypervector space (H, +, ∘, K). Fix a graph proximity function s : V × V → ℝ, a similarity sim : H × H → ℝ, and an aggregation operator Agg.
A graph hyperembedding of G into (H, +, ∘, K) is a set-valued map
Φ : V → P*(H),
together with the induced similarity SIM from Definition 2.2, such that SIM(Φ(u), Φ(v)) approximates (or preserves the ordering of) s(u, v) for relevant pairs (u, v).
As in ordinary graph embedding, one common specification is to define Φ as a minimizer of
min_{Φ : V → P*(H)}  Σ_{(u,v) ∈ Ω}  L( s(u, v), SIM(Φ(u), Φ(v)) ) + λ R(Φ),
where Ω ⊆ V × V, L is a loss function, R is a regularizer, and λ ≥ 0.
Remark 2.4
(Deterministic (singleton) case). If Φ(v) is a singleton for every v ∈ V, i.e., Φ(v) = { h_v } with h_v ∈ H, then
SIM(Φ(u), Φ(v)) = Agg({ sim(h_u, h_v) }) = sim(h_u, h_v)
by Definition 2.1. Thus the hyperembedding reduces to an ordinary point embedding into H.
Example 2.5
(A concrete graph hyperembedding with set-valued vertex representations). Let G = (V, E) be the path on three vertices
V = { v₁, v₂, v₃ },  E = { {v₁, v₂}, {v₂, v₃} }.
(Target hypervector space.) Let K = ℝ and let H = ℝ² with the usual vector addition +. Define an external hyperoperation ∘ : K × H → P*(H) by
a ∘ x = { ax }  (a ∈ ℝ, x ∈ ℝ²).
Then (H, +, ∘, K) is a hypervector space.
(Proximity, similarity, and aggregation.) Define the graph proximity
s(u, v) = 1 if {u, v} ∈ E, and s(u, v) = 0 if {u, v} ∉ E,
and define a similarity on H by the negative squared Euclidean distance:
sim(x, y) = −‖x − y‖₂².
Choose the aggregation operator Agg : P*(ℝ) → ℝ as
Agg(X) = max X.
Hence the induced similarity for nonempty sets A, B ⊆ H is
SIM(A, B) = max{ sim(x, y) ∣ x ∈ A, y ∈ B }.
(Set-valued embedding.) Define Φ : V → P*(H) by
Φ(v₁) = { (0, 0), (0.1, 0) },  Φ(v₂) = { (1, 0), (1.1, 0) },  Φ(v₃) = { (2, 0), (2.1, 0) }.
This Φ is a graph hyperembedding because adjacent vertices obtain higher induced similarity than the non-adjacent pair, as shown by the explicit computations below.
(Verification by direct calculation.) For {v₁, v₂} ∈ E, compute all four squared distances:
‖(0, 0) − (1, 0)‖₂² = 1,  ‖(0, 0) − (1.1, 0)‖₂² = 1.21,
‖(0.1, 0) − (1, 0)‖₂² = 0.81,  ‖(0.1, 0) − (1.1, 0)‖₂² = 1.
Thus the corresponding similarities are
−1, −1.21, −0.81, −1,
and therefore
SIM(Φ(v₁), Φ(v₂)) = max{ −1, −1.21, −0.81, −1 } = −0.81.
Similarly, for {v₂, v₃} ∈ E we obtain
SIM(Φ(v₂), Φ(v₃)) = −0.81
(by the same distance pattern shifted by +1 in the first coordinate).
For the non-edge pair {v₁, v₃} ∉ E, compute:
‖(0, 0) − (2, 0)‖₂² = 4,  ‖(0, 0) − (2.1, 0)‖₂² = 4.41,
‖(0.1, 0) − (2, 0)‖₂² = 3.61,  ‖(0.1, 0) − (2.1, 0)‖₂² = 4,
so the similarities are
−4, −4.41, −3.61, −4,
and hence
SIM(Φ(v₁), Φ(v₃)) = max{ −4, −4.41, −3.61, −4 } = −3.61.
Therefore,
SIM(Φ(v₁), Φ(v₂)) = SIM(Φ(v₂), Φ(v₃)) = −0.81 > −3.61 = SIM(Φ(v₁), Φ(v₃)),
which matches the intended proximity ordering s(v₁, v₂) = s(v₂, v₃) = 1 > 0 = s(v₁, v₃).
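The arithmetic in this example is easy to re-check mechanically; the short script below (helper names ours) reproduces the three induced similarities:

```python
def sim(x, y):
    """Negative squared Euclidean distance on R^2."""
    return -((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

def SIM(A, B):
    """Induced similarity with Agg = max."""
    return max(sim(x, y) for x in A for y in B)

Phi = {
    "v1": {(0.0, 0.0), (0.1, 0.0)},
    "v2": {(1.0, 0.0), (1.1, 0.0)},
    "v3": {(2.0, 0.0), (2.1, 0.0)},
}
```

Evaluating SIM on the three vertex pairs yields −0.81, −0.81, and −3.61 up to floating-point rounding, matching the hand calculation above.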
Theorem 2.6
(Graph hyperembedding generalizes graph embedding). Let (W, +, ·, K) be a (classical) vector space over a field K. Define a hyperoperation ∘ : K × W → P*(W) by
a ∘ x = { a · x }  (a ∈ K, x ∈ W).
Then (W, +, ∘, K) is a hypervector space.
Moreover, let G = (V, E, w) be a graph, let φ : V → W be a (classical) graph embedding, and let s : V × V → ℝ and sim : W × W → ℝ be the proximity and similarity used for φ. Choose any aggregation operator Agg. Define the singleton lift
Φ : V → P*(W),  Φ(v) = { φ(v) }.
Then Φ is a graph hyperembedding of G into (W, +, ∘, K) in the sense of Definition 2.3, and it reproduces the same pairwise similarities:
SIM(Φ(u), Φ(v)) = sim(φ(u), φ(v))  (u, v ∈ V).
In particular, Graph Hyperembedding is a genuine generalization of Graph Embedding.
Proof. 
Step 1 (Hypervector space axioms). We verify the axioms (H1)–(H5) of Definition 1.9 for (W, +, ∘, K) with a ∘ x = { a · x }. Recall that for subsets A, B ⊆ W,
A + B = { u + v ∣ u ∈ A, v ∈ B }.
(H1) Right distributivity:
a ∘ (x + y) = { a · (x + y) } = { a · x + a · y } ⊆ { a · x } + { a · y } = (a ∘ x) + (a ∘ y).
(H2) Left distributivity:
(a + b) ∘ x = { (a + b) · x } = { a · x + b · x } ⊆ { a · x } + { b · x } = (a ∘ x) + (b ∘ x).
(H3) Associativity: First note b ∘ x = { b · x }. Hence, using the standard interpretation
a ∘ (b ∘ x) = ∪_{y ∈ b ∘ x} (a ∘ y),
we compute
a ∘ (b ∘ x) = ∪_{y ∈ { b · x }} { a · y } = { a · (b · x) } = { (ab) · x } = (ab) ∘ x.
(H4) Compatibility with additive inverses:
a ∘ (−x) = { a · (−x) } = { −(a · x) } = −({ a · x }) = −(a ∘ x),
and similarly
(−a) ∘ x = { (−a) · x } = { −(a · x) } = −(a ∘ x).
(H5) Unit condition:
1 ∘ x = { 1 · x } = { x },
so x ∈ 1 ∘ x.
Therefore (W, +, ∘, K) is a hypervector space.
Step 2 (Reduction of similarities). Let Φ(v) = { φ(v) }. By Definition 2.2,
SIM(Φ(u), Φ(v)) = Agg({ sim(x, y) ∣ x ∈ { φ(u) }, y ∈ { φ(v) } }) = Agg({ sim(φ(u), φ(v)) }).
By the singleton normalization in Definition 2.1,
Agg({ sim(φ(u), φ(v)) }) = sim(φ(u), φ(v)).
Hence SIM(Φ(u), Φ(v)) = sim(φ(u), φ(v)) for all u, v, so Φ satisfies the same proximity-preservation requirement as φ. Thus Φ is a graph hyperembedding, and Graph Hyperembedding generalizes Graph Embedding. □

2.2. Graph SuperHyperembedding

Graph SuperHyperembedding maps graph vertices to elements of the n-th iterated nonempty powerset in an (m, n)-SuperhyperVector Space, preserving graph proximities.
Definition 2.7
(Lifted Minkowski sum on P^k_*(U)). Let (U, +) be an abelian group. Define binary operations ⊕_k on P^k_*(U) recursively as follows.
For k = 0, set ⊕₀ := + on U.
For k ≥ 0 and X, Y ∈ P^(k+1)_*(U) = P*(P^k_*(U)), define
X ⊕_(k+1) Y = { x ⊕_k y ∣ x ∈ X, y ∈ Y }.
In particular, for k = 0 and nonempty subsets A, B ⊆ U,
A ⊕₁ B = { a + b ∣ a ∈ A, b ∈ B },
which is the usual Minkowski sum.
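The recursion of Definition 2.7 is naturally expressed with nested `frozenset`s. In the sketch below (names ours), the integers play the role of the abelian group (U, +) = (ℤ, +):

```python
def lifted_sum(X, Y, k):
    """X ⊕_k Y on the k-th iterated nonempty powerset of (Z, +)."""
    if k == 0:
        return X + Y                        # ⊕_0 is the group addition itself
    return frozenset(lifted_sum(x, y, k - 1) for x in X for y in Y)

A = frozenset({1, 2})
B = frozenset({10, 20})
level1 = lifted_sum(A, B, 1)                # ordinary Minkowski sum
level2 = lifted_sum(frozenset({A}), frozenset({B}), 2)
```

Here `level1` is {11, 12, 21, 22}, and `level2` = { A ⊕₁ B } is the singleton family containing that Minkowski sum, illustrating how the operation lifts one powerset level at a time.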
Definition 2.8
(Singleton tower operator). Let U be a nonempty set. Define maps ι_k : P^0_*(U) → P^k_*(U) by
ι₀(x) = x,  ι_(k+1)(x) = { ι_k(x) }  (k ≥ 0).
More generally, for X ∈ P^j_*(U) and n ≥ j, define ι_j^n(X) ∈ P^n_*(U) by
ι_j^j(X) = X,  ι_j^(t+1)(X) = { ι_j^t(X) }  (t ≥ j).
Thus ι_1^n(A) = {{⋯{A}⋯}} (with n − 1 additional braces) for A ∈ P^1_*(U).
Lemma 2.9
(Compatibility of ι_n with lifted sum). Let (U, +) be an abelian group and define ⊕_k as in Definition 2.7. Then for all n ≥ 1 and all u, v ∈ U,
ι_n(u) ⊕_n ι_n(v) = ι_n(u + v).
Proof. 
We proceed by induction on n.
Base case n = 1:
ι₁(u) ⊕₁ ι₁(v) = { u } ⊕₁ { v } = { u + v } = ι₁(u + v).
Induction step: assume ι_n(u) ⊕_n ι_n(v) = ι_n(u + v). Then, using Definition 2.7,
ι_(n+1)(u) ⊕_(n+1) ι_(n+1)(v) = { ι_n(u) } ⊕_(n+1) { ι_n(v) } = { ι_n(u) ⊕_n ι_n(v) }.
By the induction hypothesis, ι_n(u) ⊕_n ι_n(v) = ι_n(u + v), hence
ι_(n+1)(u) ⊕_(n+1) ι_(n+1)(v) = { ι_n(u + v) } = ι_(n+1)(u + v). □
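Lemma 2.9 can also be spot-checked computationally. The sketch below re-implements ⊕_k and the singleton tower ι_n over (ℤ, +) and verifies the identity for small n (all names are ours):

```python
def lifted_sum(X, Y, k):
    """X ⊕_k Y over the abelian group (Z, +), per Definition 2.7."""
    if k == 0:
        return X + Y
    return frozenset(lifted_sum(x, y, k - 1) for x in X for y in Y)

def iota(x, n):
    """Singleton tower: iota(x, n) = {...{x}...} with n nested braces."""
    for _ in range(n):
        x = frozenset({x})
    return x

# Check iota_n(u) ⊕_n iota_n(v) = iota_n(u + v) for small n and integers u, v.
for n in range(1, 4):
    for u in range(-2, 3):
        for v in range(-2, 3):
            assert lifted_sum(iota(u, n), iota(v, n), n) == iota(u + v, n)
```

The exhaustive check over n ∈ {1, 2, 3} and small integers mirrors exactly the induction in the proof: each recursion step peels off one singleton layer on both sides.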
Definition 2.10
(Aggregation operator). An aggregation operator is a map
Agg : P*(ℝ) → ℝ
satisfying
Agg({r}) = r  (r ∈ ℝ).
Definition 2.11
(Recursive induced similarity on P^n_*(U)). Let U be a nonempty set and fix a base similarity
sim₀ : U × U → ℝ.
Fix an aggregation operator Agg (Definition 2.10). Define similarities SIM_k : P^k_*(U) × P^k_*(U) → ℝ recursively by
SIM₀ = sim₀,
and, for k ≥ 0 and X, Y ∈ P^(k+1)_*(U),
SIM_(k+1)(X, Y) = Agg({ SIM_k(x, y) ∣ x ∈ X, y ∈ Y }).
Lemma 2.12
(Singleton-collapse of the recursive similarity). With SIM_k defined as in Definition 2.11, for any k ≥ 0 and any X, Y ∈ P^k_*(U),
SIM_(k+1)({X}, {Y}) = SIM_k(X, Y).
Consequently, for n ≥ 1 and any A, B ∈ P^1_*(U),
SIM_n(ι_1^n(A), ι_1^n(B)) = SIM₁(A, B).
Proof. 
First claim:
SIM_(k+1)({X}, {Y}) = Agg({ SIM_k(x, y) ∣ x ∈ {X}, y ∈ {Y} }) = Agg({ SIM_k(X, Y) }) = SIM_k(X, Y),
using the singleton normalization Agg({r}) = r.
The second claim follows by iterating the first claim n − 1 times:
SIM_n(ι_1^n(A), ι_1^n(B)) = SIM_(n−1)(ι_1^(n−1)(A), ι_1^(n−1)(B)) = ⋯ = SIM₁(A, B). □
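The recursive similarity and its singleton-collapse property are equally direct to code. In this sketch (names ours) the base objects are integers with sim₀(a, b) = −|a − b|:

```python
def SIM(X, Y, k, sim0=lambda a, b: -abs(a - b), agg=max):
    """Recursive induced similarity SIM_k of Definition 2.11 (k = 0 is sim0)."""
    if k == 0:
        return sim0(X, Y)
    return agg([SIM(x, y, k - 1, sim0, agg) for x in X for y in Y])

def iota_lift(A, extra):
    """Wrap a level-1 set in `extra` further singleton layers (iota_1^n, n = 1 + extra)."""
    for _ in range(extra):
        A = frozenset({A})
    return A

A = frozenset({1, 3})
B = frozenset({2, 7})
```

Here SIM(A, B, 1) = max{−1, −6, −1, −4} = −1, and Lemma 2.12 predicts that lifting both sets to any higher level leaves this value unchanged, which the assertions below confirm.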
Definition 2.13
(Graph superhyperembedding in an (m, n)-SuperhyperVector Space). Let G = (V, E, w) be a (possibly weighted) simple graph with V = { v₁, …, v_N } and E ⊆ [V]². Fix an (m, n)-SuperhyperVector Space
(U, +, ∘^(m,n), K)
(in particular, (U, +) is an abelian group), and fix a graph proximity function
s : V × V → ℝ.
Fix also a base similarity sim₀ : U × U → ℝ and an aggregation operator Agg. Let SIM_n be the induced similarity on P^n_*(U) from Definition 2.11.
A graph superhyperembedding of G into (U, +, ∘^(m,n), K) is a map
Ψ : V → P^n_*(U)
such that SIM_n(Ψ(u), Ψ(v)) approximates (or preserves the ordering of) s(u, v) for relevant pairs (u, v).
One common specification is to define Ψ as a minimizer of
min_{Ψ : V → P^n_*(U)}  Σ_{(u,v) ∈ Ω}  L( s(u, v), SIM_n(Ψ(u), Ψ(v)) ) + λ R(Ψ),
where Ω ⊆ V × V, L is a loss function, R is a regularizer, and λ ≥ 0.
Example 2.14
(A level-n = 2 Graph SuperHyperembedding on a short path). Let G = (V, E) be the path on three vertices
V = { v₁, v₂, v₃ },  E = { {v₁, v₂}, {v₂, v₃} }.
Define the proximity
s(u, v) = 1 if {u, v} ∈ E, and s(u, v) = 0 if {u, v} ∉ E.
(Target (m, n)-SuperhyperVector Space.) Let K = ℝ and let U = ℝ² with the usual addition +. Fix (m, n) = (2, 2) and define
∘^(2,2) : K² × U → P²_*(U),  ∘^(2,2)((a₁, a₂), x) = {{ (a₁a₂)x }} = ι₂((a₁a₂)x).
This is the canonical singleton-tower lifting of ordinary scalar multiplication; hence the axioms (SV1)–(SV5) follow from the usual vector-space identities exactly as in the singleton reduction arguments used for hypervector spaces.
(Base similarity and aggregation.) Let
sim₀(x, y) = −‖x − y‖₂²  (x, y ∈ U),
and choose the aggregation operator Agg(X) = max X. Let SIM₂ be the induced similarity on P²_*(U) constructed recursively (Definition 2.11).
(Definition of the superhyperembedding.) Define Ψ : V → P²_*(U) by the following nonempty families of nonempty subsets:
Ψ(v₁) = { A₁, A₂ },  Ψ(v₂) = { B₁, B₂ },  Ψ(v₃) = { C₁, C₂ },
where
A₁ = { (0, 0), (0.2, 0) },  A₂ = { (0, 0.1), (0.2, 0.1) },
B₁ = { (1, 0), (1.2, 0) },  B₂ = { (1, 0.1), (1.2, 0.1) },
C₁ = { (3, 0), (3.2, 0) },  C₂ = { (3, 0.1), (3.2, 0.1) }.
Thus each Ψ(v_i) is an element of P²_*(U) = P*(P*(U)).
(Verification: edge pairs have larger induced similarity than the non-edge pair.) First compute SIM₁(A₁, B₁). We list the four squared distances:
‖(0, 0) − (1, 0)‖₂² = 1,  ‖(0, 0) − (1.2, 0)‖₂² = 1.44,
‖(0.2, 0) − (1, 0)‖₂² = 0.64,  ‖(0.2, 0) − (1.2, 0)‖₂² = 1.
Hence the four similarities sim₀ are
−1, −1.44, −0.64, −1,
so (since Agg = max)
SIM₁(A₁, B₁) = max{ −1, −1.44, −0.64, −1 } = −0.64.
By the same computation (shifted in the second coordinate),
SIM₁(A₂, B₂) = −0.64.
Therefore, using the level-2 recursion SIM₂({A₁, A₂}, {B₁, B₂}) = max_{i,j} SIM₁(A_i, B_j), we obtain
SIM₂(Ψ(v₁), Ψ(v₂)) = −0.64.
Next, for the edge {v₂, v₃} ∈ E, compute SIM₁(B₁, C₁). The closest pair is (1.2, 0) ∈ B₁ and (3, 0) ∈ C₁, giving
‖(1.2, 0) − (3, 0)‖₂² = (1.8)² = 3.24, so SIM₁(B₁, C₁) ≥ −3.24.
A direct enumeration of all four pairs in B₁ × C₁ shows the maximum similarity is indeed −3.24, so SIM₁(B₁, C₁) = −3.24, and similarly SIM₁(B₂, C₂) = −3.24. Hence
SIM₂(Ψ(v₂), Ψ(v₃)) = −3.24.
Finally, for the non-edge {v₁, v₃} ∉ E, the closest pair is (0.2, 0) ∈ A₁ and (3, 0) ∈ C₁, giving
‖(0.2, 0) − (3, 0)‖₂² = (2.8)² = 7.84, so SIM₂(Ψ(v₁), Ψ(v₃)) = −7.84.
Consequently,
min{ SIM₂(Ψ(v₁), Ψ(v₂)), SIM₂(Ψ(v₂), Ψ(v₃)) } = min{ −0.64, −3.24 } = −3.24 > −7.84 = SIM₂(Ψ(v₁), Ψ(v₃)),
which matches the intended proximity ordering 1 > 0 for edges versus the non-edge. Thus Ψ is a concrete Graph SuperHyperembedding of G at level n = 2.
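As with the level-1 example, these level-2 numbers can be confirmed mechanically (helper names ours):

```python
def sim0(x, y):
    """Negative squared Euclidean distance on R^2."""
    return -((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

def SIM1(A, B):
    """Level-1 induced similarity with Agg = max."""
    return max(sim0(x, y) for x in A for y in B)

def SIM2(X, Y):
    """Level-2 induced similarity: max over pairs of level-1 sets."""
    return max(SIM1(A, B) for A in X for B in Y)

A1 = {(0.0, 0.0), (0.2, 0.0)}; A2 = {(0.0, 0.1), (0.2, 0.1)}
B1 = {(1.0, 0.0), (1.2, 0.0)}; B2 = {(1.0, 0.1), (1.2, 0.1)}
C1 = {(3.0, 0.0), (3.2, 0.0)}; C2 = {(3.0, 0.1), (3.2, 0.1)}
Psi = {"v1": (A1, A2), "v2": (B1, B2), "v3": (C1, C2)}
```

Evaluating SIM2 on the three vertex pairs reproduces −0.64, −3.24, and −7.84 up to floating-point rounding.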
Example 2.15
(A level-n = 3 Graph SuperHyperembedding on a 4-cycle). Let G = (V, E) be the cycle C₄ with
V = { a, b, c, d },  E = { {a, b}, {b, c}, {c, d}, {d, a} }.
Define the proximity s by s(u, v) = 1 for {u, v} ∈ E and s(u, v) = 0 otherwise.
(Target (m, n)-SuperhyperVector Space.) Let K = ℝ, U = ℝ² with the usual addition, and fix (m, n) = (3, 3). Define the canonical singleton-tower scalar SuperHyperOperation
∘^(3,3) : K³ × U → P³_*(U),  ∘^(3,3)((α, β, γ), x) = ι₃((αβγ)x).
(Base similarity and aggregation.) Let
sim₀(x, y) = −‖x − y‖₂²,  Agg(X) = max X,
and let SIM₃ be the induced similarity on P³_*(U) obtained from Definition 2.11.
(Mode construction in P³_*(U).) We first define nonempty subsets of U (level-1 objects):
A₁ = { (0, 0), (0, 0.1) },  A₂ = { (0.1, 0), (0.1, 0.1) },
B₁ = { (1, 0), (1, 0.1) },  B₂ = { (1.1, 0), (1.1, 0.1) },
C₁ = { (1, 1), (1, 1.1) },  C₂ = { (1.1, 1), (1.1, 1.1) },
D₁ = { (0, 1), (0, 1.1) },  D₂ = { (0.1, 1), (0.1, 1.1) }.
Next define level-2 objects (nonempty sets of the above subsets):
X_a^(1) = { A₁, A₂ },  X_a^(2) = { A₁ },
X_b^(1) = { B₁, B₂ },  X_b^(2) = { B₁ },
X_c^(1) = { C₁, C₂ },  X_c^(2) = { C₁ },
X_d^(1) = { D₁, D₂ },  X_d^(2) = { D₁ }.
Finally, define the level-3 embedding Ψ : V → P³_*(U) by
Ψ(a) = { X_a^(1), X_a^(2) },  Ψ(b) = { X_b^(1), X_b^(2) },  Ψ(c) = { X_c^(1), X_c^(2) },  Ψ(d) = { X_d^(1), X_d^(2) }.
Each Ψ(·) is a nonempty set of level-2 objects, hence an element of P³_*(U).
(Verification: an edge similarity dominates a diagonal similarity.) Consider first the edge {a, b} ∈ E. Compute the level-1 induced similarity SIM₁(A₂, B₁). The closest pair is (0.1, 0) ∈ A₂ and (1, 0) ∈ B₁:
‖(0.1, 0) − (1, 0)‖₂² = (0.9)² = 0.81, so SIM₁(A₂, B₁) = −0.81.
One checks similarly that all other pairs among the A_i and B_j yield similarities ≤ −0.81, so
SIM₂(X_a^(1), X_b^(1)) = max_{i,j ∈ {1,2}} SIM₁(A_i, B_j) = −0.81.
Because SIM₃ is the max-aggregation over mode pairs,
SIM₃(Ψ(a), Ψ(b)) = max_{p,q ∈ {1,2}} SIM₂(X_a^(p), X_b^(q)) ≥ SIM₂(X_a^(1), X_b^(1)) = −0.81.
In fact, the other mode combinations give values ≤ −0.81, so
SIM₃(Ψ(a), Ψ(b)) = −0.81.
Now consider the diagonal non-edge {a, c} ∉ E. Compute SIM₁(A₂, C₁) using the closest pair (0.1, 0.1) ∈ A₂ and (1, 1) ∈ C₁:
‖(0.1, 0.1) − (1, 1)‖₂² = (0.9)² + (0.9)² = 0.81 + 0.81 = 1.62, so SIM₁(A₂, C₁) = −1.62.
Enumerating the remaining pairs among the A_i and C_j gives similarities ≤ −1.62, hence
SIM₂(X_a^(1), X_c^(1)) = −1.62.
The other mode combinations yield values ≤ −1.62 (for instance, SIM₂(X_a^(2), X_c^(2)) = SIM₁(A₁, C₁) = −1.81), so
SIM₃(Ψ(a), Ψ(c)) = max_{p,q ∈ {1,2}} SIM₂(X_a^(p), X_c^(q)) = −1.62.
Therefore,
SIM₃(Ψ(a), Ψ(b)) = −0.81 > −1.62 = SIM₃(Ψ(a), Ψ(c)),
which matches s(a, b) = 1 > 0 = s(a, c). By the symmetric construction of the four corners, the same edge-versus-diagonal ordering holds for {b, c}, {c, d}, {d, a} versus the non-edges {a, c} and {b, d}. Hence Ψ is a concrete Graph SuperHyperembedding of C₄ at level n = 3.
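Finally, the level-3 computation can be checked with a single recursive function (names ours). With Agg = max at every level, SIM₃ amounts to the negative of the smallest squared distance between any two base points across all modes:

```python
def sim0(x, y):
    """Negative squared Euclidean distance on R^2."""
    return -((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)

def SIM(X, Y, k):
    """Recursive induced similarity with Agg = max (Definition 2.11)."""
    if k == 0:
        return sim0(X, Y)
    return max(SIM(x, y, k - 1) for x in X for y in Y)

# Level-1 objects as tuples of points; nested tuples give levels 2 and 3.
A1 = ((0.0, 0.0), (0.0, 0.1)); A2 = ((0.1, 0.0), (0.1, 0.1))
B1 = ((1.0, 0.0), (1.0, 0.1)); B2 = ((1.1, 0.0), (1.1, 0.1))
C1 = ((1.0, 1.0), (1.0, 1.1)); C2 = ((1.1, 1.0), (1.1, 1.1))
D1 = ((0.0, 1.0), (0.0, 1.1)); D2 = ((0.1, 1.0), (0.1, 1.1))

Psi = {
    "a": ((A1, A2), (A1,)), "b": ((B1, B2), (B1,)),
    "c": ((C1, C2), (C1,)), "d": ((D1, D2), (D1,)),
}
```

Evaluating SIM at depth k = 3 reproduces the edge value −0.81 for {a, b} and the diagonal value −1.62 for {a, c} (up to floating-point rounding), confirming the edge-versus-diagonal ordering.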
Theorem 2.16
(Graph SuperHyperembedding generalizes Graph Hyperembedding and Graph Embedding). Let G = (V, E, w) be a graph, s : V × V → ℝ a proximity, sim₀ : U × U → ℝ a base similarity, and Agg an aggregation operator.
(I)
(Generalizes Graph Hyperembedding) Assume n ≥ 1. Let Φ : V → P^1_*(U) = P*(U) be a graph hyperembedding in the sense of Definition 2.3 (with induced SIM₁). Define
Ψ(v) = ι_1^n(Φ(v)) ∈ P^n_*(U)  (v ∈ V).
Then Ψ is a graph superhyperembedding (Definition 2.13), and for all u, v ∈ V,
SIM_n(Ψ(u), Ψ(v)) = SIM₁(Φ(u), Φ(v)).
(II)
(Generalizes Graph Embedding) Let (W, +, ·, K) be a classical vector space. Fix sim₀ : W × W → ℝ and Agg. Let φ : V → W be a classical graph embedding (Definition 1.11). Define the lifted maps
Φ(v) = { φ(v) } ∈ P^1_*(W),  Ψ(v) = ι_1^n(Φ(v)) ∈ P^n_*(W).
Then Ψ is a graph superhyperembedding into P^n_*(W), and for all u, v ∈ V,
SIM_n(Ψ(u), Ψ(v)) = sim₀(φ(u), φ(v)).
In particular, Graph SuperHyperembedding strictly contains the usual Graph Embedding as the singleton case.
Proof. (I) Let n ≥ 1 and define Ψ(v) = ι_1^n(Φ(v)). For any u, v ∈ V, by Lemma 2.12,
SIM_n(Ψ(u), Ψ(v)) = SIM_n(ι_1^n(Φ(u)), ι_1^n(Φ(v))) = SIM₁(Φ(u), Φ(v)).
Therefore the proximity-preservation property for Ψ at level n is identical to that for Φ at level 1. Hence Ψ is a graph superhyperembedding.
(II) Let φ : V → W be a classical graph embedding and define Φ(v) = { φ(v) } and Ψ(v) = ι_1^n(Φ(v)). First compute SIM₁ on singletons:
SIM₁(Φ(u), Φ(v)) = Agg({ SIM₀(x, y) ∣ x ∈ { φ(u) }, y ∈ { φ(v) } }) = Agg({ sim₀(φ(u), φ(v)) }) = sim₀(φ(u), φ(v)).
Next apply Lemma 2.12 to lift from level 1 to level n:
SIM_n(Ψ(u), Ψ(v)) = SIM_n(ι_1^n(Φ(u)), ι_1^n(Φ(v))) = SIM₁(Φ(u), Φ(v)) = sim₀(φ(u), φ(v)).
Thus the induced similarities in the superhyperembedding match the classical similarities of φ, so Ψ preserves the same proximity information and is a graph superhyperembedding. □

3. Conclusion

In this paper, we introduced two extensions of classical graph embedding by employing HyperVector and SuperHyperVector structures: namely, Graph Hyperembedding and Graph SuperHyperembedding. For reference, we summarize Graph embedding, Graph Hyperembedding, and Graph SuperHyperembedding in a comparative Table 2.
In future work, we aim to investigate extended frameworks based on Fuzzy Sets [38], Neutrosophic Sets [39,40], Uncertain Sets [41,42], Plithogenic Sets [43,44], and related generalizations.

Funding

No external or organizational funding supported this work.

Data Availability Statement

This paper is purely theoretical and does not involve any empirical data. We welcome future empirical studies that build upon and test the concepts presented here.

Acknowledgments

We wish to thank everyone whose guidance, ideas, and assistance contributed to this research. Our appreciation goes to the readers for their interest and to the scholars whose publications provided the groundwork for this study. We are also indebted to the individuals and institutions that supplied the resources and infrastructure necessary for completing and disseminating this paper. Lastly, we extend our gratitude to all who offered their support in various capacities.

Ethical Approval

As this work is entirely conceptual and involves no human or animal subjects, ethical approval was not required.

Conflicts of Interest

The authors declare no conflicts of interest in connection with this study or its publication.

Research Integrity

The authors affirm that, to the best of their knowledge, this manuscript represents their original research. It has not been previously published in any journal, nor is it currently being considered for publication elsewhere.

Disclaimer on Computational Tools

No computer-based tools—such as symbolic computation systems, automated theorem provers, or proof assistants (e.g., Mathematica, SageMath, Coq)—were employed in the development, analysis, or verification of the results contained in this paper. All derivations and proofs were conducted manually through analytical methods by the authors.

Use of Artificial Intelligence

The authors use generative AI and AI-assisted tools for tasks such as English grammar checking, and do not employ them in any way that violates ethical standards.

Disclaimer on Scope and Accuracy

The theoretical models and concepts proposed in this manuscript have not yet undergone empirical testing or practical deployment. Future work may investigate their utility in applied or experimental contexts. While the authors have taken care to maintain accuracy and provide appropriate citations, inadvertent errors or omissions may remain. Readers are encouraged to consult original references for confirmation and further study. The authors assert that all mathematical results and justifications included in this work have been carefully reviewed and are believed to be correct. Should any inaccuracies or ambiguities be discovered, the authors welcome constructive feedback and will provide clarification upon request. The conclusions presented are valid only within the specific theoretical framework and assumptions described in the text. Generalizing these results to other mathematical contexts may require further investigation. All opinions and interpretations expressed herein are solely those of the authors and do not necessarily reflect the views of their respective institutions.

References

  1. Adebisi, S.A.; Ajuebishi, A.P. The Order Involving the Neutrosophic Hyperstructures, the Construction and Setting up of a Typical Neutrosophic Group. HyperSoft Set Methods in Engineering 2025, 3, 26–31.
  2. Corsini, P.; Leoreanu, V. Applications of Hyperstructure Theory; Springer Science & Business Media, 2013; Vol. 5.
  3. Davvaz, B.; Nezhad, A.D.; Nejad, S.M. Algebraic hyperstructure of observable elementary particles including the Higgs boson. Proceedings of the National Academy of Sciences, India Section A: Physical Sciences 2020, 90, 169–176.
  4. Al Tahan, M.; Davvaz, B. Weak chemical hyperstructures associated to electrochemical cells. Iranian Journal of Mathematical Chemistry 2018, 9, 65–75.
  5. Vougiouklis, T. The fundamental relation in hyperrings. The general hyperfield. In Proceedings of the Fourth International Congress on Algebraic Hyperstructures and Applications (AHA 1990); World Scientific, 1991; pp. 203–211.
  6. Vougiouklis, T. On some representations of hypergroups. Annales scientifiques de l'Université de Clermont. Mathématiques 1990, 95, 21–29.
  7. Vougiouklis, T. Hypermathematics, Hv-structures, hypernumbers, hypermatrices and Lie-Santilli admissibility. American Journal of Modern Physics 2015, 4, 38–46.
  8. Fujita, T. Toward a Unified Framework for Knot Theory, Hyperknot Theory, and Superhyperknot Theory via Superhyperstructures. Neutrosophic Knowledge 2025, 6, 55–71.
  9. Al-Odhari, A. A Brief Comparative Study on HyperStructure, Super HyperStructure, and n-Super SuperHyperStructure. Neutrosophic Knowledge 2025, 6, 38–49.
  10. Smarandache, F. Extension of HyperGraph to n-SuperHyperGraph and to Plithogenic n-SuperHyperGraph, and Extension of HyperAlgebra to n-ary (Classical-/Neutro-/Anti-) HyperAlgebra. Infinite Study, 2020.
  11. Ghods, M.; Rostami, Z.; Smarandache, F. Introduction to Neutrosophic Restricted SuperHyperGraphs and Neutrosophic Restricted SuperHyperTrees and several of their properties. Neutrosophic Sets and Systems 2022, 50, 480–487.
  12. Hamidi, M.; Smarandache, F.; Davneshvar, E. Spectrum of superhypergraphs via flows. Journal of Mathematics 2022, 2022, 9158912.
  13. Al-Tahan, M.; Davvaz, B. Chemical hyperstructures for elements with four oxidation states. Iranian Journal of Mathematical Chemistry 2022, 13, 85–97.
  14. Davvaz, B.; Dehghan Nezhad, A.; Benvidi, A. Chemical hyperalgebra: Dismutation reactions. MATCH Communications in Mathematical and in Computer Chemistry 2012, 67, 55.
  15. Smarandache, F. History of SuperHyperAlgebra and Neutrosophic SuperHyperAlgebra (revisited again). Neutrosophic Algebraic Structures and Their Applications 2022, 10.
  16. Jech, T. Set Theory: The Third Millennium Edition, Revised and Expanded; Springer, 2003.
  17. Smarandache, F. Foundation of SuperHyperStructure & Neutrosophic SuperHyperStructure. Neutrosophic Sets and Systems 2024, 63, 21.
  18. Smarandache, F. Introduction to SuperHyperAlgebra and Neutrosophic SuperHyperAlgebra. Journal of Algebraic Hyperstructures and Logical Algebras 2022.
  19. Vougioukli, S. Helix hyperoperation in teaching research. Science & Philosophy 2020, 8, 157–163.
  20. Vougioukli, S. Hyperoperations defined on sets of S-helix matrices. Journal of Algebraic Hyperstructures and Logical Algebras 2020, 1.
  21. Smarandache, F. Foundation of revolutionary topologies: An overview, examples, trend analysis, research issues, challenges, and future directions. Neutrosophic Systems with Applications 2024, 13.
  22. Hatip, A.; Olgun, N.; et al. On the Concepts of Two-Fold Fuzzy Vector Spaces and Algebraic Modules. Journal of Neutrosophic and Fuzzy Systems 2023, 7, 46–52.
  23. Şahin, M.; Kargın, A.; Yıldız, İ. Neutrosophic triplet field and neutrosophic triplet vector space based on set valued neutrosophic quadruple number. Quadruple Neutrosophic Theory and Applications 2020, 1, 52.
  24. Sameena, K.; et al. Fuzzy Matroids from Fuzzy Vector Spaces. South East Asian Journal of Mathematics and Mathematical Sciences 2021, 17, 381–390.
  25. Lubczonok, P. Fuzzy vector spaces. Fuzzy Sets and Systems 1990, 38, 329–343.
  26. Agboola, A.; Akinleye, S. Neutrosophic vector spaces. Neutrosophic Sets and Systems 2014, 4, 9–18.
  27. Ibrahim, M.A.; Badmus, B.; Akinleye, S.; et al. On refined neutrosophic vector spaces I. International Journal of Neutrosophic Science 2020, 7, 97–109.
  28. Motameni, M.; Ameri, R.; Sadeghi, R. Hypermatrix Based on Krasner Hypervector Spaces; 2013.
  29. Dehghan, O. An Introduction to NeutroHyperVector Spaces. Neutrosophic Sets and Systems 2023, 58, 21.
  30. Fujita, T. HyperVector and SuperHyperVector Spaces with Applications in Machine Learning: Feature, Support, and Relevance Vectors; 2025.
  31. Fujita, T. New Concepts Using HyperStructure and n-SuperHyperStructure: Sequence, Equation, Statistics, Number, Vector, and More; 2024.
  32. Wang, Q.; Mao, Z.; Wang, B.; Guo, L. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 2017, 29, 2724–2743.
  33. Ou, M.; Cui, P.; Pei, J.; Zhang, Z.; Zhu, W. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
  34. Cai, H.; Zheng, V.W.; Chang, K.C.C. A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering 2018, 30, 1616–1637.
  35. Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, 2014; Vol. 28.
  36. Goyal, P.; Ferrara, E. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems 2018, 151, 78–94.
  37. Xu, M. Understanding graph embedding methods and their applications. SIAM Review 2021, 63, 825–853.
  38. Zadeh, L.A. Fuzzy sets. Information and Control 1965, 8, 338–353.
  39. Broumi, S.; Talea, M.; Bakali, A.; Smarandache, F. Single valued neutrosophic graphs. Journal of New Theory 2016, 10, 86–101.
  40. Wang, H.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single valued neutrosophic sets. Infinite Study, 2010.
  41. Fujita, T.; Smarandache, F. A Unified Framework for U-Structures and Functorial Structure: Managing Super, Hyper, SuperHyper, Tree, and Forest Uncertain Over/Under/Off Models. Neutrosophic Sets and Systems 2025, 91, 337–380.
  42. Fujita, T.; Smarandache, F. A Dynamic Survey of Fuzzy, Intuitionistic Fuzzy, Neutrosophic, and Extensional Sets; Neutrosophic Science International Association (NSIA), 2025.
  43. Smarandache, F.; Martin, N. Plithogenic n-super hypergraph in novel multi-attribute decision making. Infinite Study, 2020.
  44. Smarandache, F. Plithogenic set, an extension of crisp, fuzzy, intuitionistic fuzzy, and neutrosophic sets-revisited. Infinite Study, 2018.
Table 1. Concise comparison: Vector Space vs. HyperVector Space vs. (m, n)-SuperHyperVector Space.

Category | Vector Space | HyperVector Space | (m, n)-SuperHyperVector Space
Underlying additive structure | Abelian group (V, +) | Abelian group (V, +) | Abelian group (V, +)
Scalars | Field K | Field K | Field K (with m-tuples of scalars)
Scalar action (type) | K × V → V | K × V → P*(V) | K^m × V → P_n*(V)
Output per scalar–vector input | Single vector | Nonempty subset of V | Nonempty element of P_n*(V) (nested set of depth n)
Distributivity (typical form) | Equalities | Inclusions (set-valued) | Inclusions (set-valued, m-ary)
Associativity (typical form) | a(bx) = (ab)x | a ∘ (b ∘ x) = (ab) ∘ x | ∘_(m,n)(a, ∘_(m,n)(b, x)) = ∘_(m,n)(a · b, x)
Identity behavior | 1 · x = x | x ∈ 1 ∘ x | x ∈ ∘_(m,n)((1, …, 1), x)
Motivation | Linear representation | Multi-valued/uncertainty-aware scalar action | Hierarchical multi-valued scalar action (iterated powerset level)
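The set-valued axioms summarized in Table 1 can be illustrated with a small sketch. The Python toy below is not taken from the paper: the action a ∘ x = {ax, -ax} on the abelian group (Z, +) is our own illustrative choice, selected so that the HyperVector-space pattern is visible: identity holds as membership, associativity as equality of sets, and distributivity as an inclusion rather than an equality.

```python
# Toy set-valued scalar action on the abelian group (Z, +): a ∘ x = {a*x, -a*x}.
# This action is an illustrative assumption, not a construction from the paper.

def hyper_smul(a: int, x: int) -> frozenset:
    """Set-valued scalar action K × V → P*(V): each pair yields a nonempty set."""
    return frozenset({a * x, -a * x})

def set_add(A: frozenset, B: frozenset) -> frozenset:
    """Elementwise (Minkowski) sum of two subsets of V."""
    return frozenset(u + v for u in A for v in B)

def smul_on_set(a: int, A: frozenset) -> frozenset:
    """Extend the action to subsets: a ∘ A is the union of a ∘ v over v in A."""
    return frozenset(w for v in A for w in hyper_smul(a, v))

a, b, x, y = 3, 5, 2, 7

# Identity behavior: x ∈ 1 ∘ x (weaker than the classical 1 · x = x).
assert x in hyper_smul(1, x)

# Associativity: a ∘ (b ∘ x) = (a·b) ∘ x holds with equality for this action.
assert smul_on_set(a, hyper_smul(b, x)) == hyper_smul(a * b, x)

# Distributivity: a ∘ (x + y) ⊆ (a ∘ x) + (a ∘ y), an inclusion, not an equality.
assert hyper_smul(a, x + y) <= set_add(hyper_smul(a, x), hyper_smul(a, y))
```

Setting hyper_smul(a, x) = {a * x} collapses every set to a singleton and recovers the classical vector-space equalities, mirroring the reduction pattern in Table 1.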
Table 2. Concise comparison: Graph embedding vs. Graph Hyperembedding vs. Graph SuperHyperembedding.

Category | Graph embedding | Graph Hyperembedding | Graph SuperHyperembedding
Target space | R^d | (U, +, ∘, K) | (U, +, ∘_(m,n), K)
Embedding map | φ: V → R^d | Φ: V → P*(U) | Ψ: V → P_n*(U)
Representation | One vector | Nonempty set in U | Nested nonempty set in P_n*(U)
Similarity | sim(·, ·) on R^d | SIM_1 induced from sim_0 on U | SIM_n induced recursively from sim_0 on U
Objective | sim(φ(u), φ(v)) ≈ s(u, v) | SIM_1(Φ(u), Φ(v)) ≈ s(u, v) | SIM_n(Ψ(u), Ψ(v)) ≈ s(u, v)
Reduction | (baseline) | Φ(v) = {x_v} gives classical embedding into U | Ψ(v) = ι_n(x_v) gives SIM_n = sim_0
Motivation | Compact encoding | Ambiguity-aware encoding | Hierarchical ambiguity/multi-scale encoding
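The embedding maps and induced similarities of Table 2 admit a short executable sketch. The aggregation rule below (the induced similarity of two sets is the maximum pairwise similarity) is an illustrative assumption: the table only requires SIM_n to be induced recursively from sim_0, and the toy vertices, vectors, and maps Phi and Psi are hypothetical.

```python
# Hedged sketch of Table 2. Assumed choices (not from the paper): U = R^2 with
# sim_0 = dot product, and the induced similarity of two sets taken as the
# maximum pairwise similarity, applied one powerset level at a time.

def sim0(x, y):
    """Base similarity on U = R^2: the dot product."""
    return x[0] * y[0] + x[1] * y[1]

def SIM(A, B):
    """Recursively induced similarity: on plain vectors use sim0; on
    (possibly nested) nonempty sets of equal depth, take the best pairwise match."""
    if isinstance(A, tuple) and isinstance(B, tuple):  # both vectors in U
        return sim0(A, B)
    return max(SIM(a, b) for a in A for b in B)

# Graph hyperembedding Φ: each vertex gets a nonempty set of candidate vectors.
Phi = {
    "u": frozenset({(1.0, 0.0), (0.0, 1.0)}),
    "v": frozenset({(1.0, 1.0)}),
}

# Graph superhyperembedding Ψ (n = 2): nested nonempty sets.
Psi = {
    "u": frozenset({frozenset({(1.0, 0.0)}), frozenset({(0.0, 1.0)})}),
    "v": frozenset({frozenset({(1.0, 1.0)})}),
}

# Reduction row of Table 2: iterated singletons collapse SIM_n back to sim_0.
x, y = (2.0, 3.0), (4.0, 5.0)
assert SIM(frozenset({x}), frozenset({y})) == sim0(x, y)
assert SIM(frozenset({frozenset({x})}), frozenset({frozenset({y})})) == sim0(x, y)
```

With singleton (respectively, iterated singleton) images the recursion bottoms out immediately, so the hyper- and superhyperembedding similarities coincide with the classical one, exactly as the reduction row states.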
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.