Non-Linear Extension of Interval Arithmetic and Exact Resolution of Interval Equations: Pseudo-Complex Numbers

Submitted: 22 January 2025 · Posted: 24 January 2025

Abstract
This paper introduces a novel extension of interval arithmetic through the formulation of pseudo-complex numbers, a mathematical framework defined over the quotient of polynomials $\mathbb{R}[h]/(h^2 - h)$. By leveraging pseudo-complex numbers, we extend traditional interval arithmetic to enhance the resolution of interval equations in analytical and computational settings. The proposed method systematically addresses the challenges of non-linear interval functions and their singularities, offering new tools for solving equations with guaranteed inclusion of solutions. Key results include the isomorphism between pseudo-complex numbers and diagonal matrices, the completeness of the pseudo-complex space, and the formulation of a generalized resolution theorem for interval equations. Applications and examples illustrate the practicality of this approach in diverse scenarios, including error propagation and constraint satisfaction in interval computations.

1. Introduction

1.1. Basic Terms and Concepts of Interval Arithmetic

In [1] and [3], Moore defined an interval number as the closed interval denoted by $[a, b]$, the set of real numbers given by
$$[a, b] = \{ x \in \mathbb{R} : a \le x \le b \}.$$
We say that an interval is degenerate if $a = b$. Such an interval contains a single real number $a$. By convention, we agree to identify a degenerate interval $[a, a]$ with the real number $a$. In this sense, we may write equations such as
$$0 = [0, 0].$$
We will denote by $K_c(\mathbb{R})$ the set of compact real intervals.
We now define the basic arithmetic operations between intervals. The key point in these definitions is that we are computing with sets. For example, when we add two intervals, the resulting interval is the set containing the sums of all pairs of numbers, one from each of the initial sets. By definition, then, the sum of two intervals $X$ and $Y$ is the set
$$X + Y = \{ x + y : x \in X,\ y \in Y \}.$$
The difference of two intervals $X$ and $Y$ is the set
$$X - Y = \{ x - y : x \in X,\ y \in Y \}.$$
The product of $X$ and $Y$ is given by
$$X \cdot Y = \{ xy : x \in X,\ y \in Y \}.$$
Finally, the quotient $X / Y$, with $0 \notin Y$, is defined as
$$X / Y = \{ x / y : x \in X,\ y \in Y \}.$$

1.2. Endpoint Formulas for the Arithmetic Operations

Let us now give an operational way to add intervals. Since $x \in X = [x_1, x_2]$ means that $x_1 \le x \le x_2$, and $y \in Y = [y_1, y_2]$ means that $y_1 \le y \le y_2$, we see by addition of inequalities that the sum $x + y \in X + Y$ must satisfy $x_1 + y_1 \le x + y \le x_2 + y_2$. Hence we obtain the formula $X + Y = [x_1 + y_1,\ x_2 + y_2]$.
Example 1.
Let $X = [0, 2]$ and $Y = [-1, 1]$. Then $X + Y = [0 - 1,\ 2 + 1] = [-1, 3]$.
Subtraction. Let $X = [x_1, x_2]$ and $Y = [y_1, y_2]$. We add the inequalities
$$x_1 \le x \le x_2 \quad \text{and} \quad -y_2 \le -y \le -y_1$$
to get $x_1 - y_2 \le x - y \le x_2 - y_1$. It follows that $X - Y = [x_1 - y_2,\ x_2 - y_1]$. Note that $X - Y = X + (-Y)$, where $-Y = [-y_2, -y_1]$.
Example 2.
Let $X = [-1, 0]$ and $Y = [1, 2]$. Then $X - Y = [-1 - 2,\ 0 - 1] = [-3, -1]$.
Multiplication. In terms of endpoints, the product $X \cdot Y$ of two intervals $X$ and $Y$ is given by
$$X \cdot Y = [\min S,\ \max S], \qquad \text{where } S = \{ x_1 y_1,\ x_1 y_2,\ x_2 y_1,\ x_2 y_2 \}.$$
Example 3.
Let $X = [-1, 0]$ and $Y = [1, 2]$. Then $S = \{-1, -2, 0\}$ and $X \cdot Y = [-2, 0]$.
Since the multiplication of intervals is given in terms of the minimum and maximum of four products of endpoints, it can be broken into nine special cases. Let $X = [x_1, x_2]$ and $Y = [y_1, y_2]$, and write $X \cdot Y = Z = [z_1, z_2]$. Then:

Case 1: $0 \le x_1$ and $0 \le y_1$: $z_1 = x_1 y_1$, $z_2 = x_2 y_2$.
Case 2: $x_1 < 0 < x_2$ and $0 \le y_1$: $z_1 = x_1 y_2$, $z_2 = x_2 y_2$.
Case 3: $x_2 \le 0$ and $0 \le y_1$: $z_1 = x_1 y_2$, $z_2 = x_2 y_1$.
Case 4: $0 \le x_1$ and $y_1 < 0 < y_2$: $z_1 = x_2 y_1$, $z_2 = x_2 y_2$.
Case 5: $x_2 \le 0$ and $y_1 < 0 < y_2$: $z_1 = x_1 y_2$, $z_2 = x_1 y_1$.
Case 6: $0 \le x_1$ and $y_2 \le 0$: $z_1 = x_2 y_1$, $z_2 = x_1 y_2$.
Case 7: $x_1 < 0 < x_2$ and $y_2 \le 0$: $z_1 = x_2 y_1$, $z_2 = x_1 y_1$.
Case 8: $x_2 \le 0$ and $y_2 \le 0$: $z_1 = x_2 y_2$, $z_2 = x_1 y_1$.
Case 9: $x_1 < 0 < x_2$ and $y_1 < 0 < y_2$: $z_1 = \min\{ x_1 y_2,\ x_2 y_1 \}$, $z_2 = \max\{ x_1 y_1,\ x_2 y_2 \}$.
Division. As with real numbers, division can be accomplished via multiplication by the reciprocal of the second operand. That is, we can implement the quotient using
$$X / Y = X \cdot \frac{1}{Y}, \qquad \text{where } \frac{1}{Y} = \left\{ y : \frac{1}{y} \in Y \right\}.$$
Again, this assumes $0 \notin Y$.
For more information about interval numbers, see [1].
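As a concrete illustration, the endpoint formulas above fit in a few lines of code. The following Python sketch (the class name and layout are our own, not taken from [1]) reproduces Examples 1–3:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):      # X + Y = [x1 + y1, x2 + y2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):      # X - Y = [x1 - y2, x2 - y1]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):      # [min S, max S] over the four endpoint products
        s = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(s), max(s))

    def __truediv__(self, other):  # X / Y = X * (1/Y), assuming 0 not in Y
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("0 in divisor interval")
        return self * Interval(1 / other.hi, 1 / other.lo)

print(Interval(0, 2) + Interval(-1, 1))   # Interval(lo=-1, hi=3)   (Example 1)
print(Interval(-1, 0) - Interval(1, 2))   # Interval(lo=-3, hi=-1)  (Example 2)
print(Interval(-1, 0) * Interval(1, 2))   # Interval(lo=-2, hi=0)   (Example 3)
```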

2. Interval Functions of Several Variables

Given an analytic function (one that admits a Taylor series converging to the function), we can extend it over square matrices, in particular over diagonal matrices (see Theorem 1.13, page 10 of [4]); we can also extend it over interval numbers. This generalizes to analytic real functions of several real variables.
Definition 1.
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function and $x_0 \in \mathbb{R}^n$ such that $B(x_0, \varepsilon) \subseteq X$ for some $\varepsilon > 0$ and $\times_{j=1}^{n} [\alpha_j, \beta_j] \subseteq B(x_0, \varepsilon)$. Define the extension of $f$ to the space of interval numbers $K_c$ as $f : K_c^n(X) \to K_c$ given by the image set $f\left( \times_{j=1}^{n} [\alpha_j, \beta_j] \right) = \left\{ f(x) : x \in \times_{j=1}^{n} [\alpha_j, \beta_j] \right\}$, and define the extension of $f$ to $D_2$, the space of $2 \times 2$ diagonal matrices, as $f : D_2^n(X) \to D_2$ given by
$$f\left( \times_{j=1}^{n} \begin{pmatrix} \alpha_j & 0 \\ 0 & \beta_j \end{pmatrix} \right) := \begin{pmatrix} f\left( \times_{j=1}^{n} \alpha_j \right) & 0 \\ 0 & f\left( \times_{j=1}^{n} \beta_j \right) \end{pmatrix},$$
where $f\left( \times_{j=1}^{n} \alpha_j \right)$ denotes $f(\alpha_1, \dots, \alpha_n)$.
Example 4.
Let $\begin{pmatrix} \alpha_1 & 0 \\ 0 & \beta_1 \end{pmatrix}, \begin{pmatrix} \alpha_2 & 0 \\ 0 & \beta_2 \end{pmatrix} \in D_2(\mathbb{R})$.
1. $\exp x = \sum_{k=0}^{\infty} \frac{1}{k!} x^k$; then
$$\exp \begin{pmatrix} \alpha_1 & 0 \\ 0 & \beta_1 \end{pmatrix} = \sum_{k=0}^{\infty} \frac{1}{k!} \begin{pmatrix} \alpha_1 & 0 \\ 0 & \beta_1 \end{pmatrix}^{k} = \begin{pmatrix} \exp(\alpha_1) & 0 \\ 0 & \exp(\beta_1) \end{pmatrix}.$$
2. $\sin(xy) := \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} x^{2k+1} y^{2k+1}$; then
$$\sin\!\left( \begin{pmatrix} \alpha_1 & 0 \\ 0 & \beta_1 \end{pmatrix} \begin{pmatrix} \alpha_2 & 0 \\ 0 & \beta_2 \end{pmatrix} \right) := \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!} \begin{pmatrix} \alpha_1 & 0 \\ 0 & \beta_1 \end{pmatrix}^{2k+1} \begin{pmatrix} \alpha_2 & 0 \\ 0 & \beta_2 \end{pmatrix}^{2k+1} = \begin{pmatrix} \sin(\alpha_1 \alpha_2) & 0 \\ 0 & \sin(\beta_1 \beta_2) \end{pmatrix}.$$
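To see the definition in action numerically, the following sketch (an illustration of ours; the helper name `exp_series` is our own) sums the truncated exponential series of a diagonal matrix and compares it with the entrywise exponential:

```python
import numpy as np
from math import factorial

def exp_series(M, terms=30):
    """Truncated Taylor series sum_k M^k / k! of a square matrix M."""
    acc = np.zeros_like(M, dtype=float)
    power = np.eye(M.shape[0])
    for k in range(terms):
        acc += power / factorial(k)
        power = power @ M
    return acc

D = np.diag([0.5, 1.5])
print(exp_series(D))                  # diag(e^0.5, e^1.5): the series acts entrywise
print(np.diag(np.exp([0.5, 1.5])))    # the same matrix, computed directly
```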
Definition 2.
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function and $Y \subseteq X$. We say that $Y$ is free of singularity if at every point of $Y$ the gradient vector has non-null components, i.e.,
$$\frac{\partial f(x)}{\partial x_j} \neq 0 \quad \text{for all } x \in Y \text{ and } j = 1, \dots, n.$$
Definition 3.
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function, let $\times_{j=1}^{n} [a_j, b_j] \subseteq X$ be free of singularity except at the vertices, and let $[a_j, b_j]$ be the interval of the $j$-th variable. Define the switch functions with respect to $f$ as $\sigma_{x_j} : D_2 \to D_2$ given by
$$\sigma_{x_j} \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \ \text{if } \frac{\partial f}{\partial x_j} > 0 \text{ on } (a, b), \qquad \sigma_{x_j} \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = \begin{pmatrix} b & 0 \\ 0 & a \end{pmatrix} \ \text{if } \frac{\partial f}{\partial x_j} < 0 \text{ on } (a, b).$$
We define $\varphi : K_c \to D_2$ by
$$\varphi([a, b]) = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \quad \text{with } a \le b,$$
and $\phi : \mathcal{F}(K_c^n; K_c) \to \mathcal{F}(D_2^n; D_2)$ by
$$\phi(f)\left( \times_{j=1}^{n} [a_j, b_j] \right) = f\left( \times_{j=1}^{n} \sigma_{x_j} \varphi([a_j, b_j]) \right) = f\left( \times_{j=1}^{n} \sigma_{x_j} \begin{pmatrix} a_j & 0 \\ 0 & b_j \end{pmatrix} \right).$$
Theorem 1.
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function and $x_0 \in \mathbb{R}^n$ such that $B(x_0, \varepsilon) \subseteq X$ for some $\varepsilon > 0$ and $\times_{j=1}^{n} [a_j, b_j] \subseteq B(x_0, \varepsilon)$ is free of singularity except at the vertices. Then
$$f\left( \times_{j=1}^{n} [a_j, b_j] \right) = \varphi^{-1}\, \phi(f)\left( \times_{j=1}^{n} [a_j, b_j] \right).$$
Proof: 
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be a $C^1$-function and $x_0 \in \mathbb{R}^n$ such that $B(x_0, \varepsilon) \subseteq X$ for some $\varepsilon > 0$ and $R = \times_{j=1}^{n} [a_j, b_j] \subseteq B(x_0, \varepsilon)$ is free of singularity except at the vertices. Then
$$\phi(f)\left( \times_{j=1}^{n} [a_j, b_j] \right) = f\left( \times_{j=1}^{n} \begin{pmatrix} x_j & 0 \\ 0 & y_j \end{pmatrix} \right),$$
where $\begin{pmatrix} x_j & 0 \\ 0 & y_j \end{pmatrix}$ is the result of applying the switch to $\begin{pmatrix} a_j & 0 \\ 0 & b_j \end{pmatrix}$. Then we have
$$f\left( \times_{j=1}^{n} \begin{pmatrix} x_j & 0 \\ 0 & y_j \end{pmatrix} \right) = \begin{pmatrix} f\left( \times_{j=1}^{n} x_j \right) & 0 \\ 0 & f\left( \times_{j=1}^{n} y_j \right) \end{pmatrix}.$$
Applying $\varphi^{-1}$, we obtain the interval
$$\left[ f\left( \times_{j=1}^{n} x_j \right),\ f\left( \times_{j=1}^{n} y_j \right) \right].$$
We now prove that this interval is the image of $f$ on $R = \times_{j=1}^{n} [a_j, b_j]$. First, observe that both $f\left( \times_{j=1}^{n} x_j \right)$ and $f\left( \times_{j=1}^{n} y_j \right)$ are elements of $f(R)$. Since $R$ is connected and compact and $f$ is continuous, the interval $\left[ f\left( \times x_j \right), f\left( \times y_j \right) \right]$ is a subset of $f(R)$. We now prove that $f(R) \subseteq \left[ f\left( \times x_j \right), f\left( \times y_j \right) \right]$. For this, it suffices to show that $f\left( \times y_j \right)$ and $f\left( \times x_j \right)$ are the maximum and minimum values of $f$ on $R$, respectively.
Consider $f_{\lambda_k} : [a_k, b_k] \to \mathbb{R}$ given by $f_{\lambda_k}(x) = f(\lambda_1, \dots, \lambda_{k-1}, x, \lambda_{k+1}, \dots, \lambda_n)$, where $\lambda_k \in \times_{j=1, j \neq k}^{n} [a_j, b_j]$. As $R$ is free of singularity except at the vertices, the partial derivative with respect to each variable does not change sign except at the vertices, where it may only vanish. Thus all the functions $f_{\lambda_k}$ have the same monotonicity on $R$, for every $\lambda_k$.
For fixed $k$, define $x_k$ to be $a_k$ if the derivative of $f_{\lambda_k}$ is positive and $b_k$ if it is negative. Similarly, define $y_k$ to be $b_k$ if the derivative of $f_{\lambda_k}$ is positive and $a_k$ if it is negative. Observe that $x_k$ and $y_k$ are the minimum and maximum points of $f_{\lambda_k}$; moreover, since $R$ is free of singularity, they are minimum and maximum points for every $\lambda_k \in \times_{j=1, j \neq k}^{n} [a_j, b_j]$. In particular, taking $\lambda_k = (x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_n)$ and using the coordinatewise monotonicity of $f$ on $R$, we have
$$f(x_1, \dots, x_{k-1}, x_k, x_{k+1}, \dots, x_n) \le f(x) \le f(y_1, \dots, y_{k-1}, y_k, y_{k+1}, \dots, y_n) \quad \text{for } x \in \times_{j=1}^{n} [a_j, b_j].$$
Then $f(x_1, \dots, x_n)$ and $f(y_1, \dots, y_n)$ are the minimum and maximum values of $f$ on $R$. Therefore,
$$\left[ f\left( \times_{j=1}^{n} x_j \right),\ f\left( \times_{j=1}^{n} y_j \right) \right] = f(R).$$
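The theorem reduces the computation of an exact range over a singularity-free box to two corner evaluations selected by the switches. A minimal numerical sketch (the test function and all names are our own choices, assuming the sign pattern of the partial derivatives is known on the box):

```python
import math
import numpy as np

def range_by_switches(f, signs, box):
    """Exact image [min, max] of f over a box free of singularity: each partial
    derivative keeps one sign there, so the minimum (maximum) of f is attained
    at the corner selected by the switch sigma_{x_j}."""
    lo_corner, hi_corner = [], []
    for (a, b), s in zip(box, signs):
        lo_corner.append(a if s > 0 else b)   # sigma keeps the order if df/dx_j > 0
        hi_corner.append(b if s > 0 else a)   # ...and swaps the endpoints otherwise
    return f(*lo_corner), f(*hi_corner)

# df/dx = e^{-y} > 0 and df/dy = -x e^{-y} < 0 on [1, 2] x [0, 1]
f = lambda x, y: x * math.exp(-y)
print(range_by_switches(f, [+1, -1], [(1, 2), (0, 1)]))   # (1/e, 2.0)

# brute-force check on a grid
xs, ys = np.meshgrid(np.linspace(1, 2, 300), np.linspace(0, 1, 300))
vals = xs * np.exp(-ys)
print(vals.min(), vals.max())                             # approx 0.3679, 2.0
```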
Corollary 1.
Under the same hypotheses as the theorem above, let $R = \bigcup_{j=1}^{m} R_j$, where each $R_j$ is free of singularity except at the vertices. Then
$$f(R) = \bigcup_{j=1}^{m} \varphi^{-1}\, \phi(f)(R_j).$$
Proof:
Indeed, $f(R) = f\left( \bigcup_{j=1}^{m} R_j \right) = \bigcup_{j=1}^{m} f(R_j) = \bigcup_{j=1}^{m} \varphi^{-1}\, \phi(f)(R_j)$.
Next, we will analyze some properties of the map $\varphi$.
Corollary 2.
Let $a, b, c, d \in \mathbb{R}$ with $a \le b$ and $c \le d$. We have:
1. $\varphi([a, b] + [c, d]) = \varphi([a, b]) + \varphi([c, d])$.
2. $\varphi(k[a, b]) = k\,\varphi([a, b])$ if $k \ge 0$, and $\varphi(k[a, b]) = k\,\varphi([b, a])$ if $k < 0$.
3. $\varphi([a, b] \cdot [c, d]) =$
(a) $\varphi([a, b])\,\varphi([c, d])$ if and only if $(a, b), (c, d) > 0$ or $(a, b), (c, d) < 0$;
(b) $\varphi([b, a])\,\varphi([c, d])$ if and only if $(a, b) > 0$ and $(c, d) < 0$.
4. $\varphi\left( [a, b] / [c, d] \right) = \sigma_x([a, b]) / \sigma_y([c, d])$, with
(a) $\sigma_x([a, b]) = \varphi([a, b])$ if $(c, d) > 0$ and $\sigma_x([a, b]) = \varphi([b, a])$ if $(c, d) < 0$;
(b) $\sigma_y([c, d]) = \varphi([d, c])$ if $(a, b) > 0$ and $\sigma_y([c, d]) = \varphi([c, d])$ if $(a, b) < 0$.
Proof: 
  • Let $f : \mathbb{R}^2 \to \mathbb{R}$ be given by $f(x, y) = x + y$. We have
    $$\frac{\partial f}{\partial x} = 1 > 0 \quad \text{and} \quad \frac{\partial f}{\partial y} = 1 > 0,$$
    then
    $$\varphi(f([a, b], [c, d])) = \varphi([a, b] + [c, d]) = \phi(f)([a, b], [c, d]) = f(\varphi([a, b]), \varphi([c, d])) = \varphi([a, b]) + \varphi([c, d]).$$
  • Let $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x) = kx$. We have
    $$\frac{\partial f}{\partial x} = k,$$
    then
    $$\varphi(f([a, b])) = \varphi(k[a, b]) = \phi(f)([a, b]) = k\,\varphi([a, b]) \ \text{if } k > 0, \quad \text{and} \quad \phi(f)([a, b]) = k\,\varphi([b, a]) \ \text{if } k < 0.$$
  • Let $f : \mathbb{R}^2 \to \mathbb{R}$ be given by $f(x, y) = xy$. We have
    $$\frac{\partial f}{\partial x} = y \quad \text{and} \quad \frac{\partial f}{\partial y} = x.$$
    From the derivatives above, all intervals that do not contain zero in their interior are free of singularities. So let $X, Y \in K_c$ be such that $0 \notin \operatorname{int}(X)$ and $0 \notin \operatorname{int}(Y)$; then
    $$\varphi(f([a, b], [c, d])) = \phi(f)([a, b], [c, d]) = \sigma_x \varphi([a, b]) \cdot \sigma_y \varphi([c, d]).$$
    Then $\sigma_x \varphi([a, b]) = \varphi([b, a])$ if and only if $(c, d) < 0$, and $\sigma_y \varphi([c, d]) = \varphi([d, c])$ if and only if $(a, b) < 0$.
  • Let $f : \mathbb{R} \times (\mathbb{R} \setminus 0) \to \mathbb{R}$ be given by $f(x, y) = \frac{x}{y}$. We have
    $$\frac{\partial f}{\partial x} = \frac{1}{y} \quad \text{and} \quad \frac{\partial f}{\partial y} = -\frac{x}{y^2}.$$
    From the derivatives above, all intervals that do not contain zero in their interior are free of singularities. So let $[a, b], [c, d] \in K_c$ be such that $0 \notin (c, d)$; then
    $$\varphi(f([a, b], [c, d])) = \phi(f)([a, b], [c, d]) = f(\sigma_x \varphi([a, b]),\ \sigma_y \varphi([c, d])) = \sigma_x \varphi([a, b]) / \sigma_y \varphi([c, d]).$$
    Then $\sigma_x \varphi([a, b]) = \varphi([b, a])$ if and only if $(c, d) < 0$, and $\sigma_y \varphi([c, d]) = \varphi([d, c])$ if and only if $(a, b) > 0$.

3. Pseudo-Complex Numbers

Definition 4.
We define the ring of pseudo-complex numbers $S_c(\mathbb{R})$ as the quotient of polynomials
$$\mathbb{R}[h] / (h^2 - h).$$
Each element of $S_c(\mathbb{R})$ can be represented in the form $a + bh$, where $a, b \in \mathbb{R}$ and $h^2 = h$.
Addition in $S_c(\mathbb{R})$ is defined component-wise:
$$(a_1 + b_1 h) + (a_2 + b_2 h) = (a_1 + a_2) + (b_1 + b_2) h.$$
Multiplication is defined using the relation $h^2 = h$:
$$(a_1 + b_1 h) \cdot (a_2 + b_2 h) = a_1 a_2 + (a_1 b_2 + b_1 a_2 + b_1 b_2) h.$$
The ring $S_c(\mathbb{R})$ is commutative, since both addition and multiplication are commutative operations. Additionally, $S_c(\mathbb{R})$ has a multiplicative identity, the element $1 + 0h$.
Consider two elements $x = 1 + 2h$ and $y = 3 + 4h$ in $S_c(\mathbb{R})$. Their sum is
$$x + y = (1 + 2h) + (3 + 4h) = 4 + 6h,$$
and their product is
$$x \cdot y = (1 + 2h)(3 + 4h) = 1 \cdot 3 + (1 \cdot 4 + 2 \cdot 3 + 2 \cdot 4) h = 3 + (4 + 6 + 8) h = 3 + 18h.$$
The multiplicative inverse of $a + bh$ is
$$\frac{1}{a} - \frac{b}{a(a + b)}\, h,$$
with $a \neq 0$ and $a + b \neq 0$. Indeed,
$$(a + bh)\left( \frac{1}{a} - \frac{b}{a(a + b)}\, h \right) = a \cdot \frac{1}{a} - a \cdot \frac{b}{a(a + b)}\, h + bh \cdot \frac{1}{a} - bh \cdot \frac{b}{a(a + b)}\, h = 1 + \frac{-ab + b(a + b) - b^2}{a(a + b)}\, h = 1 + \frac{-ab + ab + b^2 - b^2}{a(a + b)}\, h = 1 + 0h = 1.$$
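A small computational sketch of this arithmetic (our own illustration, not part of the paper's formal development) reproduces the sum, product, and inverse worked out above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PseudoComplex:
    """Element a + b*h of R[h]/(h^2 - h), so h*h = h."""
    a: float
    b: float

    def __add__(self, o):
        return PseudoComplex(self.a + o.a, self.b + o.b)

    def __mul__(self, o):   # (a1 + b1 h)(a2 + b2 h) = a1 a2 + (a1 b2 + b1 a2 + b1 b2) h
        return PseudoComplex(self.a * o.a,
                             self.a * o.b + self.b * o.a + self.b * o.b)

    def inverse(self):      # requires a != 0 and a + b != 0
        return PseudoComplex(1 / self.a, -self.b / (self.a * (self.a + self.b)))

x, y = PseudoComplex(1, 2), PseudoComplex(3, 4)
print(x + y)               # 4 + 6h
print(x * y)               # 3 + 18h
print(x * x.inverse())     # 1 + 0h, the multiplicative identity
```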
Proposition 1.
Let $D_2$ be the space of $2 \times 2$ diagonal matrices. Then $S_c(\mathbb{R})$ and $D_2$ are isomorphic as rings.
Proof: 
Define a map $\varphi : S_c(\mathbb{R}) \to D_2$ by
$$\varphi(a + bh) = \begin{pmatrix} a & 0 \\ 0 & a + b \end{pmatrix}.$$
We need to show that $\varphi$ is a ring homomorphism, i.e., that $\varphi$ preserves both addition and multiplication.
1. Addition:
$$\varphi((a_1 + b_1 h) + (a_2 + b_2 h)) = \varphi((a_1 + a_2) + (b_1 + b_2) h) = \begin{pmatrix} a_1 + a_2 & 0 \\ 0 & a_1 + a_2 + b_1 + b_2 \end{pmatrix},$$
$$\varphi(a_1 + b_1 h) + \varphi(a_2 + b_2 h) = \begin{pmatrix} a_1 & 0 \\ 0 & a_1 + b_1 \end{pmatrix} + \begin{pmatrix} a_2 & 0 \\ 0 & a_2 + b_2 \end{pmatrix} = \begin{pmatrix} a_1 + a_2 & 0 \\ 0 & a_1 + a_2 + b_1 + b_2 \end{pmatrix}.$$
Thus, $\varphi$ preserves addition.
2. Multiplication:
$$\varphi((a_1 + b_1 h) \cdot (a_2 + b_2 h)) = \varphi(a_1 a_2 + (a_1 b_2 + b_1 a_2 + b_1 b_2) h) = \begin{pmatrix} a_1 a_2 & 0 \\ 0 & a_1 a_2 + a_1 b_2 + b_1 a_2 + b_1 b_2 \end{pmatrix},$$
$$\varphi(a_1 + b_1 h) \cdot \varphi(a_2 + b_2 h) = \begin{pmatrix} a_1 & 0 \\ 0 & a_1 + b_1 \end{pmatrix} \begin{pmatrix} a_2 & 0 \\ 0 & a_2 + b_2 \end{pmatrix} = \begin{pmatrix} a_1 a_2 & 0 \\ 0 & (a_1 + b_1)(a_2 + b_2) \end{pmatrix} = \begin{pmatrix} a_1 a_2 & 0 \\ 0 & a_1 a_2 + a_1 b_2 + b_1 a_2 + b_1 b_2 \end{pmatrix}.$$
Thus, $\varphi$ preserves multiplication.
Since $\varphi$ preserves both operations, it is a ring homomorphism; it is easy to see that $\varphi$ is bijective, so $\varphi$ is an isomorphism. Hence $S_c(\mathbb{R})$ and $D_2$ are isomorphic as rings.
We can use the following decomposition of a diagonal matrix to recover the pseudo-complex form:
$$\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = a \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + (b - a) \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \cong a + (b - a) h.$$
Next we prove that the space of pseudo-complex numbers is a complete metric space, that is, that every Cauchy sequence converges (see page 83 of [5]).
Proposition 2.
$S_c(\mathbb{R})$ is a complete metric space.
Proof: Consider the metric on $S_c(\mathbb{R})$ defined by
$$d(x, y) = \sqrt{(a_1 - a_2)^2 + (b_1 - b_2)^2},$$
where $x = a_1 + b_1 h$ and $y = a_2 + b_2 h$ are elements of $S_c(\mathbb{R})$.
Given a Cauchy sequence $\{x_n\}$ in $S_c(\mathbb{R})$, with $x_n = a_n + b_n h$, we have
$$d(x_m, x_n) = \sqrt{(a_m - a_n)^2 + (b_m - b_n)^2} < \epsilon \quad \text{for } m, n \ge N.$$
This implies that the sequences $\{a_n\}$ and $\{b_n\}$ in $\mathbb{R}$ are Cauchy. Since $\mathbb{R}$ is complete, there exist $a, b \in \mathbb{R}$ such that $a_n \to a$ and $b_n \to b$. Set $x = a + bh \in S_c(\mathbb{R})$. For any $\epsilon > 0$ there exists $N$ such that for all $n \ge N$,
$$|a_n - a| < \frac{\epsilon}{\sqrt{2}} \quad \text{and} \quad |b_n - b| < \frac{\epsilon}{\sqrt{2}}.$$
Then
$$d(x_n, x) = \sqrt{(a_n - a)^2 + (b_n - b)^2} < \epsilon.$$
This confirms that $\{x_n\}$ converges in $S_c(\mathbb{R})$, proving that $S_c(\mathbb{R})$ is a complete metric space.
Proposition 3.
Let $f : \mathbb{R} \to \mathbb{R}$ be an analytic function. Then $f(a + bh) = f(a) + (f(a + b) - f(a)) h$.
Proof: 
Consider the norm on $S_c(\mathbb{R})$ given by
$$\| a + bh \| = \sqrt{a^2 + b^2}.$$
This norm induces the metric on $S_c(\mathbb{R})$ defined by
$$d(x, y) = \| x - y \| = \sqrt{(a_1 - a_2)^2 + (b_1 - b_2)^2},$$
and $S_c(\mathbb{R})$ is a complete metric space with this metric.
Claim: The sequence
$$S_k = f(a) + f'(a)\, b\, h + \frac{f''(a)}{2!}\, b^2 h + \frac{f'''(a)}{3!}\, b^3 h + \dots + \frac{f^{(k)}(a)}{k!}\, b^k h$$
is a Cauchy sequence in $S_c(\mathbb{R})$.
Proof of Claim: To prove that $\{S_k\}$ is a Cauchy sequence, we must show that for any $\epsilon > 0$ there exists an integer $N$ such that for all $m, n \ge N$, $\| S_m - S_n \| < \epsilon$. Consider $S_m$ and $S_n$ with $m > n$:
$$S_m = f(a) + \sum_{k=1}^{m} \frac{f^{(k)}(a)}{k!}\, b^k h, \qquad S_n = f(a) + \sum_{k=1}^{n} \frac{f^{(k)}(a)}{k!}\, b^k h.$$
Then
$$S_m - S_n = \sum_{k=n+1}^{m} \frac{f^{(k)}(a)}{k!}\, b^k h,$$
and hence
$$\| S_m - S_n \| = \left\| \sum_{k=n+1}^{m} \frac{f^{(k)}(a)}{k!}\, b^k h \right\| \le \sum_{k=n+1}^{m} \left\| \frac{f^{(k)}(a)}{k!}\, b^k h \right\| = \sum_{k=n+1}^{m} \left| \frac{f^{(k)}(a)}{k!}\, b^k \right|.$$
Since the Taylor series of $f$ around $a$ converges, for any $\epsilon > 0$ there exists an integer $N$ such that for all $m > n \ge N$,
$$\| S_m - S_n \| \le \sum_{k=n+1}^{m} \left| \frac{f^{(k)}(a)}{k!}\, b^k \right| < \epsilon. \quad \text{(Claim)}$$
Since the sequence is Cauchy and $S_c(\mathbb{R})$ is complete, the series converges. We now show that
$$f(a + bh) = f(a) + (f(a + b) - f(a)) h.$$
Consider the Taylor series expansion of $f$ around $a$:
$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \dots$$
For $x = a + bh$ we have
$$f(a + bh) = f(a) + f'(a)(a + bh - a) + \frac{f''(a)}{2!}(a + bh - a)^2 + \dots = f(a) + f'(a)\, bh + \frac{f''(a)}{2!}(bh)^2 + \dots$$
Since $h^2 = h$, we get $(bh)^2 = b^2 h^2 = b^2 h$, and in general $(bh)^n = b^n h$. Thus
$$f(a + bh) = f(a) + f'(a)\, b\, h + \frac{f''(a)}{2!}\, b^2 h + \dots$$
Factoring out $h$:
$$f(a + bh) = f(a) + h \left( f'(a)\, b + \frac{f''(a)}{2!}\, b^2 + \dots \right).$$
The expression inside the parentheses can be recognized from the Taylor series of $f$ evaluated at $a + b$:
$$f(a + b) = f(a) + f'(a)\, b + \frac{f''(a)}{2!}\, b^2 + \dots$$
Subtracting $f(a)$ from both sides:
$$f(a + b) - f(a) = f'(a)\, b + \frac{f''(a)}{2!}\, b^2 + \dots$$
Thus
$$f(a + bh) = f(a) + (f(a + b) - f(a)) h,$$
which completes the proof.
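Proposition 3 can be checked numerically: the truncated pseudo-complex Taylor series and the closed form $f(a) + (f(a+b) - f(a))h$ should agree. A sketch for $f = \exp$ (the helper names are our own; `deriv(k)` stands in for the $k$-th derivative at $a$, which for $\exp$ is always $e^a$):

```python
import math

def f_pseudo(f, a, b):
    """Proposition 3: for analytic f, f(a + b*h) = f(a) + (f(a+b) - f(a)) * h.
    Returns the pair (real part, coefficient of h)."""
    return f(a), f(a + b) - f(a)

def f_pseudo_series(deriv, a, b, terms=25):
    """Same value from the truncated series f(a) + h * sum_k f^(k)(a) b^k / k!."""
    coeff = sum(deriv(k) * b**k / math.factorial(k) for k in range(1, terms))
    return deriv(0), coeff

a, b = 0.3, 0.9
print(f_pseudo(math.exp, a, b))                      # (e^0.3, e^1.2 - e^0.3)
print(f_pseudo_series(lambda k: math.exp(a), a, b))  # the same, up to truncation
```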
Theorem 2
(Pseudo-complex version). Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function and $x_0 \in \mathbb{R}^n$ such that $B(x_0, \varepsilon) \subseteq X$ for some $\varepsilon > 0$ and $\times_{j=1}^{n} [a_j, b_j] \subseteq B(x_0, \varepsilon)$ is free of singularity except at the vertices. We define $\varphi : K_c \to S_c(\mathbb{R})$ by
$$\varphi([a, b]) = a + (b - a) h \quad \text{with } a \le b.$$
Let $\overline{a + bh} = (a + b) - bh$ (the conjugate swaps the diagonal entries of the associated matrix). Define the switch functions with respect to $f$ as $\sigma_{x_j} : S_c(\mathbb{R}) \to S_c(\mathbb{R})$ given by $\sigma_{x_j}(a + (b - a) h) = a + (b - a) h$ if $\frac{\partial f}{\partial x_j} > 0$ on $(a, b)$, and $\sigma_{x_j}(a + (b - a) h) = \overline{a + (b - a) h} = b + (a - b) h$ if $\frac{\partial f}{\partial x_j} < 0$ on $(a, b)$; and define $\phi : \mathcal{F}(K_c^n(X); K_c(X)) \to \mathcal{F}(S_c(X)^n; S_c(X))$ by
$$\phi(f)\left( \times_{j=1}^{n} [a_j, b_j] \right) = f\left( \times_{j=1}^{n} \sigma_{x_j} \varphi([a_j, b_j]) \right) = f\left( \times_{j=1}^{n} \sigma_{x_j}(a_j + (b_j - a_j) h) \right).$$
Then
$$f\left( \times_{j=1}^{n} [a_j, b_j] \right) = \varphi^{-1}\, \phi(f)\left( \times_{j=1}^{n} [a_j, b_j] \right).$$
Example 5.
Let $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x) = x(1 - x)$ and let $X = [0, 1]$. The subintervals $\left[ 0, \frac{1}{2} \right]$ and $\left[ \frac{1}{2}, 1 \right]$ are free of singularity, the derivative being positive on the first and negative on the second. Using the theorem above, $\left[ 0, \frac{1}{2} \right] \mapsto 0 + \frac{1}{2} h$ and $\left[ \frac{1}{2}, 1 \right] \mapsto 1 - \frac{1}{2} h$. So
$$f\left( \tfrac{1}{2} h \right) = \tfrac{1}{2} h \left( 1 - \tfrac{1}{2} h \right) = \tfrac{1}{2} h - \tfrac{1}{4} h^2 = \tfrac{1}{2} h - \tfrac{1}{4} h = \tfrac{1}{4} h \longmapsto \left[ 0, \tfrac{1}{4} \right],$$
$$f\left( 1 - \tfrac{1}{2} h \right) = \left( 1 - \tfrac{1}{2} h \right)\left( 1 - 1 + \tfrac{1}{2} h \right) = \tfrac{1}{2} h - \tfrac{1}{4} h^2 = \tfrac{1}{2} h - \tfrac{1}{4} h = \tfrac{1}{4} h \longmapsto \left[ 0, \tfrac{1}{4} \right].$$
Therefore $f([0, 1]) = \left[ 0, \frac{1}{4} \right]$.
Example 6.
Let $f : \mathbb{R}^2 \to \mathbb{R}$ be given by $f(x, y) = x^2 - x y^2$, and let $X = [6, 10]$, $Y = [0, 1]$. The partial derivative with respect to $x$ vanishes on the curve $2x = y^2$, and the partial derivative with respect to $y$, namely $-2xy$, vanishes only where $xy = 0$. On the one hand, $2X \cap Y^2 = [12, 20] \cap [0, 1] = \emptyset$, and $xy \neq 0$ on $(6, 10) \times (0, 1)$, so $X \times Y$ is free of singularities except on its boundary. We have $\frac{\partial f}{\partial x} > 0$ and $\frac{\partial f}{\partial y} < 0$ on the interior of $X \times Y$, so $[6, 10] \mapsto 6 + 4h$ and $[0, 1] \mapsto \overline{0 + h} = 1 - h$. So
$$f(6 + 4h,\ 1 - h) = (6 + 4h)^2 - (6 + 4h)(1 - h)^2 = 36 + 64h - (6 - 6h) = 30 + 70h.$$
Therefore $f([6, 10], [0, 1]) = [30, 100]$.
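Both examples can be cross-checked by brute-force sampling of $f$ over the boxes; a short sketch (the grid sizes are arbitrary choices of ours):

```python
import numpy as np

# Example 5: f(x) = x(1 - x) on [0, 1]; the pseudo-complex evaluation gave [0, 1/4]
x = np.linspace(0, 1, 10_001)
print((x * (1 - x)).min(), (x * (1 - x)).max())      # 0.0  0.25

# Example 6: f(x, y) = x^2 - x*y^2 on [6, 10] x [0, 1]; the evaluation gave [30, 100]
X, Y = np.meshgrid(np.linspace(6, 10, 401), np.linspace(0, 1, 401))
F = X**2 - X * Y**2
print(F.min(), F.max())                              # 30.0  100.0
```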

3.1. Singular Subsets of $S_c$

As we saw, a pseudo-complex number $a + bh$ is invertible if and only if $a \neq 0$ and $a + b \neq 0$. Let us consider the following subsets of $S_c$:
$$S_c^{*} = \{ ah : a \in \mathbb{R} \}, \qquad S_c^{**} = \{ a - ah : a \in \mathbb{R} \}.$$
Proposition 4.
The subsets $S_c^{*}$ and $S_c^{**}$ are fields under the same operations as $S_c$, but with multiplicative neutral elements different from that of the ring $S_c$.
Proof: Trivially, $S_c^{*}$ and $S_c^{**}$ are additive subgroups of the additive group of $S_c$. Let us show the multiplicative part.
$S_c^{*}$ trivially fulfills closure, commutativity, and associativity; we show the existence of the neutral element and of inverses. Let $ah \in S_c^{*}$.
  • The multiplicative neutral element is $h$. Indeed, $ah \cdot h = a h^2 = ah$.
  • Let $a \neq 0$; then $(ah)^{-1} = \frac{1}{a} h$. Indeed, $ah \cdot \frac{1}{a} h = h$.
Now let $a, b, c \in \mathbb{R}$. Then:
  • Closure: $(a - ah)(b - bh) = ab - abh - abh + abh = ab - abh$. From here we also see that the operation is commutative.
  • Associativity: $((a - ah)(b - bh))(c - ch) = (ab - abh)(c - ch) = abc - abch = (a - ah)((b - bh)(c - ch))$.
  • The multiplicative neutral element is $1 - h$. Indeed, $(a - ah)(1 - h) = a - ah$.
  • Let $a \neq 0$; then $(a - ah)^{-1} = \frac{1}{a} - \frac{1}{a} h$. Indeed, $(a - ah)\left( \frac{1}{a} - \frac{1}{a} h \right) = 1 - h$.
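A quick, self-contained check of the two neutral elements (an illustrative sketch of ours, not part of the proof; `PC` is a minimal stand-in for the pseudo-complex multiplication defined in Section 3):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PC:          # minimal pseudo-complex a + b*h, with h*h = h
    a: float
    b: float
    def __mul__(self, o):
        return PC(self.a * o.a, self.a * o.b + self.b * o.a + self.b * o.b)

h, one_minus_h = PC(0, 1), PC(1, -1)
print(PC(0, 5) * h)               # PC(a=0, b=5): elements a*h are fixed by h
print(PC(3, -3) * one_minus_h)    # PC(a=3, b=-3): elements a - a*h are fixed by 1 - h
print(PC(0, 5) * PC(0, 0.2))      # PC(a=0, b=1.0): (5h)^{-1} = (1/5)h in S_c*
```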

4. Resolution of Interval Equations

Suppose we have an interval equation, for example a linear equation $AX + B = C$ where all the components are intervals. What should the procedure be to solve this equation, assuming a solution exists? We could, for example, consider the equation $ax + b = c$, where the values are taken from the corresponding intervals (that is, $a \in A$, and so on), solve for $x$, and then determine the image of the resulting square region using the fundamental theorem. However, what we obtain this way is only a region that contains the solution of the equation.
We will therefore give a theorem that provides a procedure to determine the solution of an interval equation. This solution does not always exist, since the matrix obtained as a solution of the matrix equation associated with the interval equation does not always have its first entry less than or equal to its last entry.
Theorem 3.
Let $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ be an analytic function, let $\Omega \times \times_{j=2}^{n} X_j \subseteq X$ be free of singularity except at the vertices, where $X_j = [a_j, b_j]$ with $a_j \le b_j$ for $j \ge 2$ and $\frac{\partial f}{\partial x_1} \neq 0$, and let $X_0 = [a_0, b_0] \subseteq f\left( \Omega \times \times_{j=2}^{n} X_j \right)$ be a compact non-degenerate interval. Suppose there exists a function $g : X_0 \times \times_{j=2}^{n} X_j \to \Omega$ such that $f\left( g(x_0, x_2, \dots, x_n), x_2, \dots, x_n \right) = x_0$. Then the equation $f\left( X, X_2, \dots, X_n \right) = X_0$ has a solution $X \subseteq \Omega$ if and only if
$$\sum_{j=2}^{n} \left| \frac{\partial f}{\partial x_j} \right| \delta_j \le \delta_0, \qquad \text{where } \delta_j = b_j - a_j,$$
and furthermore the solution is determined by $\varphi(X) = \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$.
Proof:
Claim 1: The equation $f(X, X_2, \dots, X_n) = X_0$ has a solution in $X$ if and only if $\varphi(X) = \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ is a matrix whose first entry is less than or equal to its last entry.
Consider the following interval equation in $X$:
$$f\left( X, \times_{j=2}^{n} X_j \right) = X_0. \tag{$\ast$}$$
Suppose a solution for $X$ exists; this means there is an interval of the form $[a, b]$ with $a \le b$ that satisfies $(\ast)$. Since $\Omega \times \times_{j=2}^{n} X_j \subseteq X$ is free of singularity,
$$f\left( X, \times_{j=2}^{n} X_j \right) = \varphi^{-1} f\left( \sigma_{x}^{f} \varphi(X),\ \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right).$$
Then $\varphi^{-1} f\left( \sigma_{x}^{f} \varphi(X), \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right) = X_0$, or equivalently
$$f\left( \sigma_{x}^{f} \varphi(X),\ \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right) = \varphi(X_0).$$
On the other hand, by hypothesis there is a function $g : X_0 \times \times_{j=2}^{n} X_j \to \Omega$ such that for every $x_0 \in X_0$ there exists $(x_2, \dots, x_n) \in \times_{j=2}^{n} X_j$ with $f(g(x_0, x_2, \dots, x_n), x_2, \dots, x_n) = x_0$. This means we can solve each scalar equation for the unknown, thus forming the matrix equation
$$\sigma_{x}^{f} \varphi(X) = g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right).$$
Since $\sigma_{x}^{f} \circ \sigma_{x}^{f} = \mathrm{id}$, we have
$$\varphi(X) = \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right).$$
As $\varphi(X) = \begin{pmatrix} a_1 & 0 \\ 0 & b_1 \end{pmatrix}$ with $a_1 \le b_1$, the matrix $\sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ has its first entry less than or equal to its last entry.
Conversely, if $\sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ is a matrix whose first entry is less than or equal to its last entry, then $X = \varphi^{-1} \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ is a solution of $(\ast)$. (Claim 1)
Claim 2: $\varphi(X) = \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ is a matrix whose first entry is less than or equal to its last entry if and only if $\sum_{j=2}^{n} \left| \frac{\partial f}{\partial x_j} \right| (b_j - a_j) \le b_0 - a_0$.
We can write $g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ as $\begin{pmatrix} g(a_0, \gamma_1) & 0 \\ 0 & g(b_0, \gamma_2) \end{pmatrix}$ with $(a_0, \gamma_1), (b_0, \gamma_2) \in \mathbb{R} \times \mathbb{R}^{n-1}$ and $a_0 < b_0$, since $X_0$ is a compact non-degenerate interval. Let $v$ be the direction vector from $(a_0, \gamma_1)$ to $(b_0, \gamma_2)$. The orientation of the $j$-th component of $\times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j)$ depends on the sign of $\frac{\partial f(x)}{\partial x_j}$ for $j \ge 2$, so we can write the components of $v$ as
$$\xi_1 = \delta_0, \qquad \xi_j = \operatorname{Sgn}\!\left( \frac{\partial f(x)}{\partial x_j} \right) \delta_j \quad \text{for } j \ge 2.$$
Thus, for $\varphi(X) = \sigma_{x}^{f} \begin{pmatrix} g(a_0, \gamma_1) & 0 \\ 0 & g(b_0, \gamma_2) \end{pmatrix}$ to define a matrix with the first entry less than or equal to the last, it is necessary and sufficient that $g$ be increasing from $(a_0, \gamma_1)$ to $(b_0, \gamma_2)$ when $\frac{\partial f}{\partial x_1}$ is positive, and decreasing from $(a_0, \gamma_1)$ to $(b_0, \gamma_2)$ when $\frac{\partial f}{\partial x_1}$ is negative. The domain of $g$ is free of singularity. Indeed, from
$$f\left( g\left( x_0, \times_{j=2}^{n} x_j \right), \times_{j=2}^{n} x_j \right) = x_0,$$
the partial derivatives of $g$ are:
  • $\frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_0} = 1$, so $\frac{\partial g}{\partial x_0} = \left( \frac{\partial f}{\partial x_1} \right)^{-1} \neq 0$;
  • $\frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_j} + \frac{\partial f}{\partial x_j} = 0$, so $\frac{\partial g}{\partial x_j} = -\left( \frac{\partial f}{\partial x_1} \right)^{-1} \frac{\partial f}{\partial x_j} \neq 0$.
So $g$ is monotonic in every direction within $X_0 \times \times_{j=2}^{n} X_j$; in particular, $g$ from $(a_0, \gamma_1)$ to $(b_0, \gamma_2)$ must be monotonic, and we can represent the condition above as
$$\frac{\partial f}{\partial x_1}\, \nabla g(x) \cdot v \ge 0 \quad \text{for all } x \in X_0 \times \times_{j=2}^{n} X_j.$$
Expanding,
$$0 \le \frac{\partial f}{\partial x_1}\, \nabla g \cdot v = \frac{\partial f}{\partial x_1} \sum_{j=1}^{n} \xi_j \frac{\partial g}{\partial u_j} = \xi_1 - \sum_{j=2}^{n} \xi_j \frac{\partial f}{\partial x_j} = \delta_0 - \sum_{j=2}^{n} \delta_j \left| \frac{\partial f}{\partial x_j} \right|,$$
where $u_1 = x_0$ and $u_j = x_j$ for $j \ge 2$. Then we have
$$\sum_{j=2}^{n} \left| \frac{\partial f}{\partial x_j} \right| \delta_j \le \delta_0.$$
Therefore, $\varphi(X) = \sigma_{x}^{f}\, g\left( \varphi(X_0) \times \times_{j=2}^{n} \sigma_{x_j}^{f} \varphi(X_j) \right)$ is a matrix with the first entry less than or equal to the last if and only if
$$\sum_{j=2}^{n} \left| \frac{\partial f}{\partial x_j} \right| (b_j - a_j) \le b_0 - a_0. \quad \text{(Claim 2)}$$
Given error values $\delta_j$, the propagation of these errors, $\delta f$, can be calculated using the formula $\delta f = \sum_{j=1}^{n} \left| \frac{\partial f}{\partial x_j} \right| \delta_j$. The theorem above tells us that if the error propagated by the known variables does not exceed the error of the right-hand side, then the interval equation has a solution.
Corollary 3.
Let $f : \Omega \subseteq \mathbb{R} \to \mathbb{R}$ be an analytic function, with $\Omega$ free of singularity, and let $g : f(\Omega) \to \Omega$ be such that $f(g(y)) = y$. The equation $f(X) = X_0$ has a solution if and only if $X_0 \subseteq f(\Omega)$.
Proof: There is a solution if $\frac{df}{dx} \frac{dg}{dy} \ge 0$; expressing the derivative of $g$ in terms of $f$, we have $\frac{df}{dx} \left( \frac{df}{dx} \right)^{-1} = 1 > 0$. Then the only necessary and sufficient condition for a solution to exist is $X_0 \subseteq f(\Omega)$.
Corollary 4.
Let $f : \mathbb{R}^n \to \mathbb{R}$ be given by $f(x_1, \dots, x_n) = \sum_{j=1}^{n} a_j x_j$ with $a_j \neq 0$, let $X_0 \subseteq f(\mathbb{R}^n)$, and consider the equation $\sum_{j=1}^{n} a_j X_j = X_0$. Then there exist solutions $\times_{j=1}^{n} X_j \subseteq \mathbb{R}^n$ if for some $k = 1, \dots, n$ there exist intervals $\times_{j \neq k} X_j$ such that $\sum_{j \neq k} |a_j| \delta_j \le \delta_0$.
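As an illustration of the resolution procedure in the linear case $f(x, a, b) = ax + b$, the following sketch (the interval data $A = [2, 3]$, $B = [0, 1]$, $C = [5, 12]$ are our own choices, not from the paper) computes the candidate solution from the corner formula, checks the solvability condition of Theorem 3, and verifies the result:

```python
# Solving A*X + B = C for f(x, a, b) = a*x + b, with g(c, a, b) = (c - b)/a.
# On this data df/dx = a > 0, df/da = x > 0, df/db = 1 > 0, so no switch flips
# endpoints: each bound of X comes from the matching corner of (C, A, B).
A, B, C = (2.0, 3.0), (0.0, 1.0), (5.0, 12.0)

g = lambda c, a, b: (c - b) / a
X = (g(C[0], A[0], B[0]), g(C[1], A[1], B[1]))     # (2.5, 3.666...): a valid interval

# Solvability (Theorem 3): |df/da| * width(A) + |df/db| * width(B) <= width(C),
# with |df/da| = |x| bounded over the candidate solution itself.
delta_check = max(abs(x) for x in X) * (A[1] - A[0]) + 1.0 * (B[1] - B[0])
print(X)
print(delta_check <= C[1] - C[0])                  # True: a solution exists

# Verification: A*X + B reproduces C exactly (all quantities positive here).
print((A[0] * X[0] + B[0], A[1] * X[1] + B[1]))    # (5.0, 12.0)
```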

References

  1. Moore, R.E. Methods and Applications of Interval Analysis; SIAM: Philadelphia, PA, 1979.
  2. Moore, R.E.; Kearfott, R.B.; Cloud, M.J. Introduction to Interval Analysis; SIAM: Philadelphia, PA, 2009.
  3. Moore, R.E. Interval Arithmetic and Automatic Error Analysis in Digital Computing. Ph.D. dissertation, Department of Mathematics, Stanford University, Stanford, CA, 1962.
  4. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, 2008.
  5. Bishop, E. Foundations of Constructive Analysis; Ishi Press, 2012.