2. Interval Functions of Several Variables
Given an analytic function (one that admits a Taylor series converging to the function), we can extend it over square matrices, in particular over diagonal matrices (see Theorem 1.13, page 10 of [4]); we can also extend it over interval numbers. We can generalize this to analytic real functions of several real variables.
Definition 1.
Let be an analytical function and such that for any and . Define the extension function on the space of interval numbers by given by and define the extension function on , the space of diagonal matrices, by given by
Example 4. Let
1. , then
2. , then
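Assuming the extension in Definition 1 follows the usual interval convention for a function that is monotone on the interval (the image is the interval between the endpoint images), the idea can be sketched in Python. The function and names below are illustrative, not taken from the text:

```python
import math

def interval_extension(f, lo, hi):
    """Interval extension of a function f that is monotone on [lo, hi]:
    the image is the interval between the two endpoint images
    (sorted, in case f is decreasing)."""
    a, b = f(lo), f(hi)
    return (min(a, b), max(a, b))

# exp is increasing, so the image of [0, 1] under exp is [1, e]
img = interval_extension(math.exp, 0.0, 1.0)
```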
Definition 2.
Let be an analytical function and . We say that is free of singularity if for all points the gradient vector has nonzero components in , i.e.,
Definition 3.
Let be an analytical function, free of singularity except at the vertices, and let be an interval of the j-th variable. Define the switch functions with respect to f as given by if on and if on . We define given by
and given by
Theorem 1.
Let be an analytical function and such that for any and free of singularity except at the vertices. Then,
Proof: Let be a -function and such that for any and free of singularity. Then,

where is the result of applying the switch to . Then, we have

Applying , we have the following interval,
Now we will prove that the interval above corresponds to the image of f on . First, we observe that both and are elements of . Since R is connected and closed, we have that is a subset of . Now we will prove that . For this, it suffices to show that and are the maximum and minimum values of , respectively.
Consider given by where . Since R is free of singularity except at the vertices, the partial derivative with respect to each variable does not change sign except at the vertices, where it can only vanish. Thus, all the functions have the same monotonicity in R for all .
For a fixed k, define equal to if the derivative of is positive and if the derivative of is negative. Similarly, define equal to if the derivative of is positive and if the derivative of is negative. Observe that and correspond to the minimum and maximum points of , and, since R is free of singularity, they also correspond to the minimum and maximum points for all in . In particular, taking and using the coordinatewise monotonicity of f in R, we have
Then and are the minimum and maximum points of f in R. Therefore,
▪
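The proof above selects, for each coordinate, one of the two interval endpoints according to the sign of the corresponding partial derivative (the switch). A minimal sketch in Python, assuming f is coordinatewise monotone on the box; the names are illustrative, not from the text:

```python
def box_image(f, partial_signs, box):
    """Image of a coordinatewise-monotone function f on a box.

    partial_signs[j] is +1 if df/dx_j > 0 on the box, -1 if < 0
    (the 'switch' of coordinate j).  The minimum is attained at the
    vertex taking the lower endpoint where the sign is +1 and the
    upper endpoint where it is -1; the maximum at the opposite vertex.
    """
    vmin = [lo if s > 0 else hi for s, (lo, hi) in zip(partial_signs, box)]
    vmax = [hi if s > 0 else lo for s, (lo, hi) in zip(partial_signs, box)]
    return (f(*vmin), f(*vmax))

# f(x, y) = x - y on [1, 2] x [3, 5]: df/dx = 1 > 0, df/dy = -1 < 0,
# so the image is [1 - 5, 2 - 3] = [-4, -1]
lo, hi = box_image(lambda x, y: x - y, [1, -1], [(1, 2), (3, 5)])
```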
Corollary 1.
Under the same hypotheses as the theorem above, let where are free of singularity except at the vertices. Then
Proof: Indeed,
▪
Next, we will analyze some properties of the map .
Corollary 2. Let with and . We have
1. .
2. if and if .
3.
(a) if and only if or ;
(b) if and only if and .
4. with
(a) if and if ;
(b) if and if .
Proof: Let given by , we have

then

Let given by , we have

then

Let given by , we have

From the derivatives above, all intervals that do not contain zero in their interior are free of singularities. Then let such that

Then if and only if and if and only if .

Let given by , we have

From the derivatives above, all intervals X that do not contain zero in their interior are free of singularities. Then let such that

Then if and only if and if and only if .
▪
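Corollary 2 collects the arithmetic of intervals that follows from the theorem. A standard implementation sketch (the reciprocal case mirrors the singularity-free requirement that zero not lie in the interval's interior); function names are illustrative:

```python
def iadd(X, Y):
    """Interval addition: add endpoints componentwise."""
    return (X[0] + Y[0], X[1] + Y[1])

def imul(X, Y):
    """Interval multiplication: hull of the four endpoint products."""
    p = [X[0] * Y[0], X[0] * Y[1], X[1] * Y[0], X[1] * Y[1]]
    return (min(p), max(p))

def iinv(X):
    """Interval reciprocal, defined only when 0 is not in X."""
    if X[0] <= 0 <= X[1]:
        raise ZeroDivisionError("interval contains zero")
    return (1.0 / X[1], 1.0 / X[0])
```

For example, `imul((1, 2), (-3, 4))` yields `(-6, 8)` and `iinv((2.0, 4.0))` yields `(0.25, 0.5)`.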
3. Pseudo-Complex Numbers
Definition 4.
We define the ring of pseudo-complex numbers as the quotient of the polynomial ring
Each element of can be represented in the form , where and .
Addition in is defined component-wise:

Multiplication is defined using the relation :

The ring is commutative, since both addition and multiplication are commutative operations. Additionally, has a multiplicative identity, the element .

Consider two elements and in . The addition of these elements is:

The multiplication of these elements is:

The multiplicative inverse of is:

with and . Indeed,
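Taking the defining relation to be h² = 1 (an assumption consistent with the diagonal-matrix model of Proposition 1), the arithmetic and the inverse described above can be sketched as follows; the class and names are illustrative:

```python
class PseudoComplex:
    """Pseudo-complex number a + b*h, assuming the relation h**2 = 1."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, w):
        return PseudoComplex(self.a + w.a, self.b + w.b)

    def __mul__(self, w):
        # (a + b h)(c + d h) = (ac + bd) + (ad + bc) h,  since h^2 = 1
        return PseudoComplex(self.a * w.a + self.b * w.b,
                             self.a * w.b + self.b * w.a)

    def inverse(self):
        # invertible iff a^2 != b^2
        d = self.a ** 2 - self.b ** 2
        if d == 0:
            raise ZeroDivisionError("a^2 == b^2: not invertible")
        return PseudoComplex(self.a / d, -self.b / d)

z = PseudoComplex(3.0, 1.0)
w = z * z.inverse()  # should be the identity 1 + 0*h
```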
Proposition 1. Let be the space of diagonal matrices . Then and are isomorphic as rings.
Proof: Define a map by

We need to show that is a ring homomorphism, i.e., that preserves both addition and multiplication.

1. Addition:

Thus, preserves addition.

2. Multiplication:

Thus, preserves multiplication.

Since preserves both addition and multiplication, is a ring homomorphism. It is easy to see that is bijective, so is an isomorphism. Hence, and are isomorphic as rings.
▪
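One natural choice of such an isomorphism sends a + bh to the diagonal matrix diag(a + b, a − b); the map in the text may differ by a change of basis. A numerical check that this choice preserves multiplication (all names here are illustrative):

```python
def to_diag(a, b):
    """Candidate isomorphism: a + b*h  ->  diag(a + b, a - b)."""
    return (a + b, a - b)

def diag_mul(D, E):
    """Multiplication of diagonal matrices is entrywise."""
    return (D[0] * E[0], D[1] * E[1])

def pc_mul(z, w):
    """Pseudo-complex multiplication, assuming h**2 = 1."""
    a, b = z
    c, d = w
    return (a * c + b * d, a * d + b * c)

z, w = (2, 3), (5, -1)
lhs = to_diag(*pc_mul(z, w))                 # map the product
rhs = diag_mul(to_diag(*z), to_diag(*w))     # multiply the images
```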
We can use the following decomposition of the diagonal matrices to define the pseudo-complex numbers:
Now we are going to prove that the space of pseudo-complex numbers is a complete metric space, that is, that every Cauchy sequence converges (see page 83 of [5]).
Proposition 2. is a complete metric space.
Proof: Consider the metric on defined by:

where and are elements of .

Given a Cauchy sequence in , where , we have:

This implies that the sequences and in are Cauchy. Since is complete, there exist such that and . Therefore, converges to in . Finally, for any , there exists N such that for all , the following holds:

Then,

This confirms that converges in , proving that is a complete metric space.
▪
Proposition 3. Let be an analytical function. Then
Proof: Consider the norm on , given by:

This norm induces a metric on defined by:

We have that is a complete metric space with this metric.

Claim: The sequence is a Cauchy sequence in .

Proof of Claim: To prove that is a Cauchy sequence, we need to show that for any , there exists an integer N such that for all , . Consider and :

Then,

Bound on :

Therefore,

Since the Taylor series of f around a converges, for any , there exists an integer N such that for all ,

Claim ▪
Since the sequence is Cauchy and is complete, the series converges. Let be an analytic function. We need to show that

Consider the Taylor series expansion of f around a:

For , we have:

Since , , and generally . Thus,

Factoring out h:

The expression inside the parentheses can be recognized as the Taylor series of f evaluated at :

Therefore,

Subtracting from both sides:

Thus,

which completes the proof.
▪
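Assuming h² = 1, the extension of an analytic f proved above can also be evaluated through the idempotent decomposition e± = (1 ± h)/2, under which f(a + bh) = f(a + b)e₊ + f(a − b)e₋. This is a standard identity for such rings, not a formula quoted from the text; the sketch below checks it against the direct expansion of a square:

```python
def pc_apply(f, a, b):
    """Evaluate an analytic f at the pseudo-complex number a + b*h,
    assuming h**2 = 1, via the idempotents e_+ = (1+h)/2, e_- = (1-h)/2:
        f(a + b h) = f(a+b) e_+ + f(a-b) e_-.
    Returns the coefficients (real part, h part)."""
    p, m = f(a + b), f(a - b)
    return ((p + m) / 2, (p - m) / 2)

# consistency check: (a + b h)^2 = (a^2 + b^2) + 2ab h
ra, rb = pc_apply(lambda x: x * x, 2.0, 3.0)
```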
Theorem 2 (Pseudo-complex version).
Let be an analytical function and such that for any and free of singularity except at the vertices. We define given by

Let . Define the switch functions with respect to f as given by if on and if on , and given by . Then,
Example 5.
Let given by and let . Then and are free of singularity; on the first interval the derivative is positive and on the second it is negative. Using the theorem above, we have and . So

Therefore .
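Examples 5 and 6 split the domain into pieces on which the derivative keeps a fixed sign and then take the hull of the endpoint images. The same procedure can be sketched with a different illustrative function (x², monotone on each side of 0), not the one used in the example:

```python
def image_by_splitting(f, pieces):
    """Image of f over a union of intervals on each of which f is
    monotone: collect the endpoint images per piece, then take the
    overall hull (min and max)."""
    vals = []
    for a, b in pieces:
        vals.extend((f(a), f(b)))
    return (min(vals), max(vals))

# x**2 changes monotonicity at 0, so split [-1, 2] there:
# images [0, 1] and [0, 4], hull [0, 4]
lo, hi = image_by_splitting(lambda x: x * x, [(-1.0, 0.0), (0.0, 2.0)])
```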
Example 6.
Let given by and . The partial derivative with respect to x vanishes on the curve and the partial derivative with respect to y vanishes on . On the other hand, and , so is free of singularities. Now on , we have and . So

Therefore
3.1. Singular Subsets of
As we saw, a pseudo-complex number is invertible if and only if and . Let us consider the following subsets of
Proposition 4. The subsets and are fields under the same operations as , but with multiplicative identities different from that of the ring .
Proof: Trivially, and are additive subgroups of the additive group . Let us show the multiplicative part.
Trivially, satisfies closure, commutativity, and associativity; we will show the existence of the identity and of inverses. Let

The multiplicative identity is h. Indeed, .

Let ; then . Indeed,

Now let ; then

Closure: . From this we also see that multiplication is commutative.

Associativity: .

The multiplicative identity is . Indeed, .

Let ; then . Indeed,
▪
4. Resolution of Interval Equations
Suppose we have an interval equation, for example a linear equation , where all the components are intervals. What should the procedure be to solve this equation, assuming a solution exists? We could, for example, consider the equation , where the values are defined over their corresponding intervals, that is, ; solve for x; and then determine the image of the square region using the fundamental theorem. However, what we obtain this way is only a region that contains the solution of the equation.
We will now give a theorem that provides a procedure to determine the solution of an interval equation. However, this solution does not always exist, since the matrix we obtain as a solution of the matrix equation associated with the interval equation does not always satisfy the condition of having its first entry less than or equal to its last entry.
Theorem 3. Let be an analytical function, free of singularity except at the vertices, for with for and , and a compact non-degenerate interval. Suppose there exists a function such that . Then the equation has a solution in X if and only if , where ; moreover, the solution is determined by .
Proof:
Claim 1: The equation has a solution in X if and only if is a matrix with the first entry less than or equal to the last entry.
Consider the following interval equation in X:

Suppose that a solution of (25) exists in X; this means that there exists an interval of the form with that satisfies equation (25). Since is free of singularity, then

Then

or, equivalently,

On the other hand, by hypothesis there is a function such that for all there exists such that . This means that we can solve for the unknown in each equation, thus forming the following matrix:

Since we have as with , then is a matrix with the first entry less than or equal to the last entry.

Conversely, if is a matrix whose first entry is less than or equal to the last entry, then corresponds to a solution of equation (25).
Claim 1 ▪
Claim 2: is a matrix with the first entry less than or equal to the last entry if and only if .
We can write as with and , since is a compact non-degenerate interval. Let v be the direction vector from to . The orientation of the th component of depends on the sign of for . We can write the components of the vector v as:

Thus, for the equation to define a matrix with the first entry less than or equal to the last, it is necessary and sufficient that the function g be increasing from to when is positive and decreasing from to when is negative. The domain of g is free of singularity. Indeed, from

we have that the partial derivative of g is

so ,

so .

So the function g must be monotonic in all directions within ; in particular, the function g from to must be monotonic. We can then represent the above condition as

Then we have

Therefore, the equation defines a matrix with the first entry less than or equal to the last if and only if
Claim 2 ▪
▪
Given error values , the propagation of these errors can be calculated using the formula . The previous theorem tells us that if the error propagation of the known variables does not exceed the final error, then there is a solution to the interval equation.
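One standard reading of such a propagation formula is the first-order bound that sums each input error weighted by the magnitude of the corresponding partial derivative; the exact formula in the text may differ. A sketch under that assumption, with illustrative names:

```python
def propagated_error(partials, errors):
    """First-order error propagation: sum_i |df/dx_i| * eps_i.
    (Assumed reading of the propagation formula; the text's exact
    formula may differ.)"""
    return sum(abs(p) * e for p, e in zip(partials, errors))

# f(x, y) = 3x + 2y with input errors 0.1 and 0.05:
# propagated error is 3*0.1 + 2*0.05 = 0.4
eps = propagated_error([3.0, 2.0], [0.1, 0.05])
# per the theorem, a solution of the interval equation can exist only
# if the target's error budget is at least this propagated error
```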
Corollary 3. Let be an analytical function, with ω free of singularity, and such that . The equation has a solution if and only if .
Proof: There is a solution if . Expressing the derivative of g in terms of f, we have . Then the necessary and sufficient condition for a solution to exist is .
▪
Corollary 4. Let be given by with and , and consider the equation . Then a solution exists if for any there exists such that .