1. Introduction
We describe some problems with limited uncertainty as problems with inexact data. Such situations arise in many scientific problems, and they can sometimes be expressed as systems of linear interval equations. Interval matrices play a vital role in developing solution methods for these problems. For example, in neural networks, if the activation functions are not bounded, then we cannot always guarantee the existence of an equilibrium point. In [8], the authors investigate the existence of a unique equilibrium point for neural networks, which is necessary for the global robust asymptotic stability of the neural network model, a special system of nonlinear differential equations. Only when the entries of the parameter matrices are known to lie in certain closed intervals can an estimate of the results be obtained. In this case we need to work with interval matrices. In general, we need interval matrices in solution methods for linear systems with inexact data. If the missing data can be confined to intervals, we can model the linear system with inexact data as a linear interval equation. Let us look at Example 7.5 from [3]. The mesh equations for an electric circuit are expressed as
with
,
and
Here
denotes resistances,
denotes currents and
denotes voltages. Find enclosures for
and
with the variation possibility
on resistances.
Let us consider interval matrix
and (interval) vectors
Then
is the linear interval equation that will serve in the solution of the above linear interval system. In this paper we will try to solve this problem using the quasi-inverse concept and the Interval Cramer's Rule, and we will see that our result is close to the results obtained in Example 7.5. In general, it is highly difficult to determine whether a solution of a system of linear interval equations exists. The main reason for this is that the set of interval vectors that we use to find the solution of such equations is not a vector space, so it is hard to obtain a solution method in exactly the same way as in classical linear algebra. Fortunately, however, the set of all interval vectors has an algebraic structure, the so-called quasilinear space, which is a generalization of a linear space. For this reason, we have to develop concepts more general or newer than those of classical linear algebra; further, they must be consistent with the classical concepts. The concept was first introduced by S. M. Aseev in [1]. However, some necessary concepts such as quasispan, quasilinear dependence-independence and basis were not given in that work. These definitions and the definition of the dimension of a quasilinear space are given in references [5,6]. However, we realized that we needed to change some of the nomenclature in the definitions we gave in those works. For example, in a quasilinear space, we called an element that has no inverse with respect to summation a singular element. Some interval matrices may not have an inverse with respect to addition, and calling them singular elements is confusing given the known definition of singularity for matrices, because every classical real matrix is a degenerate interval matrix.
Another important work on the algebraic structure of the class of intervals, and more generally of sets, is given by S. Markov [18]. The study of parametric linear interval systems and of the solution set of parametric interval matrix equation systems belongs to the same class of problems; for other important works on them it is useful to refer to the papers [19,20,21,22].
The concept of the rank of an interval matrix has already been considered and defined in different versions (see for example [17,24]). Unlike the other definitions, we first recognise that an interval matrix defines a quasilinear operator between quasilinear spaces, so we define the rank of a quasilinear operator here. In order to define it, we must first define the dimension of a quasilinear space. We have already given this definition in [5,6], where the dimension of a quasilinear space was defined as a pair of natural numbers. Using this definition, the rank of a quasilinear operator, and hence of an interval matrix, will be defined as a pair of natural numbers. As known from linear algebra, the range of a linear operator is a linear space; thus the rank of an operator is defined as the dimension of the range space. However, we know from our previous works on quasilinear operators that the range of a quasilinear operator may not be a subspace. But we know that the space quasi-spanned by the range is a quasilinear space. Therefore, to define the rank of a quasilinear operator
T we will use the dimension of the space quasi-spanned by the range of
T. Correspondingly, we define the row and column ranks of an interval matrix as the dimensions of the spaces quasi-spanned by the row and column vectors. We will also give an example showing that the row and column ranks may not be the same if the interval matrix is not degenerate, i.e. not a classical matrix. If the row rank is equal to the column rank, we will call this value the rank of the interval matrix. When we consider these points, the concept of rank we give is quite different from the other rank definitions for an interval matrix. Since it can capture these details better, the definition of quasilinear space given by Aseev is more advantageous than the definition given by Markov. Furthermore, interval matrices are quasilinear operators according to the quasilinear operator correspondence given by Aseev, but such a correspondence is not given in Markov's work. This correspondence is fundamental in linear algebra, and this is another factor that makes Aseev's definition of quasilinear spaces more advantageous. Continuing in this direction, we give definitions such as the determinant and quasi-inverse of an interval matrix. Thanks to these definitions, we obtain an envelope containing the solution of some linear interval equations; we call this result the Interval Cramer's rule in this work. If the solution exists, it is easy to find an envelope containing the solution of the linear interval equations; the important thing is to find an acceptably narrow envelope containing the solution set. In the important reference [3], for some simple equations of this type, the authors have obtained reasonable envelopes containing the solution by their own methods. For example, in [3], Example 7.5, a reasonable envelope containing the solution of an equation of this type is presented. We solved the same problem with the Interval Cramer's rule developed in this study and obtained an envelope as narrow (small enough margin of error) as that in [3], Example 7.5. This gave us the impression that our rule is functional in many cases, but not always.
The basic studies on the solution of systems of linear interval equations given by a square matrix are found in references [2,14,15]. However, earlier studies on the subject were given by Farkas [13] and Oettli [12]. Later, in [9,10], important contributions were made to the solution of linear systems of interval equations. Mainly, in this work we first try to define the determinant of an interval matrix as an interval and its rank as a pair of natural numbers. Then we introduce the notion of the quasi-inverse of an interval matrix and obtain some results based on this concept. Further, we aim to prove a theorem, which we call the Interval Cramer's rule, regarding the solution of some linear interval equation systems. In addition, regarding the existence of solutions to this type of equation, we give a theorem related to the rank of an interval matrix that models the equation.
2. Interval Vectors and Matrices
An
n-dimensional interval vector
is a set in
such that each component
is a closed real interval for
In some sources this equivalent notation can be written as
We think that the first notation is more suitable in our work. We denote by
the set of all
n-dimensional interval vectors. Actually, saying
is
n-dimensional is a bit of a misnomer, since
is not a vector space; it is merely a figure of speech. In order to properly understand the concept of the dimension of
, we need to construct the concept of dimension for quasilinear spaces. We attempted this aim in some former works. Let us recall some former results on quasilinear algebra. Here, for any real scalar
A set
X is called a quasilinear space [
1] on the field
of real or complex numbers if it is partially ordered by the relation "⪯", and an algebraic sum operation + and a scalar multiplication are defined on it in such a way that the following conditions hold for all elements
and all
:
is an abelian ordered monoid with the zero
and the following further conditions hold:
Any element
x of a quasilinear space (briefly, QLS) is again called a "vector", just as in linear spaces. Any linear space is a QLS with the partial order relation "=", but not conversely. In a QLS
X, the element
is minimal, i.e.,
if
. An element
is called an
additive inverse of
if
. The inverse is unique whenever it exists. An element
x possessing an inverse is called a
stone; otherwise it is called a
foam. We proved in [4] that each stone is a minimal element.
Lemma 1. [1] Suppose that each element x in a QLS X has an inverse element. Then the partial order in X is determined by equality, the distributivity conditions hold, and consequently X is a linear space.
In any linear space, equality is the only way to define a partial order such that conditions (1)-(13) hold [1].
It will be assumed in what follows that . Note that may not exist, but if it exists then . For example, the interval is a foam in a nonlinear QLS, since the additive inverse of the element does not exist; however, . All degenerate intervals are stones and all non-degenerate intervals are foams in . Let us give an easy characterization of stones: an element x is a stone in any QLS if and only if , or equivalently, . We should note that in a linear QLS, briefly in a linear space, each element is a stone; hence the notions of stone and foam are redundant in linear spaces. An element x in a QLS X is said to be balanced whenever , and denotes the set of all such elements in X.
Suppose
X is a QLS and
. Then
Y is said to be a
subspace of X whenever
Y is a QLS with the same partial order and with the same algebraic operations on
X. In [
1] the concept of a subspace for a QLS was not defined. After detailed investigations we saw that the characterization of the definition must be the same as in linear subspaces:
Y is a subspace of
X if and only if for every
and
[
4]. There exist three important subspaces of any QLS
The space
which is the class of all stones,
which is the class of all foams with the zero and
which is the class of all balanced elements. We call
and
the stone and foam subspaces of
X, respectively.
is known as balanced subspace of
X. Note that the quasilinear space
is a linear space but
and
are not. That is why we call
as the linear part of
Further,
An
interval matrix
is defined as the set
of all real-term
matrices
A such that
and
are fixed
-matrices and are the lower and upper bounds of
, respectively. Writing interval matrices with their rows and columns explicitly shown will make our next results more understandable. Hence let us use the notation
from now on, where
Let us denote by
the family of all
interval matrices. Thus by former notation
is just
As a special case
denotes briefly
If
for each
then
is called
degenerate and any degenerate interval matrix is a singleton including only one classical real-term matrix
In this case, we can write
or sometimes
. For two elements
and
of
addition operation is defined by
and by this operation
is an abelian monoid with the identity interval matrix zero, which is a degenerate (classical)
zero matrix. Obviously
is not a group, since some elements have no additive inverses. Let us call a degenerate (classical) interval matrix
a stone, since it has an additive inverse, and call a nondegenerate one a foam, since it has no additive inverse. Just as with classical matrices, we use the term inverse only for the multiplicative inverse of interval matrices. While in classical matrices the additive inverse always exists, this is not so for nondegenerate (pure) interval matrices; this is why we introduced the concepts of stone and foam. It is easy to prove that an interval matrix is degenerate if and only if it is a stone. Let us denote by
the class of all degenerate elements (stones) and denote by
the class of all foams in
The function
is a bijection and hence we can see that the set
of all classical real
-matrices are equivalent to
.
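The stone/foam distinction can also be sketched numerically. The following Python snippet is our own illustration (the pair representation of intervals and all function names are assumptions, not notation from this paper); it checks that a degenerate interval admits an additive inverse while a non-degenerate one does not.

```python
# Minimal sketch of interval addition, assuming intervals are pairs (lo, hi).

def add(x, y):
    """Minkowski sum of two closed intervals: [a,b] + [c,d] = [a+c, b+d]."""
    return (x[0] + y[0], x[1] + y[1])

def is_degenerate(x):
    """A degenerate interval [a,a] corresponds to a classical real number."""
    return x[0] == x[1]

zero = (0.0, 0.0)

def has_additive_inverse(x):
    # The only candidate inverse could be (-hi, -lo); check whether it works.
    candidate = (-x[1], -x[0])
    return add(x, candidate) == zero

stone = (3.0, 3.0)   # degenerate: a "stone"
foam = (1.0, 2.0)    # non-degenerate: a "foam"

assert has_additive_inverse(stone)     # stones have additive inverses
assert not has_additive_inverse(foam)  # [1,2] + [-2,-1] = [-1,1], not [0,0]
```

The failed cancellation for the foam shows why the sum operation makes the interval matrices only a monoid, not a group.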
For two elements
and
of
the relation
is a partial order and hence
,
is a partially ordered monoid with the compatibility condition:
If is a stone and is a foam, then means . If and are both stones, then they are classical matrices and means . If is a foam and is a stone, then the assumption indicates that also has to be a foam. We can summarize the last case as follows: "a foam cannot be a subset of a stone". The following proposition states this assertion, and it can easily be proved.
Proposition 1. The zero interval matrix θ and moreover all stones are minimal elements in ordered monoid ,
For the field
the law
is known as the scalar product on
and has the following properties: for all elements
and for all
,
By these properties we construct an algebraic structure ,. We will again write for in the sequel. In this respect, , is a quasilinear space on the field .
Example 1.
Let where and Alternatively, using interval notation, we write This is a balanced interval matrix and hence it is an element of the subspace of It is also a subspace of Furthermore, except for the zero, all balanced interval matrices are foams. For corresponds to the quasilinear space of all closed intervals of real numbers, and corresponds to . Further, is a foam and an element of while
is a stone and
3. Dimension and Basis in the Space of Interval Vectors
Any is known as an n-dimensional interval vector; in fact, it is a set in such that each component is a closed real interval for . With another notation from interval analysis we can write , where and are the bounds of and are n-tuples of real numbers. But the former notation is more useful in this work. Of course, each n-dimensional interval vector can be seen as an interval (column) matrix, and we denote the set of all n-dimensional interval vectors by instead of , and for , . By the algebraic operations and the partial order from the former section, is a quasilinear space on the field . Actually, calling n-dimensional is a bit of a misnomer, since is not a vector space; it is merely a figure of speech. In order to properly understand the concept of the dimension of , we need to construct the concept of dimension for quasilinear spaces.
In this section, let us present some basic results obtained formerly in our works [5,6,7], slightly changing some notations. Any quasilinear
combination of the set in a QLS
X is an element
such that
for some scalars
. But any
linear combination of the set in
X is an element
of
X in the form
just the same as is in classical linear (vector) spaces. Hence a linear combination of the set
is an element
z of
X such that
In a linear space, these two definitions coincide, since the relation "⪯" turns out to be the relation "=". Clearly, a linear combination of
is also a quasilinear combination of
but not conversely. For any nonempty subset
A of a QLS
the
quasi-span (q-span, for short)
of is defined by the set of all possible quasilinear combinations of
that is,
The span of is also defined in quasilinear spaces, just as in classical linear spaces, and obviously . Further, for a linear QLS (linear space) ; hence the notion of is redundant in linear spaces. Moreover, we say A quasi-spans X whenever . We know from former works that is a subspace of X, but may not be a subspace of
Definition 1.
[7] (Quasilinear independence
and dependence)
A set
in a QLS X is called quasilinear independent (briefly ql-independent
) whenever the inequality
holds if and only if . Otherwise, A is called quasilinear dependent (briefly ql-dependent).
If we recall again that every linear space is a QLS with the relation "=", it can be seen that the notions of quasilinear independence and dependence coincide with linear independence and dependence in linear spaces.
Example 2. Consider a singleton in . It is obvious that if and only if , where is the zero of . Therefore, A is ql-independent. However, the singleton is ql-dependent, since for . This is an unusual case, since a non-zero singleton is obviously linearly independent in a linear space. On the other hand, the set is ql-dependent. In general, the definition implies that any subset containing an element associated with zero is necessarily ql-dependent in a QLS. This extends the well-known result in linear spaces that any subset containing zero must be linearly dependent.
Example 3.
In , let and . Then the set is ql-dependent since
for , where is the zero of . However, is ql-independent, where and . On the other hand, if , then the singleton is ql-dependent in , since
q-spans .
We now introduce the concept of dimension in a QLS. Our analysis indicates that it should be divided into two distinct notions, namely the stone dimension and the foam dimension. Before doing so, we first present a variation of a classical definition.
Definition 2.
Let S be a ql-independent subset of the QLS X. S is called a maximal ql-independent subset of X whenever any set properly including S is ql-dependent.
Definition 3.
[5] Stone (Foam) dimension
of any QLS X is the cardinality of any maximal ql-independent subset of . If this number is finite then X is said to be finite stone (foam)-dimensional; otherwise, it is said to be infinite stone (foam)-dimensional. The stone dimension is denoted by s- and the foam dimension by f-. If s- and f-, then we say that X is an
-dimensional QLS
where m and n are natural numbers or
The above definition means that
s-
is the classical definition of the dimension of the linear space
So,
s-
Notice that a non-trivial foam subspace of a QLS cannot be a linear space. Further, we can easily see that any QLS is
-dimensional if and only if it is an
n-dimensional linear space. In this respect, the trivial linear space
is a
-dimensional QLS. We know from former work [5] that there is an example of a
-dimensional QLS other than the trivial quasilinear space
Remark 1. We can easily see that any set including a balanced element must be ql-dependent in a QLS. Further, the stone subspace of is -dimensional, while its foam subspace is -dimensional.
4. Rank and Determinant of an Interval Matrix
This section includes some new definitions and results on interval matrices and on the solution of some linear interval equations. We have frequently benefited from the source [16] for classical linear algebra facts. First of all, let us fix some notation for an interval matrix
with columns and rows. When we consider an interval matrix
where
then we can write
for
To obtain a solution of a system of linear interval equations, if it exists, we think we should first define linear-algebra-like tools such as the rank and inverse of an interval matrix.
First of all let us give some concepts and results on quasilinear operators given by Aseev.
Definition 4.
[1] Let X and Y be quasilinear spaces. A mapping is called a quasilinear operator
if it satisfies the following conditions:
In this definition, the last two conditions remain the same, and if we tighten the first condition a little more, so that , we get the definition of a linear operator between quasilinear spaces.
Theorem 1.
An interval matrix defines a quasilinear operator from into by the interval matrix-product explicitly:
where and such that
and the product in the summation is the multiplication between intervals.
Proof. We are only going to prove that
since verifying the other conditions is routine. From interval arithmetic (see [3]) we can easily write:
□
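The interval matrix–vector product of Theorem 1 can be sketched computationally. The snippet below is our own Python illustration (the pair representation of intervals and the helper names are assumptions); it combines interval multiplication and interval addition entrywise, as in the theorem.

```python
# Sketch: interval matrix-vector product, with intervals as pairs (lo, hi).

def imul(x, y):
    """Interval product [a,b]*[c,d]: min/max over all endpoint products."""
    ps = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(ps), max(ps))

def iadd(x, y):
    """Interval sum [a,b] + [c,d] = [a+c, b+d]."""
    return (x[0] + y[0], x[1] + y[1])

def matvec(A, x):
    """A is a list of rows of intervals, x a list of intervals."""
    result = []
    for row in A:
        s = (0.0, 0.0)
        for a, xi in zip(row, x):
            s = iadd(s, imul(a, xi))
        result.append(s)
    return result

A = [[(1.0, 2.0), (0.0, 1.0)],
     [(0.0, 0.0), (1.0, 1.0)]]
x = [(1.0, 1.0), (2.0, 3.0)]
print(matvec(A, x))  # first entry: [1,2]*[1,1] + [0,1]*[2,3] = [1,5]
```

Note that each entry of the product is again a closed interval, so the map sends interval vectors to interval vectors, as the theorem asserts.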
Now for such an interval matrix
and for any interval vectors
by a system of linear interval equation
we mean a family of all linear equation systems
such that
,
and
If (4.1) has a solution then the solution set is written as
However, determining the solution sets of such equations is an extremely difficult problem. In fact, a much simpler form of such systems of equations arises when
is replaced by its linear subspace
. In this case the interval matrix
again defines a quasilinear operator from
into
and the interval vector
becomes a classical real
n-tuple
x. Moreover, the solution set of the simpler case of equation (4.1) is then expressed as
Even in this simple case, determining the solution set is very difficult; in fact, it is an NP-hard problem. In the literature, this simpler case is known as the system of linear interval equations. Earlier and fundamental results for the description of the solution set of the simpler case are given in [12]. Some further investigations in this direction are presented in [9,10,13,15]. In fact, we aim to develop solution techniques similar to the classical case for the simpler case of (4.1). Hence in this work we consider the linear interval equation
where
is an interval matrix,
and
Remark 2. In general, the solution set of does not appear as an interval vector. A simple example of the shape of such a set can be seen in [3] (p. 99). In general, determining the exact solution set for problems of this type is an NP-hard problem. Instead, we determine an envelope containing the solution set . Further, a solution of is not an element x satisfying the equality , but an element x satisfying the (classical) linear equation for some and . Let us illustrate this with a simple example. Consider the system of linear interval equations . Here and . We know from interval arithmetic that there is no real number x satisfying this equality. If the solution set were defined in this way, we would say that this equation has no solution. However, this is not the case. According to the definition above, for , the system has a solution, and is that solution. Similarly, for every , there exists a solution of the system, and the solution set of is just . In this simple example, the solution set is a 1-dimensional interval vector.
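This notion of solution can be checked numerically. In the sketch below (our own illustration, with concrete values we chose ourselves rather than the data of the remark) we take the 1-dimensional system [1,2]·x = [2,4] and sample pairs (a, b); each point x = b/a is a solution in the sense of the remark, and the sampled solutions fill exactly the interval [1, 4].

```python
# Sketch: sampling the solution set of the 1-D interval equation [1,2]*x = [2,4].
# A point x is a solution when a*x = b for some a in [1,2] and b in [2,4].

def sample_solutions(a_lo, a_hi, b_lo, b_hi, steps=50):
    """Return the min and max of b/a over a uniform grid of (a, b) pairs."""
    sols = []
    for i in range(steps + 1):
        for j in range(steps + 1):
            a = a_lo + (a_hi - a_lo) * i / steps
            b = b_lo + (b_hi - b_lo) * j / steps
            sols.append(b / a)
    return min(sols), max(sols)

lo, hi = sample_solutions(1.0, 2.0, 2.0, 4.0)
# The extreme solutions are attained at the interval endpoints: 2/2 and 4/1.
assert (lo, hi) == (1.0, 4.0)
```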
Definition 5.
Let be an interval matrix. Then (4.1) is called quasi-homogeneous whenever The dimension of the solution space of such a quasi-homogeneous system is called the quasi-nullity of .
Now, since an interval matrix defines a quasilinear operator between quasilinear spaces, we will first define the rank of a quasilinear operator. First of all, it should be noted that a quasilinear operator may not be representable as an interval matrix even if its domain and range are finite-dimensional. Furthermore, although the domain and range of a linear operator are linear spaces, the range of a quasilinear operator may not be a quasilinear space. For example, , for is a quasilinear operator, but the range is not a subspace of since . Therefore, we will use the quasi-span of , that is, , for the rank definition. If T were defined between linear spaces, in which case it would be a linear operator, then would be a linear space and .
Since the notion of dimension is defined above as a pair of natural numbers, the notion of rank will also appear as a pair of natural numbers.
Definition 6.
Let X and Y be quasilinear spaces. Rank of a quasilinear operator is defined as the dimension of quasi-span of the range of T in that is,
Definition 7.
A quasilinear space which is quasi-spanned by row (column) vectors of an interval matrix is called row (column) space of . The dimension of the row (column) space of is called the row (column) rank of . We denote row and column ranks of by and , respectively. We will use the symbol if where m and n are natural numbers.
We will see in the sequel that, unlike classical matrices, the row and column ranks of interval matrices may not be equal.
Let us give a first example with interval matrices.
Example 4.
Consider interval matrices and Their row and column vectors are the same. The row (column) vector of is First, we must find
Obviously, never contains a degenerate interval other than zero. Hence the stone subspace of is the trivial subspace, so the stone dimension of is just zero. Moreover, the foam subspace of is itself, and every subset of is ql-dependent. This assertion is clear from the definition of , since and so for some . This means the foam dimension of is also zero. Eventually we conclude that , so that
Now the row and column spaces of are the same and
The stone subspace of is again the trivial subspace; therefore, its stone dimension is zero. On the other hand, the foam subspace of is again equal to itself. is ql-independent in this space, which tells us that the foam dimension of is 1 or a greater integer. Further, any two elements in must be ql-dependent by the definition. So the row and column rank of is , and hence
Finally, let us consider the interval matrix . We know that , and we know that is -dimensional. So . The matrix is also a classical matrix, and is a transformation between linear spaces; from this point of view, its rank is 1. Every linear space is a quasilinear space, and when we consider as a quasilinear operator from the quasilinear space into itself, . When we define as a quasilinear operator from into as before, .
Example 5. Now let us give further examples with interval matrices. Consider and
Rows of are and Now
where . Again, never contains any degenerate interval pairs other than zero. Hence the stone subspace of is the trivial subspace of , so its stone dimension is zero. Moreover, its foam subspace is , and every subset of is ql-dependent. This is clear since , for example. On the other hand, is a ql-independent set in and in . This means the foam dimension of is 1. Eventually we conclude that the row rank of is , that is, . Let us now determine the column rank of . Consider the column vectors and in .
Again the stone subspace of is the trivial subspace, so its stone dimension is zero. Let us now determine the foam dimension of . Observe that is ql-dependent, since . This means the foam dimension of cannot be . Further, we cannot find any non-zero λ such that . So is ql-independent, and this implies the foam dimension of is . As a result we conclude that . Then we can write
Similarly, we can show that
is in fact a classical matrix, and we know that its rank is 2 as a mapping on the (quasi)linear space . Now let us see that its rank is as a mapping on the quasilinear space . Consider the row vectors and in .
where . Hence for any there exists such that and . As the real numbers and vary, the interval pairs form . Now let us see this. Take an arbitrary . If , then , and so we can write
for some , since and are linearly independent in . This proves the assertion. Hence
An analogous conclusion can be derived from the column vectors of . As a result, we conclude that the row and column rank of is , that is,
Definition 8.
An interval vector whose each term consists of degenerate intervals is called a degenerate interval vector.
Thus, an interval vector at least one term of which is not a degenerate interval is called a non-degenerate interval vector. It can easily be shown that the sum of a degenerate interval vector and a non-degenerate interval vector is a non-degenerate interval vector.
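This closure property is easy to verify mechanically. The small Python sketch below is our own illustration (the pair representation and function names are assumptions): adding a non-degenerate interval vector to a degenerate one always leaves at least one proper interval component.

```python
# Sketch: degeneracy of interval vectors, with intervals as pairs (lo, hi).

def is_degenerate_vector(v):
    """An interval vector is degenerate when every component is [a, a]."""
    return all(lo == hi for lo, hi in v)

def vadd(u, v):
    """Componentwise interval addition of two interval vectors."""
    return [(a + c, b + d) for (a, b), (c, d) in zip(u, v)]

d = [(1.0, 1.0), (2.0, 2.0)]    # degenerate: a classical vector
nd = [(0.0, 1.0), (3.0, 3.0)]   # non-degenerate: first component is a proper interval

assert is_degenerate_vector(d)
assert not is_degenerate_vector(nd)
# The sum inherits the non-degenerate component, so it is non-degenerate.
assert not is_degenerate_vector(vadd(d, nd))
```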
Example 6.
Consider Rows of are and Now
where . For , constitutes the stone subspace of , and it is just the span of . For , never contains stones, that is, the stone subspace of is . Now let us look at the foam part of the row space of . The foam part is just as well, and thus , since it contains at most two ql-independent vectors, namely . Let us now investigate the columns , and of . Assume
Then, for and , the above inclusion system is satisfied. This shows is ql-dependent in . On the other hand, is ql-independent in , because is already linearly independent. The stone subspace of is just . Hence the column rank of is , that is, . So we conclude from this example that, unlike classical matrices, the row and column ranks of interval matrices may not be equal.
If we examine the rank of the (interval) matrix as a quasilinear operator from into , then
Conclusion 1. Row and column ranks of an interval matrix including a non-degenerate term may not be equal.
Proposition 2. Any classical real matrix A with rank r is also an interval matrix from into , for which the (row and column) rank is
The partial order in the
square-interval matrix space
was just defined as
Further let us say that the (interval) matrix
is the multiplicative unit in
It is not difficult to define the multiplication operation between two interval matrices by using the multiplication between two intervals, because the multiplication rule here is the same as for classical matrices.
Definition 9.
For any , determinant
of is an interval-valued function such that
where the sum is taken over all permutations of the set . If the permutation is even, then ; if it is odd, then
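Definition 9 can be realized directly in code. The sketch below is our own Python illustration (the pair representation of intervals and the helper names are assumptions): it sums the signed interval products over all permutations, exactly in the pattern of the classical Leibniz formula but with interval arithmetic.

```python
# Sketch of Definition 9: the interval determinant as a signed sum of
# interval products over all permutations. Intervals are pairs (lo, hi).
from itertools import permutations

def imul(x, y):
    ps = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(ps), max(ps))

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def sign(perm):
    """Parity of a permutation given as a tuple of column indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def idet(A):
    n = len(A)
    total = (0.0, 0.0)
    for perm in permutations(range(n)):
        term = (float(sign(perm)), float(sign(perm)))  # start from the sign
        for i in range(n):
            term = imul(term, A[i][perm[i]])
        total = iadd(total, term)
    return total

# A 2x2 example: det = [1,2]*[1,1] - [0,1]*[0,1] = [1,2] - [0,1] = [0,2].
A = [[(1.0, 2.0), (0.0, 1.0)],
     [(0.0, 1.0), (1.0, 1.0)]]
print(idet(A))  # (0.0, 2.0)
```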
Example 7.
For
where and
Example 8.
Let
The rule given here is called the Interval Sarrus Rule.
Remark 3. Since is an interval, we can write it with lower and upper bounds as . If , then can be calculated as from interval calculus (see [3]). We can easily see that if , then for each .
Remark 4. Another important work on the determinant of square interval matrices is given in [23], where the determinant of an interval matrix is also defined as an interval. In that work the important result characterising the determinant calculus is presented as Proposition 3.1. With our definition, the determinant of an interval matrix includes the determinant given by the other definition, but it is not the same.
We found that the interval-valued determinant has similar properties to the classical determinant.
Theorem 3. For a square-interval matrix
1. , where denotes the transpose of the interval matrix , and the transpose is defined as for classical matrices.
2. If a square-interval matrix is obtained from by interchanging two rows (columns) of , then
3. If two rows (columns) of are equal then must be a symmetric interval.
4. If all the elements in a row (column) of are zero, that is, the interval , then .
Proof. Only claim 3 differs from the similar results for classical matrices, so here we will prove only claim 3; the other proofs are carried out just as for classical matrices. It is not very difficult, because the result follows easily from claim 2: . This means that is a symmetric interval. □
Remark 5. In classical matrices, implies . But for interval matrices, the assumption only says that is a symmetric interval. Any symmetric interval is a balanced element in and can also be seen as a balanced -interval matrix.
Definition 10.
Let be an interval matrix. Let be the sub-interval matrix of type obtained by deleting the elements in the column and row of . Then is called the minor of . Further, the cofactor of is again an interval such that
The interval-valued determinant function also admits cofactor expansions, analogous to the classical determinant function.
Theorem 4.
Let be a square-interval matrix. Then
where each is just interval multiplication.
This theorem is an interval generalization of the classical case, and the proof can be derived from the former theorem.
Example 10. By this theorem, for
We know from interval calculus that if , then is an interval and always includes 1. Furthermore, is always a balanced element, that is, a symmetric interval, and so always .
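The fact that an interval d with 0 ∉ d satisfies 1 ∈ d·(1/d) can be demonstrated numerically. The short Python sketch below is our own illustration; the reciprocal of [a, b] is taken as [1/b, 1/a], following standard interval calculus.

```python
# Sketch: for an interval d = [a, b] with 0 not in d, the product d * (1/d)
# always contains 1. Intervals are pairs (lo, hi).

def imul(x, y):
    ps = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(ps), max(ps))

def irecip(x):
    """Reciprocal [1/b, 1/a] of an interval [a, b] with 0 not in [a, b]."""
    assert x[0] > 0 or x[1] < 0
    return (1.0 / x[1], 1.0 / x[0])

def contains(x, t):
    return x[0] <= t <= x[1]

d = (2.0, 4.0)
p = imul(d, irecip(d))   # [2,4] * [1/4, 1/2] = [1/2, 2]
assert p == (0.5, 2.0)
assert contains(p, 1.0)  # 1 always belongs to d * (1/d)
```

The product [1/2, 2] is not the degenerate interval [1, 1]; this is precisely why a foam can only have quasi-inverses rather than a true inverse.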
Definition 11.
Let be an interval matrix in Any element of is called a right quasi-inverse of
if it satisfies the condition . Similarly, any element of is called a left quasi-inverse of
if it satisfies the condition An interval matrix satisfying the condition is called a quasi-inverse of
. Any satisfying the condition
is called an inverse of
and then is denoted by
Of course, any right (left) inverse of is a right (left) quasi-inverse, but not conversely.
Remark 6.
Here it is possible to give the definition of a right quasi-inverse as "......". But in this case has to be a classical real-term matrix (stone), because is a minimal element in the partially ordered set . Thus, as soon as we write , we get immediately. In such a case, we arrive at the definition of the right inverse of the interval matrix . An interval matrix may have many (right or left) quasi-inverses. If a quasi-inverse of an interval matrix is an inverse, then it must be a stone; hence an inverse of an (interval) matrix, when it exists, is unique. A foam cannot have an inverse element; it can only have some quasi-inverses. Only stones may have inverses. If we want to introduce an inverse concept for all interval matrices, we have to work with the quasi-inverse concept.
Example 11. For the interval matrices and 1 are both left and right quasi-inverses of Further, is another right (left) quasi-inverse of Any closed interval (matrix) for which is a quasi-inverse of . If then exists and . However, the foam is only a quasi-inverse of
Definition 12.
(Adjoint) Let be a square interval matrix. Then the adjoint of
is written as and it is defined by
where and is a sub-interval matrix of type obtained by deleting the elements in the column and row of .
Just as we can multiply a matrix by a real number, we can similarly multiply an interval matrix by an interval. Of course, this multiplication is performed by multiplying each term of the interval matrix by the interval, i.e., . Using this multiplication, let us now give a main result.
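As an illustration, the adjoint can be computed with the classical adjugate convention (entry (i, j) is the signed minor obtained by deleting row j and column i); whether this matches the paper's exact index convention is an assumption, and all names below are illustrative.

```python
# Sketch of an interval adjoint, assuming the classical adjugate
# convention: adj(A)[i][j] = (-1)^(i+j) * det(minor with row j and
# column i deleted).

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def ineg(x):
    return (-x[1], -x[0])

def imul(x, y):
    ps = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(ps), max(ps))

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def idet(A):
    if len(A) == 1:
        return A[0][0]
    total = (0.0, 0.0)
    for j in range(len(A)):
        term = imul(A[0][j], idet(minor(A, 0, j)))
        total = iadd(total, term if j % 2 == 0 else ineg(term))
    return total

def iadj(A):
    n = len(A)
    out = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = idet(minor(A, j, i))  # note the transposed indices
            out[i][j] = d if (i + j) % 2 == 0 else ineg(d)
    return out

A = [[(1.0, 2.0), (0.0, 1.0)],
     [(-1.0, 0.0), (3.0, 4.0)]]
# for a 2x2 matrix this swaps the diagonal and negates the off-diagonal
print(iadj(A))
```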
Theorem 5. Let be a square interval matrix and let us assume that , that is, . Then is a quasi-inverse of .
Proof. By the assumption
exists and
where
Let us prove,
for
and
for
It is sufficient to prove the assertion for
. For
the proof of the assertion is similar and can be derived by induction. For
and
Observe that the diagonal elements (intervals) in
include
and the other elements include
Because each of the diagonal elements in the interval matrix
is the determinant expansion of
. That is,
Hence each diagonal element in
is
and since
, it exists and of course includes 1. For non-diagonal elements in
, consider, for example,
and observe that
We obtain the last equality by changing the order of multiplication, since interval multiplication is commutative. That is,
has the form
and so we can say
Similarly, we can see that the other non-diagonal terms also include 0. Hence we can deduce that
This means is a quasi-inverse of . □
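Theorem 5 can be verified numerically on a small example. The sketch below, under the same illustrative conventions as the earlier sketches, forms B = (1/det A)·adj A and checks that every diagonal entry of the product A·B contains 1 and every off-diagonal entry contains 0, i.e. the identity matrix lies in A·B.

```python
# Numerical check of Theorem 5 on a 2x2 example: B = (1/det A) * adj A
# is a quasi-inverse, in the sense that the identity lies in A*B.

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def ineg(x):
    return (-x[1], -x[0])

def imul(x, y):
    ps = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(ps), max(ps))

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def idet(A):
    if len(A) == 1:
        return A[0][0]
    total = (0.0, 0.0)
    for j in range(len(A)):
        term = imul(A[0][j], idet(minor(A, 0, j)))
        total = iadd(total, term if j % 2 == 0 else ineg(term))
    return total

def iadj(A):
    n = len(A)
    out = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = idet(minor(A, j, i))
            out[i][j] = d if (i + j) % 2 == 0 else ineg(d)
    return out

A = [[(1.0, 2.0), (0.0, 1.0)],
     [(-1.0, 0.0), (3.0, 4.0)]]
d = idet(A)                          # (3.0, 9.0): 0 is not in det A
inv_d = (1.0 / d[1], 1.0 / d[0])     # interval reciprocal of det A
B = [[imul(inv_d, e) for e in row] for row in iadj(A)]

n = len(A)
for i in range(n):
    for j in range(n):
        entry = (0.0, 0.0)
        for k in range(n):
            entry = iadd(entry, imul(A[i][k], B[k][j]))
        target = 1.0 if i == j else 0.0
        assert entry[0] <= target <= entry[1]
print("identity is contained in A*B")
```

Note that A·B is generally a wide enclosure of the identity, not the identity itself; this is exactly why B is only a quasi-inverse.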
Now let us give another important theorem, which we will prove with the help of this theorem.
Theorem 6.
(Interval-Cramer’s Rule) Let be a square interval matrix from into , let be an n-dimensional interval vector, and let us assume that . Then the system has a solution set such that
is an envelope including .
Proof. Since any system of linear interval equations is a family of systems of linear equations and since
implies
for each
, we can guarantee the existence of solution set
. Further, the assumption implies the existence of
Let us consider
. Then
is a quasi-inverse of
from Theorem 5. So we can conclude that
and so for any
This means explicitly
must satisfy the condition
for each
where
As a result
is the desired set (envelope) containing the solution set
. □
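The rule can be sketched in code. We assume here the classical Cramer form, in which the i-th component of the envelope is det(A_i) multiplied by the interval reciprocal of det A, where A_i replaces column i of A by b; this form, and all names, are illustrative assumptions consistent with the earlier sketches.

```python
# Sketch of an Interval Cramer's Rule envelope: x_i = det(A_i) * (1/det A),
# assuming the classical column-replacement form. The result encloses the
# solution set; it is an envelope, not the exact solution set.

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def ineg(x):
    return (-x[1], -x[0])

def imul(x, y):
    ps = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]]
    return (min(ps), max(ps))

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def idet(A):
    if len(A) == 1:
        return A[0][0]
    total = (0.0, 0.0)
    for j in range(len(A)):
        term = imul(A[0][j], idet(minor(A, 0, j)))
        total = iadd(total, term if j % 2 == 0 else ineg(term))
    return total

def interval_cramer(A, b):
    d = idet(A)
    assert d[0] > 0 or d[1] < 0, "0 must not lie in det A"
    inv_d = (1.0 / d[1], 1.0 / d[0])
    x = []
    for i in range(len(A)):
        Ai = [[b[r] if c == i else A[r][c] for c in range(len(A))]
              for r in range(len(A))]
        x.append(imul(idet(Ai), inv_d))
    return x

A = [[(1.0, 2.0), (0.0, 1.0)],
     [(-1.0, 0.0), (3.0, 4.0)]]
b = [(1.0, 1.0), (2.0, 2.0)]
print(interval_cramer(A, b))
```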
Remark 7. There may be many other interval matrices that satisfy the condition and are quasi-inverses of . Indeed, is one of the matrices that meets this condition, and it is the one obtained with the help of the Interval-Cramer’s rule. A narrower that satisfies the condition is more valuable, and the solution x obtained from it is a closer and better result. With this method, we do not determine the exact solution set of the interval equation; rather, we determine an n-dimensional envelope containing the solution set.
Let us return to the problem given in the introduction, which is Example 7.5 in [3], the mesh equation of an electrical circuit system. An envelope for the solution of this problem is given in [3]. Using the Interval Cramer’s method, let us determine an envelope covering the solution and compare the results with those of Example 7.5 in [3].
Example 12.
([3], Example 7.5.) The mesh equations for an electric circuit are expressed as
with and Here denotes resistances, denotes currents, and denotes voltages. Find enclosures for and . In [3], Example 7.5., a pair of envelopes for the currents is given as
Let us now give another envelope for this problem by using Interval Cramer’s rule. Let us consider the interval matrix and vectors
respectively. First we will get an enclosure for the solution set of the linear interval equation
where is a model (interval) vector. First of all, we must calculate . Using interval calculus we get
Since , we can say that has a solution and we can determine an envelope from the Interval-Cramer’s Rule. By this theorem again,
and so we must first calculate . Using interval calculus and the definition of the adjoint, we get
We can conclude again from the theorem that
for every So this means
Compared with the other result, we can say that the Interval-Cramer’s rule also gives a close and relatively good result.
Let us now give another main result.
Theorem 7. Let be an interval matrix and consider a system of linear interval equations .
1) If has a solution then
2) If there exists an interval vector such that and , then the system has at least one (possibly many) solution
Proof. The proof of 1) is similar to classical case. Because if
is a linear combination of the column vectors of
, it is of course a quasilinear combination. Therefore, let us just prove 2). Assume
. In this case
is in the column space of
and so it is a ql-combination of column vectors of
. This means, by the definition of ql-combination, that there exist real numbers
such that
By writing
we get
is the solution of
□
Remark 8. According to this theorem, once we find an interval vector with and satisfying the condition , we are guaranteed an envelope containing the solution of the system .
Conclusion 2. There exist definitions of the rank of interval matrices different from ours ([11,17]). In general, these are important definitions based on the ranks of the classical real-term matrices that are elements of the interval matrix. Our definition of rank comes from the definition of the rank of a quasilinear operator, obtained by first observing that an interval matrix is a quasilinear operator. We defined the rank of a quasilinear operator as the dimension of the space quasi-spanned by its range. Accordingly, we gave the definition of the rank of an interval matrix. The classical rank of a matrix is also defined in terms of linear operators, so we think that the definition of rank we give is more suitable for quasilinear algebra. Furthermore, the notion of quasi-inverse is an extension of the notion of inverse, and the Interval Cramer’s rule is a result obtained with the help of this definition. We believe that the quasilinear algebra developed in this way can provide a linear-algebra-like systematic approach to the solution of linear interval equations and to further interval matrix problems.