Preprint
Article

This version is not peer-reviewed.

Positive Matrices and Subsidy Allocation Models in Interconnected Industrial Systems

Submitted:

14 January 2026

Posted:

15 January 2026


Abstract
Linear systems with nonnegative or positive coefficients play a central role in the analysis of dynamical processes where admissible states are required to be positive. This paper studies a linear system governed by a positive matrix and interprets it as a spectral problem motivated by a subsidy allocation model. The analysis is carried out within the framework of the Perron–Frobenius theory and relies on classical results of linear algebra, in particular Perron’s theorem and Wielandt’s lemma. Using purely theoretical methods, we show that a fair allocation is characterized by a positive eigenvector associated with the spectral radius of the underlying matrix. The positivity and primitivity of the matrix guarantee the existence and uniqueness of this eigenvector up to scaling, while the convergence of matrix powers ensures the stability of the resulting allocation independently of initial conditions. These results demonstrate that fairness and stability arise as intrinsic consequences of the spectral structure of positive matrices. The paper provides a rigorous mathematical interpretation of equilibrium and stability in linear dynamical systems and illustrates the relevance of positive matrix theory in the study of structured linear models.

1. Introduction

Many processes in economics, industry, and other applied fields can be modeled as linear systems with nonnegative or positive coefficients. Such systems naturally lead to the use of nonnegative matrices, whose entries describe transfers, costs, or influences among the individual components of the system. Within this framework, the dynamics of the system are determined by matrix powers or by solving the associated linear systems, and therefore the spectral properties of the matrix play a crucial role in the analysis of the long-term behavior of the system [3,4,7].
A central tool in the study of positive matrices is the Perron–Frobenius theory. Perron’s theorem guarantees the existence of a dominant eigenvalue and an associated positive eigenvector, while Frobenius’ extensions allow the treatment of a broader class of nonnegative matrices [5]. These results enable the unique determination of stable distributions and equilibrium states in systems whose interactions are described by positive coefficients. The spectral separation of the dominant eigenvalue is also essential for the convergence of matrix power sequences and for the stability of solutions [1].
In this paper, we apply the theory of positive matrices to the analysis of a subsidy allocation problem formulated as a linear system with positive coefficients. We show that a fair allocation of subsidies is equivalent to finding a positive eigenvector of the corresponding matrix of cost or transfer coefficients. Furthermore, we demonstrate that the primitivity of the matrix guarantees the uniqueness of the solution and the aperiodic convergence of the system, which follows directly from Wielandt’s lemma and Perron’s theorem [6,8]. In this way, we establish a clear mathematical connection between spectral theory and applied models of economic networks.
The objective of this paper is to analyze linear systems governed by positive matrices using spectral theory. We show that fair and stable subsidy allocation in interconnected systems can be formulated as a spectral problem associated with a positive matrix of cost or transfer coefficients. By applying Perron’s theorem and Wielandt’s lemma, we establish the existence and uniqueness of positive solutions, study the convergence of matrix powers, and characterize the stability of equilibrium states. The paper thus provides a rigorous mathematical link between spectral theory and structured linear models.

2. Theoretical Framework

The theoretical framework of the paper is based on the spectral theory of nonnegative and positive matrices, which constitutes a central part of linear algebra and the theory of linear operators in finite-dimensional spaces. Attention is devoted to the Perron–Frobenius theory, which describes the structure of eigenvalues and eigenvectors of matrices with nonnegative or positive entries, as well as their convergence properties.
A fundamental result in this area is Perron’s theorem [5], which guarantees that every real positive matrix has a dominant eigenvalue equal to its spectral radius, together with an associated eigenvector with strictly positive components. Moreover, the eigenspace corresponding to this eigenvalue is one-dimensional, which ensures the uniqueness of the positive eigenvector up to a multiplicative constant. This result was extended in the work of Frobenius [1], who considered a broader class of nonnegative matrices and introduced the notions of reducibility, primitivity, and periodicity.
An important refinement of the Perron–Frobenius theory is provided by Wielandt’s lemma [9], which ensures the spectral separation of the dominant eigenvalue. More precisely, the spectral radius is the only eigenvalue of the matrix with maximal absolute value, a property that is crucial for proving the convergence of sequences of matrix powers and the stability of linear dynamical systems. This result allows for a rigorous analysis of the long-term behavior of systems described by recursive matrix relations.
The modern development of the theory of positive matrices also relies on the connection between the spectral radius, invariant cones, and matrix norms. In this context, results that relate spectral properties to norm estimates play an important role, as they allow convergence to be established without additional regularity assumptions [2,7]. Such an approach is standard in contemporary matrix theory and operator theory.
A comprehensive and systematic treatment of the Perron–Frobenius theory and its extensions can be found in the classical monographs by Gantmacher, Seneta, Horn–Johnson, and Meyer [2,3,4,7]. These works provide the mathematical foundation for the analysis presented in this paper and enable a rigorous, proof-oriented treatment of the spectral properties of positive matrices in finite-dimensional spaces.
The contribution of this paper is situated within this theoretical framework through the consistent application of classical results from the Perron–Frobenius theory. The results obtained are direct consequences of these theorems.

3. Materials and Methods

In this paper, we employ methods from linear algebra and the spectral theory of positive matrices. The models under consideration are based on finite real matrices with nonnegative or positive entries, which describe transfer or cost relationships among the individual components of the system. The analysis is purely theoretical in nature and does not involve empirical or statistical data.
The central mathematical tool is the Perron–Frobenius theory. We use Perron’s theorem for positive matrices to establish the existence of a dominant eigenvalue and an associated positive eigenvector, and Wielandt’s lemma to analyze spectral properties and the convergence of matrix powers. Attention is devoted to conditions of primitivity, which ensure the uniqueness of the positive eigenvector and the aperiodic convergence of linear dynamical processes.
The models are formulated as linear systems or recursive relations describing the temporal evolution of the system states. Long-term behavior is analyzed through sequences of matrix powers and their spectral properties. The proofs rely on standard methods of linear algebra, including matrix norms, eigenvalues, eigenvectors, and properties of invariant subspaces.
To illustrate the theory, we consider deterministic example models of industrial and regional systems in which economic interactions are formalized using positive matrices. These examples serve exclusively as mathematical motivation and do not constitute empirical validation of the model. All results are established analytically, without the use of numerical simulations or experimental methods.

4. Results

In this section, we present the main results of the paper through two model examples that illustrate the application of the theory of nonnegative and positive matrices in the analysis of linear systems. The purpose of both examples is to demonstrate how the spectral properties of matrices are reflected in the long-term behavior of the system, as well as in the existence and structure of stable distributions.
In the first example, we consider a dynamic model of goods redistribution among regions, which is described by a column-stochastic matrix. We analyze the convergence of the sequence of matrix powers and show that the long-term distribution of goods corresponds to the limiting state of the system determined by the spectral properties of the matrix.
In the second example, we examine a subsidy allocation problem in an industrial system consisting of three factories. The problem is formulated as a linear system with a positive matrix of cost coefficients. We show that a fair and optimal subsidy allocation is equivalent to a positive eigenvector associated with the spectral radius of the matrix, and that Perron’s theorem guarantees the existence and uniqueness of such a solution.
Taken together, these two examples illustrate that the theory of positive matrices provides a unified and mathematically rigorous approach to the analysis of stability, equilibrium, and optimal allocations in linear dynamical systems.

4.1. Example 1

We consider four regions R_1, R_2, R_3, R_4 among which goods are redistributed on a daily basis, while the total quantity of goods remains constant. Each region retains a portion of the goods and distributes the remaining part to the other three regions.
The entry c_ij represents the proportion of goods that region R_j allocates to region R_i each day. The diagonal entries c_jj correspond to the proportion of goods retained by each region, while the off-diagonal entries represent the proportions sent to the other regions:

C = \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{pmatrix}.

Since all goods in each region are either retained or distributed to other regions, the following condition must hold:

\sum_{i=1}^{4} c_{ij} = 1 \quad \text{for each } j,

that is, the sum of each column of the matrix is equal to 1. A matrix of this form allows all four regions to engage in mutual trade daily. We now illustrate this structure with the following example:

C = \begin{pmatrix} 0.1 & 0.3 & 0 & 0 \\ 0.7 & 0.6 & 0 & 0 \\ 0.1 & 0 & 1 & 0 \\ 0.1 & 0.1 & 0 & 1 \end{pmatrix}.

The first column of C indicates that region R_1 retains 10% of its goods each day, sends 70% to region R_2, 10% to region R_3, and 10% to region R_4; indeed, 0.1 + 0.7 + 0.1 + 0.1 = 1.
The second column shows that region R_2 retains 60% of its goods each day, sends 30% to region R_1, nothing to region R_3, and 10% to region R_4; again, 0.3 + 0.6 + 0 + 0.1 = 1.
The third column indicates that region R_3 retains all of its goods each day and sends nothing to regions R_1, R_2, or R_4; its column sum is equal to 1.
The fourth column shows that region R_4 likewise retains all of its goods each day and sends nothing to regions R_1, R_2, or R_3; its column sum is also equal to 1.
Let v_1, v_2, v_3, and v_4 denote the quantities of goods in regions R_1, R_2, R_3, and R_4 on the current day. We now ask how the distribution of goods among all four regions will look after two weeks or after one month.
The quantity c_ij v_j represents the amount of goods sent from region R_j to region R_i on the current day. In particular, c_{1j} v_j is the amount sent from region R_j to region R_1, c_{2j} v_j the amount sent to region R_2, c_{3j} v_j the amount sent to region R_3, and c_{4j} v_j the amount sent to region R_4. The sum

\sum_{j=1}^{4} c_{1j} v_j = c_{11} v_1 + c_{12} v_2 + c_{13} v_3 + c_{14} v_4

is therefore the quantity of goods in region R_1 after all trades on the first day have been completed. Similarly, after the first day region R_2 holds c_{21} v_1 + c_{22} v_2 + c_{23} v_3 + c_{24} v_4, region R_3 holds c_{31} v_1 + c_{32} v_2 + c_{33} v_3 + c_{34} v_4, and region R_4 holds c_{41} v_1 + c_{42} v_2 + c_{43} v_3 + c_{44} v_4. Trading on the second day then starts from these updated quantities.
In general, with v = (v_1, v_2, v_3, v_4)^T:
C v gives the distribution of goods in all four regions at the end of the first day,
C^2 v gives the distribution at the end of the second day,
C^3 v gives the distribution at the end of the third day, and
C^n v gives the distribution at the end of the n-th day.
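The day-by-day evolution C^n v is easy to check numerically. The following sketch uses the matrix C from this example; the initial stock vector is a hypothetical choice made purely for illustration.

```python
import numpy as np

# Column-stochastic redistribution matrix C from the example above
# (entry C[i, j] is the share region R_{j+1} sends to R_{i+1} each day).
C = np.array([
    [0.1, 0.3, 0.0, 0.0],
    [0.7, 0.6, 0.0, 0.0],
    [0.1, 0.0, 1.0, 0.0],
    [0.1, 0.1, 0.0, 1.0],
])

# Hypothetical initial stocks v: 100 units of goods in each region.
v = np.array([100.0, 100.0, 100.0, 100.0])

# C^n v gives the distribution at the end of day n; iterate for two weeks.
state = v.copy()
for _ in range(14):
    state = C @ state

print(state)        # goods drain from R1, R2 toward R3 and especially R4
print(state.sum())  # the total quantity of goods is conserved
```

Because each column of C sums to 1, the total quantity of goods is preserved at every step, which the final print confirms.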
We now write the matrix C in block form as

C = \begin{pmatrix} X & 0 \\ Y & I \end{pmatrix}, \quad \text{where}

X = \begin{pmatrix} 0.1 & 0.3 \\ 0.7 & 0.6 \end{pmatrix}, \quad Y = \begin{pmatrix} 0.1 & 0 \\ 0.1 & 0.1 \end{pmatrix}, \quad 0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.

Claim: for all n ∈ ℕ,

C^n = \begin{pmatrix} X^n & 0 \\ Y \sum_{j=0}^{n-1} X^j & I \end{pmatrix}.

We prove this by mathematical induction. For n = 1 the statement reduces to the block form of C itself, so it holds in this case. Assume that the claim is true for some n ∈ ℕ. Then

C^{n+1} = C^n \cdot C = \begin{pmatrix} X^n & 0 \\ Y \sum_{j=0}^{n-1} X^j & I \end{pmatrix} \begin{pmatrix} X & 0 \\ Y & I \end{pmatrix} = \begin{pmatrix} X^{n+1} & 0 \\ Y \bigl( \sum_{j=0}^{n-1} X^j \bigr) X + Y & I \end{pmatrix} = \begin{pmatrix} X^{n+1} & 0 \\ Y \sum_{j=1}^{n} X^j + Y & I \end{pmatrix} = \begin{pmatrix} X^{n+1} & 0 \\ Y \sum_{j=0}^{n} X^j & I \end{pmatrix},

which is the claim for n + 1. This completes the induction, and the claim is proved.
For an n × n matrix A = (a_ij), define

\|A\|_1 = \max_{j=1,\dots,n} \sum_{i=1}^{n} |a_{ij}|,

the maximum absolute column sum. This is a subordinate matrix norm; hence it is submultiplicative:

\|A B\|_1 \le \|A\|_1 \|B\|_1,

and therefore

\|A^k\|_1 \le \|A\|_1^k.

Clearly,

\|X\|_1 = \max(0.1 + 0.7, \; 0.3 + 0.6) = 0.9.

Hence

\|X^n\|_1 \le \|X\|_1^n = 0.9^n,

and consequently \lim_{n\to\infty} \|X^n\|_1 = 0, so X^n \to 0.
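These norm estimates can be checked numerically; a minimal sketch using NumPy's built-in matrix 1-norm (the maximum absolute column sum):

```python
import numpy as np

X = np.array([[0.1, 0.3],
              [0.7, 0.6]])

# ord=1 gives the subordinate 1-norm: the maximum absolute column sum,
# here attained by the second column (0.3 + 0.6).
norm_X = np.linalg.norm(X, 1)
print(norm_X)

# Submultiplicativity: ||X^n||_1 <= ||X||_1^n = 0.9^n, so X^n -> 0.
for n in (5, 10, 20):
    Xn = np.linalg.matrix_power(X, n)
    assert np.linalg.norm(Xn, 1) <= 0.9 ** n + 1e-12
```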
Next, we show that the series \sum_{j=0}^{\infty} X^j converges. It suffices to prove that \sum_{j=0}^{\infty} \|X^j\|_1 converges:

\sum_{j=0}^{\infty} \|X^j\|_1 \le \sum_{j=0}^{\infty} \|X\|_1^j = \sum_{j=0}^{\infty} 0.9^j = \frac{1}{1 - 0.9} = 10,

and therefore \sum_{j=0}^{\infty} X^j converges. Let us prove that

\sum_{j=0}^{\infty} X^j = (I - X)^{-1}.

Calculate:

(I - X) \cdot \sum_{j=0}^{\infty} X^j = \lim_{n\to\infty} (I - X)(I + X + \dots + X^n) = \lim_{n\to\infty} (I - X^{n+1}) = I.

Similarly,

\Bigl( \sum_{j=0}^{\infty} X^j \Bigr)(I - X) = \lim_{n\to\infty} (I + X + \dots + X^n)(I - X) = \lim_{n\to\infty} (I - X^{n+1}) = I.
From the above, it follows that

\lim_{n\to\infty} C^n = \lim_{n\to\infty} \begin{pmatrix} X^n & 0 \\ Y \sum_{j=0}^{n-1} X^j & I \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ Y (I - X)^{-1} & I \end{pmatrix}.
Calculating,

I - X = \begin{pmatrix} 0.9 & -0.3 \\ -0.7 & 0.4 \end{pmatrix}, \quad \det(I - X) = 0.9 \cdot 0.4 - 0.3 \cdot 0.7 = 0.15 = \frac{3}{20},

so

(I - X)^{-1} = \frac{20}{3} \begin{pmatrix} 0.4 & 0.3 \\ 0.7 & 0.9 \end{pmatrix} = \begin{pmatrix} 8/3 & 2 \\ 14/3 & 6 \end{pmatrix},

and

Y (I - X)^{-1} = \begin{pmatrix} 0.1 & 0 \\ 0.1 & 0.1 \end{pmatrix} \begin{pmatrix} 8/3 & 2 \\ 14/3 & 6 \end{pmatrix} = \begin{pmatrix} 4/15 & 1/5 \\ 11/15 & 4/5 \end{pmatrix}.
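The inverse computed by hand, and its agreement with the Neumann series, can be verified numerically:

```python
import numpy as np

X = np.array([[0.1, 0.3],
              [0.7, 0.6]])

# (I - X)^{-1} should match the hand computation [[8/3, 2], [14/3, 6]].
inv = np.linalg.inv(np.eye(2) - X)
expected = np.array([[8/3, 2.0],
                     [14/3, 6.0]])
assert np.allclose(inv, expected)

# The Neumann series sum_{j=0}^{N} X^j converges to the same matrix.
partial = sum(np.linalg.matrix_power(X, j) for j in range(200))
assert np.allclose(partial, expected)
print(inv)
```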
Therefore,

\lim_{n\to\infty} C^n = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 4/15 & 1/5 & 1 & 0 \\ 11/15 & 4/5 & 0 & 1 \end{pmatrix},

from which we obtain

\lim_{n\to\infty} C^n v = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 4/15 & 1/5 & 1 & 0 \\ 11/15 & 4/5 & 0 & 1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \end{pmatrix} = \Bigl( 0, \; 0, \; \frac{4 v_1 + 3 v_2 + 15 v_3}{15}, \; \frac{11 v_1 + 12 v_2 + 15 v_4}{15} \Bigr)^T.

After a large number of days, the distribution of goods will be close to this limiting distribution. Most of the goods will therefore be concentrated in the third and fourth regions, and all goods initially located in regions R_3 and R_4 remain there. Of the goods that gradually migrate from the first two regions to the last two, region R_4 receives a significantly larger share than region R_3.
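A direct numerical check compares a high power of C against the closed-form limiting matrix:

```python
import numpy as np

C = np.array([
    [0.1, 0.3, 0.0, 0.0],
    [0.7, 0.6, 0.0, 0.0],
    [0.1, 0.0, 1.0, 0.0],
    [0.1, 0.1, 0.0, 1.0],
])

# Closed-form limit of C^n derived above.
limit = np.array([
    [0.0,   0.0, 0.0, 0.0],
    [0.0,   0.0, 0.0, 0.0],
    [4/15,  1/5, 1.0, 0.0],
    [11/15, 4/5, 0.0, 1.0],
])

# After many days C^n is numerically indistinguishable from the limit.
C500 = np.linalg.matrix_power(C, 500)
assert np.allclose(C500, limit, atol=1e-12)
print(C500)
```

The fast convergence reflects the bound ‖X^n‖_1 ≤ 0.9^n on the only block of C whose powers do not stay fixed.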

4.2. Example 2 (Subsidy Allocation Problem)

Suppose that we have three factories T_1, T_2, T_3 that produce different products and trade intermediate goods among themselves. The state provides support to all three factories in the form of subsidies.
The subsidies are distributed fairly: each factory uses only its own share of the subsidy for its production, while at the same time purchasing subsidized intermediate goods from the other factories. In return, the state requires the total production to be as large as possible, which is achieved under a fair allocation of subsidies.
Let us now examine the matrix

(c_{ij}) = \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix}.

The entry c_ij represents the cost incurred by the j-th factory in producing one unit of output due to the use of products from the i-th factory. Naturally, all coefficients satisfy c_ij > 0. The diagonal entries c_ii correspond to the costs of using a factory’s own products, while the off-diagonal entries represent the costs of using products from other factories.
We now ask whether the subsidy is distributed fairly. Each factory receives a subsidy directly from the state. However, its production is also subsidized indirectly, since it uses both its own products and the products of other factories in its production process, and these products have already been subsidized.
If the i-th factory produces u_i units per year, then its indirect benefit from subsidies is given by

(c_{1i} + c_{2i} + c_{3i}) u_i = \gamma_i u_i.

At the same time, the other factories also derive an indirect benefit from subsidies through their use of the products of the i-th factory. In this way, factories T_1, T_2, and T_3 obtain a total indirect benefit

c_{i1} u_1 + c_{i2} u_2 + c_{i3} u_3

from the production of the i-th factory.
For the allocation of subsidies to be fair, the indirect benefit received by each factory must equal the indirect benefit it provides to the other factories through the use of its products. Since this condition must hold for every factory, the following system must be satisfied:

\gamma_1 u_1 - c_{11} u_1 - c_{12} u_2 - c_{13} u_3 = 0,
\gamma_2 u_2 - c_{21} u_1 - c_{22} u_2 - c_{23} u_3 = 0,
\gamma_3 u_3 - c_{31} u_1 - c_{32} u_2 - c_{33} u_3 = 0,

where

\gamma_1 = c_{11} + c_{21} + c_{31}, \quad \gamma_2 = c_{12} + c_{22} + c_{32}, \quad \gamma_3 = c_{13} + c_{23} + c_{33}.
We seek solutions of the linear system

u_1 = (c_{11} u_1 + c_{12} u_2 + c_{13} u_3) / \gamma_1,
u_2 = (c_{21} u_1 + c_{22} u_2 + c_{23} u_3) / \gamma_2,
u_3 = (c_{31} u_1 + c_{32} u_2 + c_{33} u_3) / \gamma_3.

Moreover, we impose the constraints u_1 > 0, u_2 > 0, and u_3 > 0, since the number of units produced by each factory must be positive.
At the same time, we wish to satisfy the state’s requirement that the total production of all factories be as large as possible. Hence, we seek a positive triple (u_1, u_2, u_3) solving the above linear system such that, for every ordered positive triple (x_1, x_2, x_3) that also solves the system, namely

0 < x_1 = (c_{11} x_1 + c_{12} x_2 + c_{13} x_3) / \gamma_1,
0 < x_2 = (c_{21} x_1 + c_{22} x_2 + c_{23} x_3) / \gamma_2,
0 < x_3 = (c_{31} x_1 + c_{32} x_2 + c_{33} x_3) / \gamma_3,

the inequality u_1 + u_2 + u_3 \ge x_1 + x_2 + x_3 holds.
Does this system have a solution?
In 1907, the German mathematician O. Perron proved a fundamental theorem for real matrices A whose entries are all positive [5]. Among other statements, Perron’s theorem guarantees that for such matrices A, the eigenvalue equation

A x = \rho(A) x

has a solution x with strictly positive components, i.e., x_i > 0 for all i.
Here \rho(A) denotes the spectral radius of the matrix A. Perron’s theorem also states that the eigenspace corresponding to the eigenvalue \rho(A) is one-dimensional.
The matrix

A = \begin{pmatrix} c_{11}/\gamma_1 & c_{12}/\gamma_1 & c_{13}/\gamma_1 \\ c_{21}/\gamma_2 & c_{22}/\gamma_2 & c_{23}/\gamma_2 \\ c_{31}/\gamma_3 & c_{32}/\gamma_3 & c_{33}/\gamma_3 \end{pmatrix}

has all entries positive. By Perron’s theorem, there exists a vector x = (x_1, x_2, x_3)^T with strictly positive components such that

(c_{11} x_1 + c_{12} x_2 + c_{13} x_3) / \gamma_1 = \rho(A) \, x_1,
(c_{21} x_1 + c_{22} x_2 + c_{23} x_3) / \gamma_2 = \rho(A) \, x_2,
(c_{31} x_1 + c_{32} x_2 + c_{33} x_3) / \gamma_3 = \rho(A) \, x_3.
Let us denote g = (\gamma_1, \gamma_2, \gamma_3)^T. Since (g^T A)_j = \sum_i \gamma_i \cdot c_{ij}/\gamma_i = \sum_i c_{ij} = \gamma_j, it is clear that

g^T A = g^T.

Therefore,

g^T x = (g^T A) x = g^T (A x) = \rho(A) \, g^T x.

Since the scalar g^T x is positive, it follows that \rho(A) = 1.
Every vector of the form u = \tau x, where \tau > 0, is a solution of the system

u_1 = (c_{11} u_1 + c_{12} u_2 + c_{13} u_3) / \gamma_1,
u_2 = (c_{21} u_1 + c_{22} u_2 + c_{23} u_3) / \gamma_2,
u_3 = (c_{31} u_1 + c_{32} u_2 + c_{33} u_3) / \gamma_3

and, of course, satisfies the inequalities u_1 > 0, u_2 > 0, u_3 > 0.
Perron’s theorem further states that the eigenspace corresponding to the eigenvalue \rho(A), that is, the solution set of A y = \rho(A) y, is one-dimensional. Consequently, every solution of the system above under the constraints u_1 > 0, u_2 > 0, u_3 > 0 is a positive scalar multiple of the vector x.
The sum \tau x_1 + \tau x_2 + \tau x_3 represents the total production of all three factories combined. The cost of this production is given by

\gamma_1 \tau x_1 + \gamma_2 \tau x_2 + \gamma_3 \tau x_3.

This cost cannot exceed the total state subsidy \sigma. Therefore, we choose

u_i = \frac{\sigma}{\gamma_1 x_1 + \gamma_2 x_2 + \gamma_3 x_3} \, x_i, \quad i = 1, 2, 3.
With this choice, u = (u_1, u_2, u_3) solves the system, satisfies the constraints u_1 > 0, u_2 > 0, u_3 > 0, exhausts the subsidy exactly, and is optimal: among all positive solutions whose cost does not exceed \sigma, it maximizes the total production. In this case, the i-th factory receives \gamma_i u_i as its share of the total subsidy.
Perron’s theorem thus provides an answer to the question posed earlier: the subsidy allocation problem is always solvable, regardless of the choice of the parameters c_ij.
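The whole allocation scheme can be illustrated end to end. The cost matrix below is a hypothetical example (the paper fixes no numerical values); everything else follows the construction above: A has entries c_ij/γ_i, the identity g^T A = g^T forces ρ(A) = 1, and the subsidy σ is split in proportion to the Perron vector.

```python
import numpy as np

# Hypothetical positive cost coefficients c_ij (illustrative only).
c = np.array([[0.2, 0.3, 0.1],
              [0.4, 0.1, 0.3],
              [0.1, 0.2, 0.4]])

gamma = c.sum(axis=0)        # gamma_j = c_1j + c_2j + c_3j (column sums)
A = c / gamma[:, None]       # a_ij = c_ij / gamma_i

# g^T A = g^T holds by construction, which forces rho(A) = 1.
assert np.allclose(gamma @ A, gamma)

# Perron eigenvector: eigenvector for the eigenvalue of maximal modulus.
vals, vecs = np.linalg.eig(A)
k = np.argmax(np.abs(vals))
rho = vals[k].real
x = np.abs(vecs[:, k].real)  # positive after fixing the sign
assert abs(rho - 1.0) < 1e-10
assert np.allclose(A @ x, x)

# Fair allocation exhausting a total subsidy sigma (hypothetical value).
sigma = 1000.0
u = sigma * x / (gamma @ x)  # production levels u_i
shares = gamma * u           # factory i receives gamma_i * u_i
print(u)
print(shares, shares.sum())  # the shares add up to sigma
```

By construction, the shares γ_i u_i sum to exactly σ, matching the cost constraint derived above.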
Perron’s theorem will be stated precisely and discussed in detail in the following section.

5. Perron’s Theorem and Wielandt’s Lemma

Positive Matrices
Throughout this section, let A ∈ ℝ^{k×k} be a positive matrix. We denote by \rho(A) the spectral radius of A, defined by

\rho(A) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.

Perron’s Theorem
If A > 0, then \rho(A) is an eigenvalue of A. The corresponding eigenspace is one-dimensional. Moreover, A has a positive eigenvector, and \rho(A) > |\lambda| for every eigenvalue \lambda of A with \lambda \ne \rho(A).
We first prove Wielandt’s lemma.
Wielandt’s Lemma:
  • The spectral radius \rho(A) is a positive eigenvalue of A and admits a positive eigenvector.
  • The absolute value of any other eigenvalue of A is strictly smaller than \rho(A).
We begin by proving the first statement of Wielandt’s lemma.
Suppose that \mu is an eigenvalue of A with maximal absolute value. Then

|\mu| = \rho(A) \quad \text{and} \quad A x = \mu x

for some nonzero complex vector x. Define

p = ( |x_1|, |x_2|, \dots, |x_k| )^T;

we will show that p is a positive eigenvector of A associated with \rho(A). For each 1 \le i \le k we have \mu x_i = \sum_j a_{ij} x_j, and hence

(\rho(A) p)_i = |\mu| |x_i| = |\mu x_i| = \Bigl| \sum_j a_{ij} x_j \Bigr| \le \sum_j a_{ij} |x_j| = (A p)_i.

Therefore,

A p \ge \rho(A) p,

which implies

(A - \rho(A) I) p \ge 0.
We now show that this inequality is in fact an equality. We argue by contradiction. Let

z = (A - \rho(A) I) p

and assume that z \ne 0. Since z \ge 0, z \ne 0, and the matrix A has strictly positive entries, it follows that A z > 0. Because A p > 0 as well, there exists a positive number \varepsilon such that

A z \ge \varepsilon A p.

Now,

A (A - \rho(A) I) p = A z,
A^2 p - \rho(A) A p = A z,
A^2 p = A z + \rho(A) A p \ge \varepsilon A p + \rho(A) A p = (\varepsilon + \rho(A)) A p.

Let

B = (\varepsilon + \rho(A))^{-1} A.

Then B (A p) \ge A p, and since B is a positive matrix, it follows that

B^n (A p) \ge A p \quad \text{for all } n \ge 1.

For any scalar \tau and any matrix X, we have \rho(\tau X) = |\tau| \rho(X). Hence

\rho(B) = (\varepsilon + \rho(A))^{-1} \rho(A) < 1,

since \varepsilon > 0, and therefore \lim_{n\to\infty} B^n = 0.
Letting n \to \infty in B^n (A p) \ge A p gives 0 \ge A p, which contradicts A p > 0. Hence z = 0, that is, A p = \rho(A) p. Since A p > 0, we have \rho(A) > 0 and p = \rho(A)^{-1} A p > 0, so p is a positive eigenvector. This proves the first statement of Wielandt’s lemma.
We now prove the second statement of Wielandt’s lemma.
Suppose that \lambda is an eigenvalue such that

|\lambda| = \rho(A),

and let y denote a corresponding eigenvector. As before, define

q_i = |y_i|, \quad i = 1, \dots, k.

For the first coordinate we have

(\rho(A) q)_1 = |\lambda y_1| = \Bigl| \sum_j a_{1j} y_j \Bigr| \le \sum_j a_{1j} |y_j| = \sum_j a_{1j} q_j = (A q)_1.

Since the same reasoning applies to all coordinates, A q \ge \rho(A) q, and by the argument used in the proof of the first statement this inequality is in fact an equality:

A q = \rho(A) q.

Consequently,

\Bigl| \sum_j a_{1j} y_j \Bigr| = \sum_j a_{1j} |y_j|.

All numbers a_{11}, a_{12}, \dots, a_{1k} are positive real numbers, so equality in the triangle inequality forces the arguments of the complex numbers y_j to be equal. This implies that each y_j admits a polar representation

y_j = |y_j| e^{i\theta},

where the angle \theta is independent of j. Hence

y = e^{i\theta} q.

Thus, y is a scalar multiple of the positive eigenvector q corresponding to the eigenvalue \rho(A). Since A y = \lambda y and A q = \rho(A) q, it follows that \lambda = \rho(A). This completes the proof of the second statement of Wielandt’s lemma.
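The strict spectral gap asserted by Wielandt's second statement is easy to observe numerically for a randomly generated positive matrix (an illustrative check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 5)) + 0.1     # entries in (0.1, 1.1): strictly positive

# Eigenvalue moduli, sorted in decreasing order.
mods = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
rho = mods[0]                    # spectral radius

# Exactly one eigenvalue attains the spectral radius; all others are
# strictly smaller in absolute value.
assert rho > mods[1]
print(rho, mods[1])
```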
Theorem 1.
The sequence of powers of the matrix \rho(A)^{-1} A is convergent.
We now prove Theorem 1.
Let B = \rho(A)^{-1} A. Then \rho(B) = 1. Wielandt’s lemma applies to B, and we conclude that 1 is an eigenvalue of B with a positive eigenvector, and that |\lambda| < 1 for every eigenvalue \lambda \ne 1.
We first show that the sequence of powers of B is bounded. Let y be a positive eigenvector associated with the eigenvalue 1. Then

B^n y = y \quad \text{for all } n.

Set y_s = \max_i y_i and y_t = \min_i y_i, and write B^n = (b^{(n)}_{ij}). For all i, j we have

y_s \ge y_i = \sum_l b^{(n)}_{il} y_l \ge b^{(n)}_{ij} y_j \ge b^{(n)}_{ij} y_t,

and hence

0 < b^{(n)}_{ij} \le y_s / y_t \quad \text{for all } n,

so the sequence of powers of B is indeed bounded. Boundedness implies that the eigenvalue 1 is semisimple: a Jordan block of size greater than one associated with an eigenvalue of modulus 1 would make the powers unbounded. Finally, because |\lambda| < 1 for all eigenvalues \lambda \ne 1, the sequence of powers of \rho(A)^{-1} A converges, namely to the spectral projection onto the eigenspace of the eigenvalue 1.
This completes the proof of Theorem 1.
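Theorem 1 can be illustrated numerically: for a random positive matrix, the powers of ρ(A)^{-1} A settle down to a fixed rank-one matrix, the projection onto the Perron eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4)) + 0.1         # strictly positive matrix

rho = np.abs(np.linalg.eigvals(A)).max()
B = A / rho                          # B = rho(A)^{-1} A, so rho(B) = 1

# High powers of B no longer change, and the limit has rank one.
P = np.linalg.matrix_power(B, 200)
P_next = np.linalg.matrix_power(B, 201)

assert np.allclose(P, P_next)
assert np.linalg.matrix_rank(P, tol=1e-8) == 1
print(P)
```

The rank-one limit is exactly what simplicity of the dominant eigenvalue (the corollary below) predicts: the spectral projection onto a one-dimensional eigenspace.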
Corollary. The spectral radius \rho(A) is a simple eigenvalue of A.
We now prove the corollary. Consider again the matrix

B = \rho(A)^{-1} A.

In the proof of Theorem 1 we observed that the powers of B are bounded, so the geometric multiplicity of the eigenvalue 1 coincides with its algebraic multiplicity, and |\lambda_i| < 1 for i = 1, 2, \dots, l, where \lambda_1, \lambda_2, \dots, \lambda_l are all remaining eigenvalues of B.
There exists a vector p > 0 such that B p = p, and therefore the multiplicity of the eigenvalue 1 is at least one. Suppose that it is greater than one. Since the eigenvalue is semisimple, there then exists a real vector q such that B q = q and q is not a scalar multiple of p. Define

\tau = \max_i ( q_i / p_i )

and choose an index j such that q_j / p_j = \tau. Then \tau p - q \ge 0, \tau p - q \ne 0 (because q is not a multiple of p), and (\tau p - q)_j = 0. Since B has strictly positive entries and \tau p - q is nonnegative and nonzero,

B (\tau p - q) > 0.

But B (\tau p - q) = \tau B p - B q = \tau p - q, so \tau p - q > 0; in particular \tau p_j > q_j, which contradicts the choice of j.
Hence the eigenvalue 1 of B is simple, and it follows that \rho(A) is a simple eigenvalue of A.
This completes the proof of the corollary.
The proof of Perron’s theorem consists of Wielandt’s lemma together with this corollary.

6. Conclusions

In this paper, we investigated linear systems described by positive matrices and showed that their spectral properties play a decisive role in determining the long-term behaviour of the system. By applying Perron’s theorem and Wielandt’s lemma, we established the existence of a dominant eigenvalue and a unique positive eigenvector, which represents a stable equilibrium state of the models under consideration.
We demonstrated that the problem of fair and stable subsidy allocation in interconnected industrial systems can be naturally reduced to a spectral problem involving a positive matrix of cost or transfer coefficients. The optimal subsidy allocation is uniquely determined by the eigenvector associated with the spectral radius of the matrix, while the convergence of matrix power sequences ensures the stability of the solution independently of initial conditions.
The results confirm that the theory of positive matrices provides a rigorous mathematical framework for the analysis of stability, equilibria, and optimal allocations in linear dynamical systems. In the models considered, fairness and stability are not additional assumptions but direct consequences of the spectral structure of the underlying matrix.
Future research could focus on extending the presented models to nonnegative, but not necessarily positive, matrices and on analysing the impact of reducibility and periodicity on the structure of solutions. It would also be of interest to investigate the relationship between spectral properties of matrices and the robustness of systems under structural changes of the coefficients.

References

  1. Frobenius, G. Über Matrizen aus nicht negativen Elementen. Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften 1912, 456–477.
  2. Gantmacher, F.R. The Theory of Matrices, Vol. II; Chelsea Publishing: New York, NY, USA, 1959.
  3. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2013.
  4. Meyer, C.D. Matrix Analysis and Applied Linear Algebra; SIAM: Philadelphia, PA, USA, 2000.
  5. Perron, O. Zur Theorie der Matrices. Mathematische Annalen 1907, 64, 248–263.
  6. Pullman, N.J. Matrix Theory and Its Applications; Marcel Dekker: New York, NY, USA, 1976.
  7. Seneta, E. Non-negative Matrices and Markov Chains; Springer: New York, NY, USA, 2006.
  8. Wielandt, H. Finite Permutation Groups; Academic Press: New York, NY, USA, 1964.
  9. Wielandt, H. Unzerlegbare, nicht negative Matrizen. Mathematische Zeitschrift 1950, 52, 642–648.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.