Preprint

Truncating and Shifting Weights for Max-Plus Automata

This version is not peer-reviewed. A peer-reviewed article of this preprint also exists.

Submitted: 22 December 2025. Posted: 22 December 2025.
Abstract
In this paper, for any real number $\lambda$, we transform the complete max-plus semiring $\mathbb{R}_\infty$ into a commutative, complete, additively idempotent semiring $\mathbb{R}_\infty^\lambda$, called the lower $\lambda$-truncation of $\mathbb{R}_\infty$. It is obtained by removing from $\mathbb{R}_\infty$ all real numbers smaller than $\lambda$, inheriting the addition operation, shifting the original products by $-\lambda$, and appropriately modifying the residuum operation. The purpose of lower truncations is to transfer the iterative procedures for computing the greatest presimulations and prebisimulations between max-plus automata, in cases where they cannot be completed in a finite number of iterations over $\mathbb{R}_\infty$, to $\mathbb{R}_\infty^\lambda$, where they could terminate in a finite number of iterations. For instance, we prove that this necessarily happens when working with max-plus automata with integer weights. We also show how presimulations and prebisimulations computed over $\mathbb{R}_\infty^\lambda$ can be transformed into presimulations and prebisimulations between the original automata over $\mathbb{R}_\infty$. Although they do not play a significant role from the standpoint of computing presimulations and prebisimulations, for theoretical reasons we also introduce two types of upper truncations of the complete max-plus semiring $\mathbb{R}_\infty$.

1. Introduction

Weighted finite automata constitute a natural and widely studied generalization of classical finite automata, obtained by associating weights from a given semiring with transitions, initial states, and final states. Instead of merely accepting or rejecting input words, weighted automata assign to each word a value that aggregates the weights of all runs labeled by that word, according to the operations of the underlying semiring. This framework provides a uniform and expressive model for describing quantitative behaviors, and it has found numerous applications in areas such as language and speech processing, image analysis, control theory, performance evaluation, and the modeling of discrete-event dynamical systems. The choice of the semiring plays a central role in determining both the semantics and the range of applications of weighted automata. Classical examples include probabilistic automata over the probability semiring, cost automata over the tropical semiring, and automata over Boolean or arithmetic semirings.
Within this general setting, max-plus automata arise as a particularly important and well-studied class. They are weighted finite automata defined over the max-plus semiring, in which addition is interpreted as taking the maximum and multiplication corresponds to ordinary addition. The max-plus semiring has proved to be a powerful mathematical tool for modeling and analyzing systems in which synchronization, timing, and resource sharing are fundamental. Typical application domains include manufacturing systems, transportation networks, communication protocols, scheduling problems, and discrete-event systems. In these contexts, the max-plus operations naturally capture the propagation of delays, the accumulation of processing times, and the synchronization of concurrent events, making max-plus models both intuitive and analytically tractable. Max-plus automata can be interpreted as quantitative models of timed behavior, where transition weights represent durations, costs, or delays, and the value assigned to a word corresponds to the maximal accumulated weight among all runs labeled by that word. From this perspective, max-plus automata describe the worst-case or longest execution time associated with a given sequence of events. This interpretation makes them especially suitable for performance analysis and verification tasks, where extremal timing behavior is of primary interest. Due to their strong algebraic structure and rich expressive power, max-plus automata form a central object of study at the intersection of automata theory, algebra, and systems theory. Understanding their properties, equivalences, and computational aspects is therefore of both theoretical significance and practical relevance.
Two fundamental decision problems associated with weighted finite automata are the equivalence problem and the containment problem. Given two weighted automata over the same semiring, the equivalence problem asks whether they assign the same value to every input word, whereas the containment problem asks whether the value assigned by one automaton is less than or equal to the value assigned by the other for all words, provided that the semiring from which the weights are taken is ordered. These problems generalize classical language equivalence and inclusion problems, and they play a central role in verification, optimization, and model comparison. In the context of max-plus automata, these problems admit a particularly intuitive interpretation – the containment problem asks whether the behavior of one system is always no slower than that of another, while equivalence asserts that both systems exhibit identical worst-case timing behavior for every possible sequence of events. These interpretations make the equivalence and containment problems especially relevant for performance analysis and refinement checking in timed and discrete-event systems.
Despite their conceptual simplicity, both problems are computationally challenging. For general weighted automata, decidability and complexity heavily depend on the chosen semiring. In the max-plus setting, equivalence is known to be decidable under certain restrictions, but it becomes difficult or undecidable in more general frameworks, particularly when infinite behaviors or unrestricted weights are allowed. Similar difficulties arise for containment, which is often harder than equivalence due to its inherently asymmetric nature. To address these challenges, simulation and bisimulation relations have been introduced as sound, and in some cases complete, proof techniques for establishing containment and equivalence. Simulations provide sufficient, but most often not necessary, conditions for containment, while bisimulations serve as sufficient, but most often not necessary, conditions for equivalence. Originally introduced in the setting of classical (Boolean) finite automata and labeled transition systems, simulations and bisimulations were defined as binary relations between states, typically represented by Boolean matrices. In this classical framework, a simulation relation ensures that every transition of one system can be matched by a corresponding transition of another system, whereas a bisimulation realizes mutual simulation.
When extending these concepts to weighted finite automata, and in particular to automata over the max-plus semiring, the classical Boolean relational approach proves insufficient for capturing quantitative aspects of behavior. Boolean matrices can express the existence of a correspondence between states, but they cannot encode quantitative discrepancies in accumulated weights, such as differences in execution times or costs. As a result, classical simulations often lead to overly coarse abstractions when applied to max-plus automata, and classical bisimulation may fail to reflect meaningful quantitative distinctions between systems. To overcome this limitation, a quantitative approach was developed in [11,31,40], in which simulations and bisimulations are defined as matrices over the max-plus semiring rather than as Boolean relations. In particular, two types of simulations and four types of bisimulations were defined in [11] as solutions of specific systems of matrix inequations, thereby reducing the problem of testing the existence of a simulation or bisimulation of a given type, as well as computing the greatest simulation or bisimulation of that type, if it exists, to the problem of solving the corresponding system of matrix inequations. However, solving these systems is a rather challenging task, which will be examined in more detail below.
The matrix inequations defining any of the aforementioned types of simulations and bisimulations can be divided into three groups. The system consisting of the inequations from the second and third groups is always solvable, and its solutions are called presimulations or prebisimulations of the given type. Moreover, this system always has a greatest solution, which we call the greatest presimulation or the greatest prebisimulation of that type. The inequations from the first group are used to test the greatest presimulation or prebisimulation. Specifically, if the greatest presimulation or prebisimulation of a given type is a solution of the system consisting of the inequations from the first group, then it is also the greatest simulation or bisimulation of that type. Otherwise, if the greatest presimulation or prebisimulation is not a solution of this system, then no simulation or bisimulation of that type exists.
While testing the greatest presimulation or prebisimulation against the inequations from the first group is a relatively simple task, computing the greatest presimulation or prebisimulation is significantly more difficult and requires special attention. The procedure for computing the greatest presimulation or prebisimulation of a given type between two max-plus automata, proposed in [11], consists of constructing a non-increasing sequence of matrices $\{U_s\}_{s\in\mathbb{N}}$ whose infimum $\hat{U}$ is the greatest presimulation or prebisimulation being sought. A central place in constructing this sequence of matrices is held by the residuum operation, which is fully defined on the complete max-plus semiring $\mathbb{R}_\infty$ and extends to matrices over this semiring. For this reason, the max-plus automata we work with are treated as automata over the complete max-plus semiring $\mathbb{R}_\infty$ rather than over the usual max-plus semiring $\mathbb{R}_{\max}$, even if $+\infty$ does not appear in the transition matrices and the vectors of initial and final weights of these automata.
In cases where the sequence $\{U_s\}_{s\in\mathbb{N}}$ is finite, its infimum can be computed efficiently. Namely, in such cases we can find the smallest $s$ for which $U_s = U_{s+1}$; the sequence then stabilizes at $U_s$, and $U_s$ is the infimum of the sequence, as its smallest member. However, problems arise in cases where the sequence is infinite. In some of these cases, the infimum of the sequence could be determined theoretically, for instance by using the Principle of Mathematical Induction, but whenever the sequence of matrices $\{U_s\}_{s\in\mathbb{N}}$ is infinite, its infimum cannot be computed efficiently using standard algorithms and software. Even in cases where all real weights appearing in the transition matrices and in the vectors of initial and final weights of the automata we consider are positive numbers, the use of the residuum operation on $\mathbb{R}_\infty$ in constructing each member of the sequence of matrices $\{U_s\}_{s\in\mathbb{N}}$ may lead to the appearance of negative entries in these matrices, and at certain positions in these matrices one may even obtain infinite descending sequences of real numbers converging to $-\infty$. To overcome this problem, we came up with the idea of truncating the semiring $\mathbb{R}_\infty$ from below at some real number $\lambda$, by discarding all elements of $\mathbb{R}_\infty$ that are smaller than $\lambda$, except for $-\infty$. Replacing $\mathbb{R}_\infty$ with $\mathbb{R}_\infty^\lambda = [\lambda, +\infty) \cup \{-\infty, +\infty\}$ also requires a modification of the multiplication operation, which is very simply done by shifting the original products by $-\lambda$, and this in turn entails a somewhat more substantial modification of the residuum operation. The modified residuum operation allows the aforementioned descending sequences of real numbers, which appear in the sequence of matrices $\{U_s\}_{s\in\mathbb{N}}$ and converge to $-\infty$, to become finite, roughly speaking, by setting all elements of these sequences that are smaller than $\lambda$ to $-\infty$.
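The stabilization test just described, namely stopping at the smallest $s$ with $U_s = U_{s+1}$, can be sketched generically. The operator name `step`, the starting matrix `U0`, and the iteration cap below are our own illustrative choices, not notation from the paper, and the toy operator merely mimics a non-increasing matrix sequence.

```python
# Schematic of the iterative procedure: build U_0 >= U_1 >= ... by repeatedly
# applying an operator `step`, and stop as soon as the sequence stabilizes.
def iterate_to_stability(step, U0, max_iter=10_000):
    """Return the first U_s with step(U_s) == U_s, or None if no stabilization
    occurs within max_iter iterations (the sequence may be infinite, which is
    exactly the situation that truncation is designed to address)."""
    U = U0
    for _ in range(max_iter):
        V = step(U)
        if V == U:
            return U  # sequence stabilized; U is its infimum
        U = V
    return None  # did not stabilize within max_iter steps

# Toy illustration: entries decrease by 1 until they hit a floor of 0.
dec = lambda U: [[max(x - 1, 0) for x in row] for row in U]
print(iterate_to_stability(dec, [[3, 5], [2, 0]]))  # [[0, 0], [0, 0]]
```

An operator with no floor (entries decreasing forever) never stabilizes, which is the situation described above where the infimum cannot be computed by iteration alone.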
Thus, the essence of truncating the semiring $\mathbb{R}_\infty$ is that in cases where the computation of the greatest presimulation or prebisimulation cannot be carried out in a finite number of steps, this computation is transferred to the new truncated semiring $\mathbb{R}_\infty^\lambda$, where it could be performed in a finite number of steps.
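Concretely, the shifted product and the modified residuum on $\mathbb{R}_\infty^\lambda$ can be sketched numerically as follows. This is a minimal sketch in our own notation, assuming that inputs other than $\pm\infty$ lie in $[\lambda, +\infty)$.

```python
# Lower λ-truncation of the complete max-plus semiring (illustrative sketch):
# multiplication is the ordinary sum shifted by -λ, and the residuum sends any
# value that would fall below λ straight to -inf.
NEG_INF, POS_INF = float("-inf"), float("inf")

def otimes_lam(a, b, lam):
    """Shifted product on R∞^λ: a + b - λ on reals, with -inf absorbing."""
    if a == NEG_INF or b == NEG_INF:
        return NEG_INF
    if a == POS_INF or b == POS_INF:
        return POS_INF
    return a + b - lam

def residuum_lam(a, b, lam):
    """Greatest x in R∞^λ with otimes_lam(a, x, lam) <= b."""
    if a == NEG_INF or b == POS_INF:
        return POS_INF
    if a == POS_INF or b == NEG_INF:
        return NEG_INF
    return b - a + lam if a <= b else NEG_INF

# λ = 2: the product of 3 and 4 is 3 + 4 - 2 = 5, and the identity is λ itself.
assert otimes_lam(3, 4, 2) == 5 and otimes_lam(2, 9, 2) == 9
```

For real $a \leqslant b$ the residuum returns $b - a + \lambda$, while any case that would fall below $\lambda$ is sent directly to $-\infty$; this cutoff is what makes the descending sequences mentioned above finite.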
The main results of the paper are as follows. First, we prove that, for any real number $\lambda$, the set $\mathbb{R}_\infty^\lambda$, equipped with the addition operation inherited from $\mathbb{R}_\infty$ and the multiplication operation obtained by shifting the products in $\mathbb{R}_\infty$ by $-\lambda$, forms a commutative, complete, additively idempotent semiring, and we provide a characterization of the residuum operation in this semiring (Theorem 3.1). This semiring is called the lower $\lambda$-truncation of $\mathbb{R}_\infty$ with shifted multiplication. Although only lower truncations are needed for the applications discussed above, for theoretical reasons we also deal with truncations of the semiring $\mathbb{R}_\infty$ from above at some real number $\upsilon$. We introduce two types of such truncations: one in which the carrier set is $\mathbb{R}_\infty^\upsilon = (-\infty, \upsilon] \cup \{-\infty, +\infty\}$, with $+\infty$ included, and another in which the carrier set is $\mathbb{R}_{\max}^\upsilon = (-\infty, \upsilon] \cup \{-\infty\}$, where $+\infty$ is not included and the role of the greatest element is taken by $\upsilon$. We show that both $\mathbb{R}_\infty^\upsilon$ and $\mathbb{R}_{\max}^\upsilon$, equipped with the addition operations inherited from $\mathbb{R}_\infty$ and the multiplication operations obtained by shifting the products in $\mathbb{R}_\infty$ by $-\upsilon$, form commutative, complete, additively idempotent semirings, and we provide characterizations of the residuum operations in these semirings (Theorems 3.2 and 3.3). Although the addition and multiplication operations in both semirings are almost identical, the absence of the element $+\infty$ in $\mathbb{R}_{\max}^\upsilon$ causes the residuum operations in $\mathbb{R}_{\max}^\upsilon$ and $\mathbb{R}_\infty^\upsilon$ to differ significantly.
The key idea behind lower truncations is to find, for a max-plus automaton $A$ over $\mathbb{R}_\infty$, a real number $\lambda$ such that all entries of the basic transition matrices and the initial and final weight vectors of this automaton are contained in $\mathbb{R}_\infty^\lambda$, and to transform $A$ into an automaton $A^\lambda$ over $\mathbb{R}_\infty^\lambda$, which is called the lower $\lambda$-shift of $A$. The original automaton $A$ and its lower $\lambda$-shift $A^\lambda$ have the same basic transition matrices and initial and final weight vectors, but their behaviors are different due to the differences between the multiplication operations in $\mathbb{R}_\infty$ and $\mathbb{R}_\infty^\lambda$. However, the behaviors of these automata are naturally related, and this relationship is expressed by the formula proven in Theorem 4.2. As a consequence, there exists a containment or equivalence relation between two automata over $\mathbb{R}_\infty$ if and only if such a relation exists between their lower $\lambda$-shifts (Theorem 4.3). In the context of computing the greatest presimulations and prebisimulations, for the transition from the original automata to their lower shifts to make sense, this transition should be justified by the efficiency of the computations performed over these lower shifts. Such a justification is provided by Theorem 4.4, which states that for max-plus automata with integer weights, the greatest presimulations and prebisimulations between their lower shifts can always be computed in a finite number of steps. On the other hand, Theorem 4.5 provides a method by which presimulations, prebisimulations, simulations, and bisimulations between the lower shifts can be converted into presimulations, prebisimulations, simulations, and bisimulations between the original automata.
We also provide numerical examples illustrating the application of lower truncations in computing the greatest presimulations and prebisimulations. These examples present a case in which the greatest presimulation over the complete max-plus semiring $\mathbb{R}_\infty$ cannot be computed in finitely many steps, but this becomes possible over a lower truncation of $\mathbb{R}_\infty$, as well as a case in which the greatest presimulation can be computed over $\mathbb{R}_\infty$ in a finite number of steps, but over a lower truncation of $\mathbb{R}_\infty$ it can be computed in fewer steps. The examples also point to one drawback of lower truncations, namely that it may happen that a simulation exists between the original automata, but not between their lower shifts. The reason for this is that the modified residuum operation can make the greatest presimulation between the lower shifts so small that it no longer satisfies the inequations from the first group. It is also shown that, in the context of computing the greatest presimulations and prebisimulations, the use of upper truncations does not yield results as good as those obtained by using lower truncations.
At the end of the paper, we also provide some observations on weak simulations and bisimulations. These are generalizations of ordinary simulations and bisimulations that better detect the existence of containment or equivalence between two automata, but are significantly harder to compute. Weak simulations and bisimulations are also defined by means of particular systems of matrix inequations, where these inequations can be divided into two groups. The system consisting of the inequations from the second group is always solvable, and its solutions are called weak presimulations or prebisimulations of the given type, while the greatest solution of this system, which always exists, is called the greatest weak presimulation or prebisimulation. If the greatest weak presimulation or prebisimulation is a solution of the system consisting of the inequations from the first group, then it is the greatest weak simulation or bisimulation of that type, and otherwise, if it is not a solution of that system, then no weak simulation or bisimulation of that type exists. The main problem that arises here is how to compute the greatest weak presimulation or prebisimulation, since the system that defines them may consist of infinitely many inequations, in which case they cannot be computed efficiently. The numerical examples we provide show that lower and upper truncations of the semiring $\mathbb{R}_\infty$ do not help in resolving this problem, and they also point to some other aspects of the problem and suggest other possible approaches to its solution, which we will address in our future research.
It should be noted that simulations for max-plus automata first appeared in [40], where they were defined as matrices over the max-plus semiring that are solutions of systems of matrix inequations similar (though not identical) to those presented here, in Section 4. That paper established a connection between simulations and mean payoff games and proposed an algorithm for testing the existence of simulations and computing them, which reduces this problem to the solution of a two-sided linear equation over the max-plus semiring (cf. [3,6]). Defined in the same way, but as Boolean matrices (i.e., as ordinary binary relations), simulations also appeared in [26], where they were used in the study of max-plus automata with partial observations, and in [31], where they were employed in the determinization of max-plus automata. These papers relied on the results from [13] concerning simulations considered in the somewhat more general context of weighted finite automata over an additively idempotent semiring, which are also defined as Boolean matrices. In the second part of [31], simulations were also considered as matrices over the max-plus semiring, and it was shown that the problem of the existence of simulations can be reduced to the problem of the non-emptiness of a tropical polyhedron (the set of solutions of a two-sided linear inequality over the max-plus semiring). The conditions for the existence of a simulation between max-plus automata were also studied in [16]. As we have already noted, the iterative approach to solving the systems of matrix inequations that define simulations and bisimulations, which we use here, was introduced in [11] and is based on the approach to solving somewhat more general weakly linear systems of matrix inequations developed in [36].
The paper consists of five sections. After this introductory section, Section 2 introduces the basic concepts and notation related to semirings, in particular complete additively idempotent semirings, as well as the concepts and notation concerning vectors and matrices over these semirings. In Section 3, we first recall the definitions of the max-plus semiring and the complete max-plus semiring, and then introduce the notions of lower and upper truncations of the complete max-plus semiring and prove the basic results concerning these truncations. Next, in Section 4, we first introduce the basic concepts and notation related to weighted automata over a semiring and max-plus automata, and then prove the fundamental results that enable the successful application of lower truncations in the efficient computation of presimulations and prebisimulations between max-plus automata. In the final section, Section 5, we present observations on weak simulations and bisimulations between max-plus automata, as discussed above.

2. Preliminaries

Throughout this paper, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{Z}$ the set of integers, $\mathbb{N}$ the set of natural numbers without zero, and $\mathbb{N}_0$ the set of natural numbers including zero. For $n \in \mathbb{N}$, we denote by $[1..n]$ the set of all natural numbers from 1 to $n$.
A semiring is an algebra $S = (S, \oplus, \otimes, 0, 1)$ with a carrier set $S$, two binary operations $\oplus$ and $\otimes$ on $S$, and two constants $0, 1 \in S$ such that
(S1)
$(S, \oplus, 0)$ is a commutative monoid with identity $0$,
(S2)
$(S, \otimes, 1)$ is a monoid with identity $1$,
(S3)
$(a \oplus b) \otimes c = (a \otimes c) \oplus (b \otimes c)$ and $c \otimes (a \oplus b) = (c \otimes a) \oplus (c \otimes b)$, for all $a, b, c \in S$ (distributivity laws),
(S4)
$0 \otimes a = a \otimes 0 = 0$, for every $a \in S$ (absorption laws).
We call $0$ the zero and $1$ the identity of the semiring $S$. The operation $\oplus$ is called addition, and the operation $\otimes$ is called multiplication. If the multiplication is also commutative, then $S$ is called a commutative semiring. It is customary to identify an algebraic structure with its carrier set, so the carrier set of the semiring is denoted by the same symbol.
A semiring $S$ is called additively idempotent if its addition operation is idempotent, i.e., if $a \oplus a = a$, for every $a \in S$. In such a semiring, the partial order $\leqslant$ defined by
$a \leqslant b \iff a \oplus b = b$, for all $a, b \in S$,
is called the canonical partial order on $S$ (cf. [22]). The canonical partial order is compatible with both addition and multiplication, and $0$ is the least element. In other words, with respect to the canonical partial order, $S$ is a positive semiring (cf. [17,21]).
For arbitrary $a, b \in S$, the sum $a \oplus b$ is the supremum of the set $\{a, b\}$; that is, the addition operation coincides with the (binary) supremum operation and $(S, \leqslant)$ is an upper semilattice. In the case when $(S, \leqslant)$ is a complete upper semilattice, that is, when every subset $\{a_i\}_{i \in I} \subseteq S$ has a supremum, that supremum will be denoted by $\bigoplus_{i \in I} a_i$ and will be treated as an infinite sum. Note that it satisfies all the conditions of the definition of an infinite sum from [17] (that definition differs from the definition of an infinite sum given in [21, Ch. 22], which also includes infinite distributivity, while in [17] and here, infinite distributivity is considered separately).
If $S$ is an additively idempotent semiring such that $(S, \leqslant)$ is a complete upper semilattice (with suprema represented as sums), and if it satisfies the infinite distributive laws
$a \otimes \bigoplus_{i \in I} b_i = \bigoplus_{i \in I} (a \otimes b_i)$, $\qquad \Bigl(\bigoplus_{i \in I} b_i\Bigr) \otimes a = \bigoplus_{i \in I} (b_i \otimes a)$,
for all $a \in S$ and $\{b_i\}_{i \in I} \subseteq S$, then we call $S$ a complete additively idempotent semiring. This definition is consistent with the definition of a complete dioid from [2, Def. 4.32], as well as with the definitions of a complete semiring from [17,21]. As we have already mentioned, complete additively idempotent semirings also possess the structure of a complete lattice, and in terms of lattice theory, structures with properties identical to those defining complete additively idempotent semirings are known as unital quantales (cf. [18,23,34,35]).
For a complete additively idempotent semiring $S$ and arbitrary elements $a, b \in S$, the inequation $a \otimes x \leqslant b$, where $x$ is an unknown taking values in $S$, has the greatest solution, which is denoted by $a \backslash b$ and called the left residual of $b$ by $a$. Similarly, the inequation $x \otimes a \leqslant b$ has the greatest solution, which is denoted by $b / a$ and called the right residual of $b$ by $a$. The operations $\backslash$ and $/$ are called residuum operations adjoined to the multiplication $\otimes$. It immediately follows from these definitions that
$a \otimes b \leqslant c \iff b \leqslant a \backslash c \iff a \leqslant c / b$,
for all $a, b, c \in S$. This formula is called the residuation property or the adjunction property.
Another important property of residuals in a complete additively idempotent semiring $S$ is the following:
$0 \backslash a = a / 0 = \top$,
for every $a \in S$, where $\top$ denotes the greatest element of $S$.
A complete additively idempotent semiring $S$ whose multiplication operation $\otimes$ is commutative is called a commutative complete additively idempotent semiring. In such a semiring we have that $a \backslash b = b / a$, for all $a, b \in S$, which means that there is only one residuum operation, denoted by $\to$; that is, $a \to b = a \backslash b = b / a$, for all $a, b \in S$. In this case the residuation property becomes
$a \otimes b \leqslant c \iff b \leqslant a \to c$,
for all $a, b, c \in S$, and $\to$ is said to be the residuum operation adjoined to $\otimes$.
Let $S = (S, \oplus, \otimes, 0, 1)$ be a semiring. Throughout the paper, for $m, n \in \mathbb{N}$, the set of all $m \times n$ matrices with entries in $S$ is denoted by $S^{m \times n}$, and the set of all vectors of size $n$ with entries in $S$ is denoted by $S^n$. When vectors from $S^n$ are written as row vectors ($1 \times n$ matrices), then instead of $S^n$ we write $S^{1 \times n}$, and when they are written as column vectors ($n \times 1$ matrices), then instead of $S^n$ we write $S^{n \times 1}$. For a matrix $M \in S^{m \times n}$, the entry of $M$ located in the $i$-th row and $j$-th column is denoted by $M(i,j)$, and for a vector $\alpha \in S^n$, the $i$-th entry of $\alpha$ is denoted by $\alpha(i)$. The transpose of a matrix $M \in S^{m \times n}$ is the unique matrix $M^\top \in S^{n \times m}$ such that $M^\top(j,i) = M(i,j)$, for all $j \in [1..n]$ and $i \in [1..m]$. A matrix of an arbitrary type all of whose entries are equal to the zero of $S$ is called a zero matrix, and an $n \times n$ matrix $I_n$ whose entries on the main diagonal are equal to the identity of $S$, while all the others are equal to the zero, is called the identity matrix of order $n$.
For matrices $M \in S^{m \times n}$ and $N \in S^{n \times p}$, their matrix product is the matrix $M \otimes N \in S^{m \times p}$ defined, for arbitrary $i \in [1..m]$ and $k \in [1..p]$, by
$(M \otimes N)(i,k) = \bigoplus_{j=1}^{n} M(i,j) \otimes N(j,k)$.
For a matrix $M \in S^{m \times n}$, a row vector $\alpha \in S^{1 \times m}$ and a column vector $\beta \in S^{n \times 1}$, the product $\alpha \otimes M \in S^{1 \times n}$ is called the vector-matrix product of $\alpha$ and $M$, while the product $M \otimes \beta \in S^{m \times 1}$ is called the matrix-vector product of $M$ and $\beta$. More precisely,
$(\alpha \otimes M)(j) = \bigoplus_{k=1}^{m} \alpha(k) \otimes M(k,j)$, $\qquad (M \otimes \beta)(i) = \bigoplus_{k=1}^{n} M(i,k) \otimes \beta(k)$,
for all $j \in [1..n]$ and $i \in [1..m]$. In particular, the product $\alpha \otimes \beta$ of a row vector $\alpha \in S^{1 \times n}$ and a column vector $\beta \in S^{n \times 1}$ is a $1 \times 1$ matrix, which is usually identified with its single entry. In other words, we assume that $\alpha \otimes \beta$ is an element of $S$ defined by
$\alpha \otimes \beta = \bigoplus_{i=1}^{n} \alpha(i) \otimes \beta(i)$.
This product is known as the scalar product or dot product of vectors α and β .
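For instance, over the max-plus semiring (where $\oplus = \max$ and $\otimes = +$) the products above instantiate as follows. This is our own illustrative sketch, deliberately restricted to entries from $\mathbb{R}_{\max}$, because Python evaluates `float('-inf') + float('inf')` to NaN rather than an element of the semiring.

```python
# Max-plus instances of the matrix and dot products defined above.
NEG_INF = float("-inf")  # the zero of the max-plus semiring

def mp_matmul(M, N):
    """(M ⊗ N)(i,k) = max_j ( M(i,j) + N(j,k) ) over the max-plus semiring."""
    return [[max(M[i][j] + N[j][k] for j in range(len(N)))
             for k in range(len(N[0]))] for i in range(len(M))]

def mp_dot(alpha, beta):
    """Scalar (dot) product: max_i ( alpha(i) + beta(i) )."""
    return max(a + b for a, b in zip(alpha, beta))

M = [[0, 2], [NEG_INF, 1]]
N = [[1], [3]]
print(mp_matmul(M, N))         # [[5], [4]]
print(mp_dot([0, 2], [1, 3]))  # 5
```

Note that a $-\infty$ entry behaves as the semiring zero: `NEG_INF + 1` is again `NEG_INF`, so the corresponding term never contributes to the maximum.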
The operation of scalar multiplication of a matrix is defined, for $s \in S$ and $M \in S^{m \times n}$, in the following way: the scalar multiple of $M$ by $s$ is the matrix $s \otimes M \in S^{m \times n}$ given by
$(s \otimes M)(i,j) = s \otimes M(i,j)$,
for all $i \in [1..m]$ and $j \in [1..n]$. The properties of scalar multiplication that we need later are
$s \otimes (t \otimes M) = (s \otimes t) \otimes M$, $\qquad s \otimes (M \otimes N) = (s \otimes M) \otimes N$,
for all $s, t \in S$, $M \in S^{m \times n}$ and $N \in S^{n \times p}$.
In the case when $S$ is an ordered semiring, a matrix ordering can be defined for matrices of the same type. It is defined coordinatewise, as follows:
$M \leqslant N \iff M(i,j) \leqslant N(i,j)$,
for all $i \in [1..m]$ and $j \in [1..n]$. When vectors from $S^n$ are considered as matrices (either row or column vectors), this definition gives the coordinatewise ordering of vectors.
In what follows, let $S = (S, \oplus, \otimes, 0, 1)$ be a complete additively idempotent semiring with residuum operations $\backslash$ and $/$ adjoined to the multiplication $\otimes$. The residuum operations can also be defined for matrices and vectors with entries in $S$, as follows. For matrices $M \in S^{m \times n}$ and $P \in S^{m \times p}$, the left residual of $P$ by $M$ is the matrix $M \backslash P \in S^{n \times p}$ defined by
$(M \backslash P)(j,k) = \bigwedge_{i=1}^{m} M(i,j) \backslash P(i,k)$,
for all $j \in [1..n]$ and $k \in [1..p]$. Similarly, for matrices $N \in S^{n \times p}$ and $P \in S^{m \times p}$, the right residual of $P$ by $N$ is the matrix $P / N \in S^{m \times n}$ defined by
$(P / N)(i,j) = \bigwedge_{k=1}^{p} P(i,k) / N(j,k)$,
for all $i \in [1..m]$ and $j \in [1..n]$. The matrix residuum operations defined in this way satisfy the following residuation property (adjunction property):
$M \otimes N \leqslant P \iff N \leqslant M \backslash P \iff M \leqslant P / N$,
for all $M \in S^{m \times n}$, $N \in S^{n \times p}$ and $P \in S^{m \times p}$.
On the other hand, for vectors $\alpha \in S^m$ and $\beta \in S^n$, the left residual $\alpha \backslash \beta$ of $\beta$ by $\alpha$ and the right residual $\beta / \alpha$ of $\beta$ by $\alpha$ are the matrices $\alpha \backslash \beta \in S^{m \times n}$ and $\beta / \alpha \in S^{n \times m}$ defined by
$(\alpha \backslash \beta)(i,j) = \alpha(i) \backslash \beta(j)$, $\qquad (\beta / \alpha)(j,i) = \beta(j) / \alpha(i)$,
for all $i \in [1..m]$ and $j \in [1..n]$. If the multiplication $\otimes$ is commutative, then $\beta / \alpha = (\alpha \backslash \beta)^\top$, that is, $\alpha \backslash \beta = (\beta / \alpha)^\top$. The residuation property (adjunction property) for vector residuum operations is
$\alpha \otimes U \leqslant \beta \iff U \leqslant \alpha \backslash \beta$, $\qquad V \otimes \alpha \leqslant \beta \iff V \leqslant \beta / \alpha$,
for all $\alpha \in S^m$, $\beta \in S^n$, $U \in S^{m \times n}$ and $V \in S^{n \times m}$.
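In the max-plus case, the matrix residuals above can be computed entrywise from the scalar residuum. The following is a minimal sketch in our own notation (the function names are ours); the scalar residuum on the complete max-plus semiring is $b - a$ on reals, with the infinite cases handled explicitly.

```python
# Max-plus instance of the left matrix residual M \ P:
# the greatest X with M ⊗ X ≤ P, computed entrywise as a minimum of residua.
NEG_INF, POS_INF = float("-inf"), float("inf")

def res(a, b):
    """Greatest x with a ⊗ x ≤ b in the complete max-plus semiring."""
    if a == NEG_INF or b == POS_INF:
        return POS_INF
    if a == POS_INF or b == NEG_INF:
        return NEG_INF
    return b - a

def left_residual(M, P):
    """(M \\ P)(j,k) = min_i res(M(i,j), P(i,k))."""
    m, n, p = len(M), len(M[0]), len(P[0])
    return [[min(res(M[i][j], P[i][k]) for i in range(m))
             for k in range(p)] for j in range(n)]

M = [[0, 2]]             # a 1x2 matrix
P = [[3]]                # a 1x1 matrix
X = left_residual(M, P)  # the greatest 2x1 solution of M ⊗ X ≤ P
print(X)                 # [[3], [1]]
# Adjunction check: M ⊗ X ≤ P entrywise.
assert max(M[0][j] + X[j][0] for j in range(2)) <= P[0][0]
```

The right residual $P / N$ is entirely analogous, with the minimum taken over the column index instead of the row index.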

3. Truncation and Shifting of Multiplication in the Complete Max-Plus Semiring

In the sequel, $\mathbb{R}$ denotes the set of real numbers, while
$\mathbb{R}_{\max} = \mathbb{R} \cup \{-\infty\}$ and $\mathbb{R}_\infty = \mathbb{R} \cup \{-\infty, +\infty\}$,
where $-\infty, +\infty \notin \mathbb{R}$ are different symbols which are interpreted as minus and plus infinity. In addition, we will also use the following notation:
$\mathbb{R}^+ = \{a \in \mathbb{R} \mid 0 \leqslant a\}$ and $\mathbb{R}^- = \{a \in \mathbb{R} \mid a < 0\}$,
where $\leqslant$ is the usual linear ordering of real numbers and $<$ is its strict version. By the same symbol $\leqslant$ we will also denote the linear ordering on $\mathbb{R}_\infty$ that extends the usual linear ordering of real numbers so that $-\infty$ is the smallest and $+\infty$ is the greatest element. It is well known that $\mathbb{R}_\infty$, with respect to this ordering, is a complete lattice. Let $\wedge$ and $\vee$ be the common notations for the lattice-theoretical finite meet and join operations, and let the big symbols $\bigwedge$ and $\bigvee$ be used to denote infinite meets and joins.
In addition, a binary operation $\oplus$ on $\mathbb{R}_\infty$, called addition, is defined by $\oplus = \max = \vee$, while another binary operation $\otimes$ on $\mathbb{R}_\infty$, called multiplication, is given by
$a \otimes b = \begin{cases} -\infty, & \text{if } a = -\infty \text{ or } b = -\infty, \\ a + b, & \text{if } a, b \in \mathbb{R}, \\ +\infty, & \text{otherwise}. \end{cases}$
The restrictions of the operations $\oplus$ and $\otimes$ to $\mathbb{R}_{\max}$ are denoted by the same symbols.
It is well known that $\mathbb{R}_{\max} = (\mathbb{R}_{\max}, \oplus, \otimes, -\infty, 0)$ is a commutative semiring with the zero $-\infty$ and the identity $0$. This semiring is also additively idempotent, and the above linear ordering coincides with the canonical partial order. We call $\mathbb{R}_{\max}$ the max-plus semiring.
On the other hand, $\mathbb{R}_\infty = (\mathbb{R}_\infty, \oplus, \otimes, -\infty, 0)$ is a commutative complete additively idempotent semiring, and it will be called the complete max-plus semiring. We can also consider $\mathbb{R}_\infty$ from the lattice-theoretical point of view, and then $\mathbb{R}_\infty$ is a unital quantale with the identity $0$, so it is sometimes also called the max-plus quantale.
As we said in the previous section, every commutative complete additively idempotent semiring has a unique residuum operation adjoined to the multiplication, and in the case of the semiring $\mathbb{R}_\infty$, the residuum operation $\to$ adjoined to $\otimes$ is given by
$a \to b = \begin{cases} +\infty, & \text{if } a = -\infty \text{ or } b = +\infty, \\ b - a, & \text{if } a, b \in \mathbb{R}, \\ -\infty, & \text{otherwise}. \end{cases}$
Note that $\oplus$ and $\vee$ represent the same operation on $\mathbb{R}_{\max}$ and $\mathbb{R}_\infty$, but we use the notation $\oplus$ rather than $\vee$ in order to emphasize its role in the semirings $\mathbb{R}_{\max}$ and $\mathbb{R}_\infty$.
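The two case definitions above are adjoint to each other, and this can be checked by brute force on a small sample of values. The following sketch is our own (not from the paper) and simply verifies the residuation property $a \otimes b \leqslant c \iff b \leqslant a \to c$ numerically.

```python
# Brute-force check of the residuation property on R∞ for sample values,
# including both infinities.
NEG_INF, POS_INF = float("-inf"), float("inf")

def otimes(a, b):
    """Multiplication on the complete max-plus semiring R∞."""
    if a == NEG_INF or b == NEG_INF:
        return NEG_INF
    if a == POS_INF or b == POS_INF:
        return POS_INF
    return a + b

def arrow(a, b):
    """Residuum on R∞: the greatest x with a ⊗ x ≤ b."""
    if a == NEG_INF or b == POS_INF:
        return POS_INF
    if a == POS_INF or b == NEG_INF:
        return NEG_INF
    return b - a

sample = [NEG_INF, -2.5, 0, 1, 7, POS_INF]
for a in sample:
    for b in sample:
        for c in sample:
            assert (otimes(a, b) <= c) == (b <= arrow(a, c))
print("residuation property holds on the sample")
```

Note that `otimes` must be asked about $-\infty$ before $+\infty$: in this semiring $-\infty$ is the zero and absorbs everything, including $+\infty$.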
The sets $\mathbb{Z}_\infty = \mathbb{Z} \cup \{-\infty, +\infty\}$ and $\mathbb{Z}_{\max} = \mathbb{Z} \cup \{-\infty\}$ are subsets of $\mathbb{R}_\infty$ closed under the operations $\oplus$ and $\otimes$, whereas $\mathbb{Z}_\infty$ is also closed under infinite sums and the operation $\to$. Therefore, with the operations inherited from $\mathbb{R}_\infty$, which are denoted by the same symbols as the corresponding operations in $\mathbb{R}_\infty$, $\mathbb{Z}_\infty = (\mathbb{Z}_\infty, \oplus, \otimes, -\infty, 0)$ constitutes a commutative complete additively idempotent semiring, which we call the complete max-plus semiring of integers, while $\mathbb{Z}_{\max} = (\mathbb{Z}_{\max}, \oplus, \otimes, -\infty, 0)$ constitutes an additively idempotent semiring, called the max-plus semiring of integers.
Now we turn to the introduction of some new concepts and the presentation of the results related to them. For an arbitrary real number λ, let
R λ = [λ, +∞) ∪ {−∞, +∞} and R max λ = [λ, +∞) ∪ {−∞},
and let the restrictions of the ordering on R to R λ and R max λ be denoted by the same symbol ⩽ as the original ordering on R .
Theorem 3.1. 
For any real number λ, let ⊕_λ be the addition operation on R λ inherited from the addition operation on R, and let ⊗_λ be the multiplication operation on R λ defined as follows:
a ⊗_λ b = −∞, if a = −∞ or b = −∞;  a ⊗_λ b = a + b − λ, if a, b ∈ [λ, +∞);  a ⊗_λ b = +∞, otherwise.
Then (R λ, ⊕_λ, ⊗_λ, −∞, λ) is a commutative complete additively idempotent semiring with the zero −∞ and the identity λ, and the residuum operation →_λ on R λ adjoined to ⊗_λ is given by:
a →_λ b = +∞, if a = −∞ or b = +∞;  a →_λ b = b − a + λ, if a, b ∈ [λ, +∞) and a ⩽ b;  a →_λ b = −∞, otherwise.
Proof. 
For arbitrary a, b ∈ [λ, +∞), from a ⩾ λ and b ⩾ λ it follows that a + b ⩾ 2λ, which implies that a + b − λ ⩾ λ, and this means that ⊗_λ is a well-defined operation on R λ. Let us note that a ⊗_λ b = (a ⊗ b) − λ = (a − λ) ⊗ b = a ⊗ (b − λ), for all a, b ∈ [λ, +∞). It is also clear that R λ is closed under finite and infinite sums (where infinite sums in R λ are also inherited from R), and when working with infinite sums, we will use the same notation as for infinite sums in R.
It is clear that the multiplication operation ⊗_λ is commutative, and it is easy to check that it is also associative and that λ is the identity for ⊗_λ. On the other hand, since the addition operation ⊕_λ is inherited from R, it is also commutative and associative, and −∞ is the zero. For the sake of simplicity, in the sequel we will denote the infinite sums in R λ and R by the same symbol (without writing the subscript λ for the sums in R λ), which does not lead to confusion because these sums coincide when applied to elements of R λ.
To prove infinite distributivity, we first consider any a ∈ [λ, +∞) and {b_i}_{i∈I} ⊆ [λ, +∞). Then we have that
a ⊗_λ ⋁_{i∈I} b_i = (a − λ) ⊗ ⋁_{i∈I} b_i = ⋁_{i∈I} ((a − λ) ⊗ b_i) = ⋁_{i∈I} (a ⊗_λ b_i).
Next, if a = −∞, then
a ⊗_λ ⋁_{i∈I} b_i = −∞ = ⋁_{i∈I} (a ⊗_λ b_i),
for each {b_i}_{i∈I} ⊆ R λ.
Further, if a ∈ R λ such that a ≠ −∞, and if {b_i}_{i∈I} ⊆ R λ such that b_j = +∞, for some j ∈ I, then a ⊗_λ b_j = +∞ and
a ⊗_λ ⋁_{i∈I} b_i = a ⊗_λ (+∞) = +∞ = ⋁_{i∈I} (a ⊗_λ b_i).
Finally, if a ∈ R λ such that a ≠ −∞, and {b_i}_{i∈I} ⊆ [λ, +∞) ∪ {−∞}, then
a ⊗_λ ⋁_{i∈I} b_i = a ⊗_λ ⋁_{j∈J} b_j = ⋁_{j∈J} (a ⊗_λ b_j) = ⋁_{i∈I} (a ⊗_λ b_i),
where J = { j ∈ I | b_j ≠ −∞ }. Here we used the fact that the infinite sums we work with are suprema, and all terms in a supremum which are equal to the smallest element (here −∞) can be omitted, while in the case when the set J is empty, we used the usual convention that the supremum of the empty set is equal to the smallest element.
Therefore, we have completed the proof of the statement that R λ is a commutative complete additively idempotent semiring with the zero −∞ and the identity λ.
It remains to prove that →_λ is the residuum operation on R λ adjoined to the multiplication operation ⊗_λ. This will be done by proving that a →_λ b is the greatest solution of the inequation a ⊗_λ x ⩽ b, for arbitrary a, b ∈ R λ. Before we prove it, it should be noted that for a, b ∈ [λ, +∞) such that a ⩽ b we have that b − a ⩾ 0, so b − a + λ ⩾ λ, and this means that →_λ is a well-defined operation on R λ.
In the case when a, b ∈ [λ, +∞) such that a ⩽ b we have that
a ⊗_λ x ⩽ b ⇔ a + x − λ ⩽ b ⇔ x ⩽ b − a + λ,
and hence, b − a + λ = a →_λ b is the greatest solution of a ⊗_λ x ⩽ b. On the other hand, in the case when a, b ∈ [λ, +∞) such that b < a we have that b − a + λ < λ, which means that b − a + λ ∉ [λ, +∞) ∪ {+∞}, and from (16) it follows that −∞ = a →_λ b is the only solution of a ⊗_λ x ⩽ b. Therefore, it is the greatest solution of that inequation.
If a ∈ [λ, +∞) and b = −∞, then in a similar way we conclude that −∞ = a →_λ b is the only solution of a ⊗_λ x ⩽ b, so it is the greatest solution of that inequation.
Further, if a = −∞ then a ⊗_λ x = −∞ ⩽ b, for all x, b ∈ R λ, so each x ∈ R λ is a solution of a ⊗_λ x ⩽ b, and this means that the greatest solution of that inequation is +∞ = a →_λ b.
On the other hand, if b = +∞, then for every a ∈ R λ we have that any x ∈ R λ is a solution of a ⊗_λ x ⩽ b, so the greatest solution of that inequation is +∞ = a →_λ b.
Finally, in the case when a = +∞ and b ∈ R λ such that b ≠ +∞, we have that
a ⊗_λ x = +∞, if x ∈ R λ, x ≠ −∞, and a ⊗_λ x = −∞, if x = −∞,
whence it follows that −∞ = a →_λ b is the only solution of a ⊗_λ x ⩽ b, so it is the greatest solution of that inequation.
Having exhausted all possible cases, we conclude that λ is the residuum operation on R λ adjoined to λ . □
We will call the semiring R λ the lower truncation of the semiring R by means of λ, with shifted multiplication, or the lower λ-truncation for short. We will also refer to R λ as a lower truncated complete max-plus semiring. If λ ∈ Z, then Z λ = { z ∈ Z | λ ⩽ z } ∪ {−∞, +∞} is a subset of R λ closed under the operations ⊕_λ, ⊗_λ and →_λ, as well as under infinite sums, so Z λ = (Z λ, ⊕_λ, ⊗_λ, −∞, λ) constitutes a commutative complete additively idempotent semiring, which will be called the lower λ-truncation of Z with shifted multiplication.
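The shifted operations of Theorem 3.1 can be sketched in the same executable style as before (again our own naming, not the paper's; λ is passed explicitly as a parameter):

```python
# Sketch of the lower λ-truncation: elements are [λ, +∞) together with -∞ and +∞.
NEG, POS = float('-inf'), float('inf')

def otimes_trunc(lam, a, b):
    """a ⊗_λ b: a + b - λ on [λ, +∞); -∞ annihilates, +∞ absorbs otherwise."""
    if a == NEG or b == NEG:
        return NEG
    if a == POS or b == POS:
        return POS
    return a + b - lam        # stays ⩾ λ, since a + b ⩾ 2λ

def residuum_trunc(lam, a, b):
    """a →_λ b: the greatest x in the truncation with a ⊗_λ x ⩽ b."""
    if a == NEG or b == POS:
        return POS
    if b == NEG or a == POS:
        return NEG
    return b - a + lam if a <= b else NEG   # b - a + λ < λ is not available
```

Here λ acts as the identity: otimes_trunc(lam, lam, a) returns a for every a ⩾ λ, and when a > b the residuum drops all the way to −∞, which is the modification discussed in the remarks below.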
An important special case of the lower λ-truncations of R is the one where λ = 0. The semiring R 0 has already been studied in [11], where it was denoted by R +. It is clear that the multiplication ⊗_0 in R + is the restriction of the multiplication ⊗ on R, but the residuum operation →_0 on R + differs from the residuum operation → on R and is given by:
a →_0 b = +∞, if a = −∞ or b = +∞;  a →_0 b = b − a, if a, b ∈ [0, +∞) and a ⩽ b;  a →_0 b = −∞, otherwise.
In the sequel, we make a few remarks regarding the lower truncated semirings R λ.
Remark 3.1. 
In the case when λ < 0, the interval [λ, +∞) is not closed under the operation ⊗. Namely, for negative a, b ∈ [λ, +∞) it may happen that a ⊗ b = a + b < λ, which is why we had to shift a ⊗ b = a + b by −λ and bring it back into [λ, +∞).
In the case when λ ⩾ 0, the interval [λ, +∞) is closed under the operation ⊗, and it may seem that one could retain the original operation ⊗ on [λ, +∞), with no need to shift this operation by −λ. However, the problem is that in that case we would lose the identity. Therefore, even when λ ⩾ 0, for a, b ∈ [λ, +∞) it is necessary to shift a ⊗ b = a + b by −λ, thereby ensuring that the new operation has an identity (namely λ). It is clear that the operation ⊗ is retained on [λ, +∞) only in the case when λ = 0.
It should also be noted that, although the interval [λ, +∞) has the smallest element λ, we must retain −∞ as the smallest element, that is, as the zero of the semiring, because without −∞ we would not obtain a semiring. Namely, if −∞ were omitted, then λ would simultaneously serve as both the zero and the identity, which is possible only in the case of a single-element semiring.
Remark 3.2. 
The set R max λ, equipped with the addition and multiplication operations inherited from R λ, is a commutative additively idempotent semiring with the zero −∞ and the identity λ, but this semiring is not a complete additively idempotent semiring because it does not have a greatest element. Consequently, the residuum operation is not defined for all pairs of elements from R max λ, since the inequation a ⊗_λ x ⩽ b does not have a greatest solution in R max λ for all pairs of elements from R max λ. More precisely, this inequation does not have a greatest solution whenever a = −∞, although in that case every element of R max λ is a solution of this inequation.
We now turn to the case when the complete max-plus semiring is truncated from above. For an arbitrary real number υ, let
R max υ = (−∞, υ] ∪ {−∞} and R υ = (−∞, υ] ∪ {−∞, +∞},
and let the restrictions of the ordering on R to R υ and R max υ be denoted by the same symbol ⩽ as the original ordering on R .
Theorem 3.2. 
For any real number υ, let ⊕_υ be the addition operation on R max υ inherited from the addition operation on R max, and let ⊗_υ be the operation on R max υ defined by:
a ⊗_υ b = a + b − υ, if a, b ∈ (−∞, υ];  a ⊗_υ b = −∞, if a = −∞ or b = −∞.
Then (R max υ, ⊕_υ, ⊗_υ, −∞, υ) is a commutative complete additively idempotent semiring with the zero −∞ and the identity υ, and the residuum operation →_υ on R max υ adjoined to ⊗_υ is given by:
a →_υ b = υ, if a = −∞, or if a, b ∈ (−∞, υ] and a < b;  a →_υ b = b − a + υ, if a, b ∈ (−∞, υ] and a ⩾ b;  a →_υ b = −∞, if a ∈ (−∞, υ] and b = −∞.
Proof. 
We can prove this theorem in a similar way as Theorem 3.1, so its proof will be omitted. □
We call the semiring R max υ the upper truncation of the semiring R max by means of υ, with shifted multiplication, or the upper υ-truncation for short. We will also refer to R max υ as the upper truncated max-plus semiring.
Remark 3.3. 
Unlike the semiring R λ, where we had to keep −∞ as the smallest element, in the construction of the semiring R max υ there was no obstacle to omitting +∞ and thereby making υ simultaneously the identity and the greatest element.
In the terminology of lattice theory, R max υ is an integral commutative quantale, that is, a commutative quantale with an identity which is identical to the greatest element. Even more commonly, such an algebraic structure is called a complete residuated lattice.
However, no problems arise even if we keep +∞ and define the multiplication operation on R υ in a similar way as on R λ, which will be done in the next theorem.
Theorem 3.3. 
For any real number υ, let ⊕_υ be the addition operation on R υ inherited from the addition operation on R, and let ⊗_υ be the operation on R υ defined by:
a ⊗_υ b = −∞, if a = −∞ or b = −∞;  a ⊗_υ b = a + b − υ, if a, b ∈ (−∞, υ];  a ⊗_υ b = +∞, otherwise.
Then (R υ, ⊕_υ, ⊗_υ, −∞, υ) is a commutative complete additively idempotent semiring with the zero −∞ and the identity υ, and the residuum operation →_υ on R υ adjoined to ⊗_υ is given by:
a →_υ b = +∞, if a = −∞ or b = +∞;  a →_υ b = b − a + υ, if a, b ∈ (−∞, υ] and a ⩾ b;  a →_υ b = υ, if a, b ∈ (−∞, υ] and a < b;  a →_υ b = −∞, otherwise.
Proof. 
This theorem can also be proved in a similar way as Theorem 3.1, so the proof will be omitted. □
The semiring R υ will be called the upper truncation of the semiring R by means of υ, with shifted multiplication, or briefly the upper υ-truncation. We also refer to R υ as the upper truncated complete max-plus semiring. In order to simplify the terminology, the semiring R max υ will be treated not only as an upper υ-truncation of R max, but also as an upper υ-truncation of R.
For an arbitrary υ ∈ Z, the sets
Z υ = { z ∈ Z | z ⩽ υ } ∪ {−∞, +∞} and Z max υ = { z ∈ Z | z ⩽ υ } ∪ {−∞}
are subsets of R υ and R max υ closed under the corresponding operations ⊕_υ, ⊗_υ and →_υ, as well as under infinite sums. Thus, Z υ = (Z υ, ⊕_υ, ⊗_υ, −∞, υ) and Z max υ = (Z max υ, ⊕_υ, ⊗_υ, −∞, υ) constitute commutative complete additively idempotent semirings, which are respectively called the upper υ-truncations of Z and Z max with shifted multiplication.
Remark 3.4. 
Although the multiplication operations on R λ and R υ are similarly defined, and the multiplication operation on R max υ is the restriction of the multiplication operation on R υ , Theorems 3.1, 3.2 and 3.3 show that the residuum operations on R λ , R υ , and R max υ differ substantially.

4. Application of Truncation and Shifting to Max-Plus Automata

Let X be a non-empty set, which we call an alphabet and whose elements are called letters. The set of all sequences of elements of X, including the empty sequence, is denoted by X*, whereas the set of all non-empty sequences from X* is denoted by X+. The elements of X* are called words over X. Equipped with the concatenation operation, the set X* is a monoid, called the free monoid over X. The identity of X* is the empty sequence, which is denoted by ε and called the empty word. The number of letters that appear in u ∈ X* is denoted by |u| and is called the length of the word u. In particular, |ε| = 0. For any non-empty set S, a function f : X* → S is called a word function. If the set S is ordered by an ordering ⩽, then word functions can also be ordered pointwise, that is, for two word functions f, g : X* → S we set f ⩽ g if f(u) ⩽ g(u), for every word u ∈ X*.
Further, let S be a semiring and X an alphabet. A weighted finite automaton over S and X is a quadruple A = (m, σ^A, {M_x^A}_{x∈X}, τ^A), where
(A1)
m ∈ N is a natural number, called the dimension or the number of states of A;
(A2)
σ^A ∈ S^{1×m} is a row vector, called the initial weights vector;
(A3)
{M_x^A}_{x∈X} ⊆ S^{m×m} is a family of matrices, called the basic transition matrices of A;
(A4)
τ^A ∈ S^{m×1} is a column vector, called the final weights vector.
From the basic transition matrices, which correspond to the letters of the alphabet X, compound transition matrices are built, which correspond to the words from X*. This is done in the following way. For any word u = x_1 x_2 ⋯ x_k ∈ X*, where x_1, x_2, …, x_k ∈ X, the compound transition matrix M_u^A ∈ S^{m×m} is defined by
M_u^A = M_{x_1}^A ⊗ M_{x_2}^A ⊗ ⋯ ⊗ M_{x_k}^A,
and for the empty word ε we define M_ε^A to be the identity matrix I_m of order m.
The behavior of a weighted finite automaton A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) is the word function ⟦A⟧ : X* → S defined as follows: for u = x_1 x_2 ⋯ x_k ∈ X+, where x_1, x_2, …, x_k ∈ X, we set
⟦A⟧(u) = σ^A ⊗ M_u^A ⊗ τ^A = σ^A ⊗ M_{x_1}^A ⊗ M_{x_2}^A ⊗ ⋯ ⊗ M_{x_k}^A ⊗ τ^A,
and in addition,
⟦A⟧(ε) = σ^A ⊗ M_ε^A ⊗ τ^A = σ^A ⊗ τ^A.
We also say that ⟦A⟧ is the word function computed by A.
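Over the max-plus semiring this definition is directly executable. A minimal Python sketch under our own encoding (a matrix is a list of rows; entries are reals or −∞, and +∞ entries are assumed absent, since IEEE arithmetic would turn −∞ + ∞ into NaN):

```python
# Max-plus behavior of a weighted automaton: ⟦A⟧(u) = σ ⊗ M_{x1} ⊗ ... ⊗ M_{xk} ⊗ τ.
NEG = float('-inf')

def mp_matmul(A, B):
    """Max-plus matrix product: (A ⊗ B)(i, k) = max_j (A(i, j) + B(j, k))."""
    return [[max(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

def behavior(sigma, M, tau, word):
    """Fold the word through the transition matrices; for u = ε this is σ ⊗ τ."""
    row = [sigma]                          # 1 x m row vector
    for x in word:
        row = mp_matmul(row, M[x])
    return mp_matmul(row, [[t] for t in tau])[0][0]

# A two-state automaton over the alphabet {'x'} (the numbers are arbitrary):
sigma = [0, NEG]
M = {'x': [[1, 0], [NEG, 2]]}
tau = [0, 0]
```

For instance, behavior(sigma, M, tau, 'x') evaluates σ ⊗ M_x ⊗ τ and returns 1 for the sample automaton above.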
For a weighted finite automaton A = ( m , σ A , { M x A } x X , τ A ) , its reverse automaton  A ¯ is defined as a weighted finite automaton of the same dimension m, obtained from A by mutually swapping the vectors σ A and τ A , and replacing each transition matrix M x A with its transpose.
One of the main general problems in computer science is the problem of comparing the behavior of two computational systems, algorithms, or models of computation, known as the comparison problem. That problem generally manifests itself in two main forms. One of them is the equivalence problem, where one must determine whether two computational entities produce the same output or exhibit the same behavior for every possible input. The other is the containment problem, where it is necessary to determine whether the behavior or output of one computational entity is completely included within that of another. These problems have a broad range of applications, for example in model checking and verification (to verify whether an implementation matches its specification), in compiler optimization (to ensure that transformed code behaves the same as the original code), and in other areas of computer science. In the context of weighted finite automata, the equivalence problem requires determining whether two given automata A and B have the same behavior, that is, whether ⟦A⟧ = ⟦B⟧, while the containment problem requires determining whether ⟦A⟧ ⩽ ⟦B⟧ (under the condition that the underlying semiring is ordered). If ⟦A⟧ = ⟦B⟧, we say that A and B are equivalent automata.
However, in many cases, including most types of weighted finite automata, the equivalence and containment problems are undecidable or computationally hard (cf. [14,15]). In such situations, it is natural to search for procedures for determining containment or equivalence that may not work in all cases, but can be implemented efficiently. The most known such procedures are based on the concepts of simulation and bisimulation.
Let two weighted finite automata A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) and B = (n, σ^B, {M_x^B}_{x∈X}, τ^B) over an ordered semiring S be given. A matrix U ∈ S^{m×n} is said to be a forward simulation between the automata A and B if it satisfies the following conditions:
(fs1) σ^A ⩽ σ^B ⊗ U;  (fs2) U ⊗ M_x^A ⩽ M_x^B ⊗ U, for all x ∈ X;  (fs3) U ⊗ τ^A ⩽ τ^B.
Conditions (fs1), (fs2), and (fs3) can also be treated as a system of matrix inequations, with an unknown matrix U taking values in S^{m×n}, and forward simulations can equivalently be defined as the solutions of this system. The second type of simulations are backward simulations between A and B, which can be defined as forward simulations between the reverse automata of A and B. By combining the concepts of forward and backward simulations for the matrix U and its transpose U^T, four types of bisimulations are defined. The full definitions of backward simulations and of the four types of bisimulations, given via systems of matrix inequations of a similar form, can be found in [10,11]. Here we will focus only on forward simulations.
Informally speaking, simulations between A and B allow automaton B to mimic the moves of automaton A, and if they exist, they testify to the presence of containment relation between A and B (i.e., they confirm that A B ). On the other hand, bisimulations also allow reverse mimicking, and if they exist, they detect the presence of equivalence between A and B. It is also worth noting that simulations and bisimulations, represented by matrices, specify the degree of connectedness between the states of automata A and B, which is generally not the case when containment or equivalence is determined in some other way.
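For max-plus automata, conditions (fs1)–(fs3) can be checked mechanically once the max-plus matrix product is available. A sketch under our own encoding (matrices as lists of rows, entries real or −∞, and U shaped so that all products below are defined):

```python
# Direct check of the forward-simulation conditions (fs1)-(fs3) over max-plus.
NEG = float('-inf')

def mp_matmul(A, B):
    return [[max(A[i][j] + B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    """Entrywise ⩽ for matrices of equal shape."""
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def is_forward_simulation(U, sigmaA, Mx_A, tauA, sigmaB, Mx_B, tauB):
    """(fs1) σ^A ⩽ σ^B ⊗ U; (fs2) U ⊗ M_x^A ⩽ M_x^B ⊗ U, all x; (fs3) U ⊗ τ^A ⩽ τ^B."""
    fs1 = leq([sigmaA], mp_matmul([sigmaB], U))
    fs2 = all(leq(mp_matmul(U, Mx_A[x]), mp_matmul(Mx_B[x], U)) for x in Mx_A)
    fs3 = leq(mp_matmul(U, [[t] for t in tauA]), [[t] for t in tauB])
    return fs1 and fs2 and fs3
```

As a sanity check, with B = A the max-plus identity matrix (0 on the diagonal, −∞ elsewhere) is always a forward simulation.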
The fundamental problem related to simulations and bisimulations is how to test the existence of a simulation or bisimulation of a given type and, in cases where it exists, how to compute the greatest simulation or bisimulation of that type. In other words, it is the problem of testing the existence of a solution to the corresponding system of matrix inequations and computing the greatest solution, if it exists. Here, this problem will be considered in the context of weighted finite automata over the complete max-plus semiring R, which we call max-plus automata. The name max-plus automaton will also be used for weighted finite automata over certain related semirings, such as the max-plus semiring R max, the above introduced truncations R λ, R υ and R max υ of R and R max, as well as the semirings Z and Z max and their truncations Z λ, Z υ and Z max υ. Max-plus automata originated as a means of representing the behavior of timed discrete event systems with synchronization of tasks and resource sharing, such as, for instance, production systems, railroad networks, urban traffic networks, queuing systems, array processors, and others. The motivation for using the max-plus semiring as the structure of weights also stems from the fact that many phenomena, like synchronization, which are nonlinear in classical systems theory, become linear when one moves from the field of real numbers to the max-plus semiring. For information on the general properties of max-plus automata, as well as bibliographic notes about them, we refer to the papers [4,11,12,15,16,19,25,26,27,28,29,30,31,32,39,40,41].
In applications of max-plus automata within the theory of discrete event dynamic systems, letters from the input alphabet X are called events, the initial weights vector is called the vector of initial delays, the final weights vector is called the vector of final delays, and transition matrices are called matrices of transition times. In addition, word functions taking values in R are called daters (cf. [19,26]). For a sequence of events u ∈ X+, ⟦A⟧(u) can be interpreted as the date when that sequence of events is completed, and ⟦A⟧(u) = +∞ means that the sequence of events will never be completed.
A procedure for testing the existence of forward simulations for max-plus automata over R, and for computing the greatest forward simulation when it exists, was proposed in [11]. The main part of this procedure is the construction of a non-increasing sequence of matrices {U_s}_{s∈N} from R^{m×n}, defined inductively as follows:
U_1 = τ^B / τ^A,   U_{s+1} = U_s ∧ ⋀_{x∈X} (M_x^B ⊗ U_s) / M_x^A.   (19)
The infimum of that sequence, denoted by Û, is the greatest solution of the system of inequations consisting of the inequations from (fs2) and (fs3). Solutions of this system are called forward presimulations between A and B (cf. [37,38]), and therefore, Û is the greatest forward presimulation.
It is interesting to note that U_1 is the greatest solution of the linear inequation (fs3), and when U_s is given, for some s ∈ N, then U_{s+1} is computed as the infimum of U_s and the greatest solution of the system of linear inequations
U ⊗ M_x^A ⩽ M_x^B ⊗ U_s,  for all x ∈ X,
obtained from (fs2) by replacing the unknown matrix U on the right-hand sides of the inequations with the matrix U_s.
If the greatest forward presimulation Û is a non-zero matrix (i.e., it contains an entry different from −∞) and satisfies condition (fs1), then Û is the greatest forward simulation between A and B. Otherwise, if Û is either the zero matrix (all its entries are −∞) or does not satisfy (fs1), then there exists no forward simulation between A and B.
Obviously, the key problem that arises in the practical application of this procedure is the efficient computation of the greatest forward presimulation Û. One way to compute Û efficiently applies in situations where an s ∈ N can be found such that U_s = U_{s+1}; if this happens, then Û = U_s. Therefore, after constructing the matrix U_{s+1} from U_s, we need to check whether they are equal, and once we find the smallest index s such that U_{s+1} = U_s, if such an index exists, we conclude that Û = U_s and the procedure for constructing the sequence (19) terminates. However, it may happen that all members of the sequence are different, so that the sequence is infinite. For instance, for some i ∈ [1..m] and j ∈ [1..n] it may happen that {U_s(i, j)}_{s∈N} is an infinite strictly decreasing sequence of real numbers that tends to −∞. Behind our idea of lower λ-truncation lies the effort to make such sequences finite, that is, to stop them whenever their members fall below λ. We will see that such truncation works particularly well when dealing with max-plus automata whose vectors of initial and final delays and matrices of transition times have integer entries, that is, when working with max-plus automata over Z.
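The iteration (19) can be sketched in Python once the matrix residual N / M — the greatest U with U ⊗ M ⩽ N, computed entrywise as (N / M)(i, j) = min_k (M(j, k) → N(i, k)) from the scalar residuum — is available. This is an illustration under our own encoding, with an explicit step bound precisely because the sequence need not stabilize:

```python
# Sketch of the presimulation iteration (19) over the complete max-plus semiring.
NEG, POS = float('-inf'), float('inf')

def ot(a, b):
    """a ⊗ b with -∞ annihilating (avoids -∞ + +∞ = NaN)."""
    return NEG if a == NEG or b == NEG else a + b

def mp_matmul(A, B):
    return [[max(ot(A[i][j], B[j][k]) for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def resid(a, b):
    """Scalar residuum a → b: the greatest x with a ⊗ x ⩽ b."""
    if a == NEG or b == POS:
        return POS
    if b == NEG or a == POS:
        return NEG
    return b - a

def right_residual(N, M):
    """N / M: the greatest U with U ⊗ M ⩽ N, entrywise min of scalar residua."""
    return [[min(resid(M[j][k], N[i][k]) for k in range(len(N[0])))
             for j in range(len(M))] for i in range(len(N))]

def greatest_presimulation(Mx_A, tauA, Mx_B, tauB, max_steps=1000):
    """U_1 = τ^B / τ^A; U_{s+1} = U_s ∧ ⋀_x (M_x^B ⊗ U_s) / M_x^A.
    Returns U_s once U_{s+1} = U_s, or None if the bound is reached."""
    U = right_residual([[t] for t in tauB], [[t] for t in tauA])
    for _ in range(max_steps):
        V = U
        for x in Mx_A:
            W = right_residual(mp_matmul(Mx_B[x], U), Mx_A[x])
            V = [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(V, W)]
        if V == U:
            return U
        U = V
    return None   # the sequence did not stabilize within the bound
```

On the pair of one-state automata with M_x^A = (1), M_x^B = (0) and τ^A = τ^B = (0), the sequence strictly decreases forever and the function gives up; this is exactly the situation that lower truncation is designed to repair.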
Before we move on to max-plus automata over truncated max-plus semirings, we will show how matrices over these semirings are multiplied.
For arbitrary r ∈ R and k ∈ N_0, in order to distinguish between the k-th multiplicative power of r in the semiring R and the k-th multiplicative power of r in the field of real numbers, the former will be denoted by r^[k]. Note that for real r we simply have r^[k] = kr.
Theorem 4.1. 
For an arbitrary real number ξ, let R ξ be either a lower or an upper ξ-truncation of R, and for an arbitrary s ∈ N let M_1, M_2, …, M_s be matrices with entries in R ξ such that the matrix product M_1 ⊗_ξ M_2 ⊗_ξ ⋯ ⊗_ξ M_s exists. Then
M_1 ⊗_ξ M_2 ⊗_ξ ⋯ ⊗_ξ M_s = (−ξ)^[s−1] ⊗ (M_1 ⊗ M_2 ⊗ ⋯ ⊗ M_s),   (20)
where (−ξ)^[s−1] denotes the (s−1)st power of −ξ in the multiplicative monoid of R.
Proof. 
Before proving the theorem, we will show that a ⊗_ξ b = (−ξ) ⊗ a ⊗ b, for all a, b ∈ R ξ. Indeed, for real a, b ∈ R ξ we have that a ⊗_ξ b = a + b − ξ = (−ξ) + a + b = (−ξ) ⊗ a ⊗ b, whereas in all other cases it follows that a ⊗_ξ b = a ⊗ b = (−ξ) ⊗ a ⊗ b, since either a ⊗ b = −∞ or a ⊗ b = +∞, whereby (−ξ) ⊗ (−∞) = −∞ and (−ξ) ⊗ (+∞) = +∞.
We can now proceed to the proof of the theorem itself, which goes by induction on s. It is clear that (20) is true for s = 1, seeing that
(−ξ)^[1−1] ⊗ M_1 = (−ξ)^[0] ⊗ M_1 = 0 ⊗ M_1 = M_1.
To make the proof clearer, we will also prove the case s = 2. Therefore, we consider matrices M_1 of type m × n and M_2 of type n × p with entries in R ξ. Then for arbitrary i ∈ [1..m] and k ∈ [1..p] we obtain
(M_1 ⊗_ξ M_2)(i, k) = ⊕_{j=1}^{n} M_1(i, j) ⊗_ξ M_2(j, k) = ⊕_{j=1}^{n} (−ξ) ⊗ M_1(i, j) ⊗ M_2(j, k) = (−ξ) ⊗ ⊕_{j=1}^{n} M_1(i, j) ⊗ M_2(j, k) = (−ξ) ⊗ (M_1 ⊗ M_2)(i, k) = ((−ξ) ⊗ (M_1 ⊗ M_2))(i, k),
and hence, M_1 ⊗_ξ M_2 = (−ξ) ⊗ (M_1 ⊗ M_2), which is what was to be proved.
Assume now that the assertion is valid for some s ∈ N, s ⩾ 2, i.e., that (20) holds. Let M_{s+1} be a matrix with entries in R ξ such that the product M_1 ⊗_ξ M_2 ⊗_ξ ⋯ ⊗_ξ M_s ⊗_ξ M_{s+1} exists. According to the induction hypothesis and the properties of scalar multiplication, we get
M_1 ⊗_ξ ⋯ ⊗_ξ M_s ⊗_ξ M_{s+1} = (M_1 ⊗_ξ ⋯ ⊗_ξ M_s) ⊗_ξ M_{s+1} = (−ξ) ⊗ (M_1 ⊗_ξ ⋯ ⊗_ξ M_s) ⊗ M_{s+1} = (−ξ) ⊗ (−ξ)^[s−1] ⊗ (M_1 ⊗ ⋯ ⊗ M_s) ⊗ M_{s+1} = (−ξ)^[s] ⊗ (M_1 ⊗ ⋯ ⊗ M_s ⊗ M_{s+1}).
This completes the proof of the theorem. □
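The identity of Theorem 4.1 is easy to spot-check numerically. A small sketch under our own encoding, with a sample ξ and matrices whose finite entries lie in the corresponding truncation:

```python
# Numeric check of Theorem 4.1: M1 ⊗_ξ M2 = (-ξ) ⊗ (M1 ⊗ M2).
NEG = float('-inf')

def ot(a, b, shift=0):
    """a ⊗ b shifted: a + b - shift on reals, with -∞ annihilating."""
    return NEG if a == NEG or b == NEG else a + b - shift

def matmul(A, B, shift=0):
    return [[max(ot(A[i][j], B[j][k], shift) for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def scalar_mul(c, A):
    return [[ot(c, a) for a in row] for row in A]

xi = -1                       # sample ξ; the finite entries below lie in [ξ, +∞)
M1 = [[2, NEG], [0, 3]]
M2 = [[1, 4], [2, NEG]]
lhs = matmul(M1, M2, shift=xi)           # M1 ⊗_ξ M2
rhs = scalar_mul(-xi, matmul(M1, M2))    # (-ξ) ⊗ (M1 ⊗ M2)
assert lhs == rhs
```

Both sides evaluate to [[4, 7], [6, 5]] here, in agreement with the case s = 2 of the theorem.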
Let A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) be an arbitrary max-plus automaton over the complete max-plus semiring R, and let R λ be the lower λ-truncation of R, where λ is a real number such that all entries of the matrices M_x^A, x ∈ X, and of the vectors σ^A and τ^A belong to R λ. The max-plus automaton over R λ having the same transition matrices M_x^A, x ∈ X, and the same initial and final weights vectors σ^A and τ^A as A will be denoted by A λ and called the lower λ-shift of A.
Similarly, if R υ is the upper υ-truncation of R, where υ is a real number such that all entries of the matrices M_x^A, x ∈ X, and of the vectors σ^A and τ^A belong to R υ, then the max-plus automaton over R υ having the same transition matrices M_x^A, x ∈ X, and initial and final weights vectors σ^A and τ^A as A will be denoted by A υ and called the upper υ-shift of A. This definition can also be adapted to the case when the upper υ-shift A υ is regarded as an automaton over R max υ.
When we say that A λ is the lower λ-shift of A, this includes the assumption that A λ is well-defined, i.e., that all entries of the transition matrices and initial and final weights vectors of A belong to the lower λ-truncation R λ of R . Similarly, when we say that A υ is the upper υ-shift of A, this includes the assumption that A υ is well-defined, i.e., that all entries of the transition matrices and initial and final weights vectors belong to the upper υ-truncation R υ of R .
Theorem 4.2. 
Let A be a max-plus automaton over the complete max-plus semiring R, and let ξ be a real number. If A ξ is either the lower or the upper ξ-shift of A, then
⟦A ξ⟧(u) = (−ξ)^[s+1] ⊗ ⟦A⟧(u),   (21)
for every word u ∈ X* of length s.
Proof. 
This assertion follows immediately from the definition of the behavior of weighted automata and Theorem 4.1. □
Theorem 4.3. 
Let A and B be max-plus automata over the complete max-plus semiring R, and let ξ be a real number. If A ξ and B ξ are either the lower or the upper ξ-shifts of A and B, then
(a)
⟦A⟧ ⩽ ⟦B⟧ if and only if ⟦A ξ⟧ ⩽ ⟦B ξ⟧;
(b)
⟦A⟧ = ⟦B⟧ if and only if ⟦A ξ⟧ = ⟦B ξ⟧.
Proof. 
From (21) it follows directly that ⟦A⟧ ⩽ ⟦B⟧ implies ⟦A ξ⟧ ⩽ ⟦B ξ⟧. From (21) we also obtain that ⟦A⟧(u) = ξ^[s+1] ⊗ ⟦A ξ⟧(u), for each word u ∈ X* of length s, from which it follows that ⟦A ξ⟧ ⩽ ⟦B ξ⟧ implies ⟦A⟧ ⩽ ⟦B⟧. This completes the proof of statement (a).
The statement (b) follows directly from (a). □
Theorem 4.4. 
For any λ ∈ Z and arbitrary max-plus automata A = (m, σ^A, {M_x^A}_{x∈X}, τ^A) and B = (n, σ^B, {M_x^B}_{x∈X}, τ^B) over the lower λ-truncation Z λ of Z, the sequence (19) is finite.
Proof. 
First we note that the semiring Z λ satisfies the descending chain condition, that is, every strictly descending chain in Z λ is finite. Therefore, for any pair (i, j) ∈ [1..m] × [1..n], the non-increasing sequence {U_s(i, j)}_{s∈N} takes only finitely many values, so there exists a smallest member of this sequence. This means that there is a smallest number k_{i,j} ∈ N such that U_{k_{i,j}}(i, j) is the smallest member of the sequence {U_s(i, j)}_{s∈N}, and since the sequence is non-increasing, we obtain that
U_{k_{i,j}}(i, j) = U_l(i, j), for every l ∈ N, l ⩾ k_{i,j}.
Now, for
k = max{ k_{i,j} | (i, j) ∈ [1..m] × [1..n] },
we have that U_k(i, j) = U_l(i, j), for all (i, j) ∈ [1..m] × [1..n] and l ∈ N, l ⩾ k, and consequently, U_k = U_l, for every l ∈ N, l ⩾ k. This completes the proof that the sequence (19) is finite. □
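Theorem 4.4 can be watched in action: carrying out the iteration (19) with the truncated operations ⊗_λ and →_λ needs no step bound for integer weights, since every non-increasing sequence over Z λ stabilizes (with non-integer weights the loop below might not terminate). A sketch under our own encoding:

```python
# Iteration (19) over the lower λ-truncation; terminates for integer weights.
NEG, POS = float('-inf'), float('inf')

def ot_lam(lam, a, b):
    """a ⊗_λ b: shifted multiplication of the lower λ-truncation."""
    if a == NEG or b == NEG:
        return NEG
    if a == POS or b == POS:
        return POS
    return a + b - lam

def resid_lam(lam, a, b):
    """a →_λ b: the greatest x with a ⊗_λ x ⩽ b."""
    if a == NEG or b == POS:
        return POS
    if b == NEG or a == POS:
        return NEG
    return b - a + lam if a <= b else NEG

def presim_trunc(lam, Mx_A, tauA, Mx_B, tauB):
    """Iteration (19) with ⊗_λ and →_λ in place of ⊗ and →."""
    n, m = len(tauB), len(tauA)
    U = [[resid_lam(lam, tauA[j], tauB[i]) for j in range(m)] for i in range(n)]
    while True:
        V = [row[:] for row in U]
        for x in Mx_A:
            # N = M_x^B ⊗_λ U, then W = greatest matrix with W ⊗_λ M_x^A ⩽ N
            N = [[max(ot_lam(lam, Mx_B[x][i][p], U[p][k]) for p in range(n))
                  for k in range(m)] for i in range(n)]
            W = [[min(resid_lam(lam, Mx_A[x][j][k], N[i][k]) for k in range(m))
                  for j in range(m)] for i in range(n)]
            V = [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(V, W)]
        if V == U:
            return U
        U = V
```

On the one-state automata with M_x^A = (1), M_x^B = (0) and τ^A = τ^B = (0), over which the untruncated iteration decreases forever, the truncated iteration (for λ = −3, say) stops after two steps with the zero matrix, i.e., all entries −∞.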
The following example demonstrates the application of lower truncations of the complete max-plus semiring.
Example 4.1. 
For X = { x , y } , let A = ( 2 , σ A , { M x A , M y A } , τ A ) and B = ( 3 , σ B , { M x B , M y B } , τ B ) be max-plus automata over Z given by
M x A = 4 1 2 , M y A = 1 , σ A = 2 , τ A = 0 , M x B = 4 4 2 , M y B = 0 2 1 , σ B = 1 2 , τ B = 1 3 0 .
(a) Computation over the complete max-plus semiring Z : Using formulas from (19), by induction on s we obtain that the members of the sequence { U s } s N are given by
U 1 = + + + 1 3 0 , U s = 0 0 5 2 s 1 3 0 , for s N , s 2 ,
and obviously, the infimum of this sequence is
U ^ = 0 0 1 3 0 .
As we have already said, U ^ is the greatest forward presimulation between A and B. Since
σ B ( U ^ ) = 1 2 0 1 0 3 0 = 1 2 > 2 = σ A ,
we conclude that U ^ satisfies (fs1), so it is the greatest forward simulation between A and B.
(b) Computation over the lower λ-truncation Z λ of Z , for an arbitrary λ 3 : By means of formulas (19) we compute the members of the sequence { U s λ } s N and obtain
U 1 λ = + + + λ , U 2 λ = + + 1 + λ λ , U 3 = U 4 = U ^ λ = λ ,
where U ^ λ denotes the infimum of this sequence. Now we have that
σ B λ ( U ^ λ ) = 1 2 λ λ = 2 = σ A ,
which means that U ^ λ satisfies (fs1) and it is the greatest forward simulation between the lower λ-shifts A λ and B λ .
(c) Computation over the upper υ-truncation Z υ of Z , for υ = 4 : In the case when A and B are regarded as weighted automata over Z υ , for υ = 4 , we obtain an infinite sequence
U 1 υ = + + + 3 1 4 , U s υ = 4 4 8 2 s 3 1 4 , for s N , s 2 ,
whose infimum is
U ^ υ = 4 4 3 1 4 .
It is easy to verify that Û υ satisfies (fs1), which means that it is the greatest forward simulation between the upper υ-shifts A υ and B υ.
(d) The same automata with changed initial weights vectors: Let us change the initial weights vectors of A and B so that
σ A = 1 , σ B = 1 .
Since the initial weight vectors do not affect the computation of the elements of the sequence { U s } s N , we obtain the same sequence as in (a) and its infimum U ^ , and now we have that
σ B ( U ^ ) = 1 0 1 0 3 0 = 1 0 > 1 = σ A .
Therefore, U ^ still satisfies (fs1), and it is the greatest forward simulation between A and B.
When we perform computation over Z λ , for automata A λ and B λ , we also get the same sequence as in (b) and its infimum U ^ λ . However, U ^ λ no longer satisfies (fs1), since
σ B λ ( U ^ λ ) = 1 λ = 1 = σ A ,
which means that in this case there is no forward simulation between automata A λ and B λ .
Let us analyze the previous example. First, it should be pointed out that in cases where the greatest forward presimulation between two max-plus automata over R cannot be computed effectively using standard algorithms and software, because the sequence of matrices that determines it is infinite, one may instead attempt to compute it over a lower truncation of R. Namely, the previous example shows that even in such a case it may happen that the greatest forward presimulation between the lower shifts of these automata can be computed in a finite number of steps, that is, that the sequence of matrices defining it is finite. Moreover, Theorem 4.4 shows that this always occurs for max-plus automata with integer weights. In accordance with Theorem 4.3, this transfers the problem of determining the existence of a containment relation from max-plus automata over R to their lower shifts.
However, the previous example also shows that the application of the method described above has its limitations. Namely, although the slightly modified residuum operation on R λ can ensure that the greatest forward presimulation Û λ is computed over R λ in a finite number of steps, we have seen here that such a residuum operation can make Û λ so small that it no longer satisfies (fs1). Therefore, it is possible to reach a situation where we have two max-plus automata between which a containment relation exists; for the original automata over R it can be detected by a forward simulation that cannot be effectively computed using standard algorithms and software (except perhaps by some AI system capable of using mathematical induction), but it cannot be detected over the lower λ-truncation R λ of R, even though the greatest forward presimulation over R λ can be computed effectively.
Finally, this example also shows that the application of upper truncations does not yield results as good as those obtained with lower truncations.
The second example, presented below, shows that even when the greatest forward presimulation between two max-plus automata over R can be computed in a finite number of steps (when the corresponding sequence of matrices is finite), the use of lower truncations can still yield better results, in the sense that the computation can be carried out faster, in fewer steps.
Example 4.2. 
For $X = \{x, y\}$, let $A = (3, \sigma^A, \{M_x^A, M_y^A\}, \tau^A)$ and $B = (2, \sigma^B, \{M_x^B, M_y^B\}, \tau^B)$ be max-plus automata over $\mathbb{Z}_\infty$ given by
$$M_x^A = \begin{pmatrix} 10 & 3 & 4 \\ 5 & 10 & 3 \\ 4 & 6 & 7 \end{pmatrix}, \quad M_y^A = \begin{pmatrix} 5 & 6 & 2 \\ 3 & 3 & 4 \\ 7 & 7 & 10 \end{pmatrix}, \quad \sigma^A = \begin{pmatrix} 5 & 0 & 0 \end{pmatrix}, \quad \tau^A = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$
$$M_x^B = \begin{pmatrix} 10 & 6 \\ 6 & 7 \end{pmatrix}, \quad M_y^B = \begin{pmatrix} 6 & 6 \\ 7 & 10 \end{pmatrix}, \quad \sigma^B = \begin{pmatrix} 5 & 0 \end{pmatrix}, \quad \tau^B = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
(a) Computation over the complete max-plus semiring $\mathbb{Z}_\infty$: Computing the members of the sequence $\{U_s\}_{s \in \mathbb{N}}$ using formulas (19), we obtain
$$U_1 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad U_2 = \begin{pmatrix} 0 & -3 \\ 0 & -3 \\ -4 & 0 \end{pmatrix}, \quad U_3 = U_4 = \widehat{U} = \begin{pmatrix} 0 & -4 \\ 0 & -4 \\ -4 & 0 \end{pmatrix},$$
and since U ^ satisfies (fs1), it is the greatest forward simulation between A and B.
(b) Computation over the lower $\lambda$-truncation $\mathbb{Z}_\infty^\lambda$ of $\mathbb{Z}_\infty$, for $\lambda = 0$: When the computation from (a) is performed over $\mathbb{Z}_\infty^\lambda$, we obtain a sequence $\{U_s^\lambda\}_{s \in \mathbb{N}}$ whose members are given by
$$U_1^\lambda = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad U_2^\lambda = U_3^\lambda = \widehat{U}^\lambda = \begin{pmatrix} 0 & -\infty \\ 0 & -\infty \\ -\infty & 0 \end{pmatrix},$$
and $\widehat{U}^\lambda$ is also a forward simulation between $A_\lambda$ and $B_\lambda$, since it satisfies (fs1).
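As a quick machine check of part (a), the matrix $\widehat{U}$ can be verified against the forward simulation conditions. The sketch below is illustrative only: it assumes the conditions take the transposed form $\sigma^A \leq \sigma^B \otimes U^\top$, $U^\top \otimes M_x^A \leq M_x^B \otimes U^\top$ and $U^\top \otimes \tau^A \leq \tau^B$ (our reading of the source's conditions), and it hard-codes the data of Example 4.2.

```python
# Max-plus (tropical) matrix toolkit and a check of the greatest forward
# simulation computed in Example 4.2 (a). Illustrative sketch only.

def mp_mul(A, B):
    """Max-plus matrix product: (A (x) B)[i][j] = max_k (A[i][k] + B[k][j])."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    """Entrywise comparison A <= B."""
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def tr(A):
    """Matrix transpose."""
    return [list(r) for r in zip(*A)]

# Data of Example 4.2 (weights read off the displayed matrices).
Mx_A = [[10, 3, 4], [5, 10, 3], [4, 6, 7]]
My_A = [[5, 6, 2], [3, 3, 4], [7, 7, 10]]
sigma_A, tau_A = [[5, 0, 0]], [[1], [1], [1]]
Mx_B = [[10, 6], [6, 7]]
My_B = [[6, 6], [7, 10]]
sigma_B, tau_B = [[5, 0]], [[1], [1]]

# The greatest forward simulation from part (a).
U = [[0, -4], [0, -4], [-4, 0]]
Ut = tr(U)

assert leq(mp_mul(Ut, Mx_A), mp_mul(Mx_B, Ut))   # transition condition, letter x
assert leq(mp_mul(Ut, My_A), mp_mul(My_B, Ut))   # transition condition, letter y
assert leq(mp_mul(Ut, tau_A), tau_B)             # final-weight condition
assert leq(sigma_A, mp_mul(sigma_B, Ut))         # initial-weight condition (fs1)
```

All four assertions pass for the matrix computed in part (a), which is a useful cross-check whenever the iteration is implemented by hand.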
The last theorem of this section establishes a connection between forward presimulations and simulations between max-plus automata over R and forward presimulations and simulations between their lower and upper shifts.
Theorem 4.5. 
Let A and B be max-plus automata of dimensions m and n over the complete max-plus semiring $\mathbb{R}_\infty$, let $\xi \in \mathbb{R}$, let $A_\xi$ and $B_\xi$ be either the lower or the upper $\xi$-shifts of A and B, and let U be an $m \times n$ matrix with entries in $\mathbb{R}_\infty^\xi$. Then
(a)
U is a forward presimulation between $A_\xi$ and $B_\xi$ if and only if $(-\xi) \otimes U$ is a forward presimulation between A and B.
However, even if U is the greatest forward presimulation between $A_\xi$ and $B_\xi$, $(-\xi) \otimes U$ need not be the greatest forward presimulation between A and B.
(b)
U is a forward simulation between $A_\xi$ and $B_\xi$ if and only if $(-\xi) \otimes U$ is a forward simulation between A and B.
However, even if U is the greatest forward simulation between $A_\xi$ and $B_\xi$, $(-\xi) \otimes U$ need not be the greatest forward simulation between A and B.
Proof. 
(a) According to Theorem 4.1 and the properties of scalar multiplication, for an arbitrary $x \in X$ we get
$$U^\top \otimes_\xi M_x^A \leq M_x^B \otimes_\xi U^\top \iff (-\xi) \otimes U^\top \otimes M_x^A \leq (-\xi) \otimes M_x^B \otimes U^\top \iff (-\xi) \otimes U^\top \otimes M_x^A \leq M_x^B \otimes \bigl((-\xi) \otimes U^\top\bigr) \iff \bigl((-\xi) \otimes U\bigr)^\top \otimes M_x^A \leq M_x^B \otimes \bigl((-\xi) \otimes U\bigr)^\top,$$
and also
$$U^\top \otimes_\xi \tau^A \leq \tau^B \iff (-\xi) \otimes U^\top \otimes \tau^A \leq \tau^B \iff \bigl((-\xi) \otimes U\bigr)^\top \otimes \tau^A \leq \tau^B,$$
and therefore, U is a forward presimulation between $A_\xi$ and $B_\xi$ if and only if $(-\xi) \otimes U$ is a forward presimulation between A and B.
Further, consider the max-plus automata A and B over $\mathbb{R}_\infty$ from Example 4.1. If we assume that $\xi = \lambda$ and U is the greatest forward presimulation between the lower $\lambda$-shifts $A_\lambda$ and $B_\lambda$ computed in Example 4.1 (b), that is,
U = λ ,
then
( λ ) U = ( λ ) λ = 0 < 0 0 1 3 0 ,
where the last matrix is the greatest forward presimulation between A and B computed in Example 4.1 (a). Thus, $(-\lambda) \otimes U$ is not the greatest forward presimulation between A and B.
(b) According to Theorem 4.1 and the properties of scalar multiplication, we have that
$$\sigma^A \leq \sigma^B \otimes_\xi U^\top \iff \sigma^A \leq (-\xi) \otimes \sigma^B \otimes U^\top \iff \sigma^A \leq \sigma^B \otimes \bigl((-\xi) \otimes U\bigr)^\top,$$
and in accordance with (a), U is a forward simulation between $A_\xi$ and $B_\xi$ if and only if $(-\xi) \otimes U$ is a forward simulation between A and B.
The remaining part of the proof of claim (b) is contained in the proof of (a). □
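The scalar-shift equivalence at the heart of this proof can also be checked by brute force. The sketch below is a hedged illustration: it assumes the $\xi$-shifted product has the form $a \otimes_\xi b = a + b - \xi$ (our reading of the shift construction) and verifies, over all $2 \times 2$ matrices with entries in $\{-2, 1\}$, that U satisfies the shifted transition condition if and only if $(-\xi) \otimes U$ satisfies the ordinary one.

```python
from itertools import product

def mp(A, B):
    """Ordinary max-plus matrix product."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_shift(A, B, xi):
    """Product of the xi-shifted semiring: a (x)_xi b = a + b - xi."""
    return [[e - xi for e in row] for row in mp(A, B)]

def scal(c, A):
    """Scalar max-plus multiple: (c (x) A)[i][j] = c + A[i][j]."""
    return [[c + e for e in row] for row in A]

def leq(A, B):
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def tr(A):
    return [list(r) for r in zip(*A)]

xi = 3
mats = [[[a, b], [c, d]] for a, b, c, d in product((-2, 1), repeat=4)]

# For every U, M_A, M_B:  U' (x)_xi M_A <= M_B (x)_xi U'   holds iff
# ((-xi)(x)U)' (x) M_A <= M_B (x) ((-xi)(x)U)',  where ' denotes transpose.
agree = all(
    leq(mp_shift(tr(U), MA, xi), mp_shift(MB, tr(U), xi))
    == leq(mp(tr(scal(-xi, U)), MA), mp(MB, tr(scal(-xi, U))))
    for U in mats for MA in mats for MB in mats
)
assert agree
```

The check succeeds because both sides of each comparison differ from the unshifted products by the same constant $-\xi$, which is exactly the cancellation the proof exploits.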
Remark 4.1. 
Let us once again consider the automata A and B from Example 4.1. As shown in part (c) of that example, for $\xi = \upsilon = 4$, the greatest forward simulation between the upper $\upsilon$-shifts $A_\upsilon$ and $B_\upsilon$ is
U = 4 4 3 1 4 .
Then
( ξ ) U = ( 4 ) 4 4 3 1 4 = 0 0 1 3 0 ,
and it coincides with the greatest forward simulation between A and B.
Thus, if U is the greatest forward presimulation or simulation between $A_\xi$ and $B_\xi$, then $(-\xi) \otimes U$ need not be, but may be, the greatest presimulation or simulation between A and B.

5. Remarks on Weak Simulations and Bisimulations for Max-Plus Automata

Let $A = (m, \sigma^A, \{M_x^A\}_{x \in X}, \tau^A)$ be a max-plus automaton over the complete max-plus semiring $\mathbb{R}_\infty$ and an alphabet X. For any word $u \in X^*$ we define vectors $\sigma_u^A \in \mathbb{R}_\infty^{1 \times m}$ and $\tau_u^A \in \mathbb{R}_\infty^{m \times 1}$ by
$$\sigma_u^A = \sigma^A \otimes M_u^A, \qquad \tau_u^A = M_u^A \otimes \tau^A.$$
According to this definition, for the empty word $\varepsilon$ we have that $\sigma_\varepsilon^A = \sigma^A$ and $\tau_\varepsilon^A = \tau^A$. The vectors $\sigma_u^A$ and $\tau_u^A$ will be referred to, respectively, as the $\sigma$-vector and the $\tau$-vector corresponding to the word $u \in X^*$.
In the context of max-plus automata, the vectors $\sigma_u^A$ are known as generalized daters (cf. [20,30]) or state vectors (cf. [16,31]). The vectors $\sigma_u^A$ and $\tau_u^A$ play a very important role in the determinization of max-plus automata, and recently they have also been used to define generalizations of simulations and bisimulations for max-plus automata, called weak simulations and weak bisimulations [33]. Namely, for max-plus automata $A = (m, \sigma^A, \{M_x^A\}_{x \in X}, \tau^A)$ and $B = (n, \sigma^B, \{M_x^B\}_{x \in X}, \tau^B)$, a matrix $U \in \mathbb{R}_\infty^{m \times n}$ is called a weak forward simulation between A and B if it satisfies the following conditions:
$$(\mathrm{wfs1})\ \ \sigma^A \leq \sigma^B \otimes U^\top, \qquad (\mathrm{wfs2})\ \ U^\top \otimes \tau_u^A \leq \tau_u^B, \ \text{for all } u \in X^*.$$
Weak backward simulations between A and B are defined as weak forward simulations between the reverse automata of A and B, and by combining the concepts of weak forward and backward simulations for the matrix U and its transpose $U^\top$, two types of weak bisimulations are defined. Here, the conditions (wfs1) and (wfs2) can also be treated as a system of matrix inequations with the matrix U as the unknown, and weak forward simulations can therefore be defined as the solutions of that system. Other types of weak simulations and bisimulations can also be defined as solutions of appropriate systems of matrix inequations. A matrix $U \in \mathbb{R}_\infty^{m \times n}$ that is a solution of the system composed of the inequations from (wfs2) will be called a weak forward presimulation between A and B.
The procedure for testing the existence of a weak forward simulation between A and B, and computing the greatest one when it exists, is partly similar to the procedure described above for ordinary forward simulations, but there are also significant differences. First, the greatest solution $\widehat{U}$ of the system consisting of the matrix inequations from (wfs2) is computed, which is given by the following formula:
$$\widehat{U} = \bigwedge_{u \in X^*} \tau_u^A \backslash \tau_u^B. \tag{23}$$
In other words, $\widehat{U}$ is the greatest weak forward presimulation between A and B. If $\widehat{U}$ is a non-zero matrix and satisfies (wfs1), then $\widehat{U}$ is the greatest weak forward simulation between A and B. Otherwise, if $\widehat{U}$ is either the zero matrix or does not satisfy (wfs1), then there exists no weak forward simulation between A and B. The main problem that arises when computing the matrix $\widehat{U}$ is the possibility that the system (wfs2) consists of infinitely many inequations, which means that, before computing $\widehat{U}$, one must compute infinitely many pairs of the form $(\tau_u^A, \tau_u^B)$, which will be called $\tau$-pairs. In the following example, we will see that this problem cannot be resolved by passing to the lower or upper truncations of the semiring $\mathbb{R}_\infty$, that is, to the corresponding lower or upper shifts of the original automata, since the sets of $\tau$-pairs may be infinite in these cases as well.
Example 5.1. 
For $X = \{x\}$, let $A = (2, \sigma^A, \{M_x^A\}, \tau^A)$ and $B = (3, \sigma^B, \{M_x^B\}, \tau^B)$ be max-plus automata over $\mathbb{Z}_\infty$ given by
$$M_x^A = \begin{pmatrix} 4 & -\infty \\ -\infty & 2 \end{pmatrix}, \quad \sigma^A = \begin{pmatrix} -\infty & 2 \end{pmatrix}, \quad \tau^A = \begin{pmatrix} -\infty \\ 0 \end{pmatrix}, \quad M_x^B = \begin{pmatrix} -\infty & -\infty & 1 \\ -\infty & 4 & -\infty \\ 1 & -\infty & 2 \end{pmatrix}, \quad \sigma^B = \begin{pmatrix} 1 & -\infty & 2 \end{pmatrix}, \quad \tau^B = \begin{pmatrix} -1 \\ -3 \\ 0 \end{pmatrix}.$$
(a) Computation over the complete max-plus semiring $\mathbb{Z}_\infty$: The $\tau$-pairs computed over $\mathbb{Z}_\infty$ are given by
$$(\tau_{x^k}^A, \tau_{x^k}^B) = \left( \begin{pmatrix} -\infty \\ 2k \end{pmatrix}, \begin{pmatrix} 2k-1 \\ 4k-3 \\ 2k \end{pmatrix} \right), \quad \text{for all } k \in \mathbb{N}_0,$$
and the corresponding residuals are
$$\tau_{x^k}^A \backslash \tau_{x^k}^B = \begin{pmatrix} -\infty \\ 2k \end{pmatrix} \backslash \begin{pmatrix} 2k-1 \\ 4k-3 \\ 2k \end{pmatrix} = \begin{pmatrix} +\infty & +\infty & +\infty \\ -1 & 2k-3 & 0 \end{pmatrix}, \quad \text{for all } k \in \mathbb{N}_0.$$
As the sequence $\{2k-3\}_{k \in \mathbb{N}_0}$ is increasing, its smallest member is obtained for $k = 0$ and is equal to $-3$, so the greatest weak forward presimulation between A and B is given by
$$\widehat{U} = \bigwedge_{k \in \mathbb{N}_0} \tau_{x^k}^A \backslash \tau_{x^k}^B = \begin{pmatrix} +\infty & +\infty & +\infty \\ -1 & -3 & 0 \end{pmatrix}.$$
U ^ is also the greatest weak forward simulation between A and B, since it satisfies (wfs1).
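The computation in part (a) can be replayed mechanically. The sketch below assumes our reconstruction of the data of Example 5.1 (several $-\infty$ entries appear to have been lost in typesetting) and the residual $(\tau_u^A \backslash \tau_u^B)(i,j) = \tau_u^B(j) - \tau_u^A(i)$; it truncates the infinite system (wfs2) at the first eight $\tau$-pairs, which is already enough for the running infimum to stabilize.

```python
INF, NEG = float("inf"), float("-inf")

def mp_vec(M, v):
    """Max-plus matrix-vector product."""
    return [max(M[i][j] + v[j] for j in range(len(v))) for i in range(len(M))]

def residual(tA, tB):
    """Residual (tA \\ tB)[i][j] = tB[j] - tA[i]; rows with tA[i] = -inf give +inf.
    (Assumes tB has no -inf entries, which holds for this example.)"""
    return [[INF if a == NEG else b - a for b in tB] for a in tA]

# Reconstructed data of Example 5.1 (the -inf placements are our assumption).
M_A = [[4, NEG], [NEG, 2]]
tau_A = [NEG, 0]
M_B = [[NEG, NEG, 1], [NEG, 4, NEG], [1, NEG, 2]]
tau_B = [-1, -3, 0]

U = [[INF] * 3 for _ in range(2)]      # running infimum of the residuals
tA, tB = tau_A, tau_B
for _ in range(8):                     # tau-pairs for x^0, ..., x^7
    R = residual(tA, tB)
    U = [[min(u, r) for u, r in zip(ru, rr)] for ru, rr in zip(U, R)]
    tA, tB = mp_vec(M_A, tA), mp_vec(M_B, tB)

assert U == [[INF, INF, INF], [-1, -3, 0]]   # greatest weak forward presimulation
```

The running infimum already equals its final value after the first residual, matching the observation that the smallest member of $\{2k-3\}$ occurs at $k = 0$.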
(b) Computation over the lower $\lambda$-truncation $\mathbb{Z}_\infty^\lambda$ of $\mathbb{Z}_\infty$, for $\lambda = -3$: When we compute the $\tau$-pairs over $\mathbb{Z}_\infty^\lambda$ we obtain that
$$(\tau_{x^k}^A, \tau_{x^k}^B) = \left( \begin{pmatrix} -\infty \\ 5k \end{pmatrix}, \begin{pmatrix} 5k-1 \\ 7k-3 \\ 5k \end{pmatrix} \right), \quad \text{for all } k \in \mathbb{N}_0,$$
and the corresponding residuals are
τ ε A λ τ ε B = + + + 1 3 , τ x A λ τ x B = + + + 3 , τ x k A λ τ x k B = + + + 2 k 6 3 , for all k N , k 2 .
Therefore, in this case the greatest weak forward presimulation is
$$\widehat{U}^\lambda = \bigwedge_{k \in \mathbb{N}_0} \tau_{x^k}^A \backslash_\lambda \tau_{x^k}^B = \begin{pmatrix} +\infty & +\infty & +\infty \\ -\infty & -\infty & -3 \end{pmatrix},$$
and since it satisfies (wfs1), U ^ λ is the greatest weak forward simulation between A λ and B λ .
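The $5k$ growth pattern in part (b) comes from the shifted product of the lower truncation. The following is a minimal sketch under two stated assumptions: the shifted product has the form $a \otimes_\lambda b = a + b - \lambda$, and the automata data are our reconstruction of Example 5.1. With $\lambda = -3$, every application of a transition matrix then contributes an extra $3$, so the finite entries of the $\tau$-vectors grow by $5$ per letter.

```python
NEG = float("-inf")

def mp_vec_shift(M, v, lam):
    """Matrix-vector product of the lower lambda-truncation:
    a (x)_lam b = a + b - lam."""
    return [max(M[i][j] + v[j] - lam for j in range(len(v))) for i in range(len(M))]

lam = -3
# Reconstructed data of Example 5.1 (the -inf placements are our assumption).
M_A = [[4, NEG], [NEG, 2]]
M_B = [[NEG, NEG, 1], [NEG, 4, NEG], [1, NEG, 2]]
tA, tB = [NEG, 0], [-1, -3, 0]

for k in range(1, 5):
    tA = mp_vec_shift(M_A, tA, lam)
    tB = mp_vec_shift(M_B, tB, lam)
    assert tA == [NEG, 5 * k]                   # tau-vector of the lower shift of A
    assert tB == [5 * k - 1, 7 * k - 3, 5 * k]  # tau-vector of the lower shift of B
```

The assertions confirm the closed forms displayed above for the first few words $x^k$, under the stated reading of the shifted product.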
(c) Computation over the upper υ-truncation Z υ of Z , for υ = 4 : Computation of the τ -pairs over Z υ yields
( τ x k A , τ x k B ) = 2 k , 2 k 1 3 2 k , for all k N 0 ,
and the corresponding residuals are
τ ε A υ τ ε B = + + + 4 1 4 , τ x A υ τ x B = + + + 3 3 4 , τ x k A υ τ x k B = + + + 3 4 4 , for all k N , k 2 .
Consequently, here we have that the greatest weak forward presimulation is
U ^ υ = k N 0 τ x k A υ τ x k B = + + + 3 1 4 ,
and since it satisfies (wfs1), U ^ υ is the greatest weak forward simulation between A υ and B υ .
The algorithm proposed in [33] computes the greatest weak forward presimulation between two max-plus automata by calculating the infimum (23) step by step, simultaneously with the computation of the $\tau$-pairs and their residuals. More precisely, in each step of the algorithm, a new pair $(\tau_{xu}^A, \tau_{xu}^B) = (M_x^A \otimes \tau_u^A, M_x^B \otimes \tau_u^B)$ is computed from some previously computed pair $(\tau_u^A, \tau_u^B)$; after that, in the same step, the algorithm computes the residual $\tau_{xu}^A \backslash \tau_{xu}^B$ of that pair, and then the infimum of this residual with all previously computed residuals of $\tau$-pairs. In other words, the infimum (23) is updated at each step, and its computation terminates after all $\tau$-pairs have been computed, provided that there are finitely many of them.
Besides the fact that passing to the computation of τ -pairs over the lower and upper truncations of the semiring R does not necessarily ensure the finiteness of the set of τ -pairs, the previous example shows something else as well. Namely, in part (a) of this example, the infimum (23) is reached already in the first step, while in parts (b) and (c) it is reached in the second step, and all subsequent computations of τ -pairs and their residuals are unnecessary, since these residuals do not change the infimum that has already been reached. This naturally raises the question of whether a termination criterion can be found for such an algorithm that would recognize that the infimum has been reached and terminate the further generation of τ -pairs. Such a criterion will be the subject of our future research. Our further research will also address different methods of truncating the complete max-plus semiring, which will ensure that the sets of τ -pairs over such truncated semirings are necessarily finite.

Author Contributions

Conceptualization, J.M., M.Ć., J.I.,and I.M.; methodology, M.Ć. and J.I.; software, J.M.; validation, M.Ć., J.M.; formal analysis, J.M., M.Ć., J.I.,and I.M.; investigation, J.M., M.Ć., J.I.,and I.M.; resources, J.M.; data curation, J.M.; writing – original draft preparation, M.Ć. and J.M.; writing – review and editing, J.M., M.Ć., J.I.,and I.M.; visualization, J.M. and M.Ć.; supervision, M.Ć., J.I. and I.M.; project administration, J.M. and M.Ć.; funding acquisition, M.Ć. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science, Technological Development and Innovation, Republic of Serbia, Contract No. 451-03-137/2025-03/200124.

Data Availability Statement

The data that support the findings of this study are available on request to the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Akian, M.; Gaubert, S.; Guterman, A. Tropical polyhedra are equivalent to mean payoff games. Internat. J. Algebra Comput. 2012, 22(1), 1250001.
  2. Baccelli, F.; Cohen, G.; Olsder, G.; Quadrat, J. Synchronization and Linearity; John Wiley and Sons Ltd: Chichester, 1992.
  3. Bezem, M.; Nieuwenhuis, R.; Rodríguez-Carbonell, E. Exponential behaviour of the Butkovič–Zimmermann algorithm for solving two-sided linear systems in max-algebra. Discrete Appl. Math. 2008, 156(18), 3506–3509.
  4. Boukra, R.; Lahaye, S.; Boimond, J.-L. New representations for (max,+) automata with applications to the performance evaluation of discrete event systems. IFAC PapersOnLine 2012, 45(29), 116–121.
  5. Butkovič, P. Max-linear Systems: Theory and Algorithms; Springer: London, 2010.
  6. Butkovič, P.; Zimmermann, K. A strongly polynomial algorithm for solving two-sided linear systems in max-algebra. Discrete Appl. Math. 2006, 154(3), 437–446.
  7. Cassandras, C.G.; Lafortune, S. Introduction to Discrete Event Systems; Springer: New York, 2008.
  8. Ćirić, M.; Ignjatović, J.; Damljanović, N.; Bašić, M. Bisimulations for fuzzy automata. Fuzzy Sets Syst. 2012, 186, 100–139.
  9. Ćirić, M.; Ignjatović, J.; Jančić, I.; Damljanović, N. Computation of the greatest simulations and bisimulations between fuzzy automata. Fuzzy Sets Syst. 2012, 208, 22–42.
  10. Ćirić, M.; Ignjatović, J.; Stanimirović, P.S. Bisimulations for weighted finite automata over semirings. Research Square 2022.
  11. Ćirić, M.; Micić, I.; Matejić, J.; Stamenković, A. Simulations and bisimulations for max-plus automata. Discrete Event Dyn. Syst. 2024, 34, 269–295.
  12. Colcombet, T.; Daviaud, L.; Zuleger, F. Size-change abstraction and max-plus automata. In Mathematical Foundations of Computer Science (MFCS 2014); Csuhaj-Varjú, E., Dietzfelbinger, M., Ésik, Z., Eds.; Lecture Notes in Computer Science, vol. 8634; Springer: Berlin-Heidelberg, 2014; pp. 208–219.
  13. Damljanović, N.; Ćirić, M.; Ignjatović, J. Bisimulations for weighted automata over an additively idempotent semiring. Theor. Comput. Sci. 2014, 534, 86–100.
  14. Daviaud, L. Containment and equivalence of weighted automata: Probabilistic and max-plus cases. In Language and Automata Theory and Applications (LATA 2020); Leporati, A., Martín-Vide, C., Shapira, D., Zandron, C., Eds.; Lecture Notes in Computer Science, vol. 12038; Springer: Cham, 2020; pp. 17–32.
  15. Daviaud, L.; Guillon, P.; Merlet, G. Comparison of max-plus automata and joint spectral radius of tropical matrices. In Mathematical Foundations of Computer Science (MFCS 2017); Larsen, K., Bodlaender, H., Raskin, J.F., Eds.; Leibniz International Proceedings in Informatics (LIPIcs), vol. 83; 2017; pp. 19:1–19:14.
  16. Daviaud, B.; Lahaye, S.; Lhommeau, M.; Komenda, J. On the existence of simulations for max-plus automata. IEEE Control Syst. Lett. 2024, 8, 694–699.
  17. Droste, M.; Kuich, W. Semirings and formal power series. In Handbook of Weighted Automata; Droste, M., Kuich, W., Vogler, H., Eds.; Monographs in Theoretical Computer Science, An EATCS Series; Springer: Berlin-Heidelberg, 2009; pp. 3–28.
  18. Eklund, P.; Gutiérrez García, J.; Höhle, U.; Kortelainen, J. Semigroups in Complete Lattices; Developments in Mathematics; Springer: Cham, 2018.
  19. Gaubert, S. Performance evaluation of (max,+) automata. IEEE Trans. Automat. Control 1995, 40(12), 2014–2025.
  20. Gaubert, S.; Mairesse, J. Modeling and analysis of timed Petri nets using heaps of pieces. IEEE Trans. Automat. Control 1999, 44(4), 683–698.
  21. Golan, J.S. Semirings and their Applications; Springer: Dordrecht, 1999.
  22. Gondran, M.; Minoux, M. Graphs, Dioids and Semirings: New Models and Algorithms; Springer: New York, 2008.
  23. Ignjatović, J.; Ćirić, M. Moore–Penrose equations in involutive residuated semigroups and involutive quantales. Filomat 2017, 31(2), 183–196.
  24. Jančić, I. Weak bisimulations for fuzzy automata. Fuzzy Sets Syst. 2014, 249, 49–72.
  25. Komenda, J.; Lahaye, S.; Boimond, J.-L. Supervisory control of (max,+) automata: A behavioral approach. Discrete Event Dyn. Syst. 2009, 19(4), 525–549.
  26. Komenda, J.; Lahaye, S.; Boimond, J.-L. (Max,+)-automata with partial observations. IFAC PapersOnLine 2018, 51(7), 192–197.
  27. Komenda, J.; Lahaye, S.; Boimond, J.-L.; van den Boom, T. Max-plus algebra and discrete event systems. IFAC PapersOnLine 2017, 50(1), 1784–1790.
  28. Komenda, J.; Lahaye, S.; Boimond, J.-L.; van den Boom, T. Max-plus algebra in the history of discrete event systems. Annual Rev. Control 2018, 45, 240–249.
  29. Lahaye, S.; Komenda, J.; Boimond, J.-L. Modeling of timed Petri nets using deterministic (max,+) automata. IFAC Proceedings Volumes 2014, 47(2), 471–476.
  30. Lahaye, S.; Komenda, J.; Boimond, J.-L. Compositions of (max,+) automata. Discrete Event Dyn. Syst. 2015, 25, 323–344.
  31. Lahaye, S.; Lai, A.; Komenda, J.; Boimond, J.-L. A contribution to the determinization of max-plus automata. Discrete Event Dyn. Syst. 2020, 30, 155–174.
  32. Lombardy, S.; Mairesse, J. Max-plus automata. In Handbook of Automata Theory; Pin, J.-É., Ed.; EMS Press, 2021; Vol. 1, pp. 151–188.
  33. Matejić, J.; Micić, I.; Ćirić, M. Weak simulations and bisimulations for max-plus automata. Filomat, submitted for publication.
  34. Paseka, J.; Rosický, J. Quantales. In Current Research in Operational Quantum Logic: Algebras, Categories, Languages; Coecke, B., Moore, D., Wilce, A., Eds.; Springer: Dordrecht, 2000; pp. 245–262.
  35. Rosenthal, K.I. Quantales and Their Applications; Pitman Research Notes in Mathematics Series; Longman Higher Education: Harlow, England, 1990.
  36. Stamenković, A.; Ćirić, M.; Djurdjanović, D. Weakly linear systems for matrices over the max-plus quantale. Discrete Event Dyn. Syst. 2022, 32, 1–25.
  37. Stanković, M.; Ćirić, M.; Ignjatović, J. Hennessy–Milner type theorems for fuzzy multimodal logics over Heyting algebras. J. Mult.-Valued Logic Soft Comput. 2022, 39(2–4), 341–379.
  38. Stanković, M.; Ćirić, M.; Ignjatović, J. Simulations and bisimulations for fuzzy multimodal logics over Heyting algebras. Filomat 2023, 37(3), 711–743.
  39. Triska, L.; Moor, T. Behaviour equivalent max-plus automata for timed Petri nets under open-loop race-policy semantics. Discrete Event Dyn. Syst. 2021, 31, 583–607.
  40. Urabe, N.; Hasuo, I. Quantitative simulations by matrices. Inf. Comput. 2017, 252, 110–137.
  41. Uyttendaele, J.; Van Hoeck, I.; Besinovic, N.; Vansteenwegen, P. Timetable compression using max-plus automata applied to large railway networks. TOP 2023, 31, 414–439.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.