Preprint
Article

Optimal Opacity-Enforcing Supervisory Control of Discrete Event Systems on Choosing Cost

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 08 February 2024; Posted: 08 February 2024
Abstract
To enforce opacity, one optimization problem is to retain as many of the system's event sequences as possible. A contrasting optimization problem is to preserve as few event sequences as possible. In this paper, based on the notion of choosing cost, an optimal opacity-enforcing model is proposed that minimizes the discounted total choosing cost while not only preserving opacity but also retaining as much of the secret as possible. To solve the model, two scenarios concerning opacity are considered. For each scenario, algorithms based on dynamic programming are presented to compute an optimal solution of the model, and the correctness of the solutions produced by the algorithms is established by theoretical proofs. Finally, illustrative examples for the algorithms are given.
Keywords: 
Subject: Engineering  -   Control and Systems Engineering

1. Introduction

In modern society, when important enterprise information is released online, it generally needs to be as complete as possible. To keep important data transmissions secure and opaque, some unimportant data must be used to confuse an adversary and make the important data difficult to distinguish. The more information is used to confuse the important data, the better. Therefore, several papers [e.g., 1,2,3,4] have studied the maximal opaque sublanguage. In practice, however, transmitting data incurs a cost, so it is preferable to use less unimportant data and lower the transmission cost. This paper therefore proposes an optimization model that selects the most cost-effective information with which to confuse the important data to be released, and presents algorithms to obtain the optimal control strategy.
In 2004, opacity was first introduced to analyze cryptographic protocols in [5]. In 2005, work modeled with Petri nets [6] brought opacity to the field of discrete event systems (DES). Afterwards, research on opacity in DES flourished. In DES models, the notions of opacity fall into two groups: language-based opacity [1,7,8] (e.g., strong opacity [7], weak opacity [7] and non-opacity [7]) and state-based opacity [6,9,10,11,12,13] (e.g., current-state opacity [6,9], initial-state opacity [6], initial-and-final-state opacity [13], K-step opacity [9,10,11], infinite-step opacity [12,14]). [15] extended the work of [13] and showed that the various notions of opacity can be transformed into each other. Then, [16] unified the existing notions of opacity and provided a general framework. Along with these definitions, verification approaches were investigated in previous works. Once a system is not opaque, supervisory control theory [1,4,17] or an enforcement approach [18,19,20,21,22] can be used to ensure opacity. In general, supervisory control restricts the behavior of the system to ensure opacity, whereas enforcement approaches do not restrict the system but modify its output. For example, [1] used fixed-point theory to check the opacity of the closed-loop system at every iteration and obtained the maximally permissive supervisor on condensed state estimates. [4] obtained the maximally permissive supervisor by refining the plant and observer instead of using condensed state estimates. [17] transformed strong infinite- and K-step opacity into a language and presented an algorithm to enforce infinite- and K-step opacity by a supervisor. [18] extended the synthesis of insertion functions from current-state opacity [23] to infinite-step and K-step opacity. [19] inserted fictitious events at the output of the system to enforce its opacity, and some works [20,21] extended the method of [19], where [20] discussed a problem of opacity enforcement under energy constraints and [21] studied supervisory control under local mean-payoff constraints. [22] verified and enforced current-state opacity based on an algebraic state space approach for partially-observed finite automata.
To ensure opacity, [24] proposed algorithms to design a controller that controls the information released to the public. [25] then extended the work of [24] and presented an algorithm for finding minimal information release policies for non-opacity. [26] assigned a reward to revealing the information of each transition and found the maximum guaranteed payoff policy for the weighted DES. [27] proposed a dynamic information release mechanism to verify current-state opacity, where information is partially released and state-dependent.
To ensure opacity, the secret is preserved by enabling or disabling events to restrict the behavior of the system. If cost functions are defined on a DES, two types of optimal supervisory control problems arise: one for event cost and the other for control cost. For example, [28] defined event costs to design a supervisor that minimizes the total cost of reaching the desired states. [29] then extended this framework to partial observation of the system. Afterwards, [30] investigated a mean-payoff supervisory control problem for a system in which each event has an associated integer. In [31,32], the costs of choosing a control input and of an event occurring were defined, and two optimal problems of minimizing the maximal discounted total cost among all possible strings generated by the system were solved using Markov decision processes.
In contrast to the supremal opaque sublanguage of the plant in [1,2,3,4,8], we want to find a 'smallest' closed controllable sublanguage of the plant with respect to which the secret is not only opaque but also 'largest'. Since the class of opaque languages is not closed under intersection [8], 'smallest' is not meant in terms of set inclusion but in terms of the minimal discounted total choosing cost. On the other hand, 'largest' is meant in terms of set inclusion: it refers to the union of all elements of the class of confused secrets. To describe this optimal problem, a non-linear optimal supervisory control model is proposed by introducing the concept of choosing cost.
The paper is organized as follows. Section 2 establishes the background on supervisory control theory, opacity and the choosing cost of DES. In Section 3, we present an optimal supervisory control problem that is modeled as a non-linear program with two constraint conditions. In Section 4, we consider two scenarios for solving the non-linear program. First, we suppose that the plant itself ensures that the secret is opaque; a simplified version of the non-linear program is presented, and, based on the structure of the secret's closure, two algorithms and three theorems are put forward. Second, we suppose that the plant cannot ensure that the secret is opaque; a generalized algorithm and theorem are proposed to solve the non-linear program. Finally, the main contributions of the work are discussed in Section 5.

2. Preliminary

2.1. Supervisory Control Theory

We consider a DES modeled by a deterministic finite transition system $G = (Q, \Sigma, \delta, q_0)$, where $Q$ is a finite set of states, $\Sigma$ is a finite set of events labeling the transitions, the partial function $\delta: Q \times \Sigma \to Q$ is the transition function and $q_0 \in Q$ is the initial state. A run of $G$ is a finite non-empty sequence $q_0 \sigma_1 q_1 \sigma_2 \cdots q_{n-1} \sigma_n q_n$, where $q_i \in Q$, $\sigma_{i+1} \in \Sigma$ and $q_{i+1} = \delta(q_i, \sigma_{i+1})$ for $i = 0, \ldots, n-1$. The trace $tr(\rho)$ of a run $\rho$ is $\sigma_1 \sigma_2 \cdots \sigma_n$. The language of $G$ is the set of traces of all runs of $G$, denoted $L(G) = \{ s \in \Sigma^* \mid \delta(q_0, s)! \}$. The event set $\Sigma$ is assumed to be partitioned into the controllable event set $\Sigma_c$ and the uncontrollable event set $\Sigma_u$, i.e., $\Sigma = \Sigma_c \,\dot\cup\, \Sigma_u$. Each subset of events containing $\Sigma_u$ is a control pattern, and the set of all control patterns is denoted by $\Gamma = \{ \gamma \mid \Sigma_u \subseteq \gamma \subseteq \Sigma \}$. A supervisor for $G$ is any map $f: L(G) \to \Gamma$. For system $G$, we denote by $\Gamma^{L(G)}$ the set of supervisors. The closed behavior of $f/G$, i.e., $G$ under the supervision of $f$, is defined to be the language $L(f/G) \subseteq L(G)$ described as follows.
  • $\epsilon \in L(f/G)$;
  • $s\sigma \in L(f/G) \iff s \in L(f/G),\ s\sigma \in L(G),\ \sigma \in f(s)$.
For a language $K \subseteq L(G)$, the notation $\bar K$ denotes the prefix(-closure) of $K$. $K$ is said to be (prefix-)closed if $\bar K = K$.
Definition 1 
([33]). Given a non-empty language K. Then, K is said to be controllable if $\bar K \Sigma_u \cap L(G) \subseteq \bar K$.
A necessary and sufficient condition for the existence of a supervisor is given as follows.
Theorem 1 
([33]). Given a non-empty language $K \subseteq L(G)$. Then, there exists a supervisor f such that $L(f/G) = K$ if and only if K is a controllable and closed language.
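To make these definitions concrete, the following Python sketch checks the controllability condition of Definition 1 and builds the closed-loop language of the particular supervisor that enables the uncontrollable events together with the events that keep the system inside $\bar K$. It is only an illustration under the assumption that the (finite) languages are represented explicitly as sets of event tuples; the helper names are not taken from the paper.

```python
# Minimal sketch, assuming finite languages represented as sets of event tuples.
# Names (prefixes, closure, is_controllable, supervised_language) are illustrative.

def prefixes(s):
    """All prefixes of the string s, including the empty string ()."""
    return {s[:i] for i in range(len(s) + 1)}

def closure(K):
    """Prefix-closure of a finite language K."""
    return set().union(*(prefixes(s) for s in K)) if K else {()}

def is_controllable(K, L, Sigma_u):
    """Definition 1: bar(K) Sigma_u intersected with L(G) is contained in bar(K)."""
    Kbar = closure(K)
    return all(s + (u,) in Kbar for s in Kbar for u in Sigma_u if s + (u,) in L)

def supervised_language(L, K, Sigma_u):
    """Closed behavior L(f/G) of the supervisor f(s) = Sigma_u union {sigma : s.sigma in bar(K)}.
    By Theorem 1, if K is closed and controllable this returns exactly K."""
    Kbar = closure(K)
    Lf = {()}
    grew = True
    while grew:
        grew = False
        for t in L:
            if t and t not in Lf and t[:-1] in Lf and (t[-1] in Sigma_u or t in Kbar):
                Lf.add(t)
                grew = True
    return Lf

# Toy usage with Sigma_u = {"u"}: bar(K) = {(), ("a",)} is not controllable,
# because the uncontrollable "u" after "a" escapes it.
L_G = {(), ("a",), ("a", "u")}
K = {(), ("a",)}
print(is_controllable(K, L_G, {"u"}))      # False
print(supervised_language(L_G, K, {"u"}))  # {(), ('a',), ('a', 'u')}
```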

2.2. Supervisory Control for Opacity

Assume that the adversary is aware of any supervisor's control policy and can see a subset $\Sigma_a$ of the events through an observation function $\theta: \Sigma^* \to \Sigma_a^*$. The adversary's observation of the system is denoted by $obs(G) = (Q_a, \Sigma_a, \delta_a, q_{0a})$, which is an observer of system G. If the observer of the system cannot unambiguously determine some secret information, we give the following definition of opacity.
Definition 2 
([1,7]). Given system G and a non-empty language K. If for any $s \in K \cap L(G)$ there exists $s' \in L(G) - K$ such that $\theta(s) = \theta(s')$, we say K is (strongly) opaque w.r.t. L(G) and $\Sigma_a$.
If the condition of Definition 2 cannot be met, we say K is not (strongly) opaque w.r.t. L(G) and $\Sigma_a$. This definition of non-opacity differs from that of [7].
Since, as shown in [8], opaque languages have the desirable property of being closed under union, the supremal controllable, closed and opaque sublanguage of the system exists [1,4]. Different ways to compute this supremal controllable, closed and opaque sublanguage are given in [1,4,8].
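As a small illustration of Definition 2 (a sketch only; the explicit finite-set representation and the function names are assumptions rather than the paper's notation), strong opacity can be checked by comparing the projections of secret and non-secret strings:

```python
def project(s, Sigma_a):
    """Natural projection theta: erase the events outside Sigma_a."""
    return tuple(e for e in s if e in Sigma_a)

def is_strongly_opaque(K, L, Sigma_a):
    """Definition 2: every secret string of K (within L) has an observationally
    identical non-secret string in L - K."""
    non_secret_obs = {project(t, Sigma_a) for t in L - K}
    return all(project(s, Sigma_a) in non_secret_obs for s in K & L)

# Toy usage with Sigma_a = {"a"}: the secret ("a", "b") is confused by ("c", "a").
L_G = {(), ("a",), ("a", "b"), ("c",), ("c", "a")}
print(is_strongly_opaque({("a", "b")}, L_G, {"a"}))  # True
```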

2.3. The Definition of Choosing Cost

In [31,32], two optimal control models of DES are presented, based on the cost of choosing a control input and the cost of an event occurring, respectively. In this paper, the choosing cost follows the definition in [31,32] of the cost of choosing a control input after some string. For a given DES G, let $c(s, \gamma)$ be the cost of choosing control input $\gamma \in \Gamma$ at string $s \in L(G)$. For any supervisor $f: L(G) \to \Gamma$, we call $c(s, f) = c(s, f(s))$ the cost of choosing $f(s)$ after s under the supervision of f. For brevity, we simply call $c(s, \gamma)$ (or $c(s, f)$) the choosing cost.

3. Optimal Supervisory Control Model on Opacity

In this section, an optimal supervisory control model is constructed to minimize the choosing cost of a controlled system that is opaque. Given a system G and a secret $K \subseteq L(G)$, as in [1], the secret K is a regular language. For secret K, we suppose there exists a set of secret states $Q_s \subseteq Q$ which recognizes K, i.e., $s \in K$ iff $\delta(q_0, s) \in Q_s$.
To quantify the cost of the information released, we adopt the cost of choosing a control input from [31,32] and define the total choosing cost as follows.
Definition 3. 
Given a closed-loop behavior L(f/G), the discounted total choosing cost of L(f/G) is defined as $V(L(f/G), f) = \sum_{s \in L(f/G)} \beta^{|s|} c(s, f)$, where f is the supervisor controlling the system G and $\beta > 0$ is a discount factor. For convenience, $V(L(f/G), f)$ is abbreviated as $V(L(f/G))$.
For system G, if there exist two supervisors $f_1$ and $f_2$ such that $L(f_1/G) \subseteq L(f_2/G)$, it is obvious by Definition 3 that $V(L(f_1/G)) \le V(L(f_2/G))$.
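For a finite closed-loop language, Definition 3 can be evaluated directly, as in the sketch below (the callables f and c, and the explicit set representation, are assumptions made only for illustration):

```python
def discounted_total_cost(Lf, f, c, beta):
    """Definition 3: V(L(f/G), f) = sum over s in L(f/G) of beta^|s| * c(s, f(s)).
    Lf is a finite set of event tuples, f maps a string to a control pattern, and
    c(s, gamma) is the choosing cost of control pattern gamma after s."""
    return sum(beta ** len(s) * c(s, f(s)) for s in Lf)

# Toy usage with c(s, gamma) = 0.1 * |gamma| and beta = 0.5:
Lf = {(), ("a",), ("a", "b")}
f = lambda s: {"a"} if s == () else ({"b"} if s == ("a",) else set())
c = lambda s, gamma: 0.1 * len(gamma)
print(discounted_total_cost(Lf, f, c, 0.5))  # = 0.1 + 0.5*0.1 + 0.25*0.0 ≈ 0.15
```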
To compute the discounted total choosing cost, we define the discounted cost of choosing a string after s under f as the following sum of choosing costs.
Definition 4. 
$V_s(t_n) = \sum_{k=0}^{n} \beta^{|s|+k} c(s t_k, f_{s t_n})$ is said to be the discounted cost of choosing $t_n = \sigma_1\sigma_2\cdots\sigma_n$ after s under f, where $t_k = \sigma_1\cdots\sigma_k$, $f_{s t_n}(s t_k) = f(s t_k) \cap \Sigma_{s t_n}(s t_k)$ and $t_0 = \epsilon$. In particular, if $s = \epsilon$, then $s t_n = t_n$ and $V_\epsilon(t_n) = \sum_{k=0}^{n} \beta^{k} c(t_k, f_{t_n})$.
If a string is divided into two pieces, the following theorem gives a formula that simplifies the computation of the discounted cost of Definition 4.
Theorem 2. 
Given $t = s t'$. Then we have that $V_\epsilon(t) = V_\epsilon(s) + V_s(t')$.
Proof. 
For $t = s t'$ with $s = \sigma_1 \sigma_2 \cdots \sigma_{|s|}$ and $t' = \sigma_{|s|+1} \sigma_{|s|+2} \cdots \sigma_{|s|+n}$, the following derivation proves the formula.
$V_\epsilon(t) = \sum_{k=0}^{|s|+n} \beta^{k} c(t_k, f_t) = c(\epsilon, f_s) + \beta c(\sigma_1, f_s) + \cdots + \beta^{|s|} c(s, f_s) + \sum_{k=0}^{n} \beta^{|s|+k} c(s t'_k, f_{s t'}) = \sum_{k=0}^{|s|} \beta^{k} c(\sigma_1\cdots\sigma_k, f_s) + \sum_{k=0}^{n} \beta^{|s|+k} c(s t'_k, f_{s t'}) = V_\epsilon(s) + V_s(t')$
   □
Theorem 2 generalizes to the following Theorem 3.
Theorem 3. 
Given that $s = s_1 s_2 \cdots s_k \in L(G)$. Then we have that $V_\epsilon(s) = V_\epsilon(s_1) + V_{s_1}(s_2) + \cdots + V_{s_1 s_2 \cdots s_{k-1}}(s_k)$.
Proof. 
The proof proceeds by induction on k.
Base case: If $k = 1$, then it holds that $V_\epsilon(s) = V_\epsilon(s_1)$.
Inductive hypothesis: Suppose that $V_\epsilon(s_1 \cdots s_{j-1} s_j) = V_\epsilon(s_1) + V_{s_1}(s_2) + \cdots + V_{s_1 s_2 \cdots s_{j-1}}(s_j)$ holds for $k = j$.
Inductive step: For $k = j+1$, we prove that $V_\epsilon(s_1 \cdots s_{j-1} s_j s_{j+1}) = V_\epsilon(s_1) + V_{s_1}(s_2) + \cdots + V_{s_1 s_2 \cdots s_{j-1}}(s_j) + V_{s_1 \cdots s_j}(s_{j+1})$.
By Definition 4, we have the following, which completes the inductive step.
$V_\epsilon(s) = V_\epsilon(s_1 \cdots s_{j-1} s_j s_{j+1}) = V_\epsilon(s_1 \cdots s_{j-1} s_j) + V_{s_1 \cdots s_{j-1} s_j}(s_{j+1})$ (Theorem 2) $= V_\epsilon(s_1) + V_{s_1}(s_2) + \cdots + V_{s_1 s_2 \cdots s_{j-1}}(s_j) + V_{s_1 \cdots s_{j-1} s_j}(s_{j+1})$ (hypothesis for $k = j$)
   □
To obtain the discounted total choosing cost, we formulate Algorithm 1, which computes $V(L(f/G))$ through the computation of the quantities $V_s(s')$.
[Algorithm 1]
As shown in Algorithm 1, the closed-loop system f/G can be transformed into a language-equivalent tree automaton. In the tree automaton, the string from the root to a node is the longest common prefix of the strings from the root to the leaves reachable via that node.
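Algorithm 1 itself is provided as an image in the original; the sketch below only mirrors the narrative above. It builds the tree (a trie over the prefix-closed finite language L(f/G)) and accumulates the discounted cost of the events chosen at every node, under the additive-cost convention $c(s, \gamma) = \sum_{\sigma \in \gamma} c(s, \sigma)$ that the paper uses later in Example 3; the per-event cost function and the helper names are illustrative assumptions.

```python
# Sketch of the tree decomposition described above (not the paper's Algorithm 1).

def build_trie(Lf):
    """Map each string s of the prefix-closed finite language Lf to the events enabled at s."""
    children = {s: set() for s in Lf}
    for s in Lf:
        if s:                       # register s as a child of its parent s[:-1]
            children[s[:-1]].add(s[-1])
    return children

def total_cost_via_tree(Lf, c_event, beta):
    """Accumulate beta^|s| * c(s, sigma) over every node s and every event sigma enabled at s,
    a branch-wise accumulation in the spirit of the decomposition used in Example 1 below."""
    children = build_trie(Lf)
    return sum(beta ** len(s) * c_event(s, sigma)
               for s, events in children.items() for sigma in events)
```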
To show the computational process of Algorithm 1, an example of computing the discounted total choosing cost is given.
Example 1. 
Suppose that $L(f/G) = \{\bar{s_1}, \bar{s_2}\}$ and t is the longest common prefix of $s_1$ and $s_2$. To simplify the computation in Definition 3, we write $t = \sigma_1\cdots\sigma_l$, $t_1 = \sigma_{i_1}\cdots\sigma_{i_m}$ and $t_2 = \sigma_{j_1}\cdots\sigma_{j_n}$ such that $s_1 = t t_1$ and $s_2 = t t_2$. By Definition 3, we have the following equation.
$V(\{\bar{s_1}, \bar{s_2}\}) = \sum_{k=0}^{l} \beta^k c(\sigma_1\cdots\sigma_k, f_{\sigma_1\cdots\sigma_l}) + \beta^l \sum_{k=0}^{m} \beta^k c(\sigma_1\cdots\sigma_l\sigma_{i_1}\cdots\sigma_{i_k}, f_{\sigma_1\cdots\sigma_l\sigma_{i_1}\cdots\sigma_{i_m}}) + \beta^l \sum_{k=0}^{n} \beta^k c(\sigma_1\cdots\sigma_l\sigma_{j_1}\cdots\sigma_{j_k}, f_{\sigma_1\cdots\sigma_l\sigma_{j_1}\cdots\sigma_{j_n}}) = V_\epsilon(t) + V_t(t_1) + V_t(t_2) = V_\epsilon(t t_1) + V_t(t_2) = V_\epsilon(s_1) + V_t(t_2) = V_\epsilon(s_2) + V_t(t_1)$
According to Algorithm 1, we transform L ( f / G ) in Figure 1 into a tree automaton shown in Figure 2.
By Figure 2 and Algorithm 1, we have the formula:
$V(\{\bar{s_1}, \bar{s_2}\}) = V_\epsilon(t) + V_t(t_1) + V_t(t_2) = V_\epsilon(t t_1) + V_t(t_2) = V_\epsilon(s_1) + V_t(t_2) = V_\epsilon(s_2) + V_t(t_1)$
If there exists a language K such that $\bar K \subseteq L(f/G)$, the discounted total choosing cost of $\bar K$ can be denoted by $V(\bar K)$ and obtained as shown in Algorithm 1. If we define $V(L(f/G) - \bar K)$ as $V(L(f/G)) - V(\bar K)$, this discounted cost can also be obtained by Definition 4. Obviously, $V(L(f/G) - \bar K)$ is the choosing cost outside of $\bar K$ in L(f/G). Next, we continue the above Example 1.
Example 2. 
In Figure 1, suppose there exists a state subset $Q_K = \{4, 6\}$ such that $K = t(\sigma_{i_1} + \sigma_{j_1})$. By Algorithm 1, we get a tree automaton based on $\bar K$, shown in Figure 3. So, it holds that $V(\bar K) = V_\epsilon(t) + V_t(\sigma_{i_1}) + V_t(\sigma_{j_1})$. According to the formulas for $V(L(f/G))$ and $V(\bar K)$, we have the following equation.
$V(L(f/G) - \bar K) = [V_\epsilon(t) + V_t(t_1) + V_t(t_2)] - [V_\epsilon(t) + V_t(\sigma_{i_1}) + V_t(\sigma_{j_1})] = V_t(t_1) - V_t(\sigma_{i_1}) + V_t(t_2) - V_t(\sigma_{j_1}) = V_{t\sigma_{i_1}}(\sigma_{i_2}\cdots\sigma_{i_m}) + V_{t\sigma_{j_1}}(\sigma_{j_2}\cdots\sigma_{j_n})$
For the system and secret, we propose an optimal problem of synthesizing a supervisor such that the discounted total choosing cost of the controlled system is minimal.
Optimal opacity-enforcing problem: Given system G and secret $K \subseteq L(G)$, find a supervisor f such that L(f/G) satisfies the following conditions.
1.
K is opaque with respect to L ( f / G ) and Σ a ;
2.
Secret K permitted by supervisor f is "the largest it can be";
3.
For the closed-loop behavior L(f/G), the discounted total choosing cost $V(L(f/G))$ is minimal.
For condition 2 of the above problem, if we denote by L(g/G) the supremal controllable and closed sublanguage of L(G) [4], the largest secret K permitted by a supervisor f is $L(g/G) \cap K$.
Based on the above optimal problem, a non-linear optimal model is formulated as follows.
$\min\, V(L(f/G))$  s.t.  $\theta(K \cap L(f/G)) \subseteq \theta(L(f/G) - K)$,  $K \cap L(f/G) = K \cap L(g/G)$,  $f \in \Gamma^{L(G)}$.  (1)
In the above optimal model (1), the objective function means that the discounted total choosing cost of the supervised system is minimal. The first constraint condition means that K is opaque w.r.t. the controlled system, and the second implies that the controlled system retains as much of the secret as possible without disclosing it.

4. Solution of Optimal Model on Choosing Cost

In this section, we first make the following assumption.
Assumption *
If $s \in L(G)$ and $\gamma = \Sigma_u$, then we have $c(s, \gamma) = 0$.
Assumption * states that the choosing cost of any uncontrollable event is 0.
Under the assumption, we consider the following two scenarios to solve the optimal model (1) in the following subsections.
Scenario 1 
Secret K is opaque w.r.t. L ( G ) and Σ a .
Scenario 2 
Secret K is not opaque w.r.t. L ( G ) and Σ a .

4.1. Scenario 1: Secret K is opaque w.r.t. L ( G ) and Σ a .

Given the language L(G) and a regular secret $K \subseteq L(G)$. To maximize the secret under the control of supervisor f in scenario 1, we need $K \subseteq L(f/G)$, which implies that $\bar K \subseteq L(f/G)$. So, in model (1), the second constraint condition can be reduced to $\bar K \subseteq L(f/G)$, and the optimal model (1) can be transformed into the following model (2).
$\min\, V(L(f/G))$  s.t.  $\theta(K \cap L(f/G)) \subseteq \theta(L(f/G) - K)$,  $\bar K \subseteq L(f/G)$,  $f \in \Gamma^{L(G)}$.  (2)
To solve optimal model (2), we consider the following three cases.

4.1.1. Case 1: K ¯ = L ( G ) .

Obviously, condition 2 of model (2) requires $L(G) \subseteq L(f/G)$. To ensure the feasible region is not empty, we can take a supervisor f such that $L(G) = L(f/G)$. So, we have the following theorem.
Theorem 4. 
Given system G and secret $K \subseteq L(G)$. In case 1 of scenario 1, $L(G) = L(f/G)$ is an optimal solution of optimal model (2).
Proof. 
From the above analysis, it is obvious that $L(G) = L(f/G)$ is the unique element of the feasible set. So, $V(L(f/G))$ is minimal.    □

4.1.2. Case 2: $\bar K \subsetneq L(G)$ and for every $s \in K$ there exists $s' \in \bar K - K$ such that $\theta(s) = \theta(s')$.

In case 2, the condition means that every secret string cannot be distinguished from some non-secret string in the closure $\bar K$.
For secret K and its closure $\bar K$, we have $\theta(K) \subseteq \theta(\bar K - K) \subseteq \theta(L(G) - K)$. Hence, if there exists a closed-loop system f/G such that $\bar K \subseteq L(f/G)$, it obviously ensures the opacity of secret K. So, the feasible region of model (2) is not empty.
To find supervisor f, we consider whether $\bar K$ is controllable w.r.t. L(G).
  • If $\bar K$ is controllable w.r.t. L(G), there exists a supervisor f such that $L(f/G) = \bar K$. The closed-loop behavior L(f/G) satisfies the two constraint conditions of model (2). So, the feasible region of model (2) is not empty.
  • If $\bar K$ is not controllable w.r.t. L(G), we can find a controllable and closed superlanguage of $\bar K$. Such a superlanguage not only ensures the opacity of K (by Theorem A1 in the Appendix), but also maximizes the retained secret K. So, the feasible region of model (2) is not empty.
According to the above analysis, the following Algorithm 2 returns a closed-loop behavior that solves model (2).
[Algorithm 2]
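Algorithm 2 is given as an image; the sketch below only reflects the case analysis above. When $\bar K$ is not controllable, extending it with uncontrollable continuations inside L(G) yields the infimal prefix-closed controllable superlanguage $\bar K \Sigma_u^* \cap L(G)$, which, under Assumption *, has the same discounted total choosing cost as $\bar K$ (see the proof of Theorem 5 below). The finite-set representation and the function name are assumptions.

```python
def infimal_controllable_superlanguage(Kbar, L, Sigma_u):
    """Grow bar(K) with uncontrollable continuations inside L(G); for finite languages of
    event tuples this computes bar(K) Sigma_u* intersected with L(G)."""
    M = set(Kbar)
    frontier = set(Kbar)
    while frontier:
        new = {s + (u,) for s in frontier for u in Sigma_u
               if s + (u,) in L and s + (u,) not in M}
        M |= new
        frontier = new
    return M

# If the result equals bar(K), then bar(K) was already controllable and L(f/G) = bar(K);
# otherwise L(f/G) is the computed superlanguage M.
```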
The following Theorem 5 proves that Algorithm 2 produces an optimal solution of model (2).
Theorem 5. 
Given system G and secret $K \subseteq L(G)$. In case 2 of scenario 1, the closed-loop behavior L(f/G) produced by Algorithm 2 is an optimal solution of model (2).
Proof. 
Firstly, we prove that the closed-loop behavior L(f/G) produced by Algorithm 2 is a feasible solution of optimal model (2).
At Line 1, M is a closed and controllable sublanguage of L(G). At Line 5, it is obvious that $M \cap L(G)$ is the infimal closed and controllable superlanguage of $\bar K$. So, Lines 3 and 6 produce a supervisor f such that $L(f/G) = M$.
Therefore, we have $\bar K \subseteq L(f/G)$, which means constraint condition 2 of model (2) holds.
By Theorem A1 of the Appendix, we conclude that L(f/G) ensures the opacity of K under case 2 of scenario 1, which implies that constraint condition 1 of model (2) holds.
From the above points, the L(f/G) produced by Algorithm 2 is a feasible solution of model (2).
Next, we prove by contradiction that the discounted total choosing cost of the L(f/G) produced by Algorithm 2 is minimal. Assume that there exists a feasible solution $L(f'/G) \neq L(f/G)$ of model (2) such that $V(L(f'/G)) < V(L(f/G))$.
According to constraint condition 2, we have $\bar K \subseteq L(f'/G)$. We then consider the controllability of $\bar K$.
If $\bar K$ is controllable, it holds that $L(f/G) = \bar K$ by Lines 2-3, which means $L(f/G) \subseteq L(f'/G)$. So, we have $V(L(f/G)) \le V(L(f'/G))$, which contradicts $V(L(f'/G)) < V(L(f/G))$.
If $\bar K$ is not controllable, it holds that $\bar K \subseteq L(f/G) = M$ by Lines 5-6. For any $s \in M - \bar K$, there exists a prefix t of s such that $t \in \bar K$ and $s \in t\Sigma_u^*$. By Theorem 3 and Assumption *, we have $V_\epsilon(s) = V_\epsilon(t) + V_t(u) = V_\epsilon(t)$, where $s = tu$ with $u \in \Sigma_u^*$. So, it holds that $V(\bar K) = V(M) = V(L(f/G))$. By $\bar K \subseteq L(f'/G)$, it is true that $V(L(f/G)) \le V(L(f'/G))$, which contradicts $V(L(f'/G)) < V(L(f/G))$.
In summary, it is true that $V(L(f/G)) \le V(L(f'/G))$, which means that the discounted total choosing cost of the L(f/G) produced by Algorithm 2 is minimal.
   □
According to the proof of Theorem 5, we have the following corollaries.
Corollary 1. 
Given a language L. If a new language L′ is obtained by concatenating strings of L with uncontrollable strings (i.e., strings in $\Sigma_u^*$), then the discounted total choosing costs of L and L′ are the same, that is, $V(L) = V(L')$.
Corollary 2. 
Given system G and secret $K \subseteq L(G)$. In case 2 of scenario 1, $V(\bar K) = V(M) = V(L(f/G))$ holds, where M is the language constructed in Algorithm 2 and L(f/G) is the closed-loop behavior it produces.
Example 3. 
Given the finite transition system $G = (Q, \Sigma, \delta, q_0)$ shown in Figure 4, where $\Sigma_u = \{f, t\}$. Obviously, Assumption * holds for system G. Suppose that the secret is $K = \{a, ab, aebgt\}$, which is recognized by $Q_s = \{3, 6, 16\}$. To show the choosing cost $c(s, \sigma)$, a label $\sigma | n$ on a transition from state p to state q means that the transition is labeled by the event σ and that n denotes the choosing cost $c(s, \sigma)$. For a control input $\gamma \in \Gamma$, the cost of choosing γ is defined as $c(s, \gamma) = \sum_{\sigma \in \gamma} c(s, \sigma)$.
Assume that the adversary has complete knowledge of the supervisor's control policy and can observe the subset of events $\Sigma_a = \{a, b, d, f, g\}$. For secret K, it can be verified that K is opaque w.r.t. L(G) and $\Sigma_a$ (scenario 1). To reduce the choosing cost, a closed-loop system L(f/G) can be obtained by Algorithm 2, where $L(f/G) = \{\epsilon, t, a, ab, abt, abtf, ae, aeb, aebg, aebgt\}$ is shown in Figure 5.
By Definition 4 and Algorithm 1, $V(L(f/G)) = V_\epsilon(t) + V_\epsilon(a) + V_a(ebgt) + V_a(btf) = 1.726$ is minimal.

4.1.3. Case 3: $\bar K \subsetneq L(G)$ and there exists $s \in K$ such that $\theta(s) \neq \theta(s')$ for every $s' \in \bar K - K$

In case 3, the condition means that there exist secret strings in K for which all the confusing non-secret strings lie outside of $\bar K$.
For L(G), let $\underline{K} = \{ s \in K \mid \forall s' \in \bar K - K,\ \theta(s) \neq \theta(s') \}$ be the set of secret strings which cannot be confused by any string in $\bar K - K$, and let $L = \{ s' \in L(G) - K \mid \theta(s') \in \theta(\underline{K}) \}$ be the set of strings which can confuse the secrets of $\underline{K}$. For language L, we call $[s] = \{ s' \in L \mid \theta(s') = \theta(s) \}$ the coset (or equivalence class) of s w.r.t. L and θ, where each s′ is said to be an equivalent string of s. And $L/[\cdot]$ is defined as the quotient set of L w.r.t. the cosets $[\cdot]$. For a deterministic finite transition system, the number of strings in L is finite and the length of each string of L is finite as well. Obviously, the cosets $[\cdot]$ and the quotient set $L/[\cdot]$ are also finite.
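For finite languages, the sets $\underline{K}$, L and the quotient set $L/[\cdot]$ just defined can be computed directly, as in the following sketch (the explicit-set representation, the reading of L as the set of non-secret confusers consistent with Example 4 below, and the function names are assumptions):

```python
def project(s, Sigma_a):
    """Natural projection theta onto the observable events Sigma_a."""
    return tuple(e for e in s if e in Sigma_a)

def unconfused_secrets(K, Kbar, Sigma_a):
    """underline(K): secrets that no non-secret string of bar(K) - K can confuse."""
    confusing_obs = {project(t, Sigma_a) for t in Kbar - K}
    return {s for s in K if project(s, Sigma_a) not in confusing_obs}

def confusers_and_quotient(K_under, L_G, K, Sigma_a):
    """L: non-secret strings of L(G) observationally equal to some secret of underline(K),
    together with the quotient set L/[.] grouping them by their projections."""
    secret_obs = {project(s, Sigma_a) for s in K_under}
    L = {t for t in L_G - K if project(t, Sigma_a) in secret_obs}
    quotient = {}
    for t in L:
        quotient.setdefault(project(t, Sigma_a), set()).add(t)
    return L, list(quotient.values())
```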
To solve model (2), Algorithm 3 below is proposed; it calls function 1 (Algorithm 4) and function 2 (Algorithm 5).
[Algorithm 3]
[Algorithm 4]
[Algorithm 5]
In Line 13 of Algorithm 3, function 1 (Algorithm 4) shows how to compute the choosing cost outside of the closure of secret K. For each coset of the quotient set, take any string of the coset and find its longest prefix lying in the closure of the secret; then compute the discounted cost of choosing the remaining string after that prefix. The specific process is shown in Algorithm 4.
In Line 14 of Algorithm 3, function 2 constructs a weighted directed diagram with multiple stages and produces a path with minimal discounted total choosing cost. In the diagram, the elements of the set H are regarded as the stages, and the elements of each $H_i$ are the nodes of that stage. Based on dynamic programming, the optimal weight between nodes of adjacent stages is obtained by function 3 (Algorithm 6), which yields the weighted directed diagram. For every node of the diagram, an ordered pair is obtained by calling function 3 (Algorithm 6): the first element is the shortest path (with minimal discounted total choosing cost) from the starting node to the current node, and the second is the discounted total choosing cost of that path. When the current node is the ending node, the path with minimal discounted total choosing cost is obtained. The specific processes are shown in Algorithm 5 and Algorithm 6.
[Algorithm 6]
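Algorithms 5 and 6 are provided as images; the sketch below only captures the stage-by-stage dynamic-programming recursion described above. The stage sets H_1, ..., H_J, the virtual start and end nodes, and the optimized edge-weight function a(s, s') between adjacent stages are taken as inputs (how Algorithm 6 optimizes those weights is not reproduced here); all names and the toy data are illustrative assumptions.

```python
def shortest_confusing_path(stages, edge_weight, start="t_s", end="t_t"):
    """Pick one node per stage so that the total weight from start to end is minimal.
    Returns (Label_end, V_min_end): the chosen path and its total weight."""
    label = {start: [start]}
    vmin = {start: 0.0}
    previous = [start]
    for stage_nodes in list(stages) + [[end]]:
        for node in stage_nodes:
            # Bellman-style recursion between adjacent stages:
            best = min(previous, key=lambda p: vmin[p] + edge_weight(p, node))
            vmin[node] = vmin[best] + edge_weight(best, node)
            label[node] = label[best] + [node]
        previous = list(stage_nodes)
    return label[end], vmin[end]

# Toy usage with two stages and hypothetical weights (not the paper's data);
# weights into the virtual end node are 0.
stages = [["x1", "x2"], ["y1", "y2"]]
w = {("t_s", "x1"): 5.0, ("t_s", "x2"): 0.5,
     ("x1", "y1"): 0.0, ("x1", "y2"): 1.0,
     ("x2", "y1"): 0.2, ("x2", "y2"): 0.1,
     ("y1", "t_t"): 0.0, ("y2", "t_t"): 0.0}
print(shortest_confusing_path(stages, lambda p, q: w[(p, q)]))
# (['t_s', 'x2', 'y2', 't_t'], 0.6)
```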
According to the calculation process of Algorithm 3, we have the following theorem about the solution of model (2).
Theorem 6. 
Given system G and secret $K \subseteq L(G)$. In case 3 of scenario 1, the closed-loop behavior L(f/G) produced by Algorithm 3 is an optimal solution of model (2).
Proof. 
We firstly show that closed-loop behavior L ( f / G ) produced in Algorithm 3 is a feasible solution of optimal model (2).
1.
To prove the opacity (the first constraint condition).
As shown in case 3, the secrets of $K - \underline{K}$ can be confused by the non-secret strings of $\bar K$. For $\underline{K}$, all the non-secret strings in L(G) which cannot be distinguished from the secrets in $\underline{K}$ are in $\cup_j H_j$ of Line 12. At Lines 14 and 15, one string $s_i$ is taken from each $H_i$, where $i \in \{1, 2, \ldots, j-1\}$. At Lines 15 and 16, we have $\bar K \subseteq L(f/G)$ and $\{s_1, s_2, \ldots, s_{j-1}\} \subseteq L(f/G)$. So, the closed-loop behavior L(f/G) produced by Algorithm 3 ensures the opacity of secret K.
2.
To prove that the secret retained in the closed-loop system is maximal (the second constraint condition).
According to Lines 15 and 16, it holds that $\bar K \subseteq L(f/G)$. So, the second constraint condition holds.
To sum up, the closed-loop behavior L ( f / G ) obtained in Algorithm 3 is a feasible solution of optimal model (2).
Secondly, we show that the discounted total choosing cost of the closed-loop behavior L(f/G) produced by Algorithm 3 is minimal.
Since it holds that $\bar K \subseteq L(f/G)$, the discounted total choosing cost of L(f/G) can be computed as follows.
$V(L(f/G)) = V(\bar K) + V(L(f/G) - \bar K)$.  (3)
Obviously, $V(\bar K)$ is finite. To minimize the discounted total choosing cost of L(f/G), by formula (3) we need to show that $V(L(f/G) - \bar K)$ is minimal. As shown at Line 1 of Algorithm 3, language L contains all the non-secret strings in L(G) which cannot be distinguished from the secrets of $\underline{K}$. So, to make $V(L(f/G) - \bar K)$ minimal, all the strings s in $L(f/G) - \bar K$ must come from L, and hence $L(f/G) - \bar K \subseteq L$ holds. According to Lines 3-11 of Algorithm 3, we have $L = \cup_j H_j$, and all the strings in $H_j$ can confuse one secret of $\underline{K}$ and its equivalent secrets. Obviously, choosing only one string from each set $H_j$ of H is a necessary condition for minimizing $V(L(f/G) - \bar K)$.
At Line 13 of Algorithm 3 (i.e., function 1 of Algorithm 4), all the strings $s = s_i t$ in L are traversed and the choosing cost $V_{s_i}(t)$ after $s_i \in \bar K$ is obtained, where $t = \sigma_{i+1}\cdots\sigma_n$ and $s_i\sigma_{i+1} \notin \bar K$.
At Line 14 of Algorithm 3, a diagram with multiple stages is constructed over $\cup_j H_j \cup \{t_s\} \cup \{t_t\}$, where the initial node $t_s$ and the ending node $t_t$ are virtual and $H_j$ is the set of nodes of the j-th stage. To pick exactly one string from each $H_j$, we look for a path from $t_s$ to $t_t$. Algorithm 6 is then used to optimize the weight $a_{s, s'}$ of a transition from node s to node s′ between adjacent stages. With the optimal weights, the discounted total choosing cost of the nodes (i.e., strings) of a path equals the total weight of the path (Line 12 of Algorithm 5). At Lines 3-19 of Algorithm 5, the shortest path $Label_{s'}$ and its minimal discounted total choosing cost $V_{\min}(s')$ of the j-th stage are obtained from the $Label_s$ and $V_{\min}(s)$ of the (j-1)-th stage by dynamic programming. As discussed for Line 14 of Algorithm 3 (i.e., function 2 in Algorithm 5), the first element $Label_{s'}$ of the ordered pair $(Label_{s'}, V_{\min}(s'))$ is the shortest path (i.e., the set of strings) with minimal discounted total choosing cost from the starting node $t_s$ to the current node s′, and the second element $V_{\min}(s')$ is the discounted total choosing cost of that path. When the current node is $t_t$ (Line 20 of Algorithm 5), $Label_{t_t}$ is the shortest path from the initial node to the ending node and $V_{\min}(t_t)$ is its minimal discounted total choosing cost (Lines 21-25 of Algorithm 5). So, $Label_{t_t}$ is the subset of L whose discounted total choosing cost is minimal and whose strings can confuse all the secrets of $\underline{K}$.
At Lines 16 and 17 of Algorithm 3, the closed-loop behavior L(f/G) is guaranteed to be controllable and closed by Corollary 2, and its discounted total choosing cost is minimal as shown in the above analysis.
All in all, the closed-loop behavior L ( f / G ) produced in Algorithm 3 is an optimal solution of model (2).    □
Example 4. 
Given the finite transition system $G = (Q, \Sigma, \delta, q_0)$ and secret $K = \{a, ab, aebg, aebgt\}$ shown in Figure 6, where $\Sigma_u = \{f, t\}$ and K is recognized by $Q_s = \{3, 6, 13, 16\}$. Suppose that the adversary has complete knowledge of the supervisor's control policy and observes the set of events $\Sigma_a = \{a, d, f, g, t\}$. It can be verified that K is opaque w.r.t. L(G) and $\Sigma_a$, but K is not opaque w.r.t. $\bar K$ and $\Sigma_a$; that is, the secrets $aebg$ and $aebgt$ cannot be confused by any non-secret string of $\bar K$. Obviously, case 3 of scenario 1 applies and Assumption * holds. Next, we construct a closed-loop behavior L(f/G) by Algorithm 3.
For system G and secret K, we get the language $\underline{K} = \{aebg, aebgt\}$, whose strings cannot be confused by any string of $\bar K$. Since K is opaque w.r.t. L(G), we can find the sublanguage $L = \{aebgte, aebgbt, aebgb, eabgt, eabg, eabeg, eabegt\}$, whose strings cannot be distinguished from the secrets in $\underline{K}$. For language L, the computation proceeds as follows.
Take $aebg \in \underline{K}$; then $\theta(aebg) = ag$ and $H_1 = [aebg] = \{eabg, eabeg, aebgb\}$.
Take $aebgt \in \underline{K}$; then $\theta(aebgt) = agt$ and $H_2 = [aebgt] = \{eabgt, eabegt, aebgte, aebgbt\}$.
So, the quotient set $L/[\cdot] = \{\{eabg, eabeg, aebgb\}, \{eabgt, eabegt, aebgte, aebgbt\}\}$ is a partition of L.
For the coset $[aebg]$, we compute, in the first stage, the choosing cost of the suffix of each non-secret string lying outside of $\bar K$ (function 1 of Algorithm 4).
If $s = eabg$, we have $\epsilon \in \bar K$ (its longest prefix in $\bar K$) and $V_\epsilon(eabg) = 5.126$.
If $s = eabeg$, we have $\epsilon \in \bar K$ and $V_\epsilon(eabeg) = 5.1256$.
If $s = aebgb$, we have $aebg \in \bar K$ and $V_{aebg}(b) = 0.0002$.
For the coset $[aebgt]$, we similarly obtain the following in the second stage.
If $s = eabgt$, we have $\epsilon \in \bar K$ and $V_\epsilon(eabgt) = 5.126$.
If $s = eabegt$, we have $\epsilon \in \bar K$ and $V_\epsilon(eabegt) = 5.1256$.
If $s = aebgte$, we have $aebgt \in \bar K$ and $V_{aebgt}(e) = 0.00005$.
If $s = aebgbt$, we have $aebg \in \bar K$ and $V_{aebg}(bt) = 0.0002$.
Based on $H_1$, $H_2$ and the choosing costs outside of $\bar K$ above, the weighted directed diagram shown in Figure 7 is constructed using Algorithm 5, which calls Algorithm 6. In the diagram, every node (denoted by ⊙) is shown as a fraction: its numerator is a non-secret string $s = s_i\sigma_{i+1}\cdots\sigma_{i+k}$ in $\cup_{j=1}^{2} H_j$, and its denominator is $V_{s_i}(t)$, where $s_i \in \bar K$ and $s_i\sigma_{i+1} \notin \bar K$.
Some of the weights between nodes of adjacent stages, obtained by Definition 4, are as follows:
$V_{eabg}(t) = 0$, $V_{eab}(egt) = 0.0056$, $V_{eabeg}(t) = 0$, $V_{eab}(gt) = 0.006$, $V_{aebgb}(t) = 0$, $V_{aebg}(te) = 0.00005$.
By Algorithm 5, the label $Label_s$ and minimal choosing cost $V_{\min}(s)$ of a path from the initial node $t_s$ to the current node s are computed in Table 1.
In Table 1, we have $Label_{t_t} = \{t_s, aebgb, aebgbt, t_t\}$ for the ending node $t_t$. From Line 16 of Algorithm 3, the shortest path is $t_s \to aebgb \to aebgbt \to t_t$. So, it holds that $\min V(L(f/G) - \bar K) = 0.0002$. Since $V(\bar K) = 1.726$ (by Algorithm 1) is finite, $V(L(f/G)) = 1.726 + 0.0002 = 1.7262$. So, in Line 17, we have $L(f/G) = \{\bar{t}, \overline{abtf}, \overline{aebgt}, \overline{aebgbt}\}$, shown in Figure 8.
In Figure 8, it is verified that V ( L ( f / G ) ) = 1.7262 by Algorithm 1.

4.2. Scenario 2: secret K is not opaque w.r.t. L ( G ) and Σ a .

For system G and secret $K \subseteq L(G)$, if K is not opaque w.r.t. L(G) and $\Sigma_a$, we need to design a supervisor to prevent the secret from being disclosed. To obtain the supervisor, we can use the methods of [1,4] to obtain the maximally permissive sublanguage of L(G) that ensures the opacity of K. Then, scenario 1 is fulfilled. Next, we propose Algorithm 7 to solve model (1).
[Algorithm 7]
In Algorithm 7, we first construct a maximally permissive supervisor g to enforce the opacity of K. It is then easily verified that the closed-loop behavior L(g/G) and the restricted secret $K \cap L(g/G)$ satisfy scenario 1. As shown in Theorems 4, 5 and 6, Algorithm 7 produces an optimal solution of model (1).
Theorem 7. 
Given system G and secret $K \subseteq L(G)$. In scenario 2, the closed-loop behavior L(f/G) obtained by Algorithm 7 is an optimal solution of model (1).
Proof. 
Firstly, we prove that $L(f'/G')$, the closed-loop behavior obtained at Lines 6-14 for the plant $G' = g/G$ and the restricted secret $K' = K \cap L(g/G)$, is a controllable and closed sublanguage of L(G). As shown at Lines 7, 10 and 12, $L(f'/G')$ is a closed sublanguage of L(G). Next, we prove that $L(f'/G')$ is controllable w.r.t. L(G).
$\forall s \in L(f'/G'), \sigma \in \Sigma_u, s\sigma \in L(G)$ $\Rightarrow s \in L(g/G), \sigma \in \Sigma_u, s\sigma \in L(G)$ (since $L(f'/G') \subseteq L(g/G)$) $\Rightarrow s\sigma \in L(g/G)$ ($L(g/G)$ is controllable w.r.t. L(G)) $\Rightarrow s\sigma \in L(f'/G')$ ($L(f'/G')$ is controllable w.r.t. $L(g/G)$)
So, $L(f'/G')$ is a controllable and closed sublanguage of L(G). At Line 15, there exists a supervisor f such that $L(f/G) = L(f'/G')$.
Secondly, we show that the closed-loop behavior L(f/G) produced by Algorithm 7 is a feasible solution of model (1).
1.
To show the opacity of L ( f / G ) .
At Lines 3-5, it is obvious that $K'$ is opaque w.r.t. $L(G')$ and $\Sigma_a$. At Lines 6-14, by Theorems 4, 5 and 6, $K'$ is opaque w.r.t. $L(f'/G')$ and $\Sigma_a$, which implies that $\theta(K' \cap L(f'/G')) \subseteq \theta(L(f'/G') - K')$. According to Lines 4, 5 and 15, we have $\theta(K \cap L(f/G)) \subseteq \theta(L(f/G) - K \cap L(g/G))$. Since $L(f/G) \subseteq L(g/G)$, we have $\theta(L(f/G) - K \cap L(g/G)) \subseteq \theta(L(f/G) - K \cap L(f/G))$. So, $\theta(K \cap L(f/G)) \subseteq \theta(L(f/G) - K \cap L(f/G))$ holds, which means $\theta(K \cap L(f/G)) \subseteq \theta(L(f/G) - K)$ is true. Therefore, K is opaque w.r.t. L(f/G) and $\Sigma_a$.
2.
To show that the closed-loop behavior L ( f / G ) can preserve the maximal secret information.
For system $G'$ and secret $K'$, at Lines 4-14, it is obvious that $L(f'/G')$ is a feasible solution of model (2), which implies that $\overline{K'} \subseteq L(f'/G')$. Since $K' = K \cap L(g/G)$ at Line 5 and $L(f/G) = L(f'/G')$ at Line 15, it holds that $K \cap L(g/G) \subseteq L(f/G)$, which implies that $K \cap L(g/G) \subseteq K \cap L(f/G)$. Since $L(f'/G') \subseteq L(G')$ holds, it is true that $L(f/G) \subseteq L(g/G)$. So, we have $K \cap L(f/G) \subseteq K \cap L(g/G)$. Therefore, we have $K \cap L(f/G) = K \cap L(g/G)$.
To conclude, the closed-loop behavior L ( f / G ) produced by Algorithm 7 is a feasible solution of model (1).
Finally, we show that the discounted total choosing cost of the L(f/G) produced by Algorithm 7 is minimal for model (1).
Assume that the closed-loop behavior L(f/G) produced by Algorithm 7 is not an optimal solution of model (1). Then there exists a supervisor $f_1$ such that $L(f_1/G)$ is a feasible solution of model (1) and $V(L(f_1/G)) < V(L(f/G))$ holds. Since $L(f_1/G)$ satisfies the two constraint conditions of model (1), $K \cap L(f_1/G)$ is opaque w.r.t. $L(f_1/G)$ and $\Sigma_a$, and $K \cap L(f_1/G) = K \cap L(g/G)$ holds. As shown at Line 5, we have $K' = K \cap L(f_1/G)$. Taking $G_1 = f_1/G$, it holds that $\overline{K'} \subseteq L(G_1)$. Based on the assumption about $L(f_1/G)$ and $L(g/G)$, we have $L(f_1/G) \subseteq L(g/G)$. Then, we consider the following two cases.
Case 1 
If $L(f_1/G) = L(g/G)$, we discuss the relation between $\overline{K'}$ and $L(G_1)$.
1.1 
If $\overline{K'} = L(G_1)$, it holds that $L(f_1/G) = L(f/G)$ by Lines 6-7, which contradicts the assumption that $V(L(f_1/G)) < V(L(f/G))$.
1.2 
If $\overline{K'} \subsetneq L(G_1)$, it holds that $V(L(f/G)) \le V(L(f_1/G))$ by Theorems 5 and 6 at Lines 9-13, which contradicts the assumption that $V(L(f_1/G)) < V(L(f/G))$.
Case 2 
If $L(f_1/G) \subsetneq L(g/G)$, then any $s \in L(g/G) - L(f_1/G)$ satisfies $s \notin \overline{K'}$ because $\overline{K'} \subseteq L(G_1) = L(f_1/G)$; that is, such s lies outside of $\overline{K'}$. We then discuss the relation between $\overline{K'}$ and $L(G_1)$ again.
2.1 
If $\overline{K'} = L(G_1)$, then for any $s \in K'$ there exists $s' \in \overline{K'} - K'$ such that $\theta(s) = \theta(s')$, which means that every secret string in $K'$ can be confused by a non-secret string in $\overline{K'}$. By Corollary 2, it holds that $V(L(f_1/G)) = V(\overline{K'})$. At Lines 9-10, it is true that $V(L(f/G)) = V(\overline{K'})$. So, it holds that $V(L(f/G)) = V(L(f_1/G))$, which contradicts the assumption that $V(L(f_1/G)) < V(L(f/G))$.
2.2 
If $\overline{K'} \subsetneq L(G_1)$, we discuss the following two sub-cases.
2.2.1 
If for any $s \in K'$ there exists $s' \in \overline{K'} - K'$ such that $\theta(s) = \theta(s')$, it is obvious that $V(L(f_1/G)) = V(\overline{K'})$ by Corollary 2. At Lines 9, 10 and 15 of Algorithm 7, it holds that $V(L(f/G)) = V(\overline{K'})$. So, it is true that $V(L(f/G)) = V(L(f_1/G))$, which contradicts the assumption that $V(L(f_1/G)) < V(L(f/G))$.
2.2.2 
If there exists $s \in K'$ such that $\theta(s) \neq \theta(s')$ for any $s' \in \overline{K'} - K'$, we have the following formulas.
$V(L(f_1/G)) = V(\overline{K'}, f_1) + V(L(f_1/G) - \overline{K'})$  (4)
$V(L(f/G)) = V(\overline{K'}, f) + V(L(f/G) - \overline{K'})$.  (5)
Owing to the definition of a feasible solution, we have $\overline{K'} \subseteq L(f_1/G)$ and $\overline{K'} \subseteq L(f/G)$. So, it holds that $V(\overline{K'}, f_1) = V(\overline{K'}, f)$. For the remaining parts of formulas (4) and (5), we construct two weighted directed diagrams $T_1$ and T, where $T_1$ (resp. T) is produced by Algorithm 5 (e.g., Line 14 of Algorithm 3) if $f_1/G, K', c(s, \gamma)$ (resp. $g/G, K', c(s, \gamma)$) is the input of Algorithm 3. By the constructions of L and H in Algorithm 3, it is obvious that $T_1 \subseteq T$ (i.e., $T_1$ is a sub-diagram of T) and that $a^{T_1}_{s, s'} \ge a^{T}_{s, s'}$, where $a^{T_1}_{s, s'}$ (resp. $a^{T}_{s, s'}$) is the weight of the arc $(s, s')$ in diagram $T_1$ (resp. T). So, the total weight of the shortest path of T is not greater than that of $T_1$, which implies that $V(L(f/G) - \overline{K'}) \le V(L(f_1/G) - \overline{K'})$. According to formulas (4) and (5), we have $V(L(f/G)) \le V(L(f_1/G))$, which contradicts the assumption that $V(L(f_1/G)) < V(L(f/G))$.
To conclude, it is true that $V(L(f/G)) \le V(L(f_1/G))$, which implies that the discounted total choosing cost of the closed-loop behavior L(f/G) produced by Algorithm 7 is minimal for the optimal model (1). □
To show the effectiveness of Algorithm 7, we use the model of [1] to compute the optimal choosing control strategy.
Example 5. 
Given the transition system G of [1] shown in Figure 9, which models all sequences of possible moves of an agent in a three-storey building with a south wing and a north wing, both equipped with lifts and connected by a corridor at each floor. Moreover, there is a staircase that leads from the first floor of the south wing to the third floor of the north wing. The agent starts from the first floor of the south wing. He can walk up the stairs (s) or walk through the corridors (c) from south to north without any control. The lifts can be used several times one floor upwards (u) and altogether at most once one floor downwards (d). The moves of the lifts are controllable; thus $\Sigma_c = \{u, d\}$. The secret is that the agent is either on the second floor of the south wing or on the third floor of the north wing, i.e., $Q_s = \{1, 5, 7, 11\}$, marked by double circles. The adversary may gather the exact subsequence of moves in $\Sigma_a = \{u, c, s\}$ from sensors, but he cannot observe the downward moves of the lifts.
For every transition of system G, the choosing cost is shown in Figure 9. By [1], there is a unique supremal prefix-closed and controllable sublanguage L(g/G) of L(G) (shown in Figure 10) such that the secret is opaque w.r.t. L(g/G) and $\Sigma_a$. So, $V(\bar K) = V_\epsilon(s) + V_\epsilon(cuu) = 0 + 0.55 = 0.55$. Suppose that the choosing costs $c(s, \gamma)$ are attached to G and g/G as shown in Figure 9 and Figure 10, respectively. According to Line 12 of Algorithm 7 (i.e., Algorithm 3), $\underline{K} = \{s, cuu\}$ and $L = \{sd, cuud\}$. By the process of Algorithm 3, we have $H_1 = \{sd\}$ and $H_2 = \{cuud\}$, which means there exists only one path (shown in Figure 11) from the starting node to the ending node. So, $\min_{f \in \Gamma^{L(G)}} V(L(f/G) - \bar K) = V_s(d) + V_{cuu}(d) = 0.2 + 0.002 = 0.202$. At Lines 16 and 17 of Algorithm 3, the resulting L(f/G) is shown in Figure 12 and has the minimal discounted total choosing cost $V(L(f/G)) = V(\bar K, f) + V(L(f/G) - \bar K) = 0.55 + 0.202 = 0.752$. The optimal supervisory control defined by L(f/G) prevents the agent from using the lift of the south wing, from using the lift of the north wing from the second floor to the third floor at any time after he has used this lift downwards, and from using the lift of the north wing downwards on the second floor.

5. Conclusions

To enforce opacity by a supervisor with minimal discounted total choosing cost, an optimal supervisory control model is formulated. In the model, the objective function minimizes the discounted total choosing cost of the closed-loop behavior, and two constraint conditions are imposed: one enforces the opacity of the closed-loop behavior, and the other preserves the maximal part of the secret information in the closed-loop behavior. To solve this optimal model, algorithms and theorems are developed from the simple case to the complex one. In the simple case, the secret is assumed to be opaque w.r.t. the behavior of the system; in the complex case, it is not. For both cases, the algorithms are proved to be correct.

Author Contributions

Conceptualization, methodology, Y.D. and F.W.; supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by National Natural Science Foundation of China grant number 61203040, National Natural Science Foundation of Fujian Province grant number 2022J01295 and Science and Technology Association Project of Quanzhou.


Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A.

Theorem A1. 
Given system G and languages L, K satisfying $\bar K \subseteq L$ and $\bar K \subseteq L(G)$. If K is opaque w.r.t. $\bar K$ and $\Sigma_a$, then K is opaque w.r.t. L and $\Sigma_a$.
Proof. 
$s \in K \Rightarrow \exists s' \in \bar K - K$ s.t. $\theta(s) = \theta(s')$ (K is opaque w.r.t. $\bar K$ and $\Sigma_a$) $\Rightarrow \exists s' \in L - K$ s.t. $\theta(s) = \theta(s')$ ($\bar K \subseteq L$) $\Rightarrow$ K is opaque w.r.t. L and $\Sigma_a$ (Definition 2).    □

References

  1. Dubreil, J.; Darondeau, P.; Marchand, H. Supervisory control for opacity. IEEE Transactions on Automatic Control 2010, 55, 1089–1100.
  2. Takai, S.; Oka, Y. A formula for the supremal controllable and opaque sublanguage arising in supervisory control. SICE Journal of Control, Measurement, and System Integration 2008, 1, 307–311.
  3. Takai, S.; Watanabe, Y. Modular synthesis of maximally permissive opacity-enforcing supervisors for discrete event systems. IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences 2011, E94A, 1041–1044.
  4. Moulton, R.; Hamgini, B.; Khouzani, Z.; Meira-Goes, R.; Wang, F.; Rudie, K. Using subobservers to synthesize opacity enforcing supervisors. Discrete Event Dynamic Systems 2022, 32, 611–640.
  5. Mazare, L. Using unification for opacity properties. Proceedings of Workshop on Information Technology & Systems, 2004, pp. 165–176.
  6. Bryans, J.W.; Koutny, M.; Ryan, P.Y. Modelling opacity using Petri nets. Electronic Notes in Theoretical Computer Science 2005, 121, 101–115.
  7. Lin, F. Opacity of discrete event systems and its applications. Automatica 2011, 47, 496–503.
  8. Ben-Kalefa, M.; Lin, F. Opaque superlanguages and sublanguages in discrete event systems. Cybernetics and Systems 2016, 47, 392–426.
  9. Saboori, A.; Hadjicostis, C.N. Notions of security and opacity in discrete event systems. IEEE Conference on Decision and Control, 2008, pp. 5056–5061.
  10. Saboori, A.; Hadjicostis, C.N. Verification of k-step opacity and analysis of its complexity. IEEE Transactions on Automation Science and Engineering 2011, 8, 549–559.
  11. Falcone, Y.; Marchand, H. Enforcement and validation (at runtime) of various notions of opacity. Discrete Event Dynamic Systems 2015, 25, 531–570.
  12. Saboori, A.; Hadjicostis, C.N. Verification of infinite-step opacity and analysis of its complexity. IFAC Workshop on Dependable Control of Discrete Systems, 2009, pp. 46–51.
  13. Wu, Y.C.; Lafortune, S. Comparative analysis of related notions of opacity in centralized and coordinated architectures. Discrete Event Dynamic Systems 2013, 23, 307–339.
  14. Saboori, A.; Hadjicostis, C.N. Verification of initial-state opacity in security applications of DES. International Workshop on Discrete Event Systems, 2008, pp. 328–333.
  15. Balun, J.; Masopust, T. Comparing the notions of opacity for discrete-event systems. Discrete Event Dynamic Systems 2021, 31, 553–582.
  16. Wintenberg, A.; Blischke, M.; Lafortune, S.; Ozay, N. A general language-based framework for specifying and verifying notions of opacity. Discrete Event Dynamic Systems 2022, 32, 253–289.
  17. Ma, Z.; Yin, X.; Li, Z. Verification and enforcement of strong infinite- and k-step opacity using state recognizers. Automatica 2021, 133.
  18. Liu, R.; Lu, J. Enforcement for infinite-step opacity and K-step opacity via insertion mechanism. Automatica 2022, 140.
  19. Ji, Y.; Wu, Y.C.; Lafortune, S. Enforcement of opacity by public and private insertion functions. Automatica 2018, 93, 369–378.
  20. Ji, Y.; Yin, X.; Lafortune, S. Enforcing opacity by insertion functions under multiple energy constraints. Automatica 2019, 108.
  21. Ji, Y.; Yin, X.; Lafortune, S. Opacity enforcement using nondeterministic publicly-known edit functions. IEEE Transactions on Automatic Control 2019, 64, 4369–4376.
  22. Zhou, Y.; Chen, Z.; Liu, Z.X. Verification and enforcement of current-state opacity based on a state space approach. European Journal of Control 2023, 71.
  23. Wu, Y.C.; Lafortune, S. Synthesis of insertion functions for enforcement of opacity security properties. Automatica 2014, 50, 1336–1348.
  24. Zhang, B.; Shu, S.L.; Lin, F. Maximum information release while ensuring opacity in discrete event systems. IEEE Transactions on Automation Science and Engineering 2015, 12, 1067–1079.
  25. Behinaein, B.; Lin, F.; Rudie, K. Optimal information release for mixed opacity in discrete-event systems. IEEE Transactions on Automation Science and Engineering 2019, 16, 1960–1970.
  26. Khouzani, Z.A. Optimal payoff to ensure opacity in discrete-event systems; Queen's University, 2019.
  27. Hou, J.; Yin, X.; Li, S. A framework for current-state opacity under dynamic information release mechanism. Automatica 2022.
  28. Sengupta, R.; Lafortune, S. An optimal control theory for discrete event systems. SIAM Journal on Control and Optimization 1998, 36, 488–541.
  29. Pruekprasert, S.; Ushio, T. Optimal stabilizing supervisor of quantitative discrete event systems under partial observation. IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences 2016, 99, 475–482.
  30. Ji, X.; Lafortune, S. Optimal supervisory control with mean payoff objectives and under partial observation. Automatica 2021, 123.
  31. Hu, Q.; Yue, W. Two new optimal models for controlling discrete event systems. Journal of Industrial and Management Optimization 2017, 1, 65–80.
  32. Yue, W.; Hu, Q. Optimal control for discrete event systems with arbitrary control pattern. Discrete and Continuous Dynamical Systems - Series B (DCDS-B) 2012, 6, 535–558.
  33. Cassandras, C.; Lafortune, S. Introduction to Discrete Event Systems; Springer, 2008.
Figure 1. System $f/G$.
Figure 2. Tree automaton based on $f/G$.
Figure 3. Tree automaton based on $\bar K$.
Figure 4. System G.
Figure 5. Closed-loop language $L(f/G)$.
Figure 6. System G.
Figure 7. A weighted directed diagram.
Figure 8. Closed-loop language $L(f/G)$.
Figure 9. System G.
Figure 10. A weighted directed diagram.
Figure 11. A weighted directed diagram.
Figure 12. Closed-loop behavior $L(f/G)$ with the minimal discounted total choosing cost.
Table 1. The shortest-path set $Label_s$ and its minimal discounted total choosing cost $V_{\min}(s)$ at every node s of the diagram.

| node s | stage | $Label_s$ | $V_{\min}(s)$ |
| $t_s$ | j = 0 | $\{t_s\}$ | 0 |
| eabg | j = 1 | $\{t_s, eabg\}$ | 5.126 |
| eabeg | j = 1 | $\{t_s, eabeg\}$ | 5.1256 |
| aebgb | j = 1 | $\{t_s, aebgb\}$ | 0.0002 |
| eabgt | j = 2 | $\{t_s, eabg, eabgt\}$ | 5.126 |
| eabegt | j = 2 | $\{t_s, eabeg, eabegt\}$ | 5.1256 |
| aebgbt | j = 2 | $\{t_s, aebgb, aebgbt\}$ | 0.0002 |
| aebgte | j = 2 | $\{t_s, aebgb, aebgte\}$ | 0.00025 |
| $t_t$ | j = 3 | $\{t_s, aebgb, aebgbt, t_t\}$ | 0.0002 |
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.