Preprint Article — This version is not peer-reviewed.

A Note on Rényi Entropy Production for Markovian Evolution

Submitted: 10 September 2024; Posted: 11 September 2024


Abstract
Entropy production in stochastic thermodynamics is defined as the average, over all the possible system trajectories, of an observable which measures the divergence between a trajectory and its time-reversal. In this way a quantitative measure of the irreversible evolution of a system out of equilibrium can be given. In this note we consider a Markov system and substitute the linear average with a more general notion of average over the trajectories (the Kolmogorov-Nagumo mean), which corresponds to substituting Shannon entropy with Rényi entropy. We show that, unlike in the linear average case, the resulting notion of entropy production depends on how the observable is defined.

1. Entropy Production in Stochastic Thermodynamics

A basic tenet of non-equilibrium thermodynamics is that for a thermodynamic system in a non-equilibrium state the variation of its (time-dependent) entropy is the sum of two contributions
$$\Delta h = \Delta \sigma_{\mathrm{prod}} + \Delta \sigma_{\mathrm{ex}}. \tag{1}$$
The first term is non-negative and represents the system entropy production, while the second term, with no definite sign, describes the entropy exchange (flow) with the environment. This classical formulation was put forward in the approach of the thermodynamics of continuous systems using balance laws, see [1]. More recently it has been re-derived in the stochastic thermodynamics setting [2,3,4,5,6,7], to which we now turn our attention.
We start with a brief review of the theory in the simplest case of a discrete system $\chi$ with finitely many states and discrete time evolution. The evolution of the system in the discrete time interval $\{0,\dots,n\}$ is then a sequence $\omega = (x_0,\dots,x_n)\in\Omega_n$, $\Omega_n = \chi^{\{0,\dots,n\}}$. We denote with
$$\omega^R = R(\omega) = (x_n,\dots,x_0)\in\Omega_n \tag{2}$$
the time-reversed trajectory associated to $\omega$. Assume that $(\Omega_n, p)$ is a discrete probability space such that $p(\omega) > 0$ for every $\omega$. In stochastic thermodynamics, the existence of a probability over the different system evolutions allows for the introduction of the information associated to a trajectory as $I(\omega) = -\ln(p(\omega))$. As a measure of the irreversibility of the system evolution described by $\omega$ we take [2]
$$I^R(\omega) - I(\omega) = I(R(\omega)) - I(\omega) = \ln\frac{p(\omega)}{p(\omega^R)} \tag{3}$$
Since $I^R$ and $I$ are random variables, we take their average with respect to $p$ and their limit as $n$ goes to infinity (we write $\lim_n$ for $\lim_{n\to\infty}$ throughout the paper):
$$s^R = \lim_n \frac{1}{n}\sum_{\omega\in\Omega_n} p(\omega)\, I^R(\omega), \qquad s = \lim_n \frac{1}{n}\sum_{\omega\in\Omega_n} p(\omega)\, I(\omega). \tag{4}$$
$s$ and $s^R$ are called respectively entropy and time-reversed entropy per unit time. If we assume that both limits exist, we can define the system entropy production $\sigma$ in three equivalent ways [2], i.e. $\sigma = \sigma_1 = \sigma_2 = \sigma_3$, where
$$\sigma_1 = s^R - s, \qquad \sigma_2 = \lim_n \frac{1}{n}\sum_{\omega\in\Omega_n} p(\omega)\ln\frac{p(\omega)}{p(\omega^R)}, \qquad \sigma_3 = \lim_n \frac{1}{n}\, D(p, p^R) \tag{5}$$
and we have introduced the relative entropy between two discrete distributions $p$ and $q$,
$$D(p,q) = \sum_i p_i \ln\frac{p_i}{q_i}. \tag{6}$$
Note that $D(p,q) \ge 0$ and that $D(p,q) = 0$ only if $p = q$, so we obtain the result that the entropy production $\sigma_3$ is non-negative. Note also that the entropy per unit time $s$ in $(4)_2$ coincides with the (Shannon) entropy rate of the stochastic process defined by $(\Omega_n, p)$ [8,9].
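As a simple numerical illustration of these properties (our own example, not from the source): for $p = (1/2, 1/2)$ and $q = (1/4, 3/4)$,
$$D(p,q) = \frac{1}{2}\ln\frac{1/2}{1/4} + \frac{1}{2}\ln\frac{1/2}{3/4} = \frac{1}{2}\ln 2 + \frac{1}{2}\ln\frac{2}{3} \approx 0.144 > 0,$$
while $D(q,p) = \frac{1}{4}\ln\frac{1/4}{1/2} + \frac{3}{4}\ln\frac{3/4}{1/2} \approx 0.131$; the comparison also recalls that $D$ is not symmetric in its arguments.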

2. Rényi Entropy and Information

The aim of this section is to retrace the derivation of the entropy production in stochastic thermodynamics presented above, but with a different notion of average over all the possible system evolutions, introduced by Rényi. (A different approach is taken in [17].) We follow the exposition of Rényi's original argument [10] given in [11] for discrete probability distributions. A first point is the following: if $p$ and $q$ denote the probabilities of two events and $I(p)$ and $I(q)$ is a sought measure of the information associated to the events, then the requirement that $I(pq) = I(p) + I(q)$ for independent events singles out a unique information measure: $I(p) = -\ln p$, which is the one that we adopted earlier.
If different events are to be taken into account, with event $i$ occurring with probability $p_i$, we would like to define the average information associated to the observable $I(p_i)$, weighted by the probability of event $i$. The most general notion of average complying with the postulates of probability theory is the so-called Kolmogorov-Nagumo mean [11,14], which for the informations $I(p_i)$ reads
$$I_f(p) = f^{-1}\!\left(\sum_i p_i\, f(I(p_i))\right) \tag{7}$$
where $f$ is a real invertible function. Rényi showed that the only possible classes of functions $f$ such that $I_f$ satisfies the additivity postulate for independent events are the linear and the exponential ones, i.e.
$$f(x) = cx, \;\; c \neq 0, \qquad \text{and} \qquad f(x) = \frac{1}{\gamma}\left(e^{(1-\alpha)x} - 1\right), \;\; \alpha \neq 1 \tag{8}$$
Plugging (8) into (7) for $I(p_i) = -\ln p_i$ we obtain, in the linear case, the Shannon information measure
$$I_1(p) = -\sum_i p_i \ln p_i \tag{9}$$
and in the exponential case the Rényi information measure of order $\alpha \neq 1$
$$I_\alpha(p) = \frac{1}{1-\alpha}\ln \sum_i p_i^{\alpha}. \tag{10}$$
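For completeness we spell out the computation behind (10), which is implicit in the text: the exponential $f$ in (8) has inverse $f^{-1}(y) = \frac{1}{1-\alpha}\ln(1+\gamma y)$, and for $I(p_i) = -\ln p_i$ we get $f(-\ln p_i) = \frac{1}{\gamma}\big(p_i^{\alpha-1} - 1\big)$, so that
$$I_f(p) = f^{-1}\!\Big(\frac{1}{\gamma}\Big(\sum_i p_i\, p_i^{\alpha-1} - 1\Big)\Big) = \frac{1}{1-\alpha}\ln\sum_i p_i^{\alpha},$$
i.e. exactly (10); note that the constant $\gamma$ drops out of the final expression.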
Note that for $\alpha = 1$ the expression $I_\alpha$ is an indeterminate form $0/0$, but a straightforward application of de l'Hôpital's rule shows that $\lim_{\alpha\to 1} I_\alpha(p) = I_1(p)$, so the Rényi entropy measure is a proper generalization of Shannon entropy. While the importance of Shannon entropy in the formulation of statistical mechanics is unquestioned, there is a growing literature showing that the notion of Rényi entropy is not only pivotal in information theory but is also appropriate to describe physical systems with a fractal nature, see again [11].
The pioneering paper of Rényi [10] also contains the proper generalization of the relative entropy $D(p,q)$, considered as a measure of the divergence between $p$ and $q$: it is given by
$$D_\alpha(p,q) = \frac{1}{\alpha-1}\ln \sum_i p_i^{\alpha} q_i^{1-\alpha} \tag{11}$$
and we have again by de l'Hôpital's rule that $\lim_{\alpha\to 1} D_\alpha(p,q) = D(p,q)$.

3. On the Definition of Rényi Entropy Production

The different but equivalent notions of entropy production given above were derived in (5) using the linear average $(8)_1$ of the informations $I(\omega)$ and $I^R(\omega)$. We now use the exponential average $(8)_2$ to compute the average of the information over all the possible trajectories and then take the limit as $n$ goes to infinity. More in detail, for a random variable $A(\omega)$ on $(\Omega_n, p)$, instead of the linear average $\sum_\omega p(\omega) A(\omega)$ we consider the exponential average given by (7) and $(8)_2$. A direct computation shows that the exponential averages are (here $\omega$ plays the role of the event index $i$ in (7))
$$I = -\ln p(\omega) \;\longrightarrow\; I_\alpha = \frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega) \tag{12}$$
$$I^R = -\ln p(\omega^R) \;\longrightarrow\; I^R_\alpha = \frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p(\omega)\, p^{\alpha-1}(\omega^R) \tag{13}$$
$$S = \ln\frac{p(\omega)}{p(\omega^R)} \;\longrightarrow\; S_\alpha = \frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p^{2-\alpha}(\omega)\, p^{\alpha-1}(\omega^R) \tag{14}$$
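As an example of how (12)-(14) are obtained (we make one case explicit; the other two are identical in structure): for $S(\omega)$, applying (7) with the exponential $f$ of $(8)_2$ gives
$$S_\alpha = \frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p(\omega)\, e^{(1-\alpha)\ln\frac{p(\omega)}{p(\omega^R)}} = \frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p^{2-\alpha}(\omega)\, p^{\alpha-1}(\omega^R),$$
which is (14).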
We have thus the following result:
Proposition 1 (Rényi entropy production). If one uses the exponential average instead of the linear average in the definition (5) of the system entropy production, one obtains three a priori different notions of Rényi entropy production: for $\sigma_1$ in $(5)_1$
$$\sigma_1(\alpha) = \lim_n \frac{1}{n}\, I^R_\alpha - \lim_n \frac{1}{n}\, I_\alpha = \lim_n \frac{1}{n}\left[\frac{1}{1-\alpha}\ln \frac{\sum_{\omega\in\Omega_n} p(\omega)\, p^{\alpha-1}(\omega^R)}{\sum_{\omega\in\Omega_n} p^{\alpha}(\omega)}\right] \tag{15}$$
for $\sigma_2$ in $(5)_2$
$$\sigma_2(\alpha) = \lim_n \frac{1}{n}\, S_\alpha = \lim_n \frac{1}{n}\left[\frac{1}{1-\alpha}\ln \sum_{\omega\in\Omega_n} p^{2-\alpha}(\omega)\, p^{\alpha-1}(\omega^R)\right] \tag{16}$$
for $\sigma_3$ in $(5)_3$ we resort directly to the expression (11) of the $\alpha$-divergence measure and set
$$\sigma_3(\alpha) = \lim_n \frac{1}{n}\, D_\alpha(p, p^R) = \lim_n \frac{1}{n}\left[\frac{1}{\alpha-1}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega)\, p^{1-\alpha}(\omega^R)\right] \tag{17}$$

4. The Case of Markovian Evolution

We now specialize to the important case of a system described by a stationary ergodic Markov chain, defined by a stochastic transition matrix $P_{ij} > 0$ for all $i,j$ and a stationary probability distribution $\pi$ such that $\pi P = \pi$ (equivalently, $P^T\pi = \pi$). In this setting we define for $(\Omega_n, p)$
$$p(\omega) = \pi_{x_0} P_{x_0 x_1}\cdots P_{x_{n-1}x_n}, \qquad p(\omega^R) = \pi_{x_n} P_{x_n x_{n-1}}\cdots P_{x_1 x_0} \tag{18}$$
Note that in case the probability distribution $\pi$ is not stationary, the definition given for $p(\omega^R)$ is not the only possible one; a different choice would be to substitute $\pi_{x_n}$ with $\pi(n)_{x_n}$, where $\pi(n) = \pi P^n$ is the probability distribution describing the state of the system at time $n$ while $\pi$ is the non-stationary initial distribution. The difference between the two choices is not relevant in the limit as $n$ goes to infinity, because the Markovian evolution forgets its initial condition. For more details see e.g. [12]. A classical result [2,8] states that:
Proposition 2. For a stationary ergodic chain described by $(P,\pi)$ the (linear) entropy production defined in (5) is the same for the three cases $\sigma_1, \sigma_2, \sigma_3$ and is equal to
$$\sigma = \lim_n \frac{1}{n}\, D(p,p^R) = \lim_n \frac{1}{n}\sum_{\omega\in\Omega_n} p(\omega)\ln\frac{p(\omega)}{p(\omega^R)} = \sum_{i,j\in\chi} \pi_i P_{ij} \ln\frac{P_{ij}}{P_{ji}}. \tag{19}$$
This shows that the entropy production is zero for a symmetric matrix $P$. Moreover, the entropy production admits the equivalent formulations
$$\sigma = \sum_{i,j\in\chi} \pi_i P_{ij} \ln\frac{\pi_i P_{ij}}{\pi_j P_{ji}} = \frac{1}{2}\sum_{i,j\in\chi} \left(\pi_i P_{ij} - \pi_j P_{ji}\right) \ln\frac{\pi_i P_{ij}}{\pi_j P_{ji}} \tag{20}$$
(the first expression coincides with (19) because $\sum_{i,j}\pi_i P_{ij}\ln(\pi_i/\pi_j) = \sum_i \pi_i\ln\pi_i - \sum_j \pi_j\ln\pi_j = 0$ by stationarity). The last formulation [3] shows that if the detailed balance condition (DBC) $\pi_i P_{ij} - \pi_j P_{ji} = 0$ for all $i,j$ holds, the entropy production is zero; however, a stationary distribution $\pi$ need not fulfill the DBC if $\chi$ has more than two states, so we may have non-zero entropy production even if the chain is stationary and the system $\chi$ is in a non-equilibrium stationary state.
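Proposition 2 can be checked numerically by brute force. The following minimal sketch (our own, not code from the paper; it assumes Python with numpy, and the matrix is an arbitrary positive non-symmetric example) enumerates all trajectories of length $n$, evaluates $\frac{1}{n}D(p,p^R)$ with $p(\omega)$ as in (18), and compares it with the right-hand side of (19). For a stationary chain the two quantities agree already at finite $n$, since the boundary terms $\ln\pi_{x_0} - \ln\pi_{x_n}$ average to zero.
```python
# Brute-force check of (19); a sketch under the stated assumptions. Requires numpy.
import itertools
import numpy as np

# a positive, non-symmetric stochastic matrix (circulant, so pi is uniform)
P = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6]])

# stationary distribution: left Perron eigenvector of P, normalized
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

def traj_prob(x):
    """p(omega) = pi_{x0} P_{x0 x1} ... P_{x_{n-1} x_n}, as in (18)."""
    prob = pi[x[0]]
    for a, b in zip(x, x[1:]):
        prob *= P[a, b]
    return prob

def D_over_n(n):
    """(1/n) D(p, p^R), summing over all 3^(n+1) trajectories of length n."""
    total = 0.0
    for x in itertools.product(range(3), repeat=n + 1):
        p_fwd, p_rev = traj_prob(x), traj_prob(x[::-1])
        total += p_fwd * np.log(p_fwd / p_rev)
    return total / n

sigma = sum(pi[i] * P[i, j] * np.log(P[i, j] / P[j, i])
            for i in range(3) for j in range(3))
print(D_over_n(6), sigma)  # both equal 0.2*ln(3) ~ 0.2197 here
```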

4.1. Rényi Entropy Production for Stationary Markovian Evolution

The computations of this section are based on [13] and [9]. We consider the case of a system described by a stationary ergodic Markov chain and we compute the Rényi entropy production $\sigma_i(\alpha)$ for $i = 1,2,3$ as given by Proposition 1. We give the detailed computation for $\sigma_3(\alpha)$; the ones for $\sigma_1(\alpha)$ and $\sigma_2(\alpha)$ follow with minor modifications. Let $p(\omega)$ and $p(\omega^R)$ be given by (18). Then in (17) we have
$$\frac{1}{\alpha-1}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega)\, p^{1-\alpha}(\omega^R) = \frac{1}{\alpha-1}\ln \sum_{x_0\cdots x_n} \pi_{x_0}^{\alpha}\, P^{\alpha}_{x_0 x_1}\cdots P^{\alpha}_{x_{n-1}x_n}\, \pi_{x_n}^{1-\alpha}\, P^{1-\alpha}_{x_n x_{n-1}}\cdots P^{1-\alpha}_{x_1 x_0}. \tag{21}$$
If we define the vectors $s_i = \pi_i^{\alpha}$, $w_i = \pi_i^{1-\alpha}$ and the matrix with positive entries
$$M_3(\alpha)_{ij} = P^{\alpha}_{ij}\, P^{1-\alpha}_{ji} > 0 \tag{22}$$
and we denote with $M^n$ the $n$-th power of $M$, we see that
$$\frac{1}{\alpha-1}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega)\, p^{1-\alpha}(\omega^R) = \frac{1}{\alpha-1}\ln\, s \cdot M_3^n(\alpha)\, w. \tag{23}$$
To compute the $n\to\infty$ limit of the above expression we use this result of Perron-Frobenius theory for positive matrices [8]: let $A > 0$ be a positive matrix; then there exists a positive real eigenvalue $\lambda$ of maximum modulus and positive right [resp. left] eigenvectors $e$ and $f$ such that
$$\lim_n \frac{(A^n)_{ij}}{\lambda^n} = e_i f_j = (e f^T)_{ij} \qquad \forall\, i,j. \tag{24}$$
Let $\lambda_3 = \lambda_3(\alpha)$ be the positive maximum modulus eigenvalue of $M_3(\alpha)$ defined in (22). Then by (23) and (24)
$$\frac{1}{n}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega)\, p^{1-\alpha}(\omega^R) = \frac{1}{n}\ln\!\left(\frac{s \cdot M_3^n(\alpha)\, w}{\lambda_3^n}\,\lambda_3^n\right) = \frac{1}{n}\ln \frac{s \cdot M_3^n(\alpha)\, w}{\lambda_3^n} + \ln \lambda_3 \tag{25}$$
and taking the limit as $n$ goes to infinity, since the logarithm is a continuous function, we have
$$\lim_n \ln \frac{s\cdot M_3^n(\alpha)\, w}{\lambda_3^n} = \ln\!\left[s \cdot \left(\lim_n \frac{M_3^n(\alpha)}{\lambda_3^n}\right) w\right] = \ln\big((s\cdot e)(f\cdot w)\big)$$
and hence, see (17) and (25),
$$\sigma_3(\alpha) = \lim_n \frac{1}{n}\, D_\alpha(p,p^R) = \lim_n \frac{1}{n}\,\frac{1}{\alpha-1}\ln \sum_{\omega\in\Omega_n} p^{\alpha}(\omega)\, p^{1-\alpha}(\omega^R) = \frac{1}{\alpha-1}\ln\lambda_3(\alpha). \tag{26}$$
With minor modifications of the above argument we can prove for (16) that
$$\sigma_2(\alpha) = \lim_n \frac{1}{n}\, S_\alpha = \frac{1}{1-\alpha}\ln\lambda_2(\alpha),$$
where $\lambda_2(\alpha)$ is the positive maximum modulus eigenvalue of
$$M_2(\alpha)_{ij} = P_{ij}^{2-\alpha}\, P_{ji}^{\alpha-1}.$$
Note that for a symmetric matrix $P_{ij} = P_{ji}$ we have $M_3(\alpha) = M_2(\alpha) = P$, independent of $\alpha$. Since $\lambda = 1$ for a stochastic matrix, we recover the result that the entropy production is zero for a symmetric matrix. In the same way as above, we can prove for (15) that
$$\sigma_1(\alpha) = \lim_n \frac{1}{n}\, I^R_\alpha - \lim_n \frac{1}{n}\, I_\alpha = \frac{1}{1-\alpha}\ln\frac{\lambda_1^R(\alpha)}{\lambda_1(\alpha)}$$
where $\lambda_1^R(\alpha)$ is the positive maximum modulus eigenvalue of
$$M_1^R(\alpha)_{ij} = P_{ij}\, P_{ji}^{\alpha-1}$$
and $\lambda_1(\alpha)$ is the positive maximum modulus eigenvalue of $M_1(\alpha)_{ij} = P_{ij}^{\alpha}$. For a symmetric matrix $P = P^T$ we have
$$M_1(\alpha) = M_1^R(\alpha),$$
hence $\lambda_1^R(\alpha) = \lambda_1(\alpha)$ and $\sigma_1(\alpha) = 0$ for a symmetric matrix. We have thus obtained three expressions for the Rényi entropy production of a stationary ergodic Markov chain, as summarized in the following
Proposition 3. Denote with $\sigma_i(n,\alpha)$, $i = 1,2,3$, the quantity in square brackets on the r.h.s. of (15), (16), (17). For a stationary ergodic Markov chain the Rényi entropy production can be defined using Proposition 1 as
$$\sigma_1(\alpha) = \lim_n \frac{1}{n}\,\sigma_1(n,\alpha) = \frac{1}{1-\alpha}\ln\frac{\lambda_1^R(\alpha)}{\lambda_1(\alpha)}$$
$$\sigma_2(\alpha) = \lim_n \frac{1}{n}\,\sigma_2(n,\alpha) = \frac{1}{1-\alpha}\ln\lambda_2(\alpha)$$
$$\sigma_3(\alpha) = \lim_n \frac{1}{n}\,\sigma_3(n,\alpha) = \frac{1}{\alpha-1}\ln\lambda_3(\alpha).$$
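Proposition 3 translates directly into a numerical recipe. Below is a minimal sketch (our own, assuming Python with numpy; the function names renyi_epr and leading_eig are ours) that builds the matrices $M_1$, $M_1^R$, $M_2$, $M_3$ entrywise and returns the three Rényi entropy productions for a given $\alpha \neq 1$.
```python
# Rényi entropy productions of Proposition 3 via leading eigenvalues.
# A sketch under the stated assumptions, not the authors' code.
import numpy as np

def leading_eig(M):
    """Perron eigenvalue of a positive matrix: the eigenvalue of maximum
    modulus, which is real and positive."""
    return max(np.linalg.eigvals(M), key=abs).real

def renyi_epr(P, alpha):
    """Return (sigma_1, sigma_2, sigma_3)(alpha) for a stochastic P > 0, alpha != 1."""
    M1  = P ** alpha                                   # M_1(a)_ij   = P_ij^a
    M1R = P * P.T ** (alpha - 1.0)                     # M_1^R(a)_ij = P_ij P_ji^(a-1)
    M2  = P ** (2.0 - alpha) * P.T ** (alpha - 1.0)    # M_2(a)_ij   = P_ij^(2-a) P_ji^(a-1)
    M3  = P ** alpha * P.T ** (1.0 - alpha)            # M_3(a)_ij   = P_ij^a P_ji^(1-a)
    s1 = np.log(leading_eig(M1R) / leading_eig(M1)) / (1.0 - alpha)
    s2 = np.log(leading_eig(M2)) / (1.0 - alpha)
    s3 = np.log(leading_eig(M3)) / (alpha - 1.0)
    return s1, s2, s3
```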

4.2. Exchangeability of the $(n,\alpha)$-Limits in the Rényi Entropy Production $\sigma_i(\alpha)$

In this section we show that:
Proposition 4. For all three expressions of the Rényi entropy production $\sigma_i(n,\alpha)$ in Proposition 3 we can exchange the order of the limits, that is,
$$\lim_n \lim_{\alpha\to 1} \frac{1}{n}\,\sigma_i(n,\alpha) = \lim_{\alpha\to 1}\lim_n \frac{1}{n}\,\sigma_i(n,\alpha) = \sigma, \qquad i = 1,2,3.$$
Proof. Denote with $\sigma_i(n,\alpha)$, $i = 1,2,3$, the quantity in square brackets on the r.h.s. of (15), (16), (17). A preliminary investigation concerns their behavior in the limit $\alpha\to 1$. A direct application of de l'Hôpital's rule shows that
$$\lim_{\alpha\to 1}\sigma_i(n,\alpha) = \sum_{\omega\in\Omega_n} p(\omega)\ln\frac{p(\omega)}{p(\omega^R)} = D(p,p^R), \qquad i = 1,2,3$$
Therefore, by (19), in all three cases, computing the limit in $n$ we have
$$\lim_n \lim_{\alpha\to 1}\frac{1}{n}\,\sigma_i(n,\alpha) = \lim_n \frac{1}{n}\, D(p,p^R) = \sigma, \qquad i = 1,2,3.$$
We now turn to the computation of $\lim_{\alpha\to 1}\sigma_i(\alpha)$. We show the computation of the limit $\alpha\to 1$ for $\sigma_3(\alpha)$ in (26); the other cases are dealt with via minor modifications.
In all generality, we know that the maximum modulus eigenvalue is a continuous function of the entries of the matrix. Therefore, since all the matrices $M_i(\alpha)$ for $i = 1,2,3$ and $M_1^R(\alpha)$ tend to $P$ as $\alpha$ goes to 1, in all cases $\lambda_i(\alpha)$ and $\lambda_1^R(\alpha)$ tend to the maximum modulus eigenvalue of $P$, which is equal to 1 because $P$ is a stochastic matrix. Hence $\lim_{\alpha\to 1}\sigma_i(\alpha)$ is an indeterminate form $0/0$, to which we again apply de l'Hôpital's rule. We compute
$$\lim_{\alpha\to 1}\sigma_3(\alpha) = \lim_{\alpha\to 1}\frac{\ln\lambda_3(\alpha)}{\alpha-1} = \lim_{\alpha\to 1}\frac{\lambda_3'(\alpha)}{\lambda_3(\alpha)} = \lambda_3'(1)$$
where $\lambda'(\alpha) = d\lambda(\alpha)/d\alpha$. From now on we drop the index 3 in $\lambda_3$ and $M_3$ to ease the notation. To compute $\lambda'(\alpha)$, note that, at least in a neighborhood $U$ of $\alpha = 1$, the eigenvalue $\lambda(\alpha)$ is the solution of the equation
$$\det A(\alpha) = \det\big(M(\alpha) - \lambda(\alpha)\, I\big) = 0 \qquad \forall\,\alpha\in U$$
Note that by (22)
$$A_{ii}(\alpha) = P_{ii} - \lambda(\alpha), \qquad A_{ij}(\alpha) = M_{ij}(\alpha) \quad \text{if } i\neq j$$
Denote with $C_{ij}$ the algebraic complement (cofactor) of $A_{ij}$ and write $\det A$ as $\det A = \sum_j C_{ij} A_{ij}$. Then
$$\frac{d}{d\alpha}\det A = \sum_{ij} \frac{\partial \det A}{\partial A_{ij}}\,\frac{dA_{ij}}{d\alpha} = \sum_{ij} C_{ij}\,\frac{dA_{ij}}{d\alpha} = \sum_i C_{ii}\big({-\lambda'(\alpha)}\big) + \sum_{ij} C_{ij}\,\frac{d}{d\alpha} M_{ij}(\alpha) = 0$$
Now from (22)
$$\frac{d}{d\alpha} M_{ij}(\alpha) = M_{ij}(\alpha)\ln\frac{P_{ij}}{P_{ji}} \tag{34}$$
therefore
$$\lambda'(\alpha) = \frac{1}{\sum_i C_{ii}}\sum_{ij} C_{ij}\, M_{ij}(\alpha)\ln\frac{P_{ij}}{P_{ji}} = \sum_{ij} \frac{C_{ij}}{\sum_k C_{kk}}\, M_{ij}(\alpha)\ln\frac{P_{ij}}{P_{ji}} \tag{35}$$
We conclude the proof using the following lemma.
Lemma 1. Let $A(\alpha) = M(\alpha) - \lambda(\alpha)\, I$ and let $C_{ij}(\alpha)$ be the matrix of algebraic complements (cofactors) of $A(\alpha)$. Then
$$\lim_{\alpha\to 1}\frac{C_{ij}(\alpha)}{\sum_k C_{kk}(\alpha)} = \frac{C_{ij}(1)}{\sum_k C_{kk}(1)} = \pi_i$$
where $\pi$ is the stationary distribution of $P$, $\pi P = \pi$.
Proof of Lemma. See Appendix A.
We are now able to conclude our computation: since $\lim_{\alpha\to 1} M(\alpha) = P$, we have from (34) and (35) that
$$\lim_{\alpha\to 1}\sigma_3(\alpha) = \lim_{\alpha\to 1}\lambda'(\alpha) = \lambda'(1) = \sum_{ij}\pi_i P_{ij}\ln\frac{P_{ij}}{P_{ji}} = \sigma \qquad \square$$

5. Numerical Example

We have shown that there are three different formulations which generalize the notion of entropy production for an ergodic stationary Markov chain. In this section we introduce a simple stochastic matrix $P$ and we compute the Rényi entropy production $\sigma_i(\alpha)$ for $i = 1,2,3$. Let $p, q > 0$ with $p + q < 1$ (so that $P > 0$) and define
$$P = \begin{pmatrix} 1-p-q & p & q \\ q & 1-p-q & p \\ p & q & 1-p-q \end{pmatrix}$$
Note that, even if $P$ is a non-symmetric stochastic matrix, it has $\pi = (\tfrac13, \tfrac13, \tfrac13)$ as a stationary distribution. Using (19) we can compute the entropy production of $(P,\pi)$:
$$\sigma = \sum_{i,j\in\chi}\pi_i P_{ij}\ln\frac{P_{ij}}{P_{ji}} = (p-q)\ln\frac{p}{q} \ge 0. \tag{36}$$
To compute $\sigma_i(\alpha)$ one needs the corresponding matrix $M_i(\alpha)$ and its maximum modulus eigenvalue $\lambda_i(\alpha)$ for $\alpha$ in a neighborhood of 1. This has been done using the software Wolfram Mathematica. See Figure 1 for a plot of $\sigma_i(\alpha)$ as a function of $\alpha$ for $p = 1/4$ and $q = 1/2$. The intersection of the curves is at $\alpha = 1$, $\sigma = \tfrac14\ln 2$, corresponding to $\sigma$ computed by (36): indeed $(p-q)\ln(p/q) = (-\tfrac14)\ln\tfrac12 = \tfrac14\ln 2$.
Figure 1. (a) Plot of $\sigma_3(\alpha)$ (dotted), $\sigma_2(\alpha)$ (dashed) and $\sigma_1(\alpha)$ (dot-dashed) as functions of $\alpha\in[0,5]$ for $p = 1/4$, $q = 1/2$; (b) the same as in (a), for $\alpha\in[0,2]$.
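An equivalent check can be run with the renyi_epr sketch given after Proposition 3 (again our own code, not the Mathematica computation used in the paper):
```python
# Reproducing the setting of Figure 1: p = 1/4, q = 1/2.
import numpy as np

p, q = 0.25, 0.50
P = np.array([[1 - p - q, p, q],
              [q, 1 - p - q, p],
              [p, q, 1 - p - q]])

sigma = (p - q) * np.log(p / q)       # (36): (1/4) ln 2 ~ 0.1733
for alpha in (0.5, 0.9, 0.99, 1.01, 1.1, 2.0):
    s1, s2, s3 = renyi_epr(P, alpha)  # from the sketch after Proposition 3
    print(alpha, s1, s2, s3)          # the three values differ for alpha != 1
print(sigma)                          # and all approach sigma as alpha -> 1
```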

6. Conclusions

In stochastic thermodynamics the irreversibility of the system evolution is expressed in terms of the information $I(p(\omega))$ associated to a trajectory $\omega$ and to its time-reversal. The additivity postulate on the information $I(p)$ associated to an event of probability $p$ restricts the possible forms of the average information of the events to the linear or exponential Kolmogorov-Nagumo mean [11]. In the first case we obtain the Shannon information measure and in the second the Rényi information measure of order $\alpha$. Note that the Rényi information measure reduces to the Shannon one when $\alpha$ tends to 1.
In stochastic thermodynamics, the entropy production is defined as the linear average over the system trajectories $\omega$ of the information difference $I(p(\omega^R)) - I(p(\omega))$, where $\omega^R$ is the time-reversed trajectory. This coincides with the linear average of $S(\omega) = \ln[p(\omega)/p(\omega^R)]$, which in turn coincides with the relative entropy $D(p,p^R)$ [2].
In this paper we have shown that, if one considers the exponential average of order $\alpha$ of the random variables introduced above, one obtains three different definitions of entropy production when one considers trajectories of infinite length, which we have called Rényi entropy production. It is also shown that in the limit of $\alpha$ going to 1 all three definitions reduce to the linear entropy production $\sigma$ in (19), and this result is independent of the order in which the $\alpha$ and $n$ limits are taken. However, the Rényi information measure is used with $\alpha\neq 1$ in the description of diverse physical systems (turbulent fluid flows, fractal systems, DNA sequences [11]). It is therefore important to investigate the existence of a notion of entropy production associated to Rényi entropy via the exponential average. In this respect we have shown that different straightforward generalizations of the usual linear average definition are possible, which give quantitatively different answers for the entropy production rate when $\alpha\neq 1$. This fact is relevant, for example, if one intends to adopt the maximum entropy production rate principle to describe a thermodynamic system in a stationary non-equilibrium state [15,16]. We think that the Rényi information measure, containing an additional parameter $\alpha$, is able to 'resolve' theoretical differences between multiple definitions of the notion of entropy production which remain unseen when considering only the $\alpha = 1$ case, corresponding to the use of Shannon entropy. We hope that the issue raised in this work may help to shed light also on the usual notion of entropy production rate, which is a cornerstone of non-equilibrium thermodynamics.

Funding

There are no funding bodies to thank relating to the creation of this article.

Acknowledgments

We wish to thank...

Conflicts of Interest

There are no competing interests to declare arising from the preparation or publication of this article.

Appendix A

Proof (Proof of Lemma 1). Note that $A(1) = P - I$. We prove that $C_{ij}(1) = c\,\pi_i$ for all $j$, for some constant $c \neq 0$. Let $\pi$ be the unique stationary distribution of $P$. Then $P^T\pi = \pi$, i.e. $A^T\pi = (P^T - I)\pi = 0$, so $\det A^T = 0$; moreover $\ker A^T = \langle\pi\rangle$, because the chain is ergodic. By the Laplace expansion of the determinant, together with the fact that an expansion along alien cofactors vanishes identically, we have
$$\sum_j C_{kj}(A^T)\,(A^T)_{ij} = \delta_{ki}\det A^T = 0 \qquad \forall\, k, i.$$
If we denote with $\mathrm{ad}(A) = C(A)^T$ the adjugate matrix, it holds that $\mathrm{ad}(A^T) = \mathrm{ad}(A)^T$, therefore $C(A^T) = C(A)^T$. Then we have
$$\sum_j (A^T)_{ij}\, C_{jk}(A) = 0 \qquad \forall\, k, i,$$
i.e. every column of $C(A)$ belongs to $\ker A^T = \langle\pi\rangle$, so $C_{jk}(A) = m_k\,\pi_j$ for some constants $m_k$. Applying the same argument to $A\,\mathrm{ad}(A) = \det(A)\, I = 0$, every row of $C(A)$ belongs to $\ker A = \langle \mathbf{1}\rangle$, the span of the vector of ones (again by ergodicity), so $C_{jk}(A)$ does not depend on $k$. Therefore the $m_k$ reduce to a single constant $c$ and $C_{jk}(A) = c\,\pi_j$ for all $k$, as requested. □
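A quick numerical illustration of the lemma (our own sketch, assuming Python with numpy): for an ergodic stochastic matrix, the cofactor matrix of $A = P - I$ has constant rows proportional to $\pi$, so that $C_{ij}/\sum_k C_{kk} = \pi_i$.
```python
# Cofactors of A = P - I: each entry of row i equals c * pi_i.
import numpy as np

P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
A = P - np.eye(3)

def cofactor(A, i, j):
    """(-1)^(i+j) times the minor obtained by deleting row i and column j."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

C = np.array([[cofactor(A, i, j) for j in range(3)] for i in range(3)])
print(C / np.trace(C))  # each column equals the stationary distribution pi
```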

References

  1. De Groot SR, Mazur P. Non-equilibrium Thermodynamics. Dover Publications.
  2. Gaspard P. Time-reversed dynamical entropy and irreversibility in Markovian random processes. Journal of Statistical Physics. 2004;117:599-615.
  3. Schnakenberg J. Network theory of microscopic and macroscopic behavior of master equation systems. Reviews of Modern Physics. 1976;48(4):571.
  4. Jiang DQ, Jiang D. Mathematical Theory of Nonequilibrium Steady States: On the Frontier of Probability and Dynamical Systems. Springer Science & Business Media; 2004.
  5. Ge H, Qian H. Physical origins of entropy production, free energy dissipation, and their mathematical representations. Physical Review E. 2010;81(5):051133.
  6. Esposito M, Van den Broeck C. Three faces of the second law. I. Master equation formulation. Physical Review E. 2010;82(1):011143.
  7. Busiello DM, Gupta D, Maritan A. Entropy production in systems with unidirectional transitions. Physical Review Research. 2020;2(2):023011.
  8. Koralov L, Sinai YG. Theory of Probability and Random Processes. Springer Science & Business Media; 2007.
  9. Golshani L, Pasha E, Yari G. Some properties of Rényi entropy and Rényi entropy rate. Information Sciences. 2009;179(14):2426-2433.
  10. Rényi A. On measures of entropy and information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. University of California Press; 1961. pp. 547-562.
  11. Jizba P, Arimitsu T. The world according to Rényi: thermodynamics of multifractal systems. Annals of Physics. 2004;312(1):17-59.
  12. Yang YJ, Qian H. Unified formalism for entropy production and fluctuation relations. Physical Review E. 2020;101(2):022129.
  13. Rached Z, Alajaji F, Campbell LL. Rényi's divergence and entropy rates for finite alphabet Markov sources. IEEE Transactions on Information Theory. 2001;47(4):1553-1561.
  14. Masi M. A step beyond Tsallis and Rényi entropies. Physics Letters A. 2005;338(3-5):217-224.
  15. Favretti M. The maximum entropy rate description of a thermodynamic system in a stationary non-equilibrium state. Entropy. 2009;11(4):675-687.
  16. Monthus C. Non-equilibrium steady states: maximization of the Shannon entropy associated with the distribution of dynamical trajectories in the presence of constraints. Journal of Statistical Mechanics: Theory and Experiment. 2011;2011(03):P03008.
  17. Cybulski O, Babin V, Holyst R. Minimization of the Renyi entropy production in the space-partitioning process. Physical Review E. 2005;71(4):046130.