Preprint
Article

This version is not peer-reviewed.

Consistent Markov Edge Processes and Random Graphs

A peer-reviewed article of this preprint also exists.

Submitted:

22 September 2025

Posted:

23 September 2025

You are already at the latest version

Abstract
We discuss Markov edge processes $\{Y_e; e \in E\}$ defined on edges of a directed acyclic graph $(V, E)$ with the consistency property: $$ {\mathrm P}_{E'}(Y_e; e \in E') = {\mathrm P}_E(Y_e; e \in E') $$ for a large class of subgraphs $(V',E')$ of $(V,E)$ obtained through a mesh dismantling algorithm. The probability distribution ${\mathrm P}_E$ of such edge process is a discrete version of consistent polygonal Markov graphs studied in \cite{arakDS1993, arakDS1989}. The class of Markov edge processes is related to the class of Bayesian networks and may be of interest to causal inference and decision theory. On regular $\nu$-dimensional lattices, consistent Markov edge processes have similar properties to Pickard random fields on ${\mathbb Z}^2$, representing a far-reaching extension of the latter class. A particular case of binary consistent edge process on ${\mathbb Z}^3$ was disclosed by Arak in a private communication. We prove that symmetric binary Pickard model generates Arak model on ${\mathbb Z}^2$ as a contour model.
Keywords: 
;  ;  ;  ;  ;  ;  ;  ;  ;  ;  ;  

1. Introduction

Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph [Wikipedia]. Bayesian network is a fundamental concept in learning and artificial intelligence, see [20,21,25]. Formally, a Bayesian network is a random field X V = { X v ; v V } indexed by sites v V of a directed acyclic graph (DAG) G = ( V , E ) whose probability distribution writes as a product over of conditional probabilities of X v given its parent variables X u , u pa ( v ) , where pa ( v ) V is the set of parents of v.
Bayesian network is a special case of Markov random field on (generally undirected) graph G = ( V , E ) . Markov random fields include nearest-neighbor Gibbs random fields and play an important role in many applied sciences including statistical physics and image analysis [4,13,17,18].
As noted in [5,10,23] and elsewhere, manipulating marginals (e.g., computing the mean or the covariance function) of a Gibbs random field is generally very hard when V is large. A class of Markov random fields on Z 2 which avoids this difficulty was proposed by Pickard [22,23] (the main idea of their construction belongs to Verhagen [30], see [22]). Pickard and related unilateral random fields have been found useful applications in image analysis, coding, information theory, crystallography, and other scientific areas, not least because they allow for efficient simulation procedures [5,7,10,11,14]
Pickard model X V (rigorously defined in sec.2) is a family of Bayesian networks on rectangular graphs G = ( V , E ) , V Z 2 enjoying the important (Kolmogorov) consistency property: for any rectangles V V we have that
P V | V = P V ,
where P V is the distribution of X V and P V | V is the restriction of marginal of X V on V V . Property (1.1) implies that Pickard model extends to a stationary random field on Z 2 whose marginals coincide with P V and form Markov chains on each horizontal or vertical line.
Arak et al [1,2] introduced a class of polygonal Markov graphs and random fields X T = { X t ; t T } indexed by continuous argument t R 2 satisfying a similar consistency property: for any bounded convex domains T T the restriction of X T to T coincides in distribution with X T . The above models allow a Gibbs representation but are constructed via equilibrium evolution of a one-dimensional particle system with random piece-wise constant Markov velocities, with birth, death and branching. Particles move independently and interact only at collisions. Similarity between polygonal and Pickard models was noted in [28]. Several papers [15,16,29] used polygonal graphs in landscape modeling and random tesselation problems.
The present paper discusses a far-reaching extension of polygonal graphs and Pickard model, satisfying a consistency property similar to (1.1) but not directly related to lattices or rectangles. The original idea of the construction (communicated to the author by Taivo Arak in 1992) referred to a particle evolution on the three-dimensional lattice Z 3 . In this paper, the above Arak model is extended to any dimension and discussed in sec.5 in detail. It is a special case of a Markov edge process Y E = { Y e ; e E } with probability distribution P E introduced in sec.2 and defined on edges e E of a directed acyclic graph (DAG) G = ( V , E ) as the product over v V of the conditional clique probabilities π v ( y E out ( v ) | y E in ( v ) ) , somewhat similarly to Bayesian network, albeit the conditional probabilities refer to collection y E out ( v ) = { y e ; e E out } of `out-going’ clique variables and not to a single `child’ x v as in a Bayesian network. Markov random fields indexed by edges discussed in the literature [13,17,18] are usually defined on undirected graphs through Gibbs representation and experience a similar difficulty of computing their marginal distributions as Gibbsian site models. On the other hand, `directed’ Markov edge processes might be useful in causal analysis where edges of DAG G = ( V , E ) represent `decisions’ and Y e can be interpreted as (random) `cost’ of `decision’ e E .
The main result of this work is Theorem 2, providing sufficient conditions for consistency
P E | E = P E , G G
of a family P E , G = ( E , V ) S of Markov edge processes defined on DAGs G with the partial order G = ( V , E ) G = ( V , E ) given by V V , E E . The sufficient conditions for (1.2) are expressed in terms of clique distributions π v , v V and essentially reduce to marginal independence of incoming and outgoing clique variables (i.e., collections y E in ( v ) and y E out ( v ) ), see (3.16), (3.17).
The class S of DAGs G satisfying (1.2) is of special interest. In Theorem 2, S = S MDA ( G ) is the class of all sub-DAGs obtained from a given DAG G = ( V , E ) by a mesh dismantling algorithm (MDA). The above algorithm starts by erasing any edge that leads to a sink v + V or comes from a source v V , and proceeds in the same way and in arbitrary order, see Definition 4. The class S MDA ( G ) contains several interesting classes of `one-dimensional’ and `multi-dimensional’ sub-DAGs G on which the restriction in (1.2) is identified as a Markov chain or a sequence of independent r.v.s (Corollary 2).
Section 4 discusses consistent Markov edge process on `rectangular’ sub-graphs of Z ν endowed with nearest-neighbor edges directed in the lexicographic order. We discuss properties of such edge processes and present several examples of generic clique distributions with particular attention to the binary case (referred to as Arak model). In dimension ν = 2 we provide a detailed description of Arak model in terms of a particle system moving along edges of Z 2 . Finally, section 5 establishes a relation between Pickard and Arak models, the latter model identified in Theorem 3 as a contour model of the former model under an additional symmetry assumption.
We expect that the present work can be extended in several directions. Similarly to [5,10] and some other related studies, the discussion is limited to discrete probability distributions, albeit continuous distributions (e.g., Gaussian edge processes) are of interest. A major challenge is the application of Markov edge processes to causal analysis and Bayesian inference, bearing in mind the extensive research in the case of Bayesian networks [20,21]. Consistent Markov edge processes on regular lattices may be of interest to pattern recognition and information theory. Some open problems are mentioned in Remark 7.

2. Bayesian Networks and Pickard Random Fields

Let G = ( V , E ) be a given DAG. A directed edge from v 1 to v 2 , v 1 v 2 is denoted e = ( v 1 v 2 ) E . With any vertex v E we can associate three sets of edges
E in ( v ) : = { e = ( v v ) E , v V } , E out ( v ) : = { e = ( v v ) E , v V } , E ( v ) : = E in ( v ) E out ( v )
representing the incoming edges, outgoing edges, and all edges incident to v. A vertex v V is called a source (respectively, a sink) if E in ( v ) = (respectively, E out ( v ) = ). The partial order (reachability relation) v 1 v 2 on G means that there is a directed path on this DAG from v 1 to v 2 . We write v 1 v 2 if v 1 v 2 , v 1 v 2 . The above partial order carries over to edges of G, namely, e 1 = ( v 1 v 1 ) e 2 = ( v 2 v 2 ) v 1 v 2 . Following [21], the set of vertices pa ( v ) : = { v v : ( v v ) E } V is called the parents of v, whereas F ( v ) : = { v } pa ( v ) V is called the family of v.
Definition 1.
Let G = ( V , E ) be a given DAG. A family distribution at v V is any discrete probability distribution ϕ v = ϕ v ( x F ( v ) ) , x F ( v ) = ( x v ; v F ( v ) ) , that is, a sequence of positive numbers ϕ v ( x F ( v ) ) > 0 summing up to 1:
x F ( v ) ϕ v ( x F ( v ) ) = 1 .
The set of all family distributions ϕ v at v V is denoted by Φ v ( G ) .
For v V , ϕ v Φ v ( G ) , the conditional probability of x v given parent configuration x pa ( v ) = ( x v ; v pa ( v ) ) writes as
ϕ v ( x v | x pa ( v ) ) = ϕ v ( x F ( v ) ) ϕ v ( x pa ( v ) ) if ϕ v ( x F ( v ) ) > 0 ,
with ϕ v ( x v | x pa ( v ) ) : = 0 if ϕ v ( x F ( v ) ) = 0 and ϕ v ( x v | x pa ( v ) ) : = ϕ v ( x v ) if pa ( v ) = .
Definition 2.
A Bayesian network on a DAG G = ( V , E ) corresponding to a given set { ϕ v ; v V } of family distributions ϕ v Φ v ( G ) is a random process X V = { X v ; v V } indexed by vertices (sites) of G and such that for any configuration x V = ( x v ; v V )
P V ( x V ) = P ( X v = x v ; v V ) = v V ϕ v ( x v | x pa ( v ) ) .
We remark that the terminology `family distribution’ is not commonplace, whereas the definition of Bayesian network varies in the literature [3,20,21], being equivalent to (2.3) under the positivity condition P V ( x V ) > 0 . Actually, the conditional (family) distributions ϕ v ( x v | x pa ( v ) ) in (2.3) are most intuitive and used instead of ϕ v ( x F ( v ) ) . A Bayesian network satisfies several Markov conditions [21], particularly, the ordered Markov condition:
P V ( x v | x v , v v ) = P V ( x v | x pa ( v ) ) = ϕ v ( x v | x pa ( v ) ) , v V .
In the rest of this subsection, G = ( V , E ) is a directed subgraph of the infinite DAG ( Z 2 , E ( Z 2 ) ) with lexicographic partial order v = ( t , s ) v = ( t , s ) ( t , s , t , s Z ) :
v v s s , t t
and edge set
E ( Z 2 ) : = { ( v v ) ; v , v Z 2 , v = v ( 0 , 1 ) , v ( 1 , 0 ) , v ( 1 , 1 ) } .
The class of `rectangular’ subDAGs G = ( V , E ) with
V = { v Z 2 : v v v + } , E = { e = ( v v ) E ( Z 2 ) , v , v V }
( v ± Z 2 , v v + ) will be denoted G rec ( Z 2 ) . Given two `rectangular’ DAGs G i = ( V i , E i ) G rec ( Z 2 ) , i = 1 , 2 we write G 1 G 2 if V 1 V 2 . For generic family in Figure 1, right, the family distribution is is indexed by the elements of the table
C D A B
and written as the joint distribution ϕ ( A , B , C , D ) of four r.v.s A , B , C , D with x v ( 1 , 1 ) = A , x v ( 1 , 0 ) = B , x v ( 1 , 0 ) = C , x v = D . Marginal distributions of these r.v.s are designated by the corresponding subsets of the table (2.6), e.g., ϕ ( A ) is the distribution of r.v. A. The conditional distributions of these r.v.s are denoted as in (2.2), viz., ϕ ( B | A ) = ϕ ( A , B ) / ϕ ( A ) is the conditional distribution of B given a value of A.
Definition 3.
We call Pickard model a Bayesian network P V on DAG G = ( V , E ) G rec ( Z 2 ) with family distribution ϕ v ϕ independent of v V and satisfying the following conditions:
ϕ ( A ) = ϕ ( B ) = ϕ ( C ) = ϕ ( D ) , ϕ ( A , B ) = ϕ ( C , D ) , ϕ ( A , C ) = ϕ ( B , D ) ,
(stationarity), and
ϕ ( B , C | A ) = ϕ ( B | A ) ϕ ( C | A ) , ϕ ( B , C | D ) = ϕ ( B | D ) ϕ ( C | D )
(conditional independence).
In Pickard model, the family F ( v ) of v V consists of 4 points, see Figure 1, except when v belongs to the left lower boundary V : = { v = ( t , s ) V : t = t or s = s } , v = ( t , s ) of rectangle V in (2.5) and | F ( v ) | 2 , for v = v it consists of the single point: F ( v ) = { v } . Accordingly, the family distributions ϕ v ( x F ( v ) ) = ϕ ( x F ( v ) ) , v V are marginals of the generic distribution ϕ in Definition 3. We note that Pickard model (also called Pickard random field) is defined in [5,10,14,23] in somewhat different ways, which are equivalent to Definition 3.
Theorem 1.
[23] The family { P V ; V V rec ( Z 2 ) } of Pickard models on DAGs G = ( V , E ) G rec ( Z 2 ) and corresponding to the same generic family distribution ϕ in (2.7)-(2.8) is consistent; in other words, for any G = ( V , E ) G = ( V , E ) , G , G G rec ( Z 2 ) we have that
P V | V ( x V ) = P V ( x V )
Remark 1.
The consistency property in (2.9) implies that for G = ( V , E ) in (2.5), v ± = ( t ± , s ± ) and any s s s + the restriction of Pickard model P V on the horizontal interval H s : = { ( t , s ) , , ( t + , s ) } V is a Markov chain
P V ( x H s ) = ϕ ( x ( t , s ) ) ϕ ( x ( t + 1 , s ) | x ( t , s ) ) ϕ ( x ( t + , s ) | x ( t + 1 , s ) )
with initial distribution ϕ ( A ) and transition probabilities ϕ ( B | A ) . Indeed, H s = { v Z 2 : ( t , s ) v ( t + , s ) } is the set of vertices of G s G and the corresponding `one-dimensional’ Pickard model on G s is defined by (2.10). In a similar way, the restriction of Pickard model on a vertical interval V s : = { ( t , s ) , , ( t , s + ) } V is a Markov chain
P V ( x V s ) = ϕ ( x ( t , s ) ) ϕ ( x ( t , s + 1 ) | x ( t , s ) ) ϕ ( x ( t , s + ) | x ( t , s + 1 ) )
with initial distribution ϕ ( A ) and transition probabilities ϕ ( C | A ) .
[23, Thm.3] extend Markov chain identifications in (2.10) and (2.11) to any non-increasing undirected path in V. This fact and the discussion in sec.3 suggest that Pickard model and the consistency property in (2.9) can be extended to non-rectangular index sets V Z 2 .
Remark 2.
In the binary case X v { 0 , 1 } or A , B , C , D { 0 , 1 } the distribution ϕ is completely determined by probabilities
ϕ A : = P ( A = 1 ) , , ϕ A B C D : = P ( A = B = C = D = 1 ) .
In terms of (2.12), conditions (2.7)-(2.8) translate to
ϕ A = ϕ B = ϕ C = ϕ D , ϕ A B = ϕ C D , ϕ A C = ϕ B D , ϕ A B C = ϕ D B C , ϕ A B C ϕ A = ϕ A B ϕ A C , ( ϕ A ϕ A B ) ( ϕ A ϕ A C ) = ( ϕ B C ϕ A B C ) ( 1 ϕ A ) , ϕ A B C = ϕ B C D ,
see [23, p666]. The special cases ϕ A = 0 , 1 , ϕ A = ϕ A B , ϕ A = ϕ A C correspond to degenerated Pickard model, the latter implying A = B and A = C , respectively, and leading to a random field which takes constant values on each horizontal (respectively, vertical) line in Z 2 . Hence, a binary Pickard model can be parametrized by 7 parameters
ϕ A = : a , ϕ A B = : b , ϕ A C = : c , ϕ A D , ϕ A C D , ϕ A B D , ϕ A B C D ,
as ϕ A B C , ϕ B C in the non-degenerated case can be found from a , b , c :
ϕ A B C = b c a , ϕ B C = b c a + ( a b ) ( a c ) 1 a .
The parameters in (2.14) satisfy natural constraints resulting from their definition as probabilities, particularly, 1 a b c .

3. Markov Edge Process: General Properties and Consistency

It is convenient to allow G to have isolated vertices, which of course can be removed w.l.g., our interest primarily being focused on edges. The directed line graph  G E = ( V ( G E ) , E ( G E ) ) of DAG G = ( V , E ) is defined as the graph whose set of vertices is V ( G E ) : = E is the same as that of undirected line graph and the set of (directed) edges is
E ( G E ) : = { ( e 1 e 2 ) : e 1 E in ( v ) , e 2 E out ( v ) ( v V ) }
Note G E is acyclic, hence a DAG.
Given a DAG G = ( V , E ) we denote S ( G ) the class of all non-empty sub-DAGs (subgraphs) G = ( V , E ) with V V , E E .
Definition 4.
A transformation G + G S ( G ) is said a top mesh dismantling algorithm (top MDA) if it erases an edge e E in ( v + ) leading to a sink v + V ; in other words, if G = ( V , E ) with
V = V , E = E { e } , e E in ( v + ) , E out ( v + ) = .
Similarly, a transformation G G S ( G ) is said a bottom mesh dismantling algorithm (bottom MDA) if it erases an edge e E out ( v ) starting from a source v V , in other words, if G = ( V , E ) with
V = V , E = E { e } , e E out ( v ) , E in ( v ) = .
We denote S MDA + ( G ) , S MDA ( G ) , S MDA ( G ) the classes of DAGs containing G and the subDAGs G S ( G ) which can be obtained from G by succesively applying top MDA, bottom MDA, or both types of MDA in an arbitrary order:
S MDA + ( G ) : = { G } { G = G k , k 1 } , G + G 1 + G 2 + + G k , S MDA ( G ) : = { G } { G = G k , k 1 } , G G 1 G 2 G k , S MDA ( G ) : = { G } { G = G k , k 1 } , G ± G 1 ± G 2 ± ± G k
Note that the above transformations may lead to a subDAG G containing isolated vertices which can be removed from G w.l.g.
Definition 5.
A subDAG G = ( V , E ) S ( G ) is said:
(i) interval DAG if
V = { v V : v 1 v v 2 } , E = { e = ( v v ) E : v , v V } .
for some v 1 v 2 , v i V , i = 1 , 2 ;
(ii) chain DAG if V = { v 1 = v 1 v 2 v k = v 2 , E = { ( v 1 v 2 ) , , ( v k 1 v k ) } ( k 1 ) , for some v 1 v 2 , v i V , i = 1 , 2 ;
(iii) source-to-sink DAG if any edge e = ( v v ) E connects a source v of G to a sink v of G .
The corresponding classes of subDAGs G = ( V , E ) in Definition 5 (i)-(iii) will be denoted by S int ( G ) , S chain ( G ) , and S sts ( G ) .
Proposition 1.
For any DAG G = ( V , E ) we have that
S int ( G ) S MDA ( G ) . S chain ( G ) S MDA ( G ) . S sts ( G ) S MDA ( G ) .
Proof. The proof of each inclusion in (3.3) proceeds by induction on the number | E | of edges of G. Clearly, the proposition holds for | E | = 1 . Assume it holds for | E | = n 1 , we will show that it holds for | E | = n . The induction step n 1 n for the three relation in (3.3) is proved as follows.
(i) Fix v 1 v 2 and G = ( V , E ) S ( G ) be as in (3.2). Let { v + , 1 , , v + , m } be the set of sinks of G. If v 2 is a sink and m = 1 (or v 2 = v + , 1 ) then G = G belongs to S MDA + ( G ) by definition of the last class. If v 2 is a sink and m 2 , we can dismantle an edge e E in ( v + . 2 ) and the remaining graph G = ( V , E ) in (3.1) has n 1 edges and contains G = ( V , E ) , so that the inductive assumption applies, proving G S MDA + ( G ) S MDA + ( G ) S MDA ( G ) .
Next, let v 2 { v + , 1 , , v + , m } . Then there is a path in G from v 2 to at least one of these sinks. If v 2 v + = v + , 1 , we apply top MDA to v + and see that G in (3.1) has the number of edges | E | = n 1 and contains the interval graph G in (3.2), so that the inductive assumption applies to G and consequently to G as well, as before, proving the induction step in case (i).
(ii) Let G be a chain between v 1 v 2 and G G , since G = G belongs to S MDA + ( G ) by definition. Let { v + , 1 , , v + , m } be the set of sinks of G. If v 2 is a sink and m = 1 then G contains an edge e = ( v v + , 1 ) which does not belong to G . Then, by removing e from G by top MDA we see that G = ( V , E ) in (3.1) contains the chain G , viz., G S ( G ) , and therefore G S MDA + ( G ) by the inductive assumption. If v 2 is a sink and m 2 , we remove from G any edge leading to v + , 2 and arrive at the same conclusion. If v 2 { v + , 1 , , v + , m } is not a sink, we remove any edge leading to a sink and apply the inductive assumption to the remaining graph G having n 1 edges. This proves the induction step in case (ii).
(iii) Let G G , G S sts ( G ) , V = V V + , where V = { v , 1 , , v , k } , V + = { v + , 1 , , v + , } are the sets of sources and sinks of G , respectively. Let V , V + be the sets of sources and sinks of G = ( V , E ) . If V = V V + , i.e., G is a source-to-sink DAG, we can remove from it a edge e E , e E and the remaining graph G in (3.1) contains G and satisfies the inductive assumption. If G is not a source-to-sink DAG, it contains a sink v + V + , or a source v V . In the first case, there is e = ( v , v + ) E , e E which can be removed from G and the remaining graph G contains G and satisfies the inductive assumption. The second case follows from the first one by DAG reversion. This proves the induction step n 1 n in case (iii), hence the proposition. □
An edge process on DAG G = ( V , E ) is a family Y E = { Y e ; e E } of discrete r.v.s indexed by edges of G. It is identified with a (discrete) probability distribution P E ( y E ) , y E = ( y e ; e E ) .
Definition 6.
Let G = ( V , E ) be a given DAG. A clique distribution at v V is any discrete probability distribution π v = π v ( y E ( v ) ) , y E ( v ) = ( y e ; e E ( v ) ) , that is, a family of positive numbers π v ( y E ( v ) ) > 0 summing up to 1:
y E ( v ) π v ( y E ( v ) ) = 1 .
The set of all clique distributions π v at v V is denoted by Π v ( G ) .
Given π v Π v ( G ) , the conditional probabilities of out-configuration y E out ( v ) = ( y e ; e E out ( v ) ) given in-configuration y E in ( v ) = ( y e ; e E in ( v ) ) write as
π v ( y E out ( v ) | y E in ( v ) ) = π v ( y E ( v ) ) π v ( y E in ( v )
with y E ( v ) = ( y e ; e E out ( v ) E out ( v ) ) and π v ( y E out ( v ) | y E in ( v ) ) : = 1 for E out ( v ) = , : = π v ( E out ( v ) ) for E in ( v ) = .
Definition 7.
A Markov edge process on a DAG G = ( V , E ) corresponding to a given family { π v ; v V } of clique distributions π v Π v ( G ) is a random process Y E = { Y e ; e E } indexed by edges of G and such that for any configuration y E = ( y e ; e E )
P E ( y E ) = P ( Y e = y e ; e E ) : = v V π v ( y E out ( v ) | y E in ( v ) ) .
An edge process Y E = { Y e ; e E } on a DAG G = ( V , E ) can be viewed as a site process on the line graph G E = ( V ( G E ) , E ( G E ) ) of G = ( V , E ) , with V ( G E ) : = E ,   E ( G E ) : = { ( e 1 e 2 ) : e 1 E in ( v ) , e 2 E out ( v ) ( v V ) } .
Corollary 1.
A Markov edge process P E in (3.4) is a Bayesian network on the line DAG G E = ( V ( G E ) , E ( G E ) ) if
π v ( y E out ( v ) | y E in ( v ) ) = e E out ( v ) π v ( y e | y E in ( v ) ) , v V .
Condition (3.5) can be rephrased as the statement that outgoing’ variables Y e , e E out ( v ) are conditionally independent given `ingoing’ variables Y e , e E in ( v ) , for each node v V . Analogously, P E in (3.6) is a Bayesian network on the reversed line graph G E under a symmetric condition that `ingoing’ variables Y e , e E in ( v ) are conditionally independent given `outgoing’ variables Y e , e E out ( v ) , for each node v V .
Definition 8.
Let G = ( V , E ) be a DAG and S ( G ) S ( G ) be a family of subDAGs of G. A family of edge processes { P E ; G = ( V , E ) S ( G ) } is said consistent if
P E ( y E ) = P E ( y E ) , G = ( V , E ) S ( G ) .
Given edge process P E in (3.4), we define its restriction on a subDAG G S ( G ) as
P E | E ( y E ) : = P E ( y E ) , y E = ( y e ; e E ) .
In general, P E | E is not a Markov edge process on G , as shown in the following example.
Example 1.
Let G = ( V , E ) , V = { 1 , 2 , 3 } , E = { e 1 = ( 1 2 ) , e 2 = ( 1 3 ) , e 3 = ( 2 3 ) } , E = { e 2 , e 3 } . Then P E ( y e 1 , y e 2 , y e 3 ) = π 1 ( y e 1 , y e 2 ) π 2 ( y e 3 | y e 1 ) and
P E | E ( y e 2 , y e 3 ) = y e 1 π 1 ( y e 1 , y e 2 ) π 2 ( y e 3 | y e 1 ) .
The subDAG G = ( V , E ) is composed of two edges going from different sources 1 and 2 to the same sink 3. By definition, a Markov edge process P E on G corresponds to independent y e 2 , y e 3 , , viz.,
P E ( y e 2 , y e 3 ) = π 1 ( y e 2 ) π 2 ( y e 3 )
It is easy to see that the two probabilities in (3.8) and (3.9) are generally different (however, they are equal if π 1 ( y e 1 , y e 2 ) = π 1 ( y e 1 ) π 1 ( y e 2 ) is a product distribution and π 1 ( y e 1 ) = π 2 ( y e 1 ) , in which case π 1 ( y e 2 ) = π 1 ( y e 2 ) and π 2 ( y e 3 ) = π 2 ( y e 3 ) ) .
In Example 1, the restriction P E | E to E = { e 1 , e 2 } is a Markov edge process with P E | E ( y e 1 , y e 2 ) = π 1 ( y e 1 , y e 2 ) . The following proposition shows that a similar fact holds in a general case for subgraphs G obtained from G by applying top MDA.
Proposition 2.
Let P E be a Markov edge process on DAG G = ( V , E ) . Then for any G = ( V , E ) S MDA + ( G ) , the restriction P E | E is a Markov edge process on G with clique distribution π v Π v ( G ) , v V given by
π v ( y E ( v ) ) : = π v ( y E ( v ) ) , v V
which is the restriction of clique distributions π v to (sub-clique) E ( v ) = E ( v ) E .
Proof. It suffices to prove the proposition for a one-step top MDA, or G in (3.1). Let E in ( v + ) : = { e 1 = ( v 1 v + ) , , e k = ( v k v + ) } , k 2 , e = e k . From the definitions in (3.4) and (3.10),
P E | E ( y E ) = v V { v k } π v ( y E out ( v ) | y E in ( v ) ) y e k π v k ( y E out ( v k ) | y E in ( v k ) ) = v V { v k } π v ( y E out ( v ) | y E in ( v ) ) π v k ( y E out ( v k ) | y E in ( v k ) ) = v V π v ( y E out ( v ) | y E in ( v ) ) .
Example 2. Markov chain. Let G = ( V , E ) , V = { 0 , 1 , , n } , E = { e 1 = ( 0 1 ) , e 2 = ( 1 2 ) , , e n = ( n 1 n ) } be a chain from 0 to n. The classes S MDA + ( G ) , S MDA ( G ) , and S MDA ( G ) consist respectively of all chains from 0 to j { 1 , , n } , from i { 0 , , n 1 } to n, and from i to j ( 0 i < j n ) . A Markov edge process on the above graph is a Markov chain Y E = { Y e 1 , , Y e n } with probability distribution
P E ( y e 1 , y e 1 , , y e n ) = π 0 ( y e 1 ) π 1 ( y e 2 | y e 1 ) π n 1 ( y e n | y e n 1 )
where π 0 is a (discrete) univariate and π k , 1 k < n 1 are bivariate probability distributions; π k ( y e k + 1 | y e k ) = π k ( y e k , y e k + 1 ) / y π k ( y e k , y ) are conditional or transitional probabilities. It is clear that the restriction P E | E on E = { e 1 , , e j } is a Markov chain with π 0 = π 0 , π k = π k , 1 k < j ; in other words, it satisfies Proposition 2 and (3.10). However, the restriction P E | E to E = { e i , , e n } or G = ( V , E ) S MDA ( G ) is a Markov chain with initial distribution
π i ( y e i ) = y ˜ e 1 , , y ˜ e i 1 π 0 ( y ˜ e 1 ) 1 k < i π k ( y ˜ e k , y ˜ e k + 1 ) π k ( y ˜ e k ) , y ˜ e i = y e i
which is generally different from π i ( y e i ) . We conclude that Proposition 2 ((3.10) in particular) fail for subDAGs G of G obtained by bottom MDA. On the other hand, π i = π i , 1 i < n hold for the above G , G provided the π k ’s satisfy the additional compatibility condition:
π k ( y e k ) = π k 1 ( y e k ) , e k = ( k 1 k ) , k = 1 , , n 1
Proposition 3.
Let P E be a Markov edge process on DAG G = ( V , E ) in (3.4). Then for any edge e = ( v 1 v 2 ) E
P E ( y e | y e , e e , e E ) = P E ( y e | y e , e E out ( v 2 ) E in ( v 1 ) ) = π v 1 ( y E out ( v 1 ) | y E in ( v 1 ) ) π v 2 ( y E out ( v 2 ) | y E in ( v 2 ) ) y ˜ e π v 1 ( y ˜ E out ( v 1 ) | y ˜ E in ( v 1 ) ) π v 2 ( y ˜ E out ( v 2 ) | y ˜ E in ( v 2 ) )
and
P E ( y e | y e , e e ) = P E ( y e | y e , e E in ( v 1 ) ) = π v 1 ( y e | y E in ( v 1 ) ) ,
with y ˜ E = ( y ˜ e ; e E ) satisfying y ˜ e = y e , e e .
Proof. Relation (3.11) follows from
P E ( y e | y e , e e , e E ) = P E ( y E ) y ˜ e P E ( y ˜ E )
and the definition in (3.4), since the products v v 1 , v 2 cancel in the numerator and the denominator of (3.13).
Consider (3.12). We use Propositions 1 and 2 according to which the interval DAGs G i = ( V i , E i ) , V i : = { v V : v v i } , i = 1 , 2 belong to S MDA + ( G ) . The `intermediate’ DAG G ˜ = ( V ˜ , E ˜ ) constructed from G 1 by adding the single edge e = ( v 1 v 2 ) , viz., V ˜ : = V 1 { v 2 } , E ˜ : = E 1 { e } also belongs to S MDA + ( G ) since it can be attained from G 2 by dismantling all edges with the exception of E ˜ . Note { e } { e E : e } = E ˜ . Therefore, by Proposition 2, P E | E ˜ ( y E ˜ ) = P E ˜ ( y E ˜ ) where P E ˜ is a Markov edge process on G ˜ with clique distributions
π ˜ v ( y E ˜ ( v ) ) : = π v ( y E ˜ ( v ) ) , v V ˜ .
The expression in (3.12) follows from (3.14) and (3.11) with E replaced by E ˜ , by noting that v 2 is a sink in G ˜ , hence π ˜ v 2 ( y E ˜ out ( v 2 ) | y E ˜ in ( v 2 ) ) = 1 whereas π ˜ v 1 ( y E ˜ out ( v 1 ) | y E ˜ in ( v 1 ) ) = π v 2 ( y e | y E in ( v 1 ) ) . □
Remark 3.
Note that e E out ( v 2 ) E in ( v 1 ) in the conditional probability on the r.h.s. of (3.11) are nearest neighbors of e in the line graph (DAG) G E = ( V ( G E ) , E ( G E ) )
Theorem 2.
Let G = ( V , E ) be a given DAG and P E a Markov edge process in (3.4) with clique distributions π v Π v , v V satisfying the compatibility condition
π v 1 ( y e ) = π v 2 ( y e ) , e = ( v 1 v 2 ) E
and two marginal independence conditions:
π v ( y E out ( v ) ) = e E out ( v ) π v ( y e )
and
π v ( y E in ( v ) ) = e E in ( v ) π v ( y e ) .
Then the family of Markov edge process P E , G = ( V , E ) S MDA ( G ) with clique distributions π v Π v , v V given in (3.10) is consistent, viz.,
P E | E = P E , G = ( V , E ) S MDA ( G ) .
Remark 4. (i) Conditions (3.16) and (3.17) do not imply mutual independence of the in- and out- clique variables y E in ( v ) and y E out ( v ) under π v .
(ii) Conditions (3.16) and (3.17) automatically are satisfied if card ( E in ( v ) ) , card ( E out ( v ) ) { 0 , 1 } (Markov chain).
(iii) Conditions (3.16) and (3.17) are symmetric w.r.t. DAG reversion (all directions reversed)
(iv) In the binary case ( Y e = 0 , 1 ), the value Y e = 1 can be interpreted as the presence of `particle’ and Y e = 0 as its absence on edge e E . `Particles’ `move’ on DAG ( V , E ) in the direction of arrows. `Particles’ `collide’, `annihilate’, or `branch’ at nodes v V , with probabilities determined by clique distribution π v . See sec.4 for a detailed description of particle evolution for Arak model on Z 2 .
Proof of Theorem 2. it suffices to prove (3.18) for 1-step MDA:
E E = E { e } and E + E + = E { e + }
which remove a single edge e = ( v v ) coming from a source v V , and a single edge e + = ( v v + ) going to a sink v + V , respectively. Moreover, it suffices to consider E only. The proof for E + follows from Proposition 2 and does not require (3.15)-(3.17). (It also follows from E by DAG reversion.) Then, by marginal independence of π v , v = v , v and π v ( y e ) = π v ( y e ) ,
P E ( y E ) = π v ( y e ) e E out ( v ) , e e π v ( y e ) π v ( y E in ( v ) E out ( v ) ) π v ( y e ) π v ( y E in ( v ) { e } ) v V , v v , v π v ( y E out ( v ) | y E in ( v ) ) = e E out ( v ) , e e π v ( y e ) π v ( y E in ( v ) E out ( v ) ) π v ( y E in ( v ) { e } ) v V , v v , v π v ( y E out ( v ) | y E in ( v ) ) .
From the definition of π v in (3.10) we have that e E out ( v ) , e e π v ( y e ) = π v ( y E out ( v ) ) , π v ( y E in ( v ) { e } ) = π v ( y E in ( v ) ) , v V , v v , v π v ( y E out ( v ) | y E in ( v ) ) and therefore
P E ( y E ) = π v ( y E out ( v ) ) π v ( y E in ( v ) E out ( v ) ) π v ( y E in ( v ) ) v V , v v , v π v ( y E out ( v ) | y E in ( v ) ) .
leading to
P E ( y E ) = π v ( y E out ( v ) ) π v ( y E in ( v ) E out ( v ) ) π v ( y E in ( v ) ) v V , v v , v π v ( y E out ( v ) | y E in ( v ) ) = v V π v ( y E out ( v ) | y E in ( v ) )
and proving (3.18) for E = E defined above, or the statement of the theorem for 1-step MDA E E . □
Corollary 2.
Let P E be a Markov edge process on DAG G = ( V , E ) satisfying the conditions of Theorem 2. Then:
(i) The restriction P E | E of P E on a chain DAG G = ( V , E ) S chain ( G ) , V = { v 0 , v 1 , , v k } , E = { e 1 , , e k } , e i = ( v i 1 v i ) is a Markov chain, viz.,
P E | E ( y e 1 , , y e k ) = π v 0 ( y e 1 ) π v 1 ( y e 2 | y e 1 ) π v k 1 ( y e k | y e k 1 )
where
π v 0 ( y e 1 ) = π v 0 ( y e 1 ) , π v i ( y e i 1 , y e i ) = π v i ( y e i 1 , y e i ) , i = 1 , , k .
(ii) The restriction P E | E of P E on a source-to-sink DAG G = ( V , E ) S sts ( G ) is a sequence of independent r.v.s, viz.,
P E | E ( y e , e E ) = v V e E out ( v ) π v ( y e ) ,
where V V is the set of sources of G , E out ( v ) : = E out ( v ) E is the set of edges coming from a source v V and ending into a sink of G , π v ( y e ) : = π v ( y e ) , e E out ( v ) .
Proof. (i) By Proposition 1, G belongs to S MDA ( G ) so that Theorem 2 applies with π v in (3.10) given by (3.14), resulting in (3.19).
(ii) By Proposition 1, G belongs to S MDA ( G ) so that Theorem 2 applies with π v in (3.10) given by
π v ( y E out ( v ) ) = π v ( y E out ( v ) ) = e E out ( v ) π v ( y e ) = e E out ( v ) π v ( y e ) , v V ,
(the second equality holds by (3.16)), resulting in (3.20). □
Remark 5.
A natural generalization of chain and source-to-sink DAGs is a source-chain-sink DAG G = ( V , E ) S ( G ) with the property that any sink v + V is reachable from a source v V by a single chain (directed path). We conjecture that for a source-chain-sink DAG G , the restriction P E | E of P E in Theorem 2 is a product of independent Markov chains on disjoint directed paths of G , in agreement with the representations (3.19) and (3.20) of Corollary 2.
Gibbsian representation of Markov edge process. Gibbsian representation is fundamental in the study of Markov random fields [4,17]. Gibbsian representation of Pickard random fields was discussed in [5,10,22,30]. The following Corollary 3 provides Gibbsian representation of consistent Markov edge process in Theorem 2. Accordingly, the set of vertices of DAG G = ( V , E ) is written as V = V 0 V , where the boundary V : = { v V : E in ( v ) = or E out ( v ) = } consists of sinks and sources of G, and the interior V 0 : = V V of the remaining sites.
Corollary 3.
Let P E be a Markov edge process on DAG G = ( V , E ) satisfying the conditions of Theorem 2 and the positivity condition P E ( y E ) > 0 . Then
P E ( y E ) = exp v V 0 Q v ( y E ( v ) ) + v V Q v V ( y E ( v ) ) ,
where the inner and boundary potentials Q v , Q v V are given by
Q v ( y E ( v ) ) : = log π v ( y E ( v ) ) e E ( v ) π v ( y e ) , Q v V ( y E ( v ) ) : = log e E ( v ) π v ( y e ) .
Formula (3.21) follows by writing (3.4) as P E ( y E ) = exp v V log π v ( y E ( v ) ) log e E in ( v ) π v ( y e ) and rearranging terms in the exponent using (3.15)-(3.17). Note (3.21) is invariant w.r.t. graph reversal (direction of all edges reversed). Formally, the inner potentials Q v in (3.22) do not depend on the orientation of G, raising the question of the necessity of conditions (3.16)-(3.17) in Theorem 2. An interesting perspective seems the study of Markov evolution { Y E ( t ) ; t 0 } of Markov edge process on DAG G = ( V , E ) with invariant Gibbs distribution in (3.21).

4. Consistent Markov Edge Process on Z ν and Arak Model

Let ( Z ν , E ( Z ν ) ) be infinite DAG whose vertices are points of regular lattice Z ν and whose edges are pairs e = ( v v ) , | v v | = 1 , v v directed in the lexicographic order: v = ( t 1 , , t ν ) v = ( t 1 , , t ν ) if and only if t i t i , i = 1 , , ν . We write v v if v v , v v . A ν ˜ -dimensional hyperplane ( 1 ν ˜ < ν )
H ν ˜ = { ( t 1 , , t ν ˜ , s ν ˜ + 1 , , s ν ) Z ν : ( t 1 , , t ν ˜ ) Z ν ˜ }
can be identified with Z ν ˜ , for any fixed ( s ν ˜ + 1 , , s ν ) Z ν ν ˜ .
Let G rec ( Z ν ) denote the class of all finite `rectangular’ subgraphs of ( Z ν , E ( Z ν ) ) , viz., ( V , E ) G rec ( Z ν ) if V = { v Z ν : v v v + } for some v v = Z ν , v ± Z ν , and E = { e = ( v v ) E ( Z ν ) , v , v V } . We denote
G rec ( V ) = { G ν ˜ , 1 ν ˜ < ν }
the class of all subgraphs G ν ˜ = ( V ν ˜ , E ν ˜ ) of ( V , E ) G rec ( Z ν ) formed by intersection V ν ˜ = V H ν ˜ , E ν ˜ = { e = ( v v ) E , v , v V ν ˜ } with a ν ˜ -dimensional hyperplane H ν ˜ Z ν , H ν ˜ V can be identified with an element of G rec ( Z ν ˜ ) . Particularly, for ν = 3 and any ( s 2 , s 3 ) Z 2 ,
H 1 = { ( t 1 , s 2 , s 3 ) : t 1 Z } , H 1 V
the subgraph G 1 = ( V 1 , E 1 ) can be identified with a chain
E 1 = { v 1 v 1 + 1 v 1 + 1 v 1 + } , v 1 + > v 1
of length n = v 1 + > v 1 1 . Similarly, for any s 3 Z ,
H 2 = { ( t 1 , t 2 , s 3 ) : ( t 1 , t 2 ) Z 2 } , H 2 V
the subgraph G 2 = ( V 2 , E 2 ) of G = ( V , E ) G rec ( Z 3 ) is a planar `rectangular’ graph belonging to G rec ( Z 2 ) .
Proposition 4.
Let G = ( V , E ) G rec ( Z ν ) . Any subgraph G ν ˜ G rec ( V ) , 1 ν ˜ < ν can be obtained by applying MDA to G.
We designate Y 1 , Y 3 , , Y 2 ν 1 `outgoing’ and Y 2 , Y 4 , , Y 2 ν `incoming’ clique variables (see Figure 1). Let π be a generic clique distribution of random vector ( Y 1 , , Y 2 , Y 2 ν 1 , Y 2 ν ) on incident edges of v Z ν . We use the notation π A ( y A ) = π ( Y i = y i , i A ) , y A = ( y i , i A ) , A A ν : = { 1 , , 2 ν } , A . A ν = A ν out A ν in , A ν out : = { 1 , 3 , , 2 ν 1 } ,   A ν in : = { 2 , 4 , , 2 ν } .
The compatibility and consistency relations in Theorem 1 write as follows: For any
π i ( y ) = π i + 1 ( y ) , i = 1 , 3 , , 2 ν 1
and
π A ν out ( y A ν out ) = i A ν out π i ( y i ) , π A ν in ( y A ν in ) = i A ν in π i ( y i ) .
Let Π ν be the class of all discrete probability distributions π on R 2 ν satisfying (4.6)-(4.7).
Corollary 4.
Let Y E = { Y e ; e E } be a (consistent) Markov edge process on DAG G = ( V , E ) G rec ( Z ν ) with clique distribution π Π ν . The restriction
Y E ˜ : = { Y e ; e E ˜ } , E ˜ = E H ν ˜
of Y E on any ν ˜ -dimensional hyperplane H ν ˜ in (4.1) is a consistent Markov edge process on DAG G ν ˜ = ( V ˜ , E ˜ ) G rec ( Z ν ˜ ) with generic clique distribution π ˜ ( y A ν ˜ ) : = π ( y A ν ˜ ) , π ˜ Π ν ˜ . Particularly, the restriction
Y E 1 : = { Y e ; e E 1 } , E 1 = E H 1
of Y E to a 1-dimensional hyperplane H 1 in ... is a reversible Markov chain with probability distribution
P ( Y e 1 = y 1 , , Y e n = y n ) = π 1 ( y 1 ) π 1 ( y 2 | y 3 ) π 1 ( y n | y n 1 ) ,
where π 1 ( y 1 , y 2 ) : = π 12 ( y 1 , y 2 ) and π 1 ( y 1 | y 2 ) = π 1 ( y 1 , y 2 ) / π 1 ( y 2 ) , π 1 ( y 2 ) > 0 are the conditional probabilities.
Example 3.
Broken line process. Let ν = 2 and Y i , i = 1 , 2 , 3 , 4 take integer values Y i N = { 0 , 1 , } and have a joint distribution
π ( y 1 , y 2 , y 3 , y 4 ) = ( 1 λ 2 ) ( 1 λ ) 2 λ y 1 + y 2 + y 3 + y 4 λ | y 3 y 1 | 1 ( y 1 y 3 = y 4 y 2 ) ,
y i N , i = 1 , 2 , 3 , 4 , where 0 < λ < 1 is a parameter. Note that for y 1 y 3
y 2 , y 4 N π ( y 1 , y 2 , y 3 , y 4 ) = ( 1 λ 2 ) ( 1 λ ) 2 λ 2 y 3 y 2 , y 4 N λ y 2 + y 4 1 ( y 1 y 3 = y 4 y 2 ) = ( 1 λ ) 2 λ y 1 + y 3
the last equality is valid for y 1 y 3 , too. Therefore, π in (4.11) is a probability distribution on N 4 with (marginals) π 13 and π 24 written as the product of the geometric distribution with parameter λ > 0 . We see that (4.11) belongs to Π 2 and satisfies (4.6) -(4.7) of Corollary 4. The corresponding edge process called the discrete broken line process was studied in Rollo et al [24] and Sidoravicius et al [26] in connection with planar Bernoulli first passage percolation (edge variables are interpreted as random passage times between neighboring sites). The discrete broken line process can be described as an evolution of particles moving (horizontally or vertically) with constant velocity until collision with another particle and dying upon collision, with independent immigration of pairs of particles. An interesting and challenging open problem is the extension of the broken line process to higher dimensions, particularly, to ν = 3 or G G rec ( Z 3 ) .
Definition 9.
We call Arak model a binary Markov edge process Y E = { Y e ; e E } on a DAG G = ( V , E ) G rec ( Z ν ) with clique distribution π Π ν .
We also use the same terminology for the restriction of Arak model on a subDAG G S ( G ) , G G rec ( Z ν ) , and speak about Arak model on Z ν since Y E extends to infinite DAG ( Z ν , E ( Z ν ) ) ; the extension Y = { Y e ; e E ( Z ν ) } is a stationary binary random field whose restriction coincides with Y E . Note that when ( Y 1 , , Y 2 ν ) { 0 , 1 } 2 ν , the clique distribution π is determined by 2 3 ν 1 probabilities
p A : = P ( Y i = 1 , i A ) , A A ν , A .
Particularly, for ν = 3 the compatibility and consistency relations in (4.6)-(4.7) read as
p 1 = p 2 , p 3 = p 4 , p 5 = p 6 , p 135 = p 1 p 3 p 5 , p 13 = p 1 p 3 , p 15 = p 1 p 5 , p 15 = p 1 p 5 , p 246 = p 2 p 4 p 6 , p 24 = p 2 p 4 , p 26 = p 2 p 6 , p 46 = p 4 p 6 .
The resulting consistent Markov edge process on Z 3 depends on 63 3 8 = 52 parameters. In the lattice isotropic case it depends on 9 parameters p 1 , p 13 , p 14 , p 135 , p 123 , p 1234 , p 1345 , p 12345 , p 123456 satisfying 2 consistency equations: p 135 = p 1 3 , p 13 = p 1 2 , resulting in a 7-parameter isotropic binary edge process communicated to the author by Arak
Denote v i : = ( 0 , , 0 i 1 , 1 , 0 , , 0 ) R ν , 1 i ν unit vectors in Z ν , e i = ( v v + v i ) ,   τ n e i : = ( v + n v i v + ( n + 1 ) v i ) - edges parallel to vectors v i , 1 i ν . The following corollary is a consequence of Corollary 1 and the formula for transition probabilities of binary Markov chain in Feller [8][Ch.16.2].
Corollary 5.
Covariance function of Arak model. Let Y E be Arak model on DAG G = ( V , E ) G rec ( Z ν ) Then for any 1 i ν , n N , e i E , τ n e i E
E Y e i Y τ n e i = p i 2 + p i ( 1 p i ) p i , i + 3 p i 2 p i ( 1 p i ) n ,
where p i = P ( Y e i = 1 ) = P ( Y i = 1 ) are as in (4.12).
Remark 6.
The conditional probabilities in Corollary 2 (3.5) for Arak model can be expressed through p A in (4.12). Particularly, Arak model in dimension ν = 2 is a Bayesian network on the line graph of ( Z 2 , E ( Z 2 ) ) if
π 13 2 = π 1234 , π 123 2 = π 1 2 π 1234
In the two-dimensional case ν = 2 Arak model on DAG G = ( V , E ) G rec ( Z 2 ) is determined by clique distribution π of ( Y 1 , Y 2 , Y 3 , Y 4 ) in Figure 1 (b) with probabilities p 1 = P ( Y 1 = 1 ) , , p 1234 = P ( Y 1 = = Y 4 = 1 ) satisfying four conditions:
p 1 = p 2 , p 3 = p 4 , p 13 = p 1 p 3 , p 24 = p 2 p 4 .
The resulting edge process depends on 11 = (15 -4) parameters. In the lattice isotropic case, we have 5 parameters p 1 , p 12 , p 13 , p 123 , p 1234 satisfying a single condition p 13 = p 1 2 and leading to a 4-parameter consistent isotropic binary Markov edge process on Z 2 . Some special cases of parameters in (4.15) are discussed in Examples
Evolution of particle system. Below, we describe the binary edge process Y E in Remark 2 on DAG G = ( V , E ) G rec ( Z 2 ) , V = { ( t , s ) : 1 t m , 1 s n } in terms of particle system evolution. The description becomes somewhat simpler by embedding G into G ¯ = ( V ¯ , E ¯ ) , where
V ¯ = V 1 0 V 2 0 V 1 1 V 1 1 V ,
where 1 0 V = { ( 1 , 0 ) , , ( m , 0 ) } , 2 0 V = { ( 0 , 1 ) , , ( 0 , n ) } , 1 1 V = { ( 1 , n + 1 ) , , ( m , n + 1 ) } , 2 1 V = { ( m + 1 , 1 ) , , ( m + 1 , n ) } are boundary sites (sources or sinks belonging of the extended graph G ¯ ) as shown in Figure 2. The edge process Y E is obtained from Y E ¯ = { Y ¯ e : e E ¯ } as Y E = { Y ¯ e : e E } .
The presence of a particle on edge e = ( v v ) E ¯ (in the sense explained in sec.2) is identified with Y ¯ e = 1 . A particle moving on a horizontal (respectively, vertical) edge is termed horizontal (respectively, vertical). The evolution of particles is described by the following rules:
(p0)
Particles enter the boundary edges 1 0 E = { ( 1 , 0 ) ( 1 , 1 ) , , ( m , 0 ) ( m , 1 ) ) } and 2 0 E = { ( 0 , 1 ) ( 1 , 1 ) , , ( 0 , n ) ( 0 , n ) } independently of each other with respective probabilities p 3 and p 1 .
(p1)
Particles move along directed edges of G ¯ independently of each other until collision with another moving particle. A vertical particle entering an empty site v V , either
(i)
leaves v as a vertical particle with probability ( p 34 p 134 p 234 + p 1234 ) / p 3 ( 1 p 1 ) ; or
(ii)
changes the direction at v to horizontal with probability ( p 14 p 124 p 134 + p 1234 ) / p 3 ( 1 p 1 ) ; or
(iii)
branches at v into two particles moving into different directions, with probability ( p 134 p 1234 ) / p 3 ( 1 p 1 ) ;, or
(iv)
dies at v, with probability ( p 4 p 14 p 24 p 34 + p 124 + p 134 + p 234 p 1234 ) / p 3 ( 1 p 1 ) .
Similarly, a horizontal particle entering an empty site v V exhibits transformations as in (i)-(iv) with respective probabilities ( p 12 p 123 p 124 + p 1234 ) / p 1 ( 1 p 3 ) , ( p 23 p 123 p 234 + p 1234 ) / p 1 ( 1 p 3 ) ,   ( p 123 p 1234 ) / p 1 ( 1 p 3 ) , and ( p 2 p 12 p 23 p 24 + p 123 + p 124 + p 234 p 1234 ) / p 1 ( 1 p 3 ) .
(p2)
Two (horizontal and vertical) particles entering v V , either
(i)
both die with probability ( p 24 p 124 p 234 + p 1234 ) / p 1 p 3 ; or
(ii)
the horizontal one survives and the vertical one dies with probability ( p 124 p 1234 ) / p 1 p 3 . or
(iii)
the vertical one survives and the horizontal one dies with probability ( p 234 p 1234 ) / p 1 p 3 . or
(iv)
both particles survive with probability p 1234 / p 1 p 3 .
(p3)
At an empty site v V (no particle enters v), either
(i)
a single horizontal particle is born with probability ( p 1 p 12 p 13 p 14 + p 123 + p 124 + p 134 p 1234 ) / ( 1 p 1 ) ( 1 p 2 ) ; or
(ii)
a single vertical particle is born with probability ( p 3 p 13 p 23 p 34 + p 123 + p 134 + p 234 p 1234 ) / ( 1 p 1 ) ( 1 p 2 ) ; or
(iii)
no particles are born, with probability ( 1 p 1 p 2 p 3 p 4 + p 12 + p 13 + p 14 + p 23 + p 24 + p 34 p 123 p 124 p 134 p 234 + p 1234 ) / ( 1 p 1 ) ( 1 p 3 ) . or
(iii)
two (horizontal and vertical particles) are born with probability ( p 13 p 123 p 134 + p 1234 ) / ( 1 p 1 ) ( 1 p 3 ) .
The above list provides a complete description of particle transformations or transition probabilities of the edge process in terms of parameters p 1 , , p 1234 .
Example 4. ( Y 1 , Y 2 , Y 3 , Y 4 ) are independent r.v.s, P ( Y 1 = 1 ) = P ( Y 2 ) = p 1 , P ( Y 3 = 1 ) = P ( Y 4 = 1 ) = p 3 , p 12 = p 1 2 , p 34 = p 3 2 , , p 1234 = p 1 2 p 3 2 . Accordingly, Y e , e E take independent values on edges of a ’rectangular’ graph G = ( V , E ) , with generally different probabilities p 1 and p 3 for horizontal and vertical edges. In terms of the particle evolution, this means the `outgoing’ particles at each site v V being independent and independent of the `incoming’ ones.
Example 5.
p 12 = p 1 , implying P ( Y 1 Y 2 ) = 0 and Y e 2 = Y e 2 + τ n e 1 for any horizontal shift of e 2 . The corresponding edge process Y E in this case takes constant values on each horizontal line of the rectangle V. Similarly, case p 34 = p 3 leads to Y E taking constant values on each vertical line of V.
Example 6.
P ( Y i Y j . j i , j = 1 , 2 , 3 , 4 ) = 0 , i = 1 , 2 , 3 , 4 , meaning that none of the four binary edge variables can be different from the remaining three. This implies E Y 1 Y 2 Y 3 ( 1 Y 4 ) = = E ( 1 Y 1 ) Y 2 Y 3 Y 4 = E Y 1 Y 2 Y 3 Y 4 , or
p 123 = p 124 = p 134 = p 234 = p 1234 ,
and E ( 1 Y 1 ) ( 1 Y 2 ) ( 1 Y 3 ) Y 4 = E ( 1 Y 1 ) ( 1 Y 2 ) ( 1 Y 4 ) Y 3 = E ( 1 Y 1 ) ( 1 Y 3 ) ( 1 Y 4 ) Y 2 = E ( 1 Y 2 ) ( 1 Y 3 ) ( 1 Y 4 ) Y 1 = 0 , or
p 14 = p 23 , p 12 = p 1 p 13 p 14 + 2 p 1234 , p 34 = p 3 p 13 p 23 + 2 p 1234 .
The resulting 4-parameter model is determined by p 1 , p 3 , p 23 , p 1234 . In the isotropic case we have two parameters p 1 , p 1234 since p 3 = p 1 , p 23 = p 1 2 . This case does not allow for death, birth or branching of a single particle; particles move independently till collision with another moving particle, upon which both colliding particles die or cross each other with probabilities defined in (p1)-(p3) above.

5. Contour Edge Process Induced by Pickard Model

Consider a binary site model X V = { X v ; v V } on rectangular graph G = ( V , E ) ,
V = { v Z 2 : v v v + } , E = { ( v v ) , ; v , v V , v = v ( 0 , 1 ) , v ( 1 , 0 ) , v ( 1 , 1 ) } ,
Let Z ˜ 2 : = Z 2 + ( 1 / 2 , 1 / 2 ) be the shifted lattice, v ˜ = v + ( 1 / 2 , 1 / 2 ) , v ˜ + = v ( 1 / 2 , 1 / 2 ) , v ˜ ± Z ˜ 2 and G ˜ = ( V ˜ , E ˜ ) G rec ( Z 2 ) be the rectangular graph with V ˜ = { v ˜ Z ˜ 2 : v ˜ v ˜ v ˜ + } , E ˜ = { e ˜ = ( v ˜ v ˜ ) E ( Z ˜ 2 ) , v ˜ , v ˜ V ˜ } . For any horizontal or vertical edge e = ( v v ) E E ( Z 2 ) we designate
e ˜ ( e ) = ( v ˜ , v ˜ ) E ˜
the edge of G ˜ which `perpendicularly crosses e at the middle’ and is defined formally by
v ˜ : = v + ( 1 / 2 , 1 / 2 ) , v = v ( 1 , 0 ) , v + ( 1 / 2 , 1 / 2 ) , v = v ( 0 , 1 ) , v ˜ : = v ˜ ( 0 , 1 ) , v = v ( 1 , 0 ) , v ˜ ( 1 , 0 ) , v = v ( 1 , 0 ) .
A Boolean function β = β ( i , j ) , i , j { 0 , 1 } takes two values 0 or 1. The class of such Boolean functions has 4 2 = 16 elements.
Definition 10.
Let β , β be two Boolean functions. A contour edge process induced by a binary site model X V on G = ( V , E ) in (5.1) and corresponding to β , β is a binary edge process on G ˜ = ( V ˜ , E ˜ ) defined by
Y e ˜ ( e ) : = β ( X v , X v ) if v = v ( 1 , 0 ) , β ( X v , X v ) if v = v ( 0 , 1 ) ,
Probably, the most natural contour process occurs in the case of Boolean functions
β ( i , j ) = β ( i , j ) : = 1 ( i j ) , i , j = 0 , 1 ,
visualized by `drawing an edge’ e ˜ E ˜ across neighboring `occupied’ and `empty’ sites v , v in the site process. The introduced `edges’ form `contours’ between connected components of the random set { v V : X v = 1 } . Contour edge processes are usually defined for site models on undirected lattices and are well-known in statistical physics [27] (e.g., the Ising model), where they represent boundaries between ± 1 `spins’ and are very helpful in rigorous study of phase transitions. In our paper, a contour process is viewed as a bridge between binary site and edge processes on DAG G = ( V , E ) in (5.1), particularly, between Pickard and Arak models. Even in this special case, our results are limited to the Boolean functions in (5.4), raising many open questions for future work. Some of these questions are mentioned at the end of the paper.
Theorem 3.
The contour edge process Y E ˜ in (5.3)-(5.4) induced by a non-degenerate Pickard model coincides with Arak model on G ˜ if and only if X V is symmetric w.r.t. X v 1 X v . The last condition is equivalent to
ϕ A = 1 / 2 , ϕ A B D = ϕ A C D , ϕ B C 2 ϕ A B C = ϕ A D 2 ϕ A B D .
Moreover, Y E ˜ coincides with Arak model in Example 6 with
p 1 = p 2 = 1 2 ϕ A C , p 3 = p 4 = 1 2 ϕ A B , p 23 = 1 / 2 ϕ A B ϕ A C + ϕ A D , p 1234 = 1 2 ϕ A B 2 ϕ A C + 2 ϕ A B C D .
Proof.Necessity. Let Y E ˜ in (5.3)-(5.4) agree with Arak model. Accordingly, the generic clique distribution ( Y ˜ 1 , Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) is determined by generic family distribution ( A , B , C , D ) as
Y ˜ 1 = 1 ( B D ) , Y ˜ 2 = 1 ( A C ) , Y ˜ 3 = 1 ( C D ) , Y ˜ 4 = 1 ( A B ) ,
see Figure 3. From (2.13) we find that
p 1 = P ( Y ˜ 1 = 1 ) = 2 ( ϕ B ϕ B D ) = 2 ( ϕ A ϕ A C ) = P ( Y ˜ 2 = 1 ) = p 2 , p 3 = P ( Y ˜ 3 = 1 ) = 2 ( ϕ A ϕ A B ) = P ( Y ˜ 4 = 1 ) = p 4 , P ( Y ˜ 2 = Y ˜ 4 = 1 ) = p 24 = P ( A = 0 , B = C = 1 ) + P ( A = 1 , B = C = 0 ) = ϕ B C + ϕ A ϕ A B ϕ A C = P ( Y ˜ 1 = Y ˜ 3 = 1 ) .
The two first equations in (4.15) are satisfied by (5.8). Let us show that (5.8) imply the last two equations in (4.15), viz., P ( Y ˜ 2 = Y ˜ 4 = 1 ) = P ( Y ˜ 2 = 1 ) P ( Y ˜ 4 = 1 ) . Using (2.8), (5.8) and the same notation as in (2.15) we see that p 24 = p 2 p 4 is equivalent to equation
4 ( a b ) ( a c ) = a b c + b c a + ( a b ) ( a c ) 1 a .
that factorizes as ( 2 a 1 ) 2 ( a b ) ( a c ) = 0 . Hence, a = ϕ A = 1 / 2 since a = b , a = c are excluded by non-degeneracy of X V .
Let us show the necessity of the two other conditions in (5.5). Let V = { A , B , C , D , A , C } be a 2 × 1 rectangle as in Figure 3, and ( Y ˜ 1 , , Y ˜ 7 ) be the corresponding edge process, where Y ˜ i , i = 1 , 2 , 3 , 4 are as in ... and
Y ˜ 5 = 1 ( B A ) , Y ˜ 6 = 1 ( D C ) , Y ˜ 7 = 1 ( A C ) .
The last fact implies that ( Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) and ( Y ˜ 5 , Y ˜ 6 , Y ˜ 7 ) are conditionally independent given Y ˜ 1 = 0 , 1 . Particularly,
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 1 ) P ( Y ˜ 1 = 1 ) = P ( Y ˜ 1 = Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 1 ) P ( Y ˜ 1 = Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 1 )
and
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 0 ) P ( Y ˜ 1 = 1 ) = P ( Y ˜ 1 = 1 , Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 0 ) P ( Y ˜ 1 = 1 , Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 0 ) .
From ϕ A = 1 / 2 and (2.13) we find that P ( Y ˜ 1 = 1 ) = P ( A C ) = 1 2 ϕ A C ,
P ( Y ˜ 1 = Y ˜ 2 = = Y ˜ 7 = 1 ) = P ( A = D = A = 1 , B = C = C = 0 ) + P ( A = D = A = 0 , B = C = C = 1 ) = ϕ ( A = D = 1 , B = C = 0 ) ϕ ( A = 1 | B = 0 ) ϕ ( C = 0 | A = D = 1 , B = 0 ) + ϕ ( A = D = 0 , B = C = 1 ) ϕ ( A = 0 | B = 1 ) ϕ ( C = 1 | A = D = 0 , B = 1 ) = 4 ( ϕ B C 2 ϕ A B C + ϕ A B C D ) ( ϕ A D ϕ A C D ϕ A B D + ϕ A B C D ) / ( 1 2 ϕ A C ) ,
and
P ( Y ˜ 1 = Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 1 ) = P ( Y ˜ 1 = Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 1 ) = P ( A = D = 1 , B = C = 0 ) + P ( A = D = 0 , B = C = 1 ) = ( ϕ A D ϕ A B D ϕ A C D + ϕ A B C D ) + ( ϕ B C 2 ϕ A B C + ϕ A B C D )
Hence, (5.9) for x : = ϕ B C 2 ϕ A B C + ϕ A B C D , y : = ϕ A D ϕ A C D ϕ A B D + ϕ A B C D leads to 4 x y = ( x + y ) 2 , or
ϕ B C 2 ϕ A B C = ϕ A D ϕ A C D ϕ A B D
Next, consider (5.10). We have
P ( Y ˜ 1 = 1 , Y ˜ 2 = = Y ˜ 7 = 0 ) = P ( A = C = D = 1 , B = A = C = 0 ) + P ( A = C = D = 0 , B = A = C = 1 ) = ϕ ( A = C = D = 1 , B = 0 ) ϕ ( A = 0 | B = 0 ) ϕ ( C = 0 | A = B = 0 , D = 1 ) + ϕ ( A = C = D = 0 , B = 1 ) ϕ ( A = 1 | B = 1 ) ϕ ( C = 1 | A = B = 1 , D = 0 ) = 2 ( ϕ A B D ϕ A B C D ) 2 + ( ϕ A C D ϕ A B C D ) 2 / ( 1 2 ϕ A C ) , P ( Y ˜ 1 = 1 , Y ˜ 2 = Y ˜ 3 = Y ˜ 4 = 0 ) = P ( Y ˜ 1 = 1 , Y ˜ 5 = Y ˜ 6 = Y ˜ 7 = 0 ) = ϕ A B D + ϕ A C D 2 ϕ A B C D .
Hence, (5.10) for x : = ϕ A B D ϕ A B C D , y : = ϕ A C D ϕ A B C D writes as 2 ( x 2 + y 2 ) = ( x + y ) 2 yielding x = y and
ϕ A B D = ϕ A C D .
Equations (5.11) and (5.12) prove (5.5).
Finally, let us show that (5.5) implies symmetry of family distribution ϕ and Pickard model X V . Indeed, ϕ ( A , B , C , D ) = ϕ ( 1 A , 1 B , 1 C , 1 D ) is equivalent to equality of moment functions:
E A = E ( 1 A ) , , E A B C D = E ( 1 A ) ( 1 B ) ( 1 C ) ( 1 D ) .
Relation E A = 1 / 2 implies the coincidence of moment functions up to order 2: E A B = E ( 1 A ) ( 1 B ) , E A C = E ( 1 A ) ( 1 C ) , E A D = E ( 1 A ) ( 1 D ) , E B C = E ( 1 B ) ( 1 C ) so that (5.13) reduces to
E A B C = E ( 1 A ) ( 1 B ) ( 1 C ) , E A B D = E ( 1 A ) ( 1 B ) ( 1 D ) , E A C D = E ( 1 A ) ( 1 B ) ( 1 C ) , E A B C D = E ( 1 A ) ( 1 B ) ( 1 C ) ( 1 D ) .
Relation E ( 1 A ) ( 1 B ) ( 1 C ) = 1 3 / 2 + ϕ A B + ϕ A C + ϕ B C ϕ A B C = ϕ A B C = E A B C follows from (2.13), whereas the remaining three relations in (5.14) use (2.13) and (5.5).
It remains to show the symmetry of Pickard model X V . Write x ^ V = { x ^ v ; v V } , x ^ v : = 1 x v { 0 , 1 } for configuration of the transformed Pickard model X ^ v : = 1 X v . Then by the definition of Bayesian network in (2.3),
P V ( x ^ V ) = v V ϕ v ( x ^ v | x ^ F ( v ) ) .
where ϕ v ( x ^ v | x ^ F ( v ) ) = ϕ v ( x v | x F ( v ) ) provided ϕ v ( x ^ F ( v ) ) = ϕ v ( x F ( v ) ) have the symmetry property. The latter property is valid for our family distribution ϕ , implying P V ( x ^ V ) = P V ( x V ) and ending the proof of the necessity part of Theorem 3.
Sufficiency. As shown above, the symmetry P V ( x ^ V ) = P V ( x V ) implies (5.5) and (4.15) for the distribution π ˜ of the quadruple ( Y ˜ 1 , Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) in (5.7). We need to show that Y E ˜ in (5.3)-(5.4) is a Markov edge process with clique distribution π ˜ following Definition 7.
Let v = ( 1 , 1 ) be the left bottom point of V in (5.1) and
Y E ˜ : = { y e ˜ ( e ) { 0 , 1 } : x V = { x v ; v V } { 0 , 1 } V s . t . y e ˜ ( e ) = β ( x v , x v ) e = ( v v ) , v = v ( 1 , 0 ) , β ( x v , x v ) e = ( v v ) , v = v ( 0 , 1 ) }
be the set of all configurations of the contour model. It is clear that any y E ˜ = { y e ˜ ; E ˜ } Y E ˜ uniquely determines x V = { x v ; v V } { 0 , 1 } V in (5.16) up to the symmetry transformation: there exist two and only two configurations x V = { x v ; v V } , x ^ V = { x ^ v ; v V } , x ^ v = 1 x v , x ( 1 , 1 ) = 1 , x ^ ( 1 , 1 ) = 0 satisfying (5.16). Then, by the definition of Pickard model, and the symmetry of ϕ
P ( Y e ˜ ( e ) = y e ˜ ; e ˜ E ˜ ) = P V ( x V ) + P V ( x ^ V ) = 2 v V ϕ ( x v | x pa ( v ) )
where
ϕ ( x v | x pa ( v ) ) = ϕ ( A , B , C , D ) ϕ ( A , B , C )
for x v = D , x v ( 1 , 1 ) = A , x v ( 1 , 0 ) = C , x v ( 0 , 1 ) = D and v V V having a 4-point family as in Figure 1, right. From the definitions in (5.7) we see that
P ( Y ˜ 1 , Y ˜ 2 , Y ˜ 3 , Y ˜ 4 ) = ϕ ( A , B , C , D ) + ϕ ( 1 A , 1 B , 1 C , 1 D ) = 2 ϕ ( A , B , C , D ) , P ( Y ˜ 2 , Y ˜ 4 ) = ϕ ( A , B , C ) + ϕ ( 1 A , 1 B , 1 C ) = 2 ϕ ( A , B , C ) , P ( Y ˜ 4 ) = ϕ ( A , B ) + ϕ ( 1 A , 1 B ) = 2 ϕ ( A , B ) , P ( Y ˜ 2 ) = ϕ ( A , C ) + ϕ ( 1 A , 1 C ) = 2 ϕ ( A , C ) .
This and (5.18) yield
ϕ ( x v | x pa ( v ) ) = π ˜ ( y e ˜ 1 ( v ) , y e ˜ 2 ( v ) | y e ˜ 2 ( v ) , y e ˜ 4 ( v ) ) , v V V
where e ˜ 1 ( v ) = ( v ( 1 / 2 , 1 / 2 ) v + ( 1 / 2 , 1 / 2 ) ) , e ˜ 3 ( v ) = ( v ( 1 / 2 , 1 / 2 ) v + ( 1 / 2 , 1 / 2 ) ) are out-edges and e ˜ 2 ( v ) = ( v ( 3 / 2 , 1 / 2 ) v ( 1 / 2 , 1 / 2 ) ) , e ˜ 4 ( v ) = ( v ( 1 / 2 , 3 / 2 ) v ( 1 / 2 , 1 / 2 ) ) in-edges of Z ˜ 2 . An analogous relation to (5.19) holds for v V , v v whereas for v = v we have ϕ ( x v | x pa ( v ) ) = ϕ ( x v ) = 1 / 2 , cancelling with factor 2 on the r.h.s. of (5.17). The above argument leads to the desired expression
$$\mathrm{P}(Y_{\tilde{e}} = y_{\tilde{e}};\ \tilde{e} \in \tilde{E}) = \prod_{\tilde{v} \in \tilde{V}} \tilde{\pi}\big(y_{E_{\mathrm{out}}(\tilde{v})} \,\big|\, y_{E_{\mathrm{in}}(\tilde{v})}\big)
$$
of the contour model as the Arak model with clique distribution $\tilde{\pi}$ given by the distribution of $(\tilde{Y}_1, \tilde{Y}_2, \tilde{Y}_3, \tilde{Y}_4)$ in (5.7).
Let us show that the latter model coincides with Example 6 and that its parameters satisfy (5.6), (4.16), and (4.17). Indeed, (5.7) implies $\mathrm{P}(\tilde{Y}_i \neq \tilde{Y}_j,\ j \neq i,\ j = 1,2,3,4) = 0$, $i = 1,2,3,4$, hence (4.16), since $p_{23} = p_{14} = \phi_{AD} - \phi_{ABD}$ follows from the second equation in (5.5). Finally, (5.6) is a consequence of (2.15). Theorem 3 is proved. □
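To make the two-to-one correspondence between site and contour configurations used in the sufficiency part concrete, here is a minimal sketch. The explicit Boolean functions of (5.4) are not reproduced above, so the sketch simply takes $\beta(i,j) = i \oplus j$ for both edge orientations and records each contour value by the primal edge it crosses; both choices are simplifications for illustration only. With this rule, every contour configuration arises from exactly two site configurations, a configuration and its global flip, and either one is recovered once the value at $(1,1)$ is fixed.

import itertools

def contours(x, n):
    """Contour values y[e] = x[v'] XOR x[v] for the horizontal and vertical
    edges of an n x n grid; assumes beta(i, j) = i XOR j (illustration only)."""
    y = {}
    for i, j in itertools.product(range(1, n + 1), repeat=2):
        if i > 1:
            y[((i - 1, j), (i, j))] = x[(i - 1, j)] ^ x[(i, j)]
        if j > 1:
            y[((i, j - 1), (i, j))] = x[(i, j - 1)] ^ x[(i, j)]
    return y

def reconstruct(y, n, corner):
    """Recover a site configuration from contour values given x[(1,1)] = corner."""
    x = {(1, 1): corner}
    for i, j in itertools.product(range(1, n + 1), repeat=2):
        if (i, j) in x:
            continue
        if i > 1:
            x[(i, j)] = x[(i - 1, j)] ^ y[((i - 1, j), (i, j))]
        else:
            x[(i, j)] = x[(i, j - 1)] ^ y[((i, j - 1), (i, j))]
    return x

n = 3
for values in itertools.product((0, 1), repeat=n * n):
    x = dict(zip(itertools.product(range(1, n + 1), repeat=2), values))
    y = contours(x, n)
    x0, x1 = reconstruct(y, n, 0), reconstruct(y, n, 1)
    # exactly two site configurations (global flips of each other) give the same y
    assert x in (x0, x1) and x0 == {v: 1 - s for v, s in x1.items()}
print("every contour configuration has exactly two site preimages")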
Remark 7. (i) The Boolean functions $\beta(i,j)$, $\beta'(i,j)$ in (5.4) are invariant under the symmetry $(i,j) \to (1-i,\,1-j)$, $i,j = 0,1$. This fact seems to be related to the symmetry of the Pickard model in Theorem 3. It would be of interest to extend Theorem 3 to non-symmetric Boolean functions, in an attempt to completely clarify the relation between the Pickard and Arak models in dimension 2.
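The flip-invariance in (i) is a strong restriction for functions of two binary arguments: $\beta(i,j) = \beta(1-i,1-j)$ forces $\beta(1,1) = \beta(0,0)$ and $\beta(1,0) = \beta(0,1)$, leaving exactly four such functions (the two constants, XOR and its negation). A short enumeration confirming this:

from itertools import product

# Enumerate all Boolean functions beta: {0,1}^2 -> {0,1}, represented by their
# truth tables on the inputs (0,0), (0,1), (1,0), (1,1), and keep those
# invariant under the flip (i, j) -> (1 - i, 1 - j).
inputs = list(product((0, 1), repeat=2))
invariant = [
    table for table in product((0, 1), repeat=4)
    if all(table[inputs.index((i, j))] == table[inputs.index((1 - i, 1 - j))]
           for i, j in inputs)
]
print(len(invariant), invariant)   # 4 tables: the constants, XOR, XNOR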
(ii) A contour model in dimension $\nu \ge 3$ is usually formed by drawing a $(\nu-1)$-dimensional plaquette perpendicular to each edge between neighboring sites $|v - v'| = 1$ with $X_v \neq X_{v'}$ of a site model $X$ on $\mathbb{Z}^\nu$ [12,27]. It is possible that a natural extension of the Pickard model to higher dimensions is a plaquette model (i.e., a random field indexed by plaquettes rather than sites of $\mathbb{Z}^\nu$) which satisfies a consistency property similar to (2.9) and is related to the Arak model on $\mathbb{Z}^\nu$. Plaquette models may be a useful and realistic alternative to site models in crystallography [6].
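As an illustration of the standard contour construction recalled in (ii), the following sketch lists, for a binary configuration on a finite box of $\mathbb{Z}^3$, the plaquettes drawn perpendicular to disagreement edges; each plaquette is recorded simply by the primal edge it crosses, a bookkeeping simplification rather than the notation of [12,27].

import itertools

def disagreement_plaquettes(x, n):
    """For a binary field x on the n x n x n box in Z^3, return the plaquettes
    of the contour model: one plaquette, recorded by the primal edge it
    crosses, for each pair of nearest neighbours v, v' with |v - v'| = 1 and
    x[v] != x[v']."""
    plaquettes = []
    for v in itertools.product(range(n), repeat=3):
        for axis in range(3):
            w = list(v)
            w[axis] += 1
            w = tuple(w)
            if w[axis] < n and x[v] != x[w]:
                # the plaquette sits in the dual lattice, centred at the
                # midpoint of (v, w) and perpendicular to direction `axis`
                plaquettes.append((v, w))
    return plaquettes

# Tiny example: a single 1 in a sea of 0s is surrounded by 6 plaquettes.
n = 3
x = {v: 0 for v in itertools.product(range(n), repeat=3)}
x[(1, 1, 1)] = 1
print(len(disagreement_plaquettes(x, n)))   # 6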

Acknowledgments

This work was inspired by collaboration and personal communication with Taivo Arak (1946-2007). I also thank Mindaugas Bloznelis for his interest and encouragement.

References

  1. Arak, T., Clifford, P. and Surgailis, D. (1993) Point-based polygonal models for random graphs. Adv. Appl. Prob. 25, 348–372.
  2. Arak, T. and Surgailis, D. (1989) Markov fields with polygonal realizations. Probab. Th. Rel. Fields 80, 543–579.
  3. Ben-Gal, I. (2007) Bayesian networks. In: Ruggeri F., Faltin F. and Kenett R., Encyclopedia of Statistics in Quality and Reliability, 1–6, Wiley.
  4. Besag, J. (1974) Spatial interaction and the statistical analysis of lattice systems (with Discussion). J. Royal Statist. Soc. B 36, 192–236.
  5. Champagnat, F., Idier, J. and Goussard, Y. (1998) Stationary Markov random fields on a finite rectangular lattice. IEEE Trans. Inf. Th. 44, 2901–2916.
  6. Enting, I.G. (1977) Crystal growth models and Ising models: disorder points. J. Phys. C. Solid State Phys. 10, 1379–1388.
  7. Davidson, J., Talukder, A. and Cressie, N. (1994) Texture analysis using partially ordered Markov models. In: Proceedings of the IEEE International Conference on Image Processing (ICIP-94), pp. 402–406. IEEE Computer Soc. Press.
  8. Feller, W. (1950) An Introduction to Probability Theory and Its Applications, vol. 1. Wiley, New York.
  9. Forchhammer, S. and Justesen, J. (2009) Block Pickard models for two-dimensional constraints. IEEE Trans. Inf. Th. 55, 4626–4634.
  10. Goutsias, J. (1989) Mutually compatible Gibbs random fields. IEEE Trans. Inform. Theory 35, 1233–1249.
  11. Gray, A.J., Kay, J.W. and Titterington, D.M. (1994) An empirical study of the simulation of various models used for images. IEEE Trans. Pattern Anal. Machine Intel. 16, 507–513.
  12. Grimmett, G. (2006) The Random-Cluster Model. Springer, New York.
  13. Grimmett, G. (2018) Probability on Graphs, 2nd edition. Cambridge Univ. Press.
  14. Justesen, J. (2005) Fields from Markov chains. IEEE Trans. Inf. Th. 51, 4358–4362.
  15. Kahn, J. (2015) How many T-tessellations on k lines? Existence of associated Gibbs measures on bounded convex domains. Random Struct. Alg. 47, 561–587.
  16. Kiêu, K., Adamczyk-Chauvat, K., Monod, H. and Stoica, R. (2013) A completely random T-tessellation model and Gibbsian extensions. Spat. Statist. 6, 118–138.
  17. Kindermann, R. and Snell, J.L. (1980) Markov Random Fields and Their Applications. Contemporary Mathematics, v.1. American Math. Soc.
  18. Lauritzen, S. (1996) Graphical Models. Oxford Univ. Press.
  19. Lauritzen, S., Dawid, A.P., Larsen, B. and Leimer, H. (1990) Independence properties of directed Markov fields. Networks 20, 491–505.
  20. Neapolitan R.E. (2004) Learning Bayesian networks. Prentice Hall, N.Y.
  21. Pearl, J. (2000) Causality: models, reasoning, and inference. Cambridge University Press, N.Y.
  22. Pickard, D.K. (1977) A curious binary lattice process. J. Appl. Probab. 14, 717–731.
  23. Pickard, D.K. (1980) Unilateral Markov fields. Adv. Appl. Probab. 12, 655–671.
  24. Rolla, L.T., Sidoravicius, V., Surgailis, D. and Vares, M.E. (2010) The discrete and continuum broken line process. Markov Process. Rel. Fields 16, 79–116.
  25. Russell, S.J. and Norvig, P. (2003) Artificial Intelligence: A Modern Approach, 2nd ed. Prentice Hall, N.Y.
  26. Sidoravicius, V., Surgailis, D. and Vares, M.E. (1999) Poisson broken lines’ process and its application to Bernoulli first passage percolation. Acta Appl. Math. 58, 311–325.
  27. Sinai, Ya.G. (1982) Theory of Phase Transitions: Rigorous Results. Pergamon Press, Oxford.
  28. Surgailis, D. (1991) The thermodynamic limit of polygonal models. Acta Appl. Math. 22, 77–102.
  29. Thäle, C. (2011) Arak-Clifford-Surgailis tessellations. Basic properties and variance of the total edge length. J. Statist. Phys. 144, 1329–1339.
  30. Verhagen, A.M.V. (1977) A three parameter isotropic distribution of atoms and the hard-core square lattice gas. J. Chem. Phys. 67, 5060–5065.
Figure 1. Left: rectangular DAG in (2.5); right: generic family $F(v) = v \cup \mathrm{pa}(v)$, $\mathrm{pa}(v) = \{v-(1,0),\, v-(0,1),\, v-(1,1)\}$.
Figure 2. Generic clique variables of the Markov edge process on $\mathbb{Z}^\nu$, $\nu = 3, 2, 1$.
Figure 3. Contour edge variables in Theorem 3.