Constrained Dynamic Tree Networks*

Matthew Hague (Royal Holloway, University of London)
Vincent Penelle (Royal Holloway, University of London, and Université Paris-Est, LIGM (CNRS UMR 8049), UPEM, CNRS)
Abstract: We generalise Constrained Dynamic Pushdown Networks, introduced by Bouajjani et al., to Constrained Dynamic Tree Networks. In this model, we have trees of processes which may monitor their children. We allow the processes to be defined by any computation model for which the alternating reachability problem is decidable. We address the problem of symbolic reachability analysis for this model. More precisely, we consider the problem of computing an effective representation of their reachability sets using finite state automata. We show that backward reachability sets starting from regular sets of configurations are always regular. We provide an algorithm for computing backward reachability sets using tree automata.
Bouajjani et al. [2] defined Constrained Dynamic Networks of Pushdown Systems: a model of concurrent computation where configurations of processes are tree structures, and each process is given by a pushdown system. During an execution, new child processes can be created, and a parent can test the states of its children before performing an execution step. They considered the global backwards reachability problem for these systems. That is, given a regular set of target configurations, compute the set of configurations that can reach the target set. They showed that, under a stability constraint, this backwards reachability set is regular and computable.
The stability constraint requires that once a test a parent may make on its children is satisfied, then it will remain satisfied, even if the children continue their execution. In the simplest case, this allows a parent to test for termination in a given state of its children. In general, this constraint allows a parent to (repeatedly) test whether its children have passed certain stages of execution (and their state in doing so).
We show here that Bouajjani et al.’s result is not dependent on the processes in the tree being modelled by pushdown systems. In fact, all that is required is that the alternating reachability problem is decidable for the systems labelling the nodes in the tree. Intuitively, in the alternating reachability problem, some steps during the run of the system may be required to split into separate paths. From the initial state, we ask whether all paths of the execution reach a given final state.
Thus, we introduce Constrained Dynamic Tree Networks, which are tree networks of processes as before, but the individual processes can be labelled by any system for which the alternating reachability problem is decidable.
One particular instance of interest is the case of networks of collapsible pushdown systems [13]. Collapsible pushdown systems are a generalisation of pushdown systems that are known to be equi-expressive with higher-order recursion schemes. The alternating reachability problem is known to be decidable for these systems [29]. In fact, the backwards reachability sets of alternating collapsible pushdown systems are also known to be computable and regular [4]. Thus, we obtain a new model of concurrent higher-order programs for which the backwards reachability sets are also computable and regular. An advantage of our approach is that we do not need to consider the technical difficulties of reasoning about collapsible pushdown systems: the proof presented here only needs to take care of the concurrent aspects of the computations. Thus, we obtain results for quite complex systems with a relatively modest proof.
Modern day programming increasingly embraces higher-order programming, both via the inclusion of higher-order constructs in languages such as C++, JavaScript and Python, and via the importance of callbacks in highly popular technologies such as jQuery and Node.js. For example, to read a file in Node.js, one would write
In this code, the call to readFile spawns a new thread that asynchronously reads f.txt and sends the data to the function argument. This function will have access to, and frequently use, the closure information of the scope in which it appears. The rest of the program runs in parallel with this call. This style of programming is fundamental to both jQuery and Node.js programming, as well as being a popular approach for programs handling input events or slow IO operations such as fetching remote data or querying databases (e.g. HTML5's indexedDB).
Analysing such programs is a challenge for verification tools, which usually do not model higher-order recursion, or closures, accurately. However, several higher-order model-checking tools have recently been developed. This trend was pioneered by Kobayashi et al. [16]. The feasibility of higher-order model-checking in practice has been demonstrated by numerous tools [15, 17, 19, 27, 5, 6, 31]. Since all of these tools can handle the alternating reachability problem, it is possible that our techniques may be used to provide model-checking tools for concurrent higher-order programs.
Our construction follows Bouajjani et al. and uses a saturation method to construct a regular representation of the backwards reachability set. However, our automaton representation is different: it separates the representation of the system states from the tree structure. We also use different techniques to prove correctness of the construction. In particular, our soundness proof works by defining and showing soundness of each transition of the automaton, rather than dissecting complete runs. This technique was first used in a saturation algorithm for solving parity games over pushdown systems [14].
Dynamic pushdown networks have been well-studied. Often they are studied without the process tree structure or the constraints allowing a parent to inspect its children [24, 35]. Various decidability-preserving locking techniques have also been investigated [22]. Some of these works also allow the tree structure to be taken into account [21, 28]. Touili and Atig have also considered communication structures that are not necessarily trees [37]. However, these works consider pushdown networks only.
There has been some work studying concurrent variants of recursion scheme model checking, including a context-bounded algorithm for recursion schemes [18], and further under-approximation methods such as phase-bounding, ordering, and scope-bounding [11, 34]. These works allow only a fixed number of threads.
Dynamic thread creation is permitted by both Yasukata et al. [38] and by Chadha and Viswanathan [7]. In Yasukata et al.'s model, recursion schemes may spawn and join threads. Communication is permitted only via nested locks. Their work is a generalisation of results for order-1 pushdown systems [10]. Chadha and Viswanathan allow threads to be spawned, but only one thread runs at a time, and it must run to completion. Moreover, the tree structure is not maintained.
The saturation technique was popularised by Bouajjani et al. [1] for the analysis of pushdown systems, and was implemented in the successful Moped tool [33, 36]. Saturation methods also exist for ground tree rewrite systems and related systems [23, 3, 25], though these use different techniques.
Ground tree rewrite systems may also be generalised to trees whose nodes are labelled by higher-order stacks. Penelle proves decidability of first-order logic with reachability over such systems [30]. However, this result does not allow nodes to have an unbounded number of direct children, and does not consider collapsible stacks in their full generality.
Finally, there is various research into meta-results on the analysis of concurrent systems, where the concurrent structure is the object of the research. Recent work by La Torre et al. has shown that parameterised safety analysis is possible for asynchronous networks of shared-memory systems [20], provided, amongst other constraints, that the downwards closure of the system is computable. Collapsible pushdown systems are known to have these properties [8, 12, 39]. Other works have studied multi-stack pushdown systems and offered either bounded tree-width [26] or split-width [9] as explanations for decidability. However, these results have not been extended to higher orders.
We define the notion of an alternating transition system. Given a (possibly infinite) set of elements Γ and operations θ over Γ, a finite set of states S, and a set of final states F ⊆ S, an alternating transition system has transitions of the form s –[θ]→ S. An element γ is always accepted from a final state. When we apply an alternating transition s –[θ]→ S to accept an element γ from s, we need to accept θ(γ) from every state in S.
In the following, we consider Γ to be a set, and Ops a set of operations θ : Γ → Γ over Γ. Note, we do not require that θ is defined over all elements of Γ. We also define the special operation Id such that Id(γ) = γ for all γ ∈ Γ.
Given γ ∈ Γ and s ∈ S, we say that γ is accepted from s, denoted γ ⊢ s, if s is final, or if there is a transition ν = (s,θ,S) such that θ(γ) is defined and θ(γ) ⊢ s′ for every s′ ∈ S.
In all the following, we suppose that for every alternating transition system N, given γ ∈ Γ and s ∈ S, we can decide whether γ ⊢ s.
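To make the acceptance condition concrete, here is a small executable sketch in our own encoding (not from the paper): Γ is the natural numbers, operations are partial functions modelled as returning None where undefined, and a transition s –[θ]→ S is stored as a pair of θ and its successor set.

```python
def accepts(gamma, state, transitions, finals, fuel=50):
    """Decide gamma |- state for a small alternating transition system.

    transitions: dict mapping a state to a list of (theta, successors)
    pairs, where theta is a partial operation returning None where it is
    undefined. `fuel` crudely bounds the search for this illustration.
    """
    if state in finals:
        return True  # any element is accepted from a final state
    if fuel == 0:
        return False
    for theta, successors in transitions.get(state, []):
        image = theta(gamma)
        # alternation: theta(gamma) must be accepted from EVERY successor
        if image is not None and all(
            accepts(image, s, transitions, finals, fuel - 1)
            for s in successors
        ):
            return True
    return False

# Example system: from state "even", exactly the even numbers are accepted.
even_system = {
    "even": [
        (lambda n: n - 2 if n >= 2 else None, {"even"}),  # subtract two
        (lambda n: n if n == 0 else None, {"f"}),         # test for zero
    ],
}
```

Here `accepts(4, "even", even_system, {"f"})` holds while `accepts(3, ...)` does not; the `all(...)` over successors is exactly the universal branching in the definition above.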
We define constrained dynamic tree networks (CDTNs), which model trees of processes with dynamic thread creation, in which a parent may inspect the states of its children.
An M-configuration is a tree labelled by P × Γ. That is, a configuration is either a leaf node (p,γ)(∅) or a tree (p,γ)(t_{1},⋯,t_{m}) with root (p,γ) and children t_{1},⋯,t_{m}, where p ∈ P, γ ∈ Γ, and for each 1 ≤ i ≤ m we have that t_{i} is an M-configuration. We write C[t] to denote a configuration containing t as a subtree. Furthermore, we define
S((p,γ)(t_{1},⋯, t_{m})) = p 
to extract the internal state of the root node of a configuration.
Transitions of the form φ: p –[θ]→ p_{a} apply θ to a node, while transitions of the form φ: p → p_{a} ⊳ p_{b} create a new child process. That is, the application of a transition φ: p –[θ]→ p_{a} to a configuration C[(p,γ)(t_{1},⋯, t_{m})] yields the configuration C[(p_{a},θ(γ))(t_{1},⋯,t_{m})], if θ(γ) is defined and S(t_{1})⋯S(t_{m}) ∈ φ. The application of a transition φ: p → p_{a} ⊳ p_{b} to a configuration C[(p,γ)(t_{1},⋯, t_{m})] yields the configuration C[(p_{a},γ)(t_{1},⋯, t_{m},(p_{b},γ)(∅))] if S(t_{1})⋯S(t_{m}) ∈ φ.
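The two kinds of transition can be sketched concretely as follows, in our own encoding (not the paper's): a configuration is a (state, label, children) triple, and the guard φ is given as a predicate on the word of children's states.

```python
def child_states(node):
    """The word of internal states of a node's children."""
    return tuple(c[0] for c in node[2])

def apply_op(node, phi, theta, p, p_a):
    """phi: p -[theta]-> p_a applied at this node, or None if inapplicable."""
    state, gamma, children = node
    if state != p or not phi(child_states(node)):
        return None
    image = theta(gamma)
    if image is None:  # theta is a partial operation
        return None
    return (p_a, image, children)

def apply_spawn(node, phi, p, p_a, p_b):
    """phi: p -> p_a |> p_b: add a child (p_b, gamma) copying the label."""
    state, gamma, children = node
    if state != p or not phi(child_states(node)):
        return None
    return (p_a, gamma, children + [(p_b, gamma, [])])
```

For instance, spawning under the guard "all children are in state q" adds a new last child carrying a copy of the parent's label, exactly as in the definition above.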
We now give the restriction on child constraints φ that allows us to preserve decidability of reachability for constrained dynamic tree networks. Intuitively, this restriction asserts that once a constraint φ is satisfied, it remains satisfied even if the children progress. We first define the notion of a stable constraint: given a relation ρ over P, a constraint φ ⊆ P^{∗} is ρ-stable if whenever p_{1}⋯p_{m} ∈ φ and (p_{i},p′_{i}) ∈ ρ^{∗} for each 1 ≤ i ≤ m, we also have p′_{1}⋯p′_{m} ∈ φ.
Given a CDTN M = (P,F,δ) we define
ρ_{δ} = {(p,p′) ∣ ∃ φ: p –[θ]→ p′ ∈ δ ∨ ∃ φ: p → p′ ⊳ p″ ∈ δ} .
We say M is ρ_{δ}-stable iff for all φ: p –[θ]→ p_{a} ∈ δ and φ: p → p_{a} ⊳ p_{b} ∈ δ we have that φ is ρ_{δ}-stable.
We now define a notion of tree automata over the configurations of a constrained dynamic tree network. As these configurations can have unbounded arity, we need an automaton model which can deal with unbounded arity; we therefore use an adapted version of hedge automata. Transitions of our automata are thus of the form p(L) → q, meaning they can rewrite a tree to a state q if its root is labelled by p and the word formed by the states to which its children have been rewritten belongs to L.
Moreover, the automaton also checks that the element labelling the root is accepted by an alternating transition system bound to the rule (more precisely, we use a single alternating transition system for the whole automaton, with a unique initial state for each rule of the automaton). In the following definition, let Rec(Q) be the set of regular languages over the alphabet Q.
An A-configuration is a tree labelled by (P × Γ) ∪ Q, such that only leaves can be labelled by Q. Let T(P × Γ) be the set of trees labelled by P × Γ only, i.e. the set of M-configurations. Given a transition r = p(L) → q and two A-configurations t and t′, we have t –[r]→ t′ if and only if t = C[(p,γ)(q_{1},⋯, q_{m})], t′ = C[q], q_{1}⋯q_{m} ∈ L and γ ⊢ s_{r}.
Let –[Δ, *]→ be the transitive closure of ( ∪_{r ∈ Δ} –[r]→ ). The set of M-configurations recognised by A from the state q is L_{q}(A) = {t ∈ T(P × Γ) ∣ t –[Δ, *]→ q}.
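Acceptance can be sketched as a bottom-up evaluation. The encoding below is our own simplification: the child language L becomes a predicate on words of states, and the inner alternating check γ ⊢ s_{r} is abstracted as a predicate on labels.

```python
from itertools import product

def reachable_states(tree, rules):
    """All automaton states this (state, label, children) tree rewrites to.

    rules: list of (p, child_lang, label_ok, q), where child_lang is a
    predicate over words of automaton states and label_ok abstracts the
    inner alternating check gamma |- s_r.
    """
    p, gamma, children = tree
    child_options = [reachable_states(c, rules) for c in children]
    result = set()
    for rp, child_lang, label_ok, q in rules:
        if rp != p or not label_ok(gamma):
            continue
        # nondeterministically pick one state per child
        if any(child_lang(word) for word in product(*child_options)):
            result.add(q)
    return result
```

A tree is accepted from q exactly when q is among its reachable states; for a leaf, `product(*[])` yields the empty word, matching the rule for leaves.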
In this section, we show that we can compute the backwards reachability set of CDTNs. That is, if a CDTN M is ρ_{δ}-stable, then the set of predecessors of a regular set of configurations is regular.
To prove this theorem, we construct an automaton A′ from A and M. We construct the automaton in two steps.
The first step adds to the states of the automaton the internal state of the root of the M-configuration that was reduced to that state. Informally, we replace every transition p(L) → q with p(L) → (q,p), so that given an M-configuration t with t –[Δ, *]→ q, we have t –[Δ_{p}, *]→ (q,S(t)). This will be useful in the actual construction of A′: to apply M-rules in reverse, we will need to check whether the constraint of the rule is satisfied, and this information provides exactly that check (using the stability property, as we remember the final state of the root of each child). More formally, we will also need to adapt the constraints L and to add states to the inner alternating transition system.
We define A_{p} = (Q × P, F × P, Δ_{p}, N_{p}), where

Δ_{p} = { p(L_{P}) → (q,p) ∣ p(L) → q ∈ Δ }, with L_{P} = {(q_{1},p_{1})⋯(q_{m},p_{m}) ∣ q_{1}⋯q_{m} ∈ L and p_{1},⋯,p_{m} ∈ P},

and N_{p} = (S_{p},F_{p},η_{p}), where the inner alternating transition system is unchanged except that the state s_{r} attached to a rule r = p(L) → q is now attached to the corresponding rule p(L_{P}) → (q,p).
Proof. We only have to observe that for every t, t –[Δ_{p}, *]→ (q,S(t)) if and only if t –[Δ, *]→ q, and that (q,p) is final if and only if q is final.
In order to faithfully compute the automaton A′, we need to transfer the constraints of M to the states of A′. Indeed, we must recognise only valid predecessors of the configurations recognised by A, i.e. those which satisfy the constraints φ. Given a regular language φ ⊆ P^{∗}, we thus define ⟨ φ ⟩ = {(q_{1},p_{1})⋯(q_{m},p_{m}) ∣ p_{1}⋯p_{m} ∈ φ, q_{1},⋯,q_{m} ∈ Q}. It is straightforward to see that this language is also regular.
During the construction of A′, we add new transitions of the form p(L) → (q′,p′), where the constraints L are constructed from those already appearing in A_{p} and the constraints φ used in M, using intersection and right-quotient operations. An intersection of the form L ∩ ⟨ φ ⟩ allows us to check that the guarding constraint of an M-rule is satisfied at the considered position in the configuration. The right-quotient
L(q,p)^{−1} = { (q_{1},p_{1})⋯(q_{m},p_{m}) ∣ (q_{1},p_{1})⋯(q_{m},p_{m})(q,p) ∈ L }
allows us to get immediate predecessors by a spawn transition of the form φ: p → p_{a} ⊳ p_{b}. We define Λ to be the smallest family of languages over Q_{p} = Q × P such that: every constraint language appearing in Δ_{p} is in Λ; if L ∈ Λ and φ is a constraint of M, then L ∩ ⟨ φ ⟩ ∈ Λ; and if L ∈ Λ and (q,p) ∈ Q_{p}, then L(q,p)^{−1} ∈ Λ.
The following lemma, stating that Λ is finite, was shown by Bouajjani et al. [2].
To prove this lemma, first note that as all the L and φ are regular, each is recognised by a finite automaton, and there are finitely many such constraints. We can therefore form the product of all these automata to obtain a single finite automaton, and associate each constraint with a set of final states. The constraints produced by the construction of Λ are then obtained by modifying the set of final states: taking the right-quotient amounts to moving backwards by one transition, and since we already have a product automaton, the intersection does not require new states. Thus, only a finite number of constraints can be generated this way.
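The finiteness argument is easy to see on a concrete DFA encoding (our own illustration): a right-quotient by one letter only moves the final-state set backwards across that letter, so every language produced is the same automaton with a different set of final states, of which there are finitely many.

```python
def right_quotient(dfa, a):
    """L a^{-1} = { w | w a in L }: same automaton, new final states."""
    states, delta, init, finals = dfa
    new_finals = {s for s in states if delta.get((s, a)) in finals}
    return (states, delta, init, new_finals)

def accepts_word(dfa, word):
    """Run the DFA on `word`; missing transitions reject."""
    states, delta, init, finals = dfa
    s = init
    for a in word:
        s = delta.get((s, a))
        if s is None:
            return False
    return s in finals

# A DFA over {a, b} for the words ending in "ab".
ends_in_ab = (
    {0, 1, 2},
    {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
     (2, "a"): 1, (2, "b"): 0},
    0,
    {2},
)
```

Quotienting this DFA by "b" yields the language of words ending in "a", without adding a single state.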
We now describe our saturation algorithm constructing A′. We start from A_{p} and only add new transitions: we never add new states, so the process terminates. The main idea is, for every M-rule r = φ: p –[θ]→ p_{a} and every transition starting with p_{a}, to add a transition starting with p and ending in the same automaton state. Moreover, we ensure that the children of the node to which we apply the rule satisfy φ, by using the constraint L ∩ ⟨ φ ⟩. We also ensure that the elements recognised by the alternating transition system from the state associated with the new rule are predecessors by θ of those recognised from the state associated with the old rule. For a spawn rule φ: p → p_{a} ⊳ p_{b}, we moreover ensure that there is exactly one child fewer, and that the label is also accepted at the last child. We need the label to be accepted at the last child since the spawn operation creates a copy of the parent process's label: hence, the label of the parent must also be the label of the last child.
We construct A′ = (Q × P, F × P, Δ′, N′), with N′ = (S_{p},F_{p},η′). We give the formal definition of the construction first, and then informally explain the saturation rule for operation transitions and the saturation rule for spawn transitions.
We define Δ′ and η′ inductively as the fixed point of the following sequence. We begin with Δ′_{0} = Δ_{p} and η′_{0} = η_{p}. Now, suppose Δ′_{i−1} and η′_{i−1} are defined. We construct Δ′_{i} and η′_{i} to be at least Δ′_{i−1} and η′_{i−1} plus the transitions added by one of the following rules. For every M-rule τ = φ: p –[θ]→ p_{a} and every transition r = p_{a}(L) → (q′,p′) of Δ′_{i−1}, we add r′ = p(L ∩ ⟨ φ ⟩) → (q′,p′) to Δ′_{i} and ν′ = s_{r′} –[θ]→ {s_{r}} to η′_{i}. For every M-rule τ = φ: p → p_{a} ⊳ p_{b}, every transition r_{1} = p_{a}(L_{1}) → (q′,p′) of Δ′_{i−1}, and every transition r_{2} = p_{b}(L_{2}) → (q″,p″) of Δ′_{i−1} with ε ∈ L_{2}, we add r′ = p(L_{1}(q″,p″)^{−1} ∩ ⟨ φ ⟩) → (q′,p′) to Δ′_{i} and ν′ = s_{r′} –[Id]→ {s_{r_{1}},s_{r_{2}}} to η′_{i}.
This process terminates when Δ′_{i−1} = Δ′_{i} and η′_{i−1} = η′_{i}. As the set of states is fixed, there is a finite number of possible rules, thus this process terminates and A′ exists.
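The overall shape of the algorithm is a standard saturation loop. The following skeleton (greatly simplified, with transitions abstracted to opaque tuples and the derivation rules merged into a single hypothetical `derive` function) shows why a fixed state set forces termination:

```python
def saturate(initial, derive):
    """Add derived transitions until nothing new appears.

    derive(ts) returns the transitions derivable in one step from the
    set ts. Over a fixed finite state set only finitely many transitions
    exist, so the loop reaches a fixed point.
    """
    transitions = set(initial)
    while True:
        new = set(derive(transitions)) - transitions
        if not new:
            return transitions
        transitions |= new

# Toy instance: saturating a relation under composition yields its
# transitive closure, mirroring how new automaton rules are derived
# from existing ones.
def compose(ts):
    return {(a, d) for (a, b) in ts for (c, d) in ts if b == c}
```

For example, saturating {(1,2), (2,3), (3,4)} under `compose` adds (1,3), (2,4) and then (1,4), after which nothing new is derivable.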
Intuitively, the rule for an M-rule τ = φ: p –[θ]→ p_{a} works as follows. We want to extend the automaton to recognise the result of a reverse application of φ: p –[θ]→ p_{a}. That is, whenever a configuration t′ whose root node has internal state p_{a} is accepted, we should now accept the corresponding configuration t with root internal state p. Hence, we look for a transition r that will read and accept the root node of t′, and introduce a new transition r′ that will read and accept the root of t. In addition, we need to take care of the children of the root: to be able to apply τ, the children must satisfy φ. This is why we intersect with ⟨ φ ⟩. Furthermore, to simulate the (reverse) update to γ, we add the transition s_{r′} –[θ]→ {s_{r}} to assert that the label accepted by r′ would be accepted by r after an application of θ.
The rule for a spawn M-rule φ: p → p_{a} ⊳ p_{b} works similarly, except that we need to deal with the addition of a new child in the transition from t to t′. This is a removal when applied in reverse, hence the introduced transition performs a right-quotient on the language of children. In addition, we have to ensure that the spawned child has the same label as the parent. To do this, we look at the transition r_{2} used to accept the final child. Note that the right-quotient removes the target (q″,p″) of this transition. When applying this transition in reverse, the label γ of the root of t must be the same as the label of the root of t′ and of its final child. This explains the transition s_{r′} –[Id]→ {s_{r_{1}},s_{r_{2}}}, which ensures γ is accepted at both the root and its final child.
This section is devoted to showing that A′ accepts pre_{M}^{*}(L(A)). It is sufficient to prove the following property: for every state (q,p), we have L_{(q,p)}(A′) = pre_{M}^{*}(L_{(q,p)}(A_{p})).
The next two subsections are devoted to proving each direction of this property.
Proof. To prove this proposition, we associate to each state of the automaton (and of the inner alternating transition system) a meaning that is intimately connected to the backwards reachability set we want to construct. We consider a transition r = p(L) → (q,p′) to be sound under the following condition: if we take elements satisfying the meaning of each state appearing on the left of the transition, then the configuration built from these elements satisfies the meaning of the state on the right of the transition. Intuitively this says that, assuming all actions taken by other transitions in the automaton are correct, the current transition does nothing wrong. We inductively show that every transition appearing in A′ is sound. Finally, we show that if an automaton is sound and contains A_{p}, then it satisfies the proposition, and hence that this is the case for A′.
Given a state s ∈ S_{p} which is not of the form s_{r}, and γ ∈ Γ, we define γ ⊨ s ⇔ γ ⊢ s.
Given a state s_{r}∈ S_{p}, with r = p(L) → (q,p′), and γ ∈ Γ, we define γ ⊨ s_{r} iff for all t_{1}⊨ (q_{1},p_{1}),⋯, t_{m}⊨(q_{m},p_{m}), with (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L, we have (p,γ)(t_{1},⋯,t_{m}) ⊨ (q,p′).
Given a set of states S ⊆ S_{p}, and γ ∈ Γ, we define γ ⊨ S ⇔ ∀ s∈ S, γ⊨ s.
Once we have defined the meaning of the states of the automaton, we can define soundness of a transition.
A transition ν = s –[θ]→ S is sound if for every γ ∈ Γ such that θ(γ) ⊨ S, we have γ ⊨ s.
An automaton is sound if all of its transitions are sound. We will later show that a sound automaton does not accept configurations that are not in the backwards reachability set.
We show A′ is sound inductively, hence we must start with the base case and show that A_{p} is sound.
Proof. Recall that A′_{0} = A_{p}: no transitions have yet been added, so we check the transitions of A_{p} directly.
Given r = p(L)→ (q,p), we consider (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L, t_{1},⋯,t_{m}, with for every i, t_{i} ⊨ (q_{i},p_{i}), and γ ⊨ s_{r}. By definition of γ ⊨ s_{r}, we have (p,γ)(t_{1},⋯,t_{m}) ⊨ (q,p). Thus, r is sound.
Given ν = s –[θ]→ S with s not a s_{r}, we consider γ such that θ(γ) ⊨ S. Thus, for every s′ ∈ S, θ(γ) ⊢ s′, and thus, by definition, we have γ ⊢ s. This is the definition of γ ⊨ s, and thus ν is sound.
Given ν = s_{r} –[θ]→ S with r = p(L) → (q,p), we consider γ such that θ(γ) ⊨ S. As we suppose that no state s_{r′} has an incoming transition in η_{p}, we get as previously that γ ⊢ s_{r}. We consider (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L, and t_{1} ⊨ (q_{1},p_{1}),⋯, t_{m} ⊨ (q_{m},p_{m}). By definition, for every i, we have t_{i} ∈ pre_{M}^{*}(t′_{i}) for some t′_{i} with t′_{i} –[Δ_{p}, *]→ (q_{i},p_{i}). Thus, the following run is valid:
(p,γ)(t′_{1},⋯,t′_{m}) –[Δ_{p}, *]→ (p,γ)((q_{1},p_{1}),⋯,(q_{m}, p_{m})) –[r]→ (q,p). 
Thus, (p,γ)(t′_{1},⋯,t′_{m}) ∈ L_{q,p}(A_{p}), and thus (p,γ)(t_{1},⋯,t_{m}) ∈ pre_{M}^{*}(L_{q,p}(A_{p})). By definition, we have (p,γ)(t_{1},⋯,t_{m}) ⊨ (q,p), and we can deduce that γ ⊨ s_{r}, which concludes the proof that ν is sound.
We can now show the inductive case, to show that A′ is sound.
Proof. We prove this property by induction on A′_{i}. A′_{0} = A_{p} is sound by the previous lemma. Suppose A′_{i−1} is sound. We distinguish the possible cases for adding new transitions.
First, suppose r′ = p(L ∩ ⟨ φ ⟩) → (q′,p′) and ν′ = s_{r′} –[θ]→ {s_{r}} were added from an M-rule τ = φ: p –[θ]→ p_{a} and a transition r = p_{a}(L) → (q′,p′) of Δ′_{i−1}. Let us show that ν′ is sound. Consider γ such that θ(γ) ⊨ s_{r}. We want to show that γ ⊨ s_{r′}. Consider t_{1} ⊨ (q_{1},p_{1}),⋯,t_{m} ⊨ (q_{m},p_{m}) with (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L ∩ ⟨ φ ⟩. Thus, for every i, we have t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}) such that t_{i} ∈ pre_{M}^{*}(t′_{i}). As A′_{i−1} is sound, r is sound. Thus we have (p_{a},θ(γ))(t′_{1},⋯,t′_{m}) ⊨ (q′,p′), which means (p_{a},θ(γ))(t′_{1},⋯,t′_{m}) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})). For all i, S(t′_{i}) = p_{i}, as t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}). Thus, we can apply τ to (p,γ)(t′_{1},⋯,t′_{m}) and get (p_{a},θ(γ))(t′_{1},⋯,t′_{m}). Thus, (p,γ)(t′_{1},⋯,t′_{m}) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})), and finally, (p,γ)(t_{1},⋯,t_{m}) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})). Thus, by definition, (p,γ)(t_{1},⋯,t_{m}) ⊨ (q′,p′), which proves that γ ⊨ s_{r′}, and thus ν′ is sound.
Now, suppose r′ = p(L_{1}(q″,p″)^{−1} ∩ ⟨ φ ⟩) → (q′,p′) and ν′ = s_{r′} –[Id]→ {s_{r_{1}},s_{r_{2}}} were added from an M-rule τ = φ: p → p_{a} ⊳ p_{b} and transitions r_{1} = p_{a}(L_{1}) → (q′,p′) and r_{2} = p_{b}(L_{2}) → (q″,p″) of Δ′_{i−1} with ε ∈ L_{2}. Let us show that ν′ is sound. Consider γ such that γ ⊨ s_{r_{1}} and γ ⊨ s_{r_{2}}. We want to show γ ⊨ s_{r′}. Consider t_{1} ⊨ (q_{1},p_{1}),⋯, t_{m} ⊨ (q_{m},p_{m}) with (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L_{1}(q″,p″)^{−1} ∩ ⟨ φ ⟩. Thus, for every i, we have t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}) such that t_{i} ∈ pre_{M}^{*}(t′_{i}). As A′_{i−1} is sound, r_{1} and r_{2} are sound. As ε ∈ L_{2}, we have (p_{b},γ)(∅) ⊨ (q″,p″). Thus, (p_{a},γ)(t′_{1},⋯, t′_{m},(p_{b},γ)(∅)) ⊨ (q′,p′), which means (p_{a},γ)(t′_{1},⋯,t′_{m}, (p_{b},γ)(∅)) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})). For all i, S(t′_{i}) = p_{i}, as t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}). Thus, we can apply τ to (p,γ)(t′_{1},⋯, t′_{m}) and get (p_{a},γ)(t′_{1},⋯, t′_{m}, (p_{b},γ)(∅)). Thus, (p,γ)(t′_{1},⋯,t′_{m}) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})), and finally, (p,γ)(t_{1},⋯,t_{m}) ∈ pre_{M}^{*}(L_{q′,p′}(A_{p})). Thus, by definition, (p,γ)(t_{1},⋯,t_{m}) ⊨ (q′,p′), which proves that γ ⊨ s_{r′}, and thus ν′ is sound.
Thus, A′_{i} is sound, which concludes the proof.
We are now ready to complete the proof by showing that a sound automaton only accepts the correct configurations.
Proof. To show this lemma, it is sufficient to show that if t ∈ L_{q0,p0}(A′), then t ⊨ (q_{0},p_{0}).
First, we show that for every element γ and every s ∈ S′, γ ⊢ s ⇒ γ ⊨ s, by induction on the size of the accepting run. First, suppose s is final; then by definition we have γ ⊨ s for every γ. Suppose now we have γ ⊢ s, such that the run accepting γ from s starts with a transition ν = s –[θ]→ S. By definition, we have θ(γ) ⊢ s′ for every s′ ∈ S. By the induction hypothesis, we have θ(γ) ⊨ s′, and hence θ(γ) ⊨ S. Thus, as ν is sound, we obtain γ ⊨ s.
We now prove that if t ∈ L_{q_{0},p_{0}}(A′), then t ⊨ (q_{0},p_{0}), by induction on the size of the run. First suppose t = (p,γ)(∅). We thus have t –[r]→ (q_{0},p_{0}), with r = p(L) → (q_{0},p_{0}), ε ∈ L and γ ⊢ s_{r}. From the above, we have γ ⊨ s_{r}. Thus, as the transition r is sound by hypothesis, we have t ⊨ (q_{0},p_{0}).
Suppose now t = (p,γ)(t_{1},⋯,t_{m}), and t –[Δ′, *]→ (p,γ)((q_{1},p_{1}),⋯,(q_{m}, p_{m})) –[r]→ (q_{0},p_{0}), with r = p(L) → (q_{0},p_{0}), (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L and γ ⊢ s_{r}. From the above, we have γ ⊨ s_{r}. By the induction hypothesis, we have t_{i} ⊨ (q_{i},p_{i}) for every i. Thus, as the transition r is sound by hypothesis, we have t ⊨ (q_{0},p_{0}).
Thus, for every state (q,p), we have that L_{q,p}(A′) ⊆ pre_{M}^{*}(L_{q,p}(A_{p})).
The proof of completeness of A′ is conceptually simpler than the soundness proof. It proceeds by a straightforward induction over the length of the run showing a configuration is in the backwards reachability set. In the base case the configuration is accepted by A_{p} and the proof is immediate. In the inductive case, we have that t reaches t′ by a single transition, and an accepting run of A′ over t′. We then inspect the transition from t to t′ and show that our construction of A′ ensures that we can modify the accepting run of t′ to obtain an accepting run of t.
Proof. The proof of this proposition is by induction: first, recall that L_{q_{0},p_{0}}(A_{p}) is included in L_{q_{0},p_{0}}(A′). We then show that a predecessor by one M-rule of a configuration accepted by A′ is also accepted by A′. In this last step, the stability condition plays a crucial role, to show that the transition introduced by the automaton for this rule is actually applicable to accept the predecessor t.
We choose t such that t ∈ pre_{M}^{*}(L_{(q0,p0)}(A_{p})). We show by induction on the size of the run that t ∈ L_{(q0,p0)}(A′).
If t ∈ L_{(q0,p0)}(A_{p}), then as η_{p} ⊆ η′ and Δ_{p}⊆ Δ′, we have the result.
We choose t′, such that t ∈ pre_{M}(t′). By induction hypothesis, t′ ∈ L_{(q0,p0)}(A′). We have two cases to consider.
First, suppose t = C[(p,γ)(t_{1},⋯,t_{m})] and t′ = C[(p_{a},θ(γ))(t_{1},⋯,t_{m})] is obtained from t by an M-rule τ = φ: p –[θ]→ p_{a}. The accepting run of t′ has the form

t′ –[Δ′, *]→ C[(p_{a},θ(γ)) ((q_{1},p_{1}),⋯, (q_{m},p_{m}))] –[r]→ C[(q′,p′)] –[Δ′, *]→ (q_{0},p_{0})

with r = p_{a}(L) → (q′,p′) and θ(γ) ⊢ s_{r}.
Let us show that (q_{1},p_{1})⋯(q_{m}, p_{m}) ∈ ⟨ φ ⟩. For every i, we have t_{i} –[Δ′, *]→ (q_{i},p_{i}). From the proof of the soundness of A′, we get that t_{i} ∈ pre_{M}^{*}(L_{q_{i},p_{i}}(A_{p})). Thus, there is t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}) such that S(t′_{i}) = p_{i} and t_{i} ∈ pre_{M}^{*}(t′_{i}). Thus, we have ρ_{δ}^{*}(S(t_{i}),S(t′_{i})). As, by hypothesis, we can apply τ to t, we have S(t_{1})⋯S(t_{m}) ∈ φ. As φ is ρ_{δ}-stable, we deduce that p_{1}⋯p_{m} ∈ φ, and thus (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ ⟨ φ ⟩.
By construction of A′, we have r′ = p(L ∩ ⟨ φ ⟩) → (q′,p′) in Δ′ and ν′ = s_{r′} –[θ]→ {s_{r}} in η′. As θ(γ) ⊢ s_{r}, we get that γ ⊢ s_{r′}. As we moreover have (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L ∩ ⟨ φ ⟩, the following run is valid:
t –[Δ′, *]→ C[(p,γ)((q_{1},p_{1}),⋯,(q_{m}, p_{m}))] –[r′]→ C[(q′,p′)] –[Δ′, *]→ (q_{0}, p_{0})
Thus t ∈ L_{q0,p0}(A′).

Now, suppose t = C[(p,γ)(t_{1},⋯,t_{m})] and t′ = C[(p_{a},γ)(t_{1},⋯,t_{m},(p_{b},γ)(∅))] is obtained from t by an M-rule τ = φ: p → p_{a} ⊳ p_{b}, with an accepting run in which the root of t′ is read by r_{1} = p_{a}(L_{1}) → (q′,p′) on the word of children states (q_{1},p_{1})⋯(q_{m},p_{m})(q″,p″) ∈ L_{1}, the new child is read by r_{2} = p_{b}(L_{2}) → (q″,p″) with ε ∈ L_{2}, and γ ⊢ s_{r_{1}} and γ ⊢ s_{r_{2}}. Let us show that (q_{1},p_{1})⋯(q_{m}, p_{m}) ∈ ⟨ φ ⟩. For every i, we have t_{i} –[Δ′, *]→ (q_{i},p_{i}). From the proof of the soundness of A′, we get that t_{i} ∈ pre_{M}^{*}(L_{q_{i},p_{i}}(A_{p})). Thus, there is t′_{i} ∈ L_{q_{i},p_{i}}(A_{p}) such that S(t′_{i}) = p_{i} and t_{i} ∈ pre_{M}^{*}(t′_{i}). Thus, we have ρ_{δ}^{*}(S(t_{i}),S(t′_{i})). As, by hypothesis, we can apply τ to t, we have S(t_{1})⋯S(t_{m}) ∈ φ. As φ is ρ_{δ}-stable, we deduce that p_{1}⋯p_{m} ∈ φ, and thus (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ ⟨ φ ⟩.
By construction of A′, we have r′ = p(L_{1}(q″,p″)^{−1} ∩ ⟨ φ ⟩) → (q′,p′) in Δ′ and ν′ = s_{r′} –[Id]→ {s_{r_{1}},s_{r_{2}}} in η′. As γ ⊢ s_{r_{1}} and γ ⊢ s_{r_{2}}, we get that γ ⊢ s_{r′}. As we moreover have (q_{1},p_{1})⋯(q_{m},p_{m}) ∈ L_{1}(q″,p″)^{−1} ∩ ⟨ φ ⟩, the following run is valid:
t –[Δ′, *]→ C[(p,γ)((q_{1},p_{1}),⋯,(q_{m}, p_{m}))] –[r′]→ C[(q′,p′)] –[Δ′, *]→ (q_{0}, p_{0})
Thus t ∈ L_{q0,p0}(A′).
Hence the automaton is complete.
We have shown that the saturation algorithm for constrained dynamic pushdown networks introduced by Bouajjani et al. [2] generalises not only to pushdown networks, but to networks of any system for which the alternating reachability problem is decidable. In particular, this includes collapsible pushdown systems, or higher-order recursion schemes, which thus allows the analysis of a class of concurrent higher-order programs.
We showed that, given a regular target set of configurations, the backwards reachability set is computable and is also regular. We make some remarks on our notion of regularity. In order to accept a configuration, an automaton must perform several alternating reachability checks. This is not regular in the conventional sense. However, for alternating pushdown systems, and indeed alternating collapsible pushdown systems, the backwards reachability set of a regular set of stacks is known to have a regular representation [1, 4]. Thus, we can replace the alternating reachability tests with regular automata which run over the stack contents labelling each node. We thus obtain a truly regular representation of the backwards reachability sets of CDTNs over these systems.
The natural avenue of future work is to generalise our model further, to permit more intricate communication between processes. One option is to allow child nodes to inspect the internal state of their parent processes. In general this leads to an undecidable model; it is an open problem to discover a form of interesting upwards communication that is decidable. Similarly, we may seek to relax the stability constraint. One option is to use the stability constraint defined by Touili and Atig [37], where internal states are grouped into mutually reachable equivalence classes: any run moves through a bounded number of equivalence classes, and we can then insist that constraints are over the equivalence classes rather than individual states. This is reminiscent of context-bounded analysis [32]. We can adapt our construction to allow downwards and upwards communication of this form, but it is not clear whether Λ remains finite.