Learning Based Assume-Guarantee Reasoning

Corina Păsăreanu
Perot Systems Government Services, NASA Ames Research Center

Joint work with:
Dimitra Giannakopoulou (RIACS/NASA Ames)
Howard Barringer (U. of Manchester)
Jamie Cobleigh (U. of Massachusetts Amherst/MathWorks)
Mihaela Gheorghiu (U. of Toronto)
Thanks
Eric Madelaine
Monique Simonetti
INRIA
Context

Objective: an integrated environment that supports software development and verification/validation throughout the lifecycle; detect integration problems early, prior to coding.

Approach:
– Compositional ("divide and conquer") verification, for increased scalability, at design level
– Use design-level artifacts to improve/aid coding and testing

[Figure: software lifecycle (Requirements, Design, Coding, Testing, Deployment); compositional verification applies to design models M1, M2 and their implementations C1, C2; the cost of detecting/fixing defects increases across the lifecycle, so integration issues are handled early]
Compositional Verification

Does the system made up of M1 and M2 satisfy property P?
– Checking P on the entire system: too many states!
– Use the natural decomposition of the system into its components to break up the verification task

Check components in isolation:
– Does M1 satisfy P?
– Typically a component is designed to satisfy its requirements in specific contexts / environments

Assume-guarantee reasoning:
– Introduces assumption A representing M1's "context"
Assume-Guarantee Rules

Reason about triples: ⟨A⟩ M ⟨P⟩
The formula is true if whenever M is part of a system that satisfies A, then the system must also guarantee P.

Simplest assume-guarantee rule – ASYM (premise 2 "discharges" the assumption):
1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 ⟨A⟩
Conclusion: ⟨true⟩ M1 || M2 ⟨P⟩

How do we come up with the assumption? (usually a difficult manual process)
Solution: use a learning algorithm.
Outline
Framework for learning based assume-guarantee reasoning [TACAS'03]
– Automates rule ASYM
Extension with symmetric [SAVCBS’03] and circular rules
Extension with alphabet refinement [TACAS’07]
Implementation and experiments
Other extensions
Related work
Conclusions
Formalisms

Components modeled as finite state machines (FSMs)
– FSMs assembled with parallel composition operator "||"
• Synchronizes shared actions, interleaves remaining actions

A safety property P is an FSM
– P describes all legal behaviors
– Perr = complement of P
• determinize & complete P with an "error" state;
• bad behaviors lead to error
– Component M satisfies P iff the error state is unreachable in (M || Perr)

Assume-guarantee reasoning
– Assumptions and guarantees are FSMs
– ⟨A⟩ M ⟨P⟩ holds iff the error state is unreachable in (A || M || Perr)
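As a concrete illustration of this formalism (a minimal sketch, not the LTSA implementation; the class/function names and the "err" state label are assumptions), FSMs, "||" composition, and the error-reachability check can be written as:

```python
# Minimal sketch: FSMs as deterministic labeled transition systems, "||"
# composition that synchronizes shared actions and interleaves the rest, and
# the check "M satisfies P iff 'err' of P_err is unreachable in M || P_err".
from collections import deque

class FSM:
    def __init__(self, alphabet, transitions, init):
        # transitions: {(state, action): next_state}
        self.alphabet = set(alphabet)
        self.trans = dict(transitions)
        self.init = init

def compose(m1, m2):
    """Parallel composition m1 || m2: shared actions synchronize, others interleave."""
    alphabet = m1.alphabet | m2.alphabet
    init = (m1.init, m2.init)
    trans, seen, frontier = {}, {init}, deque([init])
    while frontier:
        s1, s2 = frontier.popleft()
        for a in alphabet:
            n1 = m1.trans.get((s1, a)) if a in m1.alphabet else s1
            n2 = m2.trans.get((s2, a)) if a in m2.alphabet else s2
            if n1 is None or n2 is None:      # a component blocks this action
                continue
            trans[((s1, s2), a)] = (n1, n2)
            if (n1, n2) not in seen:
                seen.add((n1, n2))
                frontier.append((n1, n2))
    return FSM(alphabet, trans, init)

def satisfies(m, p_err):
    """M |= P iff the error state of P_err is unreachable in (M || P_err)."""
    sys = compose(m, p_err)
    reachable, frontier = {sys.init}, deque([sys.init])
    while frontier:
        s = frontier.popleft()
        for (src, _a), dst in sys.trans.items():
            if src == s and dst not in reachable:
                reachable.add(dst)
                frontier.append(dst)
    return all(state[-1] != "err" for state in reachable)
```

Checking ⟨A⟩ M ⟨P⟩ then amounts to calling satisfies(compose(A, M), P_err) under these assumptions.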
Example

[Figure: component Input (actions in, send, ack) composed with "||" with component Output (actions send, out, ack), and the property Order with its error version Order_err over the actions in, out]
Learning for Assume-Guarantee Reasoning

Use an off-the-shelf learning algorithm to build an appropriate assumption for rule ASYM.

The process is iterative:
– Assumptions are generated by querying the system, and are gradually refined
– Queries are answered by model checking
– Refinement is based on counterexamples obtained by model checking
– Termination is guaranteed

1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 ⟨A⟩
Conclusion: ⟨true⟩ M1 || M2 ⟨P⟩
Learning with L*

L* algorithm by Angluin, improved by Rivest & Schapire.

Learns an unknown regular language U (over alphabet Σ) and produces a DFA A such that L(A) = U.

Uses a teacher to answer two types of questions about U:
– Membership query: is string s in U? The teacher answers true or false.
– Conjecture: for candidate DFA Ai, is L(Ai) = U? If false, the teacher returns a counterexample string t, which L* uses to add or remove strings from its conjecture; if true, L* outputs DFA A such that L(A) = U.
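A minimal sketch of the teacher interface L* relies on (names are illustrative; L* itself, with its observation table, is not shown):

```python
from abc import ABC, abstractmethod
from typing import Optional, Tuple

class Teacher(ABC):
    """Answers the two kinds of questions L* asks about the unknown language U."""

    @abstractmethod
    def membership(self, s: Tuple[str, ...]) -> bool:
        """Membership query: is the string s in U?"""

    @abstractmethod
    def equivalence(self, candidate_dfa) -> Optional[Tuple[str, ...]]:
        """Conjecture: is L(candidate_dfa) = U?
        Return None if yes; otherwise return a counterexample string, which L*
        uses to add/remove strings and refine its conjecture."""
```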
Learning Assumptions

Use L* to generate candidate assumptions.
Assumption alphabet: αA = (αM1 ∪ αP) ∩ αM2.

The teacher is implemented with a model checker (a sketch follows below):
– Query (string s): model check ⟨s⟩ M1 ⟨P⟩ and answer true or false accordingly.
– Conjecture (candidate Ai):
• Check premise 1: ⟨Ai⟩ M1 ⟨P⟩. If it fails with counterexample t, return t↾αA (t projected onto the assumption alphabet) to L*, which removes it from the conjectured language.
• If premise 1 holds, check premise 2: ⟨true⟩ M2 ⟨Ai⟩. If it holds, P holds in M1 || M2.
• If premise 2 fails with counterexample t, analyze t: model check ⟨t↾αA⟩ M1 ⟨P⟩. If this fails, P is violated in M1 || M2 (real counterexample); otherwise return t↾αA to L*, which adds it to the conjectured language.

1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 ⟨A⟩
Conclusion: ⟨true⟩ M1 || M2 ⟨P⟩
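An illustrative sketch of such a teacher (the names AsymTeacher, check_trace, check_prem1, check_prem2 and the tuple return values are assumptions, not the LTSA implementation; the model-checking calls are injected):

```python
# check_trace(s)  -> bool:        does <s> M1 <P> hold?
# check_prem1(Ai) -> (bool, cex): does <Ai> M1 <P> hold?
# check_prem2(Ai) -> (bool, cex): does <true> M2 <Ai> hold?
# Counterexample traces are sequences of actions.

def project(trace, alphabet_A):
    """Restrict a counterexample trace to the assumption alphabet."""
    return tuple(a for a in trace if a in alphabet_A)

class AsymTeacher:
    def __init__(self, alphabet_A, check_trace, check_prem1, check_prem2):
        self.alphabet_A = alphabet_A
        self.check_trace = check_trace
        self.check_prem1 = check_prem1
        self.check_prem2 = check_prem2

    def membership(self, s):
        # s belongs to the assumption iff it does not drive M1 into violating P
        return self.check_trace(s)

    def conjecture(self, Ai):
        ok, t = self.check_prem1(Ai)                 # premise 1: <Ai> M1 <P>
        if not ok:
            return ("remove", project(t, self.alphabet_A))
        ok, t = self.check_prem2(Ai)                 # premise 2: <true> M2 <Ai>
        if ok:
            return ("holds", Ai)                     # P holds in M1 || M2
        t_A = project(t, self.alphabet_A)            # counterexample analysis
        if not self.check_trace(t_A):
            return ("violated", t)                   # real violation of P
        return ("add", t_A)                          # refine the assumption
```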
Characteristics

– Terminates with the minimal automaton A for U
– Generates DFA candidates Ai with |A1| < |A2| < … < |A|
– Produces at most n candidates, where n = |A|
– # queries: O(kn² + n log m), where m is the size of the largest counterexample and k is the size of the alphabet
Example: Computed Assumption

[Figure: the Input, Output, and Order_err FSMs of the earlier example, together with the computed assumption A2 over the actions out, send, ack]
Extension to n components

To check if M1 || M2 || … || Mn satisfies P:
– decompose it into M1 and M'2 = M2 || … || Mn
– apply the learning framework recursively for the 2nd premise of the rule (a sketch of the recursion follows below)
– A plays the role of the property

At each recursive invocation for Mj and M'j = Mj+1 || … || Mn, use learning to compute Aj such that:
– ⟨Aj⟩ Mj ⟨Aj-1⟩ is true
– ⟨true⟩ Mj+1 || … || Mn ⟨Aj⟩ is true

1. ⟨A⟩ M1 ⟨P⟩
2. ⟨true⟩ M2 || … || Mn ⟨A⟩
Conclusion: ⟨true⟩ M1 || M2 || … || Mn ⟨P⟩
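A minimal sketch of this recursion, assuming two injected helpers (model_check for a direct check of the last component, and learn_and_check for the 2-component learning loop, which calls back to discharge premise 2); the names are illustrative:

```python
# - model_check(M, prop)               : directly checks <true> M <prop>
# - learn_and_check(M, prop, discharge): runs the 2-component learning loop,
#   learning A with <A> M <prop> and calling discharge(A) to decide premise 2.

def check_rec(components, prop, model_check, learn_and_check):
    """Check <true> M_j || ... || M_n <prop> using rule ASYM recursively."""
    first, rest = components[0], components[1:]
    if not rest:
        return model_check(first, prop)   # base case: check the last component directly
    # Learn an assumption A for 'first'; premise 2 (<true> rest <A>) is
    # discharged by a recursive call, with A playing the role of the property.
    return learn_and_check(first, prop,
                           lambda A: check_rec(rest, A, model_check, learn_and_check))
```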
Symmetric Rules
Compute assumptions for both components at the same time
– Early termination; smaller assumptions
Example symmetric rule – SYM
– coAi = complement of Ai, for i = 1, 2
– Requirements on alphabets: αP ⊆ αM1 ∪ αM2; αAi ⊆ (αM1 ∩ αM2) ∪ αP, for i = 1, 2
The rule is sound and complete
– Completeness is needed to guarantee termination
– Straightforward extension to n components
1. ⟨A1⟩ M1 ⟨P⟩
2. ⟨A2⟩ M2 ⟨P⟩
3. L(coA1 || coA2) ⊆ L(P)
Conclusion: ⟨true⟩ M1 || M2 ⟨P⟩

(Premise 3 is a language-containment check; see the sketch below.)
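Assuming the complements coA1, coA2 and the error automaton P_err are available as FSMs, premise 3 reduces to the same reachability question used on the Formalisms slide (reusing the illustrative compose/satisfies sketch):

```python
def premise3_holds(coA1, coA2, P_err):
    """L(coA1 || coA2) ⊆ L(P) iff the 'err' state of P_err is unreachable in
    coA1 || coA2 || P_err (compose/satisfies are from the earlier sketch)."""
    return satisfies(compose(coA1, coA2), P_err)
```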
Learning Framework for Rule SYM

Two instances of L* run side by side: one learns A1, with a teacher that checks ⟨A1⟩ M1 ⟨P⟩, and one learns A2, with a teacher that checks ⟨A2⟩ M2 ⟨P⟩. When both premises hold, premise 3, L(coA1 || coA2) ⊆ L(P), is checked:
– if it holds, P holds in M1 || M2;
– otherwise, counterexample analysis either reports that P is violated in M1 || M2, or returns counterexamples to the L* instances (adding/removing strings) to refine A1 and A2.
Circular Rule
Rule CIRC – from [Grumberg & Long – Concur'91]

Similar to rule ASYM applied recursively to 3 components
– First and last component coincide
– Hence the learning framework is similar (see the snippet below)

Straightforward extension to n components

1. ⟨A1⟩ M1 ⟨P⟩
2. ⟨A2⟩ M2 ⟨A1⟩
3. ⟨true⟩ M1 ⟨A2⟩
Conclusion: ⟨true⟩ M1 || M2 ⟨P⟩
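Reusing the illustrative check_rec sketch from the n-component extension (helper names are assumptions), the slide's observation can be written as:

```python
def check_circ(M1, M2, P, model_check, learn_and_check):
    """Rule CIRC viewed as rule ASYM applied recursively to M1, M2, M1
    (first and last component coincide); check_rec is from the sketch above."""
    return check_rec([M1, M2, M1], P, model_check, learn_and_check)
```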
Outline
Framework for assume-guarantee reasoning [TACAS'03]
– Uses a learning algorithm to compute assumptions
– Automates rule ASYM
Extension with symmetric [SAVCBS'03] and circular rules
Extension with alphabet refinement [TACAS'07]
Implementation and experiments
Other extensions
Related work
Conclusions
Assumption Alphabet Refinement

The assumption alphabet was fixed during learning: αA = (αM1 ∪ αP) ∩ αM2.

[SPIN'06]: a subset of this alphabet
– May be sufficient to prove the desired property
– May lead to a smaller assumption

How do we compute a good subset of the assumption alphabet?

Solution – iterative alphabet refinement (see the sketch below):
• Start with a small (empty) alphabet
• Add actions as necessary
• Actions to add are discovered by analysis of the counterexamples obtained from model checking
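An illustrative sketch of this refinement loop (the helper names and return conventions are assumptions, not a specific tool's API):

```python
# - run_learning(alphabet) runs the ASYM learning framework restricted to the
#   given assumption alphabet; returns ("holds", A), or ("cex", trace) when it
#   reports a (possibly spurious) property violation.
# - analyze_cex(trace, alphabet) replays the counterexample with the full
#   interface alphabet; returns ("real", trace) for a genuine violation, or
#   ("refine", actions) with interface actions that should be added.

def verify_with_alphabet_refinement(full_interface_alphabet, run_learning, analyze_cex):
    alphabet = set()                                  # start with the empty alphabet
    while True:
        status, result = run_learning(frozenset(alphabet))
        if status == "holds":
            return ("holds", result)                  # P holds in M1 || M2
        kind, info = analyze_cex(result, alphabet)
        if kind == "real":
            return ("violated", info)                 # genuine violation of P
        new_actions = (set(info) & set(full_interface_alphabet)) - alphabet
        assert new_actions, "refinement must add at least one action to progress"
        alphabet |= new_actions                       # refine and try again
```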
Implementation & Experiments

Implementation in the LTSA tool
– Learning using rules ASYM, SYM and CIRC
– Supports reasoning about two and n components
– Alphabet refinement for all the rules

Experiments
– Compare effectiveness of the different rules
– Measure the effect of alphabet refinement
– Measure scalability compared to non-compositional verification
Case Studies

Model of Ames K9 Rover Executive
– Executes flexible plans for autonomy
– Consists of the main Executive thread and the ExecCondChecker thread for monitoring state conditions
– Checked a property of a specific shared variable: if the Executive reads its value, the ExecCondChecker should not read it before the Executive clears it

Model of JPL MER Resource Arbiter
– Local management of resource contention between resource consumers (e.g. science instruments, communication systems)
– Consists of k user threads and one server thread (arbiter)
– Checked mutual exclusion between resources
Results

Rule ASYM is more effective than rules SYM and CIRC.
The recursive version of ASYM is the most effective
– When reasoning about more than two components

Alphabet refinement improves learning based assume-guarantee verification significantly.
Backward refinement is slightly better than the other refinement heuristics.
Learning based assume-guarantee reasoning
– Can incur significant time penalties
– Is not always better than non-compositional (monolithic) verification
– Is sometimes significantly better in terms of memory
Analysis Results

              ASYM                      ASYM + refinement         Monolithic
Case          |A|   Mem      Time       |A|   Mem     Time        Mem     Time
MER 2         40    8.65     21.90      6     1.23    1.60        1.04    0.04
MER 3         501   240.06   --         8     3.54    4.76        4.05    0.111
MER 4         273   101.59   --         10    9.61    13.68       14.29   1.46
MER 5         200   78.10    --         12    19.03   35.23       14.24   27.73
MER 6         162   84.95    --         14    47.09   91.82       --      600
K9 Rover      11    2.65     1.82       4     2.37    2.53        6.27    0.015

|A| = assumption size; Mem = memory (MB); Time in seconds; -- = reached the time limit (30 min) or the memory limit (1 GB)
Other Extensions

Design-level assumptions used to check implementations in an assume-guarantee way [ICSE'04]
– Allows for detection of integration problems during unit verification/testing

Extension of the SPIN model checker to perform learning based assume-guarantee reasoning [SPIN'06]
– Our approach can use any model checker

Similar extension for the Ames Java PathFinder tool – ongoing work
– Support compositional reasoning about Java code/UML statecharts
– Support for interface synthesis: compute an assumption for M1 for any M2

Compositional verification of C code – collaboration with CMU
– Uses predicate abstraction to extract FSMs from C components

More info on my webpage
– http://ase.arc.nasa.gov/people/pcorina/
Applications

Support for compositional verification
– Property decomposition
– Assumptions for assume-guarantee reasoning

Assumptions may be used for component documentation

Software patches
– An assumption used as a "patch" that corrects a component's errors

Runtime monitoring of the environment
– The assumption monitors the actual environment during deployment
– May trigger recovery actions

Interface synthesis

Component retrieval, component adaptation, sub-module construction, incremental re-verification, etc.
Related Work

Assume-guarantee frameworks
– Jones 83; Pnueli 84; Clarke, Long & McMillan 89; Grumberg & Long 91; …
– Tool support: MOCHA; Calvin (static checking of Java); …

We were the first to propose learning based assume-guarantee reasoning; since then, other frameworks have been developed:
– Alur et al. 05, 06 – symbolic BDD implementation for NuSMV (extended with hyper-graph partitioning for model decomposition)
– Sharygina et al. 05 – checks component compatibility after component updates
– Chaki et al. 05 – checking of simulation conformance (rather than trace inclusion)
– Sinha & Clarke 07 – SAT based compositional verification using lazy learning
– …

Interface synthesis using learning: Alur et al. 05

Learning with optimal alphabet refinement
– Developed independently by Chaki & Strichman 07

CEGAR – counterexample guided abstraction refinement
– Our alphabet refinement is similar in spirit
– Important differences:
• Alphabet refinement works on actions, rather than predicates
• It is applied compositionally, in an assume-guarantee style
• It computes under-approximations (of assumptions) rather than behavioral over-approximations

Permissive interfaces – Henzinger et al. 05
– Uses CEGAR to compute interfaces
Conclusion and Future Work

Learning based assume-guarantee reasoning
– Uses L* for automatic derivation of assumptions
– Applies to FSMs and safety properties
– Asymmetric, symmetric, and circular rules
– Can accommodate other rules

Alphabet refinement to compute small assumption alphabets that are sufficient for verification

Experiments
– Significant memory gains
– Can incur serious time overhead

Should be viewed as a heuristic
– To be used in conjunction with other techniques, e.g. abstraction

Future work
– Look beyond safety (learning for infinitary regular sets)
– Optimizations to overcome time overhead
• Re-use learning results across refinement stages
– CEGAR to compute assumptions as abstractions of environments
– More experiments