Algebra of Concurrent Programming

Tony Hoare, Cambridge 2011


Page 1: Algebra of Concurrent Programming

Algebra of Concurrent Programming

Tony Hoare

Cambridge 2011

Page 2: Algebra of Concurrent Programming

With ideas from

• Ian Wehrman
• John Wickerson
• Stephan van Staden
• Peter O’Hearn
• Bernhard Moeller
• Georg Struth
• Rasmus Petersen
• …and others

Page 3: Algebra of Concurrent Programming

Subject matter: designs

• variables (p, q, r) stand for computer programs, designs, specifications,…

• they all describe what happens inside/around a computer that executes a given program.

• The program itself is the most precise.
• The specification is the most abstract.
• Designs come in between.

Page 4: Algebra of Concurrent Programming

Examples

• Postcondition:– execution ends with array A sorted

• Conditional correctness:– if execution ends, it ends with A sorted

• Precondition: – execution starts with x even

• Program: x := x+1
– the final value of x is one greater than the initial

Page 5: Algebra of Concurrent Programming

Examples

• Safety:– There are no buffer overflows

• Termination:
– execution is finite (i.e., always ends)

• Liveness:– no infinite internal activity (livelock)

• Fairness:– a response is always given to each request

• Probability:
– the ratio of a’s to b’s tends to 1 with time

Page 6: Algebra of Concurrent Programming

Unification

• Same laws apply to programs, designs, specifications

• Same laws apply to many forms of correctness.

• Tools based on the laws serve many purposes.
• Distinctions can be drawn later
– when the need for them is apparent

Page 7: Algebra of Concurrent Programming

Refinement: p ⊑ q
• Everything described by p is also described by q, e.g.,
– spec p implies spec q
– prog p satisfies spec q
– prog p more determinate than prog q

• stepwise development of a spec is– spec ⊒ design ⊒ program

• stepwise analysis of a program is– program ⊑ design ⊑ spec

Page 8: Algebra of Concurrent Programming

Various terminology for p ⊑ q

• below / above
• lesser / greater
• stronger / weaker
• lower bound / upper bound
• more precise / more abstract
• more deterministic / more non-deterministic
• included in (sets) / containing (sets)
• antecedent (⇒) / consequent (pred)

Page 9: Algebra of Concurrent Programming

Law: ⊑ is a partial order

• ⊑ is transitive
– p ⊑ r if p ⊑ q and q ⊑ r
– needed for stepwise development/analysis

• ⊑ is antisymmetric
– p = r if p ⊑ r and r ⊑ p
– needed for abstraction

• ⊑ is reflexive
– p ⊑ p
– for convenience

Page 10: Algebra of Concurrent Programming

Binary operator: p ; q

• sequential composition of p and q
• each execution of p;q consists of
– all events x from an execution of p
– and all events y from an execution of q
• subject to an ordering constraint, either
– strong or weak
– interruptible or inhibited

Page 11: Algebra of Concurrent Programming

alternative constraints on p;q
• strong sequence:
– all x from p must precede all y from q
• weak sequence:
– no y from q can precede any x from p
• interruptible:
– other threads may interfere between x and y
• separated:
– updates to private variables are protected.
• all our algebraic laws will apply to each alternative

Page 12: Algebra of Concurrent Programming

Hoare triple: {p} q {r}
• defined as p;q ⊑ r
– starting in the final state of an execution of p, q ends in the final state of some execution of r
– p and r may be arbitrary designs.

• example: {..x+1 ≤ n} x := x+1 {..x ≤ n}
• where ..b (finally b) describes all executions that end in a state satisfying a single-state predicate b.

Page 13: Algebra of Concurrent Programming

monotonicity

• Law (; is monotonic w.r.t. ⊑):
– p;q ⊑ p’;q if p ⊑ p’
– p;q ⊑ p;q’ if q ⊑ q’
– compare: addition of numbers

• Rule (of consequence):– p’ ⊑ p & {p} q {r} & r ⊑ r’ implies {p’} q {r’}

• Rule is interprovable with first law
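In a finite-language reading of the algebra (the model of a later slide: designs as sets of strings, ; as concatenation, ⊑ as inclusion), both the law and the rule can be instantiated and checked directly. A minimal Python sketch; the names `seq`, `refines` and `triple` are mine, not from the slides:

```python
def seq(p, q):
    """p ; q -- lifted concatenation of languages (sets of strings)."""
    return {s + t for s in p for t in q}

def refines(p, q):
    """p ⊑ q -- language inclusion."""
    return p <= q

def triple(p, q, r):
    """Hoare triple {p} q {r}, defined as p;q ⊑ r."""
    return refines(seq(p, q), r)

# Law: ; is monotonic w.r.t. ⊑ in its first argument.
p, p2, q = {"a"}, {"a", "b"}, {"c"}
assert refines(p, p2)
assert refines(seq(p, q), seq(p2, q))

# Rule of consequence: p' ⊑ p & {p} q {r} & r ⊑ r' implies {p'} q {r'}.
pre, prog, post = {"x", "y"}, {"z"}, {"xz", "yz"}
assert triple(pre, prog, post)
stronger_pre, weaker_post = {"x"}, post | {"w"}
assert refines(stronger_pre, pre) and refines(post, weaker_post)
assert triple(stronger_pre, prog, weaker_post)   # the rule's conclusion
```

Each assertion is one instance of the law, not a proof of it; the algebra guarantees the pattern for all designs.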

Page 14: Algebra of Concurrent Programming

associativity

• Law (; is associative) :– (p;q);q’ = p;(q;q’)

• Rule (sequential composition):
– {p} q {s} & {s} q’ {r} implies {p} q;q’ {r}

• half the law interprovable from rule

Page 15: Algebra of Concurrent Programming

Unit (skip): ɛ

• a program that does nothing
• Law (ɛ is the unit of ;):
– p ; ɛ = p = ɛ ; p

• Rule (nullity)
– {p} ɛ {p}

• a quarter of the law is interprovable from the Rule

Page 16: Algebra of Concurrent Programming

concurrent composition: p | q

• each execution of (p|q) consists of – all events x of an execution of p,– and all events y of an execution of q

• same laws apply to alternatives:
– interleaving: x precedes or follows y
– true concurrency: x neither precedes nor follows y
– separation: x and y independent

• Laws: | is associative, commutative and monotonic

Page 17: Algebra of Concurrent Programming

Separation Logic
• Law (locality of ; w.r.t. |):
– (s|p) ; q ⊑ s | (p;q) (left locality)
– p ; (q|s) ⊑ (p;q) | s (right locality)

• Rule (frame) :– {p} q {r} implies {p|s} q {r|s}

• Rule interprovable with left locality

Page 18: Algebra of Concurrent Programming

Concurrency law
• Law (; exchanges with |)
– (p|q) ; (p’|q’) ⊑ (p;p’) | (q;q’)
– like the exchange law of category theory

• Rule (| compositional)
– {p} q {r} & {p’} q’ {r’} implies {p|p’} q|q’ {r|r’}

• Rule interprovable with the law

Page 19: Algebra of Concurrent Programming

p|q ; p’|q’

(figure: the four blocks p, q, p’, q’ arranged in a square, read by columns)

Page 20: Algebra of Concurrent Programming

p|q ; p’|q’ ⊑ p;p’ | q;q’

(figure: the same square read by rows)

Page 21: Algebra of Concurrent Programming

Regular language model

• p, q, r, … are languages
– descriptions of executions of a finite state machine

• p ⊑ q is inclusion of languages
• p;q is (lifted) concatenation of strings
– i.e., {st | s ∊ p & t ∊ q}

• p|q is (lifted) interleaving of strings
• ɛ = {<>} (only the empty string)
• “c” = {<c>} (only the string “c”)
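This model can be written down directly in Python, with languages as finite sets of strings. A sketch under the obvious finiteness assumption; the helper names are mine:

```python
def seq(p, q):
    """p ; q : lifted concatenation, {st | s in p and t in q}."""
    return {s + t for s in p for t in q}

def interleavings(s, t):
    """All interleavings of two strings."""
    if not s or not t:
        return {s + t}
    return ({s[0] + r for r in interleavings(s[1:], t)} |
            {t[0] + r for r in interleavings(s, t[1:])})

def par(p, q):
    """p | q : lifted interleaving of strings."""
    return {r for s in p for t in q for r in interleavings(s, t)}

EPS = {""}                    # the unit ɛ: only the empty string
def atom(c):                  # "c" = { <c> }
    return {c}

a, b = atom("a"), atom("b")
assert seq(a, b) == {"ab"}
assert par(a, b) == {"ab", "ba"}
assert seq(EPS, a) == a == seq(a, EPS)   # ɛ is the unit of ;
assert seq(a, b) <= par(a, b)            # in this model, p;q ⊑ p|q
```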

Page 22: Algebra of Concurrent Programming

Left locality

• Theorem: (s|p) ; q ⊑ s | (p;q)
• Proof:
– in lhs: s interleaves with just p, and all of q comes at the end
– in rhs: s interleaves with all of p;q
– so lhs is a special case of rhs

• example: p s s ; q q q ⊑ p s q s q q

Page 23: Algebra of Concurrent Programming

Exchange

• Theorem: (p|q) ; (p’|q’) ⊑ (p;p’) | (q;q’)
– in lhs: all of p and q comes before all of p’ and q’.
– in rhs: the end of p may interleave with q’, or the start of p’ with q
– the lhs is a special case of the rhs.

• example: p q p ; q’ p’ q’ ⊑ p q q’ p p’ q’

Page 24: Algebra of Concurrent Programming

Conclusion

• regular expressions satisfy all our laws for ⊑ , ; , and |

• and for other operators introduced later
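The claims are easy to spot-check mechanically in the finite-language reading of the model. A small Python sketch (the helper names are mine), testing locality and exchange over a few sample languages:

```python
from itertools import product

def seq(p, q):
    """p ; q : lifted concatenation."""
    return {s + t for s in p for t in q}

def interleavings(s, t):
    """All interleavings of two strings."""
    if not s or not t:
        return {s + t}
    return ({s[0] + r for r in interleavings(s[1:], t)} |
            {t[0] + r for r in interleavings(s, t[1:])})

def par(p, q):
    """p | q : lifted interleaving."""
    return {r for s in p for t in q for r in interleavings(s, t)}

samples = [{"a"}, {"ab"}, {"", "b"}]
for s, p, q in product(samples, repeat=3):
    # left locality: (s|p) ; q ⊑ s | (p;q)
    assert seq(par(s, p), q) <= par(s, seq(p, q))
    # right locality: p ; (q|s) ⊑ (p;q) | s
    assert seq(p, par(q, s)) <= par(seq(p, q), s)

for p, q, p2, q2 in product(samples, repeat=4):
    # exchange: (p|q) ; (p'|q') ⊑ (p;p') | (q;q')
    assert seq(par(p, q), par(p2, q2)) <= par(seq(p, p2), seq(q, q2))
```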

Page 25: Algebra of Concurrent Programming

Part 2. More Program Control Structures

• Non-determinism, intersection• Iteration, recursion, fixed points• Subroutines, contracts, transactions• Basic commands

Page 26: Algebra of Concurrent Programming

Subject matter

• variables (p, q, r) stand for programs, designs, specifications,…

• they are all descriptions of what happens inside and around a computer that is executing a program.

• the differences between programs and specs are often defined in terms of their syntax.

Page 27: Algebra of Concurrent Programming

Specification syntax includes

• disjunction (or, ⊔) to express abstraction, or to keep options open
– ‘it may be painted green or blue’
• conjunction (and, ⊓) combines requirements
– ‘it must be cheaper than x and faster than y’

• negation (not) for safety and security
– ‘it must not explode’

• implication (contracts)
– ‘if the user observes the protocol, so will the system’

Page 28: Algebra of Concurrent Programming

Program syntax excludes

• disjunction
– non-deterministic programs are difficult to test

• conjunction– inefficient to find a computation satisfying both

• negation– incomputable

• implication– which side of contract?

Page 29: Algebra of Concurrent Programming

programs include

• sequential composition (;)• concurrent composition (|)• interrupts• iteration, recursion• contracts (declarations)• transactions• assignments, inputs, outputs, jumps,…

• So include these in our specifications!

Page 30: Algebra of Concurrent Programming

Bottom ⊥

• An unimplementable specification
– like the false predicate
• A program that has no execution
– the compiler stops it from running
• Define ⊥ as the least solution of _ ⊑ _
• Theorem: ⊥ ⊑ r
– ⊥ satisfies every spec,
– but cannot be run (Dijkstra’s miracle)

Page 31: Algebra of Concurrent Programming

Algebra of ⊥

• Law (⊥ is the zero of ;):
– ⊥ ; p = ⊥ = p ; ⊥

• Theorem: {p} ⊥ {q}
• a quarter of the law is provable from the theorem

Page 32: Algebra of Concurrent Programming

Top ⊤
• a vacuous specification,
– satisfied by anything,
– like the predicate true

• a program with an error
– for which the programmer is responsible
– e.g., subscript error, violation of contract…
• define ⊤ as the greatest solution of _ ⊑ _

Page 33: Algebra of Concurrent Programming

Algebra of ⊤
• Law: none
• Theorem: none
– you can’t prove a program with this error
– it might admit a virus!

• A debugging implementation may supply useful laws for ⊤

Page 34: Algebra of Concurrent Programming

Non-determinism (or): p ⊔ q
• describes all executions that either satisfy p or satisfy q
• The choice is not (yet) determined.
• It may be determined later
– in development of the design
– or in writing the program
– or by the compiler
– or even at run time

Page 35: Algebra of Concurrent Programming

lub (join): ⊔
• Define p⊔q as the least solution of p ⊑ _ & q ⊑ _
• Theorem
– p ⊑ r & q ⊑ r iff p⊔q ⊑ r

• Theorem
– ⊔ is associative, commutative, monotonic, idempotent and increasing
– it has unit ⊥ and zero ⊤
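In the set model ⊔ is union, ⊥ the empty set, and ⊤ the universal set, so each clause of the theorem can be checked directly. A Python sketch over a bounded universe (the length bound is my assumption, standing in for ⊤):

```python
from itertools import product

# Bounded universe: all strings over {a, b} of length <= 2, standing in for ⊤.
TOP = {"".join(w) for n in range(3) for w in product("ab", repeat=n)}
BOT = set()                          # ⊥ : the empty set

p, q, r = {"a"}, {"b", "ab"}, {"aa"}
assert (p | q) | r == p | (q | r)    # ⊔ associative
assert p | q == q | p                # ⊔ commutative
assert p | p == p                    # ⊔ idempotent
assert p <= p | q                    # ⊔ increasing
assert p | BOT == p                  # unit ⊥
assert p | TOP == TOP                # zero ⊤ (within the bounded universe)

# Characterisation of the lub: p ⊑ r & q ⊑ r iff p⊔q ⊑ r
for r2 in [{"a"}, {"a", "b", "ab"}, TOP]:
    assert (p <= r2 and q <= r2) == (p | q <= r2)
```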

Page 36: Algebra of Concurrent Programming

glb (meet): ⊓
• Define p⊓q as the greatest solution of _ ⊑ p & _ ⊑ q

Page 37: Algebra of Concurrent Programming

Distribution

• Law (; distributes through ⊔)
– p ; (q⊔q’) = p;q ⊔ p;q’
– (q⊔q’) ; p = q;p ⊔ q’;p

• Rule (non-determinism)
– {p} q {r} & {p} q’ {r} implies {p} q⊔q’ {r}
– i.e., to prove something of q⊔q’, prove the same thing of both q and q’

• a quarter of the law is interprovable with the rule
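Both halves of the law, and the rule, can again be instantiated in the finite-language model. A short Python sketch (helper names mine):

```python
def seq(p, q):
    """p ; q : lifted concatenation of languages (sets of strings)."""
    return {s + t for s in p for t in q}

p, q, q2 = {"a", "b"}, {"c"}, {"d", "e"}

# Law: ; distributes through ⊔ on both sides.
assert seq(p, q | q2) == seq(p, q) | seq(p, q2)
assert seq(q | q2, p) == seq(q, p) | seq(q2, p)

# Rule: {p} q {r} & {p} q' {r} implies {p} q⊔q' {r}.
r = {"ac", "bc", "ad", "ae", "bd", "be"}
assert seq(p, q) <= r and seq(p, q2) <= r     # both premises
assert seq(p, q | q2) <= r                    # conclusion
```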

Page 38: Algebra of Concurrent Programming

Conditional: p if b else p’
• Define p ⊰b⊱ p’ as
– (b.. ⊓ p) ⊔ (not(b).. ⊓ p’)
– where b.. describes all executions that begin in a state satisfying b.
• Theorem: p ⊰b⊱ p’ is associative, idempotent, distributive, and
– p ⊰b⊱ q = q ⊰not(b)⊱ p (skew symmetry)
– (p ⊰b⊱ p’) ⊰c⊱ (q ⊰b⊱ q’) = (p ⊰c⊱ q) ⊰b⊱ (p’ ⊰c⊱ q’) (exchange)

Page 39: Algebra of Concurrent Programming

Transaction

• Defined as (p ⊓ ..b) ⊔ (q ⊓ ..c)
– where ..b describes all executions that end satisfying single-state predicate b.
• Implementation:
– execute p first
– test the condition b afterwards
– terminate if b is true
– backtrack on failure of b
– and try alternative q with condition c.

Page 40: Algebra of Concurrent Programming

Transaction (realistic)

• Let r describe the non-failing executions of a transaction t.
– r is known when execution of t is complete.
– any successful execution of t is committed
– a single failed execution of t is undone,
– and q is done instead.

• Define: (t if r else q) = t, if t ⊑ r
= (t ⊓ r) ⊔ q, otherwise

Page 41: Algebra of Concurrent Programming

Contracts

• Let q be the body of a subroutine
• Let s be its specification
• Let (q .. s) assert that q meets s
• Programmer error (⊤) if not so
• Caller of the subroutine may assume that s describes all its calls
• Implementation may just execute q

Page 42: Algebra of Concurrent Programming

Least upper bound

• Let S be an arbitrary set of designs
• Define ⊔S as the least solution of ∀s∊S . s ⊑ _
– (∀s∊S . s ⊑ r) ⇒ ⊔S ⊑ r (for all r)

• everything is an upper bound of { }, so ⊔{ } = ⊥
– a case where ⊔S ∉ S

Page 43: Algebra of Concurrent Programming

similarly

• ⊓S is greatest lower bound of S• ⊓ { } = ⊤

Page 44: Algebra of Concurrent Programming

Subroutine with contract: q .. s

• Define (q..s) as the glb of the solutions of q ⊑ _ & _ ⊑ s

• Theorem: (q..s) = q, if q ⊑ s
= ⊤, otherwise

Page 45: Algebra of Concurrent Programming

Iteration (Kleene *)

• q* is the least solution of
– (ɛ ⊔ (q ; _)) ⊑ _

• q* =def ⊔{s | (ɛ ⊔ q;s) ⊑ s}
– ɛ ⊔ q;q* ⊑ q*
– ɛ ⊔ q;q’ ⊑ q’ implies q* ⊑ q’
– q* = ⊔{qⁿ | n ∊ Nat} (continuity)

• Rule (invariance):
– {p} q* {p} if {p} q {p}
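In the language model q* can be computed as the limit of the ascending chain from the least-fixed-point characterisation, truncated at a length bound to keep the set finite (the bound is my assumption):

```python
def seq(p, q):
    """p ; q : lifted concatenation."""
    return {s + t for s in p for t in q}

def star(q, maxlen):
    """Least solution of ɛ ⊔ q;X ⊑ X, restricted to strings of length <= maxlen."""
    x = set()
    while True:
        step = {""} | {s for s in seq(q, x) if len(s) <= maxlen}
        if step == x:
            return x
        x = step

qs = star({"ab"}, 6)
assert qs == {"", "ab", "abab", "ababab"}      # ⊔{qⁿ | n ∊ Nat}, truncated

# ɛ ⊔ q;q* ⊑ q*   (up to the length bound)
assert {s for s in {""} | seq({"ab"}, qs) if len(s) <= 6} <= qs
```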

Page 46: Algebra of Concurrent Programming

Infinite replication

• !p is the greatest solution of _ ⊑ p|_– as in the pi calculus

• all executions of !p are infinite– or possibly empty

Page 47: Algebra of Concurrent Programming

Recursion

• Let F(_) be a monotonic function between programs.

• Theorem: all functions defined by monotonic operators are monotonic.

• μF is the strongest solution of F(_) ⊑ _
• νF is the weakest solution of _ ⊑ F(_)
• Theorem (Knaster-Tarski): these solutions exist.

Page 48: Algebra of Concurrent Programming

Basic statements/assertions

• skip ɛ
• bottom ⊥
• top ⊤
• assignment: x := e(x)
• assertion: assert b
• assumption: assume b
• finally ..b
• initially b..

Page 49: Algebra of Concurrent Programming

more

• assign thru pointer: [a] := e
• output: c!e
• input: c?x
• points to: a |-> e
– a |-> _ =def exists v . a |-> v

• throw, catch• alloc, dispose

Page 50: Algebra of Concurrent Programming

Laws(examples)

• assume b =def b.. ⊓ ɛ
• assert b =def (b.. ⊓ ɛ) ⊔ not(b)..
• x := e(x) ; x := f(x) = x := f(e(x))
– in a sequential language

Page 51: Algebra of Concurrent Programming

more

• (p|-> _ ); [p] := e ⊑ p|-> e– in separation logic

• c!e | c?x = x := e– in CSP but not in CCS or Pi

• throw x ; (catch x; p) = p

Page 52: Algebra of Concurrent Programming

Part 3Unifying Semantic Theories

• Six familiar semantic definition styles. • Their derivation from the algebra• and vice versa.

Page 53: Algebra of Concurrent Programming

(figure: operational rules and deduction rules linked through algebraic laws)

Page 54: Algebra of Concurrent Programming

Hoare Triple

• a method for program verification
• {p} q {r} ≝ p;q ⊑ r
– one way of achieving r is by first doing p and then doing q

• Theorem (sequential composition):
– {p} q {s} & {s} q’ {r} implies {p} q;q’ {r}
– proved by associativity

Page 55: Algebra of Concurrent Programming

Plotkin reduction

• a method for program execution
• <p, q> -> r =def p;q ⊒ r
– if p describes the state before execution of q, then r describes a possible final state, e.g.,
– <..(x2 = 18), x := x+1> -> ..(x = 37)

• Theorem (sequential composition):
– <p, q> -> s & <s, q’> -> r implies <p, q;q’> -> r

Page 56: Algebra of Concurrent Programming

Milner transition

• a method of execution for processes
• p –q-> r ≝ p ⊒ q;r
– one of the ways of executing p is by first executing q and then executing r.
– e.g., (x := x+3) –(x := x+1)-> (x := x+2)

• Theorem (sequential composition):
– p –q-> s & s –q’-> r ⇒ p –(q;q’)-> r (big-step rule for ;)

Page 57: Algebra of Concurrent Programming

partial correctness

• describes what may happen
• p [q] r =def p ⊑ q;r
– if p describes a state before execution of q, then execution of q may achieve r
• Theorem (sequential composition):
– p [q] s & s [q’] r implies p [q;q’] r
• useful if r describes error states, and p describes initial states from which a test execution of q may end in error.

Page 58: Algebra of Concurrent Programming

Summary

• {p} q {r} =def p;q ⊑ r
– Hoare triple

• <p,q> -> r =def p;q ⊒ r
– Plotkin reduction

• p –q-> r =def p ⊒ q;r
– Milner transition

• p [q] r =def p ⊑ q;r
– test generation

Page 59: Algebra of Concurrent Programming

Sequential composition

• Law: ; is associative
• Theorem: the sequence rule is valid for all four triples.

• the Law is provable from the conjunction of all of them

Page 60: Algebra of Concurrent Programming

Skip

• Law: p ; ɛ = p = ɛ ; p

• Theorems: {p} ɛ {p}    p [ɛ] p

p –ɛ-> p    <p, ɛ> -> p

• the Law follows from the conjunction of all four theorems

Page 61: Algebra of Concurrent Programming

Left distribution of ; through ⊔
• Law: p;(q ⊔ q’) = p;q ⊔ p;q’
• Theorems:
– {p} (q⊔q’) {r} if {p} q {r} and {p} q’ {r}
– <p, q⊔q’> -> r if <p,q> -> r or <p,q’> -> r
– p [q⊔q’] r if p [q] r or p [q’] r
– p –(q⊔q’)-> r if p –q-> r and p –q’-> r (not used in CCS)

• the law is provable from either ‘and’ rule together with either ‘or’ rule.

Page 62: Algebra of Concurrent Programming

locality and frame

• left locality: (s|p) ; q ⊑ s | (p;q)
• Hoare frame: {p} q {r} ⇒ {s|p} q {s|r}

• right locality: p ; (q|s) ⊑ (p;q) | s
• Milner frame: p –q-> r ⇒ (p|s) –q-> (r|s)

• Full locality requires both frame rules

Page 63: Algebra of Concurrent Programming

Separation logic

• Exchange law:
– (p|p’) ; (q|q’) ⊑ (p;q) | (p’;q’)
• Theorems:
– {p} q {r} & {p’} q’ {r’} ⇒ {p|p’} q|q’ {r|r’}
– p –q-> r & p’ –q’-> r’ ⇒ p|p’ –(q|q’)-> r|r’

• the law is provable from either theorem
• For the other two triples, the rules are equivalent to the converse exchange law.

Page 64: Algebra of Concurrent Programming

usual restrictions on triples

• in {p} q {r} , p and r are of form ..b, ..c

• in p [q] r, p and r are of the form b.., c..
• in <p,q> -> r, p and r are of the form ..b, ..c
• in p –q-> r, p and r are programs
• in p –q-> r (small step), q is atomic
• (in all cases, q is a program)

• all laws are valid without these restrictions

Page 65: Algebra of Concurrent Programming

Weakest precondition (-;)

• (q -; r) =def the weakest solution of ( _ ; q ⊑ r)
– the same as Dijkstra’s wp(q, r)
– for backward development of programs

Page 66: Algebra of Concurrent Programming

Weakest precondition (-;)

• Law (-; adjoint to ;)
– p ⊑ q -; r iff p;q ⊑ r (Galois)

• Theorem
– (q -; r) ; q ⊑ r
– p ⊑ q -; (p ; q)

• the Law is provable from the theorems
– cf. (r div q) × q ≤ r
– r ≤ (r × q) div q
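Over a bounded universe of candidate preconditions, q -; r can be computed pointwise and the Galois connection tested exhaustively. A Python sketch; the universe bound and the names `wp` and `seq` are my assumptions:

```python
from itertools import product

def seq(p, q):
    """p ; q : lifted concatenation."""
    return {s + t for s in p for t in q}

# Bounded universe of candidate preconditions: strings over {a,b}, length <= 2.
U = {"".join(w) for n in range(3) for w in product("ab", repeat=n)}

def wp(q, r):
    """q -; r : the weakest (largest) p within U with p;q ⊑ r."""
    return {s for s in U if all(s + t in r for t in q)}

q, r = {"b"}, {"ab", "bb", "aab"}
w = wp(q, r)
assert w == {"a", "b", "aa"}

# Galois: p ⊑ q -; r  iff  p;q ⊑ r   (for p drawn from the universe)
for p in [{"a"}, {"a", "b"}, {"ab"}, set()]:
    assert (p <= w) == (seq(p, q) <= r)

# Theorems: (q -; r);q ⊑ r   and   p ⊑ q -; (p;q)
assert seq(w, q) <= r
p = {"a", "bb"}
assert p <= wp(q, seq(p, q))
```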

Page 67: Algebra of Concurrent Programming

Theorems

• q’ ⊑ q & r ⊑ r’ => q-;r ⊑ q’-;r’• (q;q’)-;r ⊑ q-;(q’-;r)• q-;r ⊑ (q;s) -; (r;s)

Page 68: Algebra of Concurrent Programming

Specification statement (;-)

• (p ;- r) =def the weakest solution of (p ; _ ⊑ r)
– Back/Morgan’s specification statement
– for stepwise refinement of designs
– same as p ⇝ r in RGSep
– same as (requires p; ensures r) in VCC

Page 69: Algebra of Concurrent Programming

Law of consequence

Page 70: Algebra of Concurrent Programming

Frame laws

Page 71: Algebra of Concurrent Programming

Part 4Denotational Models

A model is a mathematical structure that satisfies the axioms of an algebra, and realistically describes a useful application, for example, program execution.

Page 72: Algebra of Concurrent Programming

Models

(figure: denotational models linked to algebraic laws)

Page 73: Algebra of Concurrent Programming

Some Standard Models:

• Boolean algebra: ({0,1}, ≤, ∨, ∧, (1 − _))

• predicate algebra (Frege, Heyting):
– (ℙS, ⊢, ∪, ∩, (S − _), ⇒, ∃, ∀)

• regular expressions (Kleene):
– (ℙA*, ⊆, ∪, ; , ɛ, {<a>}, | )

• binary relations (Tarski):
– (ℙ(S×S), ⊆, ∪, ∩, ; , Id, not(_), converse(_))

• algebra of designs is a superset of these

Page 74: Algebra of Concurrent Programming

Model: (EV, EX, PR)

• EV is an underlying set of events (x, y, ..) that can occur in any execution of any program

• EX are executions (e, f,…), modelled as sets of events

• PR are designs (p, q, r,…), modelled as sets of executions.

Page 75: Algebra of Concurrent Programming

Set concepts

• ⊑ is ⊆ (set inclusion)
• ⊔ is ∪ (set union)
• ⊓ is ∩ (intersection of sets)
• ⊥ is { } (the empty set)
• ⊤ is EX (the universal set of executions)

Page 76: Algebra of Concurrent Programming

With (|)

• p | q = {e ∪ f | e ∊ p & f ∊ q & e ∩ f = { }}
– each execution of p|q is the disjoint union of an execution of p and an execution of q
– p|q contains all such disjoint unions
• | generalises many binary operators

Page 77: Algebra of Concurrent Programming

Introducing time

• TIM is a set of times for events
– partially ordered by ≤
• Let when : EV -> TIM
– map each event to its time of occurrence.

Page 78: Algebra of Concurrent Programming

Definition of <

• x < y =def not(when(y) ≤ when(x))
– x < y & y < x means that x and y occur ‘in true concurrency’.
• e < f =def ∀x,y . x∊e & y∊f ⇒ x < y
– no event of f occurs before an event of e
– hence e < f implies e ∩ f = { }

• If ≤ is a total order,
– there is no concurrency,
– executions are time-ordered strings

Page 79: Algebra of Concurrent Programming

Sequential composition (then)

• p ; q = {e ∪ f | e∊p & f∊q & e < f}

• special case: if ≤ is a total order,
– e < f means that e ∪ f is the concatenation (e⋅f) of strings
– ; is the composition of regular expressions

Page 80: Algebra of Concurrent Programming

Theorems

• These definitions of ; and | satisfy the locality and exchange laws.

• (s|p) ; q ⊑ s | (p;q)
• (p|q) ; (p’|q’) ⊑ (p;p’) | (q;q’)
– Proof: the lhs describes fewer interleavings than the rhs.

• special case: regular expressions satisfy all our laws for ⊑ , ⊔ , ; , and |
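The event-based definitions can be animated directly: executions as frozensets of (name, time) events, with totally ordered integer times. A Python sketch (all names and timestamps are illustrative), checking left locality and exchange:

```python
def before(e, f):
    """e < f : every event of e occurs strictly before every event of f."""
    return all(tx < ty for (_, tx) in e for (_, ty) in f)

def seq(p, q):
    """p ; q = { e ∪ f | e ∊ p, f ∊ q, e < f }."""
    return {e | f for e in p for f in q if before(e, f)}

def par(p, q):
    """p | q = { e ∪ f | e ∊ p, f ∊ q, e ∩ f = {} } : disjoint unions."""
    return {e | f for e in p for f in q if not (e & f)}

def ev(name, t):
    """A one-event execution at time t."""
    return frozenset({(name, t)})

s = {ev("s", 1), ev("s", 3)}
p, q = {ev("p", 2)}, {ev("q", 4)}
# left locality: (s|p) ; q ⊑ s | (p;q)
assert seq(par(s, p), q) <= par(s, seq(p, q))

p2, q2 = {ev("p'", 5)}, {ev("q'", 6)}
# exchange: (p|q) ; (p'|q') ⊑ (p;p') | (q;q')
assert seq(par(p, q), par(p2, q2)) <= par(seq(p, p2), seq(q, q2))
```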

Page 81: Algebra of Concurrent Programming

Disjoint concurrency (||)

• p||q =def (p ; q) ⊓ (q ; p)
– all events of p concurrent with all of q.
– no interaction is possible between them.

• Theorems: (p||q) ; r ⊒ p || (q ; r)
(p||q) ; (p’||q’) ⊒ (p;p’) || (q;q’)
– Proof: the rhs has more disjointness constraints than the lhs.
– the wrong way round!

• So make the programmer responsible for disjointness, using interfaces!

Page 82: Algebra of Concurrent Programming

Interfaces

• Let q be the body of a subroutine
• Let s be its specification
• Let (q .. s) assert that q is correct
• Caller may assume s
• Implementer may execute q

Page 83: Algebra of Concurrent Programming

Solution

• p*q =def (p|q .. p||q)
= p|q, if p|q ⊑ p||q
= ⊤, otherwise

– the programmer is responsible for absence of interaction between p and q.

• Theorem: ; and * satisfy locality and exchange.
– Proof: in cases where lhs ≠ rhs, rhs = ⊤

Page 84: Algebra of Concurrent Programming

Problem

• ; is almost useless in the presence of arbitrary interleaving (interference).

• It is hard to prove the disjointness of p||q
• We need a more complex model
– which constrains the places at which a program may make changes.

Page 85: Algebra of Concurrent Programming

Separation

• PL is the set of places at which an event can occur

• each place is ‘owned’ by one thread,– no other thread can act there.

• Let where:EV -> PL map each event to its place of occurrence.

• where(e) =def {where(x) | x ∊ e }

Page 86: Algebra of Concurrent Programming

Separation principle

• events at different places are concurrent

• events at the same place are totally ordered in time

• ∀x,y ∊ EV . where(x) = where(y) iff x≤y or y≤x

Page 87: Algebra of Concurrent Programming

Picture

(figure: events plotted against axes of time and space)

Page 88: Algebra of Concurrent Programming

Theorem

• p || q = {e ∪ f | e ∊ p & f ∊ q & where(e) ∩ where(f) = { }}
• proved from the separation principle

Page 89: Algebra of Concurrent Programming

Convexity Principle

• Each execution contains every event that occurs between any of its events.

• ∀e ∊ EX, y ∊ EV . ∀x, z ∊ e .
when(x) ≤ when(y) ≤ when(z) ⇒ y ∊ e
– no event from elsewhere can interfere between any two events of an execution

Page 90: Algebra of Concurrent Programming

A convex execution of p;q

(figure: p followed by q as contiguous regions in space and time)

Page 91: Algebra of Concurrent Programming

A non-convex ‘execution’ of p;q

(figure: events of q interleaved in time with events of p, so the region is not convex)

Page 92: Algebra of Concurrent Programming

Conclusion:in Praise of Algebra

• Reusable• Modular• Incremental• Unifying

• Discriminative• Computational• Comprehensible• Abstract

• Beautiful!

Page 93: Algebra of Concurrent Programming

Algebra likes pairs

• Algebra chooses as primitives
– operators with two operands: +, ×
– predicates with two places: =, ≤
– laws with two operators: ∧ with ∨, + with ×
– algebras with two components: rings

Page 94: Algebra of Concurrent Programming

Tuples

• Tuples are defined in terms of pairs.– Hoare triples– Plotkin triples– Jones quintuples – seventeentuples …

Page 95: Algebra of Concurrent Programming

Semantic Links

(figure: deductions, transitions and denotations, each linked to the algebra)

Page 96: Algebra of Concurrent Programming

Increments

(figure: algebra)

Page 97: Algebra of Concurrent Programming

Filling the gaps

(figure: algebra)