PARTIALLY SYNCHRONOUS ALGORITHMS


Page 1: PARTIALLY SYNCHRONOUS ALGORITHMS

PARTIALLY SYNCHRONOUS ALGORITHMS

PRESENTED BY: BINAMRA DUTTA

Page 2: PARTIALLY SYNCHRONOUS ALGORITHMS

The deepest distinctions among models are based on their timing assumptions, namely:

Synchronous Models

Lock-step synchronization, with executions proceeding in synchronous steps.

It is often impossible or inefficient to implement the synchronous model in distributed systems.

Asynchronous Models

Separate components take steps in arbitrary order and at arbitrary relative speeds.

General and portable: an algorithm for this model runs correctly in networks with arbitrary relative speeds.

Partially Synchronous (timing based) Model

Between the two extremes

Components have some information about time, though not exact: there are some restrictions on the relative timing of events, but execution is not completely lock-step as in the synchronous model.

More realistic than either extreme, since real systems typically do use some timing information.

Page 3: PARTIALLY SYNCHRONOUS ALGORITHMS

MMT MODEL

Simple variant of the I/O automaton model.

Obtained by replacing the fairness conditions with lower and upper bounds on time.

Correctness depends crucially on the restrictions that these bounds impose on executions.

Especially good for modeling computer systems at a low level (system components).

Upper bound: starting from any initial index for a task C, if time ever passes beyond the specified upper bound for C, then in the interim either an action in C occurs or C becomes disabled. (0 < upper(C) ≤ ∞)

Lower bound: starting from any initial index for C, no action in C can occur before the specified lower bound. (0 ≤ lower(C) < ∞ and lower(C) ≤ upper(C))

Admissibility – time advances normally and processing does not stop if the automaton is scheduled to perform some more work.

If the timed execution is an infinite sequence, then the times of the actions approach ∞.

If the timed execution is a finite sequence, then in the final state, for every enabled task C, upper(C) = ∞.

Various notations are used: texecs(B), atexecs(B), ttraces(B), attraces(B).

Page 4: PARTIALLY SYNCHRONOUS ALGORITHMS

Channel MMT automaton

Di,j = (Ci,j, b), based on the universal reliable FIFO send/receive channel automaton Ci,j.

upper bound d (a fixed positive real) on the delivery time for the oldest message in the channel

no lower bound

Examples on pg. 739.

Timeout MMT automaton

MMT automaton P2 awaits receipt of a message from P1; if none is received within a certain amount of time, it performs a timeout action.

time is measured by counting a fixed number k ≥ 1 of its own steps, which observe the lower and upper bounds l1 and l2.

the decrement operation has bounds l1 and l2, and the timeout has bounds 0 and l.

the timeout is performed at most time l after the count reaches 0; hence the timeout occurs within the interval [k·l1, k·l2 + l].

if a timeout occurs, then no receive occurred previously.

in every admissible timed execution of P2, if no receive occurs, then a timeout does in fact occur (processing does not stop while the automaton still has more work to perform); see the sketch below.
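Below is a minimal Python sketch (not from the slides; the parameter values are hypothetical) of one admissible timed execution of P2 without receives: k decrement steps, each respecting the bounds [l1, l2], followed by a timeout within time l. The check confirms that the timeout time always falls in [k·l1, k·l2 + l].

```python
import random

def timeout_execution(k, l1, l2, l, seed=None):
    """One admissible timed execution of the timeout process (no receives):
    k decrement steps, each taking between l1 and l2 time units, then a
    timeout at most l time units after the count reaches 0."""
    rng = random.Random(seed)
    now = 0.0
    for _ in range(k):                 # count down from k to 0
        now += rng.uniform(l1, l2)     # each decrement respects [l1, l2]
    now += rng.uniform(0.0, l)         # timeout occurs within time l
    return now

# The timeout time always falls within [k*l1, k*l2 + l] (hypothetical values).
k, l1, l2, l = 5, 1.0, 2.0, 0.5
for trial in range(1000):
    t = timeout_execution(k, l1, l2, l, seed=trial)
    assert k * l1 <= t <= k * l2 + l
```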

Page 5: PARTIALLY SYNCHRONOUS ALGORITHMS

Two-Task Race

the main task increments a counter count as long as a Boolean flag is false, and the int task sets flag := true.

the main task then decrements count until it reaches 0, then reports completion.

the main task has bounds l1 and l2; the int task has an upper bound of l.

in every admissible timed execution of Race, a report is eventually generated.

the report must occur by time l + l2 + L·l, where L = l2/l1 (L is a measure of the timing uncertainty); a numerical check appears in the sketch below.
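The sketch below (Python, not from the slides, with hypothetical parameter values) simulates admissible timed executions of Race and checks the bound l + l2 + L·l on the report time.

```python
import random

def race_execution(l1, l2, l, seed=None):
    """One admissible timed execution of the Race automaton: the int task
    sets flag within time l; the main task takes a step every l1..l2 time
    units, incrementing while flag is false, then decrementing to 0, then
    reporting."""
    rng = random.Random(seed)
    flag_time = rng.uniform(0.0, l)    # int task fires by its upper bound l
    now, count = 0.0, 0
    while True:
        now += rng.uniform(l1, l2)     # next main-task step
        if now <= flag_time:           # flag not yet set: increment
            count += 1
        elif count > 0:                # flag set: decrement toward 0
            count -= 1
        else:                          # count == 0 and flag set: report
            return now

# Report must occur by time l + l2 + L*l, where L = l2/l1 (hypothetical values).
l1, l2, l = 1.0, 2.0, 5.0
bound = l + l2 + (l2 / l1) * l
for trial in range(1000):
    assert race_execution(l1, l2, l, seed=trial) <= bound + 1e-9
```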

Page 6: PARTIALLY SYNCHRONOUS ALGORITHMS

OPERATIONS

Composition

actions having the same name in different automata are composed.

defined only for a finite collection of MMT automata, since an MMT automaton is allowed to have only a finite number of tasks.

The composition (A, b) = Π_{i∈I} (Ai, bi) of a finite compatible collection of MMT automata {(Ai, bi)}_{i∈I} is defined as follows:

- A = Π_{i∈I} Ai, the composition of the underlying I/O automata Ai of all the components.

- For each task C of A, b's lower and upper bounds for C are the same as those of bi, where Ai is the unique component I/O automaton having task C. (A small boundmap sketch follows.)
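As an illustration, here is a small Python sketch (an assumed dictionary representation, not from the slides) of how the composite boundmap inherits each task's bounds from the unique component automaton that owns that task.

```python
def compose_boundmaps(boundmaps):
    """Compose boundmaps of compatible MMT components: every task belongs
    to exactly one component, so the composite simply inherits its bounds."""
    composite = {}
    for b in boundmaps:
        for task, bounds in b.items():
            assert task not in composite, "each task must belong to a unique component"
            composite[task] = bounds
    return composite

# Hypothetical example: a timeout process composed with a channel.
b_process = {"decrement": (1.0, 2.0), "timeout": (0.0, 0.5)}
b_channel = {"deliver": (0.0, 3.0)}   # upper bound d = 3, no lower bound
print(compose_boundmaps([b_process, b_channel]))
```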

Page 7: PARTIALLY SYNCHRONOUS ALGORITHMS

Theorem 23.1 – An admissible timed execution or admissible timed trace of a composition projects to yield admissible timed executions or admissible timed traces of the component automata. (ref. pg. 743)

Theorem 23.2 – Under certain conditions admissible timed executions of component MMT automata can be pasted together to form an admissible timed execution of the composition.

Theorem 23.3 - Under certain conditions admissible timed traces of component MMT automata can be pasted together to form an admissible timed trace of the composition.

HIDING

“hides” output actions of an MMT automaton by reclassifying them as internal ones

this prevents them from being used for further communication and means that they are no longer included in traces.

Page 8: PARTIALLY SYNCHRONOUS ALGORITHMS

General Timed Automata

These automata have no external timing restrictions; all timing constraints are explicitly encoded into their states and transitions.

MMT a special case of GTA.

Definitions

ν(t), the time-passage action, denotes the passage of time by the amount t.

a timed signature S is a quadruple consisting of the input actions in(S), the output actions out(S), the internal actions int(S), and the time-passage actions.

derived classes of actions: visible, external, discrete, locally controlled; the set of all actions of S is denoted acts(S).

Components

sig(A), a timed signature.

states(A), a set of states.

start(A), a nonempty subset of states(A), known as the start states.

trans(A), a state-transition relation, where trans(A) ⊆ states(A) × acts(sig(A)) × states(A). (There is no tasks(A) component as in MMT or I/O automata.)

Page 9: PARTIALLY SYNCHRONOUS ALGORITHMS

One timed execution fragment α is a time-passage refinement of another timed execution fragment α′, provided that the two are identical except that some of the time-passage steps of α′ are replaced in α by finite sequences of time-passage steps with the same initial and final states and the same total amount of time passage.

General timed automaton corresponding closely to MMT (ref. pg.746)

encodes the timing restriction of the earlier MMT automaton (the upper bound d on the time to deliver the oldest message in the channel) into its states and transitions.

variable now for keeping explicit track of the current time.

last, for keeping track of the latest time at which the next message delivery can occur.

when a send occurs, if there is no previously scheduled message delivery, last is set to now + d

when a receive occurs, last is reset to now + d if the queue is still nonempty; if the queue has been emptied, last is set to ∞. (A small sketch of this encoding follows.)
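A minimal Python sketch of this encoding follows (an assumed interface, not the book's code): now and last are kept in the state, and time passage is refused whenever it would go past the delivery deadline.

```python
from math import inf

class TimedChannel:
    """GTA-style encoding of the MMT channel with upper bound d on
    delivering the oldest message, tracked via explicit now/last."""
    def __init__(self, d):
        self.d = d
        self.now = 0.0
        self.last = inf            # no delivery currently scheduled
        self.queue = []

    def send(self, m):
        if not self.queue:         # first pending message: set its deadline
            self.last = self.now + self.d
        self.queue.append(m)

    def receive(self):
        m = self.queue.pop(0)
        # reset the deadline for the next oldest message, or clear it
        self.last = self.now + self.d if self.queue else inf
        return m

    def advance(self, t):
        # time-passage action nu(t): not allowed past the deadline
        assert self.now + t <= self.last, "would violate the delivery deadline"
        self.now += t

ch = TimedChannel(d=3.0)
ch.send("m1")
ch.advance(2.0)
print(ch.receive(), ch.last)       # 'm1' inf
```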

A non-MMT general timed automaton

Here the time bound d is required for every message in the channel, not only the oldest.

message delivery deadlines are stored along with the messages on the queue instead of in separate last components. ( not physically implementable )

Page 10: PARTIALLY SYNCHRONOUS ALGORITHMS

A general timed automaton with no admissible timed executions

“process automaton” A sends the same message m infinitely many times.

there are no admissible timed executions, since the successive sending times get closer and closer together, approaching a time limit of 1 (for an infinite sequence to be admissible, the times of its actions must approach ∞); see the illustration below.
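A tiny illustration (with an assumed Zeno schedule, not from the book): if the n-th send occurs at time 1 − 2^(−n), the action times keep increasing but stay below 1, so they never approach ∞ and no timed execution is admissible.

```python
# Hypothetical schedule for the process automaton: n-th send at time 1 - 2**-n.
send_times = [1 - 2 ** -n for n in range(1, 11)]
print(send_times)                       # 0.5, 0.75, 0.875, ... approaching 1
assert all(t < 1 for t in send_times)   # bounded by 1, never approach infinity
```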

Transforming MMT Automata into General Timed Automata

Involves building time deadlines into the state and not allowing time to pass beyond those deadlines while they are still in force.

now, for explicitly keeping track of the current time (an action in task C may occur only when now ≥ first(C), as condition 3(a) below states).

first(C) and last(C) components denote the earliest and latest times at which the next action in task C is allowed to occur; they are updated according to the lower and upper bounds specified by the boundmap b.

time-passage actions ν(t) cannot pass beyond any of the last(C) values.

Page 11: PARTIALLY SYNCHRONOUS ALGORITHMS

The transitions are

If π ∈ acts(A), then (s, π, s′) ∈ trans(A′) exactly if all the following conditions hold:

1. (s.basic, π, s′.basic) ∈ trans(A)

2. s′.now = s.now

3. For each C ∈ tasks(A):

(a) If π ∈ C, then s.first(C) ≤ s.now.

(b) If C is enabled in both s.basic and s′.basic and π ∉ C, then s.first(C) = s′.first(C) and s.last(C) = s′.last(C).

(c) If C is enabled in s′.basic and either C is not enabled in s.basic or π ∈ C, then s′.first(C) = s.now + lower(C) and s′.last(C) = s.now + upper(C).

(d) If C is not enabled in s′.basic, then s′.first(C) = 0 and s′.last(C) = ∞.

If π = ν(t), then (s, π, s′) ∈ trans(A′) exactly if all the following conditions hold:

1. s′.basic = s.basic

2. s′.now = s.now + t

3. For each C ∈ tasks(A):

Page 12: PARTIALLY SYNCHRONOUS ALGORITHMS

(a) s′.now ≤ s.last(C)

(b) s′.first(C) = s.first(C) and s′.last(C) = s.last(C).

(A code sketch of this construction follows.)
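The rules above can be made concrete with a short Python sketch (not from the book; the automaton object with tasks, start_state(), enabled(state, C), and apply(state, action) is an assumed interface) that maintains now, first(C), and last(C) exactly as the discrete and time-passage transition rules prescribe.

```python
from math import inf

class GenMMT:
    """Sketch of gen(A, b): wrap an untimed automaton A with tasks and a
    boundmap b, building now/first/last deadlines into the state."""
    def __init__(self, automaton, boundmap):
        self.A, self.b = automaton, boundmap
        self.now = 0.0
        self.basic = automaton.start_state()
        self.first, self.last = {}, {}
        for C in automaton.tasks:
            self._reset(C, enabled=automaton.enabled(self.basic, C))

    def _reset(self, C, enabled):
        lo, hi = self.b[C]
        self.first[C] = self.now + lo if enabled else 0.0
        self.last[C] = self.now + hi if enabled else inf

    def step(self, action, task):
        # rule 3(a): an action in task C may occur only when now >= first(C)
        assert self.first[task] <= self.now
        old = self.basic
        self.basic = self.A.apply(old, action)        # rule 1; rule 2: now unchanged
        for C in self.A.tasks:
            was = self.A.enabled(old, C)
            is_now = self.A.enabled(self.basic, C)
            if is_now and (not was or C == task):     # rule 3(c): newly scheduled
                self._reset(C, enabled=True)
            elif not is_now:                          # rule 3(d): disabled
                self._reset(C, enabled=False)
            # rule 3(b): otherwise first(C) and last(C) are unchanged

    def advance(self, t):
        # time passage nu(t): may not go past any last(C) deadline (rule (a));
        # first(C) and last(C) are unchanged (rule (b))
        assert all(self.now + t <= self.last[C] for C in self.A.tasks)
        self.now += t
```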

Theorem – If (A, b) is any MMT automaton, then gen(A, b) is a general timed automaton. Moreover, attraces(A, b) = attraces(gen(A, b)).

Lemma – The following hold in any reachable state of gen(A, b), for any task C of A:

1. now ≤ last(C)

2. If C is enabled, then last(C) ≤ now + upper(C).

3. first(C) ≤ now + lower(C)

4. first(C) ≤ last(C)

Page 13: PARTIALLY SYNCHRONOUS ALGORITHMS

Operations

Composition: an operation for GTAs, generalizing the composition operation already defined for MMT automata. A finite collection {Si}_{i∈I} of timed signatures is defined to be compatible if for all i, j ∈ I with i ≠ j, we have:

1. int(Si) ∩ acts(Sj) = ∅

2. out(Si) ∩ out(Sj) = ∅

A collection of GTAs is compatible if their timed signatures are compatible.

The composition S = Π_{i∈I} Si of a finite compatible collection of timed signatures {Si}_{i∈I} is defined to be the timed signature with:

1. out(S) = ∪_{i∈I} out(Si)

2. int(S) = ∪_{i∈I} int(Si)

3. in(S) = ∪_{i∈I} in(Si) − ∪_{i∈I} out(Si)

Page 14: PARTIALLY SYNCHRONOUS ALGORITHMS

The composition A = Π_{i∈I} Ai of a finite compatible collection of GTAs is defined as follows:

1. sig(A) = Π_{i∈I} sig(Ai)

2. states(A) = Π_{i∈I} states(Ai)

3. start(A) = Π_{i∈I} start(Ai)

4. trans(A) is the set of triples (s, π, s′) such that for all i ∈ I, if π ∈ acts(Ai), then (s_i, π, s′_i) ∈ trans(Ai); otherwise s_i = s′_i.

Theorem 23.6 – The composition of a compatible collection of general timed automata is a general timed automaton.

Composition versus gen: For a given compatible collection of MMT automata, it turns out that it does not matter whether we compose first and then apply the gen transformation to the composition, or first apply the gen transformation to the components and then compose. The resulting GTAs are the same, up to isomorphism (of the reachable portions of the machines)

Page 15: PARTIALLY SYNCHRONOUS ALGORITHMS

Properties and Proof Methods

The correctness and performance of timing-based algorithms depend on the timing assumptions. Unlike in the asynchronous setting, small changes in the timing assumptions can result in drastic changes in the behavior of timing-based algorithms. Two important proof techniques for timing-based algorithms are the method of invariant assertions, timed trace properties, and the method of simulation relations.

Invariant Assertions

An invariant assertion for a general timed automaton A is a property that is true in all reachable states of A. The difference from invariant assertions for asynchronous systems is that in the asynchronous setting the state typically consists of ordinary data, such as the values of local and shared variables and the sequences of messages in transit, whereas in the timed setting the state also contains timing information, such as the current time and the scheduled deadlines for future events. The proof method for invariant assertions is still induction.

Page 16: PARTIALLY SYNCHRONOUS ALGORITHMS

Example 23.3.1: Invariant for the timeout system

Consider the timeout system A1 of Example 23.2.4, under the assumption that k·l1 > l + d. The three assertions below are used to prove that the system only performs a timeout in the case that the contained process P1 is actually dead.

Assertion 23.3.1 In any reachable state of A1, if status1 = alive, then count2 > 0. This assertion can be proved by induction, together with some auxiliary assertions.

Assertion 23.3.2 In any reachable state of A1, if status2 = done, then count2 = 0. This is a strengthened version of Assertion 23.3.1 and can also be proved by induction. It is worth noting that this assertion involves statements about the first and last time components of the state.

Assertion 23.3.3 In any reachable state of A1, if status1 = alive, then the following conditions are true:

1. count2 > 0. This is nothing but a restatement of Assertion 23.3.1.

2. Either last(send) + d < first(dec) + (count2 − 1)·l1, or queue is nonempty, or status2 = disabled. This condition says that either a message is scheduled to be sent in sufficient time to arrive before count2 reaches 0, or else a message is already in transit, or else one has already arrived.

Page 17: PARTIALLY SYNCHRONOUS ALGORITHMS

3. If queue is nonempty, then either last(rec) < first(dec) + (count2 − 1)·l1 or status2 = disabled. This says that if a message is in transit, then either some message will arrive before count2 reaches 0 or else one has already arrived.

Timed Trace Properties

The properties of timed systems can be formulated as properties of their admissible timed traces. A timed trace property P is defined to consist of the following:

1. sig(P), a timed signature containing no internal actions.

2. ttraces(P), a set of sequences of (action, time) pairs; the time components in each sequence must be monotone nondecreasing, and if the sequence is infinite, they must be unbounded.

We interpret the statement that a GTA A satisfies a timed trace property P to mean that in(A) = in(P), out(A) = out(P), and attraces(A) ⊆ ttraces(P). An example of a timed trace property is given on page 760, Example 23.3.2. (A small membership-check sketch follows.)
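A minimal Python sketch (with an assumed representation of traces and properties, not from the book) of what it means to check a single observed admissible timed trace against a timed trace property: matching external signature, well-formed times, and membership in ttraces(P).

```python
def is_timed_trace(seq):
    """Times in a timed trace must be monotone nondecreasing (and unbounded
    if the sequence is infinite; only the finite case is checked here)."""
    times = [t for _, t in seq]
    return all(t1 <= t2 for t1, t2 in zip(times, times[1:]))

def satisfies(trace, ext_sig_a, ext_sig_p, member):
    """A satisfies P: matching external signature and attraces(A) contained
    in ttraces(P); here one observed trace is checked against `member`."""
    return ext_sig_a == ext_sig_p and is_timed_trace(trace) and member(trace)

# Hypothetical property: "a timeout occurs no later than time 10.5".
trace = [("timeout", 10.0)]
prop = lambda tr: any(a == "timeout" and t <= 10.5 for a, t in tr)
print(satisfies(trace, {"timeout"}, {"timeout"}, prop))   # True
```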

Page 18: PARTIALLY SYNCHRONOUS ALGORITHMS

Simulations

This method can be used for reasoning about timing-based systems as well as synchronous and asynchronous systems. Next we define the notion of a "timed simulation relation" between the states of two general timed automata. The conditions to be satisfied by a timed simulation relation are:

1. All the start states of A have corresponding start states of B.

2. The correspondence preserves the timed trace, that is, the sequence of visible actions, each paired with its time of occurrence, plus the total amount of time passage.

3. Each step of A with action π must correspond to a sequence of steps of B: if π is a visible action, the sequence must consist of a π step, possibly with some preceding and/or following internal steps; if π is an internal action, it must consist of internal steps only; if π = ν(t), it must consist of time-passage steps interspersed with internal steps, with the total amount of time passage equal to t.

The theorem that gives the key property of timed simulation relations says: "If there is a timed simulation relation from A to B, then attraces(A) ⊆ attraces(B)."

Page 19: PARTIALLY SYNCHRONOUS ALGORITHMS

Example 23.3.3 Simulation Proof of time bounds for a timeout process

This example shows that P2, the timeout MMT automaton of Example 23.1.2, must perform a timeout within the interval [k·l1, k·l2 + l] if no messages are received. We define a variant A of P2 that does not even have a receive action in its signature; A simply counts down from k to 0 and then performs a timeout. The specification automaton B performs a single timeout within the time interval [k·l1, k·l2 + l].

The simulation relation requires the following conditions to hold between a state u of gen(B) and the corresponding state s of gen(A):

1. s.now = u.now

2. s.status = u.status

3. u.last(timeout) ≥ s.last(dec) + (s.count − 1)·l2 + l, if s.count > 0; u.last(timeout) ≥ s.last(timeout), otherwise.

In this condition, the u.last(timeout) value (in gen(B)) is constrained to be at least as large as a certain quantity that is calculated in terms of the state of gen(A). There are two cases:

Page 20: PARTIALLY SYNCHRONOUS ALGORITHMS

If count > 0, then this time is bounded by the last time at which the first decrement can occur, plus the additional time required to do count − 1 further decrement steps followed by a timeout step; since each decrement step can take at most time l2 and the timeout can take at most time l, this additional time is at most (count − 1)·l2 + l. If count = 0, then this time is bounded by the last time at which the timeout can occur.

4. u.first(timeout) ≤ s.first(dec) + (s.count − 1)·l1, if s.count > 0; u.first(timeout) ≤ s.first(timeout), otherwise.

The interpretation of the first(timeout) inequality is symmetric: the value of u.first(timeout) should be no larger than a calculated lower bound on the earliest time until a timeout action is performed by gen(A).

Assertion 23.3.4 In any reachable state of gen(A), if count > 0, then status = active. The proof proceeds in the usual way for simulations, verifying the start condition and the step condition; a small check of the relation is sketched below.
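Here is a small Python sketch (an assumed dictionary representation of gen(A) and gen(B) states, not from the book) that checks conditions 1-4 of this simulation relation for a given pair of states.

```python
def related(s, u, l1, l2, l):
    """Check whether state u of gen(B) is related to state s of gen(A)
    under the simulation relation of Example 23.3.3."""
    if s["now"] != u["now"] or s["status"] != u["status"]:   # conditions 1 and 2
        return False
    if s["count"] > 0:
        lower = s["first_dec"] + (s["count"] - 1) * l1
        upper = s["last_dec"] + (s["count"] - 1) * l2 + l
    else:
        lower = s["first_timeout"]
        upper = s["last_timeout"]
    # condition 3: u.last(timeout) at least the computed upper bound;
    # condition 4: u.first(timeout) at most the computed lower bound
    return u["last_timeout"] >= upper and u["first_timeout"] <= lower
```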

Page 21: PARTIALLY SYNCHRONOUS ALGORITHMS

Two-Task Race

This outlines a simulation proof in which l + l2 + L·l is shown to be an upper bound on the time until the Race automaton produces a report. Within time l, the int task sets the flag to true. During this time the largest value that count could reach is l/l1. Then it takes time at most (l/l1)·l2 = L·l for the main task to decrement count to 0, and then an additional time of at most l2 to perform a report.

The following conditions hold for the two-task race:

1. s.now = u.now

2. s.reported = u.reported

3. u.last(report) ≥ s.last(int) + (s.count + 2)·l2 + L·(s.last(int) − s.first(main)), if s.flag = false and s.first(main) ≤ s.last(int); u.last(report) ≥ s.last(main) + s.count·l2, otherwise.

The third condition says that if flag = true, then the time remaining until the report is just the time for the main task to do the remaining decrement steps, followed by the final report. The same reasoning holds if flag is still false but must become true before there is time for another increment to occur, that is, if s.first(main) > s.last(int). Otherwise, s.flag = false and s.first(main) ≤ s.last(int).