From: AAAI Technical Report FS-98-02. Compilation copyright © 1998, AAAI (www.aaai.org). All rights reserved.

Reinventing Shakey

Murray Shanahan

Department of Computer Science, Queen Mary and Westfield College,

Mile End Road, London E1 4NS, England.

[email protected]

Abstract

This paper presents analogous logical characterisations of perception and planning in a mobile robot, and describes their implementation via an abductive logic programming meta-interpreter. In addition, the incorporation of the perception and planning processes in a sense-plan-act cycle is described.

Introduction

In the late Sixties, when the Shakey project started at SRI, the vision of robot design based on logical representation seemed both attractive and attainable. Through the Seventies and early Eighties, however, the desire to build working robots led researchers away from logic to more practical but ad hoc approaches to representation. This movement away from logical representation reached its apogee in the late Eighties and early Nineties, when Brooks jettisoned the whole idea of representation, along with the so-called sense-model-plan-act architecture epitomised by Shakey.

Let’s briefly review the Shakey project [Nilsson, 1984] and Brooks’s critique of it [Brooks, 1991]. Shakey worked by building an internal model of the world, using a logic-based representational formalism. This model was used by the STRIPS planner to find a sequence of actions for achieving a desired goal. Having constructed such a plan, Shakey proceeded to execute it via a plan execution module, which possessed some ability to recover from action failure and had a limited re-planning capacity.

According to Brooks, robots constructed in the style of Shakey are handicapped by the need for all information from the sensors to pass through the modelling and planning modules before having any effect on the robot’s motors. Hence Shakey’s success was confined to highly engineered, static environments. The Shakey approach, so the argument goes, is bound to fail in a more realistic, dynamic environment, in which there isn’t time to construct a plan in response to ongoing, unexpected events.

On the other hand, the Shakey style of architecture, having an overtly deliberative component, seems to offer researchers a direct path to robots with high-level cognitive skills, such as planning, reasoning about other agents, and possibly language. Accordingly, many roboticists have attempted to reconcile the deliberative aspect of the sense-model-plan-act architecture with the sort of reactive architecture favoured by Brooks and his followers.

Other researchers have instigated a more thoroughgoing Shakey revival, and are aiming to achieve robots with the sort of high-level cognitive skills listed above by using logic as a representational medium [Lespérance, et al., 1994]. Robots whose design is based on logical representation have the virtue of producing behaviour which can be accounted for in terms of correct reasoning and correct representation. Moreover, where the Shakey project compromised on the representation of the effects of actions, using the STRIPS language instead of logic, contemporary researchers, armed with modern solutions to the frame problem [Shanahan, 1997a], expect to go one better than their predecessors and use logic throughout.

The work reported here shares the aims of both the above groups of researchers. In particular, this paper presents correctly implemented logical accounts of

• planning,
• perception, and
• the interleaving of sensing, planning and acting.

The third bullet is critical for achieving the hoped-for combination of deliberation and reactivity.

A logic programming approach is taken to the problem of correctly implementing the logical accounts in question, since it offers a suitable compromise between the unattainable ideal of directly using a general purpose theorem prover, and the use of efficient special purpose algorithms whose intermediate computational states may have no declarative meaning.

1 Representing Action

To supply the required logical accounts of perception and planning, a formalism for reasoning about action is needed. The formalism presented here is based on the circumscriptive event calculus [Shanahan, 1997a]. Because this material is presented in considerable detail elsewhere, the description here will be kept fairly brief.

A many sorted language is assumed, with variables for fluents, actions (or events), and time points. We have the following axioms, whose conjunction will be denoted EC. Their main purpose is to constrain the predicate HoldsAt.



HoldsAt(f,t) represents that fluent f holds at time t. Throughout the paper, all variables are universally quantified with maximum scope, unless otherwise indicated.

HoldsAt(f,t) ← Initially_P(f) ∧ ¬ Clipped(0,f,t) (EC1)

HoldsAt(f,t3) ← (EC2)
Happens(a,t1,t2) ∧ Initiates(a,f,t1) ∧
t2 < t3 ∧ ¬ Clipped(t1,f,t3)

Clipped(t1,f,t4) ↔ (EC3)
∃ a,t2,t3 [Happens(a,t2,t3) ∧ t1 < t3 ∧ t2 < t4 ∧
[Terminates(a,f,t2) ∨ Releases(a,f,t2)]]

¬ HoldsAt(f,t) ← (EC4)
Initially_N(f) ∧ ¬ Declipped(0,f,t)

¬ HoldsAt(f,t3) ← (EC5)
Happens(a,t1,t2) ∧ Terminates(a,f,t1) ∧
t2 < t3 ∧ ¬ Declipped(t1,f,t3)

Declipped(t1,f,t4) ↔ (EC6)
∃ a,t2,t3 [Happens(a,t2,t3) ∧ t1 < t3 ∧ t2 < t4 ∧
[Initiates(a,f,t2) ∨ Releases(a,f,t2)]]

Happens(a,t1,t2) → t1 ≤ t2 (EC7)

A particular domain is described in terms of Initiates, Terminates, and Releases formulae. Initiates(a,f,t) represents that fluent f starts to hold after action a at time t. Conversely, Terminates(a,f,t) represents that f starts not to hold after action a at t. Releases(a,f,t) represents that fluent f is no longer subject to the common sense law of inertia after action a at t.

A particular narrative of events is described in terms of Happens and Initially formulae. The formulae Initially_P(f) and Initially_N(f) respectively represent that fluent f holds at time 0 and does not hold at time 0. Happens(a,t1,t2) represents that action or event a occurs, starting at time t1 and ending at time t2.

A two-argument version of Happens is defined as follows.

Happens(a,t) ≡def Happens(a,t,t)

Formulae describing triggered events are allowed, and will generally have the form

Happens(α,τ) ← Π.

As we’ll see in Section 5, similar formulae can be used to define compound events.

The frame problem is overcome through circumscription. Given a conjunction Σ of Initiates, Terminates, and Releases formulae describing the effects of actions, a conjunction Δ of Initially, Happens and temporal ordering formulae describing a narrative of actions and events, and a conjunction Ω of uniqueness-of-names axioms for actions and fluents, we’re interested in

CIRC[Σ ; Initiates, Terminates, Releases] ∧
CIRC[Δ ; Happens] ∧ EC ∧ Ω.

By minimising Initiates, Terminates and Releases we assume that actions have no unexpected effects, and by minimising Happens we assume that there are no unexpected event occurrences. In all the cases we’re interested in, Σ and Δ will be conjunctions of Horn clauses, and the circumscriptions will reduce to predicate completions.

Care must be taken when domain constraints and triggered events are included. The former must be conjoined to EC, while the latter are conjoined to Δ.

A detailed account of this formalism and the accompanying solution to the frame problem can be found in [Shanahan, 1997a].
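To make the inertial reading of these axioms concrete, the following Python sketch (an illustrative translation, not part of the paper, which works in Prolog) performs deductive temporal projection in the spirit of (EC1)–(EC3), specialised to instantaneous events and a domain with no Releases formulae. All function and fluent names here are assumptions chosen for the example.

```python
# Illustrative sketch of temporal projection with (EC1)-(EC3),
# specialised to instantaneous events (Happens(a,t,t)), no Releases.

def holds_at(f, t, initially_p, happens, initiates, terminates):
    """True if fluent f holds at time t, per (EC1)/(EC2)."""
    def clipped(t1, t4):
        # Simplified (EC3): some event in [t1,t4) terminates f.
        return any(t1 <= t2 < t4 and terminates(a, f, t2)
                   for (a, t2) in happens)
    if f in initially_p and not clipped(0, t):        # (EC1)
        return True
    return any(t2 < t and initiates(a, f, t2)         # (EC2)
               and not clipped(t2, t)
               for (a, t2) in happens)

# Tiny narrative: GoThrough(D4) at time 1 moves the robot from R3 to R4.
initially_p = {"InRoom(R3)"}
happens = [("GoThrough(D4)", 1)]
initiates = lambda a, f, t: a == "GoThrough(D4)" and f == "InRoom(R4)"
terminates = lambda a, f, t: a == "GoThrough(D4)" and f == "InRoom(R3)"

print(holds_at("InRoom(R4)", 2, initially_p, happens, initiates, terminates))  # True
print(holds_at("InRoom(R3)", 2, initially_p, happens, initiates, terminates))  # False
```

The common sense law of inertia is visible in the code: a fluent persists from the event that initiated it until some event clips it.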

2 A Logical Account of Planning

Planning can be thought of as the inverse operation to temporal projection, and temporal projection in the event calculus is naturally cast as a deductive task. Given Σ, Ω and Δ as above, we’re interested in HoldsAt formulae Γ such that

CIRC[Σ ; Initiates, Terminates, Releases] ∧
CIRC[Δ ; Happens] ∧ EC ∧ Ω ⊨ Γ.

Conversely, as first pointed out by Eshghi [1988], planning in the event calculus can be considered as an abductive task. Given a domain description Σ, a conjunction Γ of goals (HoldsAt formulae), and a conjunction Δ_N of Initially_P and Initially_N formulae describing the initial situation, a plan is a consistent conjunction Δ_P of Happens and temporal ordering formulae such that

CIRC[Σ ; Initiates, Terminates, Releases] ∧
CIRC[Δ_N ∧ Δ_P ; Happens] ∧ EC ∧ Ω ⊨ Γ.

This logical characterisation of event calculus planning is analogous to Green’s logical characterisation of situation calculus planning [Green, 1969].

2.1 An Example Planning Problem

Figure 1 shows the layout of the rooms and doorways in the environment used for the robot experiments that are being carried out to vindicate the ideas of this paper. We’ll consider the problem of planning a sequence of "go through door" actions to lead from the robot’s initial location to a goal location. This problem can, of course, be recast as a trivial graph search, but here it will be used as an example of general purpose planning.

[Figure: a map of rooms R1–R6 connected by doors D1–D6, with numbered corners and the robot’s initial position marked.]

Figure 1: The Robot’s Pen


The following formulae capture the connectivity of the rooms.

Connects(D1,R1,R2) (W2.1)
Connects(D2,R2,R3) (W2.2)
Connects(D3,R2,R4) (W2.3)
Connects(D4,R3,R4) (W2.4)
Connects(D5,R4,R5) (W2.5)
Connects(D6,R4,R6) (W2.6)

Connects(d,r1,r2) ← Connects(d,r2,r1) (W2.7)

The robot can perform only one action, which is to go through a specified door d, denoted by the term GoThrough(d). The only fluent in the domain is InRoom(r), representing that the robot is in room r. We have the following Initiates and Terminates formulae.

Initiates(GoThrough(d),InRoom(r1),t) ← (R2.1)
Connects(d,r2,r1) ∧ HoldsAt(InRoom(r2),t)

Terminates(GoThrough(d),InRoom(r),t) ← (R2.2)
HoldsAt(InRoom(r),t)

Since there is only one action and only one fluent, this example doesn’t require any uniqueness-of-names axioms.

Suppose the robot is initially in room R3.

Initially_P(InRoom(R3)) (N2.1)

The goal is to get the robot to room R6.

HoldsAt(InRoom(R6),T) (G2.1)

Clearly one plan for achieving the goal is to go through D4 then go through D6.

Happens(GoThrough(D4),T1) (P2.1)
Happens(GoThrough(D6),T2) (P2.2)

T1 < T2 (P2.3)

T2 < T (P2.4)

The fact that this is a plan according to the abductive definition is expressed in the following proposition. Let

• Σ be the conjunction of (R2.1) and (R2.2),
• Δ_N be formula (N2.1),
• Δ_P be the conjunction of (P2.1) to (P2.4),
• Φ be the conjunction of (W2.1) to (W2.7),
• Γ be formula (G2.1).

Proposition 2.1.

CIRC[Σ ; Initiates, Terminates, Releases] ∧
CIRC[Δ_N ∧ Δ_P ; Happens] ∧
EC ∧ Φ ⊨ Γ. □

More complex examples of event calculus planning are to be found in [Shanahan, 1997c], as well as in Section 5 of this paper.
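The remark above that this problem can be recast as a trivial graph search can be checked directly. The following Python sketch is not the paper’s abductive planner; it is a plain breadth-first search over the Connects relation of axioms (W2.1)–(W2.7), shown only to confirm that the plan of (P2.1)–(P2.4) is a shortest solution to goal (G2.1).

```python
# Breadth-first search over the room connectivity of (W2.1)-(W2.7),
# as a sanity check on the abductively characterised plan.
from collections import deque

connects = [("D1", "R1", "R2"), ("D2", "R2", "R3"), ("D3", "R2", "R4"),
            ("D4", "R3", "R4"), ("D5", "R4", "R5"), ("D6", "R4", "R6")]
# (W2.7): Connects is symmetric in its room arguments.
edges = connects + [(d, r2, r1) for (d, r1, r2) in connects]

def plan(start, goal):
    """Return a shortest list of GoThrough(d) actions from start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        room, acts = queue.popleft()
        if room == goal:
            return acts
        for (d, r1, r2) in edges:
            if r1 == room and r2 not in seen:
                seen.add(r2)
                queue.append((r2, acts + ["GoThrough(%s)" % d]))
    return None

print(plan("R3", "R6"))  # ['GoThrough(D4)', 'GoThrough(D6)']
```

The point of the paper, of course, is that the abductive planner solves this and much richer problems from the same declarative domain description.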

3 Abductive Event Calculus Through Logic Programming

This section presents a logic programming implementation of abductive event calculus planning. The computation carried out by the resulting implementation is very similar to that of a partial-order planning algorithm, such as UCPOP [Penberthy & Weld, 1992]. But, as we’ll see, essentially the same logic program also implements both abductive sensor data assimilation and hierarchical planning.

The implementation is in essence a resolution based abductive theorem prover, coded as a Prolog meta-interpreter. But this theorem prover is tailored for the event calculus by compiling the event calculus axioms into the meta-level, resulting in an efficient implementation.

In what follows, I will assume some knowledge of logic programming concepts and terminology. Following convention, logic program variables will begin with upper-case letters, while constants and function symbols will begin with lower-case letters.

Meta-interpreters are a standard part of the logic programmer’s toolkit. For example, the following "vanilla" meta-interpreter, when executed by Prolog, will mimic Prolog’s own execution strategy.

demo([]).

demo([G|Gs1]) :-
    axiom(G,Gs2), append(Gs2,Gs1,Gs3),
    demo(Gs3).

demo([not(G)|Gs]) :- not demo([G]), demo(Gs).

The formula demo(Gs) holds if Gs follows from the object-level program. If Π is a list of Prolog literals [λ1, ..., λn], then the formula axiom(λ0,Π) holds if there is a clause of the following form in the object-level program.

λ0 :- λ1, ..., λn

One of the tricks we’ll employ here is to compile object-level clauses into the meta-level. For example, the above clause can be compiled into the definition of demo through the addition of the following clause.

demo([λ0|Gs1]) :-
    axiom(λ1,Gs2),
    append(Gs2,[λ2, ..., λn|Gs1],Gs3),
    demo(Gs3).
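The behaviour of the vanilla meta-interpreter can be sketched in Python as well. The following is an illustrative translation (an assumption, not the paper’s code) restricted to ground programs, so unification is dispensed with; a clause is a (head, body) pair, and negation-as-failure is handled as in the third demo clause.

```python
# Hedged Python analogue of the vanilla meta-interpreter, for a
# ground object-level program given as (head, body-list) clauses.

program = [("a", ["b", "c"]), ("b", []), ("c", [])]

def demo(goals):
    """True if the goal list follows from the object-level program."""
    if not goals:                                    # demo([]).
        return True
    g, rest = goals[0], goals[1:]
    if isinstance(g, tuple) and g[0] == "not":       # demo([not(G)|Gs])
        return not demo([g[1]]) and demo(rest)
    return any(demo(body + rest)                     # demo([G|Gs1])
               for (head, body) in program if head == g)

print(demo(["a"]))           # True: a :- b, c and both b, c are facts
print(demo([("not", "d")]))  # True: d is not provable
```

As in the Prolog version, resolving a goal replaces it by the body of a matching clause, prepended to the remaining goals.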

The resulting behaviour is equivalent to that of the vanilla meta-interpreter with the object-level clause. Now consider the following object-level clause, which corresponds to Axiom (EC2) of Section 1.

holds_at(F,T3) :-
    happens(A,T1,T2), T2 < T3,
    initiates(A,F,T1),
    not clipped(T1,F,T3).

This can be compiled into the following meta-level clause, in which the predicate before is used to represent temporal ordering.

demo([holds_at(F,T3)|Gs1]) :-
    axiom(initiates(A,F,T1),Gs2),
    axiom(happens(A,T1,T2),Gs3),
    axiom(before(T2,T3),[]),
    demo([not(clipped(T1,F,T3))]),
    append(Gs3,Gs2,Gs4), append(Gs4,Gs1,Gs5),
    demo(Gs5).


To represent Axiom (EC5), which isn’t in Horn clause form, we introduce the function neg. Throughout our logic program, we replace the classical predicate calculus formula ¬ HoldsAt(f,t) with holds_at(neg(F),T). Thus we obtain the following object-level clause.

holds_at(neg(F),T3) :-
    happens(A,T1,T2), T2 < T3,
    terminates(A,F,T1),
    not declipped(T1,F,T3).

This compiles into the following meta-level clause.

demo([holds_at(neg(F),T3)|Gs1]) :-
    axiom(terminates(A,F,T1),Gs2),
    axiom(happens(A,T1,T2),Gs3),
    axiom(before(T2,T3),[]),
    demo([not(declipped(T1,F,T3))]),
    append(Gs3,Gs2,Gs4), append(Gs4,Gs1,Gs5),
    demo(Gs5).

The Prolog execution of these two meta-level clauses doesn’t mimic precisely the Prolog execution of the corresponding object-level clauses. This is because we have taken advantage of the extra degree of control available at the meta-level, and adjusted the order in which the sub-goals of holds_at are solved. For example, in the above clause, although we resolve on terminates immediately, we postpone further work on the sub-goals of terminates until after we’ve resolved on happens and before. This manoeuvre is required to prevent looping.

The job of an abductive meta-interpreter is to construct a residue of abducible literals that can’t be proved from the object-level program. In the case of the event calculus, the abducibles will be happens and before literals. Here’s a "vanilla" abductive meta-interpreter, without negation-as-failure.

abdemo([],R,R).

abdemo([G|Gs],R1,R2) :-
    abducible(G), abdemo(Gs,[G|R1],R2).

abdemo([G|Gs1],R1,R2) :-
    axiom(G,Gs2), append(Gs2,Gs1,Gs3),
    abdemo(Gs3,R1,R2).

The formula abdemo(Gs,R1,R2) holds if Gs follows from the conjunction of R2 with the object-level program. (R1 is the input residue and R2 is the output residue.) Abducible literals are declared via the abducible predicate. In top-level calls to abdemo, the second argument will usually be [].
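The residue-threading idea can be sketched outside Prolog. The following Python fragment (an illustrative assumption, again restricted to ground programs) mirrors the vanilla abdemo: a goal that resolves against no clause but is declared abducible is collected in the residue rather than failing. Clause order differs slightly from the Prolog version, which tries abducibles first.

```python
# Sketch of vanilla abduction: unprovable-but-abducible goals are
# assumed, accumulating in a residue list.

program = {"goal": [["happens_go"]]}            # goal :- happens_go.
abducible = lambda g: g.startswith("happens")   # assumed convention

def abdemo(goals, residue):
    """Return a residue from which goals follow, or None on failure."""
    if not goals:
        return residue
    g, rest = goals[0], goals[1:]
    for body in program.get(g, []):             # try object-level clauses
        r = abdemo(body + rest, residue)
        if r is not None:
            return r
    if abducible(g):                            # otherwise, assume g
        return abdemo(rest, residue + [g])
    return None

print(abdemo(["goal"], []))  # ['happens_go']
```

In the planner, the residue returned this way is exactly the plan: a set of assumed happens and before literals.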

Things start to get tricky when we incorporate negation-as-failure. The difficulty here is that when we add to the residue, previously proved negated goals may no longer be provable. So negated goals (such as negated clipped goals) have to be recorded and re-checked each time the residue is modified. Here’s a version of abdemo which handles negation-as-failure.

abdemo([],R,R,N). (A3.1)

abdemo([G|Gs],R1,R3,N) :- (A3.2)
    abducible(G), abdemo_nafs(N,[G|R1],R2),
    abdemo(Gs,R2,R3,N).

abdemo([G|Gs1],R1,R2,N) :- (A3.3)
    axiom(G,Gs2), append(Gs2,Gs1,Gs3),
    abdemo(Gs3,R1,R2,N).

abdemo([not(G)|Gs],R1,R3,N) :- (A3.4)
    abdemo_naf([G],R1,R2),
    abdemo(Gs,R2,R3,[[G]|N]).

The last argument of the abdemo predicate is a list of negated goal lists, which is recorded for subsequent checking (in Clause (A3.2)). If

[[γ1,1 ... γ1,n1], ..., [γm,1 ... γm,nm]]

is such a list, then its meaning, assuming a completion semantics for our object-level logic program, is

¬ (γ1,1 ∧ ... ∧ γ1,n1) ∧ ... ∧ ¬ (γm,1 ∧ ... ∧ γm,nm).

The formula abdemo_nafs(N,R1,R2) holds if the above formula is provable from the (completion of the) conjunction of R2 with the object-level program. (In the vanilla version, abdemo_nafs doesn’t add to the residue. However, we will eventually require a version which does, as we’ll see shortly.)

abdemo_nafs(N,R1,R2) applies abdemo_naf to each list of goals in N. abdemo_naf is defined in terms of Prolog’s findall, as follows.

abdemo_naf([G|Gs1],R,R) :-
    not resolve(G,R,Gs2).

abdemo_naf([G1|Gs1],R1,R2) :-
    findall(Gs2,
        (resolve(G1,R1,Gs3), append(Gs3,Gs1,Gs2)),
        Gss),
    abdemo_nafs(Gss,R1,R2).

resolve(G,R,Gs) :- member(G,R).

resolve(G,R,Gs) :- axiom(G,Gs).

The logical justification for these clauses is as follows. In order to show ¬ (γ1 ∧ ... ∧ γn), we have to show that, for every object-level clause λ :- λ1, ..., λm which resolves with γ1, ¬ (λ1 ∧ ... ∧ λm ∧ γ2 ∧ ... ∧ γn). If no clause resolves with γ1 then, under a completion semantics, ¬ γ1 follows, and therefore so does ¬ (γ1 ∧ ... ∧ γn).

However, in the context of incomplete information about a predicate we don’t wish to assume that predicate’s completion, and we cannot therefore legitimately use negation-as-failure to prove negated goals for that predicate.

The way around this is to trap negated goals for such predicates at the meta-level, and give them special treatment. In general, if we know ¬φ ← ψ, then in order to prove ¬φ, it’s sufficient to prove ψ. Similarly, if we know ¬φ ↔ ψ, then in order to prove ¬φ, it’s both necessary and sufficient to prove ψ.

In the present case, we have incomplete information about the before predicate. Accordingly, when the meta-interpreter encounters a goal of the form not(before(X,Y)), which it will when it comes to prove (or re-prove) a negated clipped goal, it attempts to prove before(Y,X). One way to achieve this is to add before(Y,X) to the residue, first checking that the resulting residue is consistent.
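The consistency check mentioned here amounts to keeping the before facts in the residue free of cycles, so that they admit a consistent total order. A minimal Python sketch (my own illustration, not the paper's implementation) of such a check:

```python
# Sketch of the residue consistency check: the before(X,Y) pairs must
# form a directed acyclic graph over time-point names.

def consistent(befores):
    """True if the before(X,Y) pairs contain no cycle."""
    succs = {}
    for (x, y) in befores:
        succs.setdefault(x, set()).add(y)
    def reaches(a, b, seen):
        # depth-first search: is b reachable from a?
        if a == b:
            return True
        if a in seen:
            return False
        seen.add(a)
        return any(reaches(n, b, seen) for n in succs.get(a, ()))
    # a cycle exists iff some before(x,y) has y reaching back to x
    return not any(reaches(y, x, set()) for (x, y) in befores)

print(consistent([("t1", "t2"), ("t2", "t")]))   # True
print(consistent([("t1", "t2"), ("t2", "t1")]))  # False
```

Only if the extended residue passes this check may before(Y,X) be assumed in order to discharge not(before(X,Y)).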


Similar considerations affect the treatment of the holds_at predicate, which inherits the incompleteness of before. When the meta-interpreter encounters a not(holds_at(F,T)) goal, where F is a ground term, it attempts to prove holds_at(neg(F),T), and conversely, when it encounters not(holds_at(neg(F),T)), it attempts to prove holds_at(F,T). In both cases, this can result in further additions to the residue.

Note that these techniques for dealing with negation in the context of incomplete information are general in scope. They’re generic theorem proving techniques, and their use isn’t confined to the event calculus. For further details of the implementation of abdemo_naf, the reader should consult the (electronic) appendix.

As with the demo predicate, we can compile the event calculus axioms into the definition of abdemo and abdemo_naf via the addition of some extra clauses, giving us a finer degree of control over the resolution process. Here’s an example.

abdemo([holds_at(F,T3)|Gs1],R1,R4,N) :- (A3.5)
    axiom(initiates(A,F,T1),Gs2),
    abdemo_nafs(N,
        [happens(A,T1,T2),before(T2,T3)|R1],R2),
    abdemo_nafs([[clipped(T1,F,T3)]],R2,R3),
    append(Gs2,Gs1,Gs3),
    abdemo(Gs3,R3,R4,[[clipped(T1,F,T3)]|N]).

Now, to solve a planning problem, we simply describe the effects of actions directly as Prolog initiates, terminates and releases clauses, we present a list of holds_at goals to abdemo, and the returned residue, comprising happens and before literals, is a plan.

Soundness and completeness results for this implementation are presented elsewhere, but they should come as no surprise since, as shown in [Shanahan, 1997c], the computation performed by the planner is in essence the same as that carried out by UCPOP, for which such results are well-known [Penberthy & Weld, 1992].

4 A Logical Account of Perception

This section offers a logical account of sensor data assimilation (perception) which mirrors the logical account of planning in Section 2. The need for such an account arises from the fact that sensors do not deliver facts directly into the robot’s model of the world. Rather, they provide raw data from which facts can be inferred.

The methodology for supplying the required logical account is as follows. First, using a suitable formalism for reasoning about actions, construct a theory Σ of the effects of the robot’s actions on the world and the impact of the world on the robot’s sensors. Second, consider sensor data assimilation as abduction with this theory. Roughly speaking, given a narrative Δ of the robot’s actions, and a description Γ of the robot’s sensor data, the robot needs to find some Ψ such that

Σ ∧ Δ ∧ Ψ ⊨ Γ.

In event calculus terms, Γ might comprise Happens and/or HoldsAt formulae describing sensor events or values, and Ψ might comprise Initially_N and Initially_P formulae describing the environment’s initial configuration and/or Happens formulae describing the intervening actions of other agents which have modified that configuration.

To illustrate this, we’ll pursue the mail delivery domain used in Section 2. But we must begin with a look at the sensory capabilities of the Khepera robots which are being used to test the ideas presented in this paper.

Figure 2: The Khepera Robot

The Khepera robot is a miniature robot platform with two drive wheels and a suite of eight infra-red proximity sensors around its circumference. The infra-red sensors are arranged in pairs, as shown in Figure 2.

The Khepera can be straightforwardly programmed to navigate around the environment of Figure 1. Using its proximity sensors, it can follow walls and detect inner and outer corners. If all the doors are open, the GoThrough action of Section 2 can be executed, assuming the robot’s initial location is known, by counting inner and outer corners until the robot reaches the required door, then passing through it.

If any of the doors is closed, however, this approach to executing the GoThrough action will fail, because the infra-red proximity sensors cannot detect a closed door, which looks to them just like a continuation of the wall.

This, in an extreme form, is the predicament facing any perceptual system. Inference must be carried out on raw sensor data in order to produce knowledge. In this case, the robot can abduce the fact that a door is closed as the only possible explanation of its unexpected arrival at an inner corner instead of the outer corner of the doorway.

Our aim here is to give a formal account of this sort of inference that gels with the formal account of planning already supplied. Indeed, as we’ll see, in the final system, the same knowledge, expressed using the same formalism, is used for both planning and sensor data assimilation. Furthermore, as already emphasised, both planning and sensor data assimilation are viewed as abductive tasks with a very similar character. This means that the same logic programming implementation techniques, indeed the very same abductive meta-interpreter, can be applied to both tasks.

4.1 The Robot’s Environment

Returning to the example at hand, the representation of the robot’s environment, as depicted in Figure 1, now needs to include corners, which were neglected in the planning example. The formula NextCorner(r,c1,c2) represents that corner c2 is the next inner or outer corner in room r after corner c1, in a clockwise direction. For room R1 alone, we have the following formulae.

NextCorner(R1,C1,C2)
NextCorner(R1,C2,C3)
NextCorner(R1,C3,C4)
NextCorner(R1,C4,C5)
NextCorner(R1,C5,C6)
NextCorner(R1,C6,C1)

In addition, the formula Door(d,c1,c2) represents that there is a doorway between the two corners c1 and c2. For each door, there will be a pair of such formulae. Here they are for door D1.

Door(D1,C3,C4)
Door(D1,C15,C16)

Finally, the formulae Inner(c) and Outer(c) represent respectively that c is an inner corner and c is an outer corner. Again confining our attention to room R1, we have the following.

Inner(C1)  Inner(C2)
Outer(C3)  Outer(C4)
Inner(C5)  Inner(C6)

Each of these predicates will need to be minimised using circumscription, so that their completions are formed.

4.2 The Robot’s Effect on the World

Now we can formalise the effects of the robot’s actions on the world. To simplify the example, the following formulae assume the robot always hugs the left wall, although parameters are provided which allow for it to hug the right wall as well.

Again, a finer grain of detail is required than for the planning example. Instead of a single GoThrough action, the robot’s repertoire now comprises three actions: FollowWall, Turn(s), and GoStraight, where s is either Left or Right. These actions affect three fluents. The fluent AtCorner(c,s) holds if the robot is at (inner or outer) corner c, with c in direction s, where s is Left or Right. The fluent BesideWall(w,s) holds if the robot is adjacent to wall w in direction s, where s is Left or Right. The fluent InDoorway(d,r) holds if the robot is in doorway d, with its back to room r. (By convention, the three fluents are mutually exclusive.)

Let’s formalise the effects of the three actions in turn. Each action is assumed to be instantaneous, an assumption which has no practical implications in the present example. The term Wall(c1,c2) denotes the wall between corners c1 and c2. First, if the robot follows a wall, it ends up at the next visible corner.

Initiates(FollowWall,AtCorner(c3,Left),t) ← (K4.1)
HoldsAt(BesideWall(Wall(c1,c2),Left),t) ∧
NextVisibleCorner(c1,c3,Left,t)

Terminates(FollowWall,BesideWall(w,s),t) (K4.2)

The formula NextVisibleCorner(c1,c2,s,t) means that, at time t, c2 is the next visible corner after c1, where the wall in question is in direction s. The corner of a doorway whose door is closed is invisible.

NextVisibleCorner(c1,c2,Left,t) ← (K4.3)
NextCorner(r,c1,c2) ∧ ¬ InvisibleCorner(c2,t)

NextVisibleCorner(c1,c3,Left,t) ← (K4.4)
NextCorner(r,c1,c2) ∧ InvisibleCorner(c2,t) ∧
NextVisibleCorner(c2,c3,Left,t)

NextVisibleCorner(c1,c2,s,t) ∧ (K4.5)
NextVisibleCorner(c1,c3,s,t) → c2 = c3

InvisibleCorner(c1,t) ↔ (K4.6)
∃ d,c2 [[Door(d,c1,c2) ∨ Door(d,c2,c1)] ∧
¬ HoldsAt(DoorOpen(d),t)]

Next we have the GoStraight action, which the robot executes to bypass a doorway, travelling in a straight line from the near corner of the doorway and coming to rest when it detects the far corner.

Initiates(GoStraight,   (K4.7)
        BesideWall(Wall(c2,c3),Left),t) ←
    HoldsAt(AtCorner(c1,Left),t) ∧
    Door(d,c1,c2) ∧ NextCorner(r,c2,c3)

Initiates(GoStraight,   (K4.8)
        BesideWall(Wall(c2,c1),Right),t) ←
    HoldsAt(AtCorner(c3,Right),t) ∧
    Door(d,c2,c3) ∧ NextCorner(r,c1,c2)

Terminates(GoStraight,AtCorner(c,s),t)   (K4.9)

Finally we have the Turn action. Since the robot has to hug the left wall, it always turns left (or goes straight) at outer corners, and always turns right at inner corners. If it turns left at the corner of a doorway, it ends up in the doorway.

Initiates(Turn(Left),InDoorway(d,r),t) ←   (K4.10)
    HoldsAt(AtCorner(c1,Left),t) ∧ Door(d,c1,c2) ∧
    HoldsAt(DoorOpen(d),t) ∧ NextCorner(r,c1,c2)

If the robot turns left when in a doorway, it ends up alongside a wall in the next room.

Initiates(Turn(Left),   (K4.11)
        BesideWall(Wall(c2,c3),Left),t) ←
    HoldsAt(InDoorway(d,r1),t) ∧ Connects(d,r1,r2) ∧
    Door(d,c1,c2) ∧ NextCorner(r2,c2,c3)

If the robot turns right at an inner corner, it ends up next to a new wall.

Initiates(Turn(Right),   (K4.12)
        BesideWall(Wall(c1,c2),Left),t) ←
    HoldsAt(AtCorner(c1,Left),t) ∧
    Inner(c1) ∧ NextCorner(r,c1,c2)

The mutual exclusivity of the AtCorner, InDoorway and BesideWall fluents is preserved by the following formulae.

Terminates(Turn(s1),AtCorner(c,s2),t)   (K4.13)

Terminates(Turn(s),InDoorway(d,r),t) ←   (K4.14)
    HoldsAt(InDoorway(d,r),t)


4.3 The Effect of the World on the Robot

Having axiomatised the effects of the robot’s actions on the world, we now need to formalise the impact the world has on the robot’s sensors. For this purpose, we introduce two new types of event. The event GoesHigh(s) occurs if the average value of the two sensors in direction s exceeds a threshold θ1, where s is Left, Right or Front. Similarly, the event GoesLow(s) occurs if the average value of the two sensors in direction s goes below a threshold θ2. (By making θ1 > θ2, we avoid a chatter of GoesHigh and GoesLow events when the robot approaches an obstacle.)
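Viewed procedurally, this two-threshold (hysteresis) scheme is easy to sketch. The following Python fragment is illustrative only: the threshold values and the event representation are assumptions, not taken from the robot’s actual implementation.

```python
# Sketch of the two-threshold (hysteresis) scheme for generating
# GoesHigh/GoesLow events from a stream of averaged sensor readings.
# The threshold values below are illustrative assumptions.

THETA_HIGH = 0.7  # theta1: reading must exceed this to trigger GoesHigh
THETA_LOW = 0.3   # theta2: reading must drop below this to trigger GoesLow

def sensor_events(readings):
    """Return (time, event) pairs, suppressing chatter.

    Because THETA_HIGH > THETA_LOW, small oscillations around a single
    threshold cannot produce alternating GoesHigh/GoesLow events.
    """
    high = False
    events = []
    for t, value in enumerate(readings):
        if not high and value > THETA_HIGH:
            high = True
            events.append((t, "GoesHigh"))
        elif high and value < THETA_LOW:
            high = False
            events.append((t, "GoesLow"))
    return events

# A reading that wobbles between the two thresholds generates no events.
print(sensor_events([0.1, 0.8, 0.6, 0.4, 0.6, 0.2]))
```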

Happens(GoesHigh(Front),t) ←   (S4.1)
    Happens(FollowWall,t) ∧
    Initiates(FollowWall,AtCorner(c,s),t) ∧ Inner(c)

Happens(GoesLow(Front),t) ←   (S4.2)
    HoldsAt(AtCorner(c,Left),t) ∧
    Inner(c) ∧ Happens(Turn(Right),t)

Happens(GoesHigh(s),t) ←   (S4.3)
    HoldsAt(AtCorner(c,s),t) ∧ Outer(c) ∧
    [Happens(GoStraight,t) ∨ Happens(Turn(s),t)]

Happens(GoesLow(s),t) ←   (S4.4)
    Happens(FollowWall,t) ∧
    Initiates(FollowWall,AtCorner(c,s),t) ∧ Outer(c)

Our overall aim, of course, is to use abduction to explain the occurrence of GoesHigh and GoesLow events. In the present example, if the doors are all initially open and never subsequently closed, every sensor event is predicted by the theory as it stands, so no explanation is required. The interesting case is where there are sensor events which can only be explained by a closed door. Accordingly, we need to introduce the events OpenDoor(d) and CloseDoor(d), with the obvious meanings.

Initiates(OpenDoor(d),DoorOpen(d),t)   (K4.15)

Terminates(CloseDoor(d),DoorOpen(d),t)   (K4.16)

Finally, we need some uniqueness-of-names axioms.

UNA[FollowWall, GoStraight, Turn,   (U4.3)
    GoesHigh, GoesLow, OpenDoor, CloseDoor]

UNA[BesideWall, AtCorner, InDoorway,   (U4.4)
    DoorOpen]

4.4 An Example Narrative

Now let’s examine a particular narrative of robot actions and sensor events. The following formulae describe the initial situation.

InitiallyP(DoorOpen(d))   (N4.1)

InitiallyP(BesideWall(w,s)) ←   (N4.2)
    w = Wall(C18,C19) ∧ s = Left

InitiallyN(AtCorner(c,s))   (N4.3)

InitiallyN(InDoorway(d,r))   (N4.4)

The robot follows the wall to its left until it arrives at corner C19, where it turns right and follows the wall to its left again.

Happens(FollowWall,T1) (N4.5)

Happens(Turn(Right),T2)   (N4.6)
Happens(FollowWall,T3)   (N4.7)
T1 < T2   (N4.8)
T2 < T3   (N4.9)

Now let’s suppose someone closes door D4 shortly after the robot sets out, and consider the incoming sensor events. The robot’s front sensors go high at time T1, when it arrives at corner C19. (Recall that the FollowWall action is considered instantaneous.) They go low when it turns, then (unexpectedly) go high again, when it arrives at corner C22, having bypassed door D4.

Happens(GoesHigh(Front),T1)   (D4.1)
Happens(GoesLow(Front),T2)   (D4.2)
Happens(GoesHigh(Front),T3)   (D4.3)

The above formulae only describe the sensor events that do occur. But in general, we want explanations of sensor data to exclude those sensor events that have not occurred. Hence we have the following definition, which captures the completion of the Happens predicate for sensor events.

Definition 4.1.

COMP[Γ] ≡def
    Happens(a,t) ∧ [a = GoesHigh(s) ∨ a = GoesLow(s)] ↔
        ∨ (α,τ)∈Π [a = α ∧ t = τ]

where Π = {(α,τ) | Happens(α,τ) is a conjunct of Γ}. □

The following formula is one possible explanation of the above sensor events.

Happens(CloseDoor(D4),t) ∧ 0 < t < T3   (E4.1)

This is expressed by the following proposition. Let

• Σ be the conjunction of (K4.1) to (K4.16),
• ΔN be the conjunction of (N4.1) to (N4.9),
• ΔT be the conjunction of (S4.1) to (S4.4),
• ΔE be formula (E4.1),
• Φ be the conjunction of the formulae representing the robot’s environment, as described in Section 4.1,
• Ω be the conjunction of (U4.1) and (U4.2),
• Γ be the conjunction of (D4.1) to (D4.3).

Proposition 4.1.

CIRC[Σ ; Initiates, Terminates, Releases] ∧
CIRC[ΔN ∧ ΔT ∧ ΔE ; Happens] ∧
EC ∧ Ω ∧ Φ ⊨ COMP[Γ]. □

In general, a collection of sensor data can have many explanations. Explanations can be ordered using a preference criterion, such as one which favours explanations with few events. But there can still be many minimal explanations. In these circumstances, the robot cannot do any better than simply to proceed on the assumption that the first explanation it finds is the true explanation. It’s reasonable to expect that, if the explanation is indeed false,


the processing of subsequent sensor data will reveal this. But obviously this topic merits further investigation.
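A preference criterion of the event-counting kind is straightforward to sketch. In the illustrative Python fragment below, the representation of an explanation as a list of abduced events is an assumption, not the paper’s actual data structure.

```python
# Sketch: order candidate explanations (sets of abduced events) by a
# preference criterion that favours explanations with fewer events,
# then proceed on the first minimal explanation found.

def preferred_explanation(explanations):
    """Return the first explanation of minimal size.

    Several minimal explanations may remain; the robot simply assumes
    the first one found is true, pending subsequent sensor data.
    """
    best = min(len(e) for e in explanations)
    for e in explanations:          # preserves discovery order
        if len(e) == best:
            return e

candidates = [
    [("CloseDoor", "D4"), ("OpenDoor", "D4"), ("CloseDoor", "D4")],
    [("CloseDoor", "D4")],          # minimal: one event suffices
]
print(preferred_explanation(candidates))
```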

For discussion of a more complex example of abductive sensor data assimilation, involving both sensor and motor noise, see [Shanahan, 1997b], which shows how the same approach can be taken to map-building, where the incompleteness of the robot’s knowledge of its environment is more radical than in the example used here.

Let me finish this section with a few words on implementation. The strong similarities between the abductive account of planning and the abductive account of perception mean that the meta-interpreter of Section 3 can be used to implement both. Furthermore, the same knowledge, in particular the formulae describing the effects of actions, is used by both tasks.

5 Interleaving Sensing, Planning, and Acting

The impression given by the foregoing accounts is that perception and planning are isolated tasks which run to completion, uninterrupted and in their own time. Of course, this is not true. To meet the criticisms of Brooks, outlined in the opening section, we need to show how perception, planning and plan execution can be smoothly integrated in a robot in such a way that the robot performs in real-time and reacts in a timely fashion to ongoing events.

For this reason, the robot cannot be allowed to devote an arbitrary amount of computation to planning, or to assimilating old sensor data, before it consults its sensors again, in case the world is presenting it with an urgent threat or opportunity. Rather, sensing, planning and acting have to be interleaved. The robot’s sensors are read, then a bounded amount of time is allotted to processing sensor data before the robot moves on to planning. Then a bounded amount of time is given over to planning before the robot proceeds to act. Having acted, the robot consults its sensors again, and the cycle is repeated.

To realise this idea, however, the planner described in Sections 2 and 3 must be augmented. As it stands, the planner produces actions in regression order. That is to say, the last action to be carried out is generated first. This means that, if interrupted, the planner’s partial results are useless. What we require instead is a progression planner, one that generates the earliest action of a plan first. If a progression planner is interrupted, its partially constructed plan will contain actions that can be executed immediately.
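The cycle just described can be caricatured as a loop. In this Python sketch, the function names, the time budgets, and the callback interface are all illustrative assumptions; the point is only the bounded interleaving and the execution of a progression planner’s partial results.

```python
# Sketch of the sense-plan-act cycle: bounded sensor data assimilation,
# bounded planning, then execution of the earliest action of the
# (possibly still partial) plan. All names and bounds are assumptions.
import time

def sense_plan_act(read_sensors, assimilate, plan_step, execute,
                   sense_budget=0.05, plan_budget=0.05, cycles=3):
    plan = []
    for _ in range(cycles):
        data = read_sensors()
        # Bounded amount of time for processing sensor data.
        assimilate(data, time.monotonic() + sense_budget)
        # Bounded amount of time for planning; a progression planner's
        # partial plan already starts with executable actions.
        deadline = time.monotonic() + plan_budget
        while time.monotonic() < deadline:
            if plan_step(plan):      # returns True when a step completes
                break
        if plan:
            execute(plan.pop(0))     # act on the earliest action

executed = []
sense_plan_act(
    read_sensors=lambda: {},
    assimilate=lambda data, deadline: None,
    plan_step=lambda p: p.append("FollowWall") or True,
    execute=executed.append,
)
print(executed)
```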

One way to generate plans in progression order is via hierarchical planning. This is the approach adopted here. It’s a surprisingly straightforward matter to extend the foregoing logical treatment of partial order planning to planning via hierarchical decomposition. The representation of compound actions and events in the event calculus is very natural, and is best illustrated by example. The following formulae axiomatise a robot mail delivery domain that is an extension of the example from Section 2.

5.1 An Example of Hierarchical Planning

First we formalise the effects of the primitive actions. The term Pickup(p) denotes the action of picking up package p; the term PutDown(p) denotes the action of putting down package p. The fluent Got(p) holds if the robot is carrying the package p, and the fluent In(p,r) holds if package p is in room r.

Initiates(Pickup(p),Got(p),t) ←   (R5.1)
    p ≠ Robot ∧ HoldsAt(InRoom(r),t) ∧
    HoldsAt(In(p,r),t)

Releases(Pickup(p),In(p,r),t) ←   (R5.2)
    p ≠ Robot ∧ HoldsAt(InRoom(r),t) ∧
    HoldsAt(In(p,r),t)

Initiates(PutDown(p),In(p,r),t) ←   (R5.3)
    p ≠ Robot ∧ HoldsAt(Got(p),t) ∧
    HoldsAt(InRoom(r),t)

Formulae (R2.1) and (R2.2), which deal with the GoThrough action and the InRoom fluent, are inherited from Section 2. Next we have our first example of a compound action definition. Compound actions have duration, while primitive actions will usually be represented as instantaneous. The term ShiftPack(p,r1,r2) denotes the action of retrieving package p from room r1 and delivering it to room r2. It comprises a RetrievePack action and a DeliverPack action, both of which are themselves compound actions. The term RetrievePack(p,r) denotes the action of retrieving package p from room r, and the term DeliverPack(p,r1,r2) denotes the action of delivering package p from room r1 to room r2.

Happens(ShiftPack(p,r1,r2),t1,t4) ←   (H5.1)
    Happens(RetrievePack(p,r1),t1,t2) ∧ t2 < t3 ∧
    Happens(DeliverPack(p,r1,r2),t3,t4) ∧
    ¬ Clipped(t2,Got(p),t3)

Initiates(ShiftPack(p,r1,r2),In(p,r2),t) ←   (R5.4)
    HoldsAt(In(p,r1),t)

Note the need for the final ¬ Clipped condition to ensure the correctness of the definition in (H5.1). The RetrievePack and DeliverPack actions are defined in terms of Pickup, PutDown and GoToRoom actions. The GoToRoom action is another compound action.

Happens(RetrievePack(p,r1),t1,t3) ←   (H5.2)
    HoldsAt(InRoom(r2),t1) ∧
    Happens(GoToRoom(r2,r1),t1,t2) ∧
    Happens(Pickup(p),t3) ∧ t2 < t3 ∧
    ¬ Clipped(t2,InRoom(r1),t3)

Initiates(RetrievePack(p,r),Got(p),t) ←   (R5.5)
    HoldsAt(In(p,r),t)

Happens(DeliverPack(p,r1,r2),t1,t3) ←   (H5.3)
    Happens(GoToRoom(r1,r2),t1,t2) ∧ t2 < t3 ∧
    Happens(PutDown(p),t3) ∧
    ¬ Clipped(t2,InRoom(r2),t3)

Initiates(DeliverPack(p,r1,r2),In(p,r2),t) ←   (R5.6)
    HoldsAt(InRoom(r1),t) ∧ HoldsAt(Got(p),t)


The effects of compound actions should follow from the effects of their sub-actions, as can be verified in this case by inspection. Next we have the definition of the GoToRoom action, where the term GoToRoom(r1,r2) denotes the action of going from room r1 to room r2.

Happens(GoToRoom(r,r),t,t)   (H5.4)

Happens(GoToRoom(r1,r3),t1,t3) ←   (H5.5)
    Connects(d,r1,r2) ∧ Happens(GoThrough(d),t1) ∧
    Happens(GoToRoom(r2,r3),t2,t3) ∧
    t1 < t2 ∧ ¬ Clipped(t1,InRoom(r2),t2)

Initiates(GoToRoom(r1,r2),InRoom(r2),t) ←   (R5.7)
    HoldsAt(InRoom(r1),t)

In effect, when implemented on a meta-interpreter such as that of Section 3, these clauses carry out a forward search, in contrast to the backward search effected by the clauses in Section 2. A more sophisticated set of clauses would incorporate a heuristic to give more direction to the search.

Formulae (H5.4) and (H5.5) illustrate both conditional decomposition and recursive decomposition: a compound action can decompose into different sequences of sub-actions depending on what conditions hold, and a compound action can be decomposed into a sequence of sub-actions that includes a compound action of the same type as itself. A consequence of this is that the event calculus with compound actions is formally as powerful as any programming language. In this respect, it can be used in the same way as GOLOG [Levesque, et al., 1997]. Note, however, that we can freely mix direct programming with planning from first principles.
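The flavour of this forward, recursive decomposition can be conveyed with a small sketch. The Python fragment below is not the meta-interpreter itself: the Connects facts are an assumed door layout, chosen to be consistent with the plan (P5.1) to (P5.10), and simple backtracking stands in for resolution.

```python
# Sketch of the recursion in (H5.4)/(H5.5): GoToRoom(r,r) decomposes to
# nothing, and GoToRoom(r1,r3) to GoThrough(d), for a door d connecting
# r1 to some r2, followed by GoToRoom(r2,r3).
# The Connects facts below are assumptions for illustration.

CONNECTS = [("D4", "R3", "R4"), ("D6", "R4", "R6"),
            ("D2", "R3", "R2"), ("D3", "R2", "R4")]

def go_to_room(r1, r3, connects, visited=()):
    """Return a list of GoThrough actions taking the robot from r1 to r3."""
    if r1 == r3:
        return []                              # (H5.4): already there
    for d, a, b in connects:
        for here, there in ((a, b), (b, a)):   # doors connect both ways
            if here == r1 and there not in visited:
                rest = go_to_room(there, r3, connects, visited + (r1,))
                if rest is not None:
                    return [("GoThrough", d)] + rest   # (H5.5)
    return None                                # dead end; backtrack

print(go_to_room("R3", "R6", CONNECTS))
```

With the D4 entry deleted, the same search finds the route through D2, D3 and D6 instead, which is the replanning situation of Section 5.3.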

We need the following uniqueness-of-names axioms.

UNA[Pickup, PutDown, GoThrough,   (U5.1)
    ShiftPack, GoToRoom]

UNA[Got, In, InRoom]   (U5.2)

The definition of the planning task from Section 2 is unaffected by the inclusion of compound events. However, it’s convenient to distinguish fully decomposed plans, comprising only primitive actions, from those that include compound actions.

The following formulae represent the initial situation depicted in Figure 1.

Initially(InRoom(R3))   (N5.1)
Initially(In(P1,R6))   (N5.2)

The following HoldsAt formula represents the goal state.

HoldsAt(In(P1,R4),T)   (G5.1)

The following narrative of actions obviously constitutes a plan.

Happens(GoThrough(D4),T1)   (P5.1)
Happens(GoThrough(D6),T2)   (P5.2)
Happens(Pickup(P1),T3)   (P5.3)
Happens(GoThrough(D6),T4)   (P5.4)
Happens(PutDown(P1),T5)   (P5.5)
T1 < T2   (P5.6)
T2 < T3   (P5.7)
T3 < T4   (P5.8)
T4 < T5   (P5.9)
T5 < T   (P5.10)

Let
• ΣP be the conjunction of (R2.1), (R2.2) and (R5.1) to (R5.3) (the effects of the primitive actions),
• ΣC be the conjunction of (R5.4) to (R5.7) (the effects of the compound actions),
• ΔC be the conjunction of (H5.1) to (H5.5),
• Δ0 be the conjunction of (N5.1) and (N5.2),
• ΔP be the conjunction of (P5.1) to (P5.10),
• Φ be the conjunction of (W2.1) to (W2.7) of Section 2,
• Ω be the conjunction of (U5.1) and (U5.2),
• Γ be formula (G5.1).

Now we have, for example, the following proposition.

Proposition 5.1.

CIRC[ΣP ∧ ΣC ; Initiates, Terminates, Releases] ∧
CIRC[Δ0 ∧ ΔP ∧ ΔC ; Happens] ∧ EC ∧ Ω ∧ Φ ⊨
    Happens(ShiftPack(P1,R6,R4),T1,T5). □

We also have the following.

Proposition 5.2.

CIRC[ΣP ∧ ΣC ; Initiates, Terminates, Releases] ∧
CIRC[Δ0 ∧ ΔP ∧ ΔC ; Happens] ∧
EC ∧ Ω ∧ Φ ⊨ Γ. □

So ΔP constitutes a plan. Furthermore, we have the following.

Proposition 5.3.

CIRC[ΣP ; Initiates, Terminates, Releases] ∧
CIRC[Δ0 ∧ ΔP ; Happens] ∧ EC ∧ Ω ∧ Φ ⊨ Γ. □

So ΔP also constitutes a plan in the context of only the primitive actions.

5.2 Suspending the Planning Process

Propositions 5.2 and 5.3 show that, in this example as in many, the introduction of compound events has added little logically. The same set of HoldsAt formulae follows from any given narrative of primitive actions, with or without the compound event definitions. As we’ll see shortly, the benefit of introducing compound events is only apparent when we look at implementation issues in the context of the sense-plan-act cycle.

First, note that an abductive meta-interpreter can be built using the principles outlined in Section 3 which, when presented with compound event definitions of the above form, automatically performs hierarchical decomposition. Whenever a happens goal is reached for a compound action, that goal is resolved against the clause defining the compound action, yielding further happens sub-goals, and this process continues until primitive actions are reached, which are added to the residue. The current


implementation incorporates an iterative deepening search strategy, which avoids looping problems that arise with certain examples of compound actions.

In general we will rarely require our planner to find fully decomposed plans, and we need to be able to suspend the planning process before a fully decomposed plan has been found and still have a useful result in the form of a partially decomposed plan. (The suspension of planning can be achieved in a logic programming implementation with a resource-bounded meta-interpreter such as that described by Kowalski [1995]. The same techniques can be used in a resource-bounded implementation of abductive sensor data assimilation.)

Let’s see how this works with the above example. The meta-interpreter reduces the initial goal list

    [holds_at(in(p1,r4),t)]

to

    [happens(go_to_room(r3,r6),t1,t2), happens(pickup(p1),t3),
     t2 < t3, t3 < t4, happens(deliver_pack(p1,r6,r4),t4,t)]

via the shift_pack compound event. Then the go_to_room action is decomposed, yielding

    [happens(go_through(d4),t1), happens(go_through(d6),t2),
     t1 < t2, happens(pickup(p1),t3), t2 < t3, t3 < t4,
     happens(deliver_pack(p1,r6,r4),t4,t)]

At this point, although a fully decomposed plan hasn’t been produced, the computation has yielded an action, namely go_through(d4), that can be immediately executed, since the following criteria have been met.

• The goal list comprises only happens and temporal ordering goals.

• There is a primitive action in the goal list which is temporally before all other actions.
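These criteria are easy to state operationally. In the following Python sketch the goal-list encoding is an assumption for illustration; precedence is checked by taking the transitive closure of the recorded ordering goals.

```python
# Sketch of the two criteria for executing an action from a partially
# decomposed plan: assuming the goal list already contains only happens
# and temporal ordering goals, find a primitive action whose time is
# ordered before all other actions. The encoding is illustrative only.

PRIMITIVE = {"go_through", "pickup", "put_down", "follow_wall"}

def precedes_all(t, times, orderings):
    """Transitive-closure check that t is ordered before every other time."""
    reach = {t}
    changed = True
    while changed:
        changed = False
        for a, b in orderings:
            if a in reach and b not in reach:
                reach.add(b)
                changed = True
    return all(u in reach for u in times if u != t)

def first_executable(goals, orderings):
    """goals: ("happens", action, time) tuples; orderings: (before, after)."""
    times = [t for _, _, t in goals]
    for _, action, t in goals:
        if action in PRIMITIVE and precedes_all(t, times, orderings):
            return action
    return None

goals = [("happens", "go_through", "t1"), ("happens", "go_through", "t2"),
         ("happens", "pickup", "t3"), ("happens", "deliver_pack", "t4")]
orderings = {("t1", "t2"), ("t2", "t3"), ("t3", "t4")}
print(first_executable(goals, orderings))
```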

In the following section, we’ll see how action execution is integrated with planning and sensor data assimilation, how events abduced from incoming sensor data can conflict with a plan, and how this precipitates replanning.

5.3 The Sense-Plan-Act Cycle

The motivation for suspending the planning process and acting on its partial results is the need for reactivity. If the hierarchy of compound actions is appropriately defined, the robot will be able to quickly generate an executable action in circumstances of imminent threat or opportunity. Where no high-level compound action is defined to achieve a given goal, the robot has to resort to search to find a plan. So it’s up to the robot’s builder to ensure a judicious mix of direct programming (via compound actions) and search.

In order to detect imminent threat or opportunity, planning is embedded in a tight feedback loop, the sense-plan-act cycle. Within this cycle, abductive planning and abductive sensor data assimilation behave like co-routines. But it’s not just the need to respond to threats and opportunities that gives rise to the need for this feedback loop. Often in the course of executing a plan, it becomes apparent that certain assumptions on which the plan was based are false.

We can see this with a mail delivery example that combines the examples of Sections 4.4 and 5.1. As in Section 5.1’s example of hierarchical planning, let’s suppose the robot is initially in room R3, and wants to move package P1 from room R6 to room R4 (see Figure 1).

Using hierarchical planning as described above, the robot starts to form the obvious plan, represented by formulae (P5.1) to (P5.10). To flesh out this example fully would require Happens formulae defining GoThrough as a compound action comprising a number of FollowWall, GoStraight and Turn actions. Let’s take these formulae for granted, and suppose the robot decomposes the GoThrough(D4) action into the appropriate sequence of sub-actions, beginning with FollowWall.

As soon as it comes up with this FollowWall action, the robot can start executing the plan. It sets out along Wall(C18,C19), then turns corner C19 and proceeds along Wall(C19,C20), all the while continuing to fill out the plan by decomposing remaining compound actions, and all the while continuing to process incoming sensor data.

Now, as in Section 4.4’s example of sensor data assimilation, suppose someone closes door D4 before the robot reaches the doorway. With the doorway now invisible to its sensors, the robot carries on until it reaches corner C22. Until it reaches corner C22, the processing of incoming sensor data has been a trivial task, as none of it has required an explanation. But the only way to explain the GoesHigh(Front) event that occurs when corner C22 is encountered is with a CloseDoor action, as in formula (E4.1).

This leads us to the question of how conflicts with plans are detected, and how they’re overcome. Let’s reconsider the internal mechanics of the planning process when implemented via an abductive meta-interpreter, as described in Section 3. Recall that, as actions are added to the residue (represented as a set of happens and before formulae), they must be checked against negated clipped goals that have been previously proved and recorded. Now, as exemplified by formula (E4.1), the ultimate product of the sensor data assimilation process, when similarly implemented, is also a set of happens and before formulae. So checking whether incoming sensor data indicates a conflict with the current plan is the same as re-proving negated clipped goals when new actions are added to the residue during planning itself, and the same mechanism can be re-used.

So, to return to the example, the robot reaches corner C22 at, say, time T′, and detects a conflict between the CloseDoor action it has abduced to explain its unexpected encounter with corner C22 and the formula ¬ Clipped(0,DoorOpen(D4),T′) on which the success of the plan depends.
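The re-use of this mechanism can be sketched concretely. In the Python fragment below, the interval encoding of the negated clipped records, the Terminates table, and the numeric time points are all illustrative assumptions.

```python
# Sketch: detect a conflict between abduced events and the negated
# clipped conditions recorded during planning. A recorded condition
# "not Clipped(t1, f, t2)" fails if some abduced event terminates
# fluent f at a time strictly inside the interval (t1, t2).
# The TERMINATES table and numeric times are illustrative assumptions.

TERMINATES = {("CloseDoor", "D4"): ("DoorOpen", "D4")}

def conflicts(protected, abduced):
    """protected: [(t1, fluent, t2)] negated-clipped records from planning.
    abduced: [(action, time)] events produced by sensor data assimilation."""
    found = []
    for action, t in abduced:
        f = TERMINATES.get(action)
        for t1, fluent, t2 in protected:
            if f == fluent and t1 < t < t2:
                found.append((action, fluent))
    return found

# The plan protects DoorOpen(D4) over (0, T'); the abduced CloseDoor
# event falls inside that interval, so replanning is triggered.
protected = [(0, ("DoorOpen", "D4"), 10)]
abduced = [(("CloseDoor", "D4"), 4)]
print(conflicts(protected, abduced))
```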

The robot’s designer can handle this sort of situation in two ways. The subtler option is to incorporate some sort of plan repair mechanism, which with the least possible work


transforms the old failed plan into a new working one. But the present implementation opts for the simpler method of replanning from scratch whenever such a conflict is detected.

The rationale for this is as follows. The assumption behind the adoption of hierarchical planning was that compound action definitions can be written in such a way as to ensure that the planning process generates a first executable action quickly, whenever immediate action is required. If we accept this assumption, there is no need for a clever plan repair mechanism. Replanning from scratch is viable.

In the present example, the robot responds to the detected conflict by forming a new plan. The obvious plan takes it through door D2 into room R2, then through door D3 into room R4, then through door D6 into room R6 where it can pick up the package.

Concluding Remarks

It’s important to see how different the robot described here is from the image of Shakey caricatured by Brooks and others, in which the robot sits motionless constructing a plan for minutes on end, then doggedly executes that plan, clinging to it come what may. In the present work, reactivity is achieved through the use of compound events and hierarchical planning, which permit the use of direct programming as well as planning, and also enable the robot to quickly generate an immediately executable action when it needs to. But at the same time, the logical integrity of the design has been preserved.

There are obviously many important areas for further investigation. Two particularly significant topics are maintenance goals, that is to say fluents which have to be kept true at all times, and so-called knowledge producing actions. In the present account, although any action can in effect be knowledge producing, as it can lead to the acquisition of new sensor data which in turn leads to an increase in the robot’s knowledge, there is no way to plan to increase knowledge, since there is no representation of an action’s knowledge increasing effects.

References

[Brooks, 1991] R.A. Brooks, Intelligence Without Reason, Proceedings IJCAI 91, pp. 569-595.

[Eshghi, 1988] K. Eshghi, Abductive Planning with Event Calculus, Proceedings of the Fifth International Conference on Logic Programming (1988), pp. 562-579.

[Green, 1969] C. Green, Applications of Theorem Proving to Problem Solving, Proceedings IJCAI 69, pp. 219-240.

[Kowalski, 1995] R.A. Kowalski, Using Meta-Logic to Reconcile Reactive with Rational Agents, in Meta-Logics and Logic Programming, ed. K.R. Apt and F. Turini, MIT Press (1995), pp. 227-242.

[Lespérance, et al., 1994] Y. Lespérance, H.J. Levesque, F. Lin, D. Marcu, R. Reiter, and R.B. Scherl, A Logical Approach to High-Level Robot Programming: A Progress Report, in Control of the Physical World by Intelligent Systems: Papers from the 1994 AAAI Fall Symposium, ed. B. Kuipers, New Orleans (1994), pp. 79-85.

[Levesque, et al., 1997] H. Levesque, R. Reiter, Y. Lespérance, F. Lin and R.B. Scherl, GOLOG: A Logic Programming Language for Dynamic Domains, The Journal of Logic Programming, vol. 31 (1997), pp. 59-83.

[Nilsson, 1984] N.J. Nilsson, ed., Shakey the Robot, SRI Technical Note no. 323 (1984), SRI, Menlo Park, California.

[Penberthy & Weld, 1992] J.S. Penberthy and D.S. Weld, UCPOP: A Sound, Complete, Partial Order Planner for ADL, Proceedings KR 92, pp. 103-114.

[Shanahan, 1997a] M.P. Shanahan, Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia, MIT Press (1997).

[Shanahan, 1997b] M.P. Shanahan, Noise, Non-Determinism and Spatial Uncertainty, Proceedings AAAI 97, pp. 153-158.

[Shanahan, 1997c] M.P. Shanahan, Event Calculus Planning Revisited, Proceedings 4th European Conference on Planning (ECP 97), Springer Lecture Notes in Artificial Intelligence no. 1348 (1997), pp. 390-402.
