
    APPROXIMATE OBJECTS AND APPROXIMATE THEORIES

John McCarthy
Computer Science Department
Stanford University
Stanford, CA 94305

    [email protected]

    http://www-formal.stanford.edu/jmc/

    Abstract

We propose to extend the ontology of logical AI to include approximate objects, approximate predicates and approximate theories. Besides the ontology we treat the relations among different approximate theories of the same phenomena.

Approximate predicates can't have complete if-and-only-if definitions and usually don't even have definite extensions. Some approximate concepts can be refined by learning more and some by defining more and some by both, but it isn't possible in general to make them well-defined. Approximate concepts are essential for representing common sense knowledge and doing common sense reasoning. Assertions involving approximate concepts can be represented in mathematical logic.

A sentence involving an approximate concept may have a definite truth value even if the concept is ill-defined. It is definite that Mount Everest was climbed in 1953 even though exactly what rock and ice is included in that mountain is ill-defined. Likewise, it harms a mosquito to be swatted, although we haven't a sharp notion of what it means to harm a mosquito. Whatif(p, x), which denotes what x would be like if p were true, is an important kind of approximate object.

The article treats successively approximate objects, approximate theories, and formalisms for describing how one object or theory approximates another.

Our discussion will be adequate if it has as much clearness as the subject matter admits of, for precision is not to be sought for alike in all discussions, any more than in all the products of the crafts.

Aristotle, Nicomachean Ethics

    1 Introduction

We propose to extend the ontology of logical AI to include approximate objects, approximate predicates and approximate theories. Besides the ontology we discuss relations among different approximations to the same or similar phenomena.

The article will be as precise as we can make it. We apply Aristotle's remark to the approximate theories themselves. The article treats three topics.

Approximate objects These are not fully defined, e.g. the wishes of the United States. They may be approximations to more fully defined objects or they may be intrinsically approximate (partial). We give lots of examples, because we don't have precise definitions.

Approximate theories These often involve necessary conditions and sufficient conditions but lack conditions that are both necessary and sufficient.

Relations among approximate entities It is often necessary to relate approximate entities, e.g. objects or theories, to less approximate entities, e.g. to relate a theory in which one block is just on another to a theory in which a block may be in various positions on another.

In principle, AI theories, e.g. the original proposals for situation calculus, have allowed for rich entities which could not be fully defined. However, almost all theories used in existing AI research have not taken advantage of this generality. Logical AI theories have resembled formal scientific theories in treating well-defined objects in well-defined domains. Human-level AI will require reasoning about approximate entities.

Approximate predicates can't have complete if-and-only-if definitions and usually don't even have definite extensions. Moreover, approximate entities often don't have equality conditions. Some approximate concepts can be refined by learning more and some by defining more and some by both, but it isn't possible in general to make them well-defined. Approximate concepts are essential for representing common sense knowledge and doing common sense reasoning. In this article, assertions involving approximate concepts are represented in mathematical logic.

A sentence involving an approximate concept may have a definite truth value even if the concept is ill-defined. It is definite that Mount Everest was climbed in 1953 even though exactly what rock and ice is included in that mountain is ill-defined. We discuss the extent to which we can build solid intellectual structures on such swampy conceptual foundations.

Quantitative approximation is one kind considered but not the most interesting or the kind that requires logical innovation. Fuzzy logic involves a semi-quantitative approximation, although there are extensions as mentioned in [Zad99].

For AI purposes, the key problem is relating different approximate theories of the same domain. For this we use mathematical logic fortified with contexts as objects. Further innovations in logic may be required to treat approximate concepts as flexibly in logic as people do in thought and language.

Looked at in sufficient detail, all concepts are approximate, but some are precise enough for a given purpose. McCarthy's weight measured by a scale is precise enough for medical advice, and can be regarded as exact in a theory of medical advice. On the other hand, McCarthy's purposes are approximate enough so that almost any discussion of them is likely to bump against their imprecision and ambiguity.

Many concepts used in common sense reasoning are imprecise. Here are some questions and issues that arise.

    1. What rocks and ice constitute Mount Everest?

2. When can it be said that one block is on another block, so that On(Block1, Block2) may be asserted?

Let there be an axiomatic theory in situation calculus in which it can be shown that a sequence of actions will have a certain result. Now suppose that a physical robot is to observe that one block is or is not on another and determine the actions to achieve a goal using situation calculus. It is important that the meanings of On(Block1, Block2) used in solving the problem theoretically and that used by the robot correspond well enough so that carrying out a plan physically has the desired effect. How well must they correspond?

3. What are the logical relations between different logical specifications of an approximate object?

4. What are the relations between different approximate logical theories of a domain?

    We claim

1. The common sense informatic situation often involves concepts which cannot be made precise. This is a question of the information available and not about computation power. It is not a specifically human limitation and will apply to computers of any possible power. This is not a claim about physics; it may be that a discoverable set of physical laws will account for all phenomena. It is rather a question of the information actually available about particular situations by people or robots with limited opportunities to observe and compute.

2. Much pedantry in science and in philosophy results from demanding if-and-only-if definitions when this is inappropriate.

3. None of the above objects and predicates admits a completely precise if-and-only-if definition.

4. Formalized scientific theories, e.g. celestial mechanics and the blocks world, often have precise concepts. They are necessary tools for understanding and computation. However, they are imbedded in common sense knowledge about how to apply them in a world of imprecise concepts. Moreover, the concepts are precise only within the theory. Most AI theories have also had this character.

5. Almost all concepts are approximate in the full common sense informatic situation. However, many are precise, i.e. have if-and-only-if definitions, in particular contexts. Thus in the context of a particular grocery, an object is a lemon if and only if it is a small yellow fruit.


6. The human reasoning processes involving common sense knowledge correspond only partly to the mathematical and logical reasoning processes that humans have used so successfully within scientific and mathematical theories.

7. Nevertheless, mathematical logic is an appropriate tool for representing this knowledge and carrying out this reasoning in intelligent machines. The use of logic will certainly require adaptations, and the logic itself may require modifications.

8. Key tools for common sense reasoning by machines will be approximate concepts and approximate theories. These are in addition to formalized non-monotonic reasoning and formal theories of context.

9. The most important notion of approximation for common sense reasoning is not the familiar numerical approximation but a new kind of logical approximation appropriate for common sense reasoning. These mostly differ from the approximations of fuzzy logic.

In the subsequent sections of this article, tools will be proposed for reasoning with approximate concepts.

The article treats successively approximate objects, approximate theories, and formalisms for describing how one object or theory approximates another.

2 What kinds of approximate concepts are there?

    Concepts can be approximate in at least two ways.

On one hand, a concept may approximate another more definite but incompletely known concept. This situation is prevalent with natural kinds. Lemons are a definite species, but no-one knows all about them. In particular, a child may be barely able to tell lemons from other yellow fruit but nevertheless has a concept of lemon that may be adequate to find the lemons in the supermarket. The child can improve its concept of lemon by learning more. The fact that there isn't a continuum of fruits ranging from lemons to grapefruit is an important part of the fact that lemons form a natural kind. This fact also makes it possible for biologists to learn specific facts about lemons, e.g. to sequence lemon DNA.

On the other hand, the legal concept of a person's taxable income is refined by defining more. Taxable income is partly a natural kind. A person's concept of his own taxable income is an approximation to the less approximate legal concept. He could learn more or it could be defined more as the legal concept is defined more. However, learning more about the legal concept eventually reaches a point where there is no further refinement on which people thinking independently will agree. There isn't a true notion of taxable income for economists to discover.

My concept of my taxable income and even my tax accountant's concept of my taxable income has both aspects. More can be learned about what deductions are allowed, and also the concept gets refined by the courts.

    Here are some examples.

    2.1 What if?

Whatif(p, x) is what x would be if p were true. Examples: (1) John McCarthy if he had gone to Harvard rather than to Caltech as an undergraduate. (2) My car if it hadn't been backed into today. (3) The cake I would have baked if I had known you were coming. (4) What the world would be like today if Pickett's charge had been successful. (5) What would have happened if another car had come over the hill when you passed that Mercedes just now.

Whatif(p, x) is an intrinsically approximate concept. How approximate depends on p and x. "What if another car had come over the hill when you passed" is much less approximate than "What if wishes were horses".

Whatif can serve as the foundation for other concepts, including counterfactual conditional sentences and statements of causality. [CM99] treats useful counterfactual conditional sentences and gives many examples.

Fiction provides an interesting class of approximate objects and theories, especially historical fiction, in which the author tries to fit his characters and their lives into a background of historical fact. Common sense knowledge tells us that Sherlock Holmes would have had a mother, but Conan Doyle does not provide us with a name. A definite address is given, but there was never a house there corresponding to Doyle's description. The author need only define his world to a point that lets the reader answer the questions the author wants the reader to ask.


    2.2 Mount Everest.

It is clear that we cannot hope to formulate a useful definition of Mount Everest that would tell about every rock whether it is part of the mountain. However, we might suppose that there is a truth of the matter for every rock even though we cannot know it. Our axioms would then be weaker than the truth. The question would be settled for some rocks and not for others.

Not even this is appropriate. The concept of the territory of Mount Everest may be further refined in the future, and refined in incompatible ways by different people. If we suppose that there is a truth about what rocks are part of the mountain, then the people refining it in different ways would get to argue fruitlessly about which definition is getting closer to the truth. On the other hand, there is a truth of the matter, which may someday be discovered, about whether Mallory and Irvine reached the summit in 1924.

Consider two theories of mountain climbing, T1 and T2. Besides these theories, there is T3 based on plate tectonics that tells us that Everest is still getting higher.

In the simpler theory T1, there is a list of names of mountains paired with lists of climbing expeditions or names of climbers. As a logical theory it would have sentences like

Climbed(Everest, 1953, {Hillary, Tenzing}).  (1)

The larger theory T2 contains the routes up the mountain of the various parties. Routes are approximate entities.

T1 is an approximation to T2, but T1 may be regarded as not approximate at all. In particular, it can be complete, i.e. it decides any sentence in its limited language.

T1 and T2 may be related using formalized contexts as in [McC93a] or [MB97], but we won't do that here.

One approximate theory may be less approximate than another. We want to discuss relations between sentences in a theory and sentences in a less approximate theory. It makes the ideas neater if we imagine that there are true and complete theories of the world even if they're not finitely expressible and none of the language of any of them is known. This permits regarding approximating the world as a case of one theory approximating another. If this is too platonic for your taste, you can regard approximating the world as a different notion than that of one theory approximating another.

There are other approximate theories involving Mount Everest.

One such theory lists names of mountains and the continents containing them. Thus we have In(Annapurna, Asia). A less approximate theory gives countries, e.g. In(Annapurna, Nepal). A still less approximate theory gives locations, e.g. Location(Annapurna) = (28°32′, 83°53′).
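The chain of successively less approximate location theories can be related by translation maps between them. Here is a toy sketch; the data structures and the country-to-continent map are illustrative assumptions, not from the paper:

```python
# Three location theories at increasing precision, plus a check that
# the finer theory refines the coarser one.  All data is illustrative.
continent = {'Annapurna': 'Asia'}                       # most approximate theory
country = {'Annapurna': 'Nepal'}                        # less approximate
location = {'Annapurna': (28 + 32 / 60, 83 + 53 / 60)}  # degrees N, E

country_to_continent = {'Nepal': 'Asia'}                # translation between theories

# Refinement check: translating the finer theory's answers
# reproduces the coarser theory's answers.
consistent = all(country_to_continent[country[m]] == continent[m]
                 for m in country)
print(consistent)  # → True
```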

2.3 The wants and actions of the United States

In 1990 the United States wanted Iraq to withdraw from Kuwait. Evidence for this proposition was provided by the statements of U.S. officials and the sending of troops to Saudi Arabia. It was correctly inferred from this proposition that the U.S. would do something to implement its desires. This inference was made with only an approximate notion of the US's wants.

    Nevertheless, the facts can be expressed by formulaslike the following.

Says(President(USA), x) ⊃ Says(USA, x),  (2)

Says(President(USA), Wants(USA, Leaves(Iraq, Kuwait))),  (3)

Says(USA, Wants(USA, Leaves(Iraq, Kuwait))),  (4)

Says(entity, Wants(entity, x)) ⊃ wants(entity, x),  (5)

wants(USA, Leaves(Iraq, Kuwait)),  (6)

wants(x, y) ⊃ (∃z)(Does(x, z) ∧ Achieves(z, y)).  (7)

From these we infer

(∃z)(Does(USA, z) ∧ Achieves(z, Leaves(Iraq, Kuwait))).  (8)

We have not introduced all the necessary qualifications, and we have not used a proper theory of actions. There also should be some more theory of Wants, Says, and Does.
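The step from formulas (2)-(7) to (8) is a short forward chain. A minimal propositional sketch, with the formulas flattened to strings and the rules ground out by hand (the encoding is illustrative, not a real inference engine):

```python
# Forward chaining to a fixed point over ground instances of (2)-(7).
facts = {'Says(President(USA), Wants(USA, Leaves(Iraq, Kuwait)))'}  # (3)

rules = [
    # (2): what the President says, the USA says
    ('Says(President(USA), Wants(USA, Leaves(Iraq, Kuwait)))',
     'Says(USA, Wants(USA, Leaves(Iraq, Kuwait)))'),
    # (5): an entity that says it wants x, wants x
    ('Says(USA, Wants(USA, Leaves(Iraq, Kuwait)))',
     'wants(USA, Leaves(Iraq, Kuwait))'),
    # (7): what an entity wants, it does something to achieve
    ('wants(USA, Leaves(Iraq, Kuwait))',
     'Exists z. Does(USA, z) and Achieves(z, Leaves(Iraq, Kuwait))'),
]

changed = True
while changed:                       # apply rules until nothing new is added
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

goal = 'Exists z. Does(USA, z) and Achieves(z, Leaves(Iraq, Kuwait))'
print(goal in facts)  # → True, the analogue of (8)
```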


Someone with a sufficiently detailed knowledge of events in the Middle East and of the American decision making community might not need "The US wants . . . ", because he could work directly with the various decision makers and the motivations and effects of their actions. The rest of us must make do with more approximate concepts. This will apply even more to future students of 20th century history.

A fuzzy logic theory would take "The US wants" for granted and concentrate on "The US moderately wants" and "The US strongly wants".

2.4 The Blocks World as an Approximate Theory

The usual AI situation calculus blocks world has a propositional fluent On(x, y) asserting that block x is on block y. We can assert Holds(On(x, y), s) about some situation s and have the action Move(x, y) that moves block x on top of block y.

Suppose this formalism is being used by a robot acting in the real world. The concepts denoted by On(x, y), etc. are then approximate concepts, and the theory is an approximate theory. Our goal is to relate this approximate theory to the real world. Similar considerations would apply if we were relating it to a more comprehensive but still approximate theory.

We use formalized contexts as in [McC93b] and [MB97], and let Cblocks be a blocks world context with a language allowing On(x, y), etc.

Holds(On(x, y), s) is approximate in at least the following respects.

In the original intended interpretation of the situation calculus, a situation s is a snapshot of the world at a given time. According to the theory of relativity, distant simultaneity is ill-defined. This is the least of our worries.

Whether block x is on block y may be ambiguous in the real world. Block x may be partly on and partly off. We can handle the relation by sentences

Cond1(s) ⊃ Ist(Cblocks, Holds(On(x, y), s))
Cond2(s) ⊃ Ist(Cblocks, Holds(Not On(x, y), s)).
(9)

Cond1(s) and Cond2(s) are respectively conditions in the outer context on the situations that x shall be on y and that x shall not be on y in the context Cblocks. These need not be the negations of each other, so it can happen that it isn't justified to say either that x is on y or that it isn't. Cond1(s) and Cond2(s) need not be mutually exclusive. In that case the theory associated with Cblocks would be inconsistent. However, unless there are strong lifting rules, the inconsistency within Cblocks cannot infect the rest of the reasoning.

Notice that the theory in the context Cblocks approximates a theory in which blocks can be in different orientations on each other or in the air or on the table, in a quite different sense than numerical approximation.

    2.5 Relating two blocks world theories

Our previous blocks world theory T1 uses Holds(On1(b1, b2), s). Our less approximate new theory T2 uses Holds(On2(b1, b2, d), σ), where d is a displacement of b1 from being centered on b2. Since T2 has another parameter for On, many situations σ can correspond to a single situation s in T1.

    We may have the relations

Holds(On2(b1, b2, d), σ) ⊃ Holds(On1(b1, b2), St1(σ))
Holds(On1(b1, b2), St1(σ)) ⊃ (∃d)Holds(On2(b1, b2, d), σ).
(10)

Here St1(σ) is the T1-situation corresponding to σ. For simplicity we are assuming that every T2-situation has a corresponding T1-situation.

T2 is a tiny step from T1 in the direction of the real world.

Suppose a robot uses T1 as a theory of the blocks world and takes actions accordingly, but the real world corresponds to T2. This is quite a simplification, but maybe it has enough of the right formal properties.

The simplest case is where there are two blocks, and the initial situation is represented by

{Holds(On1(B1, Table), S0), Holds(On1(B2, Table), S0)}  (11)

    in T1, but the real world facts are

{Holds(On2(B1, Table, (3.0, 2.1) cm), σ0),
Holds(On2(B2, Table, (4.5, 3.2) cm), σ0)}
(12)

where
S0 = St1(σ0).  (13)

The goal is Holds(On1(B1, B2), s), which might also be written as (∃s)Holds(On1(B1, B2), s). Anyway the robot infers that the appropriate action is Move1(B1, B2) and infers that

Holds(On1(B1, B2), Result1(Move1(B1, B2), S0)),  (14)

    where we are omitting various qualifications.

In T2, the form of an action is Move2(b1, b2, d), and the effect of a move action is given by

Holds(On2(b1, b2, d), Result2(Move2(b1, b2, d), σ)).  (15)

The translation of a move action in T1 to a move action in T2 may be given by

Action12(Move1(b1, b2), St1(σ)) = Move2(b1, b2, Displacement(b1, b2, σ)).  (16)

The key point is that the move in T2 corresponding to a move in T1 depends on the blocks being moved and also on the situation.

The success of the one step plan worked out in T1 in the less approximate world T2 is expressed by

St1(Result2(Action12(Move1(B1, B2), St1(σ0)), σ0)) = Result1(Move1(B1, B2), St1(σ0)).  (17)

The success of multi-step plans would be expressed by longer correspondence formulas.

    These are commutativity relations.
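The one-step commutativity relation (17) can be exercised on a toy model. Everything below is an illustrative assumption: situations are dictionaries mapping blocks to supports (with a displacement in T2), and the translated action simply picks displacement zero.

```python
# T1 situations: block -> support.  T2 situations: block -> (support, d).

def st1(sigma):
    """St1: abstract a T2 situation to a T1 situation by forgetting d."""
    return {b: support for b, (support, d) in sigma.items()}

def result1(move, s):
    """T1 effect of Move1(b1, b2): b1 ends up on b2."""
    b1, b2 = move
    return {**s, b1: b2}

def result2(move, sigma):
    """T2 effect of Move2(b1, b2, d): b1 ends up on b2, displaced by d."""
    b1, b2, d = move
    return {**sigma, b1: (b2, d)}

def action12(move, sigma):
    """Translate a T1 move into a T2 move; the displacement chosen
    may depend on the situation (here it is simply 0.0)."""
    b1, b2 = move
    return (b1, b2, 0.0)

sigma0 = {'B1': ('Table', 2.1), 'B2': ('Table', 3.2)}
move = ('B1', 'B2')                               # Move1(B1, B2)

# Commutativity: executing the translated plan in T2 and abstracting
# gives the same T1 situation as executing the plan directly in T1.
lhs = st1(result2(action12(move, sigma0), sigma0))
rhs = result1(move, st1(sigma0))
print(lhs == rhs)  # → True
```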

2.6 Temporary entities; wealth and welfare

In natural language, the present tense of the verb to be is used for asserting an intrinsic property of an entity and for asserting a property that is expected to hold long enough to provide a constant background for other events under discussion.

The wealth or welfare of a human or animal is such a temporary property.

The welfare of a mosquito over a short time is definable. It harms the mosquito to be squashed and helps it if it finds exposed skin from which to extract blood. Over a year the welfare of an individual mosquito is not definable. If it is to be defined, the concepts, e.g. descendants, will be quite different.

This suggests using contexts. We have

c(Today): Harms(Swat(M1073543907), M1073543907)  (18)

    as a proposition about this particular mosquito.

A person's wealth at a given time can be measured as an amount of money. His wealth increases as he is paid and decreases as he spends money. However, over a period of 10,000 years, the wealth or welfare of this individual is undefined.

    Nevertheless, wealth and welfare are useful concepts.


    2.7 States of motion

These present a general reason for using approximate concepts.

Suppose a robot is walking from here to there in discrete steps.

Holds(Walking(R2D2, Here, There), s)

describes a situation that can persist. Sentences giving the robot's position at a given time must be frequently updated. An important human reason for forming an approximate concept is to get a fluent that will persist. This reason will also apply to robots but perhaps to a lesser extent, since computers can perform more frequent updating than can people.

2.8 Folk physics and folk psychology as approximate theories

For example, the concept of "X believes P" is approximate both in the criterion for belief and in what is the object of a belief. These notions will also be approximate for robots.

Much of the criticism of folk psychology may come from demanding that it be more precise than is reasonable. Aristotle's aphorism applies here.


    3 Propositional approximate theories

Many topics take an especially simple form when one uses propositions instead of predicates and accepts the reduced expressivity.

Here is one approach to defining approximate propositional theories.

Let reality, e.g. the situation in a room, be given by the values of the propositional variables r1, . . . , rn. Assume that reality is not directly observable. n may be very large, e.g. like Avogadro's number.

Let the values of the propositions o1, . . . , ok be observable. They are functions of reality given by

oi = Oi(r1, . . . , rn),

where k is a modest number corresponding to how many bits we can actually observe.

We suppose that we want to know the values of q1, . . . , ql, which are related to reality by

qi = Qi(r1, . . . , rn),

where l is also a modest number.

An approximate theory AT is given by functions Q′i(o1, . . . , ok), i.e. AT undertakes to give what we want to know in terms of the observations.

If we are lucky in how reality turns out, the Q′ functions correspond to the Q functions, i.e.

Lucky(r1, . . . , rn) ⊃ (qi = Q′i(o1, . . . , ok))

for i = 1, . . . , l, i.e.

Lucky(r1, . . . , rn) ⊃ [Qi(r1, . . . , rn) ≡ Q′i(O1(r1, . . . , rn), . . . , Ok(r1, . . . , rn))].

If we are very fortunate we may be able to know when we are lucky, and we have

KnowLucky(o1, . . . , ok) ⊃ Lucky(r1, . . . , rn).

At the moment, we have no useful propositional approximate theories in mind, and the reader should remember Einstein's dictum "Everything should be made as simple as possible, but not simpler."
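Still, the definitions above are easy to instantiate. In the toy model below, reality is three bits, one bit is wanted, and the approximate theory's guess is right exactly on the lucky realities; every function here is invented for illustration:

```python
# Toy instance of the propositional setup: reality r = (r1, r2, r3),
# observables Oi, a wanted proposition Q1, and AT's guess Q'1 for it.
from itertools import product

def O1(r): return r[0] or r[1]   # observable o1
def O2(r): return r[2]           # observable o2
def Q1(r): return r[0]           # q1, what we want to know

def AT_Q1(o1, o2): return o1     # AT: guess q1 from the observations alone

def lucky(r):
    """Lucky(r): the realities on which AT's guess equals the truth."""
    return AT_Q1(O1(r), O2(r)) == Q1(r)

realities = list(product([False, True], repeat=3))
wrong = [r for r in realities if not lucky(r)]
print(len(wrong), 'of', len(realities))  # → 2 of 8: the theory is only approximate
```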

    3.1 Approximate Theories of Digital Circuits

Consider first combinational circuits built from logic elements, i.e. without bridges and other sneak paths.

The logical elements are treated as boolean functions, defined by their truth tables. The theory defines the behavior of any circuit in terms of composition of the functions associated with the logical elements. The outputs of a circuit are given by the theory for any combination of boolean inputs. Fan-in and fan-out restrictions are outside the logical theory, as are timing considerations.

Now consider sequential circuits including flipflops. Now what happens is defined only for some combinations of inputs. For example, the behavior of a D flipflop is not defined when its 0 and 1 inputs are given the same value, whether that value be 0 or 1. The behavior is only defined when the inputs are opposite.

The manufacturer does not say what will happen when these and other restrictions are not fulfilled, does not warrant that two of his flipflops will behave the same or that a flipflop will retain whatever behavior it has in these forbidden cases.

This makes the concept of D flipflop itself approximate, perhaps not in the same sense as some other approximate theories.

Thus the theory of sequential circuits is an approximate theory, and it is not an approximation to a definite less approximate theory of purely digital circuits. This is in spite of the fact that there is (or can be) an electronic theory of these digital circuits which describes their behavior. In that theory one D flipflop is different from another and changes its behavior as it ages.
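The partiality of the flipflop theory can be made concrete by modeling the device as a partial function. The encoding below is an illustrative sketch, with None standing for the cases the manufacturer leaves unwarranted:

```python
def flipflop_next(in0, in1, state):
    """Next state of the flipflop described above: defined only when
    the 0 and 1 inputs are opposite; None marks the forbidden cases,
    about which the theory simply says nothing."""
    if in0 == in1:
        return None              # behavior unwarranted, not False
    return 1 if in1 else 0       # set to whichever input is asserted

print(flipflop_next(0, 1, 0))    # → 1
print(flipflop_next(1, 0, 1))    # → 0
print(flipflop_next(1, 1, 0))    # → None: outside the approximate theory
```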

4 When an approximate concept becomes precise

Suppose an approximate concept represented by a predicate p(x) has a sufficient condition suff(x) and a necessary condition nec(x). Thus we have

(∀x)(suff(x) ⊃ p(x)), and
(∀x)(p(x) ⊃ nec(x)).
(19)

In general the sufficient and the necessary conditions will not coincide, i.e. we will not have

(∀x)(nec(x) ≡ suff(x)).  (20)

However, they may coincide with some restriction on x, i.e. we may have

(∀x)(special(x) ⊃ (nec(x) ≡ suff(x))).  (21)


Another way an approximate concept may become definite is by a mapping from the space in which it is first formalized into a more restricted space. We'll combine specialization with mapping in

(∀x)(special(x) ⊃ (nec(f(x)) ≡ suff(f(x)))),  (22)

where the function f maps a subset of the original domain into a specialized domain in which the concept p(x) becomes definite.
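As a concrete, and entirely invented, instance of the pattern (19)-(22), take "denotes an even number" as an approximate concept on strings: a sufficient condition, a weaker necessary condition, and a restriction under which they coincide after normalizing with f:

```python
def suff(x):                     # sufficient: clearly an even numeral
    return x.isdigit() and int(x) % 2 == 0

def nec(x):                      # necessary: does not visibly end in an odd digit
    return x == '' or x[-1] in '02468'

def special(x):                  # the restriction of (21)-(22)
    return x.isdigit()

def f(x):                        # map into the specialized domain, e.g. '04' -> '4'
    return str(int(x))

# (20) fails in general: nec does not imply suff on arbitrary strings.
print(nec('abc8'), suff('abc8'))           # → True False

# (22) holds: on special x, nec(f(x)) and suff(f(x)) coincide.
print(all(nec(f(x)) == suff(f(x))
          for x in ['0', '4', '17', '28', '100', '35']))  # → True
```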

5 Conclusions, remarks, and acknowledgements

The present paper is exploratory. The large number of approximate concepts we discuss are approximate in different ways, but we don't have a classification.

Many of the applications of approximate objects will result from theories connecting different levels and kinds of approximation.

I thank Eyal Amir, Johan van Benthem, Sasa Buvac, Tom Costello, and Aarati Parmar for helpful comments.

This research has been partly supported by ARPA contract no. USC 621915, the ARPA/Rome Laboratory planning initiative under grant (ONR) N00014-94-1-0775 and ARPA/AFOSR under (AFOSR) grant # F49620-97-1-0207.

    References

[CM99] Tom Costello and John McCarthy. Useful Counterfactuals. Electronic Transactions on Artificial Intelligence, 1999. Submitted July 1999. http://www-formal.stanford.edu/jmc/counterfactuals.html

[MB97] John McCarthy and Sasa Buvac. Formalizing context (expanded notes). In A. Aliseda, R.J. van Glabbeek, and D. Westerstahl, editors, Computing Natural Language. Center for the Study of Language and Information, Stanford University, 1997.

[McC90] John McCarthy. Formalizing Common Sense: Papers by John McCarthy. Ablex Publishing Corporation, 1990.

[McC93a] John McCarthy. Notes on Formalizing Context. In IJCAI-93, 1993. http://www-formal.stanford.edu/jmc/context.html

[McC93b] John McCarthy. Notes on formalizing context. In IJCAI-93, 1993. Available as http://www-formal.stanford.edu/jmc/context.html.

[Zad99] Lotfi A. Zadeh. From computing with numbers to computing with words: from manipulation of measurements to manipulation of perceptions. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 45(1):105-119, January 1999.