Expert Systems 1

Expert Systems, Chapter 6, Introduction to Artificial Intelligence, MEngg (Computer Systems)


Transcript of Expert Systems 1

  • Expert Systems, Chapter 6, Introduction to Artificial Intelligence, MEngg (Computer Systems)

  • Inferences and Explanation

  • Categories of Reasoning: Deductive Reasoning, Inductive Reasoning, Abductive Reasoning, Analogical Reasoning, Formal Reasoning, Procedural Numeric Reasoning, Generalization & Abstraction, Meta-Level Reasoning

  • Deductive Reasoning: Uses general premises to obtain a specific inference, developing new knowledge from given knowledge. Three parts:

    Major Premise, Minor Premise, Conclusion
    Used in logic, rule-based & frame-based systems

  • Deductive Reasoning - Example
    Major Premise: I do not jog when the temperature exceeds 90°
    Minor Premise: Today the temperature is 93°
    Conclusion: Therefore, I will not jog today

    A => B,  A
    ----------
        B        (Modus Ponens)
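The modus ponens pattern above can be sketched in a few lines (the fact and rule names are illustrative, not from the slides):

```python
# Modus ponens: from (A => B) and A, conclude B.
# Rules map an antecedent fact to the consequent it licenses.
rules = {"temperature_exceeds_90": "do_not_jog"}   # A => B

facts = {"temperature_exceeds_90"}                 # A (today it is 93°)

# Apply every rule whose antecedent is a known fact.
conclusions = {rules[f] for f in facts if f in rules}
print(conclusions)  # {'do_not_jog'}
```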

  • Inductive Reasoning: Uses established facts to draw generalized conclusions. It may be difficult to arrive at a conclusion, and conclusions can change if new facts are discovered, so there is an element of uncertainty. Used in logic, rule-based & frame-based systems

  • Inductive Reasoning - Example
    Premises:
    Faulty diodes cause electronic equipment failure
    Defective transistors cause electronic equipment failure
    Defective ICs cause electronic equipment malfunction
    Conclusion:
    Therefore, defective semiconductor devices are a cause of electronic equipment failure

    Whenever A then B
    -----------------
    Possibly A => B

  • Abductive Reasoning: Abduction is a reasoning process that tries to form plausible explanations for abnormal observations

    Abduction is distinctly different from deduction and induction
    Abduction is inherently uncertain, and uncertainty is an important issue in abductive reasoning

    A => B,  B
    ----------
    Possibly A

  • Abductive Reasoning: Some major formalisms for representing and reasoning about uncertainty:

    MYCIN's certainty factors (an early representative)
    Probability theory (esp. Bayesian belief networks)
    Fuzzy logic
    Truth maintenance systems
    Nonmonotonic reasoning

  • Comparing abduction, deduction, and induction
    Deduction:
      major premise: All balls in the box are black
      minor premise: These balls are from the box
      conclusion: These balls are black

    Induction:
      case: These balls are from the box
      observation: These balls are black
      hypothesized rule: All balls in the box are black

    Deduction:          Induction:
    A => B,  A          Whenever A then B
    ----------          -----------------
        B               Possibly A => B

  • Comparing abduction, deduction, and induction
    Abduction:
      rule: All balls in the box are black
      observation: These balls are black
      explanation: These balls are from the box

    A => B,  B
    ----------
    Possibly A

    Deduction reasons from causes to effects
    Induction reasons from specific cases to general rules
    Abduction reasons from effects to causes

  • Characteristics of Abductive Reasoning: Conclusions are hypotheses, not theorems (they may be false even if the rules and facts are true)

    E.g., misdiagnosis in medicine

    There may be multiple plausible hypotheses

    Given rules A => B and C => B, and fact B, both A and C are plausible hypotheses. Hypotheses can be ranked by their plausibility (if it can be determined)

  • Characteristics of Abductive Reasoning (cont.): Reasoning is often a hypothesize-and-test cycle

    Hypothesize: Postulate possible hypotheses, any of which would explain the given facts (or at least most of the important facts)
    Test: Test the plausibility of all or some of these hypotheses
    One way to test a hypothesis H is to ask whether something that is currently unknown but can be predicted from H is actually true
    If we also know A => D and C => E, then ask if D and E are true
    If D is true and E is false, then hypothesis A becomes more plausible (support for A is increased; support for C is decreased)
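The A/C example above can be sketched as a small hypothesize-and-test scorer (the +1 per confirmed and -1 per refuted prediction scheme is an illustrative assumption, not from the slides):

```python
# Hypothesize-and-test sketch for abduction.
# Rules map each antecedent to the consequents it predicts.
rules = {"A": ["B", "D"], "C": ["B", "E"]}
observed_true = {"B", "D"}
observed_false = {"E"}

# Hypothesize: any antecedent that predicts the observation B is a candidate.
hypotheses = [h for h, effects in rules.items() if "B" in effects]

# Test: score each hypothesis by its other, checkable predictions.
def score(h):
    s = 0
    for effect in rules[h]:
        if effect in observed_true:
            s += 1   # confirmed prediction raises plausibility
        elif effect in observed_false:
            s -= 1   # refuted prediction lowers plausibility
    return s

ranked = sorted(hypotheses, key=score, reverse=True)
print(ranked)  # ['A', 'C'] -- A ranks above C, as on the slide
```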

  • Characteristics of Abductive Reasoning (cont.): Reasoning is non-monotonic

    That is, the plausibility of hypotheses can increase or decrease as new facts are collected. In contrast, deductive inference is monotonic: it never changes a sentence's truth value, once known. In abductive (and inductive) reasoning, some hypotheses may be discarded, and new ones formed, when new observations are made

  • Analogical Reasoning: Answers are derived by analogy. Requires the ability to recognize previously encountered experiences. Works well with semantic networks

  • Analogical Reasoning - Example
    Question: What are the working hours of engineers in the company?
    Reasoning: Engineers are white-collar employees. White-collar employees work from 9:00 to 5:00.
    Conclusion/Answer: Engineers work from 9:00 to 5:00

  • Formal Reasoning: Involves symbolic manipulation of data structures
    Example: Mathematical logic as used in theorem proving in geometry; the approach of predicate calculus

  • Generalization & Abstraction: Used with logical & semantic representation of knowledge
    Example:
    Known facts: ALL companies have presidents; ALL brokerage houses are considered companies
    Conclusion: Any brokerage house will have a president

  • Meta-Level Reasoning: Involves knowledge about knowledge
    Example: Knowing about the importance & relevance of certain facts

  • Uncertainty

  • Sources of uncertainty: Uncertain inputs

    Missing data
    Noisy data

    Uncertain knowledge

    Multiple causes lead to multiple effects
    Incomplete enumeration of conditions or effects
    Incomplete knowledge of causality in the domain
    Probabilistic/stochastic effects

  • Sources of uncertainty: Uncertain outputs

    Abduction and induction are inherently uncertain
    Default reasoning, even in deductive fashion, is uncertain
    Incomplete deductive inference may be uncertain

  • Decision Making with Uncertainty: Rational behavior:

    For each possible action, identify the possible outcomes
    Compute the probability of each outcome
    Compute the utility of each outcome
    Compute the probability-weighted (expected) utility over possible outcomes for each action
    Select the action with the highest expected utility (principle of Maximum Expected Utility)
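The rational-behavior steps above can be sketched as a small expected-utility computation (the action names, probabilities, and utilities are invented for illustration):

```python
# Maximum Expected Utility sketch.
# Each action maps to its possible outcomes as (probability, utility) pairs.
actions = {
    "carry_umbrella": [(0.3, 60), (0.7, 80)],   # rain / no rain
    "no_umbrella":    [(0.3, 0),  (0.7, 100)],
}

def expected_utility(outcomes):
    # Probability-weighted sum of utilities over the outcomes.
    return sum(p * u for p, u in outcomes)

# Select the action with the highest expected utility (MEU principle).
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # carry_umbrella (EU ≈ 74 beats EU ≈ 70)
```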

  • Confidence: A variety of approaches:

    Certainty factors
    Dempster-Shafer theory
    Bayesian networks
    Fuzzy logic


  • Certainty Factor: Used as a degree of confirmation of a piece of evidence
    Example: If the light is green then OK to cross the street (cf 0.9)
    Certainty factors are easy to compute, but partly ad hoc
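One commonly cited MYCIN-style rule for combining two positive certainty factors for the same hypothesis is cf = cf1 + cf2·(1 - cf1); the slides do not give this rule, so treat it as a sketch with illustrative values:

```python
# MYCIN-style combination of two positive certainty factors.
# Both factors must be positive for this form of the rule.
def combine_positive(cf1, cf2):
    return cf1 + cf2 * (1 - cf1)

# Two independent pieces of evidence with cf 0.9 and 0.5:
cf = combine_positive(0.9, 0.5)
print(cf)  # ≈ 0.95 -- combined belief never exceeds 1
```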

  • Dempster-Shafer Theory: Does not force belief to be assigned to ignorance or refutation of a hypothesis
    Example: a belief of 0.7 in falling asleep in class does not mean that the chance of not falling asleep in class is 0.3

  • Bayesian Reasoning

              P(E|H) P(H)
    P(H|E) = -------------
                 P(E)

    Bayes' theorem gives the probability of event H given that event E has occurred
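A one-line sketch of the theorem above (the probability values are illustrative, not from the slides):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def bayes(p_e_given_h, p_h, p_e):
    return p_e_given_h * p_h / p_e

# Illustrative numbers: P(E|H)=0.8, P(H)=0.1, P(E)=0.2
posterior = bayes(0.8, 0.1, 0.2)
print(posterior)  # ≈ 0.4 -- observing E raises belief in H from 0.1 to 0.4
```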

  • Other Uncertainty Representations
    Default reasoning
      Nonmonotonic logic: Allows the retraction of default beliefs if they prove to be false
    Evidential reasoning
      Dempster-Shafer theory: Bel(P) is a measure of the evidence for P; Bel(¬P) is a measure of the evidence against P; together they define a belief interval (lower and upper bounds on confidence)
    Fuzzy reasoning
      Fuzzy sets: How well does an object satisfy a vague property?
      Fuzzy logic: How true is a logical statement?

  • Uncertainty tradeoffs
    Bayesian networks: Nice theoretical properties combined with efficient reasoning make BNs very popular; limited expressiveness and knowledge engineering challenges may limit their use
    Nonmonotonic logic: Represents commonsense reasoning, but can be computationally very expensive

  • Uncertainty tradeoffs
    Certainty factors: Not semantically well founded
    Dempster-Shafer theory: Has nice formal properties, but can be computationally expensive, and intervals tend to grow towards [0,1] (not a very useful conclusion)
    Fuzzy reasoning: Semantics are unclear (fuzzy!), but has proved very useful for commercial applications

  • Inferencing with Rules

  • Control Schemes: Two main kinds of rule-based systems:

    Forward chaining
    Backward chaining

  • Forward Chaining: Forward chaining starts with the facts and sees what rules apply (and hence what should be done) given the facts. Forward chaining systems have been used as:

    a model of human reasoning
    a basis for expert systems - various expert system shells are based on this model, such as CLIPS

  • Forward Chaining: Facts are held in a working memory. Condition-action rules represent actions to take when specified facts occur in working memory: IF condition THEN action. Firing a rule: typically the actions involve adding or deleting facts from working memory.

  • Forward Chaining Interpreter: Comprehensible - the trace of rule applications that leads to a conclusion is the explanation; it answers "why". Algorithm:

    Repeat
      Apply all the rules to the current facts.
      Each rule firing may add new facts.
    Until no new facts are added.

  • Forward Chaining Example: Simple fire example:

    R1: IF hot AND smoky THEN ADD fire
    R2: IF alarm_beeps THEN ADD smoky
    R3: IF fire THEN ADD switch_on_sprinklers

    Working memory initially contains:
    F1: alarm_beeps
    F2: hot

    Following the algorithm, first cycle:
    Find all rules with satisfied conditions: R2
    Choose one: R2
    Perform actions: ADD smoky
    Working memory now contains: alarm_beeps, hot, smoky

  • Forward Chaining Example (Contd): Next cycle:

    Find all rules with conditions satisfied: R1
    Choose one and apply action: ADD fire
    Working memory now contains: alarm_beeps, hot, smoky, fire

    Then:
    Rules with conditions satisfied: R3
    Apply action: ADD switch_on_sprinklers
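The cycles above can be reproduced with a minimal forward-chaining loop over the fire example's rules (a sketch; representing conditions as Python sets is an implementation choice, not from the slides):

```python
# Forward-chaining sketch of the fire example.
# Each rule is a (condition set, fact to add) pair.
rules = [
    ({"hot", "smoky"}, "fire"),          # R1
    ({"alarm_beeps"}, "smoky"),          # R2
    ({"fire"}, "switch_on_sprinklers"),  # R3
]

working_memory = {"alarm_beeps", "hot"}  # F1, F2

# Repeat: fire every rule whose conditions are satisfied,
# until a full pass adds no new facts.
changed = True
while changed:
    changed = False
    for conditions, action in rules:
        if conditions <= working_memory and action not in working_memory:
            working_memory.add(action)
            changed = True

print(sorted(working_memory))
# ['alarm_beeps', 'fire', 'hot', 'smoky', 'switch_on_sprinklers']
```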

  • Conflict Resolution: The order in which rules fire depends on the facts in working memory, not the order of the rules. Sometimes more than one rule may apply. Suppose we also had:

    R4: IF dry THEN ADD humidifiers_on
    R5: IF sprinklers_on THEN DELETE dry

    And the fact dry in working memory. Initially both R2 and R4 apply. Which to choose? The choice will influence the final contents of working memory (whether the humidifiers are switched on).

  • Conflict Resolution: When more than one rule applies, the following preferences are often applied:

    choose first the rules that use facts recently added to working memory (recency)
    prefer to fire rules with more specific conditions (e.g., IF hot AND smoky THEN .. rather than just IF hot THEN ..)

    Alternative conflict resolution strategies may also be applied (e.g., allowing the user to specify a preference order on rules).
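The specificity preference can be sketched as picking the applicable rule with the most conditions (the open_window rule is invented for illustration; it is not one of the slides' rules):

```python
# Conflict resolution by specificity: among applicable rules,
# prefer the one whose condition set is largest.
rules = [
    ({"hot"}, "open_window"),        # general rule (illustrative)
    ({"hot", "smoky"}, "fire"),      # more specific rule (R1 from the example)
]
working_memory = {"hot", "smoky"}

# Both rules are applicable; specificity breaks the tie.
applicable = [r for r in rules if r[0] <= working_memory]
conditions, action = max(applicable, key=lambda r: len(r[0]))
print(action)  # fire
```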

  • Backward ChainingBackward chaining starts with something to find out, and looks for rules that will help in answering it.

    This allows a rather more focused style of reasoning.

    (Forward chaining may result in a lot of irrelevant conclusions added to working memory)

  • Backward Chaining: The same rules/facts may be processed differently using a backward chaining interpreter. Basic algorithm:

    To prove goal G:
    If G is in the initial facts, it is proven.
    Otherwise, find a rule which can be used to conclude G, and try to prove each of that rule's conditions.

    Start with a possible hypothesis: Should I switch the sprinklers on?

  • Backward Chaining Example: Should we switch on the sprinklers? Set it as a goal.
    G1: switch_on_sprinklers. Is it in the initial facts? No. Is there a rule which adds this as a conclusion? Yes, R3.
    Set the condition of R3 as a new goal to prove: G2: fire. Is it in the initial facts? No. Rule? Yes, R1.
    Set its conditions as new goals: G3: hot, G4: smoky.

  • Backward Chaining Example: Try to prove G3: hot. In the initial facts.
    Try to prove G4: smoky. Conclusion of a rule, so: G5: alarm_beeps. In the initial facts, so all done.
    Proved hypothesis switch_on_sprinklers.
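The goal-directed trace above can be reproduced with a minimal recursive backward chainer over the same rules (a sketch; it follows the recursive basic algorithm rather than the stack-based implementation described later):

```python
# Backward-chaining sketch of the sprinkler example.
# Each goal maps to a list of alternative condition lists that conclude it.
rules = {
    "fire": [["hot", "smoky"]],            # R1
    "smoky": [["alarm_beeps"]],            # R2
    "switch_on_sprinklers": [["fire"]],    # R3
}
facts = {"alarm_beeps", "hot"}             # F1, F2

def prove(goal):
    # A goal is proven if it is a known fact, or if some rule
    # concluding it has all of its conditions provable.
    if goal in facts:
        return True
    return any(all(prove(c) for c in conds)
               for conds in rules.get(goal, []))

print(prove("switch_on_sprinklers"))  # True
```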

  • Application & Implementation: Interpreters for rule-based systems can be written in conventional languages. For backward chaining we use a stack of goals still to prove, popping goals off the stack to try to prove them, and adding new goals (the conditions of rules) onto the stack. The main goal/hypothesis succeeds if all goals are removed from the stack and none fails.

  • Backward Chaining Systems & Search: What should be done if more than one rule has the same conclusion? We must try both: either might be used to validly prove the hypothesis. This is a search problem: how do we systematically go through all the possibilities?

  • Forward vs Backward Chaining - The good and the bad: Forward chaining allows you to conclude anything, but is expensive. Backward chaining requires known goals; the premises of backward chaining direct which facts (tests) are needed. The rule trace provides an explanation.

  • Forward vs Backward Chaining - The good and the bad: The choice of method for reasoning on a rule set depends on the problem and on the properties of the rule set. If you have clear hypotheses, backward chaining is likely to be better, but it is wasteful if there are many hypotheses to test. Forward chaining may be better if you have a less clear hypothesis and want to see what can be concluded from the current situation.

  • Forward vs Backward Chaining - The good and the bad: Medical diagnosis systems have often used backward chaining, since they have well-defined hypotheses. Expert systems for design/configuration tend to use forward chaining (starting with components/specs and seeing what can be done, rather than starting with all possible design hypotheses).
