Soft Computing Unit-1 by Arun Pratap Singh
8/12/2019 Soft Computing Unit-1 by Arun Pratap Singh
UNIT : I
SOFT COMPUTING II SEMESTER (MCSE 205)
PREPARED BY ARUN PRATAP SINGH
INTRODUCTION TO SOFT COMPUTING :
Soft computing is a term applied to a field within computer science which is characterized by the use of inexact solutions to computationally hard tasks, such as the solution of NP-complete problems, for which there is no known algorithm that can compute an exact solution in polynomial time. Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind.
Constituents of SC :
Fuzzy systems => imprecision
Neural networks => learning
Probabilistic reasoning => uncertainty
Evolutionary computing => optimization
Soft Computing is a term used in computer science to refer to problems whose solutions are unpredictable and uncertain, with truth values that may lie between 0 and 1. Soft Computing became a formal area of study in Computer Science in the early 1990s. Earlier computational approaches could model and precisely analyze only relatively simple systems. More complex systems arising in biology, medicine, the humanities, management sciences, and similar fields often remained intractable to conventional mathematical and analytical methods. That said, it should be pointed out that simplicity and complexity of systems are relative, and many conventional mathematical models have been both challenging and very productive. Soft computing deals with imprecision, uncertainty, partial truth, and approximation to achieve practicability, robustness and low solution cost. As such, it forms the basis of a considerable amount of machine learning techniques. Recent trends tend to involve evolutionary and swarm-intelligence-based algorithms and bio-inspired computation.
There is a key difference between soft computing and possibility. Possibility is used when we don't have enough information to solve a problem, whereas soft computing is used when we don't have enough information about the problem itself. These kinds of problems originate in the human mind with all its doubts, subjectivity and emotions; an example is determining a suitable temperature for a room to make people feel comfortable.
SC today (Zadeh)
Computing with words (CW)
Theory of fuzzy information granulation (TFIG)
Computational theory of perceptions (CTP)
Possible SC data & operations :
Numeric data: 5, about 5, 5 to 6, about 5 to 6
Linguistic data: cheap, very big, not high, medium or bad
Functions & relations: f(x), about f(x), fairly similar, much greater
Components of soft computing include:

Neural networks (NN)
  Perceptron
  Support Vector Machines (SVM)
Fuzzy logic (FL)
Evolutionary computation (EC), including:
  Evolutionary algorithms
  Genetic algorithms
  Differential evolution
Metaheuristic and Swarm Intelligence, including:
  Ant colony optimization
  Particle swarm optimization
  Firefly algorithm
  Cuckoo search
Ideas about probability, including:
  Bayesian network
Chaos theory
Generally speaking, soft computing techniques resemble biological processes more closely than traditional techniques, which are largely based on formal logical systems, such as sentential and predicate logic, or rely heavily on computer-aided numerical analysis (as in finite element analysis). Soft computing techniques are intended to complement each other. Unlike hard computing schemes, which strive for exactness and full truth, soft computing techniques exploit the given tolerance of imprecision, partial truth, and uncertainty for a particular problem. Another common contrast comes from the observation that inductive reasoning plays a larger role in soft computing than in hard computing.
HARD COMPUTING VS SOFT COMPUTING :
Hard computing: real-time constraints; need of accuracy and precision in calculations and outcomes; useful in critical systems.

Soft computing: soft constraints; need of robustness rather than accuracy; useful for routine tasks that are not critical.
1) Hard computing, i.e., conventional computing, requires a precisely stated analytical model and often a lot of computation time. Soft computing differs from conventional (hard) computing in that, unlike hard computing, it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for soft computing is the human mind.

2) Hard computing is based on binary logic, crisp systems, numerical analysis and crisp software, but soft computing is based on fuzzy logic, neural nets and probabilistic reasoning.

3) Hard computing has the characteristics of precision and categoricity, while soft computing has those of approximation and dispositionality. Although in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower cost, high Machine Intelligence Quotient (MIQ) and economy of communication.

4) Hard computing requires programs to be written; soft computing can evolve its own programs.

5) Hard computing uses two-valued logic; soft computing can use multivalued or fuzzy logic.

6) Hard computing is deterministic; soft computing incorporates stochasticity.
7) Hard computing requires exact input data; soft computing can deal with ambiguous and noisy data.

8) Hard computing is strictly sequential; soft computing allows parallel computations.

9) Hard computing produces precise answers; soft computing can yield approximate answers.
SOFT COMPUTING TECHNIQUES :
1) FUZZY :
Fuzzy logic :- Fuzzy logic is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact. Compared to traditional binary sets (where variables may take on true or false values), fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false. Furthermore, when linguistic variables are used, these degrees may be managed by specific functions. The term "fuzzy logic" was introduced with the 1965 proposal of fuzzy set theory by Lotfi A. Zadeh. Fuzzy logic has been applied to many fields, from control theory to artificial intelligence. Fuzzy logics, however, had been studied since the 1920s as infinite-valued logics, notably by Łukasiewicz and Tarski.
[Figure: fuzzy logic applied to a temperature variable]
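To make the idea of degrees of truth concrete, here is a minimal sketch (not part of the original notes) of triangular membership functions for a temperature variable; the particular ranges chosen for cold, warm and hot are illustrative assumptions, not standard values:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative linguistic terms for a temperature variable (degrees Celsius).
def cold(t):
    return tri(t, -10.0, 0.0, 15.0)

def warm(t):
    return tri(t, 10.0, 20.0, 30.0)

def hot(t):
    return tri(t, 25.0, 35.0, 50.0)

t = 18.0
# Each term gives a degree of truth between 0 and 1 (partial truth),
# rather than a crisp true/false answer.
print(cold(t), warm(t), hot(t))
```

Note that the degrees need not sum to one; a single temperature can be partly warm and partly hot at the same time, which is exactly the tolerance of partial truth described above.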
2) NEURAL NETWORK :-
These networks are simplified models of the biological neuron system, which is a massively parallel distributed processing system made up of highly interconnected neural computing elements. Neural networks have the ability to learn, which makes them powerful and flexible and thereby lets them acquire knowledge and make it available for use. These networks are also called neural nets or artificial neural networks. In a neural network there is no need to devise an algorithm for performing a specific task. These networks are also well suited to real-time systems because of their fast computational times and fast response, owing to their parallel architecture.
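As a minimal sketch (not from the notes) of one such neural computing element, the following trains a single perceptron with the classic perceptron learning rule; learning the logical AND function is an illustrative choice:

```python
# A single artificial neuron (perceptron) trained with the perceptron
# learning rule. The weights are adjusted from examples, so no task-specific
# algorithm has to be devised by hand.
def step(weighted_sum):
    return 1 if weighted_sum >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    w = [0, 0]   # one weight per input connection
    b = 0        # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = step(w[0] * x1 + w[1] * x2 + b)
            error = target - output
            # Learning: adjust each weight in proportion to the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_samples]
print(predictions)  # [0, 0, 0, 1]
```

The perceptron converges here because AND is linearly separable; a single neuron cannot learn non-separable functions such as XOR, which is why multi-layer networks are needed in practice.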
3) GENETIC ALGORITHM :
In the computer science field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that mimics the process of natural selection. This heuristic (also sometimes called a metaheuristic) is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. Genetic algorithms find application in bioinformatics, phylogenetics, computational science, engineering, economics, chemistry, manufacturing, mathematics, physics, pharmacometrics and other fields.
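The selection, crossover and mutation operators named above can be sketched in a few lines. This toy example (not from the notes) evolves a bit string toward all 1s (the "OneMax" problem); all parameter values are illustrative assumptions:

```python
import random

def fitness(individual):
    return sum(individual)                      # count of 1-bits

def select(population):
    # Tournament selection: the fitter of two random individuals is chosen.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent1, parent2):
    # Single-point crossover: inheritance from both parents.
    point = random.randint(1, len(parent1) - 1)
    return parent1[:point] + parent2[point:]

def mutate(individual, rate=0.05):
    # Mutation: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def genetic_algorithm(n_bits=20, pop_size=30, generations=60):
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(population, key=fitness)    # keep the best found so far
        population = [elite] + [
            mutate(crossover(select(population), select(population)))
            for _ in range(pop_size - 1)
        ]
    return max(population, key=fitness)

random.seed(0)
best = genetic_algorithm()
print(fitness(best))
```

Keeping the current best individual each generation (elitism) is one common design choice; without it, mutation can destroy good solutions as fast as selection finds them.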
COMPUTATIONAL INTELLIGENCE :
Computational intelligence (CI) is a set of nature-inspired computational methodologies and approaches to address complex real-world problems for which traditional approaches, i.e., first-principles modeling or explicit statistical modeling, are ineffective or infeasible. Many such real-life problems are not considered to be mathematically well-posed, but nature provides many counterexamples of biological systems exhibiting the required function in practice. For instance, the human body has about 200 joints (degrees of freedom), but humans have little problem in executing a target movement of the hand, specified in just three Cartesian dimensions. Even if the torso were mechanically fixed, there is an excess of 7:3 parameters to be controlled for natural arm movement. Traditional models also often fail to handle uncertainty, noise and the presence of an ever-changing context. Computational intelligence provides solutions for such and other complicated problems and inverse problems. It primarily includes artificial neural networks, evolutionary computation and fuzzy logic. In addition, CI also embraces biologically inspired algorithms such as swarm intelligence and artificial immune systems, which can be seen as part of evolutionary computation, and includes broader fields such as image processing, data mining, and natural language processing. Furthermore, other formalisms (Dempster-Shafer theory, chaos theory and many-valued logic) are used in the construction of computational models.

The characteristic of "intelligence" is usually attributed to humans. More recently, many products and items also claim to be "intelligent". Intelligence is directly linked to reasoning and decision making. Fuzzy logic was introduced in 1965 as a tool to formalise and represent the reasoning process, and fuzzy logic systems, which are based on fuzzy logic, possess many characteristics attributed to intelligence. Fuzzy logic deals effectively with uncertainty that is common in human reasoning, perception and inference and, contrary to some misconceptions, has a very formal and strict mathematical backbone ('it is quite deterministic in itself yet allows uncertainties to be effectively represented and manipulated', so to speak). Neural networks, introduced in the 1940s (and further developed in the 1980s), mimic the human brain and represent a computational mechanism based on a simplified mathematical model of perceptrons (neurons) and the signals that they process. Evolutionary computation, introduced in the 1970s and more popular since the 1990s, mimics population-based sexual evolution through reproduction of generations. It also mimics genetics in so-called genetic algorithms.
PROBLEM SPACE AND SEARCHING :
GRAPH SEARCHING :
Many AI problems can be cast as the problem of finding a path in a graph. A graph is made up of nodes and arcs. Arcs are ordered pairs of nodes that can have associated costs.

Suppose we have a set of nodes that we call "start nodes" and a set of nodes that we call "goal nodes"; a solution is a path from a start node to a goal node.
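This description maps directly onto a simple data structure. In the sketch below (not from the notes), each node maps to its outgoing arcs stored as (neighbour, cost) pairs; the node names and costs are made up for illustration:

```python
# A graph as an adjacency list: node -> list of (neighbour, cost) arcs.
graph = {
    "S": [("A", 2), ("B", 5)],
    "A": [("G", 4)],
    "B": [("G", 1)],
    "G": [],          # a goal node with no outgoing arcs
}

start_nodes = {"S"}
goal_nodes = {"G"}

# A solution is a path from a start node to a goal node, e.g. S -> A -> G.
path = ["S", "A", "G"]
cost = 0
for here, there in zip(path, path[1:]):
    cost += dict(graph[here])[there]   # cost of the arc (here, there)
print(cost)  # 2 + 4 = 6
```

Note that arcs are ordered pairs: an arc from S to A does not imply an arc from A back to S.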
To find a solution, we need to search for a path. We use the generic searching algorithm. The frontier is a set of paths from a start node (we often identify the path with the node at the end of the path). Initially, the frontier is the set of empty paths from start nodes. Intuitively, the generic graph searching algorithm is:

Repeat:
- Select a path on the frontier. Let's call the selected path P.
- If P is a path to a goal node, stop and return P.
- Remove P from the frontier.
- For each neighbour of the node at the end of P, extend P to that neighbour and add the extended path to the frontier.
Until the frontier is empty. When it is empty there are no more solutions.
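The loop above can be sketched directly in code. In this minimal version (not from the notes), the frontier holds whole paths and the selection strategy is deliberately left simple (pop the most recently added path); the graph is illustrative:

```python
def generic_search(graph, start_nodes, is_goal):
    frontier = [[s] for s in start_nodes]      # paths of length one
    while frontier:
        path = frontier.pop()                  # select (and remove) a path P
        node = path[-1]
        if is_goal(node):                      # P reaches a goal node
            return path
        for neighbour in graph.get(node, []):  # extend P to each neighbour
            frontier.append(path + [neighbour])
    return None                                # frontier empty: no solution

graph = {"S": ["A", "B"], "A": ["G"], "B": []}
print(generic_search(graph, ["S"], lambda n: n == "G"))  # ['S', 'A', 'G']
```

Different search strategies (breadth-first, depth-first, and so on) differ only in which path is selected from the frontier at each step.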
There are a number of features that should be noticed about this:
- For a finite graph without cycles, it will eventually find a solution no matter which order you select paths on the frontier.
- Some strategies for selecting paths from the frontier expand fewer nodes than other strategies.
- As part of the definition of the algorithm, a solution is only found when a goal node is selected from the frontier, not when it is added.
DIFFERENT SEARCHING ALGORITHMS :
BREADTH FIRST SEARCH :
In graph theory, breadth-first search (BFS) is a strategy for searching in a graph when search is limited to essentially two operations: (a) visit and inspect a node of the graph; (b) gain access to visit the nodes that neighbor the currently visited node. BFS begins at a root node and inspects all the neighboring nodes. Then, for each of those neighbor nodes in turn, it inspects their unvisited neighbor nodes, and so on. Compare BFS with the equivalent, but more memory-efficient, iterative deepening depth-first search, and contrast it with depth-first search.
The algorithm uses a queue data structure to store intermediate results as it traverses the graph, as follows:

1. Enqueue the root node.
2. Dequeue a node and examine it.
   - If the element sought is found in this node, quit the search and return a result.
   - Otherwise, enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node on the graph has been examined; quit the search and return "not found".
4. If the queue is not empty, repeat from step 2.
Breadth First Search (BFS) searches breadth-wise in the problem space. Breadth-first search is like traversing a tree where each node is a state which may be a potential candidate for a solution. Breadth-first search expands nodes from the root of the tree and then generates one level of the tree at a time until a solution is found. It is very easily implemented by maintaining a queue of nodes. Initially the queue contains just the root. In each iteration, the node at the head of the queue is removed and then expanded. The generated child nodes are then added to the tail of the queue.
ALGORITHM: BREADTH-FIRST SEARCH
1. Create a variable called NODE-LIST and set it to the initial state.
2. Loop until the goal state is found or NODE-LIST is empty:
   a. Remove the first element, say E, from NODE-LIST. If NODE-LIST was empty, then quit.
   b. For each way that each rule can match the state described in E do:
      i) Apply the rule to generate a new state.
      ii) If the new state is the goal state, quit and return this state.
      iii) Otherwise add this state to the end of NODE-LIST.
Since it never generates a node in the tree until all the nodes at shallower levels have been generated, breadth-first search always finds a shortest path to a goal. Since each node can be generated in constant time, the amount of time used by breadth-first search is proportional to the number of nodes generated, which is a function of the branching factor b and the solution depth d. Since the number of nodes at level d is b^d, the total number of nodes generated in the worst case is b + b^2 + b^3 + ... + b^d, i.e. O(b^d), the asymptotic time complexity of breadth-first search.
[Figure: Breadth First Search — a tree with root node R at the first level, A and B at the second level, and C, D, E and F at the third level.]

Look at the above tree, with nodes starting from the root node R at the first level, A and B at the second level, and C, D, E and F at the third level. If we want to search for node E, then BFS will search level by level. First it will check whether E is the root. Then it will check the nodes at the second level. Finally it will find E at the third level.
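The example tree can be run through a BFS sketch (not from the notes). The text does not say which third-level nodes hang under A and which under B, so the split below is an assumption:

```python
from collections import deque

# The tree described above: R at the root, A and B below it, then C, D, E, F.
tree = {"R": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
        "C": [], "D": [], "E": [], "F": []}

def bfs(tree, root, target):
    queue = deque([root])           # 1. enqueue the root node
    visited_order = []
    while queue:
        node = queue.popleft()      # 2. dequeue a node and examine it
        visited_order.append(node)
        if node == target:
            return visited_order    # found: order shows level-by-level visits
        queue.extend(tree[node])    # otherwise enqueue its children
    return None                     # 3. queue empty: not found

print(bfs(tree, "R", "E"))  # ['R', 'A', 'B', 'C', 'D', 'E']
```

The visit order shows exactly the level-by-level behaviour described: the root first, then the whole second level, then the third level where E is found.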
ADVANTAGES OF BREADTH-FIRST SEARCH :

1. Breadth-first search will never get trapped exploring a useless path forever.
2. If there is a solution, BFS will definitely find it.
3. If there is more than one solution, then BFS can find the minimal one, i.e. the one that requires the smallest number of steps.
DISADVANTAGES OF BREADTH-FIRST SEARCH :
1. The main drawback of breadth-first search is its memory requirement. Since each level of the tree must be saved in order to generate the next level, and the amount of memory is proportional to the number of nodes stored, the space complexity of BFS is O(b^d). As a result, BFS is severely space-bound in practice and will exhaust the memory available on typical computers in a matter of minutes.
2. If the solution is farther away from the root, breadth-first search will consume a lot of time.
DEPTH FIRST SEARCH :
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking.
For the following graph:
a depth-first search starting at A, assuming that the left edges in the shown graph are chosen
before right edges, and assuming the search remembers previously visited nodes and will not
repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E,
C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory.
Performing the same search without remembering previously visited nodes results in visiting
nodes in the order A, B, D, F, E, A, B, D, F, E, etc. forever, caught in the A, B, D, F, E cycle and
never reaching C or G.
Iterative deepening is one technique to avoid this infinite loop and would reach all nodes.
Depth First Search (DFS) searches deeper into the problem space. Depth-first search always expands the deepest unexpanded node first. Depth-first search uses a last-in first-out stack for keeping the unexpanded nodes. More commonly, depth-first search is implemented recursively, with the recursion stack taking the place of an explicit node stack.
ALGORITHM: DEPTH FIRST SEARCH
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled:
   a) Generate a successor, say E, of the initial state. If there are no more successors, signal failure.
   b) Call Depth-First Search with E as the initial state.
   c) If success is returned, signal success. Otherwise continue in this loop.
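A recursive sketch of this algorithm follows (not from the notes); the recursion stack plays the role of the explicit node stack. Since the original figure is missing, the graph below is only a plausible reconstruction of the A-G example that yields the A, B, D, F, E, C, G visit order when left edges are tried first:

```python
# Assumed edges for the A-G example graph, left edges listed first.
graph = {"A": ["B", "C"], "B": ["D", "F"], "C": ["G"],
         "D": [], "E": [], "F": ["E"], "G": []}

def dfs(graph, node, target, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)              # remember visited nodes to avoid repeats
    if node == target:
        return [node]              # goal state: signal success
    for successor in graph[node]:  # explore each branch as deep as possible
        if successor not in visited:
            path = dfs(graph, successor, target, visited)
            if path is not None:
                return [node] + path
    return None                    # no successor leads to the goal: backtrack

print(dfs(graph, "A", "G"))  # ['A', 'C', 'G']
```

Dropping the `visited` set reproduces the failure mode described above: on a graph with cycles, the search can loop forever without ever reaching the goal.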
ADVANTAGES OF DEPTH-FIRST SEARCH
1. The advantage of depth-first search is that its memory requirement is only linear with respect to the search depth. This is in contrast with breadth-first search, which requires more space. The reason is that the algorithm only needs to store a stack of nodes on the path from the root to the current node.
2. The time complexity of a depth-first search to depth d is O(b^d), since it generates the same set of nodes as breadth-first search, but simply in a different order. Thus, practically, depth-first search is time-limited rather than space-limited.
3. If depth-first search finds a solution without exploring much in a path, then the time and space it takes will be very small.
DISADVANTAGES OF DEPTH-FIRST SEARCH
1. The disadvantage of depth-first search is that it may go down the left-most path forever; even a finite graph can generate an infinite tree. One solution to this problem is to impose a cutoff depth on the search. The ideal cutoff is the solution depth d, but this value is rarely known in advance of actually solving the problem. If the chosen cutoff depth is less than d, the algorithm will fail to find a solution, whereas if the cutoff depth is greater than d, a large price is paid in execution time, and the first solution found may not be an optimal one.
2. Depth-first search is not guaranteed to find a solution.
3. There is no guarantee of finding a minimal solution if more than one solution exists.
HEURISTIC SEARCHING TECHNIQUES :
Heuristic search is an AI search technique that employs a heuristic for its moves. A heuristic is a rule of thumb that probably leads to a solution. Heuristics play a major role in search strategies because of the exponential nature of most problems; they help reduce the number of alternatives from an exponential number to a polynomial number. In Artificial Intelligence, heuristic search has a general meaning and a more specialized technical meaning. In a general sense, the term heuristic is used for any advice that is often effective but is not guaranteed to work in every case. Within the heuristic search architecture, however, the term heuristic usually refers to the special case of a heuristic evaluation function.
HEURISTIC INFORMATION-
In order to solve larger problems, domain-specific knowledge must be added to improve search efficiency. Information about the problem includes the nature of states, the cost of transforming from one state to another, and the characteristics of the goals. This information can often be expressed in the form of a heuristic evaluation function, say f(n,g), a function of the nodes n and/or the goals g.
Following is a list of heuristic search techniques.
Pure Heuristic Search
A* algorithm
AO* algorithm
Depth-First Branch-And-Bound
Heuristic Path Algorithm
Best-First Search
COMPLEXITY OF FINDING OPTIMAL SOLUTIONS-
The time complexity of a heuristic search algorithm depends on the accuracy of the heuristic function. For example, if the heuristic evaluation function is an exact estimator, then the A* search algorithm runs in linear time, expanding only those nodes on an optimal solution path. Conversely, with a heuristic that returns zero everywhere, A* becomes uniform-cost search, which has exponential complexity.
In general, the time complexity of A* search and IDA* search is an exponential function of the error in the heuristic function. For example, if the heuristic has constant absolute error, meaning that it never underestimates by more than a constant amount regardless of the magnitude of the estimate, then the running time of A* is linear with respect to the solution cost. A more realistic assumption is constant relative error, which means that the error is a fixed percentage of the quantity being estimated. The base of the exponent, however, is smaller than the brute-force branching factor, reducing the asymptotic complexity and allowing larger problems to be solved. For example, using appropriate heuristic functions, IDA* can optimally solve random instances of the twenty-four puzzle and Rubik's Cube.
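As a concrete example of a heuristic evaluation function, here is a sketch of the Manhattan-distance heuristic for sliding-tile puzzles like the twenty-four puzzle mentioned above (the code uses the smaller 3x3 8-puzzle; the tuple encoding of states is an assumption):

```python
def manhattan_distance(state, goal):
    """Admissible heuristic for the 8-puzzle: sum, over all tiles, of the
    horizontal + vertical distance from the tile's square to its goal square.
    States are 9-tuples read row by row; 0 marks the blank."""
    total = 0
    for tile in range(1, 9):                 # the blank does not count
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 7, 0, 8)          # tile 8 is one move from home
print(manhattan_distance(state, goal))       # → 1
```

Because each move slides one tile one square, this never overestimates the true number of moves, which is exactly the admissibility property discussed below.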
BEST FIRST SEARCH :
A combination of depth-first and breadth-first searches. Depth-first is good because a solution can be found without computing all nodes, and breadth-first is good because it does not get trapped in dead ends. Best-first search allows us to switch between paths, thus gaining the benefit of both approaches. At each step the most promising node is chosen. If one of the chosen nodes generates nodes that are less promising, it is possible to choose another node at the same level, and in effect the search changes from depth to breadth. If on analysis these are no better, then the previously unexpanded node and branch are not forgotten, and the search reverts to the descendants of the first choice and proceeds, backtracking as it were.
Best First Search Algorithm:
1. Start with OPEN holding the initial state.
2. Pick the best node on OPEN.
3. Generate its successors.
4. For each successor:
   - If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
   - If it has been generated before, change the parent if this new path is better, and in that case update the cost of getting to any successor nodes.
5. If a goal is found or no more nodes are left in OPEN, quit; else return to step 2.
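A simplified sketch of this procedure in Python, using a priority queue as OPEN, ordered by the heuristic value. It omits the parent-updating step of the full algorithm, and the graph and heuristic values below are invented for illustration:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the most promising node,
    i.e. the node on OPEN with the smallest heuristic estimate h(n)."""
    open_list = [(h[start], start, [start])]       # (estimate, node, path)
    visited = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # step 2: best node on OPEN
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph.get(node, []):           # step 3: generate successors
            if succ not in visited:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None                                    # OPEN exhausted: failure

# Hypothetical graph and heuristic values, for illustration only
graph = {'S': ['A', 'B'], 'A': ['C', 'G'], 'B': ['C'], 'C': ['G'], 'G': []}
h = {'S': 6, 'A': 2, 'B': 4, 'C': 1, 'G': 0}
print(best_first_search(graph, h, 'S', 'G'))   # → ['S', 'A', 'G']
```

Note that the queue lets the search jump between branches whenever another node becomes the most promising, which is exactly the switching behaviour described above.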
A* ALGORITHM :
The A* algorithm combines features of uniform-cost search and pure heuristic search to efficiently compute optimal solutions. A* is a best-first search algorithm in which the cost associated with a node is f(n) = g(n) + h(n), where g(n) is the cost of the path from the initial state to node n and h(n) is the heuristic estimate of the cost of a path from node n to a goal. Thus, f(n) estimates the lowest total cost of any solution path going through node n. At each point a node with the lowest f value is chosen for expansion. Ties among nodes of equal f value should be broken in favor of nodes with lower h values. The algorithm terminates when a goal is chosen for expansion.

A* finds an optimal path to a goal if the heuristic function h(n) is admissible, meaning it never overestimates the actual cost. For example, airline distance never overestimates actual highway distance, and Manhattan distance never overestimates the actual number of moves in the sliding-tile puzzle.
In computer science, A* (pronounced "A star") is a computer algorithm that is widely used in pathfinding and graph traversal, the process of plotting an efficiently traversable path between points, called nodes. Noted for its performance and accuracy, it enjoys widespread use. However, in
practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance.
Now let us apply the algorithm to the above search tree and see what it gives us. We will go through each iteration and look at the final output. Each element of the priority queue is written as [path, f(n)]. We will use h1 as the heuristic, given in the diagram.
Initialization: { [ S , 4 ] }
Iteration1: { [ S->A , 3 ] , [ S->G , 12 ] }
Iteration2: { [ S->A->C , 4 ] , [ S->A->B , 10 ] , [ S->G , 12 ] }
Iteration3: { [ S->A->C->G , 4 ] , [ S->A->C->D , 6 ] , [ S->A->B , 10 ] , [ S->G , 12] }
Iteration4 gives the final output as S->A->C->G.
Things worth mentioning:

- The creation of the tree is not a part of the algorithm; it is just for visualization.
- The algorithm returns the first path encountered. It does not search for all paths.
- The algorithm returns a path which is optimal in terms of cost, if an admissible heuristic is used (this can be proved).
The above example illustrates that A* search gives the optimal path faster than uniform-cost search. This is, however, true only if the heuristic is admissible. In general, the efficiency of the algorithm depends on the quality of the heuristic: the nearer the heuristic is to the actual cost, the faster the algorithm. Trivially, the heuristic can be taken to be 0, which gives the uniform-cost search algorithm.
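A sketch of A* in Python. The edge costs and heuristic values below are assumptions chosen so that the search reproduces the iterations above and returns S->A->C->G with cost 4; they are not all stated explicitly in the notes.

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f(n) = g(n) + h(n).
    `graph` maps each node to a list of (successor, edge_cost) pairs."""
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:                           # goal chosen for expansion
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                               # a cheaper path was found already
        best_g[node] = g
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Assumed costs and h values consistent with the iterations shown above
graph = {'S': [('A', 1), ('G', 12)], 'A': [('B', 3), ('C', 1)],
         'C': [('D', 1), ('G', 2)], 'B': [], 'D': [], 'G': []}
h = {'S': 4, 'A': 2, 'B': 6, 'C': 2, 'D': 3, 'G': 0}
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)   # → ['S', 'A', 'C', 'G'] 4
```

With these numbers the frontier evolves exactly as in the iterations listed above: [S,4], then [S->A,3] and [S->G,12], and so on until S->A->C->G is expanded with f = 4.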
A* Algorithm: An Example
AO* ALGORITHM :
When a problem can be divided into a set of sub-problems, where each sub-problem can be solved separately and a combination of these will be a solution, AND-OR graphs (or AND-OR trees) are used for representing the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution. As in an OR graph, several arcs may emerge from a single node, indicating several possible ways of solving the problem; hence the graph is known as an AND-OR graph rather than simply an AND graph. The figure shows an AND-OR graph.
An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately. The A* algorithm cannot search AND-OR graphs efficiently. This can be understood from the given figure.
FIGURE: AND-OR graph

In figure (a) the top node A has been expanded, producing two arcs, one leading to B and one leading to C-D. The numbers at each node represent the value of f' at that node (the cost of getting to the goal state from the current state). For simplicity, it is assumed that every operation (i.e. applying a rule) has unit cost, i.e., each arc with a single successor has a cost of 1, and so does each of its components. With the information available so far, it appears that C is the most promising node to expand, since its f' = 3 is the lowest; but going through B would be better, since to use C we must also use D, and the cost would be 9 (3+4+1+1). Through B it would be 6 (5+1).
Thus the choice of the next node to expand depends not only on its f' value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer. In the figure, the node G appears to be the most promising node, with the least f' value. But G is not on the current best path, since to use G we must use the arc G-H with a cost of 9, and this in turn demands that further arcs be used (with a cost of 27). The path from A through B, E-F is better, with a total cost of (17+1=18). Thus we can see that to search an AND-OR graph, the following three things must be done.
1. Traverse the graph starting at the initial node and following the current best path, and accumulate the set of nodes that are on the path and have not yet been expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph, and decide which of the current paths is the best.
The propagation of revised cost estimates backward in the tree is not necessary in the A* algorithm. In the AO* algorithm, by contrast, expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:
Referring to the figure: the initial node is expanded and D is marked initially as the most promising node. D is expanded, producing an AND arc E-F. The f' value of D is updated to 10. Going backwards we can see that the AND arc B-C is better; it is now marked as the current best path. B and C have to be expanded next. This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm, the path from one node to another is always the one of lowest cost, and it is independent of the paths through other nodes.
The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which used two lists, OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value of h', the cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm. This is because it is not possible to compute a single such value, since there may be many paths to the same state. In the AO* algorithm, h' serves as the estimate of the goodness of a node. Also, a threshold value called FUTILITY is used: if the estimated cost of a solution is greater than FUTILITY, the search is abandoned as too expensive to be practical.
For representing the above graphs, the AO* algorithm is as follows.

AO* algorithm
This algorithm is applied to problems for which AND/OR graphs can be built.

Problem definition: given [G, s, T] where
G: implicitly specified AND/OR graph
s: start node
T: set of terminals
h(n): heuristic function

Aim: to find a minimum-cost solution tree.

Algorithm AO*
1. Initialize: set G* = {s}, f(s) = h(s); if s ∈ T, label s as SOLVED.
2. Terminate: if s is solved, then terminate.
3. Select: select a non-terminal leaf node n from the marked sub-tree.
4. Expand: make explicit the successors of n. For each new successor m, set f(m) = h(m); if m is terminal, label it as SOLVED.
5. Cost revision: call cost-revision(n).
6. Loop: go to step 2.

Note: if there are no AND nodes in the graph, the algorithm behaves exactly like the A* algorithm.
OR
1. Let G consist only of the node representing the initial state; call this node INIT. Compute h'(INIT).
2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node NODE.
(II) Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE); this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do the following:
(a) Add SUCCESSOR to graph G.
(b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
(c) If SUCCESSOR is not a terminal node, compute its h' value.
(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED or whose h' values have changed. Initialize S to NODE. Until S is empty, repeat the following procedure:
(a) Select a node from S, call it CURRENT, and remove it from S.
(b) Compute the h' of each of the arcs emerging from CURRENT, and assign the minimum h' to CURRENT.
(c) Mark the minimum-cost path as the best path out of CURRENT.
(d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arcs have been labeled SOLVED.
(e) If CURRENT has been marked SOLVED or its h' has just changed, its new status must be propagated backwards up the graph; hence all the ancestors of CURRENT are added to S.
AO* Search Procedure:
1. Place the start node on OPEN.
2. Using the search tree, compute the most promising solution tree TP.
3. Select a node n that is both on OPEN and a part of TP; remove n from OPEN and place it on CLOSED.
4. If n is a goal node, label n as SOLVED. If the start node is solved, exit with success, where TP is the solution tree; remove all nodes from OPEN with a solved ancestor.
5. If n is not a solvable node, label n as UNSOLVABLE. If the start node is labeled as unsolvable, exit with failure. Remove all nodes from OPEN with unsolvable ancestors.
6. Otherwise, expand node n, generating all of its successors; compute the cost for each newly generated node and place all such nodes on OPEN.
7. Go back to step 2.
GAME PLAYING:
Game playing has been a major topic of AI since the very beginning, both because of the attraction of the topic to people and because of its close relation to "intelligence" and its well-defined states and rules. The most commonly used AI technique in games is search. In other problem-solving activities, state change is caused solely by the action of the agent. However, in multi-agent games, it also depends on the actions of other agents, who usually have different goals.
A special situation that has been studied most is the "two-person zero-sum game", where the two players have exactly opposite goals. (Not all competitions are zero-sum!)
There are perfect-information games (such as Chess and Go) and imperfect-information games (such as Bridge and games where a die is used). Given sufficient time and space, an optimal solution can usually be obtained for the former by exhaustive search, though not for the latter. However, for most interesting games, such a solution is usually too inefficient to be practically used.
MINIMAX SEARCH PROCEDURE :
Minimax (sometimes MinMax or MM) is a decision rule used in decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst-case (maximum loss) scenario. Alternatively, it can be thought of as maximizing the minimum gain (maximin or MaxMin). Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision making in the presence of uncertainty.
The standard algorithm for two-player perfect-information games such as chess, checkers or Othello is minimax search with heuristic static evaluation. The minimax search algorithm searches forward to a fixed depth in the game tree, limited by the amount of time available per move. At this search horizon, a heuristic function is applied to the frontier nodes. In this case, a heuristic evaluation is a function that takes a board position and returns a number that indicates how favourable that position is for one player relative to the other. For example, a very simple heuristic evaluator for chess would count the total number of pieces on the board for one player, appropriately weighted by their relative strength, and subtract the weighted sum of the opponent's pieces. Thus, large positive values would correspond to strong positions for one player, called MAX, whereas large negative values would represent advantageous situations for the opponent, called MIN.
Given the heuristic evaluations of the frontier nodes, the minimax search algorithm recursively computes the values for the interior nodes in the tree according to the minimax rule. The value of a node where it is MAX's turn to move is the maximum of the values of its children, while the value of a node where MIN is to move is the minimum of the values of its children. Thus at alternate levels of the tree, the minimum or the maximum values of the children are backed up. This continues until the values of the immediate children of the current position are computed, at which point one move is made to the child with the maximum or minimum value, depending on whose turn it is to move.
For a two-agent zero-sum perfect-information game, if the two players take turns to move, the minimax procedure can solve the problem given sufficient computational resources. This algorithm assumes that each player takes the best option in each step.
First, we distinguish two types of nodes, MAX and MIN, in the state graph, determined by thedepth of the search tree.
Minimax procedure: start from the leaves of the tree (with final scores with respect to one player, MAX) and go backwards towards the root.

At each step, one player (MAX) takes the action that leads to the highest score, while the other player (MIN) takes the action that leads to the lowest score.

All nodes in the tree will then be scored, and the path from the root to the actual result is the one on which all nodes have the same score.
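The backing-up procedure can be sketched in Python on a small two-ply game tree; the tree shape and the leaf scores below are invented for illustration:

```python
def minimax(node, maximizing, scores, children):
    """Back up scores from the leaves: MAX picks the largest child value,
    MIN the smallest. `scores` holds the leaf evaluations; `children` maps
    each interior node to its successors."""
    if node in scores:                      # leaf: return its static score
        return scores[node]
    values = [minimax(child, not maximizing, scores, children)
              for child in children[node]]
    return max(values) if maximizing else min(values)

# Assumed two-ply tree: MAX moves at the root, MIN replies at a and b.
children = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
scores = {'a1': 3, 'a2': 5, 'b1': 2, 'b2': 9}
print(minimax('root', True, scores, children))   # → 3
```

MIN drives node a down to min(3, 5) = 3 and node b down to min(2, 9) = 2, so MAX chooses the move to a with backed-up value 3.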
Because of computational resource limitations, the search depth is usually restricted to a constant, and estimated scores (generated by a heuristic function) replace the actual scores in the above procedure.

Example: Tic-tac-toe, with the difference in the number of possible winning paths as the heuristic function.
ALPHA-BETA CUTOFFS :
Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
ALPHA-BETA cutoff is a method for reducing the number of nodes explored in the minimax strategy. For the nodes it explores it computes, in addition to the score, an alpha value and a beta value.

ALPHA value of a node

- It is a value never greater than the true score of this node. Initially it is the score of that node, if the node is a leaf; otherwise it is -infinity.
- Then at a MAX node it is set to the largest of the scores of its successors explored up to now, and at a MIN node to the alpha value of its predecessor.
BETA value of a node

- It is a value never smaller than the true score of this node. Initially it is the score of that node, if the node is a leaf; otherwise it is +infinity.
- Then at a MIN node it is set to the smallest of the scores of its successors explored up to now, and at a MAX node to the beta value of its predecessor.
ADDITIONAL REFINEMENTS:

- Waiting for Quiescence: continue the search until no drastic change occurs from one level to the next.
- Secondary Search: after choosing a move, search a few more levels beneath it to be sure it still looks good.
- Book Moves: for some parts of the game (especially opening and end moves), keep a catalog of the best moves to make.
ITERATIVE DEEPENING:
Iterative deepening (ID) has been adopted as the basic time management strategy in depth-firstsearches, but has proved surprisingly beneficial as far as move ordering is concerned in alpha-beta and its enhancements.
It has been noticed that even if one is about to search to a given depth, iterative deepening is faster than searching to that depth immediately. This is due to dynamic move-ordering techniques such as PV-, hash- and refutation moves determined in previous iteration(s), as well as the history heuristic.
HOW ITERATIVE DEEPENING SEARCH WORKS-
It works as follows: the program starts with a one-ply search, then increments the search depth and does another search. This process is repeated until the time allocated for the search is exhausted. In case of an unfinished search, the program always has the option to fall back on the move selected in the last iteration of the search. Yet if we make sure that this move is searched first in the next iteration, then overwriting the new move with the old one becomes unnecessary. This way, results from a partial search can also be accepted, though in case of a severe drop in the score it is wise to allocate some more time, as the first alternative is often a bad capture, delaying the loss instead of preventing it.
Iterative deepening, using a transposition table (TT), embeds depth-first algorithms like alpha-beta into a framework with best-first characteristics.
An uninformed graph search algorithm which is a good compromise between the efficiency of depth-first search and the admissibility of breadth-first search. Iterative deepening performs a complete search of the search space (often using a depth-first search strategy) up to a maximum depth d. If no solution can be found up to depth d, the maximum search depth is increased to d+1, and the search space is traversed again (starting from the top node). This strategy ensures that iterative deepening, like breadth-first search, always terminates in an optimal path from the start node to a goal node whenever such a path exists (this is called admissibility); but it also allows implementation as an efficient depth-first search. Iterative
deepening is optimal in both time and space complexity among all uninformed admissible search strategies. At first sight it might look as if iterative deepening is inefficient, since after increasing the cut-off depth from d to d+1, it redoes all the work up to level d in order to investigate nodes at level d+1. However, since typical search spaces grow exponentially with depth (because of a constant branching factor), the cost of searching up to depth d+1 is entirely dominated by the search at the deepest level d+1: if b is the average branching rate of the search space, there are b^(d+1) nodes at depth d+1, which is about the same as the total number of nodes up to depth d. In fact, it can be shown that among all uninformed admissible search strategies, iterative deepening has the lowest asymptotic complexity in both time (O(b^d)) and space (O(d)). Breadth-first search, on the other hand, is only asymptotically optimal in time, and is very bad (exponential) in space. The actual time complexity of breadth-first search (as opposed to the asymptotic complexity) is of course lower than that of iterative deepening (namely by the small constant factor b/(b-1)), but this is easily offset by the difference in space complexity in favour of iterative deepening. Thus iterative deepening is asymptotically optimal in both time and space, whereas breadth-first search is asymptotically optimal only in time and really bad in space, while the actual complexities of iterative deepening and breadth-first search are very close. Iterative deepening can also be applied to informed search strategies, such as A*. This modified version of A* (IDA*) is again optimal in both time and space among all informed admissible search strategies.
- Search can be aborted at any time and the best move of the previous iteration is chosen.
- Previous iterations can provide invaluable move-ordering constraints.
- Can be adapted for single-agent search.
- Can be used to combine the best aspects of depth-first search and breadth-first search.
Depth-First Iterative Deepening (DFID)

1. Set SEARCH-DEPTH = 1.
2. Conduct a depth-first search to a depth of SEARCH-DEPTH. If a solution path is found, then return it.
3. Increment SEARCH-DEPTH by 1 and go to step 2.
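The three DFID steps above can be sketched in Python; the example graph, node names, and depth bound are illustrative assumptions, not from the notes:

```python
# Depth-first iterative deepening (DFID): repeated depth-limited DFS
# with an increasing cut-off, following steps 1-3 above.

def depth_limited_search(graph, node, goal, limit, path):
    """Depth-first search from node, exploring at most `limit` more edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in graph.get(node, []):
        if child not in path:  # avoid cycles on the current path
            result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if result is not None:
                return result
    return None

def dfid(graph, start, goal, max_depth=20):
    for depth in range(1, max_depth + 1):      # steps 1 and 3: grow SEARCH-DEPTH
        result = depth_limited_search(graph, start, goal, depth, [start])  # step 2
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['G']}
print(dfid(graph, 'A', 'G'))  # ['A', 'B', 'D', 'G']
```

Because each iteration restarts from the root, only the current path is kept in memory, which is what gives iterative deepening its O(d) space bound.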
Iterative-Deepening-A* (IDA*)

1. Set THRESHOLD = heuristic evaluation of the start state.
2. Conduct a depth-first search, pruning any branch when its total cost exceeds THRESHOLD. If a solution path is found, then return it.
3. Increment THRESHOLD by the minimum amount it was exceeded and go to step 2.
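A minimal sketch of IDA* following the steps above, assuming an admissible heuristic and a reachable goal; the number-line example and its heuristic are illustrative, not from the notes:

```python
import math

# Iterative-Deepening-A* (IDA*): depth-first search bounded by an f-cost
# threshold, raised each round by the minimum amount it was exceeded.

def ida_star(start, goal, neighbors, h):
    threshold = h(start)                       # step 1
    path = [start]

    def search(g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f                           # prune; report the excess f-cost
        if node == goal:
            return True
        minimum = math.inf
        for succ, cost in neighbors(node):
            if succ not in path:
                path.append(succ)
                t = search(g + cost, threshold)
                if t is True:
                    return True
                minimum = min(minimum, t)
                path.pop()
        return minimum

    while True:
        t = search(0, threshold)               # step 2
        if t is True:
            return path
        if t == math.inf:
            return None                        # no node left below any threshold
        threshold = t                          # step 3: minimum excess becomes new bound

# Tiny example: walk along a number line from 0 to 3, unit step cost,
# heuristic = distance remaining to the goal.
print(ida_star(0, 3, lambda n: [(n - 1, 1), (n + 1, 1)], lambda n: abs(3 - n)))
# [0, 1, 2, 3]
```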
STATISTICAL REASONING:
PROBABILITY & BAYES' THEOREM:
BAYES' THEOREM:

Bayes' theorem lets us calculate a conditional probability:

    P(B | A) = P(A | B) P(B) / P(A)

P(B) is the prior probability of B. P(B | A) is the posterior probability of B. Recall the definition of conditional probability:

    P(B | A) = P(A & B) / P(A)
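A worked use of the formula above; the test sensitivity, false-positive rate, and disease prevalence are illustrative numbers, not from the notes:

```python
# Bayes' theorem on a diagnostic test: B = "patient has the disease",
# A = "test comes back positive". All numbers are illustrative.

p_b = 0.01                 # prior P(B): 1% prevalence
p_a_given_b = 0.99         # P(A | B): sensitivity of the test
p_a_given_not_b = 0.05     # P(A | ~B): false-positive rate

# Total probability of a positive test, P(A), by summing over B and ~B:
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# Posterior P(B | A) = P(A | B) * P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 4))  # 0.1667
```

Even with a 99%-sensitive test, the low prior drags the posterior down to about 1/6, which is exactly the kind of prior-versus-posterior distinction the formula captures.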
CERTAINTY FACTOR AND RULE-BASED SYSTEM:
BAYESIAN NETWORK:
A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
A simple Bayesian network: rain influences whether the sprinkler is activated, and both rain and the sprinkler influence whether the grass is wet.

- Bayesian nets (BN) are a network-based framework for representing and analyzing models involving uncertainty.
- BN are different from other knowledge-based systems tools because uncertainty is handled in a mathematically rigorous yet efficient and simple way.
- BN are different from other probabilistic analysis tools because of the network representation of problems, the use of Bayesian statistics, and the synergy between these.

Knowledge structure:

- variables are nodes
- arcs represent probabilistic dependence between variables
- conditional probabilities encode the strength of the dependencies

Computational architecture:

- computes posterior probabilities given evidence about some nodes
- exploits probabilistic independence for efficient computation
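Inference in the rain/sprinkler/grass-wet network described above can be sketched by brute-force enumeration of the joint distribution; the conditional probability tables below are illustrative assumptions, not from the notes:

```python
# Posterior inference in the rain/sprinkler/grass-wet Bayesian network
# by enumerating the joint distribution. CPT values are illustrative.

P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {  # P(Sprinkler | Rain): sprinkler rarely runs when it rains
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.40, False: 0.60},
}
P_WET = {  # P(GrassWet = true | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(rain, sprinkler, wet):
    # Chain rule following the DAG: P(R) * P(S | R) * P(W | S, R)
    p = P_RAIN[rain] * P_SPRINKLER[rain][sprinkler]
    p_wet = P_WET[(sprinkler, rain)]
    return p * (p_wet if wet else 1 - p_wet)

# Posterior P(Rain = true | GrassWet = true), summing out Sprinkler:
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 4))  # 0.3577
```

This is exactly the "computes posterior probabilities given evidence about some nodes" step: evidence (wet grass) is fixed, the unobserved sprinkler variable is summed out, and the result is normalized.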
DEMPSTER-SHAFER THEORY:

The Dempster-Shafer theory (DST) is a mathematical theory of evidence.[1] It allows one to combine evidence from different sources and arrive at a degree of belief (represented by a belief function) that takes into account all the available evidence. The theory was first developed by Arthur P. Dempster and Glenn Shafer.

In a narrow sense, the term Dempster-Shafer theory refers to the original conception of the theory by Dempster and Shafer. However, it is more common to use the term in the wider sense of the same general approach, as adapted to specific kinds of situations. In particular, many authors have proposed different rules for combining evidence, often with a view to handling conflicts in evidence better.

Dempster-Shafer theory is a generalization of the Bayesian theory of subjective probability; whereas the latter requires probabilities for each question of interest, belief functions base degrees of belief (or confidence, or trust) for one question on the probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ depends on how closely the two questions are related.[5] Put another way, it is a way of representing epistemic plausibilities, but it can yield answers that contradict those arrived at using probability theory.

Often used as a method of sensor fusion, Dempster-Shafer theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule[6] for combining such degrees of belief when they are based on independent items of evidence. In essence, the degree of belief in a proposition depends primarily upon the number of answers (to the related questions) containing the proposition, and the subjective probability of each answer. Also contributing are the rules of combination that reflect general assumptions about the data.

In this formalism a degree of belief (also referred to as a mass) is represented as a belief function rather than a Bayesian probability distribution. Probability values are assigned to sets of possibilities rather than single events: their appeal rests on the fact that they naturally encode evidence in favor of propositions.

Dempster-Shafer theory assigns its masses to all of the non-empty subsets of the entities that compose a system.
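Dempster's rule for combining independent items of evidence, mentioned above, can be sketched as follows; the two-sensor "friend or foe" frame and the mass values are illustrative assumptions, not from the notes:

```python
# Dempster's rule of combination for two independent mass functions.
# Masses are assigned to non-empty subsets of the frame of discernment,
# represented here as frozensets.

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to masses)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb           # product mass landing on the empty set
    k = 1.0 - conflict                        # normalization: redistribute conflict
    return {s: v / k for s, v in combined.items()}

# Two sensors reporting on whether a target is a friend or a foe.
FRIEND, FOE = frozenset({'friend'}), frozenset({'foe'})
EITHER = FRIEND | FOE                         # the whole frame: total ignorance

m1 = {FRIEND: 0.6, EITHER: 0.4}               # sensor 1: 0.6 belief in "friend"
m2 = {FRIEND: 0.7, EITHER: 0.3}               # sensor 2: 0.7 belief in "friend"

m = combine(m1, m2)
print(round(m[FRIEND], 2), round(m[EITHER], 2))  # 0.88 0.12
```

Note how the residual mass on the whole frame (the "either" set) shrinks as the two independent pieces of evidence reinforce each other, which is what distinguishes a belief function from a single Bayesian probability distribution.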