
Academic Content File

Session: 2021-22

Semester: Odd

Branch:

Name of Subject: Artificial Intelligence for Engineers

Code (University): KMC101/KMC201

Name of Faculty: Sandeep Vishwakarma

Department: Computer Science and Engineering

Designation: Assistant Professor

Vision and Mission of the Institute

Vision of the Institution

To be a leading educational institution recognized for excellence in engineering education & research producing globally competent and socially responsible technocrats.

Mission of the Institution

IM1: To provide state-of-the-art infrastructural facilities that support achieving academic excellence.

IM2: To provide a work environment that is conducive for professional growth of faculty & staff.

IM3: To collaborate with industry for achieving excellence in research, consultancy and entrepreneurship development.

COURSE OUTCOMES

The students will be able to (Bloom's Taxonomy level):

CO1 Understand the evolution and various approaches of AI K2

CO2 Understand data storage, processing, visualization, and its use in regression, clustering, etc. K2

CO3 Understand natural language processing and chatbots K2

CO4 Understand the concepts of neural networks K2

CO5 Understand the concepts of face, object, speech recognition and robots K2

PROGRAMME OUTCOMES (POs)

Program

Outcome Statement

PO1

Engineering knowledge: Apply the knowledge of mathematics, science, engineering

fundamentals, and an engineering specialization to the solution of complex computer

engineering problems.

PO2 Problem analysis: Identify, formulate, review research literature, and analyze complex

computer engineering problems reaching substantiated conclusions using first principles of

mathematics, natural sciences, and engineering sciences.

PO3

Design/development of solutions: Design solutions for complex computer engineering

problems and design system components or processes that meet the specific needs with

appropriate considerations for the public health and safety, and the cultural, societal, and

environmental considerations.

PO4

Conduct investigations of complex problems: Use research-based knowledge and

research methods including design of experiments, analysis and interpretation of data, and

synthesis of the information to provide conclusions

PO5

Modern tool usage: Create, select, and apply appropriate techniques, resources, and

modern engineering and IT tools including prediction and modeling to complex

engineering activities with an understanding of the limitations

PO6

The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

PO7

Environment and sustainability: Understand the impact of the professional engineering

solutions in societal and environmental contexts, and demonstrate the knowledge of, and

need for sustainable development

PO8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.

PO9 Individual and team work: Function effectively as an individual, and as a member or

leader in diverse teams, and in multidisciplinary settings

PO10

Communications: Communicate effectively on complex engineering activities with the

engineering community and with society at large, such as, being able to comprehend and

write effective reports and design documentation, make effective presentations, and give

and receive clear instructions

PO11

Project management and finance: Demonstrate knowledge and understanding of the

engineering and management principles and apply these to one’s own work, as a member

and leader in a team, to manage projects and in multidisciplinary environments.

PO12 Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.

Syllabus of the Subject

References:

Elaine Rich, Kevin Knight, & Shivashankar B Nair, Artificial Intelligence, McGraw Hill, 3rd Edition.

CO-PO Mapping for Session 2021-22

COs Kx PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2

KMC101.1 K2 3 2 - - - - - - - - - 3 - -

KMC101.2 K2 3 3 2 - 3 - -

KMC101.3 K2 3 3 3 3 - -

KMC101.4 K2 3 3 - -

KMC101.5 K2 3 2 2 3 - -

KMC101 3 2.33 2.66 2 - - - - - - - 3 - -

Lecture/Teaching Plan || Artificial Intelligence for Engineers (KMC-101) || Odd Semester || 2021-22

Columns: Lecture Number | Content of Syllabus | Proposed Date of Lecture | Unit Number

CO1 Understand the evolution and various approaches of AI

1 The evolution of AI to the present (Unit 1: An overview to AI)

2 Various approaches to AI

3 What should all engineers know about AI?

4 Other emerging technologies

5 AI and ethical concerns

CO2 Understand data storage, processing, visualization, and its use in regression, clustering etc

6 History of Data (Unit 2: Data & Algorithms)

7 Data Storage and Importance of Data and its Acquisition

8 The Stages of Data Processing

9 Data Visualization

10 Regression, Prediction & Classification

11 Clustering & Recommender Systems

CO3 Understand natural language processing and chatbots

12 Speech recognition (Unit 3: Natural Language Processing)

13 Natural language understanding

14 Natural language generation

15 Chatbots

16 Machine Translation

CO4 Understand the concepts of neural networks

17 Deep Learning (Unit 4: Artificial Neural Networks)

18 Recurrent Neural Networks

19 Convolutional Neural Networks

20 The Universal Approximation Theorem

21 Generative Adversarial Networks

CO5 Understand the concepts of face, object, speech recognition and robots

22 Image and face recognition (Unit 5: Applications)

23 Object recognition

24 Speech Recognition besides Computer Vision

25 Robots

26 Applications

Content Beyond Syllabus

1. Some Application development using AI

2. Neural Network Applications

3. NLP Application Development

Innovative teaching-learning, Details of NPTEL / Other online resources used (OPTIONAL)

1. https://nptel.ac.in/courses/106/102/106102220/

2. https://nptel.ac.in/courses/106/105/106105158/

3. https://www.youtube.com/watch?v=0rrDqBIP2qU&list=PL-JvKqQx2AtfQ8cGyKsFE7Tj2FyB1yCkd

Unit-1

An overview to AI

"It is a branch of computer science by which we can create intelligent machines which can

behave like a human, think like humans, and able to make decisions."

Artificial Intelligence exists when a machine can have human based skills such as learning,

reasoning, and solving problems

With Artificial Intelligence you do not need to preprogram a machine to do some work, despite

that you can create a machine with programmed algorithms which can work with own

intelligence, and that is the awesomeness of AI.

Goals of Artificial Intelligence

Following are the main goals of Artificial Intelligence:

1. Replicate human intelligence

2. Solve Knowledge-intensive tasks

3. An intelligent connection of perception and action

4. Building a machine which can perform tasks that require human intelligence such as:

o Proving a theorem

o Playing chess

o Plan some surgical operation

o Driving a car in traffic

5. Creating some system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its user.

History of AI

Here is the history of AI during the 20th century:

Year Milestone / Innovation

1923 Karel Čapek's play "Rossum's Universal Robots" (RUR) opened in London; first use of the word "robot" in English.

1943 Foundations for neural networks laid.

1945 Isaac Asimov, a Columbia University alumnus, coined the term Robotics.

1950 Alan Turing introduced the Turing Test for the evaluation of intelligence and published Computing Machinery and Intelligence. Claude Shannon published a detailed analysis of chess playing as a search.

1956 John McCarthy coined the term Artificial Intelligence. Demonstration of the

first running AI program at Carnegie Mellon University.

1958 John McCarthy invents LISP programming language for AI.

1964 Danny Bobrow's dissertation at MIT showed that computers can understand

natural language well enough to solve algebra word problems correctly.

1965 Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.

1969 Scientists at Stanford Research Institute developed Shakey, a robot, equipped

with locomotion, perception, and problem solving.

1973 The Assembly Robotics group at Edinburgh University built Freddy, the

Famous Scottish Robot, capable of using vision to locate and assemble models.

1979 The first computer-controlled autonomous vehicle, Stanford Cart, was built.

1985 Harold Cohen created and demonstrated the drawing program, Aaron.

1990 Major advances in all areas of AI −

o Significant demonstrations in machine learning

o Case-based reasoning

o Multi-agent planning

o Scheduling

o Data mining, Web Crawler

o natural language understanding and translation

o Vision, Virtual Reality

o Games

1997 The Deep Blue Chess Program beats the then world chess champion, Garry

Kasparov.

2000 Interactive robot pets become commercially available. MIT displays Kismet, a robot

with a face that expresses emotions. The robot Nomad explores remote regions of Antarctica

and locates meteorites.

Applications of AI

AI has been dominant in various fields such as −

Gaming − AI plays crucial role in strategic games such as chess, poker, tic-tac-toe,

etc., where machine can think of large number of possible positions based on heuristic

knowledge.

Natural Language Processing − It is possible to interact with the computer that

understands natural language spoken by humans.

Expert Systems − There are some applications which integrate machine, software, and

special information to impart reasoning and advising. They provide explanation and

advice to the users.

Vision Systems − These systems understand, interpret, and comprehend visual input

on the computer. For example,

o A spying aeroplane takes photographs, which are used to figure out spatial

information or map of the areas.

o Doctors use clinical expert system to diagnose the patient.

o Police use computer software that can recognize the face of a criminal with the stored portrait made by a forensic artist.

Speech Recognition − Some intelligent systems are capable of hearing and

comprehending the language in terms of sentences and their meanings while a human

talks to it. It can handle different accents, slang words, noise in the background, change

in a human's voice due to cold, etc.

Handwriting Recognition − The handwriting recognition software reads the text

written on paper by a pen or on screen by a stylus. It can recognize the shapes of the

letters and convert them into editable text.

Intelligent Robots − Robots are able to perform the tasks given by a human. They

have sensors to detect physical data from the real world such as light, heat,

temperature, movement, sound, bump, and pressure. They have efficient processors,

multiple sensors and huge memory, to exhibit intelligence. In addition, they are

capable of learning from their mistakes and they can adapt to the new environment.

1.1. The evolution of AI to the present

Maturation of Artificial Intelligence (1943-1952)

o Year 1943: The first work which is now recognized as AI was done by Warren

McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.

o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning (a minimal sketch of the update rule follows this list).

o Year 1950: Alan Turing, an English mathematician, pioneered machine learning in 1950. He published "Computing Machinery and Intelligence", in which he proposed a test that checks a machine's ability to exhibit intelligent behaviour equivalent to human intelligence, now called the Turing test.
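The Hebbian learning rule mentioned above (1949) can be summarized as: increase the weight of a connection in proportion to the product of the activities of the two neurons it connects. The snippet below is a minimal illustrative sketch of that update; the learning rate, input pattern, and bias term are made-up values, not from these notes.

# Minimal, illustrative sketch of the Hebbian update rule:
# delta_w = eta * x * y  ("neurons that fire together, wire together").
import numpy as np

eta = 0.1                      # learning rate (assumed value)
x = np.array([1.0, 0.0, 1.0])  # pre-synaptic activations (toy input pattern)
w = np.zeros(3)                # connection strengths, initially zero

for _ in range(5):             # present the same pattern a few times
    y = float(np.dot(w, x) + 1.0)   # post-synaptic activation (a bias keeps it non-zero)
    w += eta * x * y                # Hebbian update: strengthen co-active connections

print(w)  # weights on the active inputs have grown; the inactive one stays at zero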

The birth of Artificial Intelligence (1952-1956)

o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems, and found new and more elegant proofs for some theorems.

o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was established as an academic field.

Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.

UNIT-2

Data & Algorithms

Data

In general, data is any set of characters that is gathered and translated for some purpose,

usually analysis. If data is not put into context, it doesn't do anything to a human or computer.

There are multiple types of data. Some of the more common types of data include the

following:

Single character

Boolean (true or false)

Text (string)

Number (integer or floating-point)

Picture

Sound

Video

In a computer's storage, data is a series of bits (binary digits) that have the value one or zero.

Data is processed by the CPU, which uses logical operations to produce new data (output)

from source data (input).
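As a small illustration of the data types listed above, and of the fact that in storage data ultimately becomes a series of bits, the following Python snippet can be tried; the values are arbitrary examples, not from these notes.

# Illustrative only: common data types and their underlying bit representation in Python.
flag = True                       # Boolean (true or false)
ch = 'A'                          # single character (a length-1 string in Python)
text = "artificial intelligence" # text (string)
count = 42                        # number (integer)
ratio = 3.14                      # number (floating-point)

print(type(flag), type(text), type(count), type(ratio))
print(bin(count))                 # '0b101010' -- the integer as a series of bits
print(ord(ch), bin(ord(ch)))      # a character is stored as a code point, again just bits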

Algorithms

Big data is data so large that it does not fit in the main memory of a single machine, and the

need to process big data by efficient algorithms arises in Internet search, network traffic

monitoring, machine learning, scientific computing, signal processing, and several other

areas. This course will cover mathematically rigorous models for developing such algorithms,

as well as some provable limitations of algorithms operating in those models. Some topics we

will cover include:

Sketching and Streaming. Extremely small-space data structures that can be updated on the fly in a fast-moving stream of input (a minimal sketch follows this list).

Dimensionality reduction. General techniques and impossibility results for reducing

data dimension while still preserving geometric structure.

Numerical linear algebra. Algorithms for big matrices (e.g. a user/product rating

matrix for Netflix or Amazon). Regression, low rank approximation, matrix

completion, ...

Compressed sensing. Recovery of (approximately) sparse signals based on few linear

measurements.

External memory and cache-obliviousness. Algorithms and data structures minimizing I/Os for data not fitting in memory but fitting on disk. B-trees, buffer trees, multiway mergesort, ...
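As a minimal sketch of the sketching-and-streaming idea above (the other topics are too large for a short example), here is a toy Count-Min sketch: a fixed-size structure that is updated on the fly for each item of a stream and queried later for approximate counts. The class name, table sizes, and sample stream are illustrative assumptions, not part of the syllabus.

import hashlib

class CountMinSketch:
    """Tiny Count-Min sketch: approximate frequency counts in O(width * depth) space."""
    def __init__(self, width=1000, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for row in range(self.depth):
            h = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
            yield row, int(h, 16) % self.width

    def add(self, item):
        for row, col in self._buckets(item):
            self.table[row][col] += 1

    def estimate(self, item):
        # The true count is never larger than this estimate; taking the minimum
        # over the rows limits the error introduced by hash collisions.
        return min(self.table[row][col] for row, col in self._buckets(item))

sketch = CountMinSketch()
for word in ["ai", "data", "ai", "nlp", "ai"]:   # a toy "stream"
    sketch.add(word)
print(sketch.estimate("ai"))    # 3 (approximate, but never an underestimate)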

2.1 History of Data

The history of big data starts many years before the present buzz around Big Data. Seventy years ago, the first attempt to quantify the growth rate of data in terms of its volume was made; this has popularly been known as the "information explosion". We will be covering some major milestones in the evolution of "big data".

1944:

Fremont Rider, based upon his observation, speculated that Yale Library in 2040 will have

“approximately 200,000,000 volumes, which will occupy over 6,000 miles of shelves…

[requiring] a cataloging staff of over six thousand persons.”

He did not predict the digitization of libraries but predicted the information explosion.

From 1944 to 1980, many articles and presentations were presented that observed the

‘information explosion’ and the arising needs for storage capacity.

1980:

In 1980, the sociologist Charles Tilly used the term big data in one sentence, "none of the big questions has actually yielded to the bludgeoning of the big-data people.", in his article "The old-new social history and the new old social history". But the term as used in this sentence does not carry the present meaning of Big Data.

Now, moving fast to 1997-1998 where we see the actual use of big data in its present context.

1997:

In 1997, Michael Cox and David Ellsworth published the article "Application-controlled demand paging for out-of-core visualization" in the Proceedings of the IEEE 8th conference on Visualization. The article uses the term big data in the sentence: "Visualization provides an

interesting challenge for computer systems: data sets are generally quite large, taxing the

capacities of main memory, local disk, and even remote disk. We call this the problem of big

data. When data sets do not fit in main memory (in core), or when they do not fit even on local

disk, the most common solution is to acquire more resources.”.

It was the first article in the ACM digital library that uses the term big data with its modern

context.

1998:

In 1998, John Mashey, who was Chief Scientist at SGI, presented a paper titled “Big Data…

and the Next Wave of Infrastress.” at a USENIX meeting. John Mashey used this term in his

various speeches and that’s why he got the credit for coining the term Big Data.

2000:

In 2000, Francis Diebold presented a paper titled “'Big Data' Dynamic Factor Models for

Macroeconomic Measurement and Forecasting” to the Eighth World Congress of the

Econometric Society.

2001:

In 2001, Doug Laney, who was an analyst with the Meta Group (Gartner), presented a research

paper titled “3D Data Management: Controlling Data Volume, Velocity, and Variety.” The

3V’s have become the most accepted dimensions for defining big data.

2005:

In 2005, Tim O’Reilly published his groundbreaking article “What is Web 2.0?”. In this article,

Tim O’Reilly states that the “data is the next Intel inside”. O’Reilly Media explicitly used the

term ‘Big Data’ to refer to the large sets of data which are almost impossible to handle and

process using the traditional business intelligence tools.

In 2005, Yahoo used Hadoop to process petabytes of data; Hadoop has since been made open source under the Apache Software Foundation, and many companies now use it to crunch Big Data.

So we can say that 2005 is the year the Big Data revolution truly began, and the rest, as they say, is history.

2.2 Data Storage And Importance of Data and its Acquisition

The systems, used for data acquisition are known as data acquisition systems. These data

acquisition systems will perform the tasks such as conversion of data, storage of data,

transmission of data and processing of data.

Data acquisition systems consider the following analog signals.

Analog signals, which are obtained from the direct measurement of electrical quantities such as DC & AC voltages, DC & AC currents, resistance, etc.

Analog signals, which are obtained from transducers such as LVDT, thermocouples, etc.

Types of Data Acquisition Systems

Data acquisition systems can be classified into the following two types.

Analog Data Acquisition Systems

Digital Data Acquisition Systems

Now, let us discuss these two types of data acquisition systems one by one.

Analog Data Acquisition Systems

The data acquisition systems, which can be operated with analog signals are known as analog

data acquisition systems. Following are the blocks of analog data acquisition systems.

Transducer − It converts physical quantities into electrical signals.

Signal conditioner − It performs the functions like amplification and selection of

desired portion of the signal.

Display device − It displays the input signals for monitoring purpose.

Graphic recording instruments − These can be used to make the record of input data

permanently.

Magnetic tape instrumentation − It is used for acquiring, storing & reproducing of

input data.

Unit 3

Natural Language Processing

3.1 Speech recognition

3.2 Natural language understanding

3.3 Natural language generation

3.4 Chatbots

3.5 Machine Translation

Natural Language Processing

Natural Language Processing (NLP) refers to the AI method of communicating with intelligent systems using a natural language such as English.

Processing of natural language is required when you want an intelligent system like a robot to perform as per your instructions, when you want to hear a decision from a dialogue-based clinical expert system, etc.

The field of NLP involves making computers perform useful tasks with the natural languages humans use. The input and output of an NLP system can be −

Speech

Written Text

Components of NLP

There are two components of NLP as given −

Natural Language Understanding (NLU)

Understanding involves the following tasks −

Mapping the given input in natural language into useful representations.

Analyzing different aspects of the language.

Natural Language Generation (NLG)

It is the process of producing meaningful phrases and sentences in the form of natural language

from some internal representation.

It involves the following stages (a minimal sketch follows this list) −

Text planning − It includes retrieving the relevant content from knowledge base.

Sentence planning − It includes choosing required words, forming meaningful

phrases, setting tone of the sentence.

Text Realization − It is mapping sentence plan into sentence structure.
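A rough, illustrative sketch of these three stages is given below; the knowledge base, templates, and function names are invented for the example and are not part of the course material.

# Toy NLG pipeline: text planning -> sentence planning -> text realization.
knowledge_base = {"city": "Lucknow", "temp_c": 34, "sky": "clear"}

def text_planning(kb):
    # Retrieve the relevant content from the knowledge base.
    return {"city": kb["city"], "temp_c": kb["temp_c"], "sky": kb["sky"]}

def sentence_planning(content):
    # Choose the required words, form meaningful phrases, set the tone of the sentence.
    phrases = [f"in {content['city']}", f"the sky is {content['sky']}",
               f"the temperature is {content['temp_c']} degrees Celsius"]
    return {"subject": "Today", "phrases": phrases}

def text_realization(plan):
    # Map the sentence plan into an actual sentence structure.
    return f"{plan['subject']} {plan['phrases'][0]}, {plan['phrases'][1]} and {plan['phrases'][2]}."

print(text_realization(sentence_planning(text_planning(knowledge_base))))
# Today in Lucknow, the sky is clear and the temperature is 34 degrees Celsius.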

NLU is harder than NLG.

Difficulties in NLU

NL has an extremely rich form and structure.

It is very ambiguous. There can be different levels of ambiguity −

Lexical ambiguity − It is at a very primitive level, such as the word level. For example, should the word "board" be treated as a noun or a verb?

Syntax Level ambiguity − A sentence can be parsed in different ways.

For example, "He lifted the beetle with red cap." − Did he use a cap to lift the beetle, or did he lift a beetle that had a red cap?

Referential ambiguity − Referring to something using pronouns. For example, Rima

went to Gauri. She said, “I am tired.” − Exactly who is tired?

One input can mean different meanings.

Many inputs can mean the same thing.

NLP Terminology

Phonology − It is the study of organizing sound systematically.

Morphology − It is the study of the construction of words from primitive meaningful units.

Morpheme − It is a primitive unit of meaning in a language.

Syntax − It refers to arranging words to make a sentence. It also involves determining

the structural role of words in the sentence and in phrases.

Semantics − It is concerned with the meaning of words and how to combine words

into meaningful phrases and sentences.

Pragmatics − It deals with using and understanding sentences in different situations

and how the interpretation of the sentence is affected.

Discourse − It deals with how the immediately preceding sentence can affect the

interpretation of the next sentence.

World Knowledge − It includes the general knowledge about the world.

Steps in NLP

There are five general steps (a short sketch of the first steps follows this list) −

Lexical Analysis − It involves identifying and analyzing the structure of words. Lexicon of a language means the collection of words and phrases in a language. Lexical analysis is dividing the whole chunk of text into paragraphs, sentences, and words.

Syntactic Analysis (Parsing) − It involves analysis of words in the sentence for

grammar and arranging words in a manner that shows the relationship among the

words. A sentence such as “The school goes to boy” is rejected by an English syntactic

analyzer.

Semantic Analysis − It draws the exact meaning or the dictionary meaning from the

text. The text is checked for meaningfulness. It is done by mapping syntactic structures

and objects in the task domain. The semantic analyzer disregards sentences such as “hot

ice-cream”.

Discourse Integration − The meaning of any sentence depends upon the meaning of

the sentence just before it. In addition, it also brings about the meaning of immediately

succeeding sentence.

Pragmatic Analysis − During this step, what was said is re-interpreted based on what it actually meant. It involves deriving those aspects of language which require real-world

knowledge.
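The first steps above can be tried directly. The sketch below assumes the third-party NLTK library (with its tokenizer and POS-tagger data downloaded) and only illustrates lexical analysis plus the grammatical tagging that feeds syntactic analysis.

# Illustrative sketch of lexical analysis and the input to syntactic analysis using NLTK.
# Assumes: pip install nltk, plus nltk.download('punkt') and nltk.download('averaged_perceptron_tagger').
import nltk

text = "The boy goes to school. He likes artificial intelligence."

sentences = nltk.sent_tokenize(text)                 # lexical analysis: split into sentences
words = [nltk.word_tokenize(s) for s in sentences]   # ...and split each sentence into words
tagged = [nltk.pos_tag(w) for w in words]            # part-of-speech tags used by a parser

print(sentences)
print(tagged[0])   # e.g. [('The', 'DT'), ('boy', 'NN'), ('goes', 'VBZ'), ...]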

3.1 Speech Recognition

Speech recognition, also known as automatic speech recognition (ASR), computer speech

recognition, or speech-to-text, is a capability which enables a program to process human speech

into a written format. While it’s commonly confused with voice recognition, speech

recognition focuses on the translation of speech from a verbal format to a text one whereas

voice recognition just seeks to identify an individual user’s voice.
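As a hedged illustration (not part of the original notes), the snippet below uses the third-party SpeechRecognition package to turn a short audio file into text; the file name 'sample.wav' and the choice of Google's free web recognizer are assumptions made for the example.

# Illustrative speech-to-text sketch. Assumes: pip install SpeechRecognition
# and an audio file named 'sample.wav' in the working directory (hypothetical).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)          # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)  # send the audio to the free Google web API
    print("Recognized text:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print("Could not reach the recognition service:", err)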

Key features of effective speech recognition

Unit 4

Artificial Neural Networks

4.1 Deep Learning

4.2 Recurrent Neural Networks

4.3 Convolutional Neural Networks

4.4 The Universal Approximation Theorem

4.5 Generative Adversarial Networks

Artificial Neural Networks

Artificial intelligence (AI), also known as machine intelligence, is a branch of computer science that

aims to imbue software with the ability to analyze its environment using either predetermined rules and

search algorithms, or pattern recognizing machine learning models, and then make decisions based on

those analyses.

Basic Structure of ANNs

The idea of ANNs is based on the belief that the working of the human brain, by making the right connections, can be imitated using silicon and wires as living neurons and dendrites.

The human brain is composed of 86 billion nerve cells called neurons. They are connected to thousands of other cells by axons. Stimuli from the external environment or inputs from sensory organs are accepted by dendrites. These inputs create electric impulses, which quickly travel through the neural network. A neuron can then send the message on to another neuron to handle the issue, or not send it forward.

ANNs are composed of multiple nodes, which imitate biological neurons of human brain.

The neurons are connected by links and they interact with each other. The nodes can take

input data and perform simple operations on the data. The result of these operations is passed

to other neurons. The output at each node is called its activation or node value.

Each link is associated with a weight. ANNs are capable of learning, which takes place by altering weight values.
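A minimal numerical sketch of this idea follows: each node computes a weighted sum of its inputs and passes it through an activation function, and learning would consist of altering the weights. All numbers here are made up for illustration.

# One forward pass through a tiny feedforward ANN (2 inputs -> 2 hidden nodes -> 1 output).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])                 # input activations
W1 = np.array([[0.1, 0.4],               # weights on links from inputs to hidden nodes
               [0.7, 0.2]])
W2 = np.array([0.3, 0.9])                # weights from hidden nodes to the output node

hidden = sigmoid(W1 @ x)                 # activation (node value) of each hidden node
output = sigmoid(W2 @ hidden)            # activation of the output node
print(hidden, output)                    # learning would now alter W1 and W2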

Types of Artificial Neural Networks

There are two Artificial Neural Network topologies − FeedForward and Feedback.

FeedForward ANN

In this ANN, the information flow is unidirectional. A unit sends information to another unit

from which it does not receive any information. There are no feedback loops. They are used

in pattern generation/recognition/classification. They have fixed inputs and outputs.

FeedBack ANN

Here, feedback loops are allowed. They are used in content addressable memories.

4.1 Deep Learning

Deep learning is a branch of machine learning which is completely based on artificial neural networks; since a neural network mimics the human brain, deep learning is also a kind of mimicry of the human brain. In deep learning, we do not need to explicitly program everything. The concept of deep learning is not new; it has been around for many years. It is hyped nowadays because earlier we did not have that much processing power or that much data.

Architectures :

1. Deep Neural Network – It is a neural network with a certain level of complexity (having multiple hidden layers between the input and output layers). They are capable of modeling and processing non-linear relationships (a minimal sketch follows this list).

2. Deep Belief Network (DBN) – It is a class of Deep Neural Network; it is a multi-layer belief network.

Steps for performing DBN :

a. Learn a layer of features from visible units using Contrastive Divergence algorithm.

b. Treat activations of previously trained features as visible units and then learn

features of features.

c. Finally, the whole DBN is trained when the learning for the final hidden layer is

achieved.

3. Recurrent Neural Network – It performs the same task for every element of a sequence and allows for parallel and sequential computation, similar to the human brain (a large network of connected neurons).
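As a hedged sketch of architecture 1 above (a deep neural network with multiple hidden layers), assuming TensorFlow/Keras is installed; the layer sizes and the toy data are arbitrary illustrations, not from the syllabus.

# Illustrative deep neural network with two hidden layers (Keras assumed installed).
import numpy as np
import tensorflow as tf

# Toy data: 200 samples with 10 features each, binary labels (made up for illustration).
X = np.random.rand(200, 10)
y = (X.sum(axis=1) > 5).astype(int)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),  # hidden layer 1
    tf.keras.layers.Dense(8, activation="relu"),                      # hidden layer 2
    tf.keras.layers.Dense(1, activation="sigmoid")                    # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3]))   # predicted probabilities for the first three samples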

Unit 5

Applications

5.1 Image and face recognition

5.2 Object recognition

5.3 Speech Recognition besides Computer Vision

5.4 Robots

5.5 Applications

5.1 Image and face recognition

What is Image Recognition?

Image recognition is the ability of a computer powered camera to identify and detect objects

or features in a digital image or video. It is a method for capturing, processing, examining, and analyzing images. To identify and detect images, computers use machine vision technology

that is powered by an artificial intelligence system. Image recognition is a term for computer

technologies that can recognize certain people, animals, objects or other targeted subjects

through the use of algorithms and machine learning concepts. The term “image recognition” is

connected to “computer vision,” which is an overarching label for the process of training

computers to “see” like humans, and “image processing,” which is a catch-all term for

computers doing intensive work on image data.

Image recognition is done in many different ways, but many of the top techniques involve the

use of convolutional neural networks to filter images through a series of artificial neuron layers.

The convolutional neural network was specifically set up for image recognition and similar

image processing. Through a combination of techniques such as max pooling, stride

configuration and padding, convolutional neural filters work on images to help machine

learning programs get better at identifying the subject of the picture.

Image recognition has come a long way, and is now the topic of a lot of controversy and debate

in consumer spaces. Social media giant Facebook has begun to use image recognition

aggressively, as has tech giant Google in its own digital spaces. There is a lot of discussion

about how rapid advances in image recognition will affect privacy and security around the

world.

How does image recognition work?

How do we train a computer to tell one image apart from another image? The process of an

image recognition model is no different from the process of machine learning modeling. I list the modeling process for image recognition in Steps 1 through 4 (a minimal sketch follows the steps).

Step 1: Extract pixel features from an image

Step 2: Prepare labeled images to train the model

Step 3: Train the model to be able to categorize images

Step 4: Recognize (or predict) a new image to be one of the categories
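A hedged end-to-end sketch of these four steps follows, using a small convolutional network (convolution, max pooling, and padding as described earlier) on the MNIST digit images and assuming TensorFlow/Keras is available; the hyper-parameters are illustrative only.

# Steps 1-4 of image recognition on MNIST with a small CNN (Keras assumed installed).
import tensorflow as tf

# Steps 1-2: labeled images; pixel values scaled to [0, 1] serve as the raw features.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Step 3: train a model that learns to categorize the images (conv + pooling + dense).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),               # max pooling reduces the spatial size
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

# Step 4: recognize (predict) the category of a new image.
print(model.predict(x_test[:1]).argmax())   # predicted digit for the first test image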

Face Recognition

Face recognition is a method of identifying or verifying the identity of an individual using their

face. Face recognition systems can be used to identify people in photos, video, or in real-time.

Law enforcement may also use mobile devices to identify people during police stops.

But face recognition data can be prone to error, which can implicate people for crimes they

haven’t committed. Facial recognition software is particularly bad at recognizing African

Americans and other ethnic minorities, women, and young people, often misidentifying or

failing to identify them, disparately impacting certain groups.

Additionally, face recognition has been used to target people engaging in protected speech. In

the near future, face recognition technology will likely become more ubiquitous. It may be used

to track individuals’ movements out in the world like automated license plate readers track

vehicles by plate numbers. Real-time face recognition is already being used in other

countries and even at sporting events in the United States.

How Face Recognition Works

Face recognition systems use computer algorithms to pick out specific, distinctive details about

a person’s face. These details, such as distance between the eyes or shape of the chin, are then

converted into a mathematical representation and compared to data on other faces collected in

a face recognition database. The data about a particular face is often called a face template and

is distinct from a photograph because it’s designed to only include certain details that can be

used to distinguish one face from another.

Some face recognition systems, instead of positively identifying an unknown person, are

designed to calculate a probability match score between the unknown person and specific face

templates stored in the database. These systems will offer up several potential matches, ranked

in order of likelihood of correct identification, instead of just returning a single result.
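A small numerical sketch of this matching step is shown below; the face templates are made-up vectors (a real system would obtain them from a face-embedding model), and the unknown face is compared against a toy database, with candidates returned ranked by similarity.

# Ranking database face templates by cosine similarity to an unknown template.
import numpy as np

database = {                       # made-up 4-dimensional "face templates"
    "alice": np.array([0.9, 0.1, 0.3, 0.7]),
    "bob":   np.array([0.2, 0.8, 0.5, 0.1]),
    "carol": np.array([0.85, 0.15, 0.35, 0.65]),
}
unknown = np.array([0.88, 0.12, 0.30, 0.70])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(unknown, vec) for name, vec in database.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")   # several potential matches, ranked by similarity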

Face recognition systems vary in their ability to identify people under challenging conditions

such as poor lighting, low quality image resolution, and suboptimal angle of view (such as in

a photograph taken from above looking down on an unknown person).

When it comes to errors, there are two key concepts to understand:

Questions Bank

Unit 1

1. What is Intelligence?

2. Describe the four categories under which AI is classified with examples.

3. Define Artificial Intelligence.

4. List the fields that form the basis for AI.

5. What are the various approaches to AI?

6. What are emerging technologies? Give some examples.

7. What is the importance of ethical issue in AI?

8. Write the history of AI.

9. What are applications of AI?

10. What should all engineers know about AI?

Unit 2

1. What is Data and Big Data?

2. What is an algorithm and what are its properties?

3. Explain data and its acquisition.

4. What are the stages involved in data processing?

5. Define data visualization.

6. How many types of data visualization are there?

7. What is data classification and Regression?

8. What is data clustering? Explain any one method in detail.

9. What are recommender systems? How do they work in OTT platforms?

10. How many types of data acquisition systems are there?

Unit 3

1. What is Natural Language Processing? Discuss with some applications.

2. List any two real-life applications of Natural Language Processing.

3. What is speech recognition?

4. Explain natural language understanding and natural language generation.

5. Show the working of chatbots.

6. Analyse how statistical methods can be used in machine translation.

7. Describe the different components of a typical conversational agent.

Unit 4

1. Define ANN and Neural computing.

2. List some applications of ANNs.

3. What are the design parameters of ANN?

4. Explain the three classifications of ANNs based on their functions. Explain them in

brief.

5. Write the differences between conventional computers and ANN.

6. What are the applications of Machine Learning? When is it used?

7. What is deep learning? Explain its uses, applications, and history.

8. What Are the Applications of a Recurrent Neural Network (RNN)?

9. What Are the Different Layers on CNN?

10. Explain Generative Adversarial Network.

Unit 5

1. What is the Working of Image Recognition and How it is Used?

2. What is facial recognition - and how sinister is it?

3. What is object recognition in image processing?

4. What is speech recognition in artificial intelligence?

5. What's the Difference Between Robotics and Artificial Intelligence?

6. What is robotics?

7. What are the applications of artificial intelligence?

Assignments

Unit 1

11. Describe the four categories under which AI is classified with examples.

12. List the fields that form the basis for AI.

13. What are emerging technologies? Give some examples.

14. Write the history of AI.

15. What should all engineers know about AI?

Unit 2

11. What is Data and Big Data?

12. Explain data and its acquisition.

13. How many types of data visualization are there?

14. What is data classification and Regression?

15. What are recommender systems? How do they work in OTT platforms?

Unit 3

8. What is Natural Language Processing? Discuss with some applications.

9. List any two real-life applications of Natural Language Processing.

10. What is speech recognition?

11. Explain natural language understanding and natural language generation.

12. Show the working of chatbots.

13. Analyse how statistical methods can be used in machine translation.

14. Describe the different components of a typical conversational agent.

Unit 4

11. Explain the three classifications of ANNs based on their functions. Explain them in

brief.

12. What are the applications of Machine Learning? When is it used?

13. What is deep learning? Explain its uses, applications, and history.

14. What Are the Applications of a Recurrent Neural Network (RNN)?

15. Explain Generative Adversarial Network.

Unit 5

8. What is the Working of Image Recognition and How it is Used?

9. What is facial recognition - and how sinister is it?

10. What is speech recognition in artificial intelligence?

11. What's the Difference Between Robotics and Artificial Intelligence?

12. What are the applications of artificial intelligence?

Video Recording Links (Unit-wise):

Vision, Mission, CO, Syllabus, Introduction: https://web.microsoftstream.com/video/a1a1ec7c-99e2-4fe8-b633-0f23d123d607

The evolution of AI to the present: https://web.microsoftstream.com/video/f6bb2d74-bf60-4ad9-869f-a6c641e9c006

Various approaches to AI, What should all engineers know about AI?: https://web.microsoftstream.com/video/996d9f82-f139-442e-8efe-0585b3bd2b8a

Other emerging technologies, AI and ethical concerns: https://web.microsoftstream.com/video/3e7e6eb4-eb59-43ee-ae5c-48b8ed982290

Data and algorithms, History of Data: https://web.microsoftstream.com/video/f876d7ee-650d-4b47-a0d4-579af5206431

Data Storage and Importance of Data and its Acquisition, The Stages of Data Processing: https://web.microsoftstream.com/video/a16b8bf6-08fe-40d5-a14b-10f7b393905b

Data Visualization, Regression: https://web.microsoftstream.com/video/033098e2-2c33-4395-b2b2-a7257feeccf3

Classification: https://web.microsoftstream.com/video/580d73e2-2b85-4457-b497-0cd96f8dc506

Recommender Systems: https://web.microsoftstream.com/video/991e2a98-bf78-464b-b2f8-4b576486edee

Classification Example: https://web.microsoftstream.com/video/1b935ca4-fed9-44c9-8a42-d463324f91d1

Revision: https://web.microsoftstream.com/video/9dbec58c-a4ad-48fa-941b-a9170c08f172

Natural Language Processing: https://web.microsoftstream.com/video/ce86f61f-f0e8-47fd-ab60-ecc1143eefbe

Speech recognition: https://web.microsoftstream.com/video/1b935ca4-fed9-44c9-8a42-d463324f91d1

Natural language generation, Chatbots, Machine Translation: https://web.microsoftstream.com/video/572340d8-c63a-446f-b732-b4f2ae438c1f

Deep Learning, Recurrent Neural Networks: https://web.microsoftstream.com/video/c93a6814-3933-442a-b5cd-fc9576f67a3e

Convolutional Neural Networks, The Universal Approximation Theorem: https://web.microsoftstream.com/video/c93a6814-3933-442a-b5cd-fc9576f67a3e

Generative Adversarial Networks: https://web.microsoftstream.com/video/c93a6814-3933-442a-b5cd-fc9576f67a3e

Image and face recognition, Object recognition: https://web.microsoftstream.com/video/9ea4ce33-22e0-4417-aedf-0e79b3f82810

Speech Recognition besides Computer Vision, Robots, Applications: https://web.microsoftstream.com/video/9ea4ce33-22e0-4417-aedf-0e79b3f82810