
LIST OF ABBREVIATIONS

IV Intelligent Vehicle

AI Artificial Intelligence

HI Human Intelligence

ML Machine Learning

NN Neural Network

ANN Artificial Neural Network

GPS Global Positioning System

RADAR Radio Detection and Ranging

LIDAR Light Detection and Ranging

QVGA Quarter Video Graphics Array

USB Universal Serial Bus

RC Remote Control

LASER Light Amplification by Stimulated Emission of Radiation

CV Computer Vision

Open-CV Open Source Computer Vision

ADAS Advanced Driver Assistance System

IMU Inertial Measurement Unit

ITS Intelligent Transportation System

IBM International Business Machines

EBL Explanation Based Learning


OCR Optical Character Recognition

MRI Magnetic Resonance Imaging

3D 3-Dimensional

NHTSA National Highway Traffic Safety Administration

ACC Adaptive Cruise Control

HDMI High Definition Multimedia Interface

Wi-Fi Wireless Fidelity

GPIO General Purpose Input/Output

TCP Transmission Control Protocol

IP Internet Protocol

NumPy Numerical Python

ROI Region of Interest

LDW Lane Departure Warning

MLP Multi-Layer Perceptron


LIST OF FIGURES

Fig: 1.5(a) Block diagram of Deep learning and computer vision ............................. 6

Fig: 3.1.1.1(a) Raspberry PI module .............................................................................. 20

Fig: 3.1.1.1(b) Camera port in Raspberry Pi ................................................................... 21

Fig: 3.1.1.1(c) Raspberry PI command prompt .............................................................. 21

Fig: 3.1.1.1(d) Voltage divider ....................................................................................... 22

Fig: 3.1.1.1(e) Raspberry PI voltage divider .................................................................. 23

Fig: 3.1.1.1(f) HC-SR04 pins ......................................................................................... 23

Fig: 3.1.1.1(g) Raspberry PI and HC-SR04 VCC & GND connections ......................... 24

Fig: 3.1.1.1(h) Raspberry PI and HC-SR04 ECHO pin connections .............................. 24

Fig: 3.1.1.1(i) Connections on breadboard .................................................................... 25

Fig: 3.1.1.1(j) Voltage divider on breadboard ............................................................... 25

Fig: 3.1.1.1(k) Raspberry PI & HC-SR04 complete connections ................................... 25

Fig: 3.1.1.2(a) PI-Cam module ....................................................................................... 26

Fig: 3.1.1.3(a) Arduino module ...................................................................................... 27

Fig: 3.1.1.4(a) Ultrasonic sensor HC-SR04 .................................................................... 28

Fig: 3.1.1.4(b) Component attached on RC-Car ............................................................. 29

Fig: 3.1.2.2(a) Serial interface between Arduino & RC-Controller ............................... 30

Fig: 3.2.1(a) Opto-isolator, interfacing Arduino and RC-Controller ........................... 32

Fig: 3.2.1(b) Complete anatomy of the project ............................................................ 33

Fig: 3.3.2(a) Stop-sign detection and distance measurement ...................................... 35

Fig: 3.3.3.2(a) Traffic green light detection ................................................................... 37

Fig: 3.3.3.2(b) Traffic red light detection ....................................................................... 37

Fig: 3.3.3.2(c) Flow chart of Haar cascade training .................................................... 38

Fig: 3.4(a) Flow-chart of backpropagation ............................................................... 40


Fig: 4.1(a) Pycharm IDE .......................................................................................... 41

Fig: 4.1(b) Arduino IDE ........................................................................................... 41

Fig: 4.1(c) Python shell of Raspberry PI .................................................................. 42

Fig: 4.1(d) Video streaming received on Laptop ...................................................... 44

Fig: 4.1(e) Console screen showing successful connection ...................................... 44

Fig: 4.1(f) Green light detection results ................................................................... 45

Fig: 4.1(g) Forward command of green light on console screen .............................. 45

Fig: 4.1(h) Red traffic light detection results ............................................................ 46

Fig: 4.1(i) Recognition of red light on Console screen ............................................ 46

Fig: 4.1(j) Obstacle car ahead .................................................................................. 47

Fig: 4.1(k) Obstacle detected on Console screen with distance ................................ 47

Fig: 4.1(l) Stop sign is at the distance of 27.1cm ..................................................... 48

Fig: 4.1(m) Forward command on console screen ..................................................... 48

Fig: 4.1(n) Stop sign detection .................................................................................. 49

Fig: 4.1(o) Stop time calculation on console screen ................................................. 49

Fig: 4.1(p) Stop sign is in less than 25cm ................................................................. 50

Fig: 4.1(q) Forward command after 5 seconds ......................................................... 50


Table of Contents

DEDICATION.................................................................................................................. iii

CERTIFICATE ................................................................................................................ iv

ACKNOWLEDGMENT ...................................................................................................v

ABSTRACT ...................................................................................................................... vi

LIST OF ABBREVIATIONS ........................................................................................ vii

LIST OF FIGURES ......................................................................................................... ix

CHAPTER-1 INTRODUCTION ......................................................................................1

1.1 Overview ........................................................................................................................1

1.2 Motivation ......................................................................................................................2

1.2.1 Evidence of technology leading to intelligent vehicle ....................................2

1.2.2 Need of intelligent vehicle ..............................................................................3

1.3 Brief introduction to project...........................................................................................4

1.4 Aims and Objectives ......................................................................................................4

1.5 Approaches ....................................................................................................................5

CHAPTER-2 LITERATURE REVIEW .........................................................................7

2.1 Image processing ...........................................................................................................7

2.2 Machine learning ...........................................................................................................9

2.2.1 Supervised algorithm ....................................................................................10

2.2.2 Unsupervised algorithm ................................................................................11

2.3 Computer Vision ..........................................................................................................11

2.4 Road detection .............................................................................................................13

2.5 Object detection and recognition .................................................................................15

2.6 Collision detection .......................................................................................................17


2.7 Human error .................................................................................................................18

2.8 Summary ......................................................................................................................19

CHAPTER-3 PROJECT METHODOLOGY ...............................................................20

3.1 System design ..............................................................................................................20

3.1.1 Modules.........................................................................................................20

3.1.1.1 Raspberry PI...................................................................................20

3.1.1.2 PI-Cam ...........................................................................................26

3.1.1.3 Arduino ..........................................................................................26

3.1.1.4 Ultrasonic sensor ............................................................................27

3.1.2 Interfaces .......................................................................................................29

3.1.2.1 Socket interface using TCP ............................................................29

3.1.2.2 Serial interface ...............................................................................30

3.1.3 Data ...............................................................................................................31

3.1.3.1 Video frames ..................................................................................31

3.1.3.2 Ultrasonic sensor data ....................................................................31

3.2 Methodology ................................................................................................................31

3.2.1 System hardware ...........................................................................................31

3.2.2 System software ............................................................................................33

3.3 Algorithms ...................................................................................................................34

3.3.1 Obstacle detection .........................................................................................34

3.3.2 Stop sign detection and recognition ..............................................................34

3.3.3 Traffic light detection and recognition .........................................................36

3.3.3.1 Brightest spot .................................................................................37

3.3.3.2 Position of brightest spot ...............................................................37

3.4 Self-learning using neural network ..............................................................................38


CHAPTER-4 RESULTS AND DISCUSSIONS ............................................................41

4.1 System based division and results ...............................................................................41

4.2 Problems encountered ..................................................................................................51

4.2.1 Software problems ........................................................................................51

4.2.2 Hardware problems .......................................................................................51

4.3 Limitations ...................................................................................................................52

4.4 Conclusion ...................................................................................................................52

4.5 Future Work .................................................................................................................53

REFERENCES .................................................................................................................54


CHAPTER 1

INTRODUCTION TO INTELLIGENT VEHICLES

1.1 OVERVIEW

Vehicles whose functions are enriched with attributes that increase safety, environmental awareness, effectiveness, comfort and prestige, so that they can play a key role in creating optimal mobility, are now being designed and manufactured for general use. Throughout the full spectrum of transport, intelligent vehicles will soon exempt people from the routine of driving. These vehicles also contribute to making travelling safer.

With the rapid growth of technology, the possibilities for making driving intelligent have increased, and AI and ML algorithms are being applied in vehicles to make them intelligent. Our project implementation provides real-time prediction of the steering of a vehicle using image processing.

In this thesis report, we use an ANN to predict the left, right, forward and reverse orientation of a vehicle. The IV is first trained on a track manually: the PI camera captures real-time video frames of the track, and the Raspberry PI sends these frames to a laptop. During training, the left, right, forward and reverse (stop) keys are pressed manually to drive the vehicle on the track while the PI camera captures video frames, and each frame is labeled with the key pressed at that instant. Training produces the trained parameters from which the vehicle learns, and the neural network then takes its decisions according to the labeled video frames. This thesis focuses entirely on this application of deep learning, through which fully autonomous vehicles are also possible.
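As a hedged illustration of this labeling step, the following is a minimal sketch, assuming pygame for keyboard capture; get_next_frame() is a hypothetical stand-in for the video stream received from the Raspberry PI:

    import numpy as np
    import pygame

    # Map arrow keys to driving labels (forward, reverse, left, right).
    LABELS = {pygame.K_UP: 0, pygame.K_DOWN: 1, pygame.K_LEFT: 2, pygame.K_RIGHT: 3}

    pygame.init()
    pygame.display.set_mode((320, 240))  # a window is needed to receive key events

    frames, labels = [], []
    collecting = True
    while collecting:
        for event in pygame.event.get():
            if event.type == pygame.KEYDOWN and event.key in LABELS:
                frame = get_next_frame()  # hypothetical: current frame from the PI camera
                frames.append(np.asarray(frame).ravel())  # flatten for the ANN input layer
                labels.append(LABELS[event.key])
            elif event.type == pygame.QUIT:
                collecting = False

    # Save the labeled dataset for neural network training.
    np.savez('training_data.npz', frames=np.array(frames), labels=np.array(labels))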


1.2 MOTIVATION

1.2.1 EVIDENCE OF TECHNOLOGIES LEADING TO INTELLIGENT VEHICLES

IVs use a variety of techniques to detect their surroundings, such as GPS, self-steering, digital maps, image recognition, cameras and ultrasound, RADAR and LIDAR. The technologies that give rise to intelligent vehicles are:

• GPS:

GPS is one of the technologies used by self-driving vehicles. Many people who have used GPS navigators for years can attest to their reliability; in fact, some people cannot go anywhere within the city without one, and the navigator also displays their position on the map. Chances are that your phone has a GPS chip, and you can watch your position on a map change as you walk or drive.

• Self-steering system

Steering systems use cameras that watch road markings, together with radar and laser sensors that track other objects.

• Digital maps

The process by which a collection of data is compiled and formatted into a virtual map.

• Image recognition

Image recognition is very important for achieving a true understanding of the environment, because it is the only way to see indicators such as traffic lights, brake lights and turn signals.

• Camera and ultrasonic

Cameras receive visual information that helps the car's software determine traffic signals and give cross-traffic warnings. Ultrasonic sensors inform the car about its immediate environment for parking assistance.

• RADAR

The set of technologies that helps self-driving cars detect objects around them. How do airplanes traveling at over 600 km/h detect other airplanes and mountains? They use RADAR, and self-driving cars use it too.


• LIDAR

Using bounced pinpoints of laser light, LIDAR effectively forms a 3D model of the world around the car, determining the size and distance of everything around it, day or night, sunny or cloudy. Self-driving cars use it to detect both stationary and moving objects around them.

Each of these technologies collects different information for the vehicle; based on the information collected, the car makes interpretations and gives the appropriate response for the situation.

1.2.2 NEED OF INTELLIGENT VEHICLES

• Autonomous technology may save thousands of lives by making decisions on the road faster than humans can.

• To reduce traffic collisions and the resulting injuries.

• The relief of travelers from driving.

• A reduction in crime.

• Enhanced mobility for children, the elderly, the disabled and the poor.

• Reduce the stress of driving and allow motorists to rest and work while traveling.

• May reduce many common accident risks and therefore crash costs and insurance premiums.

• Can drop off passengers and find a parking space, increasing motorist convenience and reducing total parking costs.

• It can take decisions faster than humans to avoid road accidents.

• Reduce costs of paid drivers for taxis and commercial transport.


1.3 BRIEF INTRODUCTION TO PROJECT

Our project is based on the practice of deep learning, through the applied theme of developing an IV that can be used for safety, obedience of traffic laws and driver relaxation. The proposed work uses an RC vehicle to test the performance of the NN algorithms in real time. The vehicle prototype consists of modules such as the Raspberry PI, PI-Cam module, vehicle controller module, Arduino module, ultrasonic sensor module and battery module. Tracks are used for real-time performance analysis of the prototype vehicle.

1.4 AIMS & OBJECTIVES

The principal aim of this 4th year group project is to design and develop an IV that is capable of recognizing traffic lights, detecting stop signs and obstacles, and steering according to the current road conditions.

We aim to equip vehicles with advanced features that can help the driver take some rest during a long journey and make decisions when human intelligence is not applicable. The application is developed on an RC car and includes a Raspberry Pi board, attached to a Pi camera module and an HC-SR04 ultrasonic sensor, used to collect input data. Two client programs run on the Raspberry Pi, streaming color video and ultrasonic sensor data to the computer over a Wi-Fi connection. The laptop handles multiple tasks: receiving data from the Raspberry Pi, neural network training and prediction (steering), object detection (stop sign and traffic light), distance measurement (monocular vision), and sending instructions to the Arduino through a USB connection. The RC car used in this project has an on/off switch type controller: when a button is pressed, the resistance between the relevant chip pin and ground is zero. An Arduino board is therefore used to simulate button-press actions. Four Arduino pins are connected to four chip pins on the controller, corresponding to the forward, reverse, left and right actions respectively. Overall, the RC car could successfully navigate the track, avoid obstacle collisions, and respond to stop signs and traffic lights accordingly.
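As a hedged sketch of the last step, sending instructions to the Arduino over the USB serial link might look like the following; the port name and the one-byte command protocol are assumptions, since the real encoding depends on the Arduino sketch on the other side:

    import serial  # pyserial

    # Assumed port name and baud rate; adjust to the actual Arduino connection.
    arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)

    # Hypothetical one-byte commands for the four simulated button presses.
    COMMANDS = {'forward': b'F', 'reverse': b'B', 'left': b'L', 'right': b'R'}

    def steer(direction):
        """Ask the Arduino to ground the controller chip pin for this direction."""
        arduino.write(COMMANDS[direction])

    steer('forward')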


1.5 APPROACHES

The age of autonomy is real, and it is almost here, but we are still learning how to build intelligent cars, so different companies are trying out different ideas. At a high level of abstraction, there are two approaches to building intelligent vehicles. Most companies use the classical approach, which combines computer vision, sensor fusion (LIDAR, RADAR), localization, control theory, and path planning.

Google relies on a highly expensive, complex remote-sensing system called LIDAR, which, in simple terms, measures distance by pointing lasers at targets surrounding the car and analyzing the reflected light.

Tesla's Autopilot relies on a combination of cameras, radar, ultrasonic sensors and data to automatically steer down the highway, change lanes, and adjust speed in response to traffic, and uses auto-braking technology by the Israeli company Mobileye.

The second approach involves training a single deep neural network to take sensor inputs and produce steering, throttle, and brake outputs. This approach is called deep learning.

Drive.ai was born for this. It was founded in 2015 by deep-learning experts from Stanford University's Artificial Intelligence Laboratory.

Deep-learning systems thrive on data. The more data an algorithm sees, the better it will be able to recognize, and generalize about, the patterns it needs to understand to drive safely.

Decision making through deep learning based on fused sensor data has advantages in an autonomous-vehicle context. Namely, it offers some protection against sensor failure, since the deep-learning algorithms can be trained explicitly on perception data with missing sensor modalities. Sensor failure is most often not a hardware or software issue, but rather a sensor that is not producing good data for some reason, like sun glare, darkness at night, or (more commonly) being occluded by water.

The approach we use for our project is deep learning, because we believe that deep learning leads to better performance and smaller systems, and because training data can be collected by driving in a wide variety of places and in a diverse set of lighting conditions.


We can train the weights of the network through different algorithms to minimize the mean-squared error between the steering command output by the network and the command given by the human driver.

What does the algorithm do? (A minimal sketch of this loop follows the list.)

• Process the image

• Output a steering command based on the image content

• Also output the desired speed for the current road condition

• Feed back the actual speed of the car

• Let a speed controller control the throttle/brake
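A minimal sketch of that loop, assuming a trained model object with a predict() method; get_next_frame(), preprocess(), read_speed_feedback() and apply_controls() are hypothetical helpers standing in for the camera, preprocessing, speed feedback and actuation described elsewhere in this report:

    class PController:
        """A simple proportional speed controller for the throttle/brake step."""
        def __init__(self, gain):
            self.gain = gain

        def update(self, desired, actual):
            # Positive output -> throttle, negative output -> brake.
            return self.gain * (desired - actual)

    def drive_loop(model, speed_controller):
        """Run the per-frame control loop listed above."""
        while True:
            frame = get_next_frame()              # hypothetical: frame from the PI camera
            features = preprocess(frame)          # hypothetical: e.g. grayscale + flatten
            steering, desired_speed = model.predict(features)
            actual_speed = read_speed_feedback()  # hypothetical: measured car speed
            throttle = speed_controller.update(desired_speed, actual_speed)
            apply_controls(steering, throttle)    # hypothetical: send commands to the car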

Deep learning has become a very powerful tool for many "intelligent" tasks. This approach needs to be fed many input images and steering angles; the resulting weights and parameters of the layers can then be used to assign a steering angle to new input images. In our project, we have many input images labeled with steering angles, which allowed us to drive a car around the track.

Fig: 1.5(a) Block diagram of Deep learning and computer vision


CHAPTER 2

LITERATURE REVIEW

This chapter gives an insight into the literature survey. The major theme of this thesis report, as well as similar and related works, is described in this chapter. Intelligent vehicles are self-steered vehicles embedded with human-like intelligence: they have eyes (camera) to see the real world, a brain (NN) to think, and hands (steering) to keep the vehicle smoothly on track.

The idea of machine intelligence emerged in 1950. Nowadays these vehicles are being manufactured by many companies and will be commercialized very soon. The evolution of this technology includes many different technological approaches, some of which are discussed below.

2.1 IMAGE PROCESSING

The field of image processing is continually evolving. During the past years, there has been a significant increase in the level of interest in image morphology, neural networks, full-color image processing, image data compression, image recognition, and knowledge-based image analysis systems.

Image processing is the processing of images using mathematical operations, through any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Image processing methods stem from two principal application areas: improvement of pictorial information for human interpretation, and processing of scene data for autonomous machine perception.

One of the first applications of digital images was digitized newspaper pictures sent by undersea cable between London and New York. The introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Pictures were coded for cable transmission and then reconstructed at the receiving end on a telegraph printer fitted with typefaces simulating a halftone pattern. The early Bartlane systems could code images in five distinct brightness levels; this was increased to fifteen levels in 1929.

Analyzing road scenes using cameras could have a crucial impact in many domains, such as autonomous driving, ADAS, personal navigation, mapping of large-scale environments, and road maintenance. For instance, vehicle infrastructure, signage, and rules of the road have been designed to be interpreted fully by visual inspection. As the field of computer vision becomes increasingly mature, practical solutions to many of these tasks are now within reach.

Important Academic Papers Regarding Deep Learning Processing:

• NVIDIA - "End to End Learning for Self-Driving Cars"

Video input from a forward-facing camera is trained against steering-wheel position, and deep-learning networks are capable of detecting important road features with limited additional nudging in the right direction.

• Comma.ai - "Learning a Driving Simulator"

Using video input with no additional training metadata (IMU, wheel angle), auto-encoded video was generated, predicting many frames into the future while maintaining road features. Comma's approach to artificial intelligence for self-driving cars is based on an agent that learns to clone driver behaviors and plans maneuvers by simulating future events on the road.

• NYU & Facebook AI - “Deep Multi-Scale Video Prediction Beyond Mean Square

Error”

Learning to predict future images from a video sequence involves the construction of

an internal representation that models the image evolution accurately, and therefore, to

some degree, its content and dynamics. Therefore pixel-space video prediction may be

viewed as a promising avenue for unsupervised feature learning.


2.2 MACHINE LEARNING

ML is a subset of artificial intelligence in which computer algorithms are used to learn autonomously from data and information. In machine learning, computers do not have to be explicitly programmed but can change and improve their algorithms by themselves. Today, machine-learning algorithms enable computers to communicate with humans, autonomously drive cars, write and publish sport match reports, and find terrorist suspects.

1950 — Alan Turing creates the "Turing Test" to determine if a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human.

1952 — Arthur Samuel writes the first computer learning program. The program played the game of checkers, and the IBM computer improved at the game the more it played, studying which moves made up winning strategies and incorporating those moves into its program.

1957 — Frank Rosenblatt designs the first neural network for computers (the perceptron), which simulates the thought processes of the human brain.

1967 — The "nearest neighbor" algorithm is written, allowing computers to begin using very basic pattern recognition. It could be used to map a route for a traveling salesman, starting at a random city but ensuring all cities are visited during a short tour.

1979 — Students at Stanford University invent the "Stanford Cart", which can navigate obstacles in a room on its own.

1981 — Gerald Dejong introduces the concept of EBL, in which a computer analyses training data and creates a general rule it can follow by discarding unimportant data.

1985 — Terry Sejnowski invents NetTalk, which learns to pronounce words the same way a baby does.


1990s — Work on machine learning shifts from a knowledge-driven approach to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions — or "learn" — from the results.

1997 — IBM’s Deep Blue beats the world champion at chess.

2006 — Geoffrey Hinton coins the term "deep learning" to explain new algorithms that let computers "see" and distinguish objects and text in images and videos.

2010 — Microsoft Kinect can track 20 human features at a rate of 30 times per second, allowing people to interact with the computer via movements and gestures.

Since then, work on ML has continued. ML algorithms are organized into a taxonomy based on the desired outcome of the algorithm. Common algorithm types include:

2.2.1 SUPERVISED MACHINE LEARNING

The majority of practical ML uses supervised learning. Supervised learning is where you have input variables (X) and an output variable (Y), and you use an algorithm to learn the mapping function from the input to the output:

Y = f(X)

The goal is to approximate the mapping function so well that when you have new input data (X), you can predict the output variable (Y) for that data.

It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. We know the correct answers; the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.
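As a toy illustration of Y = f(X), here is a minimal sketch using scikit-learn's nearest-neighbor classifier, an assumption on our part echoing the 1967 algorithm above; the project itself uses a custom ANN:

    from sklearn.neighbors import KNeighborsClassifier

    # Training data: input variables (X) with known correct answers (Y) -- logical AND.
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    Y = [0, 0, 0, 1]

    # The "teacher" is the labeled data; the model learns the mapping Y = f(X).
    model = KNeighborsClassifier(n_neighbors=1).fit(X, Y)

    print(model.predict([[1, 1]]))  # -> [1], the predicted output for new input data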


2.2.2 UNSUPERVISED MACHINE LEARNING

Unsupervised learning is where you only have input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution of the data in order to learn more about it. It is called unsupervised learning because, unlike supervised learning above, there are no correct answers and there is no teacher; algorithms are left to their own devices to discover and present the interesting structure in the data.

Unsupervised learning problems can be further grouped into clustering and association problems (a clustering sketch follows the list).

• Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.

• Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
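A toy clustering sketch, assuming scikit-learn's KMeans, that discovers the two inherent groupings in unlabeled purchase data:

    import numpy as np
    from sklearn.cluster import KMeans

    # Unlabeled input data (X): monthly spend per customer, no output variable given.
    X = np.array([[1.0], [1.2], [0.9],    # low-spend customers
                  [8.0], [8.3], [7.9]])   # high-spend customers

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two groupings found without a teacher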

2.3 COMPUTER VISION

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and for extracting high-dimensional data from the real world to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action.

Nowadays computer vision is used in many application areas, such as OCR, face detection, smile detection, object recognition, vision-based biometrics, face recognition, motion capture, panorama stitching, 3D terrain modeling, obstacle detection, position tracking, vision-guided robots, 3D MRI imaging and image-guided surgery.

Upcoming self-driving cars use the same computer vision technology. In August 2012, Google announced that their self-driving cars had completed over 300,000 autonomous-driving miles (500,000 km) accident-free, typically having about a dozen cars on the road at any given time, and were starting to test them with single drivers instead of in pairs. In late May 2014, Google revealed a new prototype of its driverless car, which had no steering wheel, gas pedal, or brake pedal, and was fully autonomous.

In June 2015, Google founder Sergey Brin confirmed that there had been 12 collisions as of that date: eight involved being rear-ended at a stop sign or traffic light, two in which the vehicle was side-swiped by another driver, one in which another driver rolled through a stop sign, and one where a Google employee was controlling the car manually. In July 2015, three Google employees suffered minor injuries when the self-driving car they were riding in was rear-ended by a car whose driver failed to brake at a traffic light; this was the first time a self-driving car collision resulted in injuries. On 14 February 2016, a Google self-driving car attempted to avoid sandbags blocking its path and struck a bus during the maneuver. Google addressed the crash, saying, "In this case, we clearly bear some responsibility, because if our car hadn't moved there wouldn't have been a collision." Google characterized the crash as a misunderstanding and a learning experience.

Tesla Autopilot

In mid-October 2015, Tesla Motors rolled out version 7 of their software in the U.S., which included the Tesla Autopilot capability. On 9 January 2016, Tesla rolled out version 7.1 as an over-the-air update, adding a new "summon" feature that allows cars to self-park at parking locations without the driver in the car.

Autopilot should be used only on limited-access highways, and sometimes it will fail to detect lane markings and disengage itself. In urban driving, the system will not read traffic signals or obey stop signs, nor does it detect pedestrians or cyclists.

According to Tesla, starting 19 October 2016, all Tesla cars are built with hardware to allow full self-driving capability at the highest safety level. The hardware includes eight surround cameras and twelve ultrasonic sensors, in addition to the forward-facing radar with enhanced processing capabilities. The system operates in "shadow mode" (processing without acting) and sends data back to Tesla to improve its abilities until the software is ready for deployment via over-the-air upgrades.


2.4 ROAD DETECTION

Road detection is a fundamental issue in field navigation systems and has attracted keen attention over the past several decades. Road detection and tracking have been subjects of discussion since the idea of IVs came into play.

It started with basic features like edge detection and lane detection, and then grew into a complex, composite and dynamic approach to road and object detection. Road detection and tracking are important for many intelligent-vehicle applications, such as LDW systems, anti-sleep systems, driver assistance and safety warning systems, and autonomous driving. The following approaches are usually used in road detection:

1. Simple sensor-based approach

2. GPS or navigation-based approach

3. Vision sensor-based approach

• Simple sensor-based approach

The sensor-based approach is the most basic and conventional approach used for road or lane detection. In this approach, multiple sensors, such as proximity sensors, ultrasonic sensors and color sensors, are used to acquire basic knowledge of the road. It is a dependent approach, as it provides only a static picture/perspective of the road.

• GPS-based approach

The Global Positioning System (GPS) is a worldwide radio-navigation system formed from a constellation of 24 satellites and their ground stations. GPS is mainly funded and controlled by the U.S. Department of Defense (DOD). The system was initially designed for U.S. military operations, but today there are also many civil users of GPS across the whole world.

Self-driving cars rely on optical (and radar) sensors and 3D maps to understand precisely where they are and where the hazards are. When LDW is believed to no longer perform perfectly, in rainy or mildly snowy weather where optical systems degrade or where the lane markings are partially obscured, the combination might help the car maintain lane-centering a bit longer; eventually it would shut down.

• Vision sensor-based approach

Vision sensors play an important role in road detection because of their great potential for environmental perception. Image data captured by vision sensors contains rich information, such as luminance, color and texture. Moreover, vision sensors are inexpensive compared with other popular road-detection sensors, such as LIDAR and millimeter-wave radar.

For these reasons, many state-of-the-art field robot systems and intelligent systems employ vision sensors for road detection. For example, Xu et al. presented a mobile robot that uses a vision system to navigate in an unstructured environment; the vision system consisted of two cameras, one used for road-region detection and the other for road-direction estimation. Rasmussen introduced a vehicle-based mobile robot system that achieved success in the DARPA Grand Challenge; vision sensors mounted on top of the windshield were used to detect the road vanishing point for steering control. Vision sensor-based road detection is a binary labeling problem that tries to label every pixel in a given road image with the category (road or background) to which it belongs. However, vision sensor-based road detection is still a challenging job, due to the diversity of road scenes with different geometric characteristics (varying colors and textures) and imaging conditions (different illuminations, viewpoints and weather conditions).

The main advantages of the vision-based approach are as follows:

• Vision sensors acquire data in a non-invasive way, thus not polluting the road environment. In other words, vision sensors do not interfere with each other when multiple intelligent vehicles are moving within the same area. By contrast, besides the problem of environment pollution, we must carefully consider some typical problems of active sensors, such as the wide variation in reflection ratios caused by different factors (such as obstacle shape or material), the need for the maximum signal level to comply with safety rules, and the interference among active sensors of the same type.

• In most ITS and IV applications, vision sensors play a fundamental role, for example in lane-marking localization, traffic-sign recognition and obstacle recognition. In those applications, other sensors, such as laser and radar, are only complementary to vision sensors.

• We do not need to modify road infrastructure when using vision sensors to capture visual information. This is extremely important in practical applications.

2.5 OBJECT DETECTION AND RECOGNITION

Object detection is a computer technology, related to computer vision and image processing, that deals with detecting instances of real-world objects such as faces, humans, cars and buildings in images and videos. Object detection and recognition are used to locate, identify, and categorize objects in images and video.

• Electromagnetic sensors

These sensors can be used in vehicles moving slowly and smoothly towards obstacles. If the vehicle stops immediately upon detection of an obstacle, the sensors continue to emit a signal of the obstacle's presence; if the vehicle resumes its movement, the alarm signal becomes more prominent as the obstacle approaches. These sensors are commonly used because they do not require drilling holes in the vehicle and can be discreetly mounted on the inner side of the bumper.

• Wireless ultrasonic sensors

These sensors are highly sophisticated devices that emit sharp ultrasonic pulses for detecting obstacles, using the echo time of the pulses bouncing off an obstacle to indicate its distance. Between four and eight wireless ultrasonic sensors are placed, equally spaced, in the front and rear parts of a vehicle. They detect objects even when the car is stationary.


• Nissan's moving object detection

Cameras detect moving objects around the vehicle when it is parked or slowly maneuvering; the system then alerts the driver both visually and audibly. There are two types of system: one uses the Around View Monitor and four cameras on the front, back and sides of the car, while the second uses only a single camera installed in the rear of the car. The four-camera system can alert drivers in three scenarios: while parked or in neutral, while moving forward, and while backing up. When moving forward or backing up, the cameras at the front or back respectively detect certain moving objects; when in park or neutral, the system detects certain moving objects around the car using a virtual bird's-eye view image. If a vehicle has the single rear-view camera system, it can only detect certain moving objects behind the vehicle. The system processes video imagery from the cameras and can then detect certain moving objects.

• Object detection by pattern recognition

An object detection system includes a sensor in communication with a controller to identify an object within a field of view. Pattern-recognition algorithms are incorporated into the controller, and objects are predefined to minimize false detection and to sift predefined objects, such as vehicles, from background clutter. Upon recognition of an object, an indicator in communication with the controller alerts the operator, who can then take corrective action. By defining another field of view, the controller halts or reverses the movement of a power window to prevent trapping an object between the closing window and the window frame. Yet another field of view includes a vehicle entry point such as a vehicle door: movement of the door is identified by the controller, which provides an alert such as activation of the vehicle alarm system.

• Reverse car parking sensors

These sensors are commonly used when parking the car in a reverse car-parking system. They are activated as soon as the car is put into reverse gear and are usually placed at the rear of the vehicle. Reverse car parking sensors are small and generate ultrasonic waves, sending signals and receiving those reflected from the obstacle. Mazda has already integrated this technology into its vehicles.

• Google's self-driving cars detect and avoid obstacles

Google's driverless-car technology uses an array of detection technologies including sonar devices, stereo cameras, lasers, and radar. These components have different ranges and fields of view, but each serves a purpose, according to the patent filings Google has made on its driverless cars. Anyone who has ever seen an image of Google's self-driving Prius has probably noticed one of these systems poking up above the vehicle: the LIDAR laser remote-sensing system. According to Google engineers, this is at the heart of object detection.

2.6 COLLISION DETECTION

Collision detection is a key factor in enabling the integration of unmanned vehicles into real-life use. It is the ability of the vehicle to "see" other vehicles or pedestrians, anticipate collisions, and automatically apply the brakes or take corrective steering actions. It is also known as a pre-crash system, forward collision warning system, or collision mitigating system. It uses radar (all-weather) and sometimes laser (LIDAR) and cameras (employing image recognition) to detect an imminent crash, and ultrasonic sensors in prototypes.

Early warning systems have been attempted since as early as the late 1950s. Cadillac, for instance, developed a prototype vehicle named the Cadillac Cyclone, which used the then-new technology of radar to detect objects in front of the car. It was deemed too costly, and the model was subsequently dropped.

The first modern demonstration of forward collision avoidance was performed in 1995 by a team of scientists and engineers at Hughes Research Laboratories in Malibu, California. The project was funded by Delco Electronics, and the technology was labeled for marketing purposes as "Forewarn". The system was radar-based.

In the early 2000s, the U.S. NHTSA researched whether to make frontal collision warning systems and lane departure warning systems mandatory. A 2009 study conducted by the IIHS found a 7 percent reduction in crashes for vehicles with a basic forward-collision warning system, and a 14 to 15 percent reduction for those with automatic braking.

The federal National Highway Traffic Safety Administration is also on board, with an eye to making some collision-avoidance systems mandatory, but the cost of such systems can still be an obstacle.

The idea of incorporating radar systems into vehicles to improve road-traffic safety dates back to the 1950s. Such systems are now reaching the market, as recent advances in technology have allowed the signal-processing requirements and the high angular-resolution requirements of physically small antennas to be realized. Automotive radar systems have potential for several different applications, including ACC and anti-collision devices. The problem with this class of cars is that they are expensive.

2.7 HUMAN ERRORS

According to the US Department of Transportation's National Motor Vehicle Crash Causation Survey, 94 percent of road accidents are caused by human error, and it is said that machine-intelligence technology will drastically lower this figure.

Experts explain that this improvement in road safety with driverless cars comes down to there being many tasks that robots, machines or driverless cars can do much better than people. Experts claim that once autonomous vehicles become a reality, we can expect increased roadway capacity and reduced traffic congestion, due to the reduced need for safety gaps and the ability to better manage traffic flow.

Intelligent vehicles open new possibilities, such as allowing those who are not legally eligible to drive (convicted drunk drivers, younger people, older people or those with disabilities) to be mobile; it should be noted, however, that one of the major barriers to driverless technology is legislation.

By the end of this century, there is good reason to believe that tens of millions of traffic fatalities will be prevented around the world. This is not merely theoretical: there is already some precedent for change of this magnitude in the realms of car culture and automotive safety. In 1970, about 60,000 people died in traffic accidents in the United States. A dramatic shift toward safety, including required seat belts and ubiquitous airbags, helped vastly improve a person's chance of surviving the American roadways in the decades that followed. By 2013, 32,719 people died in traffic crashes, a historic low.

Researchers estimate that driverless cars could, by midcentury, reduce traffic fatalities by up to 90 percent. That means that, using the number of fatalities in 2013 as a baseline, intelligent cars could save 29,447 lives a year.

2.8 SUMMARY

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and for extracting high-dimensional data from the real world to produce numerical or symbolic information, e.g. in the form of decisions. Upcoming self-driving cars use this same computer vision technology. Nowadays computer vision is used in many application areas, such as OCR, face detection, smile detection, object recognition, vision-based biometrics, face recognition, motion capture, panorama stitching, 3D terrain modeling, obstacle detection, position tracking, vision-guided robots, 3D MRI imaging and image-guided surgery.


CHAPTER 3

PROJECT METHODOLOGY

3.1 SYSTEM DESIGN

In this chapter we look at the architecture, modules, interfaces, and data of the intelligent system designed to satisfy the specified requirements. Systems design can be seen as the application of systems theory to product development.

3.1.1 MODULES

In this section we discuss each of the standardized parts or independent units used to construct the more complex structure. The modules of the intelligent vehicle are:

3.1.1.1 Raspberry PI

Raspberry Pi is a mini computer that plugs into an HDMI monitor and keyboard. It is a capable little computer that can be used in electronics projects, and it is a fully functional Linux computer. A Raspberry Pi board (model B+), attached to a Pi camera module and an HC-SR04 ultrasonic sensor, is used to collect input data. Two client programs run on the Raspberry Pi, streaming color video and ultrasonic sensor data to the laptop over a local Wi-Fi connection.

Fig: 3.1.1.1(a) Raspberry PI module
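A hedged sketch of the video-streaming client running on the Raspberry Pi follows; the laptop address and port are placeholders, and the picamera library is assumed:

    import socket
    import picamera

    LAPTOP_IP = '192.168.1.100'  # placeholder: address of the laptop on the local Wi-Fi
    PORT = 8000                  # placeholder: port the laptop server listens on

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((LAPTOP_IP, PORT))
    stream = client.makefile('wb')  # file-like wrapper around the TCP connection

    try:
        with picamera.PiCamera(resolution=(320, 240), framerate=10) as camera:
            camera.start_recording(stream, format='h264')  # stream encoded video frames
            camera.wait_recording(60)                       # keep streaming for 60 seconds
            camera.stop_recording()
    finally:
        stream.close()
        client.close()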

• Raspberry PI and Camera module connections

Open the Raspberry PI camera module. Install it by inserting the cable into the Raspberry Pi; the cable slots into the connector situated between the Ethernet and HDMI ports, with the silver connectors facing the HDMI port.

Fig: 3.1.1.1(b) Camera port in Raspberry PI

Now boot up the Raspberry PI. From the prompt, run "sudo raspi-config". If the "camera" option is not listed, you will need to run a few commands to update your Raspberry Pi: run "sudo apt-get update" and then "sudo apt-get upgrade". Now run "sudo raspi-config" again; you should see the "camera" option.

Fig: 3.1.1.1(c) Raspberry PI command prompt

• Raspberry PI and Ultrasonic Sensor Connections


The sensor output signal (ECHO) on the HC-SR04 is rated at 5V, but the input pins on the Raspberry Pi GPIO are rated at 3.3V; sending a 5V signal into a 3.3V input port could damage the GPIO pins. A voltage divider consists of two resistors (R1 and R2) in series, connected to an input voltage (Vin) that needs to be reduced to our output voltage (Vout). In our circuit, Vin is ECHO, which needs to be decreased from 5V to a Vout of 3.3V.

Fig: 3.1.1.1(d) Voltage Divider
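With the resistor values used in the wiring below (R1 = 1 kΩ, R2 = 2 kΩ), the divider output is

Vout = Vin × R2 / (R1 + R2) = 5 V × 2 / (1 + 2) ≈ 3.33 V

which sits safely within the 3.3V rating of the GPIO input pin.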

Four pins on the Raspberry Pi are used for this connection:

GPIO 5V [Pin 2]: VCC (5V power)

GPIO GND [Pin 6]: GND (0V ground)

GPIO 23 [Pin 16]: TRIG (GPIO output)

GPIO 24 [Pin 18]: ECHO (GPIO input)


Fig: 3.1.1.1(e) Raspberry-PI voltage divider

Plug four male-to-female jumper wires into the pins on the HC-SR04 as follows: red to VCC, blue to TRIG, yellow to ECHO, and black to GND.

Fig: 3.1.1.1(f) HC-SR04 pins

Plug the VCC wires of both the Raspberry Pi and the sensor into the positive rail of the breadboard, and similarly plug GND into the negative rail: GPIO 5V [Pin 2] goes to the positive rail, and GPIO GND [Pin 6] to the negative rail.


Fig: 3.1.1.1(g) Raspberry-PI and HC-SR04 VCC and GND connections

Plug TRIG of the sensor into a blank rail, and connect that rail to Raspberry Pi GPIO 23 [Pin 16].

Fig: 3.1.1.1(h) Raspberry PI and HC-SR04 ECHO pin connection

Plug ECHO into a blank rail, then link it to another blank rail using R1 (the 1 kΩ resistor). Link the rail with R1 to the GND rail using R2 (the 2 kΩ resistor), leaving a space between the two resistors.


Fig: 3.1.1.1(i) Connections on breadboard

Add GPIO 24 [Pin 18] to the rail with R1 (the 1 kΩ resistor). This GPIO pin needs to sit between R1 and R2.

Fig: 3.1.1.1(j) Voltage Divider on breadboard

The HC-SR04 is now connected to the Raspberry Pi; a short test script is sketched after the figure.

Fig: 3.1.1.1(k) Raspberry PI and HC-SR04 connections
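With the wiring complete, a short Python script can verify the sensor. This is a minimal sketch, assuming the RPi.GPIO library is installed; the BCM pin numbers follow the assignments above (TRIG = GPIO 23, ECHO = GPIO 24):

    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24            # BCM numbering, as wired above

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    GPIO.output(TRIG, False)
    time.sleep(2)                  # let the sensor settle

    GPIO.output(TRIG, True)        # send the 10 us trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    while GPIO.input(ECHO) == 0:   # wait for the echo pulse to start
        pulse_start = time.time()
    while GPIO.input(ECHO) == 1:   # wait for the echo pulse to end
        pulse_end = time.time()

    duration = pulse_end - pulse_start
    distance = duration * 17150    # sound travels 34300 cm/s; halved for the round trip
    print("Distance: %.1f cm" % distance)
    GPIO.cleanup()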


3.1.1.2 PI-cam

The Raspberry Pi Camera Module v2 (Pi NoIR) is used in this project. The v2 Camera Module has a Sony IMX219 8-megapixel sensor and is used to capture high-definition video. Initializing the Pi camera involves the following steps:

• Locate the camera port on the Raspberry Pi as discussed above and connect the camera.

• Start up the Pi.

• Open the Raspberry Pi Configuration Tool from the main menu.

• Ensure the camera software is enabled. If it is not, enable it and reboot the Pi (a minimal test capture is sketched after the figure).

Fig: 3.1.1.2(a) PI-cam module
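As a quick check that the camera works, the following minimal sketch uses the picamera library (the resolution and file name are chosen only for illustration):

    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (320, 240)  # QVGA keeps the streaming bandwidth low
    camera.start_preview()
    sleep(2)                        # give the sensor time to adjust exposure
    camera.capture('test.jpg')      # save a still frame
    camera.stop_preview()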

3.1.1.3 Arduino

Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. The RC car used in this project has an on/off switch type controller. The processing unit (computer) handles multiple tasks: receiving data from the Raspberry Pi, neural network training and prediction (steering), object detection (stop sign and traffic light), distance measurement (monocular vision), and sending instructions to the Arduino through a USB connection. When a button on the controller is pressed, the resistance between the relevant chip pin and ground is zero, so an Arduino board is used to simulate button-press actions. Four Arduino pins are connected to four chip pins on the controller, corresponding to the forward, reverse, left and right actions respectively. Driving an Arduino pin LOW grounds the corresponding chip pin of the controller; driving it HIGH leaves the resistance between the chip pin and ground unchanged. The Arduino is connected to the computer via USB.


The computer outputs commands to the Arduino over the serial interface, and the Arduino reads the commands and writes out LOW or HIGH signals, simulating button-press actions to drive the RC car.

Fig: 3.1.1.3(a) Arduino module

3.1.1.4 Ultrasonic sensor

An ultrasonic sensor emits short, high-frequency sound pulses at regular intervals, which propagate through the air at the speed of sound. If they strike an object, they are reflected back to the sensor as echo signals, and the sensor computes the distance to the target from the time span between emitting the signal and receiving the echo.

This project uses the HC-SR04 ultrasonic sensor to determine the distance to an object, much as bats do. It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package, over a range of 2 cm to 400 cm (about 1 inch to 13 feet). Its operation is not affected by sunlight or black materials, as Sharp infrared rangefinders are (although acoustically soft materials such as cloth can be difficult to detect). It comes complete with an ultrasonic transmitter and receiver module.


Fig: 3.1.1.4(a) Ultrasonic sensor- HC-SR04

Features:

• Power Supply: +5V DC

• Quiescent Current: <2mA

• Working Current: 15mA

• Effectual Angle: <15°

• Ranging Distance: 2 cm – 400 cm / 1″ – 13 ft

• Resolution: 0.3 cm

• Measuring Angle: 30 degrees

• Trigger Input Pulse Width: 10 µs

• Dimension: 45mm x 20mm x 15mm

Pins:

• VCC: +5VDC

• Trig: Trigger (INPUT)

• Echo: Echo (OUTPUT)

• GND: GND

Working in our project:

The HC-SR04 ultrasonic sensor is mounted on the front of the RC car and connected to the GPIO pins of the Raspberry Pi, which is mounted on top of the car. The sensor measures the distance to any obstacle that comes in front of the RC car and sends it to the Raspberry Pi.


Fig: 3.1.1.4(b) Components attached on the RC-CAR

3.1.2 INTERFACES

An interface is a shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans, or combinations of these.

In this project we use two types of interfaces:

• Socket interface using TCP

• Serial interface

3.1.2.1 Socket interface using TCP

A socket represents a single connection between two network applications.

Sockets are bidirectional, meaning that either side of the connection is capable of

both sending and receiving data.

In this project we use TCP sockets (or virtual ports) for communication between the Raspberry Pi and the laptop. Each side of a socket connection uses its own port number, which does not change during the life of that connection. The port number and IP address together uniquely identify an endpoint, and the two endpoints together are considered a socket pair.


Client-host pairing:

All TCP communication has a source and a destination, so every socket connection uses a source port and a destination port.

Each host uses a unique IP address, and the unique pairing of source and destination ports identifies the specific connection between the two computers.

A multithreaded TCP server program runs on the computer to receive the streamed image frames and ultrasonic data from the Raspberry Pi. Image frames are converted to grayscale and decoded into NumPy arrays; a minimal sketch of this server follows.
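The sketch below shows the receiving side; it assumes each frame arrives as a 4-byte length prefix followed by the JPEG bytes (the exact framing used in the project may differ), and the port number is illustrative:

    import socket
    import struct
    import numpy as np
    import cv2

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('0.0.0.0', 8000))     # port number is illustrative
    server.listen(1)
    conn, addr = server.accept()

    def recv_exact(n):
        # Read exactly n bytes from the connection.
        buf = b''
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise EOFError('connection closed')
            buf += chunk
        return buf

    while True:
        size = struct.unpack('<L', recv_exact(4))[0]        # assumed length prefix
        jpeg = np.frombuffer(recv_exact(size), dtype=np.uint8)
        gray = cv2.imdecode(jpeg, cv2.IMREAD_GRAYSCALE)     # decode straight to grayscale
        cv2.imshow('stream', gray)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break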

3.1.2.2 Serial interface

After receiving the data from the Raspberry Pi, the computer processes it and sends commands to the Arduino through a serial interface (RS-232 style, carried over USB), which in turn forwards the commands/instructions to the control unit (remote controller) so that the motors of the vehicle can be controlled (a minimal sketch follows the figure).

Fig: 3.1.2.2(a) Serial interface between Arduino and the remote controller
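A minimal sketch of the computer-side serial link, assuming the pySerial library; the port name and the single-character command codes are illustrative, not the exact ones used in the project:

    import serial

    # port name is illustrative; on Windows it might be 'COM3'
    ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

    def send_command(cmd):
        # Write one command byte; the Arduino sketch maps it to a pin action.
        ser.write(cmd)

    send_command(b'F')   # hypothetical code for 'forward'
    send_command(b'S')   # hypothetical code for 'stop'
    ser.close()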


3.1.3 DATA

3.1.3.1 Video Frames

The video stream is captured by the Pi camera mounted on the Raspberry Pi, and the Raspberry Pi sends it to the PC through the TCP server at a rate of 10 frames per second. The NN algorithms that run on the laptop decide where to move and then send left, right and forward commands to the Arduino, which is connected to the controller of the RC vehicle. The Pi-side client is sketched below.
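The sending side on the Raspberry Pi can be sketched as follows, assuming the same length-prefixed JPEG framing as the server sketch in Section 3.1.2.1 (the laptop address and port are illustrative):

    import io
    import socket
    import struct
    import picamera

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(('192.168.1.10', 8000))    # laptop address is illustrative

    camera = picamera.PiCamera()
    camera.resolution = (320, 240)
    camera.framerate = 10                     # 10 frames per second, as above

    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
        frame = stream.getvalue()
        client.sendall(struct.pack('<L', len(frame)) + frame)  # length prefix + JPEG
        stream.seek(0)                        # reuse the buffer for the next frame
        stream.truncate()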

3.1.3.2 Ultrasonic sensor data

The ultrasonic sensor is used only to determine the distance to an obstacle in front of the RC car, and it provides accurate results when the sensing angle and surface conditions are taken into consideration. Once the reported value is mapped to the actual distance, we know when to stop the RC car.

3.2 METHODOLOGY

Methodology is the system of methods employed in a research activity or in product/project realization. In our case it is divided into two parts:

1) System hardware

2) System software

3.2.1 SYSTEM HARDWARE

Input unit:

A Raspberry Pi board (model B+), fitted with a Pi camera module and an HC-SR04 ultrasonic sensor, is used to collect input data. Two client programs run on the Raspberry Pi to stream color video and ultrasonic sensor data to the computer over a local Wi-Fi connection.


Processing Unit:

The processing unit (computer) handles multiple tasks: receiving data from the Raspberry Pi, neural network training and prediction (steering), object detection (stop sign and traffic light), distance measurement (monocular vision), and sending instructions to the Arduino through a USB connection.

RC car control unit:

The RC car used in this project has an on/off switch type controller. When a button is pressed, the resistance between the relevant chip pin and ground is zero. Four Arduino pins are connected to four chip pins on the controller, corresponding to the forward, reverse, left and right actions respectively. Driving an Arduino pin LOW grounds the corresponding chip pin of the controller; driving it HIGH leaves the resistance between the chip pin and ground unchanged. The Arduino is connected to the computer via USB. The computer outputs commands to the Arduino over the serial interface, and the Arduino reads the commands and writes out LOW or HIGH signals, simulating button-press actions to drive the RC car.

NOTE: An opto-isolator is a component that transfers electrical signals between two isolated circuits by using light. It prevents high voltages from affecting the system receiving the signal. This IC is used between the Arduino and the controller.

Fig: 3.2.1(a) Opto-Isolator, interfacing Arduino and Remote controller


Fig: 3.2.1(b) Complete anatomy of the project

3.2.2 SYSTEM SOFTWARE

Dependencies

Programming languages: Python, C

IDEs: Python 2.4, PyCharm, Arduino

• picamera: a library that provides the framework for controlling the Pi camera.

• pySerial: a library which provides support for serial connections ("RS-232") over a variety of different devices.

• Pygame: a cross-platform set of Python modules designed for writing video games. It includes computer graphics and sound libraries designed to be used with the Python programming language.

• OpenCV-Python: a library of Python bindings designed to solve computer vision problems. Python is a general-purpose programming language started by Guido van Rossum that became very popular very quickly, mainly because of its simplicity and code readability. This project uses OpenCV 2.4.10.


• NumPy: a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

3.3 ALGORITHMS

3.3.1 OBSTACLE DETECTION

To achieve this goal, the ultrasonic sensor is attached to the front of the car. While the car travels along the desired path, the ultrasonic sensor continuously transmits ultrasonic waves from its sensor head. Whenever an obstacle comes ahead of it, the ultrasonic waves are reflected from the object and that information is passed to the Raspberry Pi. The Raspberry Pi sends all the data collected from the ultrasonic sensor to the laptop (the main controller), which makes a decision according to the received data and gives commands to the Arduino; the Arduino is interfaced with the controller of the car, which in turn controls the vehicle. The decision logic reduces to a simple threshold check, sketched below.
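A minimal sketch of that decision logic (the 25 cm threshold is the one reported in Chapter 4; send_command is the hypothetical serial helper sketched in Section 3.1.2.2):

    SAFE_DISTANCE = 25.0   # cm; stopping threshold used in this project

    def on_sensor_reading(distance_cm):
        # Called for every distance value streamed from the Raspberry Pi.
        if distance_cm < SAFE_DISTANCE:
            send_command(b'S')   # hypothetical 'stop' code: obstacle too close
        else:
            send_command(b'F')   # hypothetical 'forward' code: path is clear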

3.3.2 STOP SIGN DETECTION AND RECOGNITION

One of the goals of this project is to detect stop signs in video streams captured by a front-facing camera mounted on a moving car: given such a stream, recognize the stop signs that appear in the images and track them over a sequence of frames. The approach we have used is the Haar cascade classifier. The Haar feature-based cascade classifier, initially proposed by Viola and Jones, classifies objects using a series of edge, line, and center-surround features that are scanned across the image to construct ROI features. It uses the same method of negative and positive images of stop signs as we used for traffic light detection. Once a trained XML file has been created with the classifier, those trained parameters are used for subsequent detection of stop signs, as sketched below.
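Applying a trained cascade at runtime takes only a few lines of OpenCV. A minimal sketch (the XML file name is illustrative; the detection parameters are common defaults, not necessarily the project's exact values):

    import cv2

    stop_cascade = cv2.CascadeClassifier('cascade_xml/stop_sign.xml')  # file name illustrative

    def detect_stop_signs(gray_frame):
        # Returns bounding boxes (x, y, w, h) of detected stop signs.
        return stop_cascade.detectMultiScale(
            gray_frame,
            scaleFactor=1.1,     # step size of the image pyramid
            minNeighbors=5,      # overlapping detections required to accept
            minSize=(30, 30))    # ignore very small candidates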


Fig: 3.3.2(a) Stop sign detection and distance measurement

Haar-Cascade classifier:

• Positive images

The very first step in creating an object detector with a machine learning algorithm is the collection of positive images, hereafter referred to as positives. The positives are the images which represent the stop sign; they were photographed from the real world in the form of high-definition images of stop signs.

• Negative images

The negatives are images that do not represent the object to be detected. They give the algorithm a way to distinguish between the desired objects and anything else in the environment that should not be detected. The negatives should be collected from the same environment in which the object will appear.

• Training of the algorithm

The training of the Haar cascade classifier was performed on the collected positive and negative images mentioned above. The first phase of the training was to identify the vectors of the positive images: the area where the object is in each positive image is retrieved by cropping the object from the rest of the environment. The second phase was to provide the negative images in a separate location from the positive images. The third phase was to provide the parameters with which the Haar classifier performs the training.


After the training was finished, opencv_traincascade provided classifiers for each stage performed during the training; those in turn were provided to another OpenCV tool to generate an XML file with all the necessary data retrieved from the training. The training for this research took 3 weeks to complete. The same procedure is used for the detection of traffic lights; a representative invocation is sketched below.
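For reference, a representative pair of invocations of the OpenCV training tools (the file names, sample counts and window sizes are illustrative, not the exact values used in this project):

    opencv_createsamples -info positives.txt -vec pos.vec -num 1000 -w 24 -h 24
    opencv_traincascade -data cascade_xml -vec pos.vec -bg negatives.txt \
        -numPos 900 -numNeg 2000 -numStages 20 -w 24 -h 24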

• OpenCV

OpenCV was used for applying the algorithms in this project. OpenCV is an open-source library aimed at computer vision. The library is written in C and C++ with bindings to other programming languages such as Python and Java. It contains more than 500 functions that contribute to computer vision, including camera calibration, stereo vision and robotics. OpenCV also features ML modules, which makes it a suitable choice for this project.

3.3.3 TRAFFIC LIGHT DETECTION AND RECOGNITION

Vehicle and traffic safety is an increasingly important research topic in the automotive industry and academia. Traffic lights are important in terms of traffic safety; it is therefore valuable to have a solution that detects them without having to find their occurrences manually in a video analysis. This project uses Haar feature-based cascade classifiers for traffic light detection.

• Positive images

The very first step in creating an object detector with a machine learning algorithm is the collection of positive images, hereafter referred to as positives. The positives are the images which represent the traffic light; they were photographed from the real world in the form of high-definition images of traffic lights. To relate to the ROI, the parameters would be positives with clearly visible traffic lights, positives of traffic lights from different viewing angles, and positives with different traffic light states (red, yellow, green). These parameters are important to take into consideration since they may affect the results.

3.3.3.1 Brightest spot


The purpose of this approach is to identify which state the detected traffic light is in, where the states are red, yellow and green. It is done in two steps: the first step is to find the brightest point in the ROI, and the second, described next, is to use its position.

3.3.3.2 Position of the brightest spot.

After finding the brightest spot in the detected traffic light, the red or green state is determined simply from the position of the brightest spot within the ROI, as sketched below.
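A minimal sketch of this two-step check, assuming the ROI of the detected light is already available as a grayscale image (the split points between the red, yellow and green thirds of the housing are illustrative):

    import cv2

    def classify_light(roi_gray):
        # Return the light state based on where the brightest spot sits in the ROI.
        blurred = cv2.GaussianBlur(roi_gray, (41, 41), 0)   # suppress single-pixel noise
        _, _, _, max_loc = cv2.minMaxLoc(blurred)           # (x, y) of the brightest pixel
        x, y = max_loc
        height = roi_gray.shape[0]
        if y < height / 3:          # top third of the housing -> red
            return 'red'
        if y > 2 * height / 3:      # bottom third -> green
            return 'green'
        return 'yellow'             # middle third -> yellow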

Fig: 3.3.3.2(a) Traffic green light detection

Fig: 3.3.3.2(b) Traffic Red light detection


Fig: 3.3.3.2(c) Flow chart of Haar cascade training

3.4 SELF-LEARNING USING NEURAL NETWORK

Neural networks can be used to implement highly nonlinear controllers whose weights, or internal parameters, can be determined by a self-learning process.

A neural network is a class of computing system. Networks are created from very simple processing nodes connected together, inspired by the way biological systems such as the brain work. They are fundamentally pattern recognition systems and tend to be most useful for tasks that can be described in terms of pattern recognition. They are 'trained' by feeding them datasets (images, raw data) with known outputs.


• Back-propagation

Back-propagation is one of several ways in which an ANN can be trained. It is a supervised training scheme, which means it learns from labeled training data (there is a supervisor to guide its learning). In simple terms, back-propagation is "learning from mistakes": the supervisor corrects the ANN whenever it makes mistakes.

An ANN consists of nodes in different layers: the input layer, intermediate hidden layer(s) and the output layer. The connections between nodes of adjacent layers have "weights" associated with them. The goal of learning is to assign correct weights to these edges. Given an input vector, these weights determine the output vector.

In supervised learning, the training set is labeled (e.g., labeled images). This means that for the given inputs we know (by label) the desired/expected output.

• Back-propagation algorithm

Initially all the edge weights are randomly assigned. For every input in the training dataset, the ANN is activated and its output is observed. This output is compared with the desired output that we already know, and the error is "propagated" back to the previous layers. The error is noted and the weights are "adjusted" accordingly. This process is repeated until the output error falls below a predetermined threshold.

Once the algorithm terminates, we have a "learned" ANN which we consider ready to work with "new" inputs. This ANN is said to have learned from several examples (the labeled data) and from its mistakes (the propagated error).


Fig: 3.4(a) Flow chart of backpropagation

• Prediction and steering

In this project we use this back-propagation methodology for prediction, which drives the steering mechanism. A set of successive frames is captured and converted to NumPy arrays. Each array is paired with a label, which is essentially the human driving input, and all these examples are saved into a lightweight database. The neural network is trained in OpenCV using the back-propagation method. Once training is complete, the learned weights are stored in an XML file, and for prediction the neural network is loaded from this XML file. A sketch of this training step follows.
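A minimal sketch of the training step with the MLP implementation in OpenCV 2.4 (the npz keys, layer sizes and learning parameters here are illustrative):

    import cv2
    import numpy as np

    # X: flattened grayscale frames; y: one-hot labels (e.g. left, right, forward, reverse)
    data = np.load('training_data/data.npz')       # file name and keys illustrative
    X = data['train'].astype(np.float32)
    y = data['train_labels'].astype(np.float32)

    model = cv2.ANN_MLP()
    model.create(np.int32([X.shape[1], 32, 4]))    # input, hidden and output layer sizes

    criteria = (cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 500, 0.0001)
    params = dict(term_crit=criteria,
                  train_method=cv2.ANN_MLP_TRAIN_PARAMS_BACKPROP,
                  bp_dw_scale=0.001,               # back-propagation learning rate
                  bp_moment_scale=0.0)             # momentum term
    model.train(X, y, None, params=params)
    model.save('mlp_xml/mlp.xml')                  # weights stored as XML for later prediction

    # Prediction: feed one flattened frame and take the strongest output node.
    _, resp = model.predict(X[:1])
    direction = np.argmax(resp)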


CHAPTER 4

RESULTS AND DISCUSSIONS

4.1 SYSTEM BASED DIVISION AND RESULTS

We use two systems: the laptop and the Raspberry Pi. On the laptop we use two IDEs: PyCharm (for the NN algorithms) and the Arduino IDE (for the code that sends commands from the laptop to the RC controller). On the Raspberry Pi we run the code for video streaming and sensor data collection and transmission in Python 2.3.

Fig: 4.1(a) PyCharm IDE

Fig: 4.1(b) Arduino IDE


Fig: 4.1(c) Python Shell of Raspberry PI

• Code executed on the Raspberry Pi (in the Python 2.3 environment):

o stream_client.py: streams video frames in JPEG format to the host computer

o ultrasonic_client.py: sends distance data measured by the sensor to the host computer

• Code executed on the Arduino (in the Arduino IDE):

o rc_keyboard_control.ino: acts as an interface between the RC controller and the computer, and allows the user to send commands via the USB serial interface

• Code executed on the computer (PyCharm IDE):

o cascade_xml/: trained cascade classifier XML files

o chess_board/: images for calibration, captured by the Pi camera

o training_data/: training data for the neural network in npz format

o training_images/: saved video frames from the image training data collection stage (optional)

o mlp_xml/: trained neural network parameters in an XML file


o picam_calibration.py: Pi camera calibration; returns the camera matrix (optional)

o collect_training_data.py: receives streamed video frames and labels frames for later training

o mlp_training.py: neural network training

o rc_driver.py: a multithreaded server program that receives video frames and sensor data, and allows the RC car to drive itself with stop sign detection, traffic light detection and front collision avoidance capabilities

• For testing purposes:

o rc_control_test.py: RC car control with the keyboard

o stream_server_test.py: video streaming from the Pi to the computer

o ultrasonic_server_test.py: sensor data streaming from the Pi to the computer

Step-by-step process:

1. Flash the Arduino: flash "rc_keyboard_control.ino" to the Arduino and run "rc_control_test.py" to drive the RC car with the keyboard (for testing purposes).

2. Pi camera calibration: take multiple chessboard images with the Pi camera at various angles and put them into the "chess_board" folder, then run "picam_calibration.py"; it returns the camera matrix, whose parameters are used in "rc_driver.py" (optional).

3. Collect training and testing data: first run "collect_training_data.py", then run "stream_client.py" on the Raspberry Pi. The user drives the RC car with the keyboard; frames are saved only when there is a key press. When finished driving, press "q" to exit, and the data is saved as an npz file.

4. Neural network training: run "mlp_training.py"; depending on the parameters chosen, training will take some time. After training, the model is saved in the "mlp_xml" folder.

5. Cascade classifier training (optional): trained stop sign and traffic light classifiers are included in the "cascade_xml" folder.

6. Self-driving in action: first run "rc_driver.py" to start the server on the computer, then run "stream_client.py" and "ultrasonic_client.py" on the Raspberry Pi.


• RESULTS

1. The video stream captured by the Pi camera connected to the Raspberry Pi is displayed on the laptop screen. The Raspberry Pi sends this video stream to the laptop through the TCP server, using client-host pairing.

Fig: 4.1(d) Video Streaming received on laptop

Fig: 4.1(e) Console screen showing successful connection


2. Detection and recognition of a green traffic light; the command keeping the vehicle moving forward appears on the console screen.

Fig: 4.1(f) Green light detection results

Fig: 4.1(g) Forward command of green light on Console screen


3. Detection and recognition of a red traffic light, with the results on the console screen.

Fig: 4.1(h) Red traffic light detection results

Fig: 4.1(i) Recognition of red light on console


4. A car appeared as an obstacle in the path of our trained vehicle. The vehicle detected the obstacle ahead, and the distance to the obstacle is shown on the console screen.

Fig: 4.1(j) Obstacle car is ahead

Fig: 4.1(k) Obstacle is detected on console screen


5. A stop sign is detected but is still more than 25 cm away, so the "Forward" command is shown on the screen. The vehicle keeps moving forward until the distance becomes less than 25 cm.

Fig: 4.1(l) Stop sign is at the distance of 27.1cm

Fig: 4.1(m) Forward command on the console screen.


6. The stop sign is less than 25 cm away; the vehicle stopped and started timing the stop. The results are shown on the console screen.

Fig: 4.1(n) Stop sign detection

Fig: 4.1(o) Stop time calculation on the console screen


7. When the distance is less than 25 cm, the vehicle stops and waits for 5 seconds; after 5 seconds it moves "Forward" again. The results are shown below.

Fig: 4.1(p) Stop sign is in less than 25cm

Fig: 4.1(q) Forward Command after 5 seconds


4.2 PROBLEMS ENCOUNTERED

This thesis also reports the typical problems encountered in utilizing machine

learning technology and developing a prototype intelligent vehicle. These problems are:

4.2.1 SOFTWARE PROBLEMS

• Installation of Python library packages:

While working in Python, we needed to install some library packages from the Python Package Index. Unfortunately, this is not as simple as it could be. A few of the Python packages were easily installed with pip, the preferred installer program, which is included by default with the Python binary installers. But some Python packages have complex binary dependencies and could not be easily installed using pip directly; for those we often preferred to install the packages by other means rather than attempting to install them with pip.

4.2.2 HARDWARE PROBLEMS

• Arduino-controller interfacing

Not all RC controllers can be interfaced directly with an Arduino, and controllers often do not behave reliably. While interfacing, we therefore used external circuitry consisting of an opto-isolator to make the controller work properly.

• Speed

The built-in speed of the RC vehicle was too fast to capture proper frames and to act on the decisions taken by the NN algorithms, so reducing the speed of the vehicle was another challenge.

• Power

The RC vehicle's built-in power source was not sufficient to run the vehicle, so external battery cells were used to provide enough power; we used LiPo battery cells for that purpose.


4.3 LIMITATIONS

Adverse weather can be a source of major frustration for human drivers and a significant factor in the number of accidents that occur.

Every technology has limitations. The approach this project uses is camera-based, and a camera cannot deal with snow, rain, darkness or fog very well; like a human, it suffers from reduced visibility. The approach is cost-effective but becomes untrustworthy in these situations. Another limitation of this project is the need for a strong Wi-Fi connection to make communication possible between the Raspberry Pi and the laptop.

4.4 CONCLUSION

ITS have attracted increasing attention in recent years due to their great potential in

meeting driving challenges.

• Scope of the project

As technology improves, a vehicle will become just a computer with tires. Driving on roads will be like surfing the Web: there will be traffic congestion but no injuries or fatalities. Advanced intelligent systems and new sensing technologies can be highly beneficial, building on the large body of work on intelligent vehicles.

• Advantages

The main advantages of an intelligent vehicle are that it allows the driver and the other people in the vehicle to reach their destination quickly, safely and in a more relaxed frame of mind. Above all, on routine journeys, in traffic jams, on crowded motorways with speed restrictions and at accident black spots, an intelligent vehicle can assist the driver and relieve them of tedious routine tasks. The intention, however, is not to deprive drivers of the experience and pleasure of driving for themselves: our intelligent system offers to assist and unburden the driver.


4.5 FUTURE WORK

Reliable intelligent vehicles and safety warning systems still have a long way to go. However, as computing power, sensing capacity, and wireless connectivity for vehicles rapidly increase, the concept of intelligent driving is speeding towards reality. These findings suggest that research into intelligent systems within the ITS field is a short-term reality and a promising research area, and these results constitute a starting point for future developments. Some suggestions for extensions and/or future related work are summarized below:

• New sensory systems and sensor fusion should be explored to feed additional information into the control system.

• This work can be extended to include different maneuvers, making the driving system capable of dealing with all driving environments.

• Future work may also include an algorithm for the autonomous formation of cooperative driving.
