
PROJECT IN MECHANICAL ENGINEERING or MECHATRONICS

Group Members: B. Eastwood (Convener)

M. McCauley

A. Carrell

J. Mills

Supervisor: A. Veale

Department of Mechanical Engineering

University of Auckland

05 June 2015

AUTONOMOUS SIMULTANEOUS LOCALIZATION

AND MAPPING ROBOT

Group 14

Project Report MT00-2015


ABSTRACT

The project task is to design and build an autonomous SLAM robot with the intent of completely covering an unknown environment while also mapping it. An omnidirectional robot powered by an Arduino processor, along with a number of displacement sensors and actuators, has been provided for the purpose of completing this task.

A thorough background in the SLAM concept was developed over the course of this project, as the goal was to implement a full working model. Two separate functional models were developed for common SLAM algorithms: wall detection and obstacle identification. It was eventually discovered that, due to limitations in processor capability, sensor performance and accuracy, as well as time constraints, integration of these algorithms into a single system was not feasible. Thus a model better suited to these limitations was produced.

A full odometry model was developed to provide motion control of the robot through linear and rotational speed commands. This was used to implement a positional controller, to which a desired position could be given. The combination of speed inputs and position targets was used to direct the robot along a pre-planned path to cover as much of the board as possible.

Higher levels of control were added to specify the robot's behaviour in certain states, including finding a corner at the beginning of the run and orienting to a wall when one was encountered. A basic level of obstacle avoidance was tested and implemented; however, again due to time constraints, it was not present in the final system.

Overall, separate functionality for various possible robot situations was developed, and acceptably accurate motion control was achieved, but planning did not account for the time it would take to integrate these separate subsystems. Thus the performance of the final system was far less than the potential of the combined subsystems.


Table of Contents

Table of Figures
Table of Tables
Glossary of Terms
Abbreviations
1 Introduction
2 Project Description and Specifications
  2.1 Literature Review
3 SLAM
  3.1 Introduction to SLAM
  3.2 SLAM Summary
  3.3 SLAM Models
  3.4 SLAM Concepts
  3.5 Intended SLAM
4 System Design and Integration
  4.1 Initial System
  4.2 Additional Hardware
    4.2.1 Ultrasonic Range Finder
    4.2.2 Servo
    4.2.3 IR Sensors
    4.2.4 MPU
  4.3 Final System
5 Software Design and Implemented Algorithms
  5.1 Mecanum Motor Modelling & Odometry
    5.1.1 Base Level Motor Control
    5.1.2 RPM Conversion to PPM
    5.1.3 Linear Speed Conversion to RPM
    5.1.4 Heading Control
    5.1.5 Drift Rejection
    5.1.6 Speed Correction Factor
    5.1.7 Coercion into Range for Odometry
    5.1.8 Odometry Calculations
    5.1.9 Position Controller
    5.1.10 Accuracy Limitations
  5.2 Landmark Extraction
    5.2.1 Spike
    5.2.2 RANSAC
  5.3 Landmark Association
  5.4 Mapping
  5.5 Path Planning
    5.5.1 Finding the Initial Corner
    5.5.2 Map Covering
    5.5.3 Obstacle Avoidance Algorithm
  5.6 Arduino Program Structure
    5.6.1 General Structure
    5.6.2 Robot Motion
    5.6.3 Main Robot Motion Functions
6 Testing & Results
  6.1 Heading Control
    6.1.1 Method
    6.1.2 Results
  6.2 Linear Speed Testing
  6.3 Positional Controller Testing
  6.4 Spike Testing
  6.5 RANSAC Testing
7 Discussion
8 Recommendations
  8.1 Algorithm Development
  8.2 Sensor Improvements
  8.3 Proposed Odometry Improvements
9 Conclusions
10 References

Table of Figures

Figure 1: PuTTY Output for Matrix Calculations
Figure 2: Mecanum Wheel Schematic
Figure 3: Possible Ultrasonic Range Patterning
Figure 4: Medium IR Sensor Characteristic
Figure 5: Final Robot Configuration
Figure 6: Relationship Between PPM Sent to Motors and Output RPM
Figure 7: Right Rear Motor Forward RPM to PPM Model
Figure 8: Mecanum Inverse Kinematics
Figure 9: Right Rear Inverse Kinematics Calculation
Figure 10: LabView Ground Station Front Panel
Figure 11: Robot Locating a Corner and Traversing First Wall
Figure 12: General Traversing and Strafing Plan
Figure 13: Obstacle Avoidance Planning with Return to General Motion
Figure 14: Robot Scanning then Aligning to a Wall
Figure 15: Initial Position for Heading Control Test
Figure 16: Heading Error for 'X' Motion
Figure 17: Speed Correction Factors
Figure 18: RANSAC Test Data Output
Figure 19: IR Sensor Corner Inaccuracies Visualisation

Table of Tables

Table 1: IR Range Error Testing
Table 2: Heading Error with & without a Controller
Table 3: Positional Controller Errors


Glossary of Terms

Yaw: Rotation about the axis perpendicular to the plane of motion.

Drift: Unwanted motion perpendicular to the desired direction.

Omnidirectional: Allowing movement in all directions.

Abbreviations

SLAM: Simultaneous Localization And Mapping

PPM: Pulse Position Modulation

MPU: Motion Processing Unit

PID: Proportional Integral Derivative

IR: Infra-Red

EMI: Electromagnetic Interference

I2C: Inter-Integrated Circuit

EKF: Extended Kalman Filter

IMU: Inertial Measurement Unit

IDE: Integrated Development Environment


1 Introduction

For complete autonomy, a robot needs to be able to navigate while simultaneously localizing

itself within an unknown environment. This has been one of the significant roadblocks for

autonomous systems and is still a major area of research to date. This process, known as

SLAM, is the concept of simultaneously locating a robot within an unknown environment,

and mapping that environment. It can be adapted in many different ways depending on the

situation. This is the concept that will be used to complete the project task given.

The aim of the project is to design and build an Autonomous SLAM robot for “room vacuum

cleaning”. The robot is to cover a set area while avoiding obstacles in the shortest time

possible. The loosely defined, open-ended nature of this problem leaves a vast number of ways

in which the task can be completed. Both systems thinking and integration will become key

factors during the course of the project.

An omnidirectional robot, powered by an Arduino Mega 2560 has been provided, for the

purpose of completing the task. Multiple paths were attempted throughout the progression of

the project. This report discusses the approach taken to complete the assigned task including

concepts which proved unnecessary or unusable. The report covers the implemented

algorithms, receiving sensor data, and sending the robot appropriate commands.

2 Project Description and Specifications

The Autonomous SLAM Robot design project draws on knowledge from a wide variety of

subjects including mechanical, electrical, electronic engineering, and computer technology.

An in depth understanding of motion actuation, sensing, signal processing, programming, and

microcontrollers is needed in order to complete the required task.

The main project task includes designing an autonomous SLAM robot capable of deciding its

own path of movement, while building a map of the surroundings. A number of cylindrical

objects are to be placed randomly about the 1200 x 2000 mm board. Collision with any of

these objects or the walls, is not permitted. The robot must record the map of the path

travelled in order to show complete coverage of the board. Assessment will be based on area

covered, and the time taken to complete this.

A mobile, agile, and highly manoeuvrable robot chassis has been provided. The chassis has

suitable space for all the necessary components needed. Sensors, motors, control circuits and

interfaces have also been provided. Thus, to complete the project task, the microcontroller

needs to be programmed to read sensor data and apply appropriate motor signals to affect the

robot’s behaviour accordingly.

2.1 Literature Review

It has become generally acknowledged that a truly autonomous robot must be able to

navigate, travel and arrive at a location, independent of external commands [1]. When

extended to an unknown environment, this requires both mapping, and localisation, within

that map. If this is performed simultaneously, it is called SLAM [2, 3, and 4]. SLAM

however is not a simple task as it requires a very accurate knowledge of sensor

characteristics, and sensor errors that are introduced in different conditions [5]. Furthermore,

the use of an extended Kalman filter (EKF), requires large matrix operations that can be very

computationally expensive.


Beevers and Huang [2] utilise low cost IR sensors and a microcontroller to implement SLAM,

recognising the inaccuracy of such sensors, especially their inability to completely identify

map features in one scan. They, along with Abrate et al. [3], utilise the idea of a ‘multiscan’,

which combines data from several scans on the same map area to improve sensor

performance. Brunner et al. [5] have characterised general IR sensor characteristics, and

present conditions where IR sensors work far better than vision systems or ultrasonic range

finders. Furthermore, they conclude that if appropriate sensor fusion or data manipulation is

performed, adequate readings can be taken from most sensors in a wide variety of conditions.

Huang and Song [1] use simplified SLAM algorithms and increase the reliance on previous

environment scans to reduce the complexity of EKF calculations. Essentially, by restricting

the scan range and only scanning new areas, the amount of data that must be processed by the

EKF is reduced, also reducing computational requirements. Beevers and Huang [2] also

simplify the EKF calculations by exploiting symmetry of covariance matrices, values that are

multiplied by zero and using lookup tables to avoid full matrix multiplication. This drastically

reduces computational load, and allows SLAM to be executed even on a microcontroller.

Finally, Wang et al. [4] have developed a linearisation technique to manipulate motion

models in preparation for use with an EKF. These papers [1, 2, and 4] prove that SLAM, even

with an EKF, can be performed with limited computational power. Manipulation and

simplification of SLAM algorithms and equations is required, however, generally by exploiting characteristics of these equations to avoid superfluous calculations.

Sheu et al. [6] acknowledge the strict requirements of indoor navigation, and accordingly

have developed methods of speed and pose correction, to ensure actual motion matches

desired. Specifically, they have used a gyroscope to give heading readings, and calculated the

error relative to an initial heading to generate additional rotational movement of the robot, supplementary to the primary linear motion.

3 SLAM

3.1 Introduction to SLAM

Localization is the process of a robot moving through a known environment, while using

knowledge of the environment, as well as on-board sensors to find its absolute position.

Mapping is the process where absolute position of the robot is known, and from on-board

sensors, the environment through which it is navigating is modelled. Each of these individual problems is conceptually straightforward when the other is solved. However, when a robot is placed in an unknown

environment, with no knowledge of its own position, both must be computed simultaneously,

thus making the task much more difficult. This problem is known as SLAM. The robot

attempts to move through an unknown environment, using imperfect motion, while also

trying to build a map of this environment, and locating itself within the map. It is currently

one of the biggest problems faced by completely autonomous systems. SLAM is made up of

multiple sub-problems, each of which is very difficult for engineers to solve.

3.2 SLAM Summary

SLAM is one of robotics' largest problems, and consequently a large amount of research is present for this particular field of interest. Because of the broad scope of SLAM, many different solutions exist at various levels of robustness, difficulty, and computational

expense. There are many different parameters that must be taken into consideration, each of

which influence the required robustness of the implemented algorithm. For example, if

landmarks in the environment are incredibly sparse, then a more accurate motion model is


required. Conversely, if there are many different obstacles in the environment then accurate

sensors are required to not only distinguish them, but to help in associating landmarks to

others previously seen. All implementations of SLAM are based on a key architecture which

is described in Section 3.3, SLAM Models.

3.3 SLAM Models

SLAM is the process of a robot moving through an unknown environment with uncertain

motion. When the process begins, the robot is either given its initial position, or no

information at all. In either case the robot builds its map relative to this initial position.

For a robot operating on a single flat plane, the location can be described by a 3×1 vector containing the robot's x and y positions as well as its rotation about the z-axis:

𝑥𝑡 = [𝑥, 𝑦, 𝜃]ᵀ

Over the course of SLAM the robot’s path for each discrete time step, 𝑡, can be described by:

𝑋𝑡 = {𝑥0, 𝑥1, 𝑥2, … , 𝑥𝑡}

The robot’s first location is known, but the rest are only correct to a certain degree. Full

SLAM involves evaluating the most likely robot path over the entire run, while online SLAM

is the simplification of this problem that only tries to estimate the current robot position.

In order to evaluate the robot path, most implementations use a probabilistic approach that

fuses data between odometry and sensor readings. The odometry of the robot uses a physical

motion model to estimate the current position of the robot within the map. This can either be

through wheel encoders, on-board motion sensors or assuming motion commands give

accurate odometry.

𝑢𝑡 characterises the motion between time 𝑡 − 1 and time 𝑡; over the course of the implementation, the sequence of controls is represented by:

𝑈𝑡 = {𝑢1, 𝑢2, 𝑢3, … , 𝑢𝑡}

For complete noise-free motion, 𝑈𝑡 would be enough to infer position, but for almost every

physical solution there exists some uncertainty between desired or measured motion and the

actual motion. Because of this, the robot also needs to sense the environment around it.

The map, denoted 𝑚, represents the true map of the environment such as landmarks, objects,

surfaces etc. This is regularly updated during each discrete time interval of the robot and is

typically assumed to be time invariant. This means the map often deals with passive rather

than active objects. Measurements of the surroundings give the relationship between 𝑚, and

the robot location 𝑥𝑡. 𝑍𝑡 is used to describe observations of the map, and is the combination of

all prior observations:

𝑍𝑡 = {𝑧1, 𝑧2, 𝑧3, … , 𝑧𝑡}

A SLAM model can thus be considered a problem of evaluating a map, 𝑚, together with the sequence of robot locations 𝑋𝑡, using the data obtained from odometry and measurements.

As previously mentioned, SLAM can be broadly broken into two areas. One area which aims

to retrieve the robot's full path from all previous readings, and another which only obtains the

current robot position. The first, full SLAM, obtains more accurate results. However this

method uses vast amounts of computational power which grows as the algorithm continues to


run. Often this method of SLAM is used in conjunction with a stationary ground station, to which previous data is sent, so that continual on-board processing resource is not required. Full SLAM can be represented as shown:

𝑝(𝑋𝑡, 𝑚 | 𝑍𝑡 , 𝑈𝑡)

Online SLAM is the method of only calculating the current pose of the robot from the last

series of measurements and locations. This is a much simpler algorithm, and the computation

effort stays constant independent of how long the algorithm is run for. This is represented as:

𝑝(𝑥𝑡, 𝑚 | 𝑍𝑡 , 𝑈𝑡)

Algorithms for this are usually incremental and process one point at a time.

In order to solve SLAM, the robot needs at least two mathematical models. Firstly, a motion

model to relate the encoder values from the motors or voltages sent to those motors to

displacement. This mathematical model accounts for noise to give a probabilistic distribution

on where the robot is located. This function is denoted:

𝑝(𝑥𝑡 | 𝑥𝑡−1 , 𝑢𝑡)

The second model, relates the measurements to the environment and the robot location. This

function also uses probability theory and is shown below:

𝑝(𝑧𝑡 | 𝑥𝑡 , 𝑚)
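Although the report does not state it explicitly, these two models are typically combined through the standard Bayes-filter recursion for online SLAM, which shows how odometry and measurements interact:

𝑝(𝑥𝑡, 𝑚 | 𝑍𝑡 , 𝑈𝑡) ∝ 𝑝(𝑧𝑡 | 𝑥𝑡 , 𝑚) ∫ 𝑝(𝑥𝑡 | 𝑥𝑡−1 , 𝑢𝑡) 𝑝(𝑥𝑡−1, 𝑚 | 𝑍𝑡−1 , 𝑈𝑡−1) 𝑑𝑥𝑡−1

The motion model propagates the previous belief forward, and the measurement model re-weights the result; the EKF discussed below is one way of carrying out this recursion for Gaussian beliefs.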

Since in this problem the exact robot location is unknown, there exist three main paradigms for solving the SLAM problem: Extended Kalman Filter based, particle filter based, or graph based.

3.4 SLAM Concepts

SLAM consists of multiple different facets. These facets include:

Landmark extraction - obtaining information on the points of interest of the robot’s

environment from raw sensor data.

Landmark association - recognized as one of the hardest problems in SLAM. This

problem attempts to associate previously observed landmarks in the environment with landmarks observed after the robot's position has changed.

State estimation - attempting to accurately locate the robot given the motion and

orientation of the robot.

The final major subset of SLAM is how the associated error of the landmarks is updated given how often they have been seen. Many of these different problems depend on the environment and the sensor data; for the purposes of this discussion, an indoor 2D robot is used.

The SLAM calculation process occurs at discrete time steps, and attempts to take in all of the

information from all different facets of the robot. A good SLAM algorithm can use all of this

information and evaluate exactly where the robot is within the environment. An EKF

integrates the two models required for a SLAM algorithm. It takes in the statistical noise for

the sensor readings and state estimation and converts this into the most accurate estimate of

the robot’s position. The EKF also keeps track of the uncertainty in this estimate which grows

as the robot continues to move, and decreases as more landmarks are re-observed.


Unfortunately, an EKF requires a very large amount of computational effort. It involves many

large matrix multiplications which grow rapidly with the number of landmarks in the

system.

The EKF involves many different matrices which are all used in different ways to update the

robot position. The first matrix, X, contains the information of the robot state, as well as the X

and Y positions of all of the observed landmarks. This matrix is a single-column vector with 3 + 2𝑁 rows, where 𝑁 is the number of observed landmarks. The next of the

matrices for the EKF is the covariance matrix, 𝑃. This is a very important matrix in the

performance of the system. The covariance is a measure of the correlation between any two

variables and is used to measure the linear dependence between variables. For the purposes of

SLAM the covariance matrix contains the covariance between all elements, and therefore it

has 3 + 2𝑁 columns and 3 + 2𝑁 rows. The Kalman gain matrix, 𝐾, is computed every

discrete time interval to evaluate how much the newly observed landmark values are trusted.

This Kalman gain is calculated using the Riccati equation. The equation considers the levels of noise present in both the range measurement models and the odometry models to give the weighting for each value. This value provides the optimal compromise between how much the state estimation is trusted and how much the landmark observations are trusted. For example, if

the sensor observations are very poor, and the state estimation is very accurate then the

Kalman gain is very low and vice versa. This matrix has two columns, and 3 + 2𝑁 rows,

where the first column represents the range gain and the second column the bearing gain. This

allows a differing amount of accuracy to be placed on either measurement. There are multiple

extra matrices used for the EKF process, and all are needed for this implementation.

For the purposes of the robot vacuum cleaner project, and the need to deal with three

obstacles as well as four corners, the largest matrix calculation needing to be performed is a 17 × 17 matrix multiplication. As the processing power available for this

project is relatively low, it was decided to assess the speed required to perform this

calculation, checking the feasibility, before continuing this approach.

Just one calculation with these 17 × 17 matrices takes 100 ms. The time for calculation is shown in Figure 1. Each 17 × 17 matrix of floats is 1156 bytes, while the Arduino Mega only has 8192 bytes of dynamic memory, so this one matrix alone takes up 14% of the processor's dynamic memory. As this calculation is only a small portion of those the SLAM process needs to perform, and given the limited dynamic storage of the Arduino Mega, it was decided to abandon the attempts at an EKF and focus on an implementation of SLAM that is possible with the given hardware.

Figure 1: PuTTY Output for Matrix Calculations
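A minimal sketch of the kind of benchmark described is shown below, assuming a naive triple-loop multiplication timed over the serial port (the report does not give its exact test code):

    #define N 17

    float A[N][N], B[N][N], C[N][N];  // 3 x 1156 bytes, roughly 42% of the Mega's SRAM

    void multiply() {
      for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
          float sum = 0.0f;
          for (int k = 0; k < N; k++)
            sum += A[i][k] * B[k][j];
          C[i][j] = sum;
        }
    }

    void benchmark() {                // call from setup() after Serial.begin()
      unsigned long start = micros();
      multiply();
      Serial.print(F("17x17 multiply time (us): "));
      Serial.println(micros() - start);
    }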

3.5 Intended SLAM

The processing power and dynamic storage of the supplied processor were such that the EKF implementation of SLAM would not be possible. A simpler SLAM

algorithm needed to be designed which would not need complex matrix multiplications, but

would still provide an accurate knowledge of the robot’s position which would be updated

both through state estimation and landmark locations.


It was decided to use an algorithm which was based on the SLAM for Dummies algorithm

proposed by Riisgaard and Blas [7], but instead of an EKF, a weighted average would be used

between the state estimation values and the values obtained from the landmark association

algorithms. A Spike algorithm, discussed in Section 5.2.1 Spike, was intended to be used for

extracting landmarks from the raw sensor data. As well as Spike, the RANSAC algorithm

obtained from the same reference [7] was intended to be used to accurately map the

environment and approximate lines for the walls of this environment.

The intersection points of these walls, as well as the extracted landmarks from Spike, were to

be filtered through a landmark association algorithm which takes the observed landmarks

over multiple iterations of the algorithms and only considers them true landmarks when they

have been seen multiple times. These values would be used to create an accurate map of the

system, and through this a simple path planning algorithm described in Section 5.5 was

intended to be used for moving through the environment.

The RANSAC algorithm was also to be used to align the robot to the walls, to correct gyroscope drift.

4 System Design and Integration

4.1 Initial System

As described in Project Description and Specifications, the robot chassis has been provided.

The chassis includes:

4 VEX motors and motor drivers

4 Mecanum wheels

An Arduino Mega 2560

A Bluetooth module and computer dongle

A 2200 mAh LiPo battery

The Mecanum wheels, pictured in Figure 2, are the source of the robot's manoeuvrability. By orienting the wheels in a specific way relative to the chassis, complete omni-directional

motion can be achieved. The robot can move in any

direction within the XY plane, as well as rotate about the

Z axis. The wheels are controlled by VEX motors which

in turn are controlled by their respective motor drivers

through a PPM signal. These motors are all powered by the 2200 mAh LiPo battery.

Figure 2: Mecanum Wheel Schematic

An Arduino Mega 2560 is used as the main processing

unit for the system. The microcontroller is based on the ATmega2560, and is configurable

through the Arduino IDE. Wireless serial communication to the microcontroller is made

possible through the Bluetooth module and Bluetooth dongle.

4.2 Additional Hardware

4.2.1 Ultrasonic Range Finder

The LV-MaxSonar-EZ0 is a high-performance sonar range finder. The sensor takes an input voltage of 2.5-5.5 V and provides very short to long-range detection. The sonar detects


objects from 6 inches to 254 inches with 1-inch resolution at 20 Hz. The sonar can output in analog, pulse-width, or serial form.

The sonar has four modes of use, each of which changes the beam pattern. The approximate patterns are displayed in Figure 3.

Figure 3: Possible Ultrasonic Range Patterning

From initial testing, the sonar readings were found to be quite inaccurate. With the manufacturer claiming 1-inch resolution, the power supply was suspected to be the issue. Thus,

a low-pass filter was added to clean the sensors

input power and stabilize range data. The

manufacturer recommended a 100-ohm resistor

connected in series with the voltage line, and a

100-microfarad capacitor connected in parallel

between voltage and ground. This resulted in a

cut-off frequency of:

𝑓𝑐 = 1 / (2𝜋𝑅𝐶) = 1 / (2𝜋 × 100 × 100 × 10⁻⁶) = 15.92 Hz

The results of this low-pass filter seemed to aid the sensor to some degree, but further inaccuracies made it impractical for use, including the lower end of its range being 6 inches.

4.2.2 Servo

The provided actuator is an A0090 9 g microservo. The servo motor can be set to move to any rotation angle within its specified range of 180°, but testing proved the effective range to be 167°. The servo motor does not have any form of feedback, but keeping the last position sent to the motor in memory was a sufficient measure.

4.2.3 IR Sensors

Supplied Hardware

Three possible IR proximity sensors were made available for the project, designated short,

medium and long based on their respective relative ranges:

Short: 4-30 cm

Medium: 10-80 cm

Long: 20-150 cm

The sensors were sourced from Sharp, a manufacturer of optoelectronic devices, along with associated data sheets for each type. The sensors draw (ideally) 5 V and output an analogue voltage which is generally less than 3.5 V. The sensors are connected to the Arduino processor by three pins: ground, Vcc, and a third from which the output can be read. Analog pins 13, 14, and 15 are used for the three sensors.

Sensor Characterisation

Prior to full integration and use of the sensors for robot tasks, such as mapping or obstacle

avoidance, readings had to be generated and interpreted. Furthermore, the reliability and


accuracy of readings had to be defined and assessed for their level of usability. Taking initial

readings presented the following issues:

The relationship between output voltage and distance to object is not one-to-one. This can be seen in Figure 4, as a reading of 2 V could be interpreted as 3.6 or 11.1 cm. This is the reason that the sensors have a minimum range; anything measured within that range appears to be further away than it actually is.

There is a nonlinear relationship even in the usable range. This requires additional curve fitting, using either a high-order polynomial or a power model.

Figure 4: Medium IR Sensor Characteristic

In order to remove the inaccuracies caused by the

obstacles within the lower range limit, the

different ranges of sensors were modelled and

compared to ascertain whether the use of two

sensors could provide a level of redundancy. This

would work by eliminating one reading if a

second sensor could identify when the first was

out of range. This line of development was scrapped as the modelling and testing was too

difficult, and the use of redundant sensors would be inefficient. The sensors were placed on

the robot so that objects could not come into the lower range, thus eliminating the source of

error.

Software Interpretation

Code was sourced from the ‘Arduino Playground’, an open source collection of repositories

for use with Arduino processors. The specific library used is called 'Sharp IR Sensors library for Arduino', produced and copyrighted by Dr. Marcal Casa-Cartagena in 2014. Whilst this

code performed relatively well, modifications and additions were made to improve the

accuracy of the readings.

The basic code structure was maintained throughout use of the library, and consists of one

‘SharpIR’ class, which has functions to read, interpret and average the readings from the pin,

designated at initialisation. The reading proceeds in two stages, as follows:

Power Modelling

A power model is fitted to the readings of each sensor to determine the range in mm.

Medium: Distance = 277.28 × (Analog V / 1000)^(−1.2045)

Long: Distance = 615.73 × (Analog V / 1000)^(−1.1068)

Short: Distance = 103.98 × (Analog V / 1000)^(−0.924)


The long and medium power models were supplied with the Sharp IR Sensors Library, and

appeared to provide adequate results. These power models also solved the aforementioned

issue of the nonlinear relationship between voltage and distance.

The short-range IR sensor model was not covered in the library, and thus had to be produced through modelling. This was done using Excel, fitting a power model to follow the implemented

system as closely as possible. Short sensors were not used in the final solution, due to their

ineffective range.
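As an illustration, the medium-range model above could be applied to a raw Arduino reading as follows (the pin handling and scaling assume a 5 V supply and the 10-bit ADC; the function name is illustrative, not the library's API):

    // Convert a raw ADC reading on the given pin to range in mm, using the
    // medium-range power model: Distance = 277.28 * (V[mV]/1000)^-1.2045
    float mediumIRDistanceMM(int pin) {
      int raw = analogRead(pin);                   // 0-1023 over 0-5 V
      float millivolts = raw * (5000.0 / 1023.0);  // ADC counts to mV
      return 277.28 * pow(millivolts / 1000.0, -1.2045);
    }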

Multiple Sample Averaging

As the IR sensors tend to fluctuate for a single reading, the sourced code contains averaging

functionality, taking a number of readings and ensuring that they agree to within a certain

percentage. Values for the number of readings and the percentage tolerance are specified at initialisation; 50 and 95% are used in this project, respectively.
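The sketch below illustrates the averaging idea (it is not the library's exact internals): take a batch of readings and average only those that agree with the median to within the tolerance.

    const int   NUM_READINGS = 50;
    const float TOLERANCE    = 0.95;  // readings must agree to within 95%

    float averagedReading(int pin) {
      int samples[NUM_READINGS];
      for (int i = 0; i < NUM_READINGS; i++) samples[i] = analogRead(pin);

      // Crude insertion sort so the median is the middle element
      for (int i = 1; i < NUM_READINGS; i++)
        for (int j = i; j > 0 && samples[j - 1] > samples[j]; j--) {
          int t = samples[j]; samples[j] = samples[j - 1]; samples[j - 1] = t;
        }
      int median = samples[NUM_READINGS / 2];

      // Average only the samples agreeing with the median within tolerance
      long sum = 0; int count = 0;
      for (int i = 0; i < NUM_READINGS; i++)
        if (samples[i] >= median * TOLERANCE && samples[i] <= median / TOLERANCE) {
          sum += samples[i]; count++;
        }
      return count > 0 ? (float)sum / count : (float)median;
    }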

IR Testing & Calibration

The sensors were tested, statically, on a material similar to that of the final ‘room’. This was

achieved by placing the sensors a known distance from an object, taking three readings and

averaging them. The results are presented in Table 1, with all distances in cm. The sensors are

relatively accurate, displaying an error of ≤5 cm for most values. The long-range sensors are more

accurate, especially in the 20-55cm range. The inaccuracy at low range can readily be seen

here, as 10cm reads as 22 from the long range sensor.

Table 1: IR Range Error Testing

Actual:       5   10  15  20  25  30  35  40  45  50  55  60  65  70
Long Range:   -   22  20  22  25  29  34  40  46  51  57  64  70  75
Short Range:  5   8   13  17  22  27  32  37  43  48  -   -   -   -

Using the process described above to convert the output voltage to range still gave invalid readings. From further inspection it was found that these values were offset from the true values by a roughly constant amount. Thus a function was developed to convert the output reading to the correct value. The function varied between sensors and therefore had to be developed individually. Output readings were mapped to the actual measured values and a trendline was fitted using Excel. A linear trendline was sufficient, with an R² value of at least 0.99; this function was then implemented in Arduino code, returning the corrected values.
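As a minimal illustration of this correction step (the slope and offset below are hypothetical placeholders; the report does not list the fitted values):

    // Hypothetical per-sensor linear correction from the Excel trendline:
    // corrected = a * reading + b. Coefficients are placeholders.
    const float LONG_IR_A = 0.95;
    const float LONG_IR_B = -20.0;  // mm

    float correctLongIR(float raw_mm) {
      return LONG_IR_A * raw_mm + LONG_IR_B;
    }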

3D printed tunnels were initially used to increase the accuracy of the IR readings. This was found to prevent random spurious readings, but also introduced other problems to the sensor. Measurements that were known to be flat, i.e. walls, appeared curved when mapped onto an XY graph. The reason for this error is still unknown.

4.2.4 MPU

MPUs are rapidly becoming an essential hardware component in the majority of ‘smart’

devices, as users find added value in the ability to measure acceleration, tilt, angular velocity,

and orientation. The applications of these readings are nearly unlimited, but all relate to

assessing the pose or change in pose of the system to which it is attached. The functional unit

of an MPU is an IMU, which typically combines an accelerometer, gyroscope and

magnetometer for linear and rotational measurements. In this case, the IMU is part of a larger


integrated circuit, which reads the sensors, processes the signal, and stores data until it is read

by the main processor. The MPU consists of the IMU, integrated circuit and peripheral

communication system.

The MPU provided is the MPU-9150 manufactured by InvenSense. The MPU-9150 has a 3

axis accelerometer, gyroscope and magnetometer, as well as a temperature sensor. The

magnetometer can be used to supplement gyroscope readings for orientation, and provide

absolute heading information relative to Earth's magnetic field. Communication with off-board processors is achieved through the I2C protocol.

The SLAM robot is limited to motion in the horizontal plane. Assuming the test surface is flat, the only readings relevant to the project are X and Y accelerations, Z rotational velocity, and orientation. In the final solution only orientation is used, for heading control and

realignment at the start of the program.

Libraries for reading the data from the MPU were provided. The libraries use a read function

to initialise I2C communication and receive data sent from the MPU registers. Readings from the MPU have been utilised in the directional control functions.

4.3 Final System

The final setup of the robot is relatively simple, with only three displacement sensors being used. The three chosen sensors are medium-range IR sensors. These were used in conjunction with the servo to provide a 360° field of view to the robot. As displayed in Figure 5, the IR sensors have been connected to the servo through a 3D printed mount. The sensors have been set up with 120° displacement between them; thus, with one 120° sweep, the full 360° field of view can be covered. Three sensors were chosen since the servo motor did not have a complete 180° range; two sensors could easily have given the same outcome if the range were larger. The servo is incremented after the IR has completed one full reading; this is then repeated 120 times. As the project task progressed, more accurate readings were needed, so the IR sensors were set to average 50 values. This made the full 120° sweep relatively long, in turn confirming that the choice of three sensors over two was the right option. This particular setup, with the 360° view, enabled the robot to move in any direction, with sensor feedback in that direction. This allowed the robot to complete its planned path without rotating. The IR mount has been screwed into the servo to prevent it from disconnecting during fast sweeps. Height of the servo was also taken into careful consideration. Initially it was placed too high, which was one of the main sources of bad readings, as the offset angle relative to the horizontal plane caused the IR to read outside of the board. Thus the servo was mounted as low as possible without receiving interference from the robot itself.

This configuration for the IR sensors was chosen as it gave complete coverage around the

robot as well as a good set of data for the intended landmark extraction algorithms.

Figure 5: Final Robot Configuration


As shown in the right of Figure 5, the MPU has been placed towards the front on a raised

mount. The mount needed to be low enough not to obstruct the IR sensors, but high enough not to be affected by electromagnetic disturbances. This placement was chosen by trial and

error.

5 Software Design and Implemented Algorithms

5.1 Mecanum Motor Modelling & Odometry

Mapping an environment, and by extension SLAM, requires an estimation of landmarks,

given the robot’s poses. Thus, increasing the accurate knowledge of the robot’s poses, will in

turn increase the accuracy of the map produced.

The Mecanum robot used in this project presents the following key difficulties in estimating

its pose:

Non-linear relationship between the PPM values sent to the motor and the

RPM of the wheel.

Indeterminate slip, skidding and differing resistance forces, all stemming from the

design and structure of Mecanum wheels.

Different and indeterminate motor characteristics and discrepancies in the mechanical

design connecting the wheels to the motors.

The following models have been developed with the end goal of providing accurate and

reliable odometry across a range of velocities, in the process eliminating the difficulties

mentioned.

5.1.1 Base Level Motor Control

The motors are PPM controlled; at the software level this is achieved by writing a microsecond value either side of 1500 to the motor, to make it turn forwards or backwards. Unfortunately, input of a given PPM produces an indeterminate wheel RPM, and does not correspond to a linear speed either. Furthermore, input of the same PPM to each motor does not result in the wheels spinning at the same RPM. The first correction required is a linearisation of output RPM from the PPM sent to the motors.
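A minimal sketch of this base-level control, assuming the VEX motor drivers are driven like servos through the Arduino Servo library (the pin number is a placeholder):

    #include <Servo.h>

    Servo rightRearMotor;  // VEX motor driver accepts servo-style PPM pulses

    void setup() {
      rightRearMotor.attach(9);                // placeholder pin assignment
      rightRearMotor.writeMicroseconds(1500);  // 1500 us = stationary
    }

    // offset roughly -350..350; positive drives forward, negative reverse,
    // with a deadband of about +/-60 around zero (see Section 5.1.2)
    void driveRightRear(int offset) {
      rightRearMotor.writeMicroseconds(1500 + offset);
    }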

5.1.2 RPM Conversion to PPM

The model relating PPM and RPM was formulated by sending PPM values between -350 and

350 in increments of 10, and measuring the RPM of each wheel, with a tachometer.

Figure 6: Relationship Between PPM Sent to Motors and Output RPM


The results for the right rear motor are shown in Figure 6.

The model shows three distinct regions: forward and reverse relationships, as well as a deadband region in between. In this case, PPM values between -60 and 60 resulted in no motion.

The forward and reverse relationships are clearly non-linear, and could be described by

power, exponential or polynomial models, however to minimise computational resource

requirements a polynomial was chosen.

As this relationship, albeit a piece-wise one, is one-to-one, it was a simple task to invert the model to obtain PPM as a fourth-order polynomial function of (desired) wheel RPM. This is shown in Figure 7, again modelling the right rear motor, however only showing the forward relationship.

Figure 7: Right Rear Motor Forward RPM to PPM Model

Whilst modelling was initially performed using Microsoft Excel, the 'fitted' line did not align well with the actual points. This was identified when the models were applied to

the physical robot, and the resulting motion did not match the linear performance it should

have. The final models were produced in Matlab, due to its greater numerical precision, and the resulting improvement in accuracy. The models were then validated by the robot's

performance, which displayed linear performance for a given RPM, with minimal disturbance

from the initial heading. At this stage linear speed of the actual robot was completely

unknown.

Implementing these models in software was relatively simple in its final form. Whilst several

iterations were produced, the final design is modular, with separate functions for each motor,

and an additional function to perform the polynomial calculations.
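A sketch of that modular structure is shown below, with a shared routine for the fourth-order polynomial evaluation (the coefficients are placeholders for the fitted values, which the report does not list):

    // Evaluate c[0]*x^4 + c[1]*x^3 + c[2]*x^2 + c[3]*x + c[4] (Horner form)
    float polyEval(const float c[5], float x) {
      float p = c[0];
      for (int i = 1; i < 5; i++) p = p * x + c[i];
      return p;
    }

    // Placeholder coefficients for the right rear motor, forward direction
    const float RR_FWD[5] = {0.0, 0.0, 0.0, 2.5, 60.0};

    int rightRearPPM(float rpm) {
      if (rpm > 0) return (int)polyEval(RR_FWD, rpm);    // forward model
      if (rpm < 0) return -(int)polyEval(RR_FWD, -rpm);  // stand-in; a separate reverse fit is used in practice
      return 0;                                          // deadband: no motion
    }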

5.1.3 Linear Speed Conversion to RPM

To generate RPM from desired linear speeds, Mecanum wheel inverse kinematics were used, as shown in Figure 8.

Figure 8: Mecanum Inverse Kinematics

Implementation of these equations in software

is relatively simple, especially as the motor


models had already been separated into individual function calls. Desired speeds, in mm/s, in

two directions (X, Y), and in rad/s (in the Z direction) are sent to each model, where they are

manipulated into a single RPM value. For example, the RPM for the right rear wheel (also

denoted wheel 4), is generated using the following equation:

𝜃̇_RR = ((𝑣_x + 𝑣_y + LENGTHS × 𝜔) / 𝑅_w) × (180 / 𝜋)

The software implementation uses the equation shown in Figure 9.

Figure 9: Right Rear Inverse Kinematics Calculation

In this case, ‘LENGTHS’, ‘RADIUS’ and ‘RAD2RPM’ are constants, defined as macros, and convert

between values of different units.
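A hedged reconstruction of the Figure 9 calculation is given below; the macro values are placeholders (the report does not state them), and the 180/π factor is reproduced as printed in the equation above:

    #define LENGTHS  0.30          // (l_x + l_y) geometry term, placeholder value
    #define RADIUS   0.05          // wheel radius R_w, placeholder value
    #define RAD2RPM  (180.0 / PI)  // unit-conversion factor as printed

    // Right rear (wheel 4) RPM from desired chassis speeds vx, vy and
    // rotational speed omega; units must match the macro definitions
    float rightRearRPM(float vx, float vy, float omega) {
      return ((vx + vy + LENGTHS * omega) / RADIUS) * RAD2RPM;
    }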

5.1.4 Heading Control

Whilst aggregation of the first two conversions gave approximately linear motion in all directions, the robot still encountered yaw and drift. These values were not large, but with

respect to the scale of the target ‘room’, the level was unacceptable, and accurate odometry

would be extremely time-consuming and difficult to implement, let alone test.

Compensation for yaw was solved using a digital controller. Initially, PD control was used,

and results were extremely positive. The gains were determined using a rough form of

Zeigler-Nichols Closed-Loop Method, with a priority of stability. As increasing amounts of

code were introduced into the system instabilities grew to the point where the gains calculated

made the system unstable, and caused the response be too rapid. This is likely due to the

reliance of digital controllers on previous error and output values. Thus if not updated at a

high enough frequency, delays, which became increasingly prevalent, can cause the controller

output to grow to unstable levels. The result of this was overshoot and oscillations. To

prevent this, the proportional gain was reduced and derivative removed, but it was not

possible to have a stable system that also had a high enough gain to correct error. The end

solution was to increase the proportional gain to 10, but limit the controller output to 2 rad/s. This

ensured a higher level of control for small errors, without overloading the system for large

ones.

The controller was designed using standard digital controller theory, and the following algorithm is used to implement the software form of the controller (a code sketch follows the list). All heading values are fused Euler pose readings from the MPU.

If the robot is stopped:

o Set all errors (digital controllers require current error, and errors from the two

previous cycles) and output speeds in each direction, to zero, and update the

desired heading.

If the robot is moving:

o Update the errors, with each error value moving back by one, and the current

error calculated from the current heading

o Ensure the current error is within –pi → pi, otherwise the robot may attempt to

complete a turn in the longer direction to restore heading

o Correct X and Y speeds using the current error, in terms of world coordinates.

This ensures the robot continues at the desired heading, rather than in a

direction relative to its orientation


o Update rotational speed with digital controller, using heading errors and

previous speed value

o Update previous speed value
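As a simplified sketch of this update logic (proportional gain of 10, output limited to 2 rad/s, error wrapped into −π to π), the controller might be written as below; variable and function names are illustrative, not the actual code.

    #define KP    10.0   // proportional gain, from tuning
    #define MAX_W  2.0   // controller output limit, rad/s

    double err[3] = {0, 0, 0};   // current error plus the two previous errors
    double desiredHeading = 0;   // rad, in world coordinates

    // Wrap an angle into -pi..pi so the robot turns the shorter way.
    double wrapAngle(double a) {
      while (a >  PI) a -= 2.0 * PI;
      while (a < -PI) a += 2.0 * PI;
      return a;
    }

    // Returns the rotational speed command for the current heading.
    double headingControl(double currentHeading, bool moving) {
      if (!moving) {                    // stopped: reset errors, re-latch heading
        err[0] = err[1] = err[2] = 0;
        desiredHeading = currentHeading;
        return 0;
      }
      err[2] = err[1];                  // move each error value back by one
      err[1] = err[0];
      err[0] = wrapAngle(desiredHeading - currentHeading);
      double w = KP * err[0];           // proportional-only control
      if (w >  MAX_W) w =  MAX_W;       // limit output for large errors
      if (w < -MAX_W) w = -MAX_W;
      return w;
    }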

The initial iteration of the controller used the magnetometer on the MPU in the fused Euler angle reading, but this made the reading fluctuate heavily. Additionally, the reading varied heavily with interference from wiring, electrical devices, and ferrous objects. Eventually, the magnetometer was discarded from the fused Euler pose, giving fairly good results, specifically a deviation error of 10mm over 50mm. This was deemed acceptable, especially given the precision of testing and the error it would have introduced.

5.1.5 Drift Rejection

The heading controller reduced yaw disturbances, but could not account for drift. The likely cause is overpowered motors on one side of the robot. To compensate for this, forward and reverse values were 'trimmed', either increasing or decreasing the RPM value entered into the motor models. This method of altering the relative rotation of specific motors kept the solution adaptable, without major testing or redesign. An extra level of trimming was added for motion in the Y direction, adding a significant proportion of X motion, to prevent drift. The trim values are applied in software just before the RPM values are converted to PPM, and are declared as macros so that they can be easily altered without finding each and every instance.
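A minimal sketch of such trims, applied just before the RPM-to-PPM conversion, is shown below; the macro names and values are illustrative, not the tuned ones.

    // Trim macros, declared once so they can be altered in a single place.
    #define TRIM_LEFT  1.03   // boost the weaker left-side motors (assumed value)
    #define TRIM_RIGHT 0.97   // rein in the overpowered right side (assumed value)

    // Applied to the four wheel RPM values just before conversion to PPM.
    void applyTrims(double rpm[4]) {
      rpm[0] *= TRIM_LEFT;    // front left
      rpm[1] *= TRIM_RIGHT;   // front right
      rpm[2] *= TRIM_LEFT;    // rear left
      rpm[3] *= TRIM_RIGHT;   // rear right
    }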

5.1.6 Speed Correction Factor

Even after adding the preceding models, converting linear speed to RPM then PPM, heading control to remove yaw, and trimming the RPM values to account for drift, the input speed was found to be far slower than the actual speed of the robot. This could be because the motor models represent a relationship rather than exact values, or due to the additional control effort; in either case, a correction factor accounts for the discrepancy. Through testing comparing the input speed to the actual linear speed of the robot, models were formed to correct the difference. The test method and results are discussed in Section 6.2 Linear Speed Testing, from which the following equations were produced.

• Forward:

o X Speed = 0.511 × (Desired Speed) + 18.073

o Y Speed = 0.528 × (Desired Speed) + 38.282

• Reverse:

o X Speed = 0.489 × (Desired Speed) − 18.959

o Y Speed = 0.517 × (Desired Speed) − 39.956

These relationships are linear, which suggests that the linearisation steps and inverse kinematics are correct, but also that there is at least one remaining motor characteristic which is not modelled.
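Reading these models with the axes of Figure 17 (input speed as a function of actual robot speed), the correction can be applied directly: the model output is the input speed required to realise a desired actual speed. A minimal sketch for forward X motion, assuming that interpretation, is:

    // Forward X model from testing: input = 0.511 * (desired speed) + 18.073.
    // Returns the input command that should realise the desired actual
    // robot speed in mm/s.
    double correctedForwardX(double desiredSpeed) {
      return 0.511 * desiredSpeed + 18.073;
    }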

5.1.7 Coercion into Range for Odometry

Finally, in order to use the input speeds for accurate odometry, the speeds are coerced into a range that is:

• High enough that the robot actually moves – speeds below ~50mm/s do not move the robot, but would still be registered by the odometry.

• Not so high that the motors saturate and stop turning any faster for increases in speed – again, this would result in the odometry estimating the robot is moving faster than it actually is, giving inaccuracies at high speeds.

The solution is to set speeds lower than 50mm/s to zero and, if the total speed is greater than 300mm/s (presenting a chance of saturating one of the motors), to scale the speeds back. The scaling factor is the same ratio for each direction, ensuring the total vector keeps the same direction.
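A minimal sketch of this coercion is shown below; interpreting 'total speed' as the sum of the two axis speeds is an assumption.

    #include <math.h>

    // Coerce speeds into the range that gives accurate odometry;
    // thresholds are those described above.
    void coerceSpeeds(double &vx, double &vy) {
      if (fabs(vx) < 50) vx = 0;      // below ~50 mm/s the robot does not move
      if (fabs(vy) < 50) vy = 0;
      double total = fabs(vx) + fabs(vy);
      if (total > 300) {              // risk of saturating a motor:
        double k = 300.0 / total;     // scale both axes by the same ratio,
        vx *= k;                      // keeping the vector direction
        vy *= k;
      }
    }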

5.1.8 Odometry Calculations

Odometry was achieved by integrating acceleration and speed values in each direction. Due to the limited computational resources available, and because speed inputs are in general constant or step functions, integration was limited to the following approximations:

v = a × Δt

s = v × Δt

Fusion of MPU and Robot Commands

The first odometry model attempted to use acceleration readings from the MPU (integrated twice) to generate changes in X and Y position. The algorithm was as follows:

• Read acceleration from the MPU

• Integrate over the time period since the last reading to give velocity

• Fuse the MPU 'speed' with the output speed

• Integrate again to give 'position'

• Update the position

It became immediately obvious that the value produced was completely unusable, because the MPU did not give a constant reading, even when stationary. This resulted in a constantly increasing absolute value of position. Furthermore, the accelerometer had these issues:

• 'Drift' – a constant increase in reading

• Vibrations when moving, particularly at low speed

• An indeterminate conversion from output value to acceleration

• Calibration requirements and increased reliance on the MPU

• The requirement of integrating twice, increasing the error at each stage

Because of the magnitude of these combined issues, it was simpler to redesign and test a different method than to attempt to eliminate the MPU errors.

Pure Robot Commands

Thus the solution was to eliminate MPU readings altogether and only integrate output velocity, using the following algorithm (made more accurate by including a trapezium approximation for velocity):

• Update the current velocity and time information

• Calculate the average of the current and previous velocity – the trapezium approximation

• Integrate the average speed over the time step (since the last integration) using the same integration method, giving position

• Update the 'previous' values (speed and time) in preparation for the next loop – set them to the current values

This method gave accurate enough readings to not warrant further development.


Additionally, accuracy was increased by updating the odometry once at the start of the loop, then once again at the end, after all the speeds have been manipulated.
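The combined update amounts to a short routine; a minimal Arduino-style sketch (variable names assumed) is:

    // Trapezium-rule odometry update, called at the start and end of each loop.
    double posX = 0, posY = 0;            // position estimate, mm
    double prevVX = 0, prevVY = 0;        // velocities from the previous update
    unsigned long prevTime = 0;           // time of the previous update, ms

    void updateOdometry(double vx, double vy) {
      unsigned long now = millis();
      double dt = (now - prevTime) / 1000.0;   // time step in seconds
      posX += 0.5 * (vx + prevVX) * dt;        // average of current and
      posY += 0.5 * (vy + prevVY) * dt;        // previous velocity, integrated
      prevVX = vx;                             // set 'previous' values to
      prevVY = vy;                             // current, for the next loop
      prevTime = now;
    }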

5.1.9 Position Controller

Once accurate odometry information was available, it became a simple task to produce a position controller that would automatically direct the robot to a specific point. This is extremely useful for path planning, as it simplifies the requirements of such an algorithm: all it then needs to generate is a set of points.

The obvious controller to use is a digital one, repeating the form used for heading control. Unfortunately, this type of controller did not translate to position control with the same accuracy. This was because of the variable speed outputs of the controller, which were not necessarily in the ideal range for odometry, decreasing the accuracy of the odometry model and producing sub-par results.

Alternatively, it was noted that the robot could respond to step functions of speed for periods of time and maintain odometry accuracy. Thus a simple 'bang-bang' (on/off) controller is used, with a deadband of 3mm. Whilst this is not the most elegant solution, it is computationally very efficient, accurate to an acceptable level, and, with appropriate tolerances for obstacle avoidance, will integrate well with path planning.
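Per axis, such a controller reduces to a few lines; a minimal sketch (names assumed, with the step speed chosen from the accurate odometry range discussed below) is:

    #include <math.h>

    #define DEADBAND   3.0    // mm, as described above
    #define RUN_SPEED 200.0   // mm/s, inside the accurate odometry range

    // Bang-bang position control for one axis: run at a fixed step speed
    // toward the target until within the deadband, then stop.
    double axisCommand(double current, double target) {
      double e = target - current;
      if (fabs(e) < DEADBAND) return 0;           // close enough: stop
      return (e > 0) ? RUN_SPEED : -RUN_SPEED;    // otherwise full step speed
    }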

5.1.10 Accuracy limitations

Whilst the entire system was designed to give complete omni-directional accuracy, testing showed that fusing X and Y speed vectors:

• Produces less accurate odometry

• Makes achieving a working positional controller too difficult, requiring increasing complexity to overcome the inaccuracy

Additionally, because of the trimming and correction factors for speed, the optimal speed range (in terms of odometry accuracy) was roughly 150–220mm/s in all four directions. The robot was tuned around these values to maintain accuracy whilst also moving as fast as possible around the environment. Thus, movement was limited to the X and Y directions separately, with a preference for the X (FWD & REV) direction, as this was more accurate than Y, using speeds of 200mm/s where appropriate. The biggest issue was drift (without change in heading), especially at low speeds, in all directions, as the heading controller does not account for this.

5.2 Landmark Extraction

Both landmark extraction algorithms work off a circular array built from the IR sensors and servo. This circular array gives the robot a full 360° field of view and is held as a global variable, accessible by any of the algorithms requiring it. The two main landmark extraction algorithms are explained further below.

In order to build a good knowledge of the surroundings of the robot, it was essential to convert the raw infrared sensor readings produced by the rotating sensor into usable data for the rest of the SLAM algorithm. One aspect of this is to sense where the main obstacles in the surroundings are; another is to sense where the walls of the enclosure are.

To extract the location of the main obstacles an algorithm called spike was used, and to extract the equations of the wall lines an algorithm called Random Sample Consensus, or RANSAC, was used.


5.2.1 Spike

The spike landmark extraction algorithm is used for detecting objects that stand out from their surroundings. By passing through the IR sensor data and comparing differences in range, these objects can be detected.

Spike is a simple landmark extraction algorithm, but it lacks robustness. The algorithm is fully dependent on sensor readings and accuracy, as fluctuations in readings can easily be mistaken for landmarks. Additional conditions have been added to the algorithm to increase suitability and robustness given the limitations of the current sensors, but a plateau has been hit where further additions would jeopardise valid landmarks being detected.

The implemented algorithm is as follows:

• Cycle through the circular array

• Check if the current and previous readings are valid

• Take the difference of these two values

• If the current value is smaller than the previous (i.e. an edge has been detected against its surroundings) and the difference is greater than the landmark detection range of 200mm, cycle through the next 30 values searching for the other edge of the object

• Reset values above 360 to the correct circular value

• Check if the current value and the next value are valid readings

• If the current value is smaller than the next, and the difference is greater than the landmark detection range, a landmark has been located

• Set the bearing and range in memory for the located landmark

The range is set using another simple algorithm. Initially, the range was taken as the IR value from halfway between the two edge bearings. Due to the fluctuation of the sensors, this value was often false. The implemented algorithm instead cycles through the readings between both edges and takes the most common values, within a set tolerance; the range then becomes the average of these values.
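A condensed sketch of the edge search is given below; the array name, validity test and storage step are assumptions for illustration.

    #define DETECT_RANGE 200        // mm difference that counts as an edge

    extern int scan[360];           // circular array: one IR range per degree

    // Assumed validity test; the real check depends on the sensor model.
    bool validReading(int r) { return r > 0 && r < 1000; }

    void spike() {
      for (int i = 1; i < 360; i++) {
        if (!validReading(scan[i]) || !validReading(scan[i - 1])) continue;
        if (scan[i - 1] - scan[i] <= DETECT_RANGE) continue;  // no near edge
        for (int j = i; j < i + 30; j++) {         // search for the far edge
          int a = j % 360, b = (j + 1) % 360;      // wrap values above 360
          if (!validReading(scan[a]) || !validReading(scan[b])) continue;
          if (scan[b] - scan[a] > DETECT_RANGE) {  // far edge: landmark found
            int bearing = ((i + j) / 2) % 360;     // midpoint between the edges
            int range   = scan[bearing];           // refined by the averaging step
            // store the bearing and range for the located landmark here
            break;
          }
        }
      }
    }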

Under the assumption that all incorrect landmarks are due to fluctuations in sensor readings, which is valid given the project task, incorrect landmarks should not pose a problem, as a landmark is only considered fit for use once it has been observed more than a set number of times. It is very unlikely that a fluctuation will occur in the same relative place twice.

5.2.2 RANSAC

The RANSAC algorithm uses the data extracted from the three rotating IR sensors and converts it into lines of the form y = mx + c. The algorithm starts with an unsorted array of raw absolute position data, one value for each degree around the robot. Firstly, a random point from the 360 degrees is chosen, and a sample of 10 unique degrees surrounding it is selected using the inbuilt Arduino random function. Once the sample points are chosen, a line of best fit is fitted through them. The algorithm then cycles through all points of the 360-degree array and calculates the distance from each point to the line fitted through the sampled data.

If this distance is less than a specified tolerance, the data point is added to an array which forms the consensus for this line of best fit. The number of data points in the consensus is then compared to a pre-defined limit; if there are more than the required number, this constitutes enough to represent a line. As the original sample was chosen about a line which most likely included random outlier values, a new line of best fit is created through only the consensus points. All members of the consensus of the newly found line are then marked as sorted. The data for the lines is stored in global arrays which can be accessed by other parts of the robot software.

The algorithm then repeats, but ensures that any values previously sorted into a line are not selected for further iterations. This is repeated until a large portion of the points are sorted into lines, or the algorithm has been performed too many times.
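A condensed sketch of this loop is shown below; the array names, tolerance, consensus limit and trial count are illustrative assumptions, and the sample is taken as a window of degrees around the random seed.

    #include <math.h>

    #define SAMPLE      10     // points fitted per trial
    #define TOLERANCE   20.0   // mm, max point-to-line distance for consensus
    #define MIN_INLIERS 40     // consensus size required to accept a line
    #define MAX_TRIES   50     // give up after this many attempts

    extern double px[360], py[360];   // absolute position data, one per degree
    bool usedPt[360] = {false};       // points already sorted into a line

    // Least-squares fit y = m x + c through the points flagged in 'sel';
    // returns false for a (near-)vertical line, which would otherwise crash.
    bool fitLine(const bool *sel, double &m, double &c) {
      double sx = 0, sy = 0, sxx = 0, sxy = 0; int n = 0;
      for (int i = 0; i < 360; i++) {
        if (!sel[i]) continue;
        sx += px[i]; sy += py[i]; sxx += px[i] * px[i]; sxy += px[i] * py[i]; n++;
      }
      double den = n * sxx - sx * sx;
      if (n < 2 || fabs(den) < 1e-6) return false;
      m = (n * sxy - sx * sy) / den;
      c = (sy - m * sx) / n;
      return true;
    }

    void ransac() {
      for (int t = 0; t < MAX_TRIES; t++) {
        bool sel[360] = {false};
        int seed = random(360);                  // inbuilt Arduino random
        for (int k = 0; k < SAMPLE; k++)
          sel[(seed + k) % 360] = true;          // sample degrees around the seed
        double m, c;
        if (!fitLine(sel, m, c)) continue;
        bool cons[360] = {false};                // build the consensus set
        int n = 0;
        for (int i = 0; i < 360; i++) {
          if (usedPt[i]) continue;               // skip already-sorted points
          double d = fabs(m * px[i] - py[i] + c) / sqrt(m * m + 1.0);
          if (d < TOLERANCE) { cons[i] = true; n++; }
        }
        if (n < MIN_INLIERS) continue;           // not enough points for a line
        fitLine(cons, m, c);                     // refit through consensus only
        for (int i = 0; i < 360; i++)            // mark consensus points sorted
          if (cons[i]) usedPt[i] = true;
        // store m and c in the global line arrays here
      }
    }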

In order to ensure correct operation of the algorithm, the same code was ported into a C++ Visual Studio 2013 project for real-time debugging and processing. Artificially built rectangles were used for testing, and it was found that the line-of-best-fit algorithm would crash when trying to fit data to a perfectly vertical line. Although this is almost impossible in a real-world situation, measures were implemented to remove the possibility of the code crashing in this case.

The Arduino language is based on C++, so very few changes were required for the code to be operational. Once the code was performing as required, it was returned to the Arduino language, and testing of the algorithm was performed on artificially produced IR sensor data before finally implementing it using the raw data on the actual system.

The results from testing the RANSAC algorithm are discussed in Section 6.5 RANSAC Testing. Because of those results, the actual implementation of the algorithm differed from that described above. The final implementation is described below.

The RANSAC algorithm was intended to be run after every motion of the robot, so that it could be used for building an accurate map as well as ensuring the robot was aligned to the walls. However, it was found that under dynamic conditions, which are further discussed in Section 6.5 RANSAC Testing, the lines the robot found were not as accurate as the angle obtained from the gyroscope. It was decided to use a modified version of RANSAC to align to the walls, which eliminated the effect of the random noise by telling the algorithm where the wall was most likely to be. This was run every time the robot stopped near a wall in the map coverage algorithm, which is discussed in Section 5.5 Path Planning.

The robot would perform a scan spanning 30 degrees in the direction to which it was aligning, and the case structure of its implementation was only executed when there was definitely a wall located there. The y = mx + c line was obtained, and the inverse tangent of the gradient gave the necessary angle of robot rotation. This algorithm performed very well in rotating the robot back to the correct orientation.

5.3 Landmark Association

The landmark association algorithm aims to identify similarities between inputted landmark data, thus recognising the same landmark at different points in time. This enables landmarks to be marked as more reliable if they have appeared in sensor data multiple times. This functionality stops misinterpreted data, or landmarks only visible within a certain range, from affecting the robot's behaviour. A landmark is only considered verified, and therefore taken into account for path planning, if it has been observed more than a set number of times.

The algorithm first checks the landmark data that has been previously collected and determines which of the stored landmarks is closest to the new data input. It then checks whether the new landmark data is within a set margin of error of the closest landmark. If so, the previously observed landmark's observation count is incremented; otherwise the landmark is added as a new, separate landmark to be compared with future data inputs.
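A minimal sketch of this association step (the structure, capacity and tolerance are assumptions) is:

    #include <math.h>

    #define ASSOC_TOL     50.0   // mm, margin of error for a re-observation (assumed)
    #define MAX_LANDMARKS 30     // storage capacity (assumed)

    struct Landmark { double x, y; int seen; };
    Landmark landmarks[MAX_LANDMARKS];
    int nLandmarks = 0;

    void associate(double x, double y) {
      int best = -1; double bestD = 1e9;
      for (int i = 0; i < nLandmarks; i++) {      // find the closest stored landmark
        double dx = landmarks[i].x - x, dy = landmarks[i].y - y;
        double d = sqrt(dx * dx + dy * dy);
        if (d < bestD) { bestD = d; best = i; }
      }
      if (best >= 0 && bestD < ASSOC_TOL) {
        landmarks[best].seen++;                   // re-observed: more reliable
      } else if (nLandmarks < MAX_LANDMARKS) {
        landmarks[nLandmarks].x = x;              // otherwise add a new landmark
        landmarks[nLandmarks].y = y;
        landmarks[nLandmarks].seen = 1;
        nLandmarks++;
      }
    }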


5.4 Mapping

The given task requires the robot to record a map of the paths travelled, as well as of the surroundings. Storing all this data within the Arduino is not feasible due to its small memory capacity, thus a real-time logging application was written. The logging application works on the basis of online SLAM, where the current estimates and pose are computed on the robot and then sent to the ground station to be further interpreted. The chosen IDE was LabVIEW.

The logging application connects directly to the robot through serial communication via the Bluetooth modules. Both reading and writing are possible from the application. The front panel of the application is displayed in Figure 10, below.

Figure 10: LabVIEW Ground Station Front Panel

As shown in Figure 10, the application lets the user select the correct COM port. The byte count, or the number of bytes read per scan, is also selectable; lengthy lines exceeding the byte count would return random ASCII characters, so this configurability was needed. A read buffer is shown towards the top right; this works exactly like the buffers used in PuTTY or the Arduino serial monitor. Write communication is available directly above this, and the XY graph is placed below. The graph updates all relevant data in real time, including plot configurations. The current graph shows the mapped path from one of the initial testing runs. The path is shown in red, and the approximated walls in white. Initially the robot tries to locate a wall so it can align itself with this wall; this is shown in the top left portion of the path. From there the robot moves back and forth down the length of the map. This path planning algorithm is explained in more detail in Section 5.5.

Data to be mapped is sent to the application via the serial communication. Line by line, this data is interpreted and placed on the map accordingly. Each line has a unique identifier which determines the way the data is interpreted. In the case of the robot path, the identifier "PATH" is written at the beginning of the line, followed by the x and y co-ordinates. These co-ordinates are appended to the path array and then placed on the real-time graph. The other accepted identifiers include walls, IR readings, and interpolated lines output from RANSAC. The identifier setup within the program was made relatively simple, so that extra identifiers could easily be added, saving time in the future.
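On the robot side, a line of such data might be produced as in the sketch below; the exact field separators used by the real protocol are an assumption.

    // Send the current pose to the ground station, tagged with the "PATH"
    // identifier so the application appends it to the path array.
    void logPath(double x, double y) {
      Serial.print("PATH ");
      Serial.print(x);
      Serial.print(" ");
      Serial.println(y);
    }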

An error dialog has been placed in the bottom left. This was added to inform the user of any connection issues that may occur while running; usually, the user would otherwise not be notified until run-time is over.

5.5 Path Planning

The following section describes the proposed algorithm for completing full room coverage. As the program has been decomposed into a series of logical steps, the algorithm is presented below as bullet points describing what the robot does at each step.

The robot begins the SLAM problem at a random position in an unknown map that is known to be rectangular. This is shown in the top left ('1') of Figure 11.

The first step of the algorithm is to find the first possible wall, and then traverse along it until the first corner is reached.

5.5.1 Finding the Initial Corner

• The robot moves forward until it detects an object in front of it, which can be either an obstacle or a wall

• It checks whether this object is an obstacle or a wall:

o If it is a wall, the robot rotates its body until it is perpendicular to this wall

o If it is an obstacle, the robot begins a function which takes it around the obstacle until it reaches a wall

• Once the first wall is found, the robot orientates itself perpendicular to the wall

• It then strafes right until there is a wall to the right of the robot

Once the robot has found the initial corner, it enters the next process, the algorithm used to cover as much of the map as possible. Finding the initial corner is depicted by '1' and '2' of Figure 11.


Figure 11: Robot Locating a Corner and Traversing First Wall

5.5.2 Map Covering

The map coverage algorithm works by the robot moving forwards or backwards until it encounters a wall, then strafing left by the width of the robot before continuing in the other direction. The general motion plan is described in Figure 12.

• The robot moves in the x-direction until it encounters a wall

• Once a wall is encountered, the robot strafes left by the width of the robot

• The robot then continues in the opposite x-direction until it encounters the other wall

• Whenever the robot discovers an object, it performs a check to see whether this is an obstacle or a wall, and acts accordingly

Figure 12: General Traversing and Strafing Plan


5.5.3 Obstacle Avoidance Algorithm

This algorithm is only entered once the robot has encountered something either in front of or behind it, and has checked whether this is an obstacle or a wall. A condensed sketch follows Figure 13 below.

• The robot records the current y-position as the master y-position, which represents the final y-position the robot will have before the algorithm is exited

• The robot then checks if there is an object to its left; if there is not, it moves one robot width to the left

• Otherwise, it will try to move by the same amount to the right

• After strafing sideways, the robot checks whether it can move forward:

o If it cannot, it continues moving sideways and this step is repeated

o If it can, it moves forward by two times the length of the robot

• Once the robot has moved forward, it checks whether it can return to the master y-position mentioned above:

o If it can, the robot strafes back until it is at the original y-position according to the state estimation

o If it can't, it continues to move forward until it can strafe in the desired direction

• When the robot reaches the desired y-position, the algorithm is finished and the original map coverage algorithm is continued

Figure 13: Obstacle Avoidance Planning with Return to General Motion
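A condensed sketch of this routine is shown below. Every helper routine and both dimensions are hypothetical stand-ins for functionality described elsewhere in the report, not the actual code.

    extern bool objectLeft(), canMoveForward(), clearToY(double);
    extern void strafe(double), forward(double);   // signed distances, mm
    extern double posY;                            // current odometry y-position
    #define ROBOT_WIDTH  200.0                     // mm (assumed)
    #define ROBOT_LENGTH 200.0                     // mm (assumed)

    void avoidObstacle() {
      double masterY = posY;                 // master y-position to return to
      do {
        if (!objectLeft()) strafe(-ROBOT_WIDTH);   // prefer one width left
        else               strafe(+ROBOT_WIDTH);   // otherwise go right
      } while (!canMoveForward());           // repeat until the way is clear
      forward(2 * ROBOT_LENGTH);             // move past the obstacle
      while (!clearToY(masterY))             // keep going until the path back
        forward(ROBOT_LENGTH);               // to masterY is clear
      strafe(masterY - posY);                // rejoin the original line
    }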

5.6 Arduino Program Structure

5.6.1 General Structure

The program code for the robot vacuum cleaner has been written using the Arduino IDE. The language used in this environment is closely related to C++, with a few small changes so that it can run on an embedded system. Every Arduino program consists of a setup function, which is run once on start-up, and a loop function, which is called continuously throughout normal operation. The robot vacuum cleaner program code uses these two functions to build the core of the code, and it is within these two that all of the custom function calls are made.
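By way of illustration, the resulting skeleton is simply the standard Arduino one; the baud rate and comments below are placeholders, not the actual initialisation.

    #include "Variables.h"   // global variables, discussed below

    void setup() {
      Serial.begin(9600);    // e.g. the serial link to the ground station
      // one-off initialisation of the MPU, IR sensors, servo and motors
    }

    void loop() {
      // called continuously: odometry update, scanning, path planning,
      // motion commands, then a second odometry update
    }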

Many software libraries were obtained online and included in the Arduino libraries for use with the project. These libraries are used to interface with the MPU, IR sensors, and servo motor. All libraries are included at the top of the main Arduino file.

Most functions written for the purpose of implementing SLAM have been consolidated into other files, which are also included into the Arduino program at the beginning of compilation. This has been done to reduce file size, speed up compilation, add modularity, and make debugging simpler. The most important of the included files is 'Variables.h'. This header file houses all global variables used throughout the operation of the program. Although this can make it difficult to see where each variable is altered, the sheer number of values needed made it a necessity, and careful programming was performed to ensure variables are only changed where necessary.

5.6.2 Robot Motion

The two main modes of motion for the robot are moving straight in the forward and backward direction, and strafing left and right. These modes keep the robot orientated in the same direction during operation; orientation is only changed initially, during the first mode of motion, to align perpendicular to the wall. While the robot is moving, there is active collision avoidance, achieved by sweeping the IR sensors through the direction in which it is moving.

5.6.3 Main Robot Motion Functions

RotateToWall()

The rotate-to-wall function scans a range of 30 degrees about the central direction of the wall. A least-squares regression line is then fitted to these points, and the relative offset angle to the wall is calculated as the inverse tangent of the gradient of this line. This basic concept is shown in Figure 14 below, followed by a sketch of the calculation:


Figure 14: Robot Scanning then Aligning to a Wall
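A minimal sketch of that calculation (the function name and point arrays are assumed) is:

    #include <math.h>

    // Fit a least-squares line to the n scanned wall points and return the
    // relative offset angle as the inverse tangent of the gradient.
    double offsetAngle(const double *x, const double *y, int n) {
      double sx = 0, sy = 0, sxx = 0, sxy = 0;
      for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
      }
      double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // gradient
      return atan(m);   // offset angle to the wall, radians
    }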

ForwardMoveScan()

This function can perform motion in the robot x-direction, in either direction. A control Boolean can be passed as an input, allowing two different functionalities within the one function. With this Boolean false, the robot continues moving in the given direction until the IR sensor detects an object. With the Boolean true, the robot attempts to move a specified distance unless an object is detected.

StrafeMoveScan()

StrafeMoveScan operates under exactly the same principle as ForwardMoveScan, except the motion is in the robot y-direction. The two were split for debugging purposes, with the intention that, once working perfectly, they could be merged into one again.

CheckObstacle()

CheckObstacle is called within either StrafeMoveScan or ForwardMoveScan when an object has been detected, and is used to check the characteristics of that object. It performs a scan from −25° to 25° and calculates the object's parallel distance away. The maximum and minimum are then obtained from this data set, and from these the object can be characterised: if the two values are similar, it is most likely a wall; if the object is an obstacle, the difference will be large. The function will also return a flag if a false reading has been obtained.

MoveToCorner()

The move-to-corner function is the first of the explained functions to be called. ForwardMoveScan is called until a wall has been encountered. The robot then uses the RotateToWall function to align to this wall; upon completion, the robot calls StrafeMoveScan until a wall to the right has also been encountered. The robot should now be located in the top right corner, and begins the map coverage algorithm.


6 Testing & Results

6.1 Heading Control

After the heading controller was implemented, it was essential to test its effectiveness over the range of a standard run, as these values could be used to assess the controller gain performance.

6.1.1 Method

The robot was set at a desired heading and made to run at 180mm/s, as this value was roughly in the middle of the range of accurate speeds. At intervals of 10cm, the position of the front left wheel was noted on the test platform. At the end of the run (100cm), the positions of the front left wheel relative to the original heading were measured. The initial position is shown in Figure 15, with every 100mm noted, so the error at those positions could be recorded.

The raw absolute values are shown in Table 2, as well as the percentage of total X movement.

Table 2: Heading Error with & without a Controller

Dist. from start, D (mm):     0    100    200    300    400    500    600    700     800     900     1000
Error, controller (mm):       0    2      3      5      7      10     13     14      18      21      24
% of D:                       0%   2.00%  1.50%  1.67%  1.75%  2.00%  2.17%  2.00%   2.25%   2.33%   2.40%
Error, no controller (mm):    0    5      8      14     24     38     54     83      122     173     215
% of D:                       0%   5.00%  4.00%  4.67%  6.00%  7.60%  9.00%  11.86%  15.25%  19.22%  21.50%

6.1.2 Results

The controller provides a marked improvement, keeping deviation to an average of 2.01% of primary motion. This was acceptable, as even 2.4cm could be accounted for in tolerances without major loss in coverage. Without the controller, the deviation averaged 10.41% over the tested range and grew steadily. The results can be seen in Figure 16: without a heading controller the error grows to an unusable >20cm. This would have placed increased reliance on obstacle avoidance and introduced a large uncertainty into path planning.

Figure 15: Initial Position for Heading Control Test


6.2 Linear Speed Testing

Testing was performed to compare the linear speed of the robot to the desired user speed. Input speeds between 50 and 200 were tested for each of the four directions (FWD, REV, LFT, RGT), giving actual robot speeds between 25 and 350mm/s. The relationship between input speed in each direction and actual robot speed can be seen in Figure 17, including the equations which model each relationship.

The results and associated models are used to correct for the difference between desired and realised output speeds, as discussed in Section 5.1.6 Speed Correction Factor.

Figure 17: Speed Correction Factor Models – input speed vs. robot speed (mm/s). Fitted lines: FWD: y = 0.5111x + 18.073 (R² = 0.9964); REV: y = 0.4889x − 18.959 (R² = 0.9956); RIGHT: y = 0.5284x + 38.282 (R² = 0.9968); LEFT: y = 0.5165x − 39.956 (R² = 0.9951).

Figure 16: Heading Error for 'X' Motion – heading error (mm) vs. X position relative to start (mm), with and without the controller.


6.3 Positional Controller Testing

The position controller was tested by sending the robot to four different locations and then returning it home. At each location, the actual distance from the origin along each axis was measured and the error computed; the results are presented in Table 3.

Table 3: Positional Controller Errors (all values in mm from start)

Desired (X, Y)    Assumed (X, Y)      Actual (X, Y)    Error (X, Y)
(0, 0)            (0, 0)              (0, 0)           (0, 0)
(350, 350)        (349.25, 350.8)     (310, 335)       (40, 15)
(1500, 350)       (1499.5, 350.8)     (1370, 270)      (130, 80)
(1000, −80)       (1000.6, −80.6)     (940, −80)       (60, 0)
(−100, −150)      (−99.8, −150.4)     (−105, −165)     (5, 15)
(0, 0)            (0.94, 0.78)        (10, −10)        (−10, 10)

Whilst the position controller shows appreciable physical error in each direction, the assumed position is accurate to within the set deadband. This suggests that either the odometry integration estimate is invalid, or the speed correction is incorrect. Either way, further testing would be required to eliminate these errors, but this was not deemed necessary within the time frame of the project, as the robot did not move over great enough distances for these errors to become appreciable.

6.4 Spike Testing

The spike algorithm was tested with both long and medium range sensors. It was found that long range sensors had a higher tendency to pick out landmarks. For out-of-range readings, the medium range sensor was found to fluctuate, and these fluctuating readings often prevented an edge from being detected. With long range sensors the range was exceeded less often, making edge detection much easier. Spike was tested with the long range sensors and the implemented data association code, and was found to correctly extract and associate the set cylindrical landmarks. Both static and dynamic tests were made with the robot – i.e. associating from the same position and from different positions – and these also gave positive results.

The decision to change to medium range sensors was made as the project plan varied at a late stage of the project. Although spike implemented with landmark association and long range sensors proved to work well, it was hard to integrate with path planning. The absolute position of the landmark is passed as the output of spike, which proved almost useless without a full SLAM implementation; given the inaccuracies in this landmark position, as well as the position error, it is very likely that collisions would not be avoided. Also, long range sensors have a very high lower detection limit, which made it almost impossible to get into close proximity with the walls, affecting the map coverage. Because of these two issues, medium range became the better choice of sensor. Instead of locating all landmarks, the medium range sensors were swept across the direction the robot was travelling in, and obstacles were accounted for as they came within an unacceptable distance, as discussed in Section 5.5.2 Map Covering.

6.5 RANSAC Testing

Extensive testing of the RANSAC algorithm was performed using the rotating IR sensors to obtain a full 360° view about the robot. The algorithm performed well on the raw sensor data it received, but the supplied data itself was not perfect. Custom models were made for each IR sensor, so distance values obtained from them were accurate under static testing conditions. However, under dynamic conditions – with some parts of the map in shadow and others not, and with the servo imparting motion onto the sensors – the readings were not as accurate as desired. For example, Figure 18 below shows the raw sensor data with the RANSAC line values superimposed over the top.

Figure 18: RANSAC Test Data Output

As can be seen, the values obtained from the IR sensors along the walls of the test environment are very accurate along the length of each line. However, the values obtained at the corners of the test data are highly inaccurate, which sways the lines obtained from the RANSAC algorithm. This is undesirable, as the corners are the points of most interest for this algorithm. The values obtained from the IR sensors at the corners are wrong for a few reasons. The lighting conditions are always worse in the corners, which can lead to inaccurate readings. In addition, the infrared beam sent from the sensor can bounce off the corner, hit the wall on the opposite side, and return to the IR sensor having travelled a longer distance; this makes the sensor read the value as larger than the actual distance. This phenomenon is shown more clearly in Figure 19.

Figure 19: IR Sensor Corner Inaccuracies Visualisation

Analysing the data further, one can notice sudden discontinuous jumps where data from two sensors join. The discontinuity is mainly due to the angle of the servo not being perfectly accurate, as well as slight inaccuracies in the sensor models. The angle of the servo motor was used to calculate the X and Y positions from the range using simple trigonometric functions; these X and Y position arrays are what the RANSAC algorithm operates on.

The right-most line shown in Figure 18 should be very nearly vertical. It is not, mainly because of the discontinuities between the data where two different sensors are used. The line fits the bottom half of the data very well, and it begins to diverge heavily from the sensor data exactly where the discontinuity can be seen.

RANSAC could estimate very well which direction, relative to the robot, held the closest corner; however, the actual location of this corner varied much more than the sensor data itself varied. This was mainly because the algorithm finds the corner from a line fitted to data points a long way away from the corner location. This exact problem can be seen in Figure 18.

The algorithm was initially tested for obtaining the location of the starting corner off-board the robot, using the same 360° IR sensor and servo arrangement, before accurate control of the robot was possible. The RANSAC algorithm would be performed and would then instruct the user to move the system half of the distance to the nearest intersection point the robot had located. The algorithm was then run again, and this was repeated until the location of the found corner no longer changed. Through this method of testing, the position of the closest corner could be found to within 20mm in both the x and y directions.


7 Discussion

Unfortunately, due to unforeseen problems with the Arduino board and problematic IR sensors, the intended SLAM algorithm could not be perfected in the specified time. The simultaneous localisation and mapping actually achieved by the system built a map as the robot moved through the environment, and used this map to align the robot on every pass. The exact location of the robot was only evaluated through dead-reckoning in the robot state estimation system, and this was updated against the built map by aligning to the walls on each pass.

A higher emphasis should have been placed on integrating all sub-components of the project. A lot of work was performed without in-depth thought about the concept and its benefit to the entire system. For example, both spike and RANSAC worked well in theory and in individual testing, but when it came to integrating them with the overall system, it was difficult to use their data to produce a useful control output. Integration of these two components into the system proved to be far more work than initially expected, and thus they went unused.

Towards the end of the project, it was decided to remove all use of the magnetometer. The interference proved to be too much to obtain any reliable feedback from the sensor. Further characterisation should have been performed in order to eradicate these errors; the use of ferrite beads for EMI rejection, or relocating the sensor to a less susceptible position, could have resulted in more reliable readings, but the decision was made to use the gyroscope instead. Mounting the MPU in a higher location (away from electrical disturbances) was not an option, as this would obstruct the view of the IR sensors. The magnetometer would have been the better choice of rotation sensor, as it gives absolute as well as relative heading feedback.

The Mecanum wheels were very hard to predict, so the odometry model was not as accurate as desired. Any form of direct relative position feedback would have been preferable, as the robot's pose could only be estimated, not actually measured. Encoder or tachometer feedback could have proved very useful, as the velocity vectors are directly dependent on the angular velocities of the wheels; alternatively, a spherical encoder could have aided the odometry problem greatly. Further problems were encountered with the motors, which would only work under certain unknown conditions: in odd cases, values were sent to the motors but no actuation took place. The reason for this error is still not known, and many hours went into attacking the problem by trial and error.

The IR sensors were the main cause of most of the encountered errors. The inaccuracies in these sensors made both landmark extraction algorithms unusable. Initial planning assumed the IR sensors were ideal; this was not the case, and tuning had to happen on a regular basis to keep the inaccuracies to a minimum. On top of this, many of the sensors needed to be replaced during the project. It was unclear what was causing the issues with these sensors, but the difficulty with them was nonetheless a large setback to the project.


8 Recommendations

8.1 Algorithm Development

• The SLAM aspects of the overall algorithm should be improved so that the system updates its known position based on the environment it is moving through. For example, the overall position of the robot could be updated every time the robot reaches the first main wall, so as to eliminate the drift in the state estimation values

• The RANSAC algorithm should be incorporated into both the mapping and the first corner finding algorithms, to improve the robot's response to changing map sizes by finding the closest possible corner first. Measures should then be included so that the robot does not always strafe left during general map coverage, and the algorithm should be incorporated so that the robot can strafe left if the first corner it finds is to the left of the robot

• The spike algorithm should be tuned for the medium-range IR sensors, and this should be incorporated in the data association algorithm for building the map of the environment. This could then be used for identifying obstacles, allowing the obstacle avoidance algorithm to recognise whether an object in front is an obstacle or a wall, and therefore determine how the robot should act

• The RANSAC algorithm should be used to align with any possible walls mid-way through the traversing of a wall. The real system map size could be a lot larger, and so relying on dead-reckoning for straight-line performance might not be accurate enough

• The RANSAC algorithm should also be improved to ignore distance values of 1000, which correspond to the IR sensor sensing out-of-range. These should not be used by the algorithm building the lines, as they may sway the gradients of the lines and make them more inaccurate

• The RANSAC and spike algorithms should be rewritten to allow them to run in parallel with the motion of the robot. This would remove the need for the robot to stop for these algorithms to be run, and the performance of the system would be improved. If this were the case, the scan would need to be performed fast enough that obstacles in front are not collided with while they are in the blind spot of the sensors

8.2 Sensor Improvements

• The errors in the servo angle should be mapped through testing, so as to remove the discontinuities in the built X and Y map that came about from merging three different sets of sensor data

• If the servo still proves not accurate enough, an accurate stepper motor would work well for this system, and it would not be difficult to implement

• The errors originating from the MPU when performing the rotate-to-wall function should be identified, and the function improved, so that the robot does not randomly rotate 180° on some of the occasions that the function is called

• The functions used for robot motion should incorporate the functionality of the IR sensors actively scanning a 50° area in the desired direction of motion. This would allow the robot to pick up obstacles that currently go unseen and hit the wheels of the robot

• The reason for IR sensors failing after a short period of use should be investigated, and the possibility of using more robust and consistent range-sensing devices should be considered for further implementations of this project. If a new range-sensing device is purchased, it should be more accurate and able to sense much larger distances

8.3 Proposed Odometry Improvements

At the software level, the full solution is produced in the following stages:

• Input speeds in the X and Y directions, and rotational speed, are drawn from the user or from the positional controller

• Input speeds are coerced into a range appropriate for odometry

• Coerced speeds are corrected so the output motion of the robot matches the input

• Control is applied to all three speeds to maintain heading

• RPM values are generated for each wheel using the controlled linear speeds and Mecanum inverse kinematics

• RPM values are converted to PPM values, which are sent to the motors themselves

As described above, the complete model is lengthy, complicated, and requires a redundant number of corrections and manipulations. In hindsight, a better solution for motor control and odometry would have been to:

• Characterise the relationships between PPM and wheel RPM, in the same manner

• Find the relationship between output robot speed and input wheel RPM experimentally. This would give a single correction factor for conversion between desired linear robot speed and required wheel RPM

• Fine-tune the system for a smaller range of values – for example, speeds between 100 and 200mm/s – and limit speeds to within these values. This would have increased relationship accuracy for less testing, whilst ensuring the robot still moved around the board at an acceptable speed

This solution would have been faster, computationally more efficient, and easier to implement. Furthermore, it would have been a far simpler task to tune or trim the system. In all likelihood, a PID controller would have worked in this set-up too.

Finally, had odometry and a robot pose been realised earlier, even in an inaccurate form, they could have been integrated with the other SLAM elements sooner. SLAM integration would have given a more definitive scope for the required accuracy; without it, development was aimed at absolute accuracy. This took a large amount of time and effort that may have been wasted if the level of accuracy achieved was unnecessary.


9 Conclusions

The solution presented in this report can be applied to the original goal of an autonomous vacuum cleaner. The solution showed particular characteristics that make it both highly practical and novel, specifically the following design considerations and features.

The real-time logging software designed using LabVIEW worked very well at telling the designer what was happening on the robot at any one time. This functionality could be further enhanced if the docking station of the robot had some form of processor to supply feedback on the operation of the system; this could provide valuable information to the user and improve the marketability of the system.

The decision to mount the three sensors on the rotating servo gave the ability to see in all four directions of motion easily, and allowed the robot to scan in these directions without the need for four sensors. Reducing the number of sensors lowered the cost of the system and provided a simpler implementation. The final robot had a very minimalistic design because of using only these three sensors, and it also led to a much smaller profile than many other concepts.

The robot path planning algorithm works in a very predictable way, which is desirable for a home situation where the home owner can see where it is operating and knows it will not move from its path until it reaches the end wall. The robot also operates in a systematic way, so once it has left an area for cleaning, that entire area will have been covered.

